Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic


Photogrammetry Session - new model?
Hi Apple Team, We noticed the following exciting entry in the changelog for the latest macOS 26 beta: "A new algorithm significantly improves PhotogrammetrySession reconstruction quality of low-texture objects not captured with the ObjectCaptureSession front end. It will be downloaded and cached once in the background when the PhotogrammetrySession is used at runtime. If network isn’t available at that time, the old low quality model will be used until the new one can be downloaded. There is no code change needed to get this improved model. (145220451)" However, after trying this on the latest beta and running some tests, we do not see any difference on low-texture objects such as single-coloured surfaces. Is there anything we are missing? The machine is definitely connected to the internet, but we have no way of telling from the logs whether the new model is being used. Thanks!
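For reference, a minimal session of the kind we are testing with looks like this (paths, feature sensitivity, and detail level are placeholders rather than our exact settings):

```swift
import Foundation
import RealityKit

// Minimal PhotogrammetrySession run used for the low-texture tests.
func runReconstruction() throws {
    let input = URL(fileURLWithPath: "/path/to/input/images", isDirectory: true)
    let output = URL(fileURLWithPath: "/path/to/output/model.usdz")

    var configuration = PhotogrammetrySession.Configuration()
    configuration.featureSensitivity = .high   // placeholder; chosen for low-texture objects

    let session = try PhotogrammetrySession(input: input, configuration: configuration)

    // Observe outputs so errors and completion are visible in the logs.
    Task {
        for try await result in session.outputs {
            switch result {
            case .processingComplete:
                print("Reconstruction finished")
            case .requestError(let request, let error):
                print("Request \(request) failed: \(error)")
            default:
                break
            }
        }
    }

    try session.process(requests: [.modelFile(url: output, detail: .full)])
}
```

Nothing in these outputs indicates which reconstruction model was used, which is why we are asking.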
5 replies · 1 boost · 628 views · Jul ’25
VNDetectHumanBodyPose3DRequest failing on Vision Pro
Hi! I attempted to run a sample project for detecting human body poses in 3D with the Vision framework, which can be found here: https://developer.apple.com/documentation/vision/detecting-human-body-poses-in-3d-with-vision. It works perfectly on my MacBook Pro M1, but it fails on Apple Vision Pro. After selecting a photo, an endless loading screen is displayed and the following messages are produced in the console:

Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Network path is nil: (null)
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Unable to perform the request: Error Domain=com.apple.Vision Code=9 "Async status object reported as failed but without an error" UserInfo={NSLocalizedDescription=Async status object reported as failed but without an error}.
de-activating session 70138 after timeout

Is human body pose detection expected to work on visionOS? Is there any special configuration required that I might be missing?
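For context, the request itself is essentially the stock pattern from the sample (stripped down here; the image URL is a placeholder):

```swift
import Foundation
import Vision

// Run the 3D body pose request on a single image file.
func detectBodyPose3D(in imageURL: URL) throws -> [VNHumanBodyPose3DObservation] {
    let request = VNDetectHumanBodyPose3DRequest()
    let handler = VNImageRequestHandler(url: imageURL)
    try handler.perform([request])
    return request.results ?? []
}
```

On Vision Pro, the failure appears to happen when the request is performed.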
1 reply · 1 boost · 150 views · Mar ’25
Loading USDZ asset into Model3D causes visionOS 2.0 beta 5 to crash
We've recently discovered that our app crashes on startup on the latest visionOS 2.0 beta 5 (22N5297g) build. In fact, the entire field of view dims down and visionOS then restarts, showing the Apple logo. Interestingly, no app crash is reported by Xcode during debugging. After investigation, we have isolated the issue to a specific USDZ asset in our app. Loading it in a blank sample project also causes visionOS to reliably crash, or to become extremely unresponsive with rendering artifacts everywhere. This looks like a potentially serious issue: even if the asset is problematic, loading it should not crash the entire OS. We have filed feedback FB14756285, along with a demo project. Hopefully someone can take a look. Thanks!
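For context, the loading code in our reproduction is nothing unusual; a minimal version looks like this (the asset name here is a placeholder for the problematic USDZ):

```swift
import SwiftUI
import RealityKit

struct AssetView: View {
    var body: some View {
        // "ProblemAsset" stands in for the USDZ file that triggers the crash.
        Model3D(named: "ProblemAsset") { model in
            model
                .resizable()
                .scaledToFit()
        } placeholder: {
            ProgressView()
        }
    }
}
```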
3 replies · 1 boost · 585 views · Jul ’25
[xrsimulator] Exception thrown during compile: cannotGetRkassetsContents
My friend cannot build my visionOS project in the simulator. He gets the following error: Error: [xrsimulator] Exception thrown during compile: cannotGetRkassetsContents(path: "/Users/path/to/Packages/RealityKitContent/Sources/RealityKitContent/RealityKitContent.rkassets") In Xcode, he is able to open the RealityKitContent package in Reality Composer Pro by clicking the Package.realitycomposerpro file. No warnings related to this error show up in RCP either, and all scenes appear to be usable and navigable there. The error only comes up when he builds the project in Xcode (Command+B). There is no other information about it in the Report Navigator's build logs, and it is always followed by a second error: Error: Tool exited with code 1. Yikes, please help!
1 reply · 1 boost · 513 views · Feb ’25
Force quit apps on Vision Pro simulator
How do you force quit apps in the Vision Pro simulator? I've read online that you can double-press the Digital Crown on a real Vision Pro device, but there is no such button in the simulator. So far I've tried pressing the Home button twice (⇧⌘H) and pressing the Siri button twice (⌥⇧⌘H); neither works. I'm on Xcode 15.0 beta 5 (15A5209g) and visionOS 1.0 beta 2 (21N5207e).
6 replies · 2 boosts · 3.1k views · Feb ’25
Can developers use spatial images in a third-party app's spatial widget?
Spatial widgets are a new feature of visionOS 26, and I noticed that the system Photos app can show a spatial image in its widget. Can third-party apps use spatial images, or any 3D content, in their widgets? I tried using RealityView in a widget and it crashes at runtime. Are spatial images in widgets only supported by the system Photos app, and not available to developers for now?
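Roughly what I tried, as a minimal sketch (the widget kind and the entity name "Scene" are placeholders):

```swift
import WidgetKit
import SwiftUI
import RealityKit

struct SpatialEntry: TimelineEntry {
    let date: Date
}

struct SpatialProvider: TimelineProvider {
    func placeholder(in context: Context) -> SpatialEntry { SpatialEntry(date: .now) }
    func getSnapshot(in context: Context, completion: @escaping (SpatialEntry) -> Void) {
        completion(SpatialEntry(date: .now))
    }
    func getTimeline(in context: Context, completion: @escaping (Timeline<SpatialEntry>) -> Void) {
        completion(Timeline(entries: [SpatialEntry(date: .now)], policy: .never))
    }
}

struct SpatialWidgetView: View {
    var body: some View {
        // Embedding RealityView directly in the widget view is the part that
        // crashes at runtime in my tests.
        RealityView { content in
            if let entity = try? await Entity(named: "Scene") {
                content.add(entity)
            }
        }
    }
}

struct SpatialWidget: Widget {
    var body: some WidgetConfiguration {
        StaticConfiguration(kind: "SpatialWidget", provider: SpatialProvider()) { _ in
            SpatialWidgetView()
        }
        .configurationDisplayName("Spatial Widget")
    }
}
```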
1 reply · 1 boost · 734 views · Jul ’25
Getting to MeshAnchor.MeshClassification from MeshAnchor?
I am working with MeshAnchors and I am having trouble getting to the classification of the triangles/faces. This post references MeshAnchor.Geometry, and that struct does have a property named classifications, but it is of type GeometrySource, and I cannot find any classification information in GeometrySource. Am I missing something there? I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any struct with that as a property.
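For illustration, here is an untested sketch of the kind of extraction I was expecting to write; it assumes the classifications GeometrySource stores one UInt8 per face and that MeshClassification can be constructed from that value as an Int raw value:

```swift
import ARKit

// Read per-face classifications out of a MeshAnchor (see assumptions above).
func faceClassifications(of anchor: MeshAnchor) -> [MeshAnchor.MeshClassification] {
    guard let source = anchor.geometry.classifications else { return [] }
    let base = source.buffer.contents().advanced(by: source.offset)
    return (0..<source.count).map { index in
        let rawValue = base
            .advanced(by: index * source.stride)
            .assumingMemoryBound(to: UInt8.self)
            .pointee
        return MeshAnchor.MeshClassification(rawValue: Int(rawValue)) ?? .none
    }
}
```

If there is an intended, documented way to get from GeometrySource to MeshClassification, that is exactly what I am looking for.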
3 replies · 0 boosts · 1.3k views · Feb ’25
Does Apple Spatial Audio Format documentation exist
The WWDC25 video and notes titled “Learn About Apple Immersive Video Technologies” introduced the Apple Spatial Audio Format (ASAF) and the APAC codec. However, despite references throughout to using immersive video, there is scant information on ASAF/APAC (no code examples and no framework references), and months on I have found no documentation in Apple’s APIs/frameworks about their implementation and use. I want to leverage ambisonic audio in my app, and I don’t want to write a custom AU if APAC will be opened up to developers. If you read the quotes below along with the iPhone 17 advertising (“Video is captured with Spatial Audio for immersive listening”), it sounds like this is very much a live feature in iOS 26. Does anyone know the state of play? I’m familiar with how the PHASE engine works, which is unrelated to what I’m asking about here. Original quotes from the video referenced above: “ASAF enables truly externalized audio experiences by ensuring acoustic cues are used to render the audio. It’s composed of new metadata coupled with linear PCM, and a powerful new spatial renderer that’s built into Apple platforms. It produces high resolution Spatial Audio through numerous point sources and high resolution sound scenes, or higher order ambisonics.” “ASAF is carried inside of broadcast Wave files with linear PCM signals and metadata. You typically use ASAF in production, and to stream ASAF audio, you will need to encode that audio as an mp4 APAC file.” “APAC efficiently distributes ASAF, and APAC is required for any Apple immersive video experience. APAC playback is available on all Apple platforms except watchOS, and supports Channels, Objects, Higher Order Ambisonics, Dialogue, Binaural audio, interactive elements, as well as provisioning for extendable metadata.”
4 replies · 0 boosts · 674 views · Sep ’25
Can't find a DLL in a visionOS app with Unity
Dear all, I'm using Unity 6.2 beta and Xcode 16.2. I'm creating a simple framework, written in Swift, to use the text-to-speech functionality of visionOS from Unity. I created an Objective-C wrapper with declarations such as:

void _initTTS(int);

I build the framework, import it into Unity, and call the functions from a C# wrapper class:

public static class TTSPluginManager
{
    [DllImport("TTS_Vision")]
    private static extern void _initTTS(int val);
    ...
    public static void Initialize()
    {
#if UNITY_VISIONOS
        _initTTS(0);
#else
        Debug.LogWarning("NativeTTS.Initialize called on a non-iOS platform. Ignoring.");
#endif
    }
}

I have managed to compile and run the program on the Apple Vision Pro, but I keep getting the following error:

DllNotFoundException: TTS_Vision assembly: type: member:(null)
TTSPluginManager.Initialize () (at Assets/Plugins/TTSPluginManager.cs:33)
LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) (at Assets/AVRLecture/LecturePortalManager.cs:17)
InkLoader.StartStory () (at Assets/AVRLecture/InkLoader.cs:24)
InkLoader.Start () (at Assets/AVRLecture/InkLoader.cs:18)

If I run the generated project from Xcode, I can see the app on the AVP, but I keep getting a loading error:

DllNotFoundException: Unable to load DLL 'TTS_Vision'. Tried the load the following dynamic libraries: Unable to load dynamic library '/TTS_Vision' because of 'Failed to open the requested dynamic library (0x06000000) dlerror() = dlopen(/TTS_Vision, 0x0005): tried: '/TTS_Vision' (no such file)
at TTSPluginManager.Initialize () [0x00000] in <00000000000000000000000000000000>:0
at LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) [0x00000] in <00000000000000000000000000000000>:0

I can see in the generated project that the framework (TTS_Vision) is there, but the path seems wrong. I've tried adding more options to the search paths, with no success. Any hints or suggestions are much appreciated.
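For completeness, here is a rough sketch (an assumption, not the framework's actual source) of how a Swift entry point can be exported with a C symbol name so that the C# [DllImport] binding can resolve _initTTS:

```swift
import AVFoundation

// Kept alive for the lifetime of the process so speech is not cut off.
private var synthesizer: AVSpeechSynthesizer?

// Exposes a C-callable symbol named "_initTTS" for the Unity plugin layer.
@_cdecl("_initTTS")
public func _initTTS(_ mode: Int32) {
    synthesizer = AVSpeechSynthesizer()
    // Real initialization (voice selection, audio session setup, …) would go here.
    print("TTS initialized, mode \(mode)")
}
```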
1 reply · 0 boosts · 293 views · Sep ’25
How to update TextureResource with MTLTexture?
Hi, I have a monitoring app that takes video input over UVC and processes it with Metal, eventually producing an MTLTexture. The problem I'm facing is that I have to convert the MTLTexture to a CGImage and then call TextureResource.replace, which is very slow. The Metal processing keeps up with the input frame rate (50 fps), but the MTLTexture -> CGImage -> TextureResource path only manages about 7 fps. Is there any way I can make it faster?
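For reference, the per-frame conversion currently looks roughly like this (simplified; the CIContext is created once and reused):

```swift
import CoreImage
import Metal
import RealityKit

// Current (slow) path: MTLTexture -> CIImage -> CGImage -> TextureResource.
func update(_ textureResource: TextureResource,
            from mtlTexture: MTLTexture,
            using ciContext: CIContext) throws {
    guard let ciImage = CIImage(mtlTexture: mtlTexture, options: nil),
          let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
        return
    }
    // The CGImage creation and re-upload on every frame is where the time goes.
    try textureResource.replace(withImage: cgImage, options: .init(semantic: .color))
}
```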
2 replies · 0 boosts · 440 views · Oct ’25
How to combine Occlusion nodes with soft edges.
At a recent community meeting we were wondering how Apple creates the soft-edge effect around occlusion cutouts. We see this effect on keyboard cutouts, iPhone cutouts, and in progressive spaces. An example: notice the soft edges around the occlusion cutout for the keyboard. One of our members created some Shader Graph materials to explore soft edges. They work by feeding data into the opacity channel of the PreviewSurface node. Unfortunately, the Occlusion Surface nodes lack any such input. If you know how to blend these concepts with RealityKit occlusion, please let us know!
0 replies · 1 boost · 913 views · Sep ’25
The AccessoryAnchor transform does not match any of the Accessory.LocationName options.
I am using AccessoryTrackingProvider from ARKit to get the transform of the PSVR2 controller via originFromAnchorTransform on the AccessoryAnchor, and I am also trying to anchor an AnchorEntity to the controller with RealityKit. However, none of the three Accessory.LocationName options, which define the AnchorEntity target, seem to match the position on the controller reported by ARKit. The attached picture shows two transforms: RealityKit, using .gripSurface to define the AnchoringComponent.Target.accessory location, and ARKit, using originFromAnchorTransform from AccessoryTrackingProvider. They are not aligned at the same point. As for the other Accessory.LocationName options, .aim is located at the tip of the controller, and .grip is at the same position as .gripSurface but with a different orientation. Why is there no Accessory.LocationName option that actually matches the transform captured by ARKit?
3 replies · 0 boosts · 1.2k views · 1w
LiDAR Projector Pattern iPhone 15 Pro vs. 12 Pro – Research Project Question
Dear Apple Team, I’m a high school student (vocational upper secondary school) working on my final research project about LiDAR sensors in smartphones, specifically Apple’s iPhone implementation.

My current understanding (for context): I understand Apple’s LiDAR uses dToF with SPAD detectors. A VCSEL laser emits pulses, a DOE splits the beam into a dot pattern, and each spot’s return time is measured separately → point cloud generation.

My specific questions:
1. How many active projection dots does the LiDAR projector have in the iPhone 15 Pro vs. iPhone 12 Pro?
2. Are the dots static, or do they shift/move over time?
3. How many depth measurement points does the system deliver internally (after processing)?
4. What is the ranging accuracy (cm-level precision) of each measurement point?

Experimental background: Using an IR night vision camera, I counted approximately 111 dots on the 15 Pro vs. 576 dots on the 12 Pro. Do these match the internal specifications? Photos of my measurements are available if helpful.

Contact request: I would be very grateful if you could connect me with an Apple engineer or ARKit specialist who works with LiDAR technology. I would love to ask follow-up questions directly and would be happy to provide my contact details for this purpose. These specifications would be essential for my research paper. Thank you very much in advance!

Best regards, Max
Vocational Upper Secondary School Hans-Leipelt-Schule Donauwörth
Research Project: “LiDAR Sensor Technology in Smartphones”
6 replies · 0 boosts · 635 views · 1w
Human pose detection failing on Vision Pro
Hi! I attempted to run a sample project for detecting human body poses in photos, which can be found here: https://developer.apple.com/documentation/vision/detecting-human-body-poses-in-3d-with-vision The project works perfectly when run on my MacBook Pro M1, but it fails on Apple Vision Pro. After selecting a photo, an endless loading screen is presented and the following output is produced in the console:

Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Network path is nil: (null)
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Unable to perform the request: Error Domain=com.apple.Vision Code=9 "Async status object reported as failed but without an error" UserInfo={NSLocalizedDescription=Async status object reported as failed but without an error}.
de-activating session 70138 after timeout

It seems that VNDetectHumanBodyPose3DRequest is failing on Vision Pro for some reason. Are there any additional requirements for running the Vision framework on visionOS that I might be missing?
3 replies · 1 boost · 199 views · Mar ’25
AR session fails with "Required sensor failed"
The AR-based app I am working on is experiencing an issue. Sometimes the AR session fails with a call to my ARSessionObserver's session(_ session: ARSession, didFailWithError error: Error) with the following error: Error Domain=com.apple.arkit.error Code=102 "Required sensor failed." NSLocalizedFailureReason="A sensor failed to deliver the required input." NSLocalizedRecoverySuggestion="Make sure that the application has the required privacy settings." The underlying error seems to point to the CoreMotion framework: Domain=CMErrorDomain Code=102 "(null)". Some people seem to have experienced this issue and solved it by making sure that the Compass Calibration switch is ON in Settings > Privacy > Location Services > System Services. For context, ARWorldTrackingConfiguration.worldAlignment is set to .gravity. The thing is, that switch is already ON when I experience the issue, and I have noticed that it happens far more often on the iPhone 16e than on any other device. Has anyone had a similar experience? I am looking for a way to prevent this error (ideally) or to handle it in a way that does not affect the user. Any help is appreciated.
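For context, this is roughly where the failure surfaces; the re-run at the end is just one mitigation I am experimenting with, not a confirmed fix:

```swift
import ARKit

final class SessionObserver: NSObject, ARSessionObserver {
    func session(_ session: ARSession, didFailWithError error: Error) {
        guard let arError = error as? ARError, arError.code == .sensorFailed else { return }

        // Log the underlying CoreMotion error for diagnosis.
        if let underlying = (error as NSError).userInfo[NSUnderlyingErrorKey] as? NSError {
            print("Underlying error: \(underlying.domain) code \(underlying.code)")
        }

        // Tentative mitigation: re-run the same configuration.
        let configuration = ARWorldTrackingConfiguration()
        configuration.worldAlignment = .gravity
        session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    }
}
```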
0 replies · 1 boost · 236 views · Aug ’25
Access UltraWideCamera when ARSession is running
ARSession provides a video stream from the wide-angle camera. If ARSession could use the ultra wide camera at the same time, it could also provide a video stream from that camera; otherwise, an AVCaptureSession using the ultra wide camera should be allowed to run alongside it. It would be very useful if we could access different cameras while an ARSession is running, and we'd like to cooperate with you on this if possible. Steps to reproduce: run an AVCaptureSession and then run an ARSession; the AVCaptureSession stops.
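In code, the reproduction is roughly this (a sketch, assuming a basic world-tracking configuration):

```swift
import ARKit
import AVFoundation

final class Reproduction {
    let captureSession = AVCaptureSession()
    let arSession = ARSession()

    func run() {
        // 1. Start capturing from the ultra wide camera.
        if let device = AVCaptureDevice.default(.builtInUltraWideCamera, for: .video, position: .back),
           let input = try? AVCaptureDeviceInput(device: device),
           captureSession.canAddInput(input) {
            captureSession.addInput(input)
        }
        captureSession.startRunning() // move off the main thread in production code

        // 2. Start ARKit; at this point the AVCaptureSession stops delivering frames.
        arSession.run(ARWorldTrackingConfiguration())
    }
}
```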
1 reply · 0 boosts · 458 views · 2w
onWorldRecenter memory leak and duplicate callbacks in ImmersiveSpace
Posting this here in case the information is helpful to other developers. As of visionOS 26.3 beta 1, onWorldRecenter has two significant issues (FB21557639):

1. Memory leak: When onWorldRecenter is assigned to a RealityView within an ImmersiveSpace, it appears to retain a strong reference to the view's internal SwiftUI context. When the immersive space is dismissed, the view's @State objects will not be deallocated. Also, each time the immersive space view's body is executed, additional state storage will be allocated and leaked.

2. Multiple callbacks: When the user long-presses the Digital Crown, the onWorldRecenter closure will be called multiple times, once for each past view body execution, including those of immersive space views that have been previously dismissed.

Although these issues seem to be most prevalent when onWorldRecenter is used with an ImmersiveSpace, they may also occur in the context of a WindowGroup under certain circumstances. It's possible to work around the problem by moving onWorldRecenter to an empty overlay view within the app's primary WindowGroup and forwarding the world-recenter events to ImmersiveSpace views through a notification, coupled with a debouncer as an extra precaution.
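A sketch of that workaround is below. It assumes onWorldRecenter takes a parameterless closure and can be applied to an ordinary SwiftUI view; adjust to the actual modifier signature if it differs:

```swift
import SwiftUI

extension Notification.Name {
    // Hypothetical notification name used to forward recenter events.
    static let worldDidRecenter = Notification.Name("worldDidRecenter")
}

// Placed as an overlay on a view inside the app's primary WindowGroup.
struct RecenterForwarder: View {
    @State private var debounce: Task<Void, Never>?

    var body: some View {
        Color.clear
            .onWorldRecenter {
                // Debounce the duplicate callbacks before forwarding.
                debounce?.cancel()
                debounce = Task {
                    try? await Task.sleep(for: .milliseconds(100))
                    guard !Task.isCancelled else { return }
                    NotificationCenter.default.post(name: .worldDidRecenter, object: nil)
                }
            }
    }
}
```

Views inside the ImmersiveSpace then observe .worldDidRecenter instead of using onWorldRecenter directly.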
1 reply · 1 boost · 972 views · 3w
ImagePresentationComponent .spatialStereoImmersive mode not rendering in WindowGroup context
Platform: visionOS 2.6. Frameworks: RealityKit, SwiftUI. Component: ImagePresentationComponent.

I'm working with the new ImagePresentationComponent from visionOS 26 and hitting a rendering limitation when switching to the .spatialStereoImmersive viewing mode within a WindowGroup context.

This is what I'm seeing:
- Pure immersive space: ImagePresentationComponent with .spatialStereoImmersive mode works perfectly in a standalone ImmersiveSpace.
- Mode switching API: all mode transitions work correctly (logs confirm the component updates).
- Spatial content: .spatialStereo mode renders correctly in both window and immersive contexts.

This is where it breaks for me:
- Window context: when the same RealityView + ImagePresentationComponent is placed inside a WindowGroup (even when that window is floating in a mixed immersive space), switching to .spatialStereoImmersive mode shows no visual change. The API calls succeed and state updates correctly, but the immersive content doesn't render.

Apple's Spatial Gallery demonstrates exactly what I'm trying to achieve: spatial photos displayed in a window with what feels like a horizontal scroll view, the system window control bar, and so on. Tapping a spatial photo smoothly transitions it to immersive mode in place; the immersive content appears to "grow" from the original window position just by changing the IPC viewing mode. This proves the functionality should be possible, but I can't determine the correct configuration.

So, my questions are:
1. Is there a specific RealityView or WindowGroup configuration required to enable immersive content rendering from window contexts?
2. Are there bounds/clipping settings that need to be configured to allow immersive content to "break out" of window constraints?
3. Does .spatialStereoImmersive require a specific rendering context that isn't available in windowed RealityView instances?
4. How do you think Apple's Spatial Gallery app achieves this functionality?

For a little more context: all viewing modes are available ([.mono, .spatialStereo, .spatialStereoImmersive]), the spatial photos are valid and work correctly in a pure immersive space, a mixed immersive space is active when testing the window context, and there are no errors or warnings in the console beyond the successful mode-switching logs. Any insights into the proper configuration for window-hosted immersive content would be appreciated.
1 reply · 1 boost · 194 views · Aug ’25