Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under the Spatial Computing topic

ShaderGraphMaterial with Occlusion Surface Output fails to load on iOS and macOS
A ShaderGraphMaterial with an Occlusion Surface Output generated with Reality Composer 2 fails to load on iOS 18 and macOS 15 with the following error:

    RealityFoundation.ShaderGraphMaterial.LoadError.invalidTypeFound
    (https://developer.apple.com/documentation/realitykit/shadergraphmaterial/loaderror/invalidtypefound)

This happens with both
https://developer.apple.com/documentation/shadergraph/realitykit/occlusion-surface-(realitykit) and
https://developer.apple.com/documentation/shadergraph/realitykit/shadow-receiving-occlusion-surface-(realitykit)

    RealityView { content in
        do {
            let bgEntity = ModelEntity(
                mesh: .generateCone(height: 0.5, radius: 0.1),
                materials: [SimpleMaterial(color: .red, isMetallic: true)]
            )
            bgEntity.position.z = -0.2
            content.add(bgEntity)

            let occlusionMaterial = try await ShaderGraphMaterial(
                named: "/Root/OcclusionMaterial",
                from: "OcclusionMaterial"
            )
            let testEntity = ModelEntity(
                mesh: .generateSphere(radius: 0.4),
                materials: [occlusionMaterial]
            )
            content.add(testEntity)
            content.cameraTarget = testEntity
        } catch {
            print("Shader Graph Load Error:")
            dump(error)
        }
    }
    .realityViewCameraControls(.orbit)
    .edgesIgnoringSafeArea(.all)

Feedback ID: FB15081296
Replies: 2 · Boosts: 1 · Views: 1.3k · Last activity: Nov ’25
Reality Composer to Reality Composer Pro?
I sketched an idea for a project in Reality Composer on my iPad, thinking that when I had a chance to sit down I would work it up in Xcode. However, when I got back to my computer, I discovered I cannot open a file created in Reality Composer (or the exported Reality file) in Reality Composer Pro. Am I missing something obvious here? This seems like a huge oversight. If anyone can let me know how to open a file created in Reality Composer in Reality Composer Pro, I would greatly appreciate it, partly because there seem to be objects available in Reality Composer that are not in Reality Composer Pro. Thanks Stan
Replies: 2 · Boosts: 1 · Views: 268 · Last activity: Jun ’25
Reality Composer Timeline unfinished
The timeline editor often feels unfinished. Setting the time cursor to a different time is often not reflected in the preview; you either have to click a clip or wait. Sometimes the cursor even disappears, e.g. when switching tabs to the Shader Graph. Selecting and moving multiple clips at once is not possible, and there is no snapping to clips or to the time cursor as found in other tools. And then there is the timeline compile bug: https://developer.apple.com/forums/thread/810868 The timeline as it is, is a good start, but it definitely needs some more love to be on par with other commercial tools like Unity or After Effects.
Replies: 1 · Boosts: 1 · Views: 190 · Last activity: 3w
How to fix "Sample 0 missing LiDAR point cloud!" error?
I'm trying to run a PhotogrammetrySession based on photos taken in an AVCaptureSession and stored as .heic files. When I load the files, the error "Sample 0 missing LiDAR point cloud!" shows up for each individual sample. Debugging shows that sample.depthDataMap is populated, and the .heic contains depth data that can be extracted, e.g. with heif-convert on my Mac. Comparing the .heic I created to one from ObjectCaptureSession, which doesn't show the LiDAR warning, I noticed the only difference being the HEIC information here: So my questions are: Is this the information missing from my manual capture that causes the warning? Can I somehow add this information in an AVCaptureSession? Does this information allow better photogrammetry results?
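One way to compare what the two HEICs actually embed is to dump their auxiliary depth data with ImageIO and AVFoundation. A minimal sketch follows; the function name is illustrative, and note that this only shows whether depth/disparity payloads are present, while the point-cloud payload that ObjectCaptureSession writes may well be separate metadata.

    import AVFoundation
    import ImageIO

    // Prints which auxiliary depth payloads a HEIC carries.
    // The function name is a placeholder; pass the URL of one of your captures.
    func inspectDepth(in url: URL) {
        guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return }

        for type in [kCGImageAuxiliaryDataTypeDepth, kCGImageAuxiliaryDataTypeDisparity] {
            if let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, type) as? [AnyHashable: Any],
               let depthData = try? AVDepthData(fromDictionaryRepresentation: info) {
                print("\(type): present, type \(depthData.depthDataType), quality \(depthData.depthDataQuality)")
            } else {
                print("\(type): not present")
            }
        }
    }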
Replies: 2 · Boosts: 0 · Views: 348 · Last activity: Dec ’25
VisionOS 2 - Screen Capture with passthrough
We're trying to switch from using main camera access in ARKit to screen capture with passthrough; however, we're facing some issues and it seems a bit complicated to debug. We have set up a broadcast extension and added some logging in the sample handler, but we get nothing in the console, nor does the recording start. We set up the picker as well and can see our extension in Control Center as one of the choices, but tapping Start results in it stopping in less than one second. The only messages we see in Console.app are rather contradictory:

    [INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1333 Extension has passthrough license

and, just right after:

    [INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1336 Extension does not have passthrough license
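To narrow down whether the extension process is launched at all, one approach is to log from every RPBroadcastSampleHandler callback and filter Console.app by the extension's subsystem. A minimal sketch, with a placeholder subsystem string:

    import ReplayKit
    import os

    // Broadcast upload extension handler that only logs lifecycle events, to
    // confirm whether the extension is launched and whether frames ever arrive.
    // Replace the subsystem string with your extension's bundle identifier.
    class SampleHandler: RPBroadcastSampleHandler {
        private let log = Logger(subsystem: "com.example.broadcast", category: "lifecycle")

        override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
            log.info("broadcastStarted, setupInfo: \(String(describing: setupInfo))")
        }

        override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                          with sampleBufferType: RPSampleBufferType) {
            log.info("received sample buffer of type \(sampleBufferType.rawValue)")
        }

        override func broadcastPaused()   { log.info("broadcastPaused") }
        override func broadcastResumed()  { log.info("broadcastResumed") }
        override func broadcastFinished() { log.info("broadcastFinished") }
    }

If broadcastStarted never fires, the system is rejecting the extension before it runs, which would match the contradictory license messages above rather than a problem in the handler code.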
Replies: 2 · Boosts: 1 · Views: 527 · Last activity: Dec ’25
Why VideoMaterial can't show transparency on Apple Vision Pro
https://developer.apple.com/documentation/realitykit/videomaterial The documentation says: "Video materials support transparency if the source video’s file format also supports transparency." I have a video with transparency (Hand.mov, HEVC with alpha). I can show the video with a transparent background correctly in the Vision Pro simulator, but on the physical device the video has a black background. I'm sure the video format is OK because I can get the texture from the video and display it on an UnlitMaterial. How can I show the transparent video correctly with RealityKit's VideoMaterial?
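For reference, a baseline VideoMaterial setup looks like the sketch below (the asset name is taken from the post, the plane size is arbitrary). If the same HEVC-with-alpha asset keeps its transparency in the simulator but not on device through this exact path, that would suggest a device-side decode issue rather than app code, and would be worth a feedback report.

    import AVFoundation
    import RealityKit

    // Minimal VideoMaterial playback on a plane; sizes are arbitrary.
    func makeVideoEntity() -> ModelEntity? {
        guard let url = Bundle.main.url(forResource: "Hand", withExtension: "mov") else { return nil }

        let player = AVPlayer(url: url)
        let material = VideoMaterial(avPlayer: player)

        let entity = ModelEntity(mesh: .generatePlane(width: 0.4, height: 0.6),
                                 materials: [material])
        player.play()
        return entity
    }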
Replies: 2 · Boosts: 0 · Views: 258 · Last activity: Dec ’25
ARKit Body Tracking not detecting ARBodyAnchor on iOS 26.x (FB15128723)
Since updating to iOS 26.0 (and confirmed on 26.1), ARBodyTrackingConfiguration no longer detects a valid ARBodyAnchor on devices with LiDAR (e.g., iPhone 15 Pro, iPhone 17 Pro Max). This issue reproduces in custom projects and in Apple’s official sample “Capturing Body Motion in 3D”. The AR session runs normally, but the delegate call

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor])

never yields an ARBodyAnchor with valid joint transforms. All joints return nil when calling

    body.skeleton.modelTransform(for: jointName)

resulting in 0 valid joints per frame.

Environment
• Device: iPhone 17 Pro Max (LiDAR)
• iOS: 26.0 / 26.1
• Xcode: 16.0 (stable)
• Framework: ARKit + RealityKit
• Configuration used:

    config.worldAlignment = .gravityAndHeading
    config.isAutoFocusEnabled = true
    config.environmentTexturing = .none
    session.run(config)

Also tested: with and without frameSemantics = .bodyDetection

Expected behavior: ARBodyAnchor should be detected and body.skeleton should contain ~89 valid joints with continuous updates.
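For isolating the regression, a stripped-down probe along these lines (class name illustrative, no RealityKit involved) makes it easy to log whether any valid joints ever arrive:

    import ARKit

    // Logs whether the well-known named joints ever report a valid model transform.
    final class BodyTrackingProbe: NSObject, ARSessionDelegate {
        let session = ARSession()

        func start() {
            guard ARBodyTrackingConfiguration.isSupported else {
                print("Body tracking is not supported on this device")
                return
            }
            session.delegate = self
            session.run(ARBodyTrackingConfiguration())
        }

        func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
            for case let body as ARBodyAnchor in anchors {
                let named: [ARSkeleton.JointName] = [.root, .head, .leftShoulder, .rightShoulder,
                                                     .leftHand, .rightHand, .leftFoot, .rightFoot]
                let valid = named.filter { body.skeleton.modelTransform(for: $0) != nil }
                print("ARBodyAnchor: \(valid.count)/\(named.count) named joints valid; total joints: \(body.skeleton.jointModelTransforms.count)")
            }
        }
    }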
Replies: 4 · Boosts: 1 · Views: 1.1k · Last activity: Dec ’25
Game Controller Input Limitations in visionOS Volumetric Windows - Need Clarification
Hello Apple Developer Community,

I'm developing a game for visionOS and have encountered significant limitations with game controller input when using volumetric windows (WindowGroup with .volumetric style). I'd appreciate clarification on whether this is expected behavior and any guidance on best practices.

🧩 Issue Summary
When using a DualSense controller with a volumetric window in visionOS, only a subset of controller inputs is available to the app. The remaining inputs appear to be reserved by the system for UI navigation.

✅ Working Inputs (Volumetric Window)
• D-Pad (all directions)
• L3 (left thumbstick button click)
• R3 (right thumbstick button click)
• Menu button
• Options button

❌ Not Working Inputs (Volumetric Window)
• Left thumbstick analog movement (used for UI scrolling instead)
• Right thumbstick analog movement (used for UI scrolling instead)
• Face buttons (Cross, Circle, Square, Triangle / A, B, X, Y)
• Shoulder buttons (L1, R1)
• Triggers (L2, R2)

Key observation: When moving the left thumbstick in a volumetric window, the window's UI scrolls vertically instead of sending input to my app's GameController handlers. Similarly, face buttons seem to be reserved for system UI interactions.

⚙️ Implementation Details
I'm using the standard GameController framework:
• Connect to the controller via GCController.controllers()
• Access the extendedGamepad profile
• Set up valueChangedHandler and pressedChangedHandler for all inputs
• Handlers confirmed registered via logging
• Working inputs (D-Pad, L3, R3) trigger immediately and consistently
• Non-working inputs (thumbsticks, face buttons) never trigger

🧠 Critical Finding: ImmersiveSpace Works Perfectly
When testing the exact same code in an ImmersiveSpace (.mixed immersion style), all controller inputs work perfectly:
✅ Both thumbsticks provide full analog input
✅ All face buttons trigger their handlers
✅ All shoulder buttons and triggers work correctly
✅ 100% success rate with no intermittent issues

This suggests the issue isn't with my code, but rather with how visionOS handles controller input differently between volumetric windows and ImmersiveSpace.

🧪 Test Environment
I created a minimal test project (Controller-Playground) to isolate the issue:
• A simple ControllerTester class that registers all GameController handlers
• A visual UI showing real-time input state
• No game logic, RealityKit physics, or other complexity

Results:
• In a volumetric window: only D-Pad, L3, R3, Menu, Options work
• In ImmersiveSpace: all inputs work perfectly

This confirms the limitation exists at the visionOS platform level, not in app code.

🧰 Attempted Workarounds
I tried the following without success:
• Setting GCSupportsControllerUserInteraction = false in Info.plist
• Setting UIRequiresFullScreen = true
• Changing window styles (.plain, .volumetric)
• Polling vs. handler-based input approaches
• Various threading models (MainActor, separate thread)

Result: The only way to enable full controller support is to switch to ImmersiveSpace.

❓ Questions for Apple
1. Is this input reservation behavior in volumetric windows intended and documented?
2. Are game controllers expected to have limited functionality in volumetric windows while full functionality is reserved for ImmersiveSpace?
3. Is there a way to request full controller input access in a volumetric window, or is ImmersiveSpace the only option for complete controller support?
4. Where can I find official documentation about controller input differences between window types?
5. Are there any APIs or configuration options to disable system controller shortcuts in volumetric windows?

🎯 Impact
This limitation has a significant effect on game design and architecture:
• Volumetric windows offer a multitasking-friendly, less immersive experience
• ImmersiveSpace provides full controller support but may be more immersive than some games require
• Games that only need basic D-Pad and button input can work fine in volumetric windows
• Games requiring analog sticks or face buttons must currently use ImmersiveSpace

It would be very helpful if Apple could clarify or reference existing documentation regarding controller input handling in different visionOS window types. If such documentation doesn't exist yet, it might be valuable to include this information in future developer guides or best-practice documents.

🕹 Current Workaround
For now, I'm using:
• D-Pad for character movement (digital 8-direction)
• R3 (right stick click) as a substitute for the "X" button

This setup allows the game to function within a volumetric window, though full controller support still requires ImmersiveSpace.

📄 Request
If this is expected behavior, I may have simply missed the relevant documentation. Could you please point me to any existing resources that explain this design? If there isn't one yet, it would be great if future visionOS documentation could:
• Clearly outline controller input behavior across window types
• Provide guidance on when to use volumetric windows vs. ImmersiveSpace for games
• Consider adding an API option to request full controller access when appropriate

If this is not expected behavior, I'm happy to file a detailed bug report with sample code.

💻 System Information
• visionOS: Latest Simulator
• Xcode: Latest version
• Controller: Sony DualSense
• Framework: GameController (standard extendedGamepad profile)
• Test project: Minimal reproducible example available

Thank you for any clarification or guidance you can provide. This information would be valuable for many developers working on visionOS games.
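For anyone trying to reproduce the comparison, a bare-bones handler setup looks roughly like the sketch below (class and log strings are illustrative, not the poster's Controller-Playground project). Running the same probe in a volumetric window and in an ImmersiveSpace makes the difference in delivered elements easy to log.

    import GameController

    // Registers handlers on every connected extended gamepad and logs what fires.
    final class ControllerProbe {
        private var connectObserver: NSObjectProtocol?

        func start() {
            connectObserver = NotificationCenter.default.addObserver(
                forName: .GCControllerDidConnect, object: nil, queue: .main
            ) { [weak self] note in
                guard let controller = note.object as? GCController else { return }
                self?.register(controller)
            }
            GCController.controllers().forEach(register)
        }

        private func register(_ controller: GCController) {
            guard let pad = controller.extendedGamepad else { return }

            // Fires for any element; shows which inputs ever reach the app.
            pad.valueChangedHandler = { _, element in
                print("element changed: \(element)")
            }
            pad.leftThumbstick.valueChangedHandler = { _, x, y in
                print("left stick: \(x), \(y)")
            }
            pad.buttonA.pressedChangedHandler = { _, _, pressed in
                print("buttonA pressed: \(pressed)")
            }
        }
    }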
Replies: 1 · Boosts: 0 · Views: 600 · Last activity: Oct ’25
Is ARGeoTrackingConfiguration always more accurate than ARWorldTrackingConfiguration for world scale AR?
We are working on a world scale AR app that leverages the device location and heading to place objects in the streets, so that they are correctly and stably anchored to certain locations. Since geo-tracking imagery is only available in certain cities and areas, we are trying to figure out how to fall back when geo-tracking is not available as the device moves away, while still retaining good AR camera accuracy. We might need to come up with some algorithm using the device GPS to line up the ARCamera with our objects. Question: Does geo-tracking always provide accuracy greater than or equal to that of world tracking for a GPS outdoor AR experience? If so, we can simply use ARGeoTrackingConfiguration the entire time and rely on the ARView keeping itself aligned. Otherwise, we need to switch between it and ARWorldTrackingConfiguration when geo-tracking is not available and/or its accuracy is low, then roll our own algorithm to keep the camera aligned. Thanks.
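Not an answer to the accuracy question itself, but if the fallback route is taken, one plausible shape for the switch is to probe geo-tracking availability for the current coordinate before choosing a configuration. A sketch, assuming you already have a CLLocationCoordinate2D from Core Location:

    import ARKit
    import CoreLocation

    // Picks a configuration based on geo-tracking availability at a location.
    func chooseConfiguration(for coordinate: CLLocationCoordinate2D,
                             completion: @escaping (ARConfiguration) -> Void) {
        guard ARGeoTrackingConfiguration.isSupported else {
            completion(ARWorldTrackingConfiguration())
            return
        }
        ARGeoTrackingConfiguration.checkAvailability(at: coordinate) { available, _ in
            completion(available ? ARGeoTrackingConfiguration()
                                 : ARWorldTrackingConfiguration())
        }
    }

At runtime, ARKit also reports the current geo-tracking state per frame (ARFrame.geoTrackingStatus and the session(_:didChange:) observer callback), which can drive the decision to fall back while the session is running.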
Replies: 3 · Boosts: 0 · Views: 1.1k · Last activity: Oct ’25
Looking for a way to implement the video display effect in Apple's 'Spatial Gallery'
Hi guys, I noticed that Apple created a really engaging visual effect for browsing spatial videos in the Spatial Gallery app. The video appears embedded in a glass panel with glowing edges and even shows a parallax effect as you move around. When I tried to display a stereo video using RealityView, however, the video entity always floats above the panel. How does visionOS implement this effect? Is there an approach to achieve it, or example code I can use in my own app? Thanks!
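No confirmed answer here, but one speculative way to get a video that sits recessed behind a rounded panel with real parallax is a RealityKit portal: place the video plane in its own world, slightly behind a plane that uses PortalMaterial. This is only a guess at the look, not how Spatial Gallery is actually built; names and sizes below are arbitrary.

    import RealityKit

    // Speculative "video behind a panel" setup using a portal.
    func makeFramedVideoEntity(videoMaterial: VideoMaterial) -> Entity {
        // Hidden world that hosts the actual video surface.
        let world = Entity()
        world.components.set(WorldComponent())

        let screen = ModelEntity(mesh: .generatePlane(width: 0.64, height: 0.36),
                                 materials: [videoMaterial])
        screen.position.z = -0.05   // recess behind the portal plane for parallax
        world.addChild(screen)

        // Portal plane that "looks into" the world above.
        let portal = ModelEntity(mesh: .generatePlane(width: 0.64, height: 0.36, cornerRadius: 0.02),
                                 materials: [PortalMaterial()])
        portal.components.set(PortalComponent(target: world))

        let root = Entity()
        root.addChild(world)
        root.addChild(portal)
        return root
    }

The glowing glass frame itself would still need separate geometry or a SwiftUI glass background around the RealityView.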
Replies: 3 · Boosts: 1 · Views: 224 · Last activity: Jun ’25
Triangle Recommendation Limit for M5 Vision Pro
For the M2 Apple Vision Pro, there's "a general guideline, we recommend no more than 500 thousand triangles for an immersive scene, with 250 thousand for applications in the shared space" (https://developer.apple.com/videos/play/wwdc2024/10186/?time=147). Is there a revised recommendation for the M5 Apple Vision Pro?
Replies: 2 · Boosts: 1 · Views: 784 · Last activity: Nov ’25
Enterprise API with Education Account
Hello, I am trying to develop an app that broadcasts what the user sees via Apple Vision Pro. I am a graduate student at a university, and I have two questions: If I want to use passthrough in screen capture on visionOS, do I have to join the Apple Developer Enterprise Program to get the Enterprise API? And can I buy the Apple Developer Enterprise Program (Enterprise API) with my university account? Have any of you been able to do this? Thank you
Replies: 2 · Boosts: 1 · Views: 262 · Last activity: Jul ’25
Issue with tracking multiple images using ARKit on VisionOS
We are using the ARKit image tracking feature on visionOS 2.0 with three pre-registered images. The image tracking works, but only one image is actively tracked at a time. When more than one target image is visible to the camera, it has difficulty detecting and tracking the other images. Is this the expected behavior in visionOS, or is there something we need to do to resolve this issue?
Replies: 4 · Boosts: 1 · Views: 532 · Last activity: Mar ’25
App Window Closure Sequence Impacts Main Interface Reload Behavior
My visionOS app (Travel Immersive) has two interface windows: a main 2D interface window and a 3D Earth window. If the user first closes the main interface window and then the Earth window, clicking the app icon again will only launch the Earth window and fail to display the main interface window. However, if the user closes the Earth window first and then the main interface window, the app restarts normally. Below is the code:

    import SwiftUI

    @main
    struct Travel_ImmersiveApp: App {
        @StateObject private var appModel = AppModel()

        var body: some Scene {
            WindowGroup(id: "MainWindow") {
                ContentView()
                    .environmentObject(appModel)
                    .onDisappear {
                        appModel.closeEarthWindow = true
                    }
            }
            .windowStyle(.automatic)
            .defaultSize(width: 1280, height: 825)

            WindowGroup(id: "Earth") {
                if !appModel.closeEarthWindow {
                    Globe3DView()
                        .environmentObject(appModel)
                        .onDisappear {
                            appModel.isGlobeWindowOpen = false
                        }
                } else {
                    EmptyView() // render an empty view when closed
                }
            }
            .windowStyle(.volumetric)
            .defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)

            ImmersiveSpace(id: "ImmersiveView") {
                ImmersiveView()
                    .environmentObject(appModel)
            }
        }
    }
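Not a confirmed fix, but one pattern worth trying is to close the Earth window explicitly with the dismissWindow environment action instead of swapping its content for an EmptyView, so the scene system knows the window is gone when the app is relaunched. A sketch assuming the same window IDs as above (it also changes behavior: closing the main window now closes the Earth window too):

    import SwiftUI

    // Wrapper for the main window's content that dismisses the companion
    // volumetric window instead of leaving it as the only restorable scene.
    struct ContentViewWrapper: View {
        @EnvironmentObject var appModel: AppModel
        @Environment(\.dismissWindow) private var dismissWindow

        var body: some View {
            ContentView()
                .onDisappear {
                    // Explicitly close the Earth window when the main window goes away.
                    dismissWindow(id: "Earth")
                }
        }
    }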
Replies: 6 · Boosts: 0 · Views: 290 · Last activity: Apr ’25
Automatically Enabling Spatial Personas in App
Is it possible to create a button in my app that will turn on spatial personas for the user? Currently the only way I know of to turn on spatial personas is by selecting the cube icon in the FaceTime window, which is quite clunky for people unfamiliar with the Vision Pro's UI. Any help would be appreciated.
Replies: 1 · Boosts: 0 · Views: 423 · Last activity: Feb ’25
Loading USDZ with particle system crashes on Intel Macs
Hello, we have a RealityKit app that also runs on macOS via Catalyst. For specific USD assets containing particle systems we have observed a reproducible crash.

Steps to reproduce:
1. Open Reality Composer Pro
2. Create a new file
3. Create a simple particle system (the default one is fine)
4. Export as USDZ
5. Create a project in Xcode
6. Call Entity.load(…) and pass in your USD

Running this on an Intel iMac with macOS Sequoia 15.3 leads to a crash with the following assertion, printed several times with the copies interleaved in the console log:

    validateWithDevice:4704: failed assertion `Render Pipeline Descriptor Validation
    depthAttachmentPixelFormat (MTLPixelFormatDepth32Float) and
    stencilAttachmentPixelFormat (MTLPixelFormatStencil8) must match.'

Xcode version: 16.2.0
iMac 2020, 3.8 GHz Intel Core i7
macOS Sequoia 15.3
FB16477373

It would be great if this could be fixed quickly or a workaround provided, since it affects our production app. Thank you!
Replies: 1 · Boosts: 0 · Views: 454 · Last activity: Mar ’25
RotateGesture3D auto constrained to axis
Hi, on visionOS we can rely on RotateGesture3D to manage entity rotation. With the constrainedToAxis parameter we can even authorize rotation only on the x, y or z axis, or on combinations of them. What I want to know is whether it is possible to constrain the rotation to an axis automatically. Let me explain: the functionality I would like to implement is to constrain the rotation to an axis only once the user has started their gesture. The initial gesture the user makes should tell us which axis they want to rotate on. This would be equivalent to automatically activating a constraint on one of the axes, as if we had defined the gesture on one of them: RotateGesture3D(constrainedToAxis: .x) RotateGesture3D(constrainedToAxis: .y) RotateGesture3D(constrainedToAxis: .z) Is it possible to do this? If so, what would be the best way to do it? A code example would be greatly appreciated. Regards Tof
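There doesn't seem to be a built-in option for this, but one way to approximate it is to leave the gesture unconstrained and lock onto the dominant axis of the first rotation update yourself. A sketch; ModelPlaceholderView is a hypothetical content view, and the state handling is illustrative:

    import SwiftUI
    import Spatial

    // Locks the rotation axis from the first gesture update, then re-expresses
    // every subsequent rotation around that axis only.
    struct AutoAxisRotateView: View {
        @State private var lockedAxis: RotationAxis3D?
        @State private var orientation = Rotation3D.identity

        var body: some View {
            ModelPlaceholderView(orientation: orientation)   // hypothetical 3D content view
                .gesture(
                    RotateGesture3D()
                        .onChanged { value in
                            // Pick the dominant component of the first reported axis,
                            // keeping its sign so the rotation direction is preserved.
                            if lockedAxis == nil {
                                let a = value.rotation.axis
                                if abs(a.x) >= abs(a.y), abs(a.x) >= abs(a.z) {
                                    lockedAxis = RotationAxis3D(x: a.x < 0 ? -1 : 1, y: 0, z: 0)
                                } else if abs(a.y) >= abs(a.z) {
                                    lockedAxis = RotationAxis3D(x: 0, y: a.y < 0 ? -1 : 1, z: 0)
                                } else {
                                    lockedAxis = RotationAxis3D(x: 0, y: 0, z: a.z < 0 ? -1 : 1)
                                }
                            }
                            if let axis = lockedAxis {
                                orientation = Rotation3D(angle: value.rotation.angle, axis: axis)
                            }
                        }
                        .onEnded { _ in lockedAxis = nil }
                )
        }
    }

A refinement would be to wait until the rotation angle exceeds a small threshold before locking, so a tiny initial wobble doesn't pick the wrong axis.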
Replies: 3 · Boosts: 0 · Views: 434 · Last activity: Feb ’25
ARKit hand tracking
Hello, I am developing a visionOS application and am interested in obtaining detailed data about the user's hands through ARKit, including but not limited to the transform and rotation angle. I have reviewed the Happy Beam sample, but it appears to cover only recognizing a specific gesture. Could you please advise on how to obtain the transform and rotation angle of the user's hand? Thank you.
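One route that provides this is ARKit's HandTrackingProvider on visionOS, which delivers a HandAnchor per hand with a full transform plus per-joint transforms (rotation is the upper-left 3x3 of each matrix). A minimal sketch; the function name is illustrative, and the app needs the hand-tracking usage description and user permission:

    import ARKit

    // Streams hand anchors and prints the world-space position of one joint.
    func trackHands() async throws {
        let session = ARKitSession()
        let handTracking = HandTrackingProvider()
        try await session.run([handTracking])

        for await update in handTracking.anchorUpdates {
            let anchor = update.anchor
            guard anchor.isTracked else { continue }

            // World-space transform of the hand anchor (wrist).
            let originFromHand = anchor.originFromAnchorTransform

            // Example joint: index finger tip, expressed relative to the hand anchor.
            if let tip = anchor.handSkeleton?.joint(.indexFingerTip) {
                let originFromTip = originFromHand * tip.anchorFromJointTransform
                print("\(anchor.chirality) index tip position: \(originFromTip.columns.3)")
            }
        }
    }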
Replies: 1 · Boosts: 0 · Views: 527 · Last activity: Mar ’25