Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under the Spatial Computing topic (each post is followed by its Replies, Boosts, Views, and last Activity).

Implementing Foveated Streaming with Apple Vision Pro
Hello, I want to understand the current state of developing for Apple Vision Pro. I want to stream video from a remote server in real time; because it is a live stream, I can't download it ahead of time. My plan is to serve both a low-quality stream and a high-resolution stream, where the server only sends the high-resolution "box" around where the user is looking. Is there any API to track where the user is looking within the experience? Thanks.
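A hedged note: as far as I know, visionOS does not expose raw eye-gaze data to third-party apps for privacy reasons, so a common approximation is to use the device (head) pose from ARKit to decide which region to request at high resolution. A minimal sketch under that assumption (names are placeholders, not a confirmed approach):

    import ARKit
    import QuartzCore

    // Sketch only: uses the headset pose as a coarse proxy for the "box"
    // the server should send in high resolution.
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    func startHeadTracking() async throws {
        try await session.run([worldTracking])
    }

    func currentHeadTransform() -> simd_float4x4? {
        // Returns nil until tracking has produced a device anchor.
        guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
            return nil
        }
        return device.originFromAnchorTransform
    }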
Replies: 0 · Boosts: 0 · Views: 3 · Activity: 31m
RoomPlan exceeded scene size limit error (RoomCaptureSession.CaptureError.exceedSceneSizeLimit)
Error: RoomCaptureSession.CaptureError.exceedSceneSizeLimit. The Apple documentation describes it as an error that indicates the scene size has grown past the framework's limitations. Issue: this error pops up on my iPhone 14 Pro (128 GB) after a few RoomPlan scans are done, even when the room is small. It occurs immediately after I start the RoomCaptureSession following relocalisation of the previous AR session (world-tracking configuration). I am having trouble understanding exactly why this error appears and how to debug or solve it. Does anyone have any idea how to approach this issue?
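A hedged workaround sketch, not a confirmed fix: since the error appears right after restarting capture on top of a relocalised AR session, one thing worth trying is creating a brand-new RoomCaptureSession for each scan instead of re-running capture on an existing one. The helper names below are placeholders:

    import RoomPlan

    // Keep a reference so the session isn't deallocated mid-scan.
    var captureSession: RoomCaptureSession?

    func beginFreshScan(delegate: any RoomCaptureSessionDelegate) {
        // Stop and discard any previous session before starting a new one.
        captureSession?.stop()
        let session = RoomCaptureSession()
        session.delegate = delegate
        session.run(configuration: RoomCaptureSession.Configuration())
        captureSession = session
    }

    func endScan() {
        captureSession?.stop()
        captureSession = nil
    }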
Replies: 2 · Boosts: 1 · Views: 858 · Activity: 2h
PSVR2 controller button quirks
I have an open Feedback conversation with Apple on this topic, but I am curious whether others have run into this, or want to try my sample code in their setup. There are two APIs for reading controller buttons, axes, and D-pads: GCPhysicalInputProfile and GCControllerLiveInput, and there are inconsistencies in behaviour between the two. Apple recommends using GCControllerLiveInput; however, some capabilities of these controllers are only accessible through GCPhysicalInputProfile, as discussed below.

The PSVR2 R2/L2 buttons (the triggers) have analogue force values. These can only be accessed through GCPhysicalInputProfile.

PSVR2 thumbstick direction values are read through "axes" on GCPhysicalInputProfile, but only through "dpads" on GCControllerLiveInput.

On both GCPhysicalInputProfile and GCControllerLiveInput, pressed events for all buttons fire properly using the generic aliases (Trigger, Grip, Menu, Right Thumbstick, Left Thumbstick, Right Button A & B (Circle & Cross), Left Button A & B (Triangle & Square)). Apple reserves the system button as the equivalent of a home button for the OS.

On GCPhysicalInputProfile, touch events fire when the button is also pressed, but not for touches alone.

On GCControllerLiveInput, touch events only work for the following buttons: Left Thumbstick, Right Thumbstick, Right Button A (Circle), and Right Button B (Cross). But the Right Button B touch event isn't labelled correctly; it fires as the Right Button A event.

I observed this inside ALVR, which uses a polling-based approach to event processing: https://github.com/alvr-org/alvr-visionos/blob/17b5968f9d894944b53e97134b39dfce0993302a/ALVRClient/WorldTracker.swift#L301

To see this in a very simple app, I used Apple's TrackingAccessories example application: https://developer.apple.com/documentation/ARKit/tracking-accessories-in-volumetric-windows

I've attached the code that replaces the AccessoryTrackingModel class. I added code that prints out what is touched/pressed; see the trackAllConnectedSpatialControllers method: https://github.com/svrc/TrackingAccessories
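For reference, a minimal hedged sketch of polling button state through GCPhysicalInputProfile on a connected controller; whether force values are exposed for the PSVR2 triggers is as described in the post, not something this sketch can confirm:

    import GameController

    // Poll every button on the controller's physical input profile and
    // print its analogue value, pressed state, and touched state.
    func pollButtons() {
        for controller in GCController.controllers() {
            let profile = controller.physicalInputProfile
            for (name, button) in profile.buttons {
                print("\(name): value=\(button.value) pressed=\(button.isPressed) touched=\(button.isTouched)")
            }
            // Thumbstick directions are also exposed as dpads here.
            if let leftStick = profile.dpads[GCInputLeftThumbstick] {
                print("Left stick x=\(leftStick.xAxis.value) y=\(leftStick.yAxis.value)")
            }
        }
    }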
Replies: 6 · Boosts: 0 · Views: 630 · Activity: 7h
The AccessoryAnchor transform does not match any of the Accessory.LocationName options.
I am using AccessoryTrackingProvider from ARKit to get the transform of the PSVR2 controller via originFromAnchorTransform of the AccessoryAnchor. I am also trying to use an AnchorEntity on the controller using RealityKit. However, none of the three options for Accessory.LocationName, which should be used to define the AnchorEntity target, seems to match the position on the controller that is being sent from ARKit. The attached picture shows two transforms: RealityKit, using .gripSurface to define the AnchoringComponent.Target.accessory location, and ARKit, using originFromAnchorTransform from AccessoryTrackingProvider. They are not aligned at the same point. As for the other options of Accessory.LocationName, .aim is located at the tip of the controller, and .grip is at the same position as .gripSurface but with a different orientation. I am wondering why there is no option for Accessory.LocationName that actually matches the transform captured by ARKit.
Replies: 2 · Boosts: 0 · Views: 487 · Activity: 12h
Pinning a pushed window to a wall breaks pushWindow for all other apps on the system
I posted https://developer.apple.com/forums/thread/809481 yesterday about an issue I discovered with pushWindow in the visionOS 26.2 RC, but today I discovered a second problem with pushWindow. If window A calls pushWindow to present window B, and the user pins window B to a wall, the following unexpected behaviors are observed:

Window B spontaneously disappears.

If the user re-launches the (still running) app from the visionOS home view, both window A and window B appear simultaneously. I assume only window B should be visible at this point, since window A pushed window B.

If the user closes window B, it's now impossible to present window B again. Calls to pushWindow appear to be ignored.

If the user force-quits the app and relaunches it, and pushWindow is called again, window B appears, but window A remains visible.

I also noticed this surprising behavior: this broken state of pushWindow now affects all other apps on the system that may call pushWindow in the future, not just the app whose pushed window was pinned above. A workaround is to reboot the device, after which the system behaves as expected until the next time the user pins a pushed window.
Replies: 2 · Boosts: 0 · Views: 262 · Activity: 23h
In visionOS 26.2 RC, pushWindow + dismissWindow is broken
I recently added pushWindow to my app, and I discovered that in visionOS 26.2 RC (23N301), pushWindow followed by dismissWindow no longer works as expected. Specifically, if the user moves the pushed window, then when the pushed window is later dismissed, the parent window's position isn't aligned with the pushed window's new position. Its original position is restored instead. Curiously, the bug only happens when an app is launched from the visionOS home view, and not when an app is launched from Xcode. It also doesn't happen in the visionOS 26.2 simulator. Another interesting detail is that while the parent window is hidden, if the user long-presses the Digital Crown and then dismisses the pushed window, the parent window's position seems to be immune from the Digital Crown scene reorientation. It's restored to its original real world position. Demo video: https://youtu.be/zR3t2ON3Wz0 I've submitted feedback as FB21287011 with a sample app and detailed repro steps. Has anyone else encountered this issue already and figured out a workaround? It would be nice if I could get pushWindow to work correctly in my app. Thanks everybody! 😀
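For context, a minimal hedged sketch of the pushWindow/dismissWindow call pattern being described; the window IDs and view names are placeholders:

    import SwiftUI

    // Parent window pushes a child window; dismissing the child should
    // restore the parent at its expected position.
    struct ParentView: View {
        @Environment(\.pushWindow) private var pushWindow

        var body: some View {
            Button("Show child") {
                pushWindow(id: "child")
            }
        }
    }

    struct ChildView: View {
        @Environment(\.dismissWindow) private var dismissWindow

        var body: some View {
            Button("Done") {
                dismissWindow(id: "child")
            }
        }
    }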
Replies: 2 · Boosts: 2 · Views: 265 · Activity: 23h
How to fix "Sample 0 missing LiDAR point cloud!" error?
I'm trying to run a PhotogrammetrySession based on photos taken in an AVCaptureSession and stored as .heic files. When I load the files, I always see the error "Sample 0 missing LiDAR point cloud!" for each individual sample. Debugging shows that sample.depthDataMap is populated, and the .heic contains depth data that can be extracted using e.g. heif-convert on my Mac. Comparing my .heic to one from an ObjectCaptureSession, which doesn't show the LiDAR warning, I noticed the only difference being the HEIC information shown here:

So my questions are: Is this the information missing from my manual capture that causes the warning? Can I somehow add this information in an AVCaptureSession? Does this information lead to better photogrammetry results?
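A hedged sketch of enabling embedded depth data in an AVCapture photo pipeline, which may be related to (but is not confirmed to resolve) the missing point-cloud warning; it assumes a session configured with a depth-capable camera:

    import AVFoundation

    // Request depth data delivery on the photo output and embed it in
    // the captured HEIC.
    func capturePhoto(with output: AVCapturePhotoOutput, delegate: any AVCapturePhotoCaptureDelegate) {
        output.isDepthDataDeliveryEnabled = output.isDepthDataDeliverySupported

        let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
        settings.isDepthDataDeliveryEnabled = output.isDepthDataDeliveryEnabled
        settings.embedsDepthDataInPhoto = true

        output.capturePhoto(with: settings, delegate: delegate)
    }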
Replies: 1 · Boosts: 0 · Views: 80 · Activity: 23h
visionOS – Starting GroupActivity FaceTime Call dismisses Immersive Space
Hello, I am in the process of implementing SharePlay support in my visionOS app. Everything runs fine when I test locally, but when my app is distributed via TestFlight, calling try await activity.activate() shows the SharePlay dialog as usual, but then when I start a new FaceTime call, my ImmersiveSpace gets dismissed. This is only happening when the app is distributed via TestFlight, when I run it locally the ImmersiveSpace stays active as expected. Looking at the console on my Mac I found this log: Invalid initial client settings class: UIApplicationSceneClientSettings; expected class: MRUISharedApplicationSceneClientSettings; bundle ID: com.apple.facetime; scene ID: com.apple.facetime:SFBSystemService-DDA8C751-C0C4-487E-AD85-59EF4E6C6050 Does anyone have an idea how I can fix this? It's driving me nuts and I wasted over a day looking for a workaround but so far been unsuccessful. Thanks!
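For reference, a minimal hedged sketch of the activation flow involved; the activity type, title, and function names are placeholders:

    import GroupActivities

    // Placeholder activity used only to illustrate the activation call
    // that precedes the FaceTime/SharePlay hand-off described above.
    struct ImmersiveTourActivity: GroupActivity {
        var metadata: GroupActivityMetadata {
            var meta = GroupActivityMetadata()
            meta.title = "Immersive Tour"
            meta.type = .generic
            return meta
        }
    }

    func startSharing() async {
        let activity = ImmersiveTourActivity()
        switch await activity.prepareForActivation() {
        case .activationPreferred:
            _ = try? await activity.activate()
        case .activationDisabled, .cancelled:
            break
        @unknown default:
            break
        }
    }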
Replies: 7 · Boosts: 1 · Views: 942 · Activity: 1d
Nearby Sharing a Volume won't work
Hi, we've developed an app for Vision Pro that utilises the GroupActivities SDK to provide shared experiences for our users. Remote participation works great, but we can't get nearby sharing to work. The behaviour we're observing:

User 1 opens the share sheet from the Volume; the second Vision Pro is visible.

User 1 starts nearby sharing.

Session initialisation runs for approx. 30 seconds, then fails.

Sometimes the nearby participant doesn't show up at all after the initialisation has failed once.

As stated in the "Configure your visionOS app for sharing with people nearby" article, we didn't make any changes to our implementation to support nearby sharing. Any help would be greatly appreciated. Kind regards, David
Replies: 2 · Boosts: 0 · Views: 227 · Activity: 1d
RealityKit / visionOS – Memory not released after dismissing ImmersiveSpace with USDZ models
Hi everyone, I'm encountering a memory overflow issue in my visionOS app and I'd like to confirm if this is expected behavior or if I'm missing something in cleanup.

App context: The app showcases apartments in real scale using AR. Apartments are heavy USDZ models (hundreds of thousands of triangles, high-resolution textures). Users can walk inside the apartments, and performance is good even close to hardware limits.

Flow: The app starts in a full immersive space (RealityView) for selecting the apartment. When an apartment is selected, a new ImmersiveSpace opens and the apartment scene loads. The scene includes multiple USDZ models, EnvironmentResources, and dynamic textures for skyboxes. When the user dismisses the experience, we attempt cleanup: nulling out all entity references, removing ModelComponents, clearing cached textures and skyboxes, and forcing dictionaries/collections to empty. Despite this cleanup, memory usage remains very high.

Problem: After dismissing the ImmersiveSpace, memory does not return to baseline. Check the attached screenshot of the profiling made using Instruments. Initial state: ~30 MB (main menu). After loading models sequentially: ~3.3 GB. Skybox textures bring it near ~4 GB. After dismissing the experience (at the ~01:00 mark): memory only drops slightly (to ~2.66 GB). When loading the second apartment, memory continues to increase until ~5 GB, at which point the app crashes due to memory pressure. The issue is consistently visible under VM: IOSurface in Instruments. No leaks are detected. So it looks like RealityKit (or lower-level frameworks) keeps caching meshes and textures and does not free them when the RealityView is ended. But for my use case, these resources should be fully released once the ImmersiveSpace is dismissed, since new apartments will load entirely different models and textures.

Cleanup code example. Here's a simplified version of the cleanup I'm doing:

    func clearAllRoomEntities() {
        for (entityName, entity) in entityFromMarker {
            entity.removeFromParent()
            if let modelEntity = entity as? ModelEntity {
                modelEntity.components.removeAll()
                modelEntity.children.forEach { $0.removeFromParent() }
                modelEntity.clearTexturesAndMaterials()
            }
            entityFromMarker[entityName] = nil
            removeSkyboxPortals(from: entityName)
        }
        entityFromMarker.removeAll()
    }

    extension ModelEntity {
        func clearTexturesAndMaterials() {
            guard var modelComponent = self.model else { return }
            for index in modelComponent.materials.indices {
                removeTextures(from: &modelComponent.materials[index])
            }
            modelComponent.materials.removeAll()
            self.model = modelComponent
            self.model = nil
        }

        private func removeTextures(from material: inout any Material) {
            if var pbr = material as? PhysicallyBasedMaterial {
                pbr.baseColor.texture = nil
                pbr.emissiveColor.texture = nil
                pbr.metallic.texture = nil
                pbr.roughness.texture = nil
                pbr.normal.texture = nil
                pbr.ambientOcclusion.texture = nil
                pbr.clearcoat.texture = nil
                material = pbr
            } else if var simple = material as? SimpleMaterial {
                simple.color.texture = nil
                material = simple
            }
        }
    }

Questions: Is this expected RealityKit behavior (textures/meshes cached internally)? Is there a way to force RealityKit to release GPU resources tied to USDZ models when they're no longer used? Should dismissing the ImmersiveSpace automatically free those IOSurfaces, or do I need to handle this differently? Any guidance, best practices, or confirmation would be hugely appreciated. Thanks in advance!
Replies: 8 · Boosts: 0 · Views: 1.9k · Activity: 4d
ARKit Body Tracking not detecting ARBodyAnchor on iOS 26.x (FB15128723)
Since updating to iOS 26.0 (and confirmed on 26.1), ARBodyTrackingConfiguration no longer detects a valid ARBodyAnchor on devices with LiDAR (e.g., iPhone 15 Pro, iPhone 17 Pro Max). This issue reproduces in custom projects and in Apple's official sample "Capturing Body Motion in 3D". The AR session runs normally, but the delegate call

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor])

never yields an ARBodyAnchor with valid joint transforms. All joints return nil when calling

    body.skeleton.modelTransform(for: jointName)

resulting in 0 valid joints per frame.

Environment
• Device: iPhone 17 Pro Max (LiDAR)
• iOS: 26.0 / 26.1
• Xcode: 16.0 (stable)
• Framework: ARKit + RealityKit
• Configuration used:

    config.worldAlignment = .gravityAndHeading
    config.isAutoFocusEnabled = true
    config.environmentTexturing = .none
    session.run(config)

Also tested with and without frameSemantics = .bodyDetection.

Expected behavior: ARBodyAnchor should be detected and body.skeleton should contain ~89 valid joints with continuous updates.
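A minimal hedged repro sketch of the setup described above (the class name is a placeholder and only the root joint is checked):

    import ARKit

    // Runs body tracking and logs whether any ARBodyAnchor delivers a
    // valid root joint transform per update.
    final class BodyTrackingChecker: NSObject, ARSessionDelegate {
        let session = ARSession()

        func start() {
            session.delegate = self
            let config = ARBodyTrackingConfiguration()
            config.worldAlignment = .gravityAndHeading
            session.run(config)
        }

        func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
            for case let body as ARBodyAnchor in anchors {
                let root = body.skeleton.modelTransform(for: .root)
                print("ARBodyAnchor update, root joint valid: \(root != nil)")
            }
        }
    }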
Replies: 4 · Boosts: 1 · Views: 937 · Activity: 4d
Build failed with error in Reality Kit Content
I have an arguably massive project and am not sure if the issue is with the assets or my approach in the code. the error says : Tool terminated due to error "SIGNAL 6:Abort trap:6" Basically I have around 15-20 assets (usda files built out of usdz files). In the code i am loading a scene with all the usda files and then have the functions to enable and disable a particular asset when needed. This was working as intended when i am using dummy assets(with less polygons, lesser textures) But when i placed the actual assets the error appears and persists. Do I have a bad approach of loading all the scenes at once? Previously i have used an approach which loads the scenes when needed and that involved some lag before rendering the assets. But my current approach(when using dummies) works like a dime rendering and hiding the assets in realtime with no lag. Kindly suggest any workarounds.
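A hedged sketch of one possible middle ground: load each asset scene up front as its own entity (so toggling stays lag-free) rather than as one combined scene. The asset names and the realityKitContentBundle from the RealityKitContent package are assumptions:

    import RealityKit
    import RealityKitContent

    func preloadAssets(named names: [String]) async throws -> [String: Entity] {
        var loaded: [String: Entity] = [:]
        for name in names {
            // Each .usda scene is loaded as its own entity so it can be
            // enabled/disabled (or unloaded) independently.
            let entity = try await Entity(named: name, in: realityKitContentBundle)
            entity.isEnabled = false   // hidden until needed
            loaded[name] = entity
        }
        return loaded
    }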
Replies: 0 · Boosts: 0 · Views: 117 · Activity: 5d
Why VideoMaterial can't show transparency on Apple Vision Pro
https://developer.apple.com/documentation/realitykit/videomaterial The documentation says: "Video materials support transparency if the source video's file format also supports transparency." I have a transparent video (Hand.mov, HEVC with alpha). I can show the video with a transparent background correctly in the Vision Pro Simulator, but on the physical device the video has a black background. I'm sure the video format is fine because I can get the texture from the video and display it on an UnlitMaterial. How can I show the transparent video correctly with RealityKit/VideoMaterial?
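For reference, a minimal hedged sketch of the VideoMaterial setup in question; the plane size is a placeholder and the video URL is assumed to point at the HEVC-with-alpha file mentioned above:

    import AVFoundation
    import RealityKit

    // Attach a video to a plane via VideoMaterial and start playback.
    func makeVideoPlane(from url: URL) -> ModelEntity {
        let player = AVPlayer(url: url)
        let material = VideoMaterial(avPlayer: player)
        let plane = ModelEntity(
            mesh: .generatePlane(width: 0.4, height: 0.3),
            materials: [material]
        )
        player.play()
        return plane
    }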
Replies: 2 · Boosts: 0 · Views: 189 · Activity: 5d
Object tracking capability not available
Hi there, I received an enterprise license file to include the enhanced object tracking configuration for the Vision Pro. My account is part of the team that was granted this capability by Apple. Unfortunately, although I followed the guide, I cannot find the Object Tracking capability when I try to add it to my project. Other capabilities, like Main Camera on the Vision Pro, are there, but not Object Tracking. I am using Xcode 26.1 and visionOS 26.1. What am I missing here? Thanks in advance, Matthias
Replies: 1 · Boosts: 0 · Views: 217 · Activity: 5d
visualBounds ignores TextComponents set for Entity. Workarounds?
After adding TextComponents to my Entities on visionOS, I have observed that visualBounds ignores the TextComponents. The documentation states that the component should render a rounded rectangle mesh. These meshes are visible on the device, but not visible in the debugger ("Capture Entity Hierarchy"), and they are ignored by visualBounds. Am I missing something?

    static func makeDirection(_ direction: Direction) -> Entity {
        let text = Entity()
        text.name = direction.rawValue
        text.setScale(SIMD3(repeating: 5), relativeTo: nil)
        text.transform.rotation = direction.rotation
        text.components.set(direction.textComponent)
        return text
    }

My workaround is to add a disabled ModelEntity and take its bounds 😬
Replies: 1 · Boosts: 0 · Views: 151 · Activity: 6d
ImmersiveSpace orphaned when WindowGroup closes
Environment: visionOS 26.1, Xcode 26.1.1

Problem: When a WindowGroup opens an ImmersiveSpace and the user closes the window via the X button, the async Task in .onDisappear gets cancelled before dismissImmersiveSpace() completes, leaving the ImmersiveSpace active with no way to exit.

Steps:
WindowGroup opens ImmersiveSpace in .onAppear
User clicks X to close the window
.onDisappear fires but the async cleanup is cancelled
ImmersiveSpace remains active, user trapped

Expected: ImmersiveSpace dismissed when the window closes
Actual: ImmersiveSpace remains active

Code:

    .onAppear {
        Task { await openImmersiveSpace(id: "VideoCallMainCamera") }
    }
    .onDisappear {
        Task { await dismissImmersiveSpace() } // Gets cancelled
    }

What I've tried:
Task in .onDisappear ❌
scenePhase monitoring ❌
High priority Task ❌
.restorationBehavior(.disabled) + .defaultLaunchBehavior(.suppressed) ✅ (prevents restoration but doesn't fix immediate cleanup)

Question: What's the recommended pattern for ensuring ImmersiveSpace cleanup when a WindowGroup closes? Is there a way to block window closure until async cleanup completes, or should ImmersiveSpaces automatically dismiss with their parent window?
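A hedged, self-contained sketch of the scenario above; the app and view names are placeholders and the immersive content is omitted:

    import SwiftUI

    // Window opens the immersive space on appear and tries to dismiss it
    // on disappear; the dismissal Task is reportedly cancelled when the
    // window is closed with the X button.
    @main
    struct OrphanedSpaceRepro: App {
        var body: some Scene {
            WindowGroup {
                MainWindowView()
            }
            ImmersiveSpace(id: "VideoCallMainCamera") {
                // Immersive content goes here.
            }
        }
    }

    struct MainWindowView: View {
        @Environment(\.openImmersiveSpace) private var openImmersiveSpace
        @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

        var body: some View {
            Text("Main window")
                .onAppear {
                    Task { await openImmersiveSpace(id: "VideoCallMainCamera") }
                }
                .onDisappear {
                    Task { await dismissImmersiveSpace() } // cancelled per the report
                }
        }
    }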
Replies: 1 · Boosts: 0 · Views: 140 · Activity: 6d
When running Unity applications in the shared space of visionOS, the Mask/RectMask2D of ScrollView does not function properly
Problem description: I am developing an application that runs in the Shared Space on Apple Vision Pro using Unity. When using the UI ScrollView component, I found that the Mask / RectMask2D does not function in the Shared Space. Scrolling content is not masked or cropped; it extends beyond the view boundary and is displayed directly. The same UI works correctly on platforms such as the Unity Editor, iOS, and macOS; the issue only occurs in the shared space on Vision Pro.

Reproduction steps:
Create a ScrollView in Unity.
Add a Mask or RectMask2D to the viewport.
Deploy the application to Apple Vision Pro and run it in Shared Space mode.
Scrolling content will not be clipped by the mask, and the masked area is entirely ineffective.

Expected behavior: The content of the ScrollView should be properly clipped by the Mask / RectMask2D and should not render outside the mask boundary.

Actual results: In the shared space on Vision Pro, the mask is ineffective, causing scrolling content to extend beyond the designated area and resulting in severe UI distortion.

Environment information:
Device: Apple Vision Pro
Mode: Shared Space
Unity version: 6000.0.40f1
visionOS version: visionOS 26.0
Unity PolySpatial version: 2.0.4

Impact: This issue causes Unity UI to fail to display correctly on Vision Pro, preventing ScrollView from properly clipping content, which impacts the UI experience and interaction in practical applications.

Expected result: When running a Unity app in the shared space of visionOS, the Mask / RectMask2D of ScrollView functions correctly.
Replies: 0 · Boosts: 0 · Views: 69 · Activity: 6d
Is it possible to raycast with PSVR2 controllers when developing in Unity for the Apple Vision Pro?
Hi all, I am currently developing a game in Unity for VisionOS and I'd prefer to use the PSVR2 controllers as a source of the raycast for menu selection instead of the default VisionOS gaze for my specific use case. Is there a way to access the IMU of PSVR2 controllers to do this instead of just using eyegaze + controller click for selection? Is there a specific configuration for GCController from within Unity maybe? Thank you!
Replies: 1 · Boosts: 0 · Views: 500 · Activity: 6d
Recurring World Tracking / Scene Exceeded Limit Error
Hi, We've been successfully using the RoomPlan API in our application for over two years. Recently, however, users have reported encountering persistent capture errors during their sessions. Specifically, the errors observed are CaptureError.worldTrackingFailure and CaptureError.exceedSceneSizeLimit.

What we have observed:
Persistent errors: the errors continue to occur even after initiating new capture sessions.
Normal usage: our implementation adheres to typical usage patterns of the RoomPlan API without exceeding any documented room size limits.
Limited feature usage: we are not utilizing the WorldTracking feature for the StructureBuilder functionality to stitch rooms together.
Potential state caching: given that these errors persist across sessions, we suspect that memory or state cached between sessions is not being cleared, particularly since we are not taking advantage of StructureBuilder.

Request: Could you please advise if there is any internal caching or memory retention between capture sessions that might lead to these errors? Additionally, we would appreciate guidance on how to clear or manage this state when the StructureBuilder feature is not in use.

Here is a generalised version of our capture session initialization code to help diagnose the issue.

    struct RoomARCaptureView: UIViewRepresentable {
        typealias Handler = (CapturedRoom, Error?) -> Void

        @Binding var stop: Bool
        @Binding var done: Bool
        let completion: Handler?

        func makeUIView(context: Self.Context) -> RoomCaptureView {
            let view = RoomCaptureView(frame: .zero)
            view.delegate = context.coordinator
            view.captureSession.run(configuration: .init())
            return view
        }

        func updateUIView(_ uiView: RoomCaptureView, context: Self.Context) {
            if stop {
                // Stop the session only once; stopping multiple times causes issues with the final presentation
                uiView.captureSession.stop()
                stop = false
                done = true
            }
        }

        static func dismantleUIView(_ uiView: RoomCaptureView, coordinator: Self.Coordinator) {
            uiView.captureSession.stop()
        }

        func makeCoordinator() -> ARViewCoordinator {
            ARViewCoordinator(completion)
        }

        @objc(ARViewCoordinator)
        class ARViewCoordinator: NSObject, RoomCaptureViewDelegate {
            var completion: Handler?

            public required init?(coder: NSCoder) {}
            public func encode(with coder: NSCoder) {}

            public init(_ completion: Handler?) {
                super.init()
                self.completion = completion
            }

            public func captureView(shouldPresent roomDataForProcessing: CapturedRoomData, error: (Error)?) -> Bool {
                return true
            }

            public func captureView(didPresent processedResult: CapturedRoom, error: (Error)?) {
                completion?(processedResult, error)
            }
        }
    }

Thank you for your assistance.
Replies: 6 · Boosts: 3 · Views: 670 · Activity: 1w