Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Cannot reassign worldTracking / planeDetection providers in my PlacementManager when switching environments
Environment: Xcode 16.2, visionOS SDK 2.4, Swift 6.1. Target: Apple Vision Pro (immersive space). Frameworks: ARKit, RealityKit, SwiftUI.

What I'm trying to do: I have a view-model class PlacementManager that holds two ARKit providers:

```swift
private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider
```

I want to dynamically replace these providers in a setEnvironment(_:) method (so I can save/clear a JSON scene and restart ARKit).

What's happening: if I declare them as constants,

```swift
private let worldTracking = WorldTrackingProvider()
private let planeDetection = PlaneDetectionProvider()
```

I get compile errors when I later do:

```swift
self.worldTracking = newWorldTracking // Cannot assign to property: 'worldTracking' is a 'let' constant
```

If I change them to uninitialized vars,

```swift
private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider
```

then in my init() I get:

```
self used in property access 'worldTracking' before all stored properties are initialized
```

Code snippet:

```swift
@Observable
final class PlacementManager: ObservableObject {
    private var worldTracking: WorldTrackingProvider
    private var planeDetection: PlaneDetectionProvider
    // … other props …

    @MainActor
    init() {
        // error: self.worldTracking used before init…
        planeAnchorHandler = PlaneAnchorHandler(rootEntity: root)
        persistenceManager = PersistenceManager(
            worldTracking: worldTracking,
            rootEntity: root
        )
        // …
    }

    @MainActor
    func setEnvironment(env: Environnement) async {
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider()
        try await appState!.arkitSession.run(
            [newWorldTracking, newPlaneDetection]
        )
        self.worldTracking = newWorldTracking
        self.planeDetection = newPlaneDetection
        // …
    }
}
```

What I've tried: giving them default values at declaration (= WorldTrackingProvider()), initializing them at the top of init() before any use, and passing the new providers into arkitSession.run(...).

My question: what is the recommended Swift-style pattern to declare and reassign these ARKit provider properties so that they're fully initialized before use in init(), and I can swap them out later in setEnvironment(...) without compiler errors? Any pointers (or links to forum threads / docs) would be greatly appreciated!
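One pattern that satisfies both constraints, sketched below with placeholder names (the arkitSession property and the resetProviders method are not from the original code): declare the providers as vars with default values so they are initialized before init runs, create fresh instances when the environment changes, run them on the session, and only then swap the stored references. (Side note: with the @Observable macro, the separate ObservableObject conformance is usually unnecessary.)

```swift
import ARKit
import Observation

// A minimal sketch, not the original project's code: `var` properties with default
// values are fully initialized before `init` runs, yet can still be reassigned later.
@Observable
@MainActor
final class PlacementManager {
    private var worldTracking = WorldTrackingProvider()
    private var planeDetection = PlaneDetectionProvider()
    private let arkitSession = ARKitSession()   // assumption: owned here rather than by appState

    func resetProviders() async throws {
        // Providers that have already run generally can't be restarted, so build
        // fresh instances, start them on the session, then swap the references.
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider()
        try await arkitSession.run([newWorldTracking, newPlaneDetection])
        worldTracking = newWorldTracking
        planeDetection = newPlaneDetection
    }
}
```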
0 replies · 0 boosts · 129 views · May ’25

Hand Tracking Latency When UITextView Becomes Active in Vision Pro Immersive Space
I'm placing a sphere at the fingertip and updating its position as the hand moves. Finger-joint tracking works correctly, but I've observed noticeable latency in hand-tracking updates whenever a UITextView becomes active. This lag happens intermittently during app usage, lasting about 5–10 seconds, after which the latency disappears and the sphere starts following the finger joints immediately. When I open the immersive space for the first time, the profiler shows a large performance spike of up to 328%. After that, it stabilizes and runs smoothly. Note: I don't observe any lag when CPU usage spikes to 300% (on immersive-view load), yet the lag still occurs even when CPU usage remains below 100%.

I'm using the following code for hand tracking:

```swift
private func processHandTrackingUpdates() async {
    for await update in handTracking.anchorUpdates {
        let handAnchor = update.anchor
        if handAnchor.isTracked {
            switch handAnchor.chirality {
            case .left:
                leftHandAnchor = handAnchor
                updateHandJoints(for: handAnchor, with: leftHandJointEntities)
            case .right:
                rightHandAnchor = handAnchor
                updateHandJoints(for: handAnchor, with: rightHandJointEntities)
            }
        } else {
            switch handAnchor.chirality {
            case .left:
                leftHandAnchor = nil
                hideAllJoints(in: leftHandJointEntities)
            case .right:
                rightHandAnchor = nil
                hideAllJoints(in: rightHandJointEntities)
            }
        }
        await MainActor.run {
            handTrackingData.processNewHandAnchors(
                leftHand: self.leftHandAnchor,
                rightHand: self.rightHandAnchor
            )
        }
    }
}
```

And here's the function I'm using to update the joint positions:

```swift
private func updateHandJoints(
    for handAnchor: HandAnchor,
    with jointEntities: [HandSkeleton.JointName: Entity]
) {
    guard handAnchor.isTracked else {
        hideAllJoints(in: jointEntities)
        return
    }

    // Check if the little finger tip and intermediate base are both tracked.
    if let tipJoint = handAnchor.handSkeleton?.joint(.littleFingerTip),
       let intermediateBaseJoint = handAnchor.handSkeleton?.joint(.littleFingerIntermediateTip),
       tipJoint.isTracked, intermediateBaseJoint.isTracked,
       let pinkySphere = jointEntities[.littleFingerTip] {

        // Convert joint transforms to world space.
        let tipTransform = handAnchor.originFromAnchorTransform * tipJoint.anchorFromJointTransform
        let intermediateBaseTransform = handAnchor.originFromAnchorTransform * intermediateBaseJoint.anchorFromJointTransform

        // Extract positions from the transforms.
        let tipPosition = SIMD3<Float>(tipTransform.columns.3.x, tipTransform.columns.3.y, tipTransform.columns.3.z)
        let intermediateBasePosition = SIMD3<Float>(intermediateBaseTransform.columns.3.x, intermediateBaseTransform.columns.3.y, intermediateBaseTransform.columns.3.z)

        // Calculate the midpoint.
        let midpointPosition = (tipPosition + intermediateBasePosition) / 2.0

        // Position the sphere at the midpoint and make it visible.
        pinkySphere.isEnabled = true
        pinkySphere.transform.translation = midpointPosition
    } else {
        // If either joint is not tracked, hide the sphere.
        jointEntities[.littleFingerTip]?.isEnabled = false
    }

    // Update the positions of all other hand joint spheres.
    for (jointName, entity) in jointEntities {
        if jointName == .littleFingerTip {
            // Already handled the pinky above.
            continue
        }
        guard let joint = handAnchor.handSkeleton?.joint(jointName),
              joint.isTracked else {
            entity.isEnabled = false
            continue
        }
        entity.isEnabled = true
        let jointTransform = handAnchor.originFromAnchorTransform * joint.anchorFromJointTransform
        entity.transform.translation = SIMD3<Float>(jointTransform.columns.3.x, jointTransform.columns.3.y, jointTransform.columns.3.z)
    }
}
```

I've attached both a profiler trace and a video recording from Vision Pro that clearly demonstrate the issue.
Profiler: https://drive.google.com/file/d/1fDWyGj_fgxud2ngkGH_IVmuH_kO-z0XZ
Vision Pro recordings:
https://drive.google.com/file/d/17qo3U9ivwYBsbaSm26fjaOokkJApbkz-
https://drive.google.com/file/d/1LxTxgudMvWDhOqKVuhc3QaHfY_1x8iA0

Has anyone else experienced this behavior? My thought is that there might be some background calculations happening at the OS level causing this latency. Any guidance would be greatly appreciated. Thanks!
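Not a diagnosis, but one mitigation worth profiling: decouple the sphere updates from the anchorUpdates stream by polling the provider's latestAnchors once per frame (for example, from a RealityKit System or a scene-update subscription), so a momentarily starved async stream cannot delay rendering. A rough sketch, assuming handTracking is an already-running HandTrackingProvider and jointSphere is the pinky-tip entity:

```swift
import ARKit
import RealityKit

// Sketch only: call this once per frame instead of (or in addition to) reacting
// to `anchorUpdates`, so the sphere always uses the most recent anchor available.
func updatePinkySphere(handTracking: HandTrackingProvider, jointSphere: Entity) {
    let (leftHand, _) = handTracking.latestAnchors
    guard let hand = leftHand,
          hand.isTracked,
          let tip = hand.handSkeleton?.joint(.littleFingerTip),
          tip.isTracked else {
        jointSphere.isEnabled = false
        return
    }
    // Convert the joint transform to world space and extract the translation.
    let world = hand.originFromAnchorTransform * tip.anchorFromJointTransform
    jointSphere.isEnabled = true
    jointSphere.transform.translation = SIMD3<Float>(world.columns.3.x,
                                                     world.columns.3.y,
                                                     world.columns.3.z)
}
```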
0 replies · 0 boosts · 396 views · Sep ’25

visionOS widget dimensions?
Is there any size guidance for the new WidgetKit integration on visionOS? The Widget HIG provides dimensions for all the widget size classes on iOS, iPadOS, and watchOS, but has not been updated for visionOS: https://developer.apple.com/design/human-interface-guidelines/widgets My potential widget use case is image-based, so I'm looking to better understand the optimal size, resolution, etc. I would need, particularly for the new visionOS-specific extra-large widget size.
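Not official guidance, but until the HIG gains visionOS numbers, one defensive setup is to opt into the larger families and let the image fill whatever container the system provides rather than baking in pixel sizes. A minimal sketch with placeholder names (PhotoEntry, PhotoProvider, and the asset name are assumptions):

```swift
import WidgetKit
import SwiftUI

// Placeholder timeline types for the sketch.
struct PhotoEntry: TimelineEntry {
    let date: Date
    let imageName: String
}

struct PhotoProvider: TimelineProvider {
    func placeholder(in context: Context) -> PhotoEntry {
        PhotoEntry(date: .now, imageName: "sample")
    }
    func getSnapshot(in context: Context, completion: @escaping (PhotoEntry) -> Void) {
        completion(PhotoEntry(date: .now, imageName: "sample"))
    }
    func getTimeline(in context: Context, completion: @escaping (Timeline<PhotoEntry>) -> Void) {
        completion(Timeline(entries: [PhotoEntry(date: .now, imageName: "sample")], policy: .never))
    }
}

struct PhotoWidget: Widget {
    var body: some WidgetConfiguration {
        StaticConfiguration(kind: "PhotoWidget", provider: PhotoProvider()) { entry in
            // Let the image scale to fill whatever size the system gives the widget.
            Image(entry.imageName)
                .resizable()
                .scaledToFill()
                .containerBackground(.fill.tertiary, for: .widget)
        }
        .supportedFamilies([.systemSmall, .systemMedium, .systemLarge, .systemExtraLarge])
    }
}
```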
0 replies · 0 boosts · 576 views · Jul ’25

Unity PolySpatial – Live handheld camera feed of graspable objects not rendering on Vision Pro
I am developing a Unity application for the Apple Vision Pro using PolySpatial and RealityKit integration. The goal is to create a graspable object (for example, a handheld cube) that includes a secondary camera. When the user grabs and moves the object, the secondary camera should render its view to a RenderTexture, which is displayed on a quad attached to the object, simulating a live camera screen.

In the Unity Editor, this setup works correctly. The RenderTexture updates in real time, and the quad displays the camera's view as expected. However, when building and running the application on the Vision Pro, the quad only displays the clear background color of the secondary camera. No scene content appears. The graspable interaction itself works fine: the object can be grabbed and moved as intended.

Steps I have taken:
- Created a new layer (CameraFeed) and assigned the relevant objects to it.
- Set the secondary camera's culling mask to render only the CameraFeed layer.
- Assigned the RenderTexture as the camera's target texture.
- Applied the RenderTexture to an Unlit/Texture material on a quad.
- Confirmed the camera is active and correctly positioned relative to the object.

From my research, it appears that once objects are managed by RealityKit through PolySpatial (for example, made graspable), they are no longer rendered through Unity's normal camera pipeline. Only the main XR camera (managed by RealityKit) seems able to see these objects. Secondary Unity cameras cannot render RealityKit-synced content to a RenderTexture. If this is correct, it seems there is currently no way to implement a true live secondary camera feed showing graspable objects on Vision Pro using Unity PolySpatial.

My questions are:
- Is there any official way to enable multiple camera rendering of RealityKit-managed objects through PolySpatial?
- Are there known workarounds to simulate a live camera feed that still allows objects to be grabbed?
- Has anyone found alternative design patterns or methods for this kind of interaction?

Environment: Unity 6.0, PolySpatial 2.2.4, Apple Vision OS XR 2.2.4. Any insight or suggestions would be greatly appreciated. Thank you.
0 replies · 0 boosts · 118 views · Apr ’25

White gap between objects in RealityView
I want to display a huge image in a RealityView in 3D space on Vision Pro. Of course, instead of one giant file I'm using a lot of big images. To achieve this, I'm generating multiple planes exactly beside each other and putting one image on each. Although the planes are exactly beside each other, there is still a white gap between them (see the attached image). Does anybody know how to fix this issue?
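Not a confirmed fix, but gaps like this are often caused by floating-point rounding of accumulated offsets or by texture filtering at the tile edges rather than an actual hole in the geometry. A rough sketch of laying the tiles out from integer grid coordinates and letting each tile overlap its neighbour by a tiny epsilon (texture names and tile size are placeholders):

```swift
import RealityKit

// Sketch: build a wall of adjacent textured planes without visible seams.
@MainActor
func makeTiledWall(columns: Int, rows: Int, tileSize: Float) async throws -> Entity {
    let wall = Entity()
    let overlap: Float = 0.0005   // ~0.5 mm of overlap per tile to hide rounding/filtering seams

    for row in 0..<rows {
        for col in 0..<columns {
            let mesh = MeshResource.generatePlane(width: tileSize + overlap,
                                                  height: tileSize + overlap)
            var material = UnlitMaterial()
            // Placeholder asset naming scheme; substitute however the image slices are stored.
            let texture = try await TextureResource(named: "tile_\(row)_\(col)")
            material.color = .init(texture: .init(texture))
            let tile = ModelEntity(mesh: mesh, materials: [material])
            // Position from integer grid coordinates so rounding errors don't accumulate.
            tile.position = [Float(col) * tileSize, Float(row) * tileSize, 0]
            wall.addChild(tile)
        }
    }
    return wall
}
```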
0 replies · 0 boosts · 153 views · May ’25

How to Move and Rotate WindowGroup with Code in Xcode
When I enter the mixed immersive space, a model appears, but there is a WindowGroup behind the model, so I can't view it fully. When tapping to enter the mixed space, I want to move the WindowGroup to another position in code rather than moving it by hand. (Attachment: WechatIMG31.jpg, https://developer.apple.com/forums/content/attachment/0471ead0-4c74-43a7-9ecc-12e67e81cec6)
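One possibility, assuming the window-placement API introduced with visionOS 2 is available: instead of moving the window after it opens, give the WindowGroup a default placement so it opens out of the model's way. The view and scene names below are placeholders.

```swift
import SwiftUI

// Sketch only: declare where the controls window should appear by default.
@main
struct MixedSpaceApp: App {
    var body: some Scene {
        WindowGroup(id: "controls") {
            ContentView()
        }
        .defaultWindowPlacement { content, context in
            // Assumption: .utilityPanel places the window low and close to the user,
            // away from content anchored in the space.
            WindowPlacement(.utilityPanel)
        }

        ImmersiveSpace(id: "mixed") {
            ImmersiveView()
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed)
    }
}
```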
0 replies · 0 boosts · 62 views · Mar ’25

Can't establish spatial connection after visionOS update
After updating to visionOS 26.2 Beta 2 (and Beta 3), I'm unable to establish a spatial connection to Vision Pro. This was working fine before the update. To test, I've created a fresh spatialApp project from the Xcode template with zero modifications, but I'm hitting the same issue - the Vision Pro is discovered but won't connect. Am I forgetting to update the config somewhere? Any ideas what might be causing this and how to fix it? Thanks!

```
Warning: -[NSWindow makeKeyWindow] called on <NSWindow: 0xa1f811900> windowNumber=1b9 which returned NO from -[NSWindow canBecomeKeyWindow].
((processConfiguration != nil && configuration != nil) || (processConfiguration == nil && configuration == nil)) - /AppleInternal/Library/BuildRoots/4~CBS0ugAIF7BrQZjLe6r0lhPXO4GJmNDTovxYoV0/Library/Caches/com.apple.xbs/Sources/ExtensionKit/ExtensionKit/Source/HostViewController/Internal/EXHostSessionDriver.m:80: `processConfiguration` and `configuration` must be both non-nil or both nil
Unable to obtain a task name port right for pid 415: (os/kern) failure (0x5)
CCContextDeviceGroup.mm(291):+[CCContextDeviceGroup checkBinaryArchivesForDevice:withBundle:]: Failed to find any binary shader archive
```
0 replies · 0 boosts · 105 views · Nov ’25

Pass Video/Frames to a Shader Graph?
Wondering if this is even possible without using CVImageBuffer and passing each frame as an image, which I imagine will be very expensive. I have a PoC of a shader graph that applies a radial zoom effect to an image. In RealityKit I'm passing the image as a resource:

```swift
if let textureResource = try? await TextureResource(named: "fuji") {
    let value = MaterialParameters.Value.textureResource(textureResource)
    try? material.setParameter(name: "MyImage", value: value)
    model.model?.materials = [material]
}
```

Thanks in advance.
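One approach commonly used for feeding live frames to a ShaderGraphMaterial, sketched here with the API details treated as assumptions to verify: back the texture parameter with a TextureResource.DrawableQueue and blit each frame into the queue's drawable, instead of creating a new TextureResource per frame. The parameter name "MyImage" and the "fuji" placeholder asset match the snippet above.

```swift
import RealityKit
import Metal

// Sketch: point the shader graph's texture parameter at a drawable queue once…
@MainActor
func attachVideoQueue(to material: inout ShaderGraphMaterial,
                      width: Int, height: Int) async throws -> TextureResource.DrawableQueue {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: width,
        height: height,
        usage: [.shaderRead, .renderTarget],
        mipmapsMode: .none
    )
    let queue = try TextureResource.DrawableQueue(descriptor)
    let placeholder = try await TextureResource(named: "fuji")  // any starter texture
    placeholder.replace(withDrawables: queue)
    try material.setParameter(name: "MyImage", value: .textureResource(placeholder))
    return queue
}

// …then, per video frame, copy the frame's pixels into the next drawable.
// `frameTexture` is assumed to match the queue's size and pixel format.
func push(frameTexture: MTLTexture,
          to queue: TextureResource.DrawableQueue,
          using commandQueue: MTLCommandQueue) throws {
    let drawable = try queue.nextDrawable()
    guard let commandBuffer = commandQueue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return }
    blit.copy(from: frameTexture, to: drawable.texture)
    blit.endEncoding()
    commandBuffer.commit()
    drawable.present()
}
```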
0 replies · 0 boosts · 116 views · Apr ’25

Materials and shaders do not render correctly on visionOS devices when developing with Unity
Specifically: the materials display normally in the Unity editor, but after deploying to a Vision Pro device, some materials are lost or the shader effects are abnormal (such as the transparency channel failing or lighting calculations being wrong). This problem has affected our development progress, and I hope to get help from technical support.
0 replies · 0 boosts · 72 views · Apr ’25

How to implement the semi-transparent overlay effect in Immersive View?
I noticed that when I drag the menu window in an immersive view, the entities behind it become semi-transparent, and the boundary between virtual and real-world objects is very pronounced. How does visionOS implement this effect? Is there any API or technique I can use in my own code to enable the same semi-transparent overlay, even when I am not dragging the menu window?
0 replies · 0 boosts · 134 views · May ’25

Unexpected behavior when writing entities and loading realityFiles.
I have a simple visionOS app that creates an Entity, writes it to the device, and then attempts to load it. However, when the entity file gets overwritten, it affects the app's ability to load it correctly.

Here is my code for saving the entity:

```swift
import SwiftUI
import RealityKit
import UniformTypeIdentifiers

struct ContentView: View {
    var body: some View {
        VStack {
            ToggleImmersiveSpaceButton()
            Button("Save Entity") {
                Task {
                    // if let entity = await buildEntityHierarchy(from: urdfPath) {
                    let type = UTType.realityFile
                    let filename = "testing.\(type.preferredFilenameExtension ?? "bin")"
                    let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
                    let fileURL = documentsURL.appendingPathComponent(filename)
                    do {
                        let mesh = MeshResource.generateBox(size: 1, cornerRadius: 0.05)
                        let material = SimpleMaterial(color: .blue, isMetallic: true)
                        let modelComponent = ModelComponent(mesh: mesh, materials: [material])
                        let entity = Entity()
                        entity.components.set(modelComponent)
                        print("Writing \(fileURL)")
                        try await entity.write(to: fileURL)
                    } catch {
                        print("Failed writing")
                    }
                }
            }
        }
        .padding()
    }
}
```

Every time I press "Save Entity", I see a warning similar to:

```
Writing file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality
Failed to set dependencies on asset 1941054755064863441 because NetworkAssetManager does not have an asset entity for that id.
```

When I open the immersive space, I attempt to load the same file:

```swift
import SwiftUI
import RealityKit
import UniformTypeIdentifiers

struct ImmersiveView: View {
    @Environment(AppModel.self) private var appModel

    var body: some View {
        RealityView { content in
            guard let type = UTType.realityFile.preferredFilenameExtension else { return }
            let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
            let fileURL = documentsURL.appendingPathComponent("testing.\(type)")
            guard FileManager.default.fileExists(atPath: fileURL.path) else {
                print("❌ File does not exist at path: \(fileURL.path)")
                return
            }
            if let entity = try? await Entity(contentsOf: fileURL) {
                content.add(entity)
            }
        }
    }
}
```

I also get errors after I overwrite the entity (by pressing "Save Entity" after I have successfully loaded it once). The warnings that appear when the immersive space attempts to load the new entity are:

```
Asset 13277375032756336327 Mesh (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Mesh_0.compiledmesh failure: Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to open load stream for entry 'assets/Mesh_0.compiledmesh'.
Asset 8308977590385781534 Scene (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Scene_0.compiledscene failure: Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to read archive entry.
AssetLoadRequest failed because asset failed to load '13277375032756336327 Mesh (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Mesh_0.compiledmesh' (Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to open load stream for entry 'assets/Mesh_0.compiledmesh'.)
```

The order of operations to make this happen:
1. Launch the app.
2. Press "Save Entity" to save the entity.
3. "Open Immersive Space" to view the entity.
4. Press "Save Entity" to overwrite the entity.
5. "Open Immersive Space" to view the entity: the asset load request fails.

Also:
1. Launch the app (the entity should still be saved from the last time the app ran).
2. "Open Immersive Space" to view the entity.
3. Press "Save Entity" to overwrite the entity.
4. "Open Immersive Space" to view the entity: the asset load request fails.

NOTE: It appears I can get it to work slightly better by pressing the "Save Entity" button twice before attempting to view it again in the immersive space.
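Not a confirmed fix for the RealityFileAsset errors, but a defensive workaround worth trying is to never overwrite the file in place: write the new .reality file to a temporary URL, then atomically swap it into the Documents path, so the loader can't observe a half-written or stale archive. A sketch:

```swift
import Foundation

// Sketch: write via a temporary file and swap it into place atomically.
// Usage (assumption): try await atomicallyReplaceEntityFile(at: fileURL) { try await entity.write(to: $0) }
func atomicallyReplaceEntityFile(at destination: URL,
                                 writing write: (URL) async throws -> Void) async throws {
    let tempURL = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension(destination.pathExtension)
    try await write(tempURL)
    if FileManager.default.fileExists(atPath: destination.path) {
        // Atomically replace the existing file with the freshly written one.
        _ = try FileManager.default.replaceItemAt(destination, withItemAt: tempURL)
    } else {
        try FileManager.default.moveItem(at: tempURL, to: destination)
    }
}
```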
0 replies · 1 boost · 207 views · Aug ’25

Accessing LiDAR Depth Data and Scene Reconstruction on Apple Vision Pro
Hello, I'm developing a visionOS application for Apple Vision Pro that aims to scan unknown physical objects, capture their 3D data (such as meshes or point clouds), and export them as 3D models. Ideally, I'd also like to visualize these reconstructions in real time within the headset. This functionality is similar to what's available in Reality Composer on iPad and iPhone, but I'm seeking to implement it natively on Vision Pro.

I've reviewed the visionOS documentation but haven't found clear guidance on accessing LiDAR depth data or performing scene reconstruction. Specifically, I'm interested in:
1. Accessing LiDAR or depth data from Vision Pro's sensors.
2. Utilizing ARKit's scene reconstruction capabilities on visionOS.
3. Exporting captured 3D data as models (e.g., USDZ or OBJ formats).

Are there APIs or frameworks in visionOS that support these features?
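Partial answers, hedged: as far as I know, third-party visionOS apps don't get raw LiDAR or depth frames (point 1), but ARKit's SceneReconstructionProvider (point 2) delivers MeshAnchors whose geometry can be accumulated and visualized; exporting to USDZ/OBJ (point 3) has no built-in visionOS API, so the conversion step below is left as a stub. A minimal sketch:

```swift
import ARKit
import RealityKit

// Sketch: collect scene-reconstruction mesh anchors for later visualization/export.
@MainActor
final class MeshCollector {
    private let session = ARKitSession()
    private let sceneReconstruction = SceneReconstructionProvider()
    private(set) var meshAnchors: [UUID: MeshAnchor] = [:]

    func start() async throws {
        guard SceneReconstructionProvider.isSupported else { return }
        try await session.run([sceneReconstruction])
        for await update in sceneReconstruction.anchorUpdates {
            switch update.event {
            case .added, .updated:
                meshAnchors[update.anchor.id] = update.anchor
            case .removed:
                meshAnchors[update.anchor.id] = nil
            }
        }
    }

    // Stub: walk each MeshAnchor.geometry (vertices, normals, faces) and feed it to
    // your own USDZ/OBJ writer; there is no built-in exporter on visionOS (assumption).
    func export() { }
}
```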
0 replies · 2 boosts · 147 views · May ’25

USDZ Security
I am working on an app that will allow a user to load and share their model files (usdz, usda, usdc). I'm looking at security options to prevent bad actors. Are there security or validation methods built into ARKit/RealityKit/CloudKit when loading models or saving them on the cloud? I want to ensure no one can inject any sort of exploit through these file types.
0 replies · 0 boosts · 484 views · Jul ’25
