Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

All subtopics
Posts under Spatial Computing topic

Post

Replies

Boosts

Views

Activity

Need to rotate child of a 3D mesh
I am creating a Vision Pro app with a 3D model that has a mesh hierarchy of head, hands, feet, etc. I want the character to look towards the camera, but I am not able to access the character's head through SceneKit or RealityKit. When I try to print the names of the child meshes, it only prints down to the character itself; it does not iterate through all the body parts. Can anyone help?
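For reference, here is a minimal RealityKit sketch of walking the full entity hierarchy and finding a named child; the entity name "Head" and the camera position are placeholder assumptions, not the actual rig from this model. Note that if the body parts are skeleton joints of a single skinned mesh rather than separate entities, they will not show up as child entities at all.

import RealityKit

// Recursively print every descendant so deeply nested parts show up,
// not just the immediate children of the character's root entity.
func printHierarchy(_ entity: Entity, indent: String = "") {
    print(indent + (entity.name.isEmpty ? "<unnamed>" : entity.name))
    for child in entity.children {
        printHierarchy(child, indent: indent + "  ")
    }
}

// findEntity(named:) searches the whole subtree, so it can reach a nested
// "Head" entity even when the top level only lists the character itself.
func aimHead(of characterRoot: Entity, at cameraPosition: SIMD3<Float>) {
    guard let head = characterRoot.findEntity(named: "Head") else { return }
    head.look(at: cameraPosition,
              from: head.position(relativeTo: nil),
              relativeTo: nil)
}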
1
0
225
Sep ’25
I need to troubleshoot Transform Drift in ARKit
Hi all, I'm currently developing a real-time object reconstruction app using ARKit. The goal is to scan large objects using ARKit's depth and transform data and generate a point cloud. However, I'm facing a major challenge: transform drift / world alignment issues. The localToWorld transform provided by ARKit frequently seems to drift or become unstable across frames. This results in misaligned point clouds even when the device is moved slowly or kept relatively still. In some cases, a static surface scanned over a few seconds results in clearly misaligned fragments, which makes it difficult to accurately stitch a multi-frame point cloud. I have experimented with various lighting conditions and object textures, but the issue persists in all cases. At times, the relative error between frames reaches up to 20 cm, while in other instances the error is minimal; however, the drift gradually accumulates over time, leading to an overall enlargement of the reconstructed object. I have attached images of both cases here. Questions: Are there specific conditions under which ARKit's world transform is expected to drift? Is there a way to detect or recover from this drift during runtime? Any best practices for maintaining consistent tracking during scanning or measurement sessions?
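Not a fix for the drift itself, but one common mitigation (a sketch below, assuming an iOS ARKit session with ARFrame access, which the post does not spell out) is to skip frames whose tracking state is degraded before fusing them into the point cloud, so relocalization jumps are not baked into the reconstruction.

import ARKit

// Only integrate frames captured while tracking is nominal; frames captured during
// limited tracking (relocalizing, excessive motion, low features) tend to carry the
// largest transform errors.
func shouldIntegrate(_ frame: ARFrame) -> Bool {
    switch frame.camera.trackingState {
    case .normal:
        return true
    case .limited, .notAvailable:
        return false
    }
}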
1
0
141
Jun ’25
ARDepthData.confidenceMap only returns low confidence on certain devices
A few users have recently reported no longer being able to capture point clouds using our app, specifically on iPhone 15 Pro devices. We recently found an in-house device that exhibits this behavior and found that the confidenceMap contains only low confidence values, regardless of the environment being captured. Our app uses a higher confidence threshold; setting the threshold to a lower value produces noisy results as expected, so that is a non-viable option. Other LiDAR based apps have been tested with this device and the results are the same. No points, or noisy point clouds in apps that allow a lower confidence threshold setting. On devices that exhibit this behavior the "Displaying a point cloud using scene depth" Apple sample app can be used to visualize the issue. First reports of this new behavior occurred as early as iOS 18.4. Looking for recommendations on which team(s) at Apple to reach out to with these findings since the behavior manifests on only a small sample of devices.
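For anyone trying to reproduce this, a quick diagnostic sketch (my own assumed approach, not code from the report) that tallies the confidence levels in a single frame; on an affected device the histogram should show essentially nothing above the low level.

import ARKit

// Count how many pixels fall into each ARConfidenceLevel raw value (0 = low, 1 = medium, 2 = high).
func confidenceHistogram(for depthData: ARDepthData) -> [UInt8: Int] {
    guard let confidenceMap = depthData.confidenceMap else { return [:] }
    CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly) }

    let width = CVPixelBufferGetWidth(confidenceMap)
    let height = CVPixelBufferGetHeight(confidenceMap)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(confidenceMap)
    guard let base = CVPixelBufferGetBaseAddress(confidenceMap) else { return [:] }

    var histogram: [UInt8: Int] = [:]
    for y in 0..<height {
        let row = base.advanced(by: y * bytesPerRow).assumingMemoryBound(to: UInt8.self)
        for x in 0..<width {
            histogram[row[x], default: 0] += 1
        }
    }
    return histogram
}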
1
0
291
Jun ’25
Summon gesture
Can you help me write code that can pick up an element a bit far from me, bring it near to me, flick it a bit, and then send it back to its original position when I release it? Thanks a lot, Christophe
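Not a complete answer, but a rough sketch of the summon-and-return motion using Entity.move(to:relativeTo:duration:timingFunction:); the positions, timings, and the idea of triggering the return after a fixed delay are placeholder assumptions to adapt to your own gesture handling.

import Foundation
import RealityKit

// Bring the entity close, then send it back to where it started.
func summonAndReturn(_ entity: Entity) {
    let originalTransform = entity.transform

    // Move to roughly eye height, half a metre in front of the parent's origin.
    var nearTransform = originalTransform
    nearTransform.translation = [0, 1.2, -0.5]
    entity.move(to: nearTransform, relativeTo: entity.parent,
                duration: 0.4, timingFunction: .easeInOut)

    // Stand-in for "when I release it": send it home after a short delay.
    DispatchQueue.main.asyncAfter(deadline: .now() + 2.0) {
        entity.move(to: originalTransform, relativeTo: entity.parent,
                    duration: 0.4, timingFunction: .easeInOut)
    }
}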
1
0
76
Apr ’25
Can't find a DLL in a visionOS app with Unity
Dear all, I'm using Unity 6.2 beta and Xcode 16.2. I'm creating a simple framework to use the text-to-speech functionality in visionOS from Unity. The framework is created in Swift. I create an Objective-C wrapper with the following declarations: ... void _initTTS(int); ... I create the framework, import it into Unity, and call the functions in a C# wrapper class. The code is as follows: public static class TTSPluginManager { [DllImport("TTS_Vision")] private static extern void _initTTS(int val); ... public static void Initialize() { #if UNITY_VISIONOS _initTTS(0); #else Debug.LogWarning("NativeTTS.Initialize called on a non-iOS platform. Ignoring."); #endif } } I have managed to compile and run the program on the Apple Vision Pro, but I keep getting the following error: DllNotFoundException: TTS_Vision assembly: type: member:(null) TTSPluginManager.Initialize () (at Assets/Plugins/TTSPluginManager.cs:33) LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) (at Assets/AVRLecture/LecturePortalManager.cs:17) InkLoader.StartStory () (at Assets/AVRLecture/InkLoader.cs:24) InkLoader.Start () (at Assets/AVRLecture/InkLoader.cs:18) If I run the generated code from Xcode, I can see the app in the AVP, but I keep getting a loading error: DllNotFoundException: Unable to load DLL 'TTS_Vision'. Tried the load the following dynamic libraries: Unable to load dynamic library '/TTS_Vision' because of 'Failed to open the requested dynamic library (0x06000000) dlerror() = dlopen(/TTS_Vision, 0x0005): tried: '/TTS_Vision' (no such file) at TTSPluginManager.Initialize () [0x00000] in <00000000000000000000000000000000>:0 at LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) [0x00000] in <00000000000000000000000000000000>:0 I can see in the generated code that the framework (TTS_Vision) is there, but the path seems wrong. I've tried adding more options to the search paths, with no success... Any hints or suggestions are much appreciated.
1
0
302
Sep ’25
visionOS Widget Bug
When I was developing a widget for the visionOS 26 beta, I found that it does not work correctly when running on a real visionOS device; an error appears: "Please adopt container background API." It is worth mentioning that this problem does not occur in the visionOS Simulator. Does anyone know the cause and solution, or whether this is a visionOS bug that needs a Feedback report? Thank you!
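For reference, the error message appears to point at the SwiftUI containerBackground API; a minimal sketch of adopting it in a widget entry view (the view name and background color here are placeholders, not from the original project) looks like this:

import SwiftUI
import WidgetKit

struct SampleWidgetEntryView: View {
    var body: some View {
        Text("Hello, visionOS")
            // Widgets are expected to declare their background through this modifier.
            .containerBackground(for: .widget) {
                Color.blue.opacity(0.2)
            }
    }
}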
1
0
465
Sep ’25
Displaying spatial photos and videos on web pages in Safari
The purpose is to create a simple web-based gallery of spatial photos and videos using static html files. I have successfully displayed spatial photos using the img tag and IMG.heic files. I can tap and hold the image to bring up the contextual menu and from there select View Spatial Photo. Is there any way to add a control to the image, like a link or overlay on the image itself, that a user can simply tap to show the image in 3D? And how to host a video file on a web page without going through a CDN/streaming service? Sample html would be much appreciated.
1
0
779
Jan ’26
Sample code not working as expected: Implementing SharePlay for immersive spaces in visionOS
The following sample code project does not seem to work as expected: https://developer.apple.com/documentation/visionos/implementing-shareplay-for-immersive-spaces-in-visionos I have tried to get this project working with a client, but while we were able to see nearby users and make FaceTime calls, the color-changing cube experience always remained a single color. Are there step-by-step instructions that Apple has used to verify this sample code, so I can try to recreate the sample's expected behavior for both nearby participants and those on a FaceTime call?
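One way to narrow this down is to check whether the GroupSession is being joined at all, independent of the sample's cube logic; a minimal activity sketch (the identifier, title, and type here are placeholders, not taken from the sample project) would be:

import GroupActivities

struct ColorCubeActivity: GroupActivity {
    static let activityIdentifier = "com.example.color-cube"

    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "Color Cube Test"
        metadata.type = .generic
        return metadata
    }
}

// Logging each incoming session confirms whether participants actually join,
// which separates an entitlement or session problem from a message-sync problem.
func observeSessions() async {
    for await session in ColorCubeActivity.sessions() {
        print("Joined session with \(session.activeParticipants.count) active participants")
        session.join()
    }
}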
1
0
263
Feb ’26
Immersive environment learning material
I really love the immersive environments, but I don’t have experience with creating them. Do you have resources or tutorials you can recommend for creating these from scratch? I’ve seen the sample projects and videos, but they usually start in the middle, assuming you already have the assets created.
1
0
86
Jul ’25
Cursor display issue on attachment view in immersive space
While using Screen Mirroring in developer mode within my immersive space, I noticed an alignment issue with the computer cursor (the transparent circle). When I move it toward an attachment view, the cursor remains horizontal instead of aligning with the surface of the attachment view. It displays correctly on a 2D window; it is only wrong on an attachment view. Is this behavior a bug, or could it be caused by a missing or incorrect configuration on the attachment view? Any help is appreciated, thanks.
1
0
103
Apr ’25
Implementing Foveated Streaming with Apple Vision Pro
Hello, I want to understand the current state of developing for Apple Vision Pro. I want to stream a video from a remote server in real time. It is a live video stream, so I can't download it. I want to serve both a low-quality stream and a high-resolution stream, where the server only sends the "box" the user is looking at. Is there any API to track where the user is looking within the experience? Thanks,
1
0
694
Dec ’25
CustomMaterial disable unlit tone mapping
Hi, since iOS 18, UnlitMaterial and ShaderGraphMaterial have the option to disable tone mapping, e.g. via https://developer.apple.com/documentation/realitykit/unlitmaterial/init(applypostprocesstonemap:) Is it possible to do the same for CustomMaterial? I tried initializing a CustomMaterial based on an UnlitMaterial where tone mapping is disabled, like so: let unlitMat = UnlitMaterial(applyPostProcessToneMap: false) let customMaterial = try CustomMaterial( from: unlitMat, surfaceShader: surfaceShader, geometryModifier: geometryModifier ) but that does not seem to work. The colors of my texture still look altered in comparison to a plain UnlitMaterial or a ShaderGraphMaterial where it's disabled. Any hints? Thank you!
1
0
142
Jun ’25
Can not remove final World Anchor
I’ve been having some issues removing anchors. I can add anchors with no issue. They will be there the next time I run the scene. I can also get updates when ARKit sends them. I can remove anchors, but not all the time. The method I’m using is to call removeAnchor() on the data provider.

worldTracking.removeAnchor(forID: uuid) // Yes, I have also tried `removeAnchor(_ worldAnchor: WorldAnchor)`

This works if there is more than one anchor in a scene. When I’m down to one remaining anchor, I can remove it. It seems to succeed (does not raise an error), but the next time I run the scene the removed anchor is back. This only happens when there is only one remaining anchor.

do {
    // This always runs, but it doesn't seem to "save" the removal when there is only one anchor left.
    try await worldTracking.removeAnchor(forID: uuid)
} catch {
    // I have never seen this block fire!
    print("Failed to remove world anchor \(uuid) with error: \(error).")
}

I posted a video on my website if you want to see it happening. https://stepinto.vision/labs/lab-051-issues-with-world-tracking/ Here is the full code. Can you see if I’m doing something wrong? Is this a bug?

struct Lab051: View {
    @State var session = ARKitSession()
    @State var worldTracking = WorldTrackingProvider()
    @State var worldAnchorEntities: [UUID: Entity] = [:]
    @State var placement = Entity()
    @State var subject: ModelEntity = {
        let subject = ModelEntity(
            mesh: .generateSphere(radius: 0.06),
            materials: [SimpleMaterial(color: .stepRed, isMetallic: false)])
        subject.setPosition([0, 0, 0], relativeTo: nil)
        let collision = CollisionComponent(shapes: [.generateSphere(radius: 0.06)])
        let input = InputTargetComponent()
        subject.components.set([collision, input])
        return subject
    }()

    var body: some View {
        RealityView { content in
            guard let scene = try? await Entity(named: "WorldTracking", in: realityKitContentBundle) else { return }
            content.add(scene)
            if let placementEntity = scene.findEntity(named: "PlacementPreview") {
                placement = placementEntity
            }
        } update: { content in
            for (_, entity) in worldAnchorEntities {
                if !content.entities.contains(entity) {
                    content.add(entity)
                }
            }
        }
        .modifier(DragGestureImproved())
        .gesture(tapGesture)
        .task {
            try! await setupAndRunWorldTracking()
        }
    }

    var tapGesture: some Gesture {
        TapGesture()
            .targetedToAnyEntity()
            .onEnded { value in
                if value.entity.name == "PlacementPreview" {
                    // If we tapped the placement preview cube, create an anchor
                    Task {
                        let anchor = WorldAnchor(originFromAnchorTransform: value.entity.transformMatrix(relativeTo: nil))
                        try await worldTracking.addAnchor(anchor)
                    }
                } else {
                    Task {
                        // Get the UUID we stored on the entity
                        let uuid = UUID(uuidString: value.entity.name) ?? UUID()
                        do {
                            try await worldTracking.removeAnchor(forID: uuid)
                        } catch {
                            print("Failed to remove world anchor \(uuid) with error: \(error).")
                        }
                    }
                }
            }
    }

    func setupAndRunWorldTracking() async throws {
        if WorldTrackingProvider.isSupported {
            do {
                try await session.run([worldTracking])
                for await update in worldTracking.anchorUpdates {
                    switch update.event {
                    case .added:
                        let subjectClone = subject.clone(recursive: true)
                        subjectClone.isEnabled = true
                        subjectClone.name = update.anchor.id.uuidString
                        subjectClone.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
                        worldAnchorEntities[update.anchor.id] = subjectClone
                        print("🟢 Anchor added \(update.anchor.id)")
                    case .updated:
                        guard let entity = worldAnchorEntities[update.anchor.id] else {
                            print("No entity found to update for anchor \(update.anchor.id)")
                            return
                        }
                        entity.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
                        print("🔵 Anchor updated \(update.anchor.id)")
                    case .removed:
                        worldAnchorEntities[update.anchor.id]?.removeFromParent()
                        worldAnchorEntities.removeValue(forKey: update.anchor.id)
                        print("🔴 Anchor removed \(update.anchor.id)")
                        if let remainingAnchors = await worldTracking.allAnchors {
                            print("Remaining Anchors: \(remainingAnchors.count)")
                        }
                    }
                }
            } catch {
                print("ARKit session error \(error)")
            }
        }
    }
}
1
2
227
May ’25
Extending or disabling the 1.5-meter boundary in ImmersiveSpace
I’m currently developing an app for visionOS and working with an ImmersiveSpace. I’ve noticed that the system automatically enforces a safety boundary at approximately 1.5 meters. If the user moves beyond this limit, the content fades out or the system reverts to Passthrough. Is there any way to disable this boundary or extend its radius? This app is currently in the experimental/verification phase, and it is intended to be run on a Vision Pro in Developer Mode. Since the primary goal is to test large-scale spatial interactions during development, I am looking for any way—including developer-specific settings or configurations—to bypass or expand this limit. If there isn't a direct API to change the boundary size, are there any recommended workarounds for testing movement within large environments? Any insights would be greatly appreciated!
1
0
558
Jan ’26
ManipulationComponent in both parent and child entities
Hello, In my project, I have attached a ManipulationComponent to Entity A, and as expected, I'm able to interact with it using the built-in gestures. I have another Entity B, a child of A, that I would like to interact with as well, so I attempted to add a ManipulationComponent to B. However, no gestures seem to be registered on B; I can still interact with A, but B cannot be interacted with despite having ManipulationComponents on both entities. So I'm wondering if I'm just doing something wrong, if this is an issue with ManipulationComponent, or if this is a limitation of the API. Attached is the code used to add the ManipulationComponent to an entity; it was done on both A and B:

let mc = ManipulationComponent()
model.components.set(mc)

var boxShape = ShapeResource.generateBox(width: 0.25, height: 0.05, depth: 0.25)
boxShape = boxShape.offsetBy(translation: simd_float3(0, -0.05, -0.25))
ManipulationComponent.configureEntity(model, collisionShapes: [boxShape])

if var mc = model.components[ManipulationComponent.self] {
    mc.releaseBehavior = .stay
    mc.dynamics.inertia = .low
    model.components.set(mc)
}

I am using visionOS 26.0; let me know if there's any additional information needed.
1
0
376
Oct ’25
Using AVAsynchronousKeyValueLoading.load() on an AVAssetTrack gives an error
I'm seeing this error while attempting to compile my visionOS app under Xcode 26. My existing code looks like:

let (naturalSize, formatDescriptions, mediaCharacteristics) = try? await videoTrack.load(.naturalSize, .formatDescriptions, .mediaCharacteristics)

This now gives a compiler error: Type of expression is ambiguous without a type annotation. I don't see anything that was changed or deprecated in the latest version. Also, loading the properties individually works fine, i.e.:

let naturalSize = try? await videoTrack.load(.naturalSize)
let formatDescriptions = try? await videoTrack.load(.formatDescriptions)
let mediaCharacteristics = try? await videoTrack.load(.mediaCharacteristics)
1
0
121
Jun ’25
Connect external disk using developer strap
I have more than 1 TB of immersive videos, and I want to play from them. Is there a way I can connect an SSD to the Vision Pro via the developer strap? Or is it possible to connect a 10G Ethernet adapter, then use Ethernet to connect to a disk/NAS and attach the drive over IP?
1
0
488
Jan ’26
Can I use `FromToByAction` to animate the ShaderGraphMaterial parameters?
Can I combine FromToByAction and BindTarget.MaterialPath to animate my ShaderGraphMaterial? I don't know how to use BindTarget.MaterialPath.
1
0
313
Sep ’25
Are there limitations when opening a spatial photo with Quick Look while an immersive space is active?
When I am viewing an immersive space and open a spatial photo in Quick Look, it hides the entire app interface to show the photo. Is there a memory limit? If the immersive space is not active, the application keeps its interface.
1
0
188
Jun ’25
Why doesn't visionOS support CustomMaterial?
It's much easier to use a custom material to bridge a Metal shader onto a RealityKit model than to do the same thing with LowLevelTexture. Why doesn't visionOS support this material?
1
0
164
Apr ’25