Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post

Replies

Boosts

Views

Activity

iOS needs to allow for background bluetooth scanning. I can't fully build my app.
iOS currently restricts background Bluetooth advertising and scanning in order to preserve battery life and protect user privacy. While these restrictions serve important purposes, they also limit legitimate use cases where users have explicitly opted in to proximity-based experiences. The core challenge is that modern social applications need a way to detect when users are physically present at the same location or event without requiring every participant to keep their app in the foreground. Under the current system, background BLE advertising is heavily throttled and can only transmit a limited payload, background scanning intervals are sparse and unpredictable, peer-to-peer proximity detection cannot be maintained reliably when apps are in the background, and Background App Refresh is non-deterministic, making any kind of time-based proximity validation impossible.

A proposed enhancement would be to introduce an “Enhanced Proximity Permission.” This would allow developers to enable reliable background BLE advertising and scanning for declared time windows, such as a maximum of eight hours. It would also allow devices running the same app to detect each other’s proximity using ephemeral, rotating identifiers that preserve privacy, with clear user consent and prominent indicators whenever the feature is active.

Unlocking this capability would open up new categories of applications. Live events could offer automatic attendance tracking at concerts, conferences, or sports venues. Retail environments could support opt-in foot traffic analysis and dwell-time insights. Social apps could allow users to find friends at festivals, campuses, or other large venues. Safety applications could extend to crowd density monitoring and contact tracing beyond COVID-era needs. Gaming could offer real-world multiplayer experiences based on physical proximity, and transportation providers could verify rideshare pickups or measure public transit flows automatically.

Privacy safeguards would remain central. Permissions would be time-boxed and expire after an event or session. A mandatory visual indicator would be displayed whenever proximity tracking is active. A user-facing dashboard would show all apps granted enhanced proximity access. Permissions would automatically be revoked after a period of non-use, and only ephemeral tokens, not permanent identifiers, would be broadcast.

The industry impact would be significant. With this enhancement, iOS could power the next generation of location-aware social platforms while maintaining Apple’s leadership in privacy through explicit user control and transparency. Current alternatives, such as requiring users to keep apps in the foreground or deploying dedicated hardware beacons, produce poor user experiences and constrain innovation in spatial computing and social applications.

Can anyone from Apple consider this change? Having to buy iBeacons is brutal and means slower adoption. Please reconsider this for users who opt in.
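For context on today's constraints, a limited form of background scanning does exist once you declare the bluetooth-central background mode and filter by explicit service UUIDs; what it cannot do is the reliable, high-frequency peer discovery described above. A minimal sketch of the current opt-in pattern (the service UUID is a placeholder, and the app's Info.plist needs the bluetooth-central background mode plus NSBluetoothAlwaysUsageDescription):

import CoreBluetooth

final class ProximityScanner: NSObject, CBCentralManagerDelegate {
    // Placeholder service UUID; a real app would use its own.
    static let proximityService = CBUUID(string: "E20A39F4-73F5-4BC4-A12F-17D1AD07A961")

    private var central: CBCentralManager!

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        // In the background, scans must name explicit service UUIDs,
        // and discovery delivery is coalesced and throttled by the system.
        central.scanForPeripherals(
            withServices: [Self.proximityService],
            options: [CBCentralManagerScanOptionAllowDuplicatesKey: false]
        )
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        print("Nearby peer: \(peripheral.identifier), RSSI \(RSSI)")
    }
}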
1
0
1.1k
Sep ’25
visionOS Widget Bug
While developing a widget for the visionOS 26 beta, I found that it doesn't work correctly when running on a real device: it fails with the error "Please adopt containerBackground API". Notably, this problem does not occur in the visionOS Simulator. Does anyone know the cause and a solution, or is this a visionOS bug that needs a Feedback report? Thank you!
1
0
465
Sep ’25
Do you retain a reference to your content events in RealityView?
Do you retain a reference to your content (RealityViewContent) events? For example, the Manipulation Events docs from Apple use _ to discard the result. In theory the event should keep working while the content is alive.

_ = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .blue, isMetallic: false)
}
_ = content.subscribe(to: ManipulationEvents.WillEnd.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .red, isMetallic: false)
}

We could instead store these subscriptions in state; I've seen this in a few samples and apps.

@State var beginSubscription: EventSubscription?
...
beginSubscription = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .blue, isMetallic: false)
}

The main advantage I see is that we can be explicit about when we cancel the subscription. Are there other reasons to keep a reference to these events?
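One concrete reason to keep the reference is deterministic teardown: EventSubscription has a cancel() method, so a stored subscription can be stopped at a moment you choose rather than whenever the content happens to be deallocated. A minimal sketch (the view and its entities are hypothetical):

import SwiftUI
import RealityKit

struct ManipulationColorView: View {
    // Stored so we can cancel explicitly instead of riding on content lifetime.
    @State private var beginSubscription: EventSubscription?

    var body: some View {
        RealityView { content in
            // ... add manipulable entities to content here ...
            beginSubscription = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
                event.entity.components[ModelComponent.self]?.materials[0] =
                    SimpleMaterial(color: .blue, isMetallic: false)
            }
        }
        .onDisappear {
            // Deterministic teardown: stop receiving events immediately.
            beginSubscription?.cancel()
            beginSubscription = nil
        }
    }
}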
1
0
593
Sep ’25
Extending or disabling the 1.5-meter boundary in ImmersiveSpace
I’m currently developing an app for visionOS and working with an ImmersiveSpace. I’ve noticed that the system automatically enforces a safety boundary at approximately 1.5 meters. If the user moves beyond this limit, the content fades out or the system reverts to Passthrough. Is there any way to disable this boundary or extend its radius? This app is currently in the experimental/verification phase, and it is intended to be run on a Vision Pro in Developer Mode. Since the primary goal is to test large-scale spatial interactions during development, I am looking for any way—including developer-specific settings or configurations—to bypass or expand this limit. If there isn't a direct API to change the boundary size, are there any recommended workarounds for testing movement within large environments? Any insights would be greatly appreciated!
1
0
558
Jan ’26
Need to rotate child of a 3D mesh
I am creating a Vision Pro app with a 3D model that has a mesh hierarchy of head, hands, feet, etc. I want the character to look toward the camera, but I am not able to access the character's head through SceneKit or RealityKit. When I try to print the names of the child meshes, it only prints down to the character node; it doesn't iterate through all the body parts. Can anyone help?
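If the model is loaded with RealityKit, note that a loop over children only sees the first level of the hierarchy; the body parts usually sit several levels deeper. A sketch of a recursive dump plus a subtree search, assuming the joint is actually named "head" in the asset (verify the name in Reality Composer Pro; modelEntity and cameraPosition are placeholders):

import RealityKit

// Recursively print every entity name in the hierarchy,
// not just the model's direct children.
func printHierarchy(_ entity: Entity, depth: Int = 0) {
    print(String(repeating: "  ", count: depth) + entity.name)
    for child in entity.children {
        printHierarchy(child, depth: depth + 1)
    }
}

// findEntity(named:) searches the entire subtree, so it reaches
// nodes that a single-level loop over children would miss.
if let head = modelEntity.findEntity(named: "head") {
    head.look(at: cameraPosition,
              from: head.position(relativeTo: nil),
              relativeTo: nil)
}

If the exporter baked all body parts into a single mesh, there are no separate child entities to find, and the model would need re-exporting with its hierarchy preserved.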
1
0
225
Sep ’25
Unable to Retain Main App Window State When Transitioning to Immersive Space
In my visionOS app, I have two types of windows:

Main App Window – the default window that launches when the app starts. It displays the video listings and other primary content.
Immersive Space – opens only when a user starts streaming or playing a video.

Issue: When entering the immersive space, the main app window remains visible in front of it unless manually closed. To avoid this, I currently close the main window when transitioning to the immersive space and reopen it when exiting. However, this causes the app to restart instead of resuming from its previous state.

Desired behavior: I want the main app window to retain its state and seamlessly resume from where it was before entering immersive mode, rather than restarting.

Attempts and challenges: I tried managing opacity and visibility, but neither worked as expected, and I couldn't find a way to push the main window to the background while bringing the immersive space to the foreground.

I'm looking for a solution that keeps the main window's state intact while transitioning between immersive and normal modes.
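One workaround sketch, not an official pattern: a window's @State disappears with the window, so lift whatever the main window needs into an app-scoped observable model that both scenes share. Reopening the window after the immersive session then rebuilds from that model instead of starting fresh. All type and scene names here are hypothetical:

import SwiftUI

// App-scoped state that outlives any individual window.
@Observable
final class AppModel {
    var selectedVideoID: String?
    var listScrollOffset: Double = 0
}

@main
struct VideoApp: App {
    @State private var model = AppModel()

    var body: some Scene {
        WindowGroup(id: "main") {
            MainView()              // reads/writes AppModel, not local @State
                .environment(model)
        }
        ImmersiveSpace(id: "player") {
            PlayerView()
                .environment(model)
        }
    }
}

Because the state lives above the window, closing the main window on entry to the immersive space and reopening it on exit should feel like resuming rather than restarting.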
1
0
94
Mar ’25
Summon gesture
Can you help me write code that picks an element a bit far from me, brings it near to me, lets me flick it a bit, and then sends it back to its original position when I release it? Thanks a lot, Christophe
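Not a complete summon gesture, but here is a minimal sketch of the pull-toward-me and snap-back parts using a drag gesture targeted at entities. It assumes the entity already has an InputTargetComponent and a CollisionComponent so it can receive input:

import SwiftUI
import RealityKit

struct SummonView: View {
    @State private var originalPosition: SIMD3<Float>?

    var body: some View {
        RealityView { content in
            // ... add an entity with InputTargetComponent and
            // CollisionComponent so it can receive gestures ...
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Remember where the entity started.
                    if originalPosition == nil {
                        originalPosition = value.entity.position(relativeTo: nil)
                    }
                    // Follow the hand: convert the gesture location to world space.
                    let target = value.convert(value.location3D, from: .local, to: .scene)
                    value.entity.setPosition(target, relativeTo: nil)
                }
                .onEnded { value in
                    // Send the entity back where it came from.
                    if let home = originalPosition {
                        value.entity.move(
                            to: Transform(translation: home),
                            relativeTo: nil,
                            duration: 0.4
                        )
                    }
                    originalPosition = nil
                }
        )
    }
}

The "flick" could build on the same onEnded callback, for example by giving the entity a physics body and an impulse before moving it home.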
1
0
76
Apr ’25
Can't find a DLL in a visionOS app with Unity
Dear all, I'm using Unity 6.2 beta and Xcode 16.2. I'm creating a simple framework to use the text-to-speech functionality in visionOS from Unity. The framework is created in Swift, and I create an Objective-C wrapper with the following declarations:

...
void _initTTS(int);
...

I create the framework, import it into Unity, and call the functions from a C# wrapper class. The code is as follows:

public static class TTSPluginManager
{
    [DllImport("TTS_Vision")]
    private static extern void _initTTS(int val);
    ...
    public static void Initialize()
    {
#if UNITY_VISIONOS
        _initTTS(0);
#else
        Debug.LogWarning("NativeTTS.Initialize called on a non-iOS platform. Ignoring.");
#endif
    }
}

I have managed to compile and run the program on the Apple Vision Pro, but I keep getting the following error:

DllNotFoundException: TTS_Vision assembly: type: member:(null)
TTSPluginManager.Initialize () (at Assets/Plugins/TTSPluginManager.cs:33)
LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) (at Assets/AVRLecture/LecturePortalManager.cs:17)
InkLoader.StartStory () (at Assets/AVRLecture/InkLoader.cs:24)
InkLoader.Start () (at Assets/AVRLecture/InkLoader.cs:18)

If I run the generated code from Xcode, I can see the app in the AVP, but I keep getting a loading error:

DllNotFoundException: Unable to load DLL 'TTS_Vision'. Tried to load the following dynamic libraries: Unable to load dynamic library '/TTS_Vision' because of 'Failed to open the requested dynamic library (0x06000000) dlerror() = dlopen(/TTS_Vision, 0x0005): tried: '/TTS_Vision' (no such file)
  at TTSPluginManager.Initialize () [0x00000] in <00000000000000000000000000000000>:0
  at LecturePortalManager.OnCreateStory (Ink.Runtime.Story story) [0x00000] in <00000000000000000000000000000000>:0

I can see in the generated code that the framework (TTS_Vision) is there, but the path seems wrong. I've tried adding more options to the search paths, with no success. Any hints or suggestions are much appreciated.
1
0
302
Sep ’25
onWorldRecenter memory leak and duplicate callbacks in ImmersiveSpace
Posting this here in case this information is helpful to other developers: As of visionOS 26.3 beta 1, onWorldRecenter has two significant issues: (FB21557639) Memory Leak: When onWorldRecenter is assigned to a RealityView within an ImmersiveSpace, it appears to retain a strong reference to the view's internal SwiftUI context. When the immersive space is dismissed, the view's @State objects will not be deallocated. Also, each time the immersive space view's body is executed, additional state storage will be allocated and leaked. Multiple Callbacks: When the user long-presses the Digital Crown, the onWorldRecenter closure will be called multiple times, once for each past view body execution, including those of immersive space views that have been previously dismissed. Although these issues seem to be most prevalent when onWorldRecenter is used with an ImmersiveSpace, they may also occur in the context of a WindowGroup under certain circumstances. It's possible to work around this problem by moving onWorldRecenter to an empty overlay view within the app's primary WindowGroup and forwarding the world recenter events to ImmersiveSpace views through a notification system, coupled with a debouncer as an extra precaution.
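A sketch of the workaround described above, assuming onWorldRecenter takes the simple trailing closure that the post's description implies (the notification name and debounce interval are arbitrary choices):

import SwiftUI

extension Notification.Name {
    static let worldDidRecenter = Notification.Name("worldDidRecenter")
}

// Lives in the primary WindowGroup, outside any ImmersiveSpace,
// so the problematic modifier never touches immersive view state.
struct WorldRecenterRelay: View {
    @State private var lastFired = Date.distantPast

    var body: some View {
        Color.clear
            .onWorldRecenter {
                // Debounce: the callback may fire several times per recenter.
                let now = Date()
                guard now.timeIntervalSince(lastFired) > 0.5 else { return }
                lastFired = now
                NotificationCenter.default.post(name: .worldDidRecenter, object: nil)
            }
    }
}

// In the ImmersiveSpace view:
// .onReceive(NotificationCenter.default.publisher(for: .worldDidRecenter)) { _ in
//     // reposition immersive content here
// }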
1
1
1k
Jan ’26
Immersive environment learning material
I really love the immersive environments, but I don’t have experience with creating them. Do you have resources or tutorials you can recommend for creating these from scratch? I’ve seen the sample projects and videos, but they usually start in the middle, assuming you already have the assets created.
1
0
86
Jul ’25
How to Renew visionOS Enterprise API Entitlements?
How can I renew visionOS Enterprise API? I've spent so much time contacting Apple Developer Support. They said they don't know the renewal process either and are "checking with the internal operations team" - but it's been 2 months with no updates. The official documentation (https://developer.apple.com/documentation/visionOS/building-spatial-experiences-for-business-apps-with-enterprise-apis#Request-the-entitlements) says: "The license file comes with an expiration date, so you need to renew it before then to ensure your entitlements continue to function." But it doesn't explain HOW to renew. When I asked Apple Support about this, they told me: "After Apple approves your app for one or more entitlements, you receive a license file, along with additional instructions." But I never received any instructions when I was first approved, and I still don't know how to renew. There's also no direct way to contact the Enterprise API team. Now my visionOS Enterprise API license has been expired for 2 months. I submitted a renewal request, but I still haven't heard anything back. Is it normal to take more than 2 months for approval? Any advice or shared experiences would be really helpful. Thanks!
1
0
506
Jan ’26
How to export an entity to USD?
I want to record animation with entity, then export it to .usd without using Reality Composer Pro, how to achieve that?
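I'm not aware of a public RealityKit API for exporting an entity's recorded animation to .usd on-device. One hedged alternative is SceneKit, whose SCNScene.write(to:) supports .usdz output, so if the animation can be rebuilt in SceneKit terms something like the following works on iOS/macOS (I haven't verified this path on visionOS, nor whether every animation type survives the export):

import SceneKit
import QuartzCore

let scene = SCNScene()
let box = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0))

// A simple transform animation as a stand-in for a recorded one.
let spin = CABasicAnimation(keyPath: "rotation")
spin.toValue = SCNVector4(0, 1, 0, Float.pi * 2)
spin.duration = 2
box.addAnimation(spin, forKey: "spin")
scene.rootNode.addChildNode(box)

// SCNScene.write infers the format from the file extension.
let url = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("export.usdz")
_ = scene.write(to: url, options: nil, delegate: nil, progressHandler: nil)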
1
0
77
Apr ’25
Does RealityKit support mesh based animation?
I have a 3D model with mesh-based animation; that is, every frame is a new mesh. I import it into RealityView, but I can't play its animation. RealityKit tells me the model has no animations when I use print(entity.availableAnimations).
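As far as I know, availableAnimations only reflects transform, skeletal, and blend-shape animations from the asset, not per-frame mesh sequences. One hedged workaround sketch: load each frame's mesh as its own MeshResource and swap the ModelComponent's mesh on scene updates. The mesh loading, the 30 fps rate, and the names here are all assumptions:

import SwiftUI
import RealityKit

struct MeshSequenceView: View {
    // One MeshResource per animation frame (loading code omitted).
    let frameMeshes: [MeshResource]
    let modelEntity: ModelEntity

    @State private var subscription: EventSubscription?
    @State private var elapsed: TimeInterval = 0

    var body: some View {
        RealityView { content in
            content.add(modelEntity)
            subscription = content.subscribe(to: SceneEvents.Update.self) { event in
                guard !frameMeshes.isEmpty else { return }
                elapsed += event.deltaTime
                // Pick the frame for the current time, looping at the end.
                let index = Int(elapsed * 30.0) % frameMeshes.count
                modelEntity.components[ModelComponent.self]?.mesh = frameMeshes[index]
            }
        }
    }
}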
1
0
674
Aug ’25
RotationSystem and RotationComponent API Updates for visionOS 26 Beta
Are there any changes to RotationSystem: System and RotationComponent: Component in the visionOS 26 beta that I should be aware of, so I can tell whether I need to update how I use them in my visionOS app?
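For context, RotationSystem and RotationComponent are custom ECS types from Apple's sample code rather than framework API, so they only change if your copy of them changes. A minimal version for reference (details may differ from whichever sample this started from):

import RealityKit

// Component storing per-entity spin speed (radians per second).
struct RotationComponent: Component {
    var speed: Float = 1.0
    var axis: SIMD3<Float> = [0, 1, 0]
}

// System that spins every entity carrying a RotationComponent.
struct RotationSystem: System {
    static let query = EntityQuery(where: .has(RotationComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let rotation = entity.components[RotationComponent.self] else { continue }
            entity.orientation *= simd_quatf(
                angle: rotation.speed * Float(context.deltaTime),
                axis: rotation.axis
            )
        }
    }
}

// Registration, once at app launch:
// RotationComponent.registerComponent()
// RotationSystem.registerSystem()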
1
0
62
Jun ’25
Connect external disk using developer strap
I have more than 1 TB of immersive videos, and I want to play them from external storage. Is there a way to connect an SSD to Vision Pro via the developer strap? Or is it possible to connect a 10G Ethernet adapter, then use Ethernet to reach a disk or NAS and attach the drive over IP?
1
0
488
Jan ’26
View Immersive/Stereoscopic Images in Immersive Space
Using Quick Look exits you from both your app and the Immersive Space. Is there a way to view immersive images within an Immersive Space?
1
0
44
Jun ’25
Is it possible to create immersive video entirely from a virtual scene?
Hi guys, I'm working in the VFX industry and I have a question: is it possible to create immersive video directly from a virtual scene built in DCC software like Maya, rendered into footage, then encoded as immersive video, and finally played on Vision Pro? Thanks.
1
1
730
Sep ’25
Can I use `FromToByAction` to animate the ShaderGraphMaterial parameters?
Can I combine FromToByAction and BindTarget.MaterialPath to animate my ShaderGraphMaterial? I don't know how to use BindTarget.MaterialPath.
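I can't confirm the MaterialPath binding specifically, but for reference, here is the general from-to animation pattern with a bind target I know works (.opacity); whatever form the material-path binding takes, it should slot into the same generate-and-play shape. The entity variable is a placeholder:

import RealityKit

// Build the animation definition: a Float going 0 -> 1 over one second,
// bound to the entity's opacity (requires an OpacityComponent on visionOS).
let fade = FromToByAnimation<Float>(
    from: 0.0,
    to: 1.0,
    duration: 1.0,
    bindTarget: .opacity
)

// Generate a playable resource from the definition, then play it.
if let resource = try? AnimationResource.generate(with: fade) {
    entity.playAnimation(resource)
}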
1
0
313
Sep ’25
Slowness in Developer Strap 2
Hi, I'm using a Vision Pro (M5) with Developer Strap 2. When I connect it to my Mac, it still shows 480 Mb/s (USB 2.0 speed). All systems are on the latest firmware. Does anyone know why?
1
0
525
Jan ’26
Are there limitations when opening a spatial photo with Quick Look while an immersive space is on?
When I'm viewing an immersive space and open a spatial photo in Quick Look, it hides the entire app interface to show the photo. Is there a memory limit that causes this? If the immersive space is not active, the application keeps its interface.
1
0
188
Jun ’25