Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic
ModelEntity position values are stale after world recenter (Crown long press)
Hi everyone, I’m working with RealityKit on visionOS and I’m seeing unexpected behavior when the user long-presses the Digital Crown, which recenters the world.

Observed behavior: when the world is recentered via a long press of the Crown, the models remain visually in the correct place (as expected). However, if I query a model’s position or transform immediately after recentering (e.g. entity.position or similar), I still get the old values from before the recenter. As soon as I interact with the model using a gesture (drag/rotate/scale), the position updates, and querying it then returns the correct values.

So effectively:
- Recenter happens
- Visual position is correct
- Programmatic position remains stale
- The first gesture causes the position to “snap” to the correct, updated value

Questions:
- Is there any event, notification, or callback that fires when the world is recentered due to a long press of the Crown button?
- Is there a recommended way to get the updated world-space transform immediately after recentering, without waiting for a gesture?
- Is this expected behavior due to deferred/lazy transform updates in RealityKit?

Right now it feels like recentering updates the coordinate system but doesn’t immediately commit new transform values to entities until some interaction occurs. Any guidance or best-practice patterns for handling this would be appreciated. Thanks!
Replies: 1 · Boosts: 0 · Views: 24 · Created: 12h ago
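One possible direction for the question above, as a minimal sketch: a later post in this topic mentions a visionOS 26 `onWorldRecenter` modifier on RealityView. Whether it takes a no-argument closure and whether the transform is already committed inside the callback are assumptions; `modelEntity` and the re-query logic are illustrative only.

```swift
// A minimal sketch, assuming the visionOS 26 `onWorldRecenter` modifier
// (referenced elsewhere in this topic) exists and takes a no-argument closure.
import SwiftUI
import RealityKit

struct RecenterAwareView: View {
    @State private var modelEntity = ModelEntity()

    var body: some View {
        RealityView { content in
            content.add(modelEntity)
        }
        .onWorldRecenter {
            // Re-query the transform after the system finishes recentering.
            // If values are still stale here, deferring the read by one frame
            // (e.g. via a scene update subscription) may help -- untested assumption.
            let worldTransform = modelEntity.transformMatrix(relativeTo: nil)
            print("Post-recenter transform:", worldTransform)
        }
    }
}
```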
Buttons become unresponsive after using .windowStyle(.plain) with auto-hiding menu
I'm developing a visionOS panorama viewer app where I need to implement an auto-hiding floating menu in immersive space. The menu should:
- Show for 3 seconds when entering immersive mode
- Auto-hide after 3 seconds
- Reappear when the user taps anywhere (using SpatialTapGesture)
- Have buttons that respond to gaze + pinch interaction

The problem: when I add .windowStyle(.plain) to get a transparent window background for the auto-hide effect, all buttons in the menu become completely unresponsive to gaze + pinch. The buttons only respond to direct finger touch (poking).

Without .windowStyle(.plain): buttons work correctly with gaze + pinch, but I cannot make the window background transparent for hiding. With .windowStyle(.plain): the window can be transparent, but the buttons lose gaze + pinch interaction.

Code:

App.swift:

```swift
@main
struct MyApp: App {
    @StateObject private var model = AppModel()

    var body: some Scene {
        WindowGroup(id: "MainWindow") {
            ContentView()
                .environmentObject(model)
        }
        .defaultSize(width: 900, height: 700)
        .windowResizability(.contentSize)
        .windowStyle(.plain) // <-- This causes the interaction issue

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
                .environmentObject(model)
        }
    }
}
```

ContentView.swift (simplified):

```swift
struct ContentView: View {
    @EnvironmentObject var model: AppModel
    @State private var isMenuVisible: Bool = true

    var body: some View {
        VStack {
            if model.isImmersiveViewActive {
                if isMenuVisible {
                    // This menu's buttons don't respond to gaze + pinch
                    immersiveControlMenu
                }
            } else {
                mainMenuButtons
            }
        }
        .glassBackgroundEffect()
    }

    private var immersiveControlMenu: some View {
        HStack {
            Button("Exit") {
                exitImmersiveSpace()
            }
            .buttonStyle(.bordered) // Also tried .plain, same issue
        }
        .padding()
        .glassBackgroundEffect()
    }
}
```

ImmersiveView.swift:

```swift
struct ImmersiveView: View {
    @EnvironmentObject var model: AppModel

    var body: some View {
        RealityView { content in
            // Panorama sphere
            let sphere = ModelEntity(mesh: .generateSphere(radius: 1000), materials: [material])
            content.add(sphere)

            // Tap detector for menu toggle
            let tapDetector = Entity()
            tapDetector.components.set(CollisionComponent(shapes: [.generateSphere(radius: 900)]))
            tapDetector.components.set(InputTargetComponent())
            content.add(tapDetector)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { _ in
                    model.shouldShowMenu = true
                }
        )
    }
}
```

Environment:
- Xcode 26.2
- visionOS 26.3
- Vision Pro device

Questions:
- Is .windowStyle(.plain) expected to affect button interaction behavior?
- What is the recommended approach to achieve a transparent/hidden window in immersive mode while maintaining button interactivity?
- Is there an alternative to .windowStyle(.plain) for hiding window chrome on visionOS?

Thank you for any guidance!
Replies: 4 · Boosts: 0 · Views: 274 · Created: 2d ago
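One workaround worth trying, sketched below: keep the default window style and fade the menu view itself rather than relying on a transparent .plain window. This uses only standard SwiftUI modifiers; whether it preserves gaze + pinch in every configuration of this app is not confirmed, and the `isMenuVisible` binding mirrors the state in the original post.

```swift
// A minimal sketch of one possible workaround, not a confirmed fix.
import SwiftUI

struct ImmersiveControlMenu: View {
    @Binding var isMenuVisible: Bool

    var body: some View {
        HStack {
            Button("Exit") { /* dismiss the immersive space here */ }
                .buttonStyle(.bordered)
        }
        .padding()
        .glassBackgroundEffect()
        // Fade the menu instead of the window, and disable hit testing while
        // hidden so the invisible menu cannot intercept input.
        .opacity(isMenuVisible ? 1 : 0)
        .allowsHitTesting(isMenuVisible)
        .animation(.easeInOut(duration: 0.3), value: isMenuVisible)
    }
}
```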
Guided Access - Detect when setup (Eyes + Hands) is done
Hello, I am building a kiosk-style app for visionOS that will be used in Guided Access mode and handed to various visitors. Each visitor will go through the standard Guided Access hands + eyes setup, and I want my experience to auto-start playing content when that setup is done.

I have looked everywhere but found no way to detect whether setup is complete. Adding any kind of interface to start the app manually is also risky, since buttons and other controls remain visible and interactable while setup takes place. A delay-based approach won't work either, since setup can be skipped, can fail, or can take anywhere from 10 seconds to a few minutes.

So the question is: is there any notification, Boolean, or other signal that tells me the hands + eyes setup in Guided Access mode is complete (or was skipped)? Thanks in advance!
Replies: 2 · Boosts: 1 · Views: 358 · Created: 2d ago
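A speculative sketch, not a confirmed solution: the untested assumption here is that the app's scene is inactive while the Guided Access eyes + hands setup overlay is shown, and becomes active again once setup completes or is skipped. The view body and `startExperience()` are placeholders for the kiosk content.

```swift
// Speculative: relies on the (unverified) assumption that the setup overlay
// deactivates the app's scene and reactivates it when setup ends.
import SwiftUI
import UIKit

struct KioskRootView: View {
    @Environment(\.scenePhase) private var scenePhase
    @State private var hasStarted = false

    var body: some View {
        Text(hasStarted ? "Experience running" : "Waiting for setup…")
            .onChange(of: scenePhase) { _, newPhase in
                // Auto-start once, the first time the scene becomes active
                // while Guided Access is enabled.
                if newPhase == .active,
                   UIAccessibility.isGuidedAccessEnabled,
                   !hasStarted {
                    hasStarted = true
                    startExperience()
                }
            }
    }

    private func startExperience() {
        // Begin playing the kiosk content here.
    }
}
```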
Extending or disabling the 1.5-meter boundary in ImmersiveSpace
I’m currently developing an app for visionOS and working with an ImmersiveSpace. I’ve noticed that the system automatically enforces a safety boundary at approximately 1.5 meters; if the user moves beyond this limit, the content fades out or the system reverts to passthrough. Is there any way to disable this boundary or extend its radius?

This app is currently in the experimental/verification phase and is intended to run on a Vision Pro in Developer Mode. Since the primary goal is to test large-scale spatial interactions during development, I am looking for any way, including developer-specific settings or configurations, to bypass or expand this limit.

If there isn't a direct API to change the boundary size, are there any recommended workarounds for testing movement within large environments? Any insights would be greatly appreciated!
Replies: 1 · Boosts: 0 · Views: 413 · Created: 1w ago
Low-latency live streaming using APMP Wide FoV
Is it possible to achieve sub-second end-to-end latency when displaying live streaming video using APMP (Apple Projected Media Profile) with Wide FoV? APMP supports HLS playback, but my understanding is that standard HLS introduces several seconds of latency. I would like to know whether APMP (especially Wide FoV) supports Low-Latency HLS, or if there are inherent limitations that make sub-second latency impractical. If APMP is not suitable for this use case, are there any recommended alternatives within AVFoundation or related frameworks for rendering wide-FoV live video with very low latency? Thank you for any insights.
Replies: 1 · Boosts: 0 · Views: 335 · Created: 1w ago
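On the AVFoundation side, a minimal sketch of reducing the live offset of an HLS player is shown below. This assumes the stream is played through AVPlayer and that the server actually serves a Low-Latency HLS playlist; whether the APMP / Wide FoV pipeline honors LL-HLS timing end to end is exactly the open question in the post, and the URL is a placeholder.

```swift
// A minimal sketch of tightening the live offset for an HLS stream.
import AVFoundation

func makeLowLatencyPlayer() -> AVPlayer {
    let url = URL(string: "https://example.com/wide-fov/stream.m3u8")! // placeholder
    let item = AVPlayerItem(url: url)

    // Request a smaller offset from the live edge than the default; small
    // values trade playback stability for latency.
    item.configuredTimeOffsetFromLive = CMTime(seconds: 1.0, preferredTimescale: 600)
    item.automaticallyPreservesTimeOffsetFromLive = true

    return AVPlayer(playerItem: item)
}
```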
Stop Reality Composer
Is there a way to stop a Reality Composer timeline and restart it later on? As far as I can tell, you can only start a timeline via a notification.

Related to this, I have the issue that if the notification to start a certain timeline fires twice or more, the timeline appears to play multiple times in parallel, and the animations start to glitch and jump. What is best practice here to avoid this?
Replies: 1 · Boosts: 0 · Views: 454 · Created: 1w ago
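One possible direction, sketched below under an assumption that is not confirmed: that the Reality Composer Pro timeline is exposed through the entity's availableAnimations once the scene is loaded. Keeping the returned AnimationPlaybackController allows stopping and restarting, and also guards against starting the timeline twice. `timelineRoot` is a hypothetical entity name.

```swift
// A sketch: drive the timeline animation directly instead of via notifications.
import RealityKit

final class TimelineDriver {
    private var controller: AnimationPlaybackController?

    func start(on timelineRoot: Entity) {
        // Ignore repeated start requests while the timeline is already playing.
        if let controller, controller.isPlaying { return }

        // Assumption: the RCP timeline appears here as an AnimationResource.
        guard let animation = timelineRoot.availableAnimations.first else { return }
        controller = timelineRoot.playAnimation(animation)
    }

    func stop() {
        controller?.stop()
        controller = nil
    }
}
```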
How to cast shadow on OcclusionMaterial in visionOS
I have a ModelEntity with a GroundingShadowComponent:

```swift
entity.enumerateHierarchy { child, stop in
    child.components.set(GroundingShadowComponent(castsShadow: true))
}
```

When I set the entity on a table, I can see its shadow on the table, even if I disable plane detection. However, when I enable plane detection and give the detected plane an OcclusionMaterial, I can no longer see the shadow on the table.

As far as I know, receivesDynamicLighting is not usable on visionOS. So how can I cast a shadow onto an OcclusionMaterial on visionOS? Or, put differently: is it possible to have the shadow display properly on the tabletop while still ensuring that objects beneath the table are not visible through it?
Replies: 1 · Boosts: 0 · Views: 455 · Created: 2w ago
With manipulation component, once you let go, how to prevent the entity from disappearing while animating it back into the volume
So with the new ManipulationComponent, we can choose "stay", and then if you drag the entity out of your volume and let go, it instantly disappears. We can "animate" it back to a point inside the volume, e.g.:

```swift
content.subscribe(to: ManipulationEvents.WillRelease.self) { event in
    Entity.animate(
        .easeInOut(duration: 1),
        body: {
            event.entity.position = [0, 0.2, 0]
        },
        completion: {}
    )
}
```

However, for the duration that it travels back from outside the volume, the entity is invisible the whole time. In this Apple video it appears to stay visible both while dragging and after letting go, but perhaps that's not a volume they're dragging it out of: https://youtu.be/VtenPKrvPOU?si=y1zoZOs2IMyDzOm6&t=1748

Does anyone know how to keep the entity visible after letting go, while it animates back toward the inside of the volume?
Replies: 1 · Boosts: 1 · Views: 881 · Created: 2w ago
How to Renew visionOS Enterprise API Entitlements?
How can I renew a visionOS Enterprise API license? I've spent a lot of time contacting Apple Developer Support. They said they don't know the renewal process either and are "checking with the internal operations team", but it's been 2 months with no updates.

The official documentation (https://developer.apple.com/documentation/visionOS/building-spatial-experiences-for-business-apps-with-enterprise-apis#Request-the-entitlements) says: "The license file comes with an expiration date, so you need to renew it before then to ensure your entitlements continue to function." But it doesn't explain HOW to renew.

When I asked Apple Support about this, they told me: "After Apple approves your app for one or more entitlements, you receive a license file, along with additional instructions." But I never received any instructions when I was first approved, and I still don't know how to renew. There's also no direct way to contact the Enterprise API team.

Now my visionOS Enterprise API license has been expired for 2 months. I submitted a renewal request but still haven't heard anything back. Is it normal for approval to take more than 2 months? Any advice or shared experiences would be really helpful. Thanks!
Replies: 1 · Boosts: 0 · Views: 451 · Created: 2w ago
LiDAR Projector Pattern iPhone 15 Pro vs. 12 Pro – Research Project Question
Dear Apple Team,

I'm a high school student (vocational upper secondary school) working on my final research project about LiDAR sensors in smartphones, specifically Apple's iPhone implementation.

My current understanding (for context): Apple's LiDAR uses dToF with SPAD detectors. A VCSEL laser emits pulses, a DOE splits the beam into a dot pattern, and each spot's return time is measured separately, yielding a point cloud.

My specific questions:
- How many active projection dots does the LiDAR projector have in the iPhone 15 Pro vs. the iPhone 12 Pro?
- Are the dots static, or do they shift/move over time?
- How many depth measurement points does the system deliver internally (after processing)?
- What is the ranging accuracy (cm-level precision) of each measurement point?

Experimental background: using an IR night-vision camera, I counted approximately 111 dots on the 15 Pro vs. 576 dots on the 12 Pro. Do these match the internal specifications? Photos of my measurements are available if helpful.

Contact request: I would be very grateful if you could connect me with an Apple engineer or ARKit specialist who works with LiDAR technology. I would love to ask follow-up questions directly and would be happy to provide my contact details for this purpose. These specifications would be essential for my research paper.

Thank you very much in advance!

Best regards,
Max
Vocational Upper Secondary School Hans-Leipelt-Schule Donauwörth
Research Project: "LiDAR Sensor Technology in Smartphones"
Replies: 6 · Boosts: 0 · Views: 635 · Created: 2w ago
Access UltraWideCamera when ARSession is running
ARSession provides a video stream from the wide-angle camera. It would be very useful if we could access other cameras while an ARSession is running: either ARSession itself could also provide a stream from the ultra-wide camera, or an AVCaptureSession using the ultra-wide camera should be allowed to run alongside the ARSession. We'd like to cooperate with you on this if possible.

Steps to reproduce: run an AVCaptureSession and then run an ARSession. The AVCaptureSession stops.
Replies: 1 · Boosts: 0 · Views: 455 · Created: 2w ago
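For context, a minimal sketch of the closest existing option: checking whether ARKit itself offers a video format backed by the ultra-wide camera on the current device. This does not enable running a separate AVCaptureSession next to the ARSession, and ultra-wide formats are not guaranteed to be available on every device.

```swift
// A minimal sketch: prefer an ultra-wide ARKit video format when one exists.
import ARKit

func configureUltraWideIfAvailable(session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()

    // supportedVideoFormats lists the formats ARKit can drive on this device;
    // each format reports which capture device backs it (iOS 14.5+).
    if let ultraWideFormat = ARWorldTrackingConfiguration.supportedVideoFormats
        .first(where: { $0.captureDeviceType == .builtInUltraWideCamera }) {
        configuration.videoFormat = ultraWideFormat
    }

    session.run(configuration)
}
```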
Reality Composer Timeline unfinished
The timeline editor often feels unfinished:
- Setting the time cursor to a different time is often not reflected in the preview; you either have to click a clip or wait.
- Sometimes the cursor even disappears, e.g. when switching tabs to the shader graph.
- Selecting and moving multiple clips at once is missing.
- There is no snapping to clips or to the time cursor, as found in other tools.
- And then there is the timeline compile bug: https://developer.apple.com/forums/thread/810868

The timeline as it is is a good start, but it definitely needs some more love to be on par with other commercial tools like Unity or After Effects.
Replies: 1 · Boosts: 1 · Views: 190 · Created: 3w ago
Technical Inquiry regarding iPhone LiDAR Specifications and ARKit Data Integrity
Hardware specifications: regarding the LiDAR scanner in the iPhone 13/14/15/16/17 Pro series, could you please provide the following technical details for academic verification?
- Point cloud density / resolution: the effective resolution of the depth map.
- Sampling frequency: the sensor's refresh rate.
- Accuracy metrics: official tolerance levels for depth accuracy relative to distance (specifically within the 0.5 m – 2 m range).

Data acquisition methodology: for a scientific thesis requiring high data integrity, does Apple recommend a custom ARKit implementation over third-party applications (e.g., Polycam) to access raw depth data? I need to confirm whether third-party apps typically apply smoothing or post-processing that would obscure the sensor's native performance, which must be avoided for my error analysis.
Replies: 2 · Boosts: 0 · Views: 369 · Created: 3w ago
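On the ARKit side, a minimal sketch of reading scene depth directly is shown below, as one way to avoid third-party post-processing. Note that .sceneDepth is itself already a processed depth map; the .smoothedSceneDepth semantic adds further temporal smoothing, so avoiding it keeps the data closer to per-frame sensor output.

```swift
// A minimal sketch of capturing per-frame LiDAR depth via ARKit.
import ARKit

final class DepthCaptureDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depth = frame.sceneDepth else { return }

        // depthMap is a CVPixelBuffer of Float32 depth values in meters;
        // confidenceMap rates each pixel (low/medium/high).
        let depthMap = depth.depthMap
        let width = CVPixelBufferGetWidth(depthMap)
        let height = CVPixelBufferGetHeight(depthMap)
        print("Depth frame \(width)x\(height) at \(frame.timestamp)")
    }
}

func startDepthSession(delegate: DepthCaptureDelegate) -> ARSession {
    let session = ARSession()
    session.delegate = delegate

    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        configuration.frameSemantics.insert(.sceneDepth)
    }
    session.run(configuration)
    return session
}
```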
onWorldRecenter memory leak and duplicate callbacks in ImmersiveSpace
Posting this here in case this information is helpful to other developers. As of visionOS 26.3 beta 1, onWorldRecenter has two significant issues (FB21557639):

- Memory leak: when onWorldRecenter is assigned to a RealityView within an ImmersiveSpace, it appears to retain a strong reference to the view's internal SwiftUI context. When the immersive space is dismissed, the view's @State objects are not deallocated. In addition, each time the immersive space view's body is executed, additional state storage is allocated and leaked.
- Multiple callbacks: when the user long-presses the Digital Crown, the onWorldRecenter closure is called multiple times, once for each past view body execution, including those of immersive space views that have since been dismissed.

Although these issues seem to be most prevalent when onWorldRecenter is used with an ImmersiveSpace, they may also occur in the context of a WindowGroup under certain circumstances.

It's possible to work around this problem by moving onWorldRecenter to an empty overlay view within the app's primary WindowGroup and forwarding the world-recenter events to ImmersiveSpace views through a notification system, coupled with a debouncer as an extra precaution.
Replies: 1 · Boosts: 1 · Views: 972 · Created: 3w ago
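A minimal sketch of the workaround described in the post above: onWorldRecenter lives on a tiny overlay view in the main WindowGroup and forwards the event via NotificationCenter. The notification name, the no-argument closure form of onWorldRecenter, and the debounce interval are illustrative assumptions, not part of the original report.

```swift
// A sketch of forwarding world-recenter events from a WindowGroup overlay.
import SwiftUI

extension Notification.Name {
    static let worldDidRecenter = Notification.Name("worldDidRecenter") // hypothetical name
}

struct RecenterForwardingOverlay: View {
    @State private var lastPost = Date.distantPast

    var body: some View {
        Color.clear
            .frame(width: 1, height: 1)
            .onWorldRecenter {
                // Debounce duplicate callbacks (see "Multiple callbacks" above).
                let now = Date()
                guard now.timeIntervalSince(lastPost) > 0.5 else { return }
                lastPost = now
                NotificationCenter.default.post(name: .worldDidRecenter, object: nil)
            }
    }
}

// In the immersive space view, listen instead of using onWorldRecenter directly:
// .onReceive(NotificationCenter.default.publisher(for: .worldDidRecenter)) { _ in ... }
```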
Auto Rig Pro Mixamo Animation to RC Pro
Hi! I have been struggling with this for a little while, and most of what I've found has not helped much. I hope to find more success here.

Essentially, I have a model I've made in Blender. I rigged it using Auto Rig Pro, and I've also used ARP to apply a Mixamo animation to it. That all works fine in Blender. However, when I import this model into Reality Composer Pro, I don't get the animation: the "default subtree animation" is completely empty. I attribute this to my lack of experience in this field, but here's what I've attempted so far:
- Pushing the Mixamo keyframes into an NLA strip. I'm fairly sure this is the correct line of action, but I'm clearly not doing something right.
- Baking the animation (?)
- Making sure that animation export is checked when I export the model.

Any ideas or reference projects would be lovely; I haven't found much that has pushed me in the right direction. This project is, unfortunately, somewhat time-sensitive, so I would appreciate help ASAP. Thank you, and let me know if I can add any more context!
Replies: 3 · Boosts: 0 · Views: 412 · Created: 4w ago
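A small diagnostic sketch, not a fix for the Blender export itself: loading the exported USD with RealityKit and counting availableAnimations can show whether the animation survived export at all, which separates a Blender/USD export problem from a Reality Composer Pro import problem. The asset name is a placeholder for the exported file in the app bundle.

```swift
// A diagnostic sketch: check whether the exported asset carries any animations.
import RealityKit

func countAnimations(in entity: Entity) -> Int {
    var total = entity.availableAnimations.count
    for child in entity.children {
        total += countAnimations(in: child)
    }
    return total
}

func inspectExportedAnimations() async {
    do {
        let entity = try await Entity(named: "RiggedCharacter") // placeholder asset name
        print("Animations found in hierarchy:", countAnimations(in: entity))
    } catch {
        print("Failed to load asset:", error)
    }
}
```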