Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post · Replies · Boosts · Views · Activity

SpatialEventGesture Not Working to Show Hidden Menu in Immersive Panorama View - visionOS
Problem Description

I'm developing a Vision Pro app that displays 360° panoramic photos in a full immersive space. I have a floating menu that auto-hides after 5 seconds, and I want users to be able to show the menu again using spatial gestures (particularly pinch gestures) when it's hidden. However, the SpatialEventGesture implementation is not working as expected: the menu doesn't appear when users perform pinch gestures or other spatial interactions in the immersive space.

Current Implementation

Here's the relevant gesture-detection code in my ImmersiveView:

```swift
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @EnvironmentObject var appModel: AppModel
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        RealityView { content in
            // RealityView content setup with panoramic sphere...
            let rootEntity = Entity()
            content.add(rootEntity)
            // Load panoramic content here...
        }
        // Using SpatialEventGesture to handle multiple spatial gestures
        .gesture(
            SpatialEventGesture()
                .onEnded { eventCollection in
                    // Check menu visibility state
                    if !appModel.isPanoramaMenuVisible {
                        // Iterate through the event collection to handle various gestures
                        for event in eventCollection {
                            switch event.kind {
                            case .touch:
                                print("Detected spatial touch gesture, showing menu")
                                showMenuWithGesture()
                                return
                            case .indirectPinch:
                                print("Detected spatial pinch gesture, showing menu")
                                showMenuWithGesture()
                                return
                            case .pointer:
                                print("Detected spatial pointer gesture, showing menu")
                                showMenuWithGesture()
                                return
                            @unknown default:
                                print("Detected unknown spatial gesture: \(event.kind)")
                                showMenuWithGesture()
                                return
                            }
                        }
                    }
                }
        )
        // Keep long press gesture as backup
        .simultaneousGesture(
            LongPressGesture(minimumDuration: 1.5)
                .onEnded { _ in
                    if !appModel.isPanoramaMenuVisible {
                        print("Detected long press gesture, showing menu")
                        showMenuWithGesture()
                    }
                }
        )
    }

    private func showMenuWithGesture() {
        if !appModel.isPanoramaMenuVisible {
            appModel.showPanoramaMenu()
            if !appModel.windowExists(id: "PanoramaMenu") {
                openWindow(id: "PanoramaMenu", value: "menu")
            }
        }
    }
}
```

What I've Tried

- Multiple SpatialTapGesture approaches: originally tried using multiple .gesture() modifiers with SpatialTapGesture(count: 1) and SpatialTapGesture(count: 2), but realized they override each other.
- SpatialEventGesture implementation: switched to SpatialEventGesture to handle multiple event types (.touch, .indirectPinch, .pointer), but pinch gestures still don't trigger the menu.
- Added debugging: console logs show that the gesture callbacks are never called when performing pinch gestures in the immersive space.
- Backup LongPressGesture: added a simultaneous long press gesture as a backup, which also doesn't work consistently.

Expected Behavior

When the panorama menu is hidden (after the 5-second auto-hide), users should be able to:

- Perform a pinch gesture (indirect pinch) to show the menu
- Tap in space to show the menu
- Use other spatial gestures to show the menu

Questions

1. Is SpatialEventGesture the correct approach for detecting gestures in a full immersive RealityView?
2. Are there any special considerations for gesture detection when the RealityView contains a large panoramic sphere that might be intercepting gestures?
3. Should I be using a different gesture approach for visionOS immersive spaces?
4. Is there a way to ensure gestures work even when the RealityView content (the panoramic sphere) might be blocking them?

Environment

- Xcode 16.1
- visionOS 2.5
- Testing on a Vision Pro device
- App uses SwiftUI + RealityKit

Any guidance on the proper way to implement spatial gesture detection in visionOS immersive spaces would be greatly appreciated!

Additional Context

The app manages multiple windows, and gesture detection should work specifically in the immersive panorama mode with the menu hidden. Thank you for any help or suggestions!
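One plausible culprit worth ruling out (a sketch, not a confirmed answer): gestures attached to a RealityView only fire when the spatial event hit-tests an entity that accepts input, and a bare panorama sphere usually doesn't. Giving the sphere an InputTargetComponent plus a CollisionComponent makes it a valid gesture target; the helper below assumes the sphere entity is accessible wherever the content is loaded.

```swift
import RealityKit

// Make the panorama sphere a valid target for spatial gestures. Without an
// InputTargetComponent plus a CollisionComponent, gesture hit tests pass
// straight through the entity and the gesture closures never run.
func makeHitTestable(_ panoramaSphere: Entity, radius: Float) {
    panoramaSphere.components.set(InputTargetComponent())
    panoramaSphere.components.set(
        CollisionComponent(shapes: [.generateSphere(radius: radius)])
    )
}
```

With those components in place, targeted variants such as SpatialTapGesture().targetedToAnyEntity() also become options to try.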
1
0
196
Jun ’25
ARKit camera transform orientation vector doesn't match physical device heading (despite `.gravityAndHeading`)
Hi all,

I'm working on an ARKit-based iOS app where I need to accurately determine the direction the device is facing in order to localize objects in the real world. I'm using:

```swift
let config = ARWorldTrackingConfiguration()
config.worldAlignment = .gravityAndHeading
```

Thus, I would expect the world alignment to behave as described on the gravityAndHeading page. The AR session is started after verifying that CLLocationManager.headingAccuracy <= 20, and the compass appears to be calibrated. However, I'm seeing a major inconsistency.

When the rear camera is physically pointed toward true North, I would expect:

cameraTransform.columns.2.z ≈ -1   // i.e. ARKit's -Z pointing North

But instead, I'm consistently seeing:

cameraTransform.columns.2.z ≈ +0.97   // implies the camera is facing South

Meanwhile, the translation vector behaves as expected: as I physically move North, cameraTransform.columns.3.z becomes more negative, matching the world's +Z = South convention.

For example, say the device is in landscapeRight (or landscapeLeft for UIDeviceOrientation), the rear camera is pointing toward true North, and I start moving toward true North. I get something like this:

```
Camera Transform = simd_float4x4([
    [ 0.98446155,  -0.030119859,  0.172998,    0.0],
    [ 0.023979114,  0.9990097,    0.037477385, 0.0],
    [-0.17395553,  -0.032746706,  0.98420894,  0.0],
    [ 0.024039675, -0.037087332, -0.22780673,  0.99999994]
])
```

As you can see, cameraTransform.columns.2.z is positive despite the rear camera pointing toward true North, while cameraTransform.columns.3.z correctly becomes more negative as the device moves toward true North.

So here is my question: why is cameraTransform.columns.2.z positive when the rear camera is physically facing North? I've read the documentation and tested with different heading accuracies and AR session resets, but I keep running into this orientation mismatch. Any clarity would be deeply appreciated. Thanks in advance!
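A worked reading of these numbers, offered as a sketch of the likely resolution rather than a confirmed answer: columns.2 is the camera's local +Z axis expressed in world coordinates, and per the ARCamera documentation that axis points away from the device on the screen side, i.e. opposite the rear camera's viewing direction. Under .gravityAndHeading (+Z = South), a rear camera aimed at true North should therefore report columns.2.z ≈ +1, exactly what the post observes. The rear camera's viewing direction is the negated column:

```swift
import simd

// The rear camera's viewing direction is the negative of the camera's local
// +Z axis, which is what columns.2 stores in world coordinates.
func rearCameraViewDirection(_ cameraTransform: simd_float4x4) -> simd_float3 {
    let zAxis = simd_make_float3(cameraTransform.columns.2)
    return -zAxis
}

// With the transform from the post: -(-0.17, -0.03, 0.98) ≈ (0.17, 0.03, -0.98),
// i.e. a view direction with z ≈ -1, which is North under .gravityAndHeading.
```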
1
0
143
Jun ’25
FromToByAnimation triggers availableAnimations not the single bone animation
So, I was trying to animate a single bone using FromToByAnimation, but when I start the animation, the model instead plays the full-body animation stored in availableAnimations. If I don't run testAnimation, nothing happens. If I run testAnimation, I see the same animation as if I had called entity.playAnimation(entity.availableAnimations[0], ...).

Here's the full code I use to animate a single bone:

```swift
func testAnimation() {
    guard let jawAnim = jawAnimation(mouthOpen: 0.4) else {
        print("Failed to create jawAnim")
        return
    }
    guard let creature, let animResource = try? AnimationResource.generate(with: jawAnim) else { return }
    let controller = creature.playAnimation(animResource, transitionDuration: 0.02, startsPaused: false)
    print("controller: \(controller)")
}

func jawAnimation(mouthOpen: Float) -> FromToByAnimation<JointTransforms>? {
    guard let basePose else { return nil }
    guard let index = basePose.jointNames.firstIndex(of: jawBoneName) else {
        print("Target joint \(self.jawBoneName) not found in default pose joint names")
        return nil
    }

    let fromTransforms = basePose.jointTransforms
    let baseJawTransform = fromTransforms[index]
    let maxAngle: Float = 40
    let angle: Float = maxAngle * mouthOpen * (.pi / 180)
    let extraRot = simd_quatf(angle: angle, axis: simd_float3(x: 0, y: 0, z: 1))

    var toTransforms = basePose.jointTransforms
    toTransforms[index] = Transform(
        scale: baseJawTransform.scale * 2,
        rotation: baseJawTransform.rotation * extraRot,
        translation: baseJawTransform.translation
    )

    let fromToBy = FromToByAnimation<JointTransforms>(
        jointNames: basePose.jointNames,
        name: "jaw-anim",
        from: fromTransforms,
        to: toTransforms,
        duration: 0.1,
        bindTarget: .jointTransforms,
        repeatMode: .none
    )
    return fromToBy
}
```

PS: I can confirm that I can set this bone to a specific position if I use:

```swift
guard let index = newPose.jointNames.firstIndex(of: boneName) ...
let baseTransform = basePose.jointTransforms[index]
newPose.jointTransforms[index] = Transform(
    scale: baseTransform.scale,
    rotation: baseTransform.rotation * extraRot,
    translation: baseTransform.translation
)
skeletalComponent.poses.default = newPose
creatureMeshEntity.components.set(skeletalComponent)
```

This works for manually setting the bone position, so jawBoneName and the joint transformation can't be that wrong.
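One direction worth testing, offered as a hedged sketch rather than a verified fix: FromToByAnimation exposes isAdditive and blendLayer parameters for layering a clip relative to others, and since the jaw animation here binds every joint on the default layer, it may simply be competing with the full-body clip. Whether a higher blend layer resolves this specific conflict is an assumption to test; the construction mirrors jawAnimation above and reuses its local variables.

```swift
// Hedged variant of the construction in jawAnimation above (not a verified
// fix): blendLayer is a standard FromToByAnimation parameter for layering a
// clip relative to other playing animations.
let fromToBy = FromToByAnimation<JointTransforms>(
    jointNames: basePose.jointNames,
    name: "jaw-anim-layered",
    from: fromTransforms,
    to: toTransforms,
    duration: 0.1,
    bindTarget: .jointTransforms,
    blendLayer: 100   // layer above the full-body clip on layer 0
)
```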
1
0
318
Aug ’25
how to achieve "concave in" glass view look?
I have been trying to implement a look where a component appears "pushed in," but I could not find any resources regarding this effect. The closest I got was a combination of a RoundedRectangle and .glassBackgroundEffect(), but this makes the view look pushed out instead of pushed in. I was wondering if this is achievable at the SwiftUI level, or even at the UIKit level.
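For what it's worth, a minimal sketch of one common way to fake an inset look in plain SwiftUI, using an inner shadow on the shape's fill (ShapeStyle.shadow(.inner(...)), available since iOS 16 / visionOS 1). This illustrates the general technique on a flat color, not a glass-material API; the sizes and opacities are arbitrary.

```swift
import SwiftUI

// A "pushed in" look approximated with inner shadows: a dark inner shadow
// falling from the top edge reads as a recessed surface, the opposite of a
// raised, extruded one; a faint light edge at the bottom reinforces the depth.
struct ConcaveTile: View {
    var body: some View {
        RoundedRectangle(cornerRadius: 16)
            .fill(
                Color(.sRGB, white: 0.85, opacity: 1)
                    .shadow(.inner(color: .black.opacity(0.35), radius: 6, x: 0, y: 3))
                    .shadow(.inner(color: .white.opacity(0.6), radius: 1, x: 0, y: -1))
            )
            .frame(width: 200, height: 120)
    }
}
```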
1
0
135
Apr ’25
window loses resize feature on reopening
I have this problem on visionOS. When I dismiss and reopen a window from an ImagePresentationComponent, the window is missing the resize UI elements when I look at the window corners. The rest of the window UI elements (drag, close, ...) are there, and resizing was possible before the window was dismissed. The code is something like this:

```swift
WindowGroup(id: "image-display-window", .....
}
.windowResizability(.automatic)
.windowStyle(.plain)
```

I call dismissWindow() from the window's view and it is dismissed correctly. Then I call openWindow(id: "image-display-window", value: data) from another view to reopen it. It reopens, but it is missing the ability to resize. Does anyone know how to fix this? Thanks.
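Not a confirmed fix, but one thing worth testing: window resizability is derived from the content's size constraints, so pinning an explicit min/max frame on the content and switching to .contentSize gives the system a concrete resize range every time the window is reopened, independent of whatever sizing state it restored. A sketch with hypothetical type and view names:

```swift
import SwiftUI

struct ImageData: Codable, Hashable { var url: URL }   // hypothetical payload type

@main
struct ImageApp: App {
    var body: some Scene {
        WindowGroup(id: "image-display-window", for: ImageData.self) { $data in
            // Placeholder for the real image view; the explicit min/max bounds
            // define the window's allowed resize range on every (re)open.
            Text(data?.url.absoluteString ?? "No image")
                .frame(minWidth: 400, maxWidth: 1600,
                       minHeight: 300, maxHeight: 1200)
        }
        .windowResizability(.contentSize)
        .windowStyle(.plain)
    }
}
```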
1
0
272
Dec ’25
How to mix Animation and IKRig in RealityKit
I want an AR character to be able to look at a position while still playing the character's animation. So far, I've managed to manually adjust a single bone rotation using

```swift
skeletalComponent.poses.default = Transform(
    scale: baseTransform.scale,
    rotation: lookAtRotation,
    translation: baseTransform.translation
)
```

which I run at every rendering update while a full-body animation is running. But of course, hardcoding single joints to point in a direction (in my case the head) does not look as nice as if I ran some inverse kinematics that includes the hips + neck + head joints. I found some good IKRig code in "Composing interactive 3D content with RealityKit and Reality Composer Pro." But when I try to adjust rigs while animations are playing, the animations usually win over the IKRig changes to the mesh.
1
0
536
Aug ’25
Possible contradiction between ARKit's definition of UIDeviceOrientation.landscapeRight and the actual definition of UIDeviceOrientation.landscapeRight
It seems that there is a contradiction between how ARKit defines UIDeviceOrientation.landscapeRight and the actual definition of UIDeviceOrientation.landscapeRight in the UIKit documentation.

The ARKit documentation for ARCamera.transform says the following:

"This transform creates a local coordinate space for the camera that is constant with respect to device orientation. In camera space, the x-axis points to the right when the device is in UIDeviceOrientation.landscapeRight orientation—that is, the x-axis always points along the long axis of the device, from the front-facing camera toward the Home button. The y-axis points upward (with respect to UIDeviceOrientation.landscapeRight orientation), and the z-axis points away from the device on the screen side."

Following the same link, the definition of UIDeviceOrientation.landscapeRight is given as:

"The device is in landscape mode, with the device held upright and the front-facing camera on the right side."

There is a conflict between the two definitions, which has already been asked about and visualized in this StackOverflow thread. The accepted answer there says that ARKit's landscapeRight, unlike UIDeviceOrientation.landscapeRight, has the Home button on the right, as stated in the ARCamera.transform documentation. It points to another StackOverflow thread for more details, but that thread discusses the discrepancy between the definitions of landscapeRight in UIDeviceOrientation and UIInterfaceOrientation, not anything related to ARKit.

So I am wondering: why does ARKit's definition of landscapeRight contradict that of UIDeviceOrientation despite explicitly referencing it? Is it just a mistake by Apple's developers that hasn't been resolved even after so long?
1
0
110
Jun ’25
iOS needs to allow for background bluetooth scanning. I can't fully build my app.
iOS currently restricts background Bluetooth advertising and scanning in order to preserve battery life and protect user privacy. While these restrictions serve important purposes, they also limit legitimate use cases where users have explicitly opted in to proximity-based experiences.

The core challenge is that modern social applications need a way to detect when users are physically present at the same location or event without requiring every participant to keep their app in the foreground. Under the current system, background BLE advertising is heavily throttled and can only transmit a limited payload, background scanning intervals are sparse and unpredictable, peer-to-peer proximity detection cannot be maintained reliably when apps are in the background, and Background App Refresh is non-deterministic, making any kind of time-based proximity validation impossible.

A proposed enhancement would be to introduce an "Enhanced Proximity Permission." This would allow developers to enable reliable background BLE advertising and scanning for declared time windows, such as a maximum of eight hours. It would also allow devices running the same app to detect each other's proximity using ephemeral, rotating identifiers that preserve privacy, with clear user consent and prominent indicators whenever the feature is active.

Unlocking this capability would open up new categories of applications. Live events could offer automatic attendance tracking at concerts, conferences, or sports venues. Retail environments could support opt-in foot traffic analysis and dwell-time insights. Social apps could allow users to find friends at festivals, campuses, or other large venues. Safety applications could extend to crowd density monitoring and contact tracing beyond COVID-era needs. Gaming could offer real-world multiplayer experiences based on physical proximity, and transportation providers could verify rideshare pickups or measure public transit flows automatically.

Privacy safeguards would remain central. Permissions would be time-boxed and expire after an event or session. A mandatory visual indicator would be displayed whenever proximity tracking is active. A user-facing dashboard would show all apps granted enhanced proximity access. Permissions would automatically be revoked after a period of non-use, and only ephemeral tokens, not permanent identifiers, would be broadcast.

The industry impact would be significant. With this enhancement, iOS could power the next generation of location-aware social platforms while maintaining Apple's leadership in privacy through explicit user control and transparency. Current alternatives, such as requiring users to keep apps in the foreground or deploying dedicated hardware beacons, produce poor user experiences and constrain innovation in spatial computing and social applications.

Can anyone from Apple consider this change? Having to buy iBeacons is brutal and means slower adoption. Please reconsider this for users who opt in.
1
0
1.1k
Sep ’25
Do you retain a reference to your content events in RealityView?
Do you retain a reference to your content (RealityViewContent) events? For example, the Manipulation Events docs from Apple use _ to discard the result. In theory the subscription should keep working while the content is alive.

```swift
_ = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .blue, isMetallic: false)
}
_ = content.subscribe(to: ManipulationEvents.WillEnd.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .red, isMetallic: false)
}
```

We could instead store these subscriptions in state; I've seen this in a few samples and apps.

```swift
@State var beginSubscription: EventSubscription?
...
beginSubscription = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
    event.entity.components[ModelComponent.self]?.materials[0] = SimpleMaterial(color: .blue, isMetallic: false)
}
```

The main advantage I see is that we can be more explicit about when we remove the subscription. Are there other reasons to keep a reference to these events?
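As a hedged illustration of that advantage: EventSubscription exposes cancel(), so a retained reference lets you stop receiving events deterministically, for example when the view disappears or a mode changes, instead of waiting for the content to go away. A minimal sketch:

```swift
import SwiftUI
import RealityKit

struct ManipulationColorView: View {
    @State private var beginSubscription: EventSubscription?

    var body: some View {
        RealityView { content in
            // Retain the subscription so it can be cancelled explicitly later.
            beginSubscription = content.subscribe(to: ManipulationEvents.WillBegin.self) { event in
                event.entity.components[ModelComponent.self]?.materials[0] =
                    SimpleMaterial(color: .blue, isMetallic: false)
            }
        }
        .onDisappear {
            // Deterministic teardown instead of relying on content lifetime.
            beginSubscription?.cancel()
            beginSubscription = nil
        }
    }
}
```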
1
0
593
Sep ’25
Unable to Retain Main App Window State When Transitioning to Immersive Space
In my visionOS app, I have two types of windows:

- Main App Window – the default window that launches when the app starts. It displays the video listings and other primary content.
- Immersive Space Window – opens only when a user starts streaming or playing a video.

Issue: when entering the immersive space, the main app window remains visible in front of it unless manually closed. To avoid this, I currently close the main window when transitioning to the immersive space and reopen it when exiting. However, this causes the app to restart instead of resuming from its previous state.

Desired behavior: I want the main app window to retain its state and seamlessly resume from where it was before entering immersive mode, rather than restarting.

Attempts & challenges:

- Tried managing opacity and visibility, but neither worked as expected.
- Couldn't find a way to push the main window to the background while bringing the immersive space to the foreground.

Looking for a solution to keep the main window's state intact while transitioning between immersive and normal modes.
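One hedged workaround, under the assumption that the "restart" is really the window's view state being recreated on reopen: move the restorable state out of the window's view tree and into an app-scoped observable model. Closing and reopening the WindowGroup then rebuilds the views, but they rehydrate from the surviving model. A sketch with hypothetical names:

```swift
import SwiftUI
import RealityKit

// Hypothetical app-scoped model: it survives the main window being closed
// because the App owns it, not the window's view tree.
@Observable
final class MainWindowState {
    var selectedVideoID: String?
}

@main
struct VideoApp: App {
    @State private var mainState = MainWindowState()

    var body: some Scene {
        WindowGroup(id: "main") {
            // Placeholder for the real listing view; it reads and writes
            // mainState, so reopening the window restores the selection.
            Text(mainState.selectedVideoID ?? "Nothing selected")
                .environment(mainState)
        }

        ImmersiveSpace(id: "player") {
            RealityView { _ in /* immersive playback content */ }
        }
    }
}
```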
1
0
94
Mar ’25
ARCoachingOverlayView replacement for RealityView
I thought the ARCoachingOverlayView was a nice touch, so each app's ARKit coaching was recognizable, and I used it in my ARView/ARSCNView-based apps. Now with RealityView, is there any replacement planned? Or should we just use UIViewRepresentable and wrap ARCoachingOverlayView?
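I'm not aware of a RealityView-native equivalent; in the meantime, here is a minimal sketch of the UIViewRepresentable route. How you obtain the underlying ARSession from a RealityView is the open question, so the session is passed in as an assumption.

```swift
import SwiftUI
import ARKit

// Minimal UIViewRepresentable wrapper around ARCoachingOverlayView.
// Assumes you have access to the ARSession driving the scene, which is the
// part RealityView does not expose directly.
struct CoachingOverlay: UIViewRepresentable {
    let session: ARSession
    var goal: ARCoachingOverlayView.Goal = .horizontalPlane

    func makeUIView(context: Context) -> ARCoachingOverlayView {
        let overlay = ARCoachingOverlayView()
        overlay.session = session
        overlay.goal = goal
        overlay.activatesAutomatically = true  // shows/hides itself with tracking quality
        return overlay
    }

    func updateUIView(_ uiView: ARCoachingOverlayView, context: Context) {
        uiView.goal = goal
    }
}
```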
1
0
535
Sep ’25
Can't build old project on Xcode 26 beta5
I have a visionOS 2 project created in Xcode 16. After updating to Xcode 26 beta 5, I can't build it any more; every time, the build gets stuck, as the picture below shows. I've already tried many ways to fix this issue, such as clearing the build folder, but nothing works. MacBook Air M2 / macOS 26 beta 5 / Xcode 26 beta 5
1
0
223
Aug ’25
Can't observe Entity's orientation properties in Slider
On Xcode 26 and visionOS 26, Apple provides observable properties for Entity, so we can easily interact with an Entity between a RealityKit scene and SwiftUI. But there is an issue: it's fine to observe an Entity's position and scale properties in a Slider, but the orientation property can't be observed in a Slider. MacBook Air M2 / Xcode 26 beta 6
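This may be expected rather than a bug: Slider binds to a scalar BinaryFloatingPoint value, while orientation is a simd_quatf, so it needs a scalar projection before a Slider can drive or observe it. A hedged sketch that exposes the rotation angle about a fixed axis as a derived Binding:

```swift
import SwiftUI
import RealityKit

// A Slider can't bind to simd_quatf directly, so project the orientation onto
// a single rotation axis and bind to that angle instead.
struct YawSlider: View {
    let entity: Entity

    var body: some View {
        Slider(
            value: Binding(
                get: {
                    // Current rotation angle (assumes rotation about Y only).
                    Double(entity.orientation.angle)
                },
                set: { newAngle in
                    entity.orientation = simd_quatf(angle: Float(newAngle),
                                                    axis: [0, 1, 0])
                }
            ),
            in: 0...(2 * .pi)
        )
    }
}
```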
1
0
217
Aug ’25
.hoverEffect scaleEffect breaks scrolling in visionOS 26 beta 1
This modifier in visionOS 2.5 works perfectly with a LazyVGrid inside a Stack in a ScrollView:

```swift
.hoverEffect { effect, isActive, _ in
    effect.scaleEffect(isActive ? 1.1 : 1.0)
}
```

But the grid does not scroll in visionOS 26 beta 1 unless the scaleEffect is commented out. FB17941468
1
0
117
Jun ’25
How to break the progress of Entity.animate()
Entity.animate() makes entity animation much easier, but in many cases I want to interrupt the animation in response to a gesture. I couldn't find any way to do this (I even tried entity.stopAllAnimations()), so I have to wait until Entity.animate() completes. iOS 26 / visionOS 26
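A hedged alternative while there's no obvious way to cancel Entity.animate(): drive the transform change through the older move(to:relativeTo:duration:) API, which returns an AnimationPlaybackController that can be stopped from a gesture handler. A sketch:

```swift
import RealityKit

// Keep the controller returned by move(to:) so the animation can be
// interrupted later, e.g. from a gesture callback.
final class EntityMover {
    private var controller: AnimationPlaybackController?

    func startMove(of entity: Entity, to target: Transform) {
        controller = entity.move(to: target, relativeTo: entity.parent, duration: 2.0)
    }

    func interrupt() {
        // Stops playback of the move animation mid-flight.
        controller?.stop()
        controller = nil
    }
}
```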
1
0
365
Sep ’25
Does RealityKit support mesh based animation?
I have a mesh-based animation for a 3D model; that is, every frame has a new mesh. I import it into a RealityView, but I can't play its animation: RealityKit tells me the model has no animations when I call print(entity.availableAnimations).
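As far as I know, RealityKit's animation import covers skeletal and transform animations, not per-frame mesh caches, which would explain the empty availableAnimations. A hedged workaround sketch: keep one MeshResource per frame (hypothetical `frames` array, built at import time) and swap the ModelComponent's mesh from a custom System.

```swift
import RealityKit

// Assumed setup: `frames` holds one pre-built MeshResource per animation frame.
struct MeshFlipbookComponent: Component {
    var frames: [MeshResource]
    var frameRate: Double = 24
    var elapsed: TimeInterval = 0
}

struct MeshFlipbookSystem: System {
    static let query = EntityQuery(where: .has(MeshFlipbookComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard var flipbook = entity.components[MeshFlipbookComponent.self],
                  var model = entity.components[ModelComponent.self],
                  !flipbook.frames.isEmpty else { continue }

            flipbook.elapsed += context.deltaTime
            let frameIndex = Int(flipbook.elapsed * flipbook.frameRate) % flipbook.frames.count
            model.mesh = flipbook.frames[frameIndex]   // swap in this frame's mesh

            entity.components.set(model)
            entity.components.set(flipbook)
        }
    }
}
```

Both types need registering at launch (MeshFlipbookComponent.registerComponent(), MeshFlipbookSystem.registerSystem()). If the DCC export can be converted to blend shapes instead, BlendShapeWeightsComponent may be the more efficient route.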
1
0
674
Aug ’25
RotationSystem and RotationComponent API Updates for visionOS 26 Beta
Are there any changes to RotationSystem: System and RotationComponent: Component in the visionOS 26 beta that I should be aware of, so I can tell whether I need to update how I use them in my visionOS app?
1
0
62
Jun ’25
View Immersive/Stereoscopic Images in Immersive Space
Using Quick Look exits you from both your app and the Immersive Space. Is there a way to view immersive images within an Immersive Space instead?
1
0
44
Jun ’25
Is it possible to create immersive video totally from virtual scene
Hi guys, I'm working in the VFX industry, and I have a question: is it possible to create immersive video directly from a virtual scene built in DCC software like Maya, render it to footage, encode that into immersive video, and finally play it on Vision Pro? Thanks.
1
1
730
Sep ’25
How to export an entity to USD?
I want to record an animation with an entity and then export it to .usd without using Reality Composer Pro. How can I achieve that?
1
0
77
Apr ’25