Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

All subtopics
Posts under Spatial Computing topic. Each post below lists its replies, boosts, views, and latest activity.

visionOS: Unable to programmatically close child WindowGroup when parent window closes
Hi, I'm struggling with visionOS window management and need help with closing child windows programmatically.

App Structure
My app has a main/sub window hierarchy:
- AWindow (Home/Main)
- BWindow (Main feature window)
- CWindow (Tool window, child of BWindow)

Navigation flow:
- AWindow → BWindow (switch, 1 window on screen)
- BWindow → CWindow (opens child, 2 windows on screen)

I want BWindow and CWindow to be separate movable windows (not a sheet/popover) so users can position them independently in space.

The Problem
- CWindow doesn't close when BWindow is closed by tapping the X button below the app (next to the window bar).
- User clicks X on BWindow → BWindow closes but CWindow remains; CWindow becomes orphaned on screen.
- Can close CWindow programmatically when switching BWindow back to AWindow.

App launch issue
- After closing both windows, CWindow is remembered as the last window.
- Reopening the app shows only CWindow instead of BWindow.
- The user gets stuck in CWindow with no way back to BWindow.

What I've Tried
Calling the environment dismissWindow in cleanup, but it's not working:

```swift
// In BWindow.swift
.onDisappear {
    if windowManager.isWindowOpen("cWindow") {
        dismissWindow(id: "cWindow")
    }
}
```

My App Structure Code Now

```swift
// in MyNameApp.swift
@main
struct MyNameApp: App {
    var body: some Scene {
        WindowGroup(id: "aWindow") { AWindow() }
        WindowGroup(id: "bWindow") { BWindow() }
        WindowGroup(id: "cWindow") { CWindow() }
    }
}

// WindowStateManager.swift
class WindowStateManager: ObservableObject {
    static let shared = WindowStateManager()
    @Published private var openWindows: Set<String> = []
    @Published private var windowDependencies: [String: String] = [:]

    private init() {}

    func markWindowAsOpen(_ id: String) {
        markWindowAsOpen(id, parent: nil)
    }

    func markWindowAsClosed(_ id: String) {
        openWindows.remove(id)
        windowDependencies[id] = nil
    }

    func isWindowOpen(_ id: String) -> Bool {
        let isOpen = openWindows.contains(id)
        return isOpen
    }

    func markWindowAsOpen(_ id: String, parent: String? = nil) {
        openWindows.insert(id)
        if let parentId = parent {
            windowDependencies[id] = parentId
        }
    }

    func getParentWindow(of childId: String) -> String? {
        let parent = windowDependencies[childId]
        return parent
    }

    func getChildWindows(of parentId: String) -> [String] {
        let children = windowDependencies.compactMap { key, value in
            value == parentId ? key : nil
        }
        return children
    }

    func setNextWindowParent(_ parentId: String) {
        UserDefaults.standard.set(parentId, forKey: "nextWindowParent")
    }

    func getAndClearNextWindowParent() -> String? {
        let parent = UserDefaults.standard.string(forKey: "nextWindowParent")
        UserDefaults.standard.removeObject(forKey: "nextWindowParent")
        return parent
    }

    func forceCloseChildWindows(of parentId: String) {
        let children = getChildWindows(of: parentId)
        for child in children {
            markWindowAsClosed(child)
            NotificationCenter.default.post(
                name: Notification.Name("ForceCloseWindow"),
                object: nil,
                userInfo: ["windowId": child]
            )
            forceCloseChildWindows(of: child)
        }
    }

    func hasMainWindowOpen() -> Bool {
        let mainWindows = ["main", "bWindow"]
        return mainWindows.contains { isWindowOpen($0) }
    }

    func cleanupOrphanWindows() {
        for (child, parent) in windowDependencies {
            if isWindowOpen(child) && !isWindowOpen(parent) {
                NotificationCenter.default.post(
                    name: Notification.Name("ForceCloseWindow"),
                    object: nil,
                    userInfo: ["windowId": child]
                )
                markWindowAsClosed(child)
            }
        }
    }
}

// BWindow.swift
struct BWindow: View {
    @Environment(\.dismissWindow) private var dismissWindow
    @ObservedObject private var windowManager = WindowStateManager.shared

    var body: some View {
        VStack {
            Button("Open C Window") {
                windowManager.setNextWindowParent("bWindow")
                openWindow(id: "cWindow")
            }
        }
        .onAppear {
            windowManager.markWindowAsOpen("bWindow")
        }
        .onDisappear {
            windowManager.markWindowAsClosed("bWindow")
            windowManager.forceCloseChildWindows(of: "bWindow")
        }
        .onChange(of: scenePhase) { oldValue, newValue in
            if newValue == .background || newValue == .inactive {
                windowManager.forceCloseChildWindows(of: "bWindow")
            }
        }
    }
}

// CWindow.swift
import SwiftUI

struct cWindow: View {
    @ObservedObject private var windowManager = WindowStateManager.shared
    @State private var shouldClose = false

    var body: some View {
        // Content
    }
    .onDisappear {
        windowManager.markWindowAsClosed("cWindow")
        NotificationCenter.default.removeObserver(
            self,
            name: Notification.Name("ForceCloseWindow"),
            object: nil
        )
    }
    .onChange(of: scenePhase) { oldValue, newValue in
        if newValue == .background {
        }
    }
    .onAppear {
        let parent = windowManager.getAndClearNextWindowParent()
        windowManager.markWindowAsOpen("cWindow", parent: parent)
        NotificationCenter.default.addObserver(
            forName: Notification.Name("ForceCloseWindow"),
            object: nil,
            queue: .main
        ) { notification in
            if let windowId = notification.userInfo?["windowId"] as? String,
               windowId == "cWindow" {
                shouldClose = true
            }
        }
    }
    .onChange(of: shouldClose) { _, newValue in
        if newValue {
            dismissWindow()
        }
    }
}
```

The logs show everything executes correctly, but CWindow remains visible on screen.

Questions
1. Why doesn't dismissWindow(id:) work in cleanup scenarios?
2. Is there a proper way to create window relationships, like parent-child relationships, in visionOS?
3. How can I ensure main windows open on app launch instead of tool windows?
4. What's the recommended pattern for dependent windows in visionOS?

Environment: Xcode 16.2, visionOS 2.0, SwiftUI
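Not an official answer, but here is a minimal sketch of one direction to try: the restorationBehavior and defaultLaunchBehavior scene modifiers (available since visionOS 2.0) keep the tool window out of state restoration and out of the default launch, and the child is dismissed from an explicit action while the parent's environment is still alive rather than from onDisappear. The window IDs and view names mirror the question, the app type name here is hypothetical, and whether this also covers the case where the user closes BWindow with the X button would need testing:

```swift
import SwiftUI

struct AWindow: View { var body: some View { Text("A") } }
struct CWindow: View { var body: some View { Text("C") } }

@main
struct WindowHierarchySketchApp: App {
    var body: some Scene {
        WindowGroup(id: "aWindow") { AWindow() }

        WindowGroup(id: "bWindow") { BWindow() }

        // Tool window: never restore it at launch and don't present it by default,
        // so reopening the app starts from the main windows instead of the orphan.
        WindowGroup(id: "cWindow") { CWindow() }
            .restorationBehavior(.disabled)       // visionOS 2.0+ scene modifiers
            .defaultLaunchBehavior(.suppressed)
    }
}

struct BWindow: View {
    @Environment(\.openWindow) private var openWindow
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        VStack(spacing: 12) {
            Button("Open C Window") {
                openWindow(id: "cWindow")
            }
            // Dismiss the child from an explicit user action while this window's
            // environment is still alive, rather than from onDisappear.
            Button("Back to A Window") {
                dismissWindow(id: "cWindow")
                openWindow(id: "aWindow")
                dismissWindow(id: "bWindow")
            }
        }
        .padding()
    }
}
```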
Replies: 2 · Boosts: 0 · Views: 343 · Activity: Aug ’25
RoomCaptureSession with ARSCNView crashes when scanning multiple hotspots across different rooms
We're developing an iOS application that integrates RoomCaptureSession with ARSCNView for room scanning. Our implementation differs from the standard RoomCaptureView because we need custom UI guidance with 3D dots placed in the scanning environment to guide users through the capture process.

Bug Description
The application crashes when users attempt to scan multiple rooms or apartments in sequence. The crash occurs with the following pattern:
- User successfully scans the first room with multiple hotspots (working correctly)
- User stops scanning and moves to a new room
- In the new room, the first 1-2 hotspots work correctly
- The application crashes when attempting to scan additional hotspots

Technical Details
- Error: SLAM Anchor assertion failure in SlamAnchor.cpp:37 : HasValidPose()
- Crash occurs in Thread 27 with CAPIDetectionOutputFwdNode
- Error suggests invalid positioning when placing AR anchors

Steps to Reproduce
1. Start room scan
2. Complete multiple hotspot captures in the first room
3. Stop scanning
4. Start a new room scan
5. Capture 1-2 hotspots successfully
6. Attempt additional hotspot captures -> crashes

Attempted Solutions
- Implemented anchor cleanup between sessions
- Added position validation before anchor placement
- Implemented ARSession error handling
- Added proper thread management for AR operations

Environment
- Device: iPhone 14 Pro (LiDAR equipped)
- iOS Version: 18.1.1 (22B91)
- Testing through TestFlight

Crash Log Details
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Triggered by Thread: 27

Thread 27 Crashed:
0 libsystem_kernel.dylib   0x00000001f0cc91d4 __pthread_kill + 8
1 libsystem_pthread.dylib  0x0000000228e12ef8 pthread_kill + 268
2 libsystem_c.dylib        0x00000001a86bbad8 abort + 128
3 AppleCV3D                0x0000000234d71a28 cv3d::vio::capi::SlamAnchor::SlamAnchor

Question
Is there a recommended approach for handling multiple room captures with custom ARSCNView integration? The standard RoomCaptureView implementation doesn't show this behavior, but we need the custom guidance functionality that ARSCNView provides. Code and full crash logs can be provided if needed.
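Not an answer from Apple, but one direction worth testing is to drive both the ARSCNView and RoomPlan from a single shared ARSession (supported since iOS 17 via RoomCaptureSession(arSession:)) and to stop each scan without pausing that session, so world tracking keeps a valid pose when the next room starts. A rough sketch; the class name and the guidance-dot handling are hypothetical, and whether this actually avoids the SlamAnchor assertion needs verification:

```swift
import ARKit
import RoomPlan

@MainActor
final class MultiRoomScanner {
    // One ARSession shared by the SceneKit view and every RoomCaptureSession.
    let arSession = ARSession()

    lazy var sceneView: ARSCNView = {
        let view = ARSCNView()
        view.session = arSession          // render the custom 3D guidance dots here
        return view
    }()

    private var captureSession: RoomCaptureSession?

    func startRoom() {
        // iOS 17+: attach RoomPlan to the existing session instead of letting it
        // create (and later tear down) its own.
        let session = RoomCaptureSession(arSession: arSession)
        session.run(configuration: RoomCaptureSession.Configuration())
        captureSession = session
    }

    func finishRoom() {
        // Keep the underlying ARSession (and its tracking state) alive between rooms.
        captureSession?.stop(pauseARSession: false)
        captureSession = nil
    }

    func addGuidanceDot(at transform: simd_float4x4) {
        // Validate the pose before adding anchors; skip clearly invalid transforms.
        guard transform.columns.3.x.isFinite else { return }
        arSession.add(anchor: ARAnchor(name: "hotspot", transform: transform))
    }
}
```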
Replies: 2 · Boosts: 1 · Views: 629 · Activity: Feb ’25
ARView vs RealityView (iOS, iPadOS)
I have been digging through the docs and the developer videos, and I have noticed a mention of RealityView having some potential limitations with anchors and world tracking. However, I haven’t been able to find my answers. Does anyone know (or can point me to) whether RealityView supports everything ARView does, and if not, what the differences are? I was fooling around with RealityView today with a simple plane anchor, and the stability of that anchor didn’t seem to be as steady as I recall ARView being in the past on iPhone. I’m trying to determine whether I should roll over to RealityView or stay with ARView for this little educational project. I would imagine the answer is to go with RealityView, but I want to make sure I’m not setting myself up for failure based on any current limitations for anchors and world data.
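For anyone comparing the two side by side, here is a rough sketch of the same plane-anchored box in both APIs, assuming iOS 18+ where RealityView gained AR camera support outside visionOS; running both lets you judge anchor stability directly on device:

```swift
import SwiftUI
import RealityKit

// RealityView (SwiftUI, iOS 18+)
struct PlaneAnchorView: View {
    var body: some View {
        RealityView { content in
            content.camera = .spatialTracking   // enable the AR camera + world tracking
            let anchor = AnchorEntity(.plane(.horizontal,
                                             classification: .any,
                                             minimumBounds: [0.2, 0.2]))
            anchor.addChild(ModelEntity(mesh: .generateBox(size: 0.1),
                                        materials: [SimpleMaterial()]))
            content.add(anchor)
        }
    }
}

// ARView (UIKit-hosted, iOS 13+) equivalent
func makePlaneAnchorARView() -> ARView {
    let arView = ARView(frame: .zero)   // auto-configures plane detection for the anchor
    let anchor = AnchorEntity(.plane(.horizontal,
                                     classification: .any,
                                     minimumBounds: [0.2, 0.2]))
    anchor.addChild(ModelEntity(mesh: .generateBox(size: 0.1),
                                materials: [SimpleMaterial()]))
    arView.scene.addAnchor(anchor)
    return arView
}
```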
Replies: 2 · Boosts: 1 · Views: 898 · Activity: Jan ’25
spatial-backdrop feature available yet?
In the WWDC25 session “What’s new for the spatial web”, the presenter showed creating an immersive environment for a web page by adding this to the page's HEAD section:

<link rel="spatial-backdrop" href="office.usdz" environmentmap="lighting.hdr">

My first attempt failed, and I am trying to track down why. Before I search all the potential failure paths, I wanted to ask the community: is this feature available in the latest visionOS 26 beta? I haven't seen anyone talk about their use of the feature yet.
Replies: 2 · Boosts: 0 · Views: 211 · Activity: Aug ’25
RealityKit fullscreen layer
Hi! I'm currently trying to render another XR scene in front of a RealityKit one. At the moment, I'm anchoring a plane to the head, with a shader that displays side-by-side images for the left/right eyes. By default, the camera has a near plane, so I can't draw directly at z = 0. Is there a way to change the camera's near plane? Or maybe there is a better solution to overlay an image/texture for the left/right eyes? Ideally, I would layer some kind of CompositorLayer on top of RealityKit, but from what I know that's sadly not possible. Thanks in advance and have a good day!
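RealityKit on visionOS doesn't expose the camera's near plane, so one workaround to experiment with (a sketch, not an Apple-recommended pattern) is to keep the head-anchored plane but place it a short distance in front of the head instead of at z = 0, scaling it so it still fills the view; the distance and scale factor below are made-up tuning values, and the stereo shader is assumed to be the one already in use:

```swift
import RealityKit

/// Builds a head-locked plane a short distance in front of the user.
/// `distance` and the scale factor are tuning values, not documented constants.
func makeHeadLockedPlane(distance: Float = 0.5) -> AnchorEntity {
    let headAnchor = AnchorEntity(.head)

    // A unit plane; replace SimpleMaterial with the stereo shader material
    // that samples the left/right halves of the side-by-side texture.
    let plane = ModelEntity(mesh: .generatePlane(width: 1.0, height: 1.0),
                            materials: [SimpleMaterial()])

    // RealityKit looks down -Z, so offset the plane in front of the head and
    // scale it so it covers the view at that distance (factor depends on FOV).
    plane.position = [0, 0, -distance]
    plane.scale = .init(repeating: distance * 2.5)

    headAnchor.addChild(plane)
    return headAnchor
}
```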
Replies: 2 · Boosts: 0 · Views: 312 · Activity: Jul ’25
TextComponent on iOS/macOS pixelated when viewed from short distance
Hello, I've been tinkering a bit with TextComponent. Based on the docs, it seems like this component should always render sharp and nice text, no matter how close the user gets: "RealityKit dynamically adjusts the backing size to a value that results in high-fidelity text at its current location." And it does on visionOS, but on iOS and macOS the text gets pixelated when I get close to it, as if it's just being rendered once as a plain image texture. Can anyone tell me whether this is expected behavior or a bug? Here are two screenshots for comparison (iPhone and Vision Pro). Thanks!
Replies: 2 · Boosts: 2 · Views: 117 · Activity: Aug ’25
Is it possible to live render CMTaggedBuffer / MV-HEVC frames in visionOS?
Hey all, I'm working on a visionOS app that captures live frames from the left and right cameras of Apple Vision Pro using cameraFrame.sample(for: .left/.right). Apple provides documentation on encoding side-by-side frames into MV-HEVC spatial video using CMTaggedBuffer ("Converting Side-by-Side 3D Video to MV-HEVC"). My question: is there any way to render tagged frames (e.g. CMTaggedBuffer with .stereoView(.leftEye/.rightEye)) live, directly to a surface in RealityKit or Metal, without saving them to a file? I’d like to create a true stereoscopic (spatial) live video preview, not just render two images side-by-side. Any advice or insights would be greatly appreciated!
Replies: 2 · Boosts: 0 · Views: 246 · Activity: Aug ’25
How to create a MultiplayerDelegate and use TabletopNetworkSession features?
When building a multiplayer Tabletop game, the documentation describes how to attach a custom TabletopNetworkSessionCoordinator, which can be used in addition to TabletopGame.MultiplayerDelegate. But so far, we have been unable to create these kinds of custom coordinators or get a delegate that works. Our current setup with our generic GroupActivity works by sending the session to TabletopGame's coordinateWithSession method (like in the current sample project), but we didn't find a way to access and control, for example, the arbiter, seats, player events, and the other features mentioned at https://developer.apple.com/documentation/tabletopkit/tabletopnetworksession. Is it correct to expect access to the participants, messenger, or journal without having to maintain a parallel coordinator? Possibly we are missing something here; any suggestions?
Replies: 2 · Boosts: 0 · Views: 235 · Activity: Apr ’25
Ground shadow and visibility
Hey, I'm building an interior design app in visionOS 2.0. I'm fetching the planes detected by ARKit and then adding them with an OcclusionMaterial to make sure my objects are occluded accordingly. However, I'm facing two problems with this:

- The ground shadows are completely disabled as soon as an occlusion material is added, even if I inset the planes doing the occlusion. I've looked into the shadow-receiving occlusion surface node (https://developer.apple.com/documentation/shadergraph/realitykit/shadow-receiving-occlusion-surface-(realitykit)), but when I tried to use it, it behaved exactly like OcclusionMaterial.
- The planes also occlude all windows (mine and the system ones), which is behavior I'd like to avoid. I only want to occlude the entities I added.

Is there a way to achieve this? Thanks in advance.
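For reference, here is a rough baseline sketch of the setup described above: an OcclusionMaterial plane standing in for a detected floor, plus a furniture entity with grounding shadows explicitly enabled via GroundingShadowComponent. Whether the shadow survives the occluder (and whether the shader-graph shadow-receiving occlusion surface behaves differently) is exactly the open question, so treat this as a starting point for experiments, not a fix; the sizes and positions are placeholder values:

```swift
import RealityKit

/// Baseline scene: a floor-sized occluder plus furniture with grounding shadows enabled.
@MainActor
func buildOcclusionTestScene() -> Entity {
    let root = Entity()

    // Occluder plane standing in for an ARKit-detected floor plane.
    // (In the real app its size/transform come from the detected PlaneAnchor.)
    let occluder = ModelEntity(mesh: .generatePlane(width: 2.0, depth: 2.0),
                               materials: [OcclusionMaterial()])
    root.addChild(occluder)

    // Furniture placed on the floor, with grounding shadows explicitly requested.
    let furniture = ModelEntity(mesh: .generateBox(size: 0.5),
                                materials: [SimpleMaterial()])
    furniture.position = [0, 0.25, -1.0]
    furniture.components.set(GroundingShadowComponent(castsShadow: true))
    root.addChild(furniture)

    return root
}
```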
Replies: 2 · Boosts: 1 · Views: 477 · Activity: Jan ’25
Using Vision Pro to detect distance to real-life objects
Is it possible to detect the distance from the Vision Pro to real-life objects and people? I tried using scene.raycast to perform a raycast forward from the center of the viewport, but it doesn't seem to react to real-life objects, only entities. I see it mentioned here: https://developer.apple.com/forums/thread/776807?answerId=829576022#829576022 that a raycast combined with scene reconstruction should allow me to measure that distance, as long as the object is non-moving. How could I accomplish that?
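Following the approach mentioned in the linked thread, the usual pattern is to turn scene-reconstruction mesh anchors into invisible collision entities so that scene.raycast has real-world geometry to hit. A condensed sketch, assuming an ImmersiveSpace with world-sensing authorization; the class name is hypothetical:

```swift
import ARKit
import RealityKit

@MainActor
final class WorldMeshColliders {
    let root = Entity()                      // add this to your RealityView content
    private let session = ARKitSession()
    private let sceneReconstruction = SceneReconstructionProvider()
    private var meshEntities: [UUID: Entity] = [:]

    func run() async throws {
        try await session.run([sceneReconstruction])
        for await update in sceneReconstruction.anchorUpdates {
            let anchor = update.anchor
            switch update.event {
            case .added, .updated:
                // Build a static collision shape from the reconstructed mesh so
                // scene.raycast can hit real-world surfaces.
                guard let shape = try? await ShapeResource.generateStaticMesh(from: anchor) else { continue }
                let entity: Entity
                if let existing = meshEntities[anchor.id] {
                    entity = existing
                } else {
                    entity = Entity()
                    meshEntities[anchor.id] = entity
                    root.addChild(entity)
                }
                entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
                entity.components.set(CollisionComponent(shapes: [shape], isStatic: true))
            case .removed:
                meshEntities[anchor.id]?.removeFromParent()
                meshEntities[anchor.id] = nil
            }
        }
    }
}

// Later: cast a ray from the device position along its forward vector and take
// the distance to the first hit, e.g.
// let hits = root.scene?.raycast(origin: devicePosition, direction: forward, length: 5.0)
```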
Replies: 2 · Boosts: 0 · Views: 165 · Activity: Apr ’25
Person Occlusion on the Vision Pro
I am currently creating an app where two people share an instance of an immersive space so that they are able to point to certain things in the immersive space. Right now, other people are hidden behind the immersive space, and even with people awareness enabled for everything, people are still too difficult to see. I've found this documentation (https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people), which describes what I want to do, but it is only listed as working on iOS and iPadOS. Is there anything similar to this that will work on visionOS?
Replies: 2 · Boosts: 0 · Views: 126 · Activity: Mar ’25
CameraFrameProvider distortion correction
Hi, I'm trying to correct the lens distortion in frames provided by the Enterprise API camera frame provider. The frames seem to carry only intrinsics/extrinsics info, but not the distortion lookup table. Is there some magic setting or function to do that (I can't seem to find anything like this)? Or is there a way to use AVCameraCalibrationData together with the provider?
Replies: 2 · Boosts: 0 · Views: 401 · Activity: Mar ’25
Accessing pupil diameter in visionOS
Previously I developed software using SMI eye trackers, both screen-mounted and their mobile glasses, for unique therapeutic and physiology applications. Sadly, after SMI was bought by Apple, their hardware and software were taken off the market, and it is now very difficult to get secondhand systems. The Apple Vision Pro integrates the SMI hardware. While I can use ARKit to get gaze position, I do not see a way to access information that was previously available on the SMI hardware, particularly dwell time and pupil diameter. I am hoping (or asking) that, if a user has a properly set up Optic ID and opts in, it might be possible, either in the present or in a future version of visionOS, to get access to the data streams for dwell times and pupil diameter. Pupil diameter is particularly important, as it is a very good physiological measure of how much stress a person is encountering, which is critical to some of the therapeutic applications for which we formerly used SMI hardware. Any ideas would be appreciated, or, if this is not possible, please consider proposing this to the visionOS team!
Replies: 2 · Boosts: 0 · Views: 248 · Activity: Jul ’25
CompositorServices or RealityKit
I have been concentrating on developing a visionOS application. While I am currently quite familiar with RealityKit, CompositorServices has also captured my attention, though I have not yet learned it. Could you please clarify whether it is essential for me to learn CompositorServices? Additionally, I would appreciate any insights into the respective advantages of RealityKit and CompositorServices.
Replies: 2 · Boosts: 0 · Views: 761 · Activity: Mar ’25
Request for gaze data in fully immersive Metal apps
Hi, we are trying to port our Unity app from other XR devices to Vision Pro. It is therefore much easier for us to use the Metal rendering layer, fully immersive. And to stay true to the platform, we want to keep the gaze/pinch interaction system. But we just noticed that, unlike PolySpatial XR apps, visionOS XR in Metal does not provide gaze info unless the user is actively pinching, which prevents any attempt to give visual feedback on what they are looking at (buttons, etc.). Is this planned on Apple's roadmap? Thanks
Replies: 2 · Boosts: 0 · Views: 566 · Activity: Feb ’25
RealityKit and macOS - How to load a Reality Composer scene? Documentation Incomplete/Inaccurate
"Although Xcode generates loading methods for all Reality Composer files in your Xcode project." Sadly, I do not find this to be true. Does anyone have any luck or insight on how one can build just a simple macOS app that imports a scene from a .reality file? The documentation suggests that the simple act of bringing a .reality file in (what about .realitycomposerpro?) will generate code, but that doesn't seem to happen. The sample code (Spaceship) does not compile for macOS. I'd really love just the most generic template of an Xcode project that compiles, with a button that pops open a scene, like the visionOS default immersive project.
Replies: 2 · Boosts: 0 · Views: 821 · Activity: Dec ’24
How to configure Spatial Audio on a VideoMaterial? Compile error
I've tried following Apple's documentation to apply a video material to a ModelEntity, but I encountered a compile error while attempting to specify the spatial audio type. It is a 360° video on a sphere, which plays just fine, but the audio is too quiet compared to the volume I get when I preview the video in Xcode. So I tried to configure the audio playback mode on the material, but it gives me a compile error:

'audioInputMode' is unavailable in visionOS
'audioInputMode' has been explicitly marked unavailable here (RealityFoundation.VideoPlaybackController.audioInputMode)

https://developer.apple.com/documentation/realitykit/videomaterial/

Code:

```swift
let player = AVPlayer(url: url)

// Instantiate and configure the video material.
let material = VideoMaterial(avPlayer: player)

// Configure audio playback mode.
material.controller.audioInputMode = .spatial // this line won't compile
```

visionOS 2.4, Xcode 16.4 (also tried Xcode 26 beta 2). The videos are HEVC in MPEG-4 containers. Is there another way to do this, or is there a workaround available? Thank you.
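Since audioInputMode is marked unavailable on visionOS, one experiment is to control the level from the player and the entity instead: the sketch below raises the AVPlayer volume and attaches a SpatialAudioComponent with a positive gain to the sphere that carries the VideoMaterial. Whether VideoMaterial routes its audio through that component on visionOS is an assumption to verify on device, not documented behavior:

```swift
import AVFoundation
import RealityKit

@MainActor
func makeVideoSphere(url: URL) -> ModelEntity {
    let player = AVPlayer(url: url)
    player.volume = 1.0                      // make sure the player itself isn't attenuated

    let material = VideoMaterial(avPlayer: player)
    let sphere = ModelEntity(mesh: .generateSphere(radius: 10),
                             materials: [material])

    // Experiment: attach a SpatialAudioComponent and raise its gain (relative dB).
    // Assumption to verify: VideoMaterial's audio honors this component on visionOS.
    sphere.components.set(SpatialAudioComponent(gain: 3))

    player.play()
    return sphere
}
```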
Replies: 2 · Boosts: 0 · Views: 240 · Activity: Jul ’25