Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post | Replies | Boosts | Views | Activity

When to use an AnchorEntity or HandTrackingProvider in visionOS
As I understand it, there are two ways to track a hand, or a joint, in RealityKit: either create an AnchorEntity, for example AnchorEntity(.hand(.left, location: .palm)), or set up an ARKitSession with a HandTrackingProvider (considerably more code, which I haven't repeated here). Assuming this is correct, when would I want to use one over the other? (A brief sketch of both approaches follows this entry.)
2
0
409
Mar ’25
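A minimal sketch of the two approaches, assuming a visionOS immersive space (the view, session setup, and runHandTracking name here are illustrative, not the poster's code). The AnchorEntity route needs no ARKit session but never exposes the transform values to your code; HandTrackingProvider hands you the raw HandAnchor data on each update.

import SwiftUI
import RealityKit
import ARKit

// Approach 1: RealityKit-only. An AnchorEntity that follows the left palm; no ARKitSession
// and no hand-tracking authorization prompt, but your code never sees the joint transforms.
struct PalmFollowerView: View {
    var body: some View {
        RealityView { content in
            let palmAnchor = AnchorEntity(.hand(.left, location: .palm))
            palmAnchor.addChild(ModelEntity(mesh: .generateSphere(radius: 0.01),
                                            materials: [SimpleMaterial(color: .cyan, isMetallic: false)]))
            content.add(palmAnchor)
        }
    }
}

// Approach 2: ARKit. HandTrackingProvider delivers HandAnchor updates you can inspect,
// at the cost of running a session and requesting hand-tracking authorization.
let session = ARKitSession()
let handTracking = HandTrackingProvider()

func runHandTracking() async throws {
    guard HandTrackingProvider.isSupported else { return }
    try await session.run([handTracking])
    for await update in handTracking.anchorUpdates {
        print("\(update.event): \(update.anchor.chirality), tracked: \(update.anchor.isTracked)")
    }
}

Rough rule of thumb: AnchorEntity is enough when you only need to attach content to the hand; HandTrackingProvider is for when your app logic needs the joint transforms themselves.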
Eye tracking data access for researchers in the medical field
Hello, esteemed tech developers. I am using the Apple Vision Pro to create an AR assistance system for the da Vinci surgical robot in a surgical suite, and would like to capture eye-movement data consistently across testers. Although the Apple Vision Pro has a superb infrared sensor for monitoring eye movement, Apple does not appear to offer official access to it. (I'm aware of the many existing discussions about this, but I was still wondering whether there might be an option, particularly for research labs.) Here's my FB number: FB16603687
1
0
624
Feb ’25
generateConvex(from mesh: MeshResource) crashes instead of throwing a Swift error
I have a MeshResource and I would like to create a collision component from it.

let childBounds = child.visualBounds(relativeTo: self)
var childShape: ShapeResource
do {
    // Crashes on the following line instead of throwing a Swift error
    childShape = try await ShapeResource.generateConvex(from: childModel.mesh)
} catch {
    childShape = ShapeResource.generateBox(size: childBounds.extents)
    childShape = childShape.offsetBy(translation: childBounds.center)
}

Based on this document: https://developer.apple.com/documentation/realitykit/shaperesource/generateconvex(from:)-6upj9
"Will throw an error if mesh does not define a nonempty convex volume. For example, will fail if all the vertices in mesh are coplanar."
But the method crashes the app instead of throwing a Swift error. (A possible workaround is sketched after this entry.)

Incident Identifier: 35CD58F8-FFE3-48EA-85D3-6D241D8B0B4C
CrashReporter Key: FE6790CA-6481-BEFD-CB26-F4E27652BEAE
Hardware Model: Mac15,11
...
Version: 1.0 (1)
Code Type: ARM-64 (Native)
Role: Foreground
Parent Process: launchd_sim [2057]
Coalition: com.apple.CoreSimulator.SimDevice.85A2B8FA-689F-4237-B4E8-DDB93460F7F6 [1496]
Responsible Process: SimulatorTrampoline [910]
Date/Time: 2025-01-26 16:13:17.5053 +0800
Launch Time: 2025-01-26 16:13:09.5755 +0800
OS Version: macOS 15.2 (24C101)
Release Type: User
Report Version: 104
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x00000001abf841d0
Termination Reason: SIGNAL 5 Trace/BPT trap: 5
Terminating Process: exc handler [17316]
Triggered by Thread: 0
Thread 0 Crashed:
0 CoreRE 0x1abf841d0 REAssetManagerCollisionShapeAssetCreateConvexPolyhedron + 232
1 CoreRE 0x1abf845f0 REAssetManagerCollisionShapeAssetCreateConvexPolyhedronFromMesh + 868
2 RealityFoundation 0x1d25613bc static ShapeResource.generateConvex(from:) + 148

Here is the message on the app console from Xcode:

/Library/Caches/com.apple.xbs/Sources/REKit_Sim/ThirdParty/PhysX/physx/source/physxcooking/src/convex/QuickHullConvexHullLib.cpp (935) : internal error : QuickHullConvexHullLib::findSimplex: Simplex input points appers to be coplanar.
Failed to cook convex mesh (0x3)
assertion failure: 'convexPolyhedronShape != nullptr' (REAssetManagerCollisionShapeAssetCreateConvexPolyhedron:line 356)
Bad parameters passed for convex mesh creation.
Message from debug

The above crash happened on a visionOS simulator (visionOS 2.2 (22N840)).
1
0
429
Jan ’25
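Until generateConvex(from:) actually throws instead of trapping, one possible mitigation, offered purely as a sketch (makeCollisionShape and the 1e-4 threshold are illustrative assumptions, and this does not catch every degenerate mesh), is to screen out obviously flat meshes up front and fall back to a box shape:

import RealityKit

func makeCollisionShape(for model: ModelEntity) async -> ShapeResource {
    let bounds = model.visualBounds(relativeTo: nil)
    let extents = bounds.extents
    let boxFallback = ShapeResource.generateBox(size: extents).offsetBy(translation: bounds.center)

    // Heuristic: a near-zero extent on any axis suggests the vertices are coplanar,
    // which is exactly the case the crash log above points at.
    guard min(extents.x, extents.y, extents.z) > 1e-4, let mesh = model.model?.mesh else {
        return boxFallback
    }
    return (try? await ShapeResource.generateConvex(from: mesh)) ?? boxFallback
}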
Barcode Anchor Jitter in Vision Pro due to Invalid Values from the Enterprise Barcode Scanning API
We’re using the enterprise API for spatial barcode/QR code scanning in the Vision Pro app, but we often get invalid values for the barcode anchor from the API, leading to jittery barcode positions in the UI. The code we’re using is attached below. (A sketch of one way to smooth the jitter follows this entry.)

import SwiftUI
import RealityKit
import ARKit
import Combine

struct ImmersiveView: View {
    @State private var arkitSession = ARKitSession()
    @State private var root = Entity()
    @State private var fadeCompleteSubscriptions: Set<AnyCancellable> = []

    var body: some View {
        RealityView { content in
            content.add(root)
        }
        .task {
            // Check if barcode detection is supported; otherwise handle this case.
            guard BarcodeDetectionProvider.isSupported else { return }
            // Specify the symbologies you want to detect.
            let barcodeDetection = BarcodeDetectionProvider(symbologies: [.code128, .qr, .upce, .ean13, .ean8])
            do {
                try await arkitSession.requestAuthorization(for: [.worldSensing])
                try await arkitSession.run([barcodeDetection])
                print("Barcode scanning started")
                for await update in barcodeDetection.anchorUpdates where update.event == .added {
                    let anchor = update.anchor
                    // Play an animation to indicate the system detected a barcode.
                    playAnimation(for: anchor)
                    // Use the anchor's decoded contents and symbology to take action.
                    print("""
                    Payload: \(anchor.payloadString ?? "")
                    Symbology: \(anchor.symbology)
                    """)
                }
            } catch {
                // Handle the error.
                print(error)
            }
        }
    }

    // Define this function in ImmersiveView.
    func playAnimation(for anchor: BarcodeAnchor) {
        guard let scene = root.scene else { return }
        // Create a plane sized to match the barcode.
        let extent = anchor.extent
        let entity = ModelEntity(mesh: .generatePlane(width: extent.x, depth: extent.z),
                                 materials: [UnlitMaterial(color: .green)])
        entity.components.set(OpacityComponent(opacity: 0))
        // Position the plane over the barcode.
        entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        root.addChild(entity)
        // Fade the plane in and out.
        do {
            let duration = 0.5
            let fadeIn = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 0, to: 1.0, duration: duration, isAdditive: true, bindTarget: .opacity))
            let fadeOut = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 1.0, to: 0, duration: duration, isAdditive: true, bindTarget: .opacity))
            let fadeAnimation = try AnimationResource.sequence(with: [fadeIn, fadeOut])
            _ = scene.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: entity, { _ in
                // Remove the plane after the animation completes.
                entity.removeFromParent()
            }).store(in: &fadeCompleteSubscriptions)
            entity.playAnimation(fadeAnimation)
        } catch {
            print("Error")
        }
    }
}
3
0
523
Jan ’25
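Not a fix for the invalid values themselves, but a common mitigation for jitter like this is to low-pass filter the anchor pose across updates. A minimal sketch, where smoothedPositions, alpha, and the 0.2 default are illustrative assumptions rather than part of the API:

import ARKit
import simd

var smoothedPositions: [UUID: SIMD3<Float>] = [:]

// Blend each new barcode position toward the previously displayed one.
func smoothedPosition(for anchor: BarcodeAnchor, alpha: Float = 0.2) -> SIMD3<Float> {
    let column = anchor.originFromAnchorTransform.columns.3
    let raw = SIMD3<Float>(column.x, column.y, column.z)
    let previous = smoothedPositions[anchor.id] ?? raw
    let blended = previous + alpha * (raw - previous)
    smoothedPositions[anchor.id] = blended
    return blended
}

To feed it, you would also listen for update.event == .updated rather than only .added, so later readings can pull the displayed plane toward the true pose.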
Projecting a Cube with a Number in ARKit
I'm a novice in RealityKit and ARKit. I'm using ARKit in SwiftUI to show a cube with a number, as shown below.

import SwiftUI
import RealityKit
import ARKit

struct ContentView: View {
    var body: some View {
        return ARViewContainer()
    }
}

#Preview {
    ContentView()
}

struct ARViewContainer: UIViewRepresentable {
    typealias UIViewType = ARView

    func makeUIView(context: UIViewRepresentableContext<ARViewContainer>) -> ARView {
        let arView = ARView(frame: .zero, cameraMode: .ar, automaticallyConfigureSession: true)
        arView.enableTapGesture()
        return arView
    }

    func updateUIView(_ uiView: ARView, context: UIViewRepresentableContext<ARViewContainer>) {
    }
}

extension ARView {
    func enableTapGesture() {
        let tapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(handleTap(recognizer:)))
        self.addGestureRecognizer(tapGestureRecognizer)
    }

    @objc func handleTap(recognizer: UITapGestureRecognizer) {
        let tapLocation = recognizer.location(in: self)
        // print("Tap location: \(tapLocation)")
        guard let rayResult = self.ray(through: tapLocation) else { return }
        let results = self.raycast(from: tapLocation, allowing: .estimatedPlane, alignment: .any)
        if let firstResult = results.first {
            let position = simd_make_float3(firstResult.worldTransform.columns.3)
            placeObject(at: position)
        }
    }

    func placeObject(at position: SIMD3<Float>) {
        let mesh = MeshResource.generateBox(size: 0.3)
        let material = SimpleMaterial(color: UIColor.systemRed, roughness: 0.3, isMetallic: true)
        let modelEntity = ModelEntity(mesh: mesh, materials: [material])
        var unlitMaterial = UnlitMaterial()
        if let textureResource = generateTextResource(text: "1", textColor: UIColor.white) {
            unlitMaterial.color = .init(tint: .white, texture: .init(textureResource))
            modelEntity.model?.materials = [unlitMaterial]
            let id = UUID().uuidString
            modelEntity.name = id
            modelEntity.transform.scale = [0.3, 0.1, 0.3]
            modelEntity.generateCollisionShapes(recursive: true)
            let anchorEntity = AnchorEntity(world: position)
            anchorEntity.addChild(modelEntity)
            self.scene.addAnchor(anchorEntity)
        }
    }

    func generateTextResource(text: String, textColor: UIColor) -> TextureResource? {
        if let image = text.image(withAttributes: [NSAttributedString.Key.foregroundColor: textColor],
                                  size: CGSize(width: 18, height: 18)),
           let cgImage = image.cgImage {
            let textureResource = try? TextureResource(image: cgImage, options: TextureResource.CreateOptions.init(semantic: nil))
            return textureResource
        }
        return nil
    }
}

I tap the floor and get a cube with '1' as shown below. The background color of the cube is black, I guess. Where does this color come from, and how can I change it into, say, red? Thanks. (One possible explanation and a sketch follow this entry.)
4
0
145
Jul ’25
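A plausible explanation, offered as an assumption rather than a confirmed answer: the generated text image is transparent outside the glyph, and when that texture lands on an UnlitMaterial without alpha blending, the empty area renders as black. One sketch of a fix is to fill a background color into the image before drawing the text; the helper below is an illustrative variant of the poster's generateTextResource, not an existing API, and the sizes are arbitrary.

import UIKit
import RealityKit

func generateTextResource(text: String, textColor: UIColor, backgroundColor: UIColor) -> TextureResource? {
    let size = CGSize(width: 64, height: 64)
    let renderer = UIGraphicsImageRenderer(size: size)
    let image = renderer.image { context in
        // Fill the whole texture with the desired background color (e.g. red).
        backgroundColor.setFill()
        context.fill(CGRect(origin: .zero, size: size))
        // Draw the digit centered on top.
        let attributes: [NSAttributedString.Key: Any] = [
            .foregroundColor: textColor,
            .font: UIFont.boldSystemFont(ofSize: 48)
        ]
        let textSize = (text as NSString).size(withAttributes: attributes)
        let origin = CGPoint(x: (size.width - textSize.width) / 2,
                             y: (size.height - textSize.height) / 2)
        (text as NSString).draw(at: origin, withAttributes: attributes)
    }
    guard let cgImage = image.cgImage else { return nil }
    return try? TextureResource(image: cgImage, options: .init(semantic: .color))
}

With that in place, the material tint can stay .white, since the texture itself now carries the red background.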
App crashes after requesting PhotoLibrary limited access
My visionOS app requires access to users' personal photos. The trigger mechanism is: when the user first opens a FooView, a task attached to that FooView calls let status = PHPhotoLibrary.authorizationStatus(for: .readWrite); if the status is .notDetermined, it then calls PHPhotoLibrary.requestAuthorization(for: .readWrite, handler: authCompletionHandler) so that visionOS pops up a window to request photo access. However, the app crashes every time the user selects Limited Access and the system tries to present a photo library picker. By the way, I have set "Prevent limited photos access alert" to Yes, but I don't think it affects the behavior here.

There was a debugger message:

*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Presentations are not permitted within volumetric window scenes.'

However, the window this view belongs to is a .plain style window (though 3D objects appear in another view of the same WindowGroup).

This is my code snippet if it helps; checkAndUpdatePhotoAuthorization is just a wrapper around PHPhotoLibrary.authorizationStatus(for: .readWrite):

private func checkAndUpdatePhotoAuthorization() -> PHAuthorizationStatus {
    let currentStatus = PHPhotoLibrary.authorizationStatus(for: .readWrite)
    switch currentStatus {
    case .authorized:
        print("Photo library access authorized.")
        isPhotoGalleryAuthorized = true
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    case .limited:
        print("Photo library access limited.")
        isPhotoGalleryLimited = true
        isPhotoGalleryAuthorized = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    case .notDetermined:
        isPhotoGalleryDetermined = false
        print("Photo library access not determined.")
    case .denied:
        print("Photo library access denied.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        showSettingsAlert = true
        isPhotoGalleryDetermined = true
    case .restricted:
        print("Photo library access restricted.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = true
        showPhotoAuthExplainationAlert = true
        isPhotoGalleryDetermined = true
    @unknown default:
        print("Photo library Unknown authorization status.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    }
    return currentStatus
}

And then FooView attaches a task to fire off checkAndUpdatePhotoAuthorization():

var body: some View {
    EmptyView()
        .task {
            try? await Task.sleep(for: .seconds(1.0))
            let status = self.checkAndUpdatePhotoAuthorization()
            if status == .notDetermined {
                DispatchQueue.main.async {
                    PHPhotoLibrary.requestAuthorization(for: .readWrite, handler: authCompletionHandler)
                }
            }
        }
}

Another thing worth mentioning is that SOMETIMES it won't crash in a debug build, but it crashes in TestFlight builds. Any other ideas? Big thanks in advance. (A sketch of isolating the request in a plain 2D window follows this entry.)
Xcode version: 16.2 beta 3
visionOS version: 2.2
1
0
661
Jan ’25
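The exception points at the scene the picker is being presented in, so one thing worth trying, shown purely as a sketch (FooApp, PhotoAccessWindow, and PhotoAccessView are illustrative names, and this does not explain why the system classifies the existing window as volumetric), is to route anything that can trigger a system presentation through a dedicated plain 2D window whose window group contains no 3D content:

import SwiftUI
import Photos

@main
struct FooApp: App {
    var body: some Scene {
        // A plain 2D window reserved for the authorization flow.
        WindowGroup(id: "PhotoAccessWindow") {
            PhotoAccessView()
        }
        .windowStyle(.plain)
    }
}

struct PhotoAccessView: View {
    @State private var status = PHPhotoLibrary.authorizationStatus(for: .readWrite)

    var body: some View {
        Text("Photo access: \(String(describing: status))")
            .task {
                guard status == .notDetermined else { return }
                // Request from this plain window, away from any scene hosting 3D content.
                status = await PHPhotoLibrary.requestAuthorization(for: .readWrite)
            }
    }
}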
Loading USDZ with particle system crashes on Intel Macs
Hello, we have a RealityKit app that also runs on macOS via Catalyst. For specific USD assets containing particle systems we have observed a reproducible crash.

Steps to reproduce:
1. Open Reality Composer Pro
2. Create a new file
3. Create a simple particle system (the default one is fine)
4. Export as USDZ
5. Create a project in Xcode
6. Call Entity.load(…) and pass in your USD

Running this on an Intel iMac with macOS Sequoia 15.3 leads to a crash with the following console log (the assertion is printed several times, interleaved across threads):

validateWithDevice:4704: failed assertion `Render Pipeline Descriptor Validation depthAttachmentPixelFormat (MTLPixelFormatDepth32Float) and stencilAttachmentPixelFormat (MTLPixelFormatStencil8) must match.'

Xcode version: 16.2.0
iMac 2020 3,8 GHz Intel Core i7
macOS Sequoia 15.3
FB16477373

It would be great if this could be fixed quickly or a workaround provided, since it affects our production app. Thank you! (An untested workaround idea is sketched after this entry.)
1
0
423
Mar ’25
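An untested idea, sketched under the assumption that the failure happens when the particle system is rendered rather than inside Entity.load itself: strip ParticleEmitterComponent from the loaded hierarchy when running the Intel (x86_64) slice, so the asset still loads on affected Macs, just without its particles.

import RealityKit

// Recursively remove particle emitters from a loaded entity tree on Intel builds only.
func removeParticleEmitters(from entity: Entity) {
    #if arch(x86_64)
    entity.components.remove(ParticleEmitterComponent.self)
    for child in entity.children {
        removeParticleEmitters(from: child)
    }
    #endif
}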
ARKit hand tracking
Hello, I am developing a visionOS application and am interested in obtaining detailed data about the user's hands through ARKit, including but not limited to the transform and rotation angle. I have reviewed the Happy Beam sample, but it appears to cover only recognizing specific gestures. Could you please advise on how to obtain the transform and rotation angle of the user's hand? Thank you. (A sketch follows this entry.)
1
0
501
Mar ’25
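A minimal sketch, assuming an ARKitSession already running a HandTrackingProvider (as in the Happy Beam sample): each HandAnchor carries originFromAnchorTransform for the hand, and the skeleton's joints carry anchorFromJointTransform, so a joint's world transform and rotation can be derived as below. jointWorldTransform and rotation(of:) are illustrative helper names, not API.

import ARKit
import simd

// World-space transform of one joint on a tracked hand.
func jointWorldTransform(_ joint: HandSkeleton.JointName, on hand: HandAnchor) -> simd_float4x4? {
    guard hand.isTracked,
          let skeletonJoint = hand.handSkeleton?.joint(joint),
          skeletonJoint.isTracked else { return nil }
    return hand.originFromAnchorTransform * skeletonJoint.anchorFromJointTransform
}

// Rotation of a rigid transform as a quaternion (assumes no scale or shear).
func rotation(of transform: simd_float4x4) -> simd_quatf {
    let columns = transform.columns
    let rotationMatrix = simd_float3x3(
        SIMD3(columns.0.x, columns.0.y, columns.0.z),
        SIMD3(columns.1.x, columns.1.y, columns.1.z),
        SIMD3(columns.2.x, columns.2.y, columns.2.z))
    return simd_quatf(rotationMatrix)
}

You would feed it from handTracking.anchorUpdates, for example jointWorldTransform(.indexFingerTip, on: update.anchor).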
Scene's origin relative to portal's window?
I am experimenting with RealityKit to set up a portal. Everything works, but I was wondering where the scene's origin sits with respect to the front of the portal window. From experiments, the origin's X and Y appear to be at the center of the portal window, while the origin's Z appears to be about a meter behind the portal window. Is this (at least roughly) correct? Is it documented anywhere? P.S. I began with the standard visionOS app template and edited the Reality Composer Pro file to create the scene. (A programmatic portal sketch that makes the offset explicit follows this entry.)
5
0
625
Mar ’25
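This doesn't answer where the Reality Composer Pro template places its origin, but in a portal built in code the relationship is explicit: content added to the WorldComponent entity lives in that entity's own space, so its distance behind the portal plane is whatever offset you give it. A sketch, with makePortal and all sizes being illustrative:

import RealityKit

func makePortal() -> Entity {
    // The "inner world" that is visible only through the portal.
    let world = Entity()
    world.components.set(WorldComponent())

    // Content placed one meter behind the portal plane along the world entity's -Z.
    let box = ModelEntity(mesh: .generateBox(size: 0.2),
                          materials: [SimpleMaterial(color: .orange, isMetallic: false)])
    box.position = [0, 0, -1]
    world.addChild(box)

    // The portal plane itself, targeting the world entity.
    let portalPlane = ModelEntity(mesh: .generatePlane(width: 1, height: 1),
                                  materials: [PortalMaterial()])
    portalPlane.components.set(PortalComponent(target: world))

    let root = Entity()
    root.addChild(world)
    root.addChild(portalPlane)
    return root
}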
RealityKit entity.write(to:) generates fatal protection error
My app for framing and arranging pictures from Photos on visionOS allows users to write the arrangements they create to .reality files using RealityKit's entity.write(to:), which they then display to customers on their websites. This works perfectly on visionOS 2, but fails with a fatal protection error on visionOS 26 beta 1 and beta 2 when write(to:) attempts to write to its internal cache:

2025-06-29 14:03:04.688 Failed to write reality file Error Domain=RERealityFileWriterErrorDomain Code=10 "Could not create parent folders for file path /var/mobile/Containers/Data/Application/81E1DDC4-331F-425D-919B-3AB87390479A/Library/Caches/com.GeorgePurvis.Photography.FrameItVision/RealityFileBundleZippingTmp_A049685F-C9B2-479B-890D-CF43D13B60E9/41453BC9-26CB-46C5-ADBE-C0A50253EC27." UserInfo={NSLocalizedDescription=Could not create parent folders for file path /var/mobile/Containers/Data/Application/81E1DDC4-331F-425D-919B-3AB87390479A/Library/Caches/com.GeorgePurvis.Photography.FrameItVision/RealityFileBundleZippingTmp_A049685F-C9B2-479B-890D-CF43D13B60E9/41453BC9-26CB-46C5-ADBE-C0A50253EC27.}

Has anyone else encountered this problem? Do you have a workaround? Have you filed a feedback?

ChatGPT analysis of the error and my code reports:
Why there is no workaround
• entity.write(to:) is a black box — you cannot override where it builds its staging bundle
• it always tries to create those random folders itself
• you cannot supply a parent or working directory to RealityFileWriter
• so if the system fails to create that folder, you cannot patch it
👉 This is why you see a fatal error with no recovery.

See also feedbacks: FB18494954, FB18036627, FB18063766
10
0
404
Jul ’25
Keyboard Tracking
Hi! I'm currently experimenting on Apple Vision Pro with hand and head anchors. Is there a way to get an anchor linked to the apple magic keyboard (as the detection is already done to display inputs at the top)? Thanks in advance, Have a good day!
1
0
645
Jul ’25
WorldAnchors added and removed immediately
I can add a WorldAnchor to a WorldTrackingProvider. The next time I start my app, the WorldAnchor is added back, and then is immediately removed:

dataProviderStateChanged(dataProviders: [WorldTrackingProvider(0x0000000300bc8370, state: running)], newState: running, error: nil)
AnchorUpdate(event: added, timestamp: 43025.248134708, anchor: WorldAnchor(id: C0A1AE95-F156-45F5-9030-895CAABF16E9, isTracked: true, originFromAnchorTransform: <translation=(0.048458 0.000108 -0.317565) rotation=(0.00° 15.44° -0.00°)>))
AnchorUpdate(event: removed, timestamp: 43025.348131208, anchor: WorldAnchor(id: C0A1AE95-F156-45F5-9030-895CAABF16E9, isTracked: false, originFromAnchorTransform: <translation=(0.000000 0.000000 0.000000) rotation=(-0.00° 0.00° 0.00°)>))

It always leaves me with zero anchors in .allAnchors... the ARKitSession is still active at this point. (A sketch of the basic flow is included after this entry for reference.)
8
0
724
Jan ’25
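For reference, a minimal sketch of the flow described above (not a fix for the unexpected removal), assuming a visionOS app with world sensing: run a WorldTrackingProvider, add a WorldAnchor, and watch anchorUpdates across launches. The runWorldAnchoring name and the 0.5 m offset are illustrative.

import ARKit
import simd

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func runWorldAnchoring() async throws {
    try await session.run([worldTracking])

    // Place an anchor half a meter in front of the world origin.
    var transform = matrix_identity_float4x4
    transform.columns.3.z = -0.5
    try await worldTracking.addAnchor(WorldAnchor(originFromAnchorTransform: transform))

    for await update in worldTracking.anchorUpdates {
        print("\(update.event): \(update.anchor.id), tracked: \(update.anchor.isTracked)")
    }
}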
How to handle tasks when the Vision Pro is taken off?
I have a gRPC server running inside a task. When the user takes the headset off, the gRPC server will no longer work when they put the headset back on. I would like to detect this so that I can cancel the task (which will effectively close the gRPC server). I am also using a visual indicator to let the user know whether the server is running, but it does not accurately reflect the state of the server across removing and putting the headset back on. (A scenePhase-based sketch follows this entry.)
1
0
290
Mar ’25
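One approach to try, sketched under the assumption that removing the headset takes the app's scene out of the .active phase: drive the server task from scenePhase, cancelling when the scene leaves the foreground and restarting when it becomes active again. ServerHostView and runGRPCServer are placeholder names standing in for the poster's own code.

import SwiftUI

struct ServerHostView: View {
    @Environment(\.scenePhase) private var scenePhase
    @State private var serverTask: Task<Void, Never>? = nil

    var body: some View {
        Text(serverTask == nil ? "Server stopped" : "Server running")
            .onChange(of: scenePhase) { _, newPhase in
                if newPhase == .active, serverTask == nil {
                    // Scene came back to the foreground; start a fresh server task.
                    serverTask = Task { await runGRPCServer() }
                } else if newPhase != .active {
                    // Scene went inactive/background; tear the server down.
                    serverTask?.cancel()
                    serverTask = nil
                }
            }
    }
}

// Placeholder for the poster's gRPC server; it should exit promptly on cancellation.
func runGRPCServer() async {}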
Creating spatial video with one camera
Hello everyone, I would like to create my own spatial video on my Apple Vision Pro. According to Apple's documentation, this requires two camera angles that enhance the spatial perception. I have purchased the Enterprise license with main camera access for this purpose. However, this only gives me access to the left main camera of the headset. Is there a way to access the right camera as well? Or is the single camera image enough to create a spatial video, for example by splitting the image? I am open to any help and ideas. My goal is to create the video with the cameras on the headset, not externally.
1
0
291
Jun ’25
ReplayKit start and stop capture breaks and gives me an error when switching from Immersive to Mixed and back.
Hi, I'm developing a virtual camera system using ReplayKit to capture scene video by directly accessing raw video buffers. The capture mechanism works flawlessly when repeatedly starting and stopping video capture within a continuous immersive environment. However, a critical issue arises when interrupting the immersive space:

Step 1: Enter the immersive environment and start and stop video capture (multiple times, with no issues)
Step 2: Press the crown button to exit the immersive environment
Step 3: Return to the immersive space
Step 4: Attempt to start the video capture

At this point, the startCapture method throws an unexpected error, disrupting the video capture workflow. This is the Xcode error that I see:

[ERROR] -[RPScreenRecorder startCaptureWithHandler:completionHandler:]_block_invoke_2:500 failed to start due to error: Error Domain=com.apple.ReplayKit.RPRecordingErrorDomain Code=-5803 "Recording failed to start" UserInfo={NSLocalizedDescription=Recording failed to start}

I have tried every way I can think of to call stopCapture, including onDisappear and other methods, and nothing seems to solve this. (A defensive restart sketch follows this entry.)
3
0
300
Mar ’25
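This is not a confirmed fix for error -5803, only a sketch of a defensive pattern worth trying when re-entering the immersive space: stop any capture ReplayKit still believes is running, and check isAvailable before starting again. restartCaptureSafely is an illustrative name.

import ReplayKit

func restartCaptureSafely() {
    let recorder = RPScreenRecorder.shared()

    func start() {
        guard recorder.isAvailable else {
            print("ReplayKit reports capture unavailable; retry later.")
            return
        }
        recorder.startCapture(handler: { sampleBuffer, bufferType, error in
            // Process raw video buffers here.
        }, completionHandler: { error in
            if let error { print("startCapture failed: \(error)") }
        })
    }

    if recorder.isRecording {
        // Tear down the stale session left over from before the immersive space was dismissed.
        recorder.stopCapture { error in
            if let error { print("stopCapture failed: \(error)") }
            start()
        }
    } else {
        start()
    }
}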
Significant deviation of depth map values captured with the ARKit framework
I use ARKit to build an app that scans rooms to collect the spatial data of objects and reconstruct the 3D scene. The problem is that the depth map values captured in ARFrame deviate significantly from the real distances, and nonlinearly: for distances below 1.5 m the values are basically correct, but beyond 1.5 m they are smaller than the real values. For example, I read 1.9 m from the generated depthmap.tiff where the real distance is 3 meters. Below is my code for generating the TIFF file that records the depth map data. (A sketch of reading depth values from an ARFrame also follows this entry.)

Generated TIFF file (captured from ARKit): as shown above, the maximum distance is around 1.9 m, but the real distance to that wall is more than 3 meters. You can also see that the depth map captured in ARKit is quite blurry, particularly at far distances (> 2.0 m), almost smeared out.

Generated TIFF file (captured from AVFoundation): in comparison, the depth map captured with traditional AVFoundation on the same hardware is much clearer, though the values do not seem to be in meters.
1
0
493
Feb ’25
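The TIFF-generation code didn't come through in the post, so purely as an illustrative sketch (not the poster's code): this is the usual way to read per-pixel depth, in meters, from ARFrame.sceneDepth on a LiDAR device. depthInMeters is a hypothetical helper name.

import ARKit

// Read the depth value (meters) at a pixel of the frame's scene-depth map.
// Requires an ARWorldTrackingConfiguration with .sceneDepth (or .smoothedSceneDepth) enabled.
func depthInMeters(atColumn x: Int, row y: Int, in frame: ARFrame) -> Float? {
    guard let depthMap = frame.sceneDepth?.depthMap else { return nil }
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    guard x >= 0, x < width, y >= 0, y < height,
          let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }

    // The scene-depth buffer is 32-bit float, one value per pixel, in meters.
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    let pixel = base.advanced(by: y * rowBytes + x * MemoryLayout<Float32>.size)
    return pixel.assumingMemoryBound(to: Float32.self).pointee
}

Comparing a few of these raw values against a tape-measured distance is a quick way to separate a capture problem from a TIFF-encoding problem.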