Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post · Replies · Boosts · Views · Activity

VisionOS hands replacement inside an Immersive Space
I still don't understand why no one has clarified this Apple video: https://developer.apple.com/videos/play/wwdc2023/10111 At the end of the video there is an incomplete tutorial on connecting a USDZ file, with its mesh and skeleton structure, to the hand-tracking system. No example project is linked, and the community has not received any clarification. Can you please help us understand how to proceed?
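For reference, a minimal sketch of the hand-tracking half of that workflow, assuming ARKitSession and HandTrackingProvider on visionOS; the class name is illustrative, and mapping the joint transforms onto a rigged USDZ skeleton is exactly the step the video leaves open:

import ARKit
import RealityKit

// Minimal sketch (not the missing sample project): read joint transforms from
// HandTrackingProvider. Applying them to a USDZ skeleton is the missing step.
@MainActor
final class HandTrackingDriver {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()

    func run() async throws {
        try await session.run([handTracking])
        for await update in handTracking.anchorUpdates {
            let anchor = update.anchor
            guard anchor.isTracked, let skeleton = anchor.handSkeleton else { continue }
            // World-space transform of one joint: world origin -> anchor -> joint.
            let jointTransform = anchor.originFromAnchorTransform *
                skeleton.joint(.indexFingerTip).anchorFromJointTransform
            // A real implementation would map each HandSkeleton.JointName onto the
            // corresponding joint of the rigged USDZ mesh here.
            _ = jointTransform
        }
    }
}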
4
0
563
Mar ’25
Alternatives to SceneView
Hey there, since SceneView has been marked as deprecated for SwiftUI, I'm wondering which alternatives should be considered for the following situation: I have a SwiftUI app (for iOS and iPadOS) where users can view 3D models (USDZ) in a scene and rotate, scale, and move them with gestures. The models are downloaded from a web backend and loaded via local URL paths. What I tested:
- ARView in .nonAR mode: it still shows the object as in normal AR mode, just without the camera stream, instead of letting the user rotate and scale the model in a virtual space.
- RealityView: loading USDZ models worked on iOS, but the gestures I added didn't respond.
- Model3D: only available on visionOS (it would be amazing to have it on iOS).
- QuickLook preview: works, but only through a file picker and similar flows, which is not how users should load 3D models in my app.
Maybe I missed something, but I couldn't find anything that helps. I'm pretty much stuck adopting the latest frameworks/APIs in my app and taking the next steps toward porting it to visionOS. Long story short 😃: does anyone have an idea what the alternative to SceneView is for displaying USDZ 3D models? I appreciate your support! Thanks in advance!
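For illustration, a minimal sketch of the RealityView-plus-gestures setup described above, assuming iOS 18's RealityView; the asset name is a placeholder, and the sketch assumes entity-targeted gestures only fire once the entity has collision shapes and an InputTargetComponent:

import SwiftUI
import RealityKit

// Sketch of a RealityView that loads a USDZ and lets the user drag it.
struct ModelViewer: View {
    var body: some View {
        RealityView { content in
            // "model" is a placeholder asset name.
            if let entity = try? await Entity(named: "model") {
                // Entity-targeted gestures need collision shapes and an input target.
                entity.generateCollisionShapes(recursive: true)
                entity.components.set(InputTargetComponent())
                content.add(entity)
            }
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Move the entity to follow the drag, converting from view space.
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: value.entity.parent!)
                }
        )
    }
}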
4
0
219
Jul ’25
VisionOS: Detect plane to place objects issue for animated objects
Hi, I have used the template code for detecting planes and placing models on them from https://developer.apple.com/documentation/visionos/placing-content-on-detected-planes That source code does not copy the animations from the preview model to the placed model, so I modified it to copy animations and textures manually. A function called materialize() does this, and I was able to modify it so that the placed models now animate. The issue is when I apply gestures to them, such as drag or rotate: for models that go through this logic I'm unable to add gestures, even though I'm making sure Collision and InputTarget components are set on the placed models. Has anyone been able to get this working, or is it even a possibility?

My materialize function:

func materialize() -> PlacedObject {
    let shapes = previewEntity.components[CollisionComponent.self]!.shapes

    // Clone render content first as we need its materials
    let clonedRenderContent = renderContent.clone(recursive: true)

    print("To be finding main model: \(descriptor.displayName)")

    // Find the main model in preview hierarchy
    func findMainModel(_ entity: Entity) -> Entity? {
        if entity.name == descriptor.displayName.replacingOccurrences(of: " ", with: "_") {
            print("Found main model: \(entity.name)")
            return entity
        }
        for child in entity.children {
            if child.name == descriptor.displayName.replacingOccurrences(of: " ", with: "_") {
                print("Found main model in children: \(child.name)")
                return child
            }
        }
        return nil
    }

    // Clone hierarchy preserving structure, names, and materials
    func cloneHierarchy(_ entity: Entity) -> Entity {
        print("Cloning: \(entity.name)")

        let cloned: Entity
        if let model = entity as? ModelEntity {
            // Clone with recursive false to handle children manually
            cloned = model.clone(recursive: false)
            if let clonedModel = cloned as? ModelEntity,
               let originalMaterials = model.model?.materials {
                // Preserve the original model's materials
                clonedModel.model?.materials = originalMaterials
            }
        } else {
            cloned = Entity()
        }

        // Preserve name and transform
        cloned.name = entity.name
        cloned.transform = entity.transform

        // Clone children
        for child in entity.children {
            let clonedChild = cloneHierarchy(child)
            cloned.addChild(clonedChild)
        }

        return cloned
    }

    print("=== Cloning Preview Structure ===")

    // Clone the preview hierarchy with proper structure
    let clonedStructure = cloneHierarchy(previewEntity)

    // Find and use the main model
    if let mainModel = findMainModel(clonedStructure) {
        print("Using main model for PlacedObject")

        let modelEntity: ModelEntity
        if let asModel = mainModel as? ModelEntity {
            print("Using asModel ")
            modelEntity = asModel
        } else {
            modelEntity = ModelEntity()
            modelEntity.name = mainModel.name
            // Copy children and transforms
            for child in mainModel.children {
                modelEntity.addChild(child)
            }
            modelEntity.transform = mainModel.transform
        }

        // Add collision component here
        let collisionComponent = CollisionComponent(shapes: shapes, isStatic: false, filter: CollisionFilter(group: PlacedObject.collisionGroup, mask: .all))
        modelEntity.components.set(collisionComponent)

        // Create the placed object
        let placedObject = PlacedObject(descriptor: descriptor, renderContentToClone: modelEntity, shapes: shapes)

        // Set input target on the placed object itself
        placedObject.components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))

        return placedObject
    } else {
        print("Fallback to original render content")
        let placedObject = PlacedObject(descriptor: descriptor, renderContentToClone: clonedRenderContent, shapes: shapes)
        placedObject.components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))
        return placedObject
    }
}

My PlacedObject class, where the init has the recursive cloning removed because it is handled in materialize:

class PlacedObject: Entity {
    let fileName: String

    // The 3D model displayed for this object.
    private let renderContent: ModelEntity

    static let collisionGroup = CollisionGroup(rawValue: 1 << 29)

    // The origin of the UI attached to this object.
    // The UI is gravity aligned and oriented towards the user.
    let uiOrigin = Entity()

    var affectedByPhysics = false {
        didSet {
            guard affectedByPhysics != oldValue else { return }
            if affectedByPhysics {
                components[PhysicsBodyComponent.self]!.mode = .static
            } else {
                components[PhysicsBodyComponent.self]!.mode = .static
            }
        }
    }

    var isBeingDragged = false {
        didSet {
            affectedByPhysics = !isBeingDragged
        }
    }

    var positionAtLastReanchoringCheck: SIMD3<Float>?

    var atRest = false

    init(descriptor: ModelDescriptor, renderContentToClone: ModelEntity, shapes: [ShapeResource]) {
        fileName = descriptor.fileName
        // renderContent = renderContentToClone.clone(recursive: true)
        renderContent = renderContentToClone
        super.init()
        name = renderContent.name

        // Apply the rendered content’s scale to this parent entity to ensure
        // that the scale of the collision shape and physics body are correct.
        scale = renderContent.scale
        renderContent.scale = .one

        // Make the object respond to gravity.
        let physicsMaterial = PhysicsMaterialResource.generate(restitution: 0.0)
        let physicsBodyComponent = PhysicsBodyComponent(shapes: shapes, mass: 1.0, material: physicsMaterial, mode: .static)
        components.set(physicsBodyComponent)
        components.set(CollisionComponent(shapes: shapes, isStatic: false, filter: CollisionFilter(group: PlacedObject.collisionGroup, mask: .all)))
        addChild(renderContent)
        addChild(uiOrigin)
        uiOrigin.position.y = extents.y / 2 // Position the UI origin in the object’s center.

        // Allow direct and indirect manipulation of placed objects.
        components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))

        // Add a grounding shadow to placed objects.
        renderContent.components.set(GroundingShadowComponent(castsShadow: true))
    }

    required init() {
        fatalError("`init` is unimplemented.")
    }
}

Thanks
4
0
467
Feb ’25
Issue with tracking multiple images using ARKit on VisionOS
We are using the ARKit image-tracking feature on visionOS 2.0 with three pre-registered images. Image tracking works, but only one image is actively tracked at a time: when more than one target image is visible to the cameras, the provider has difficulty detecting and tracking the other images. Is this the expected behavior on visionOS, or is there something we need to do to resolve this?
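For context, a sketch of the setup described (not a fix), assuming the reference images live in an asset-catalog group; the group name is a placeholder:

import ARKit

// Sketch of the described setup: several reference images registered with one
// ImageTrackingProvider. "TargetImages" is a placeholder asset-catalog group name.
@MainActor
func runImageTracking() async throws {
    let referenceImages = ReferenceImage.loadReferenceImages(inGroupNamed: "TargetImages")
    let imageTracking = ImageTrackingProvider(referenceImages: referenceImages)
    let session = ARKitSession()

    try await session.run([imageTracking])
    for await update in imageTracking.anchorUpdates {
        // Each update carries one ImageAnchor; with several targets visible,
        // you would expect interleaved updates for each anchor.
        print(update.anchor.referenceImage.name ?? "unnamed", update.anchor.isTracked)
    }
}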
4
1
532
Mar ’25
Creating a voxel mesh and rendering it with Metal inside a RealityKit ImmersiveView
Hi everyone, I'm creating an educational app that allows doing computational design in an immersive environment on the Vision Pro. The app is free and can be found here: https://apps.apple.com/us/app/arcade-topology/id6742103633 The problem I have is that the voxel mesh I currently create uses ModelEntity, and I recently read that this is terrible for scalability; I already start to see issues when I use thousands of voxels. I also read that I should take advantage of the GPU and use Metal to that end. I was wondering if someone could point me to a tutorial or article that discusses this. In essence, I need to create a 3D voxel mesh, and those voxels have to update their opacity within an iterative loop. Thanks! —Alejandro
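For context, a sketch of the per-voxel ModelEntity pattern described above (assuming one entity per voxel, which the scalability concern suggests); with thousands of voxels this means thousands of entities and draw calls, which is the part that does not scale:

import RealityKit

// Sketch of the one-ModelEntity-per-voxel pattern. Each voxel gets its own
// entity and an OpacityComponent that the iterative loop would update.
func makeVoxelGrid(dimension: Int, voxelSize: Float) -> Entity {
    let root = Entity()
    let mesh = MeshResource.generateBox(size: voxelSize)
    let material = SimpleMaterial(color: .white, isMetallic: false)
    for x in 0..<dimension {
        for y in 0..<dimension {
            for z in 0..<dimension {
                let voxel = ModelEntity(mesh: mesh, materials: [material])
                voxel.position = SIMD3<Float>(Float(x), Float(y), Float(z)) * voxelSize
                // Per-voxel opacity, updated each iteration of the loop.
                voxel.components.set(OpacityComponent(opacity: 0.5))
                root.addChild(voxel)
            }
        }
    }
    return root
}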
3
0
144
Jun ’25
ReplayKit start and stop capture breaks and gives an error when switching from Immersive to Mixed and back.
Hi, I'm developing a virtual camera system using ReplayKit to capture scene video by directly accessing the raw video buffers. The capture mechanism works flawlessly when repeatedly starting and stopping video capture within a continuous immersive session. However, a critical issue arises when the immersive space is interrupted:
Step 1: Enter the immersive environment and start and stop video capture (multiple times with no issues)
Step 2: Press the crown button to exit the immersive environment
Step 3: Return to the immersive space
Step 4: Attempt to start video capture
At this point, the startCapture method throws an unexpected error, disrupting the video capture workflow. This is the error I see in Xcode: "[ERROR] -[RPScreenRecorder startCaptureWithHandler:completionHandler:]_block_invoke_2:500 failed to start due to error: Error Domain=com.apple.ReplayKit.RPRecordingErrorDomain Code=-5803 "Recording failed to start" UserInfo={NSLocalizedDescription=Recording failed to start}" I have tried every way of calling stopCapture I could think of, including in onDisappear and other places, and nothing seems to solve this.
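For reference, a minimal sketch of the start/stop flow described, using the standard RPScreenRecorder API; names are illustrative:

import ReplayKit

// Minimal sketch of the capture flow described above.
final class CaptureController {
    private let recorder = RPScreenRecorder.shared()

    func start() {
        recorder.startCapture(handler: { sampleBuffer, bufferType, error in
            guard error == nil, bufferType == .video else { return }
            // Raw video buffers (CMSampleBuffer) arrive here.
        }, completionHandler: { error in
            if let error {
                // This is where the RPRecordingErrorDomain code -5803 surfaces
                // after leaving and re-entering the immersive space.
                print("startCapture failed: \(error)")
            }
        })
    }

    func stop() {
        recorder.stopCapture { error in
            if let error { print("stopCapture failed: \(error)") }
        }
    }
}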
3
0
324
Mar ’25
VisionOS 26 - threadsPerThreadgroup limit causing crash on device (but not in simulator)
Hi all, I'm running into an issue with an app that previously worked fine on device using visionOS 2.0. After updating to visionOS 26, the same code runs fine in the simulator but crashes on the device with the following error: -[MTLDebugComputeCommandEncoder _validateThreadsPerThreadgroup:]:1330: failed assertion `(threadsPerThreadgroup.width(32) * threadsPerThreadgroup.height(32) * threadsPerThreadgroup.depth(1))(1024) must be <= 832. (kernel threadgroup size limit)` Is there any documented way to check or increase the allowed threadsPerThreadgroup size on Apple Vision Pro? Or any recommended workaround for this regression? Thanks in advance!
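One documented place to read the limit is the compute pipeline state itself (maxTotalThreadsPerThreadgroup and threadExecutionWidth); a hedged sketch of sizing the dispatch from those values instead of a hard-coded 32×32:

import Metal

// Sketch: derive the threadgroup size from the pipeline's reported limits so
// the same dispatch works when the device reports a smaller
// maxTotalThreadsPerThreadgroup (e.g. 832 instead of 1024).
func dispatch(encoder: MTLComputeCommandEncoder,
              pipeline: MTLComputePipelineState,
              gridSize: MTLSize) {
    let maxThreads = pipeline.maxTotalThreadsPerThreadgroup   // device/pipeline dependent
    let width = pipeline.threadExecutionWidth                  // e.g. 32
    let height = max(1, maxThreads / width)                    // stays within the limit
    let threadsPerThreadgroup = MTLSize(width: width, height: height, depth: 1)

    encoder.setComputePipelineState(pipeline)
    encoder.dispatchThreads(gridSize, threadsPerThreadgroup: threadsPerThreadgroup)
}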
3
0
183
Jun ’25
visionOS pushWindow being dismissed on app foreground
We seem to have found an issue when using the pushWindow action on visionOS. The issue occurs if the app is backgrounded and then reopened by selecting the app's icon on the home screen: any window that was opened via the pushWindow action is then dismissed. We've been able to replicate the issue in a small sample project. Replication steps:
1. Open the app
2. Open a window via the push action
3. Press the Digital Crown
4. On the home screen, select the app's icon again
The pushed window is now dismissed. There is a sample project linked here that shows off the issue, including a video of the bug in progress.
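A minimal sketch of the setup in those replication steps, assuming visionOS 2's pushWindow environment action; the window identifiers are placeholders:

import SwiftUI

// Sketch of the repro: a main window that pushes a second window.
// "Main" and "Pushed" are placeholder window IDs.
@main
struct PushWindowSampleApp: App {
    var body: some Scene {
        WindowGroup(id: "Main") { MainView() }
        WindowGroup(id: "Pushed") { Text("Pushed window") }
    }
}

struct MainView: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Push window") {
            // Dismissed after backgrounding and relaunching from the app icon.
            pushWindow(id: "Pushed")
        }
    }
}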
3
1
790
3w
Scene not found after changes in Reality Composer Pro
Randomly, the app stops working after small changes in Reality Composer Pro, such as scaling an object a tiny bit. To fix the error, I have to change another element in Reality Composer Pro and hope for the best. If that does not help, I transform something else, or deactivate and reactivate something, to get the project working again. I can't see a pattern for why the Reality Composer Pro project sometimes gets into a state where it no longer compiles.
3
0
1.9k
3w
RealityKit Entity ComponentSet does not conform to Sequence?
Hello, I'm trying to view the components of an Entity I'm creating in RealityKit by reading from a USDZ file. I have the following code snippet in my app:

if let appleEntity = try? Entity.loadModel(named: "apple_tile") {
    let c = appleEntity.components
    for comp in c { // <- compiler error here
        print(comp)
    }
}

The compiler error I'm receiving says "For-in loop requires 'Entity.ComponentSet' to conform to 'Sequence'". However, I thought this was the case, according to the documentation for Entity.ComponentSet? Curious if anyone else has had this problem. I'm running Xcode 15.4, and xcrun swift -version reports:

swift-driver version: 1.90.11.1
Apple Swift version 5.10 (swiftlang-5.10.0.13 clang-1500.3.9.4)
Target: x86_64-apple-macosx14.0
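If iteration really isn't available in that SDK, a hedged workaround sketch is to read specific component types through the typed subscript instead:

import RealityKit

// Workaround sketch: query the component types of interest directly rather
// than iterating the whole ComponentSet.
func inspect(_ entity: Entity) {
    if let model = entity.components[ModelComponent.self] {
        print("mesh bounds:", model.mesh.bounds)
    }
    if let transform = entity.components[Transform.self] {
        print("transform:", transform)
    }
    print("has collision:", entity.components.has(CollisionComponent.self))
}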
3
0
419
Mar ’25
RotateGesture3D auto constrained to axis
Hi, on visionOS we can rely on RotateGesture3D to manage entity rotation. With the constrainedToAxis parameter we can even allow rotation only around the x, y, or z axis, or around combinations of them. What I want to know is whether it is possible to constrain the rotation axis automatically. Let me explain: the functionality I would like to implement is to constrain the rotation to an axis only once the user has started the gesture. The initial motion the user makes should tell us which axis they want to rotate around. This would be equivalent to automatically activating a constraint on one of the axes, as if we had defined the gesture on that axis:
RotateGesture3D(constrainedToAxis: .x)
RotateGesture3D(constrainedToAxis: .y)
RotateGesture3D(constrainedToAxis: .z)
Is it possible to do this? If so, what would be the best way to do it? A code example would be greatly appreciated. Regards, Tof
3
0
434
Feb ’25
Auto Rig Pro Mixamo Animation to RC Pro
Hi! I have been struggling with this for a little while, and most of what I've found has not helped much; I hope to find more success here. Essentially, I have a model I've made in Blender. I rigged it using Auto Rig Pro, and I've also used ARP to add a Mixamo animation to it. That all works fine in Blender. However, when I try to import the model into RCP, I don't get the animation: the "default subtree animation" is completely empty. I attribute this to my lack of experience in this field, but here's what I've attempted thus far:
- Pushing the Mixamo keyframes into an NLA strip. I'm pretty sure this is the correct line of action, but I'm definitely not doing something right.
- Baking the animation (?)
- Making sure that I have animation checked when I export the model.
Any ideas or reference projects would be lovely; I haven't really found much that has pushed me in the right direction. This project is, unfortunately, kind of time sensitive, so I would appreciate help ASAP. Thank you, and let me know if I can add any more context!
3
0
413
4w
ARKit Planes do not appear where expected on visionOS
I'm using ARKitSession and PlaneDetectionProvider to detect planes. I have a basic process that creates an entity for each detected plane; each one gets a random color for its material. Each plane is sized based on the bounds of the anchor provided by ARKit:

let mesh = MeshResource.generatePlane(
    width: anchor.geometry.extent.width,
    depth: anchor.geometry.extent.height
)

Then I'm using this to position each entity:

entity.transform = Transform(matrix: anchor.originFromAnchorTransform)

This seems to be the right method, but many (not all) planes are not where they should be. The sizes look OK, but the X and Y positions are off. Take this large green plane on the wall: it should span the entire wall, but it is offset along the X axis so that it is pushed to the left of where the center of the anchor is. When I visualize surfaces using the Xcode debugging tools, that tool reports the planes where I'd expect them to be. Can you see what I'm getting wrong here? Full code below:

struct Example068: View {
    @State var session = ARKitSession()
    @State private var planeAnchors: [UUID: Entity] = [:]
    @State private var planeColors: [UUID: Color] = [:]

    var body: some View {
        RealityView { content in
        } update: { content in
            for (_, entity) in planeAnchors {
                if !content.entities.contains(entity) {
                    content.add(entity)
                }
            }
        }
        .task {
            try! await setupAndRunPlaneDetection()
        }
    }

    func setupAndRunPlaneDetection() async throws {
        let planeData = PlaneDetectionProvider(alignments: [.horizontal, .vertical, .slanted])
        if PlaneDetectionProvider.isSupported {
            do {
                try await session.run([planeData])
                for await update in planeData.anchorUpdates {
                    switch update.event {
                    case .added, .updated:
                        let anchor = update.anchor
                        if planeColors[anchor.id] == nil {
                            planeColors[anchor.id] = generatePastelColor()
                        }
                        let planeEntity = createPlaneEntity(for: anchor, color: planeColors[anchor.id]!)
                        planeAnchors[anchor.id] = planeEntity
                    case .removed:
                        let anchor = update.anchor
                        planeAnchors.removeValue(forKey: anchor.id)
                        planeColors.removeValue(forKey: anchor.id)
                    }
                }
            } catch {
                print("ARKit session error \(error)")
            }
        }
    }

    private func generatePastelColor() -> Color {
        let hue = Double.random(in: 0...1)
        let saturation = Double.random(in: 0.2...0.4)
        let brightness = Double.random(in: 0.8...1.0)
        return Color(hue: hue, saturation: saturation, brightness: brightness)
    }

    private func createPlaneEntity(for anchor: PlaneAnchor, color: Color) -> Entity {
        let mesh = MeshResource.generatePlane(
            width: anchor.geometry.extent.width,
            depth: anchor.geometry.extent.height
        )
        var material = PhysicallyBasedMaterial()
        material.baseColor.tint = UIColor(color)
        let entity = ModelEntity(mesh: mesh, materials: [material])
        entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        return entity
    }
}
3
0
191
Apr ’25
VisionOS Enterprise API Not Working
My development team admin requested the Enterprise API for camera access on the Vision Pro. It was granted, we received a license for usage, and we got instructions with next steps for integrating it. Even when I download and run the sample project for "Accessing the Main Camera" and follow the exact instructions mentioned here: https://developer.apple.com/documentation/visionos/accessing-the-main-camera I am unable to receive camera frames. I added the capability, created a new provisioning profile with this access, added the entitlements to the Info.plist and the entitlements file, replaced the dummy license file with the one we were sent, and have a matching bundle identifier and development certificate, but camera access still does not work for some reason. "Main Camera Access" shows up in our Signing & Capabilities tab, we added NSMainCameraDescription to the Info.plist, and we allow access when opening the app. None of this works: not in my app, and not in the sample app that I downloaded and ran on the Vision Pro after replacing the dummy license file.
3
0
812
Dec ’25
How to get multiple animations into USDZ
Most models are only available as glb or FBX, so I usually re-export them to USDZ using Blender. When I import them into Reality Composer Pro, the mesh, textures, etc. look great, but in the Animation Library subsection all I can see is one "default subtree animation". In Blender I can see all of the available animations and play them individually. The default subtree animation just plays the default idle animation; in fact, when I open the nonlinear animation view in Blender and select a different animation as the default, the exported USDZ shows that newly selected animation as the default subtree animation. I can see in the Apple sample apps that models can have multiple animations in their Animation Library. I'm using the latest Blender 4.5, and the USDZ exporter should be working properly, shouldn't it?
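For reference, once a USDZ does contain several clips, RealityKit lists them in availableAnimations; a minimal sketch of checking what it sees and playing a clip by index (this does not address the Blender export itself):

import RealityKit

// Sketch (assumes the USDZ actually contains multiple clips): print what
// RealityKit sees and play a clip other than the default one.
func playClip(at index: Int, on entity: Entity) {
    let clips = entity.availableAnimations
    print("RealityKit sees \(clips.count) animation clip(s)")
    guard clips.indices.contains(index) else { return }
    entity.playAnimation(clips[index], transitionDuration: 0.3, startsPaused: false)
}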
3
1
698
Oct ’25
Transparent Material Turning into Occlusion Material
I am trying to create an object in immersive space that is partially transparent (~50% opacity). I have implemented this in a few different ways, including creating a model entity and setting its opacity component to 0.5, and creating a custom material with blending set to a transparent opacity of 0.5. Both work partially: they behave as intended in many cases, but seemingly at random they act like occlusion material and block any other immersive content behind them, showing the real world instead. Some notes: I am using RealityKit to render both the semi-transparent object and an opaque object that sits behind it. I am on visionOS 2.1, and I update the location of the semi-transparent object often. Both objects are ModelEntities. I would appreciate any guidance on how to implement this. Please let me know if there are any other questions.
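For clarity, the two approaches mentioned above written out as a sketch (assuming ModelEntity and RealityKit's standard opacity and blending APIs):

import RealityKit

// Sketch of the two approaches described above.
func makeSemiTransparent(_ entity: ModelEntity) {
    // Approach 1: an OpacityComponent on the entity.
    entity.components.set(OpacityComponent(opacity: 0.5))

    // Approach 2: a material whose blending mode is transparent.
    var material = PhysicallyBasedMaterial()
    material.blending = .transparent(opacity: .init(floatLiteral: 0.5))
    entity.model?.materials = [material]
}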
3
0
169
May ’25
ManipulationComponent create parent/child crash
Hello, if you add a ManipulationComponent to a RealityKit entity and then continue adding entities in this way, sooner or later you will encounter a crash with the following error message:
Attempting to move entity “%s” (%p) under “%s” (%p), but the new parent entity is currently being removed. Changing the parent/child entities of an entity in an event handler while that entity is already being reassigned is not supported.
CoreSimulator 1048 – Device: Apple Vision Pro 4K (B87DD32A-E862-4791-8B71-92E50CE6EC06) – Runtime: visionOS 26.0 (23M336) – Device Type: Apple Vision Pro
The problem occurs precisely with this call:
ManipulationComponent.configureEntity(object)
I adapted Apple's ObjectPlacementExample and made my changes available on GitHub. The desired behavior is that I can add entities to ManipulationComponent and RealityKit then runs stably and does not crash randomly. GitHub Repo. Thanks, Andre
3
0
495
Oct ’25
[WWDC25] For GuessTogether, can you initiate a FaceTime call via the custom SharePlay button?
Hello, in the GuessTogether source code, it seems the code assumes that you're already in a FaceTime call before pressing the custom SharePlay button (labeled "Play Guess Together"). If not already on a FaceTime call, my Apple Vision Pro and the visionOS simulator both do nothing beyond logging warnings. Is this intended behavior? If so, how do I make it so that pressing the button can also initiate a FaceTime call? Is this allowed? Thank you!
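For context, a hedged sketch of the general GroupActivities activation pattern, not taken from the GuessTogether sample: calling prepareForActivation()/activate() when no call is active lets the system offer to start one, and whether the sample's activity type supports this flow is the open question:

import GroupActivities

// General GroupActivities pattern (not GuessTogether-specific): activate an
// activity; outside an existing FaceTime call, the system can offer to start one.
func startSharePlay(_ activity: some GroupActivity) async {
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        do {
            _ = try await activity.activate()
        } catch {
            print("Activation failed: \(error)")
        }
    case .activationDisabled, .cancelled:
        break
    @unknown default:
        break
    }
}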
3
0
132
Sep ’25
RealityKit Trace Metric Max/Range for VisionOS app
Hi Nathaniel, I spoke with you yesterday in the WWDC lab. Thanks for chatting with me! Is it possible to get a link to a doc that has some key metrics I'd find in a RealityKit trace so I know if that metric is exceeding limits and probably causing a problem? Right now, I just see numbers and have no idea if a metric is high or low :). This is specifically for a VisionOS app. Thanks, Bob
3
0
142
Jun ’25