Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post

Replies

Boosts

Views

Activity

Custom Material half visible..?
I'm currently implementing 180° / 360° immersive video for my app. I easily implemented 360° by applying a VideoMaterial to a flipped sphere, but I'm stuck on 180°. I'm trying to implement it by applying a VideoMaterial to a hemisphere (half sphere): I want the VideoMaterial to be visible on the front half of the sphere, with the back half transparent/clear. Is there any advice, information, or idea on how to implement this? Your help would be greatly appreciated.
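One approach I'm considering is building the half-sphere mesh procedurally and applying the VideoMaterial to it, roughly like the sketch below (this assumes an already-configured AVPlayer, and the triangle winding / UVs may need flipping so the faces are visible from inside the sphere):

import RealityKit
import AVFoundation

// Sketch: generate a front-facing hemisphere and texture it with a VideoMaterial,
// so only the front half of the sphere shows video and the back half simply has no geometry.
func makeHemisphereEntity(radius: Float, player: AVPlayer) throws -> ModelEntity {
    let slices = 64
    let stacks = 32
    var positions: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    // Vertices cover the full 180° vertically but only -90°...+90° of longitude,
    // so the mesh is just the front half of a sphere.
    for stack in 0...stacks {
        let v = Float(stack) / Float(stacks)
        let phi = v * .pi                               // polar angle, top to bottom
        for slice in 0...slices {
            let u = Float(slice) / Float(slices)
            let theta = (u - 0.5) * .pi                 // longitude, -90° ... +90°
            positions.append(SIMD3(radius * sin(phi) * sin(theta),
                                   radius * cos(phi),
                                   -radius * sin(phi) * cos(theta)))
            uvs.append(SIMD2(u, 1 - v))                 // flip u here if the video appears mirrored
        }
    }

    let cols = UInt32(slices + 1)
    for stack in 0..<stacks {
        for slice in 0..<slices {
            let a = UInt32(stack) * cols + UInt32(slice)
            let b = a + cols
            // Reverse this index order if the mesh is invisible from inside (back-face culling).
            indices += [a, a + 1, b, a + 1, b + 1, b]
        }
    }

    var descriptor = MeshDescriptor(name: "hemisphere")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.textureCoordinates = MeshBuffers.TextureCoordinates(uvs)
    descriptor.primitives = .triangles(indices)

    let mesh = try MeshResource.generate(from: [descriptor])
    return ModelEntity(mesh: mesh, materials: [VideoMaterial(avPlayer: player)])
}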
0
0
124
Oct ’25
Why don’t the dinosaurs in “Encounter Dinosaurs” respond to real-world light intensity?
I have a question about Apple’s preinstalled visionOS app “Encounter Dinosaurs.” In this app, the dinosaurs are displayed over the real-world background, but the PhysicallyBasedMaterial (PBM) in RealityKit doesn’t appear to respond to the actual brightness of the environment. Even when I change the lighting in the room, the dinosaurs’ brightness and shading remain almost the same. If this behavior is intentional — for example, if the app disables real-world lighting influence or uses a fixed lighting setup — could someone explain how and why it’s implemented that way?
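For context, my rough guess at what a "fixed lighting setup" could look like in RealityKit is sketched below. This is only an assumption about the technique, not how Apple actually implemented it, and "StudioLight" is a hypothetical EnvironmentResource bundled with the app:

import RealityKit

// Sketch: pin entities to a fixed image-based light instead of the real-world environment.
func applyFixedLighting(to root: Entity) async throws {
    let environment = try await EnvironmentResource(named: "StudioLight")

    // Attach a fixed image-based light to the root entity...
    root.components.set(ImageBasedLightComponent(source: .single(environment)))

    // ...and have the entity receive only that light (descendants with models
    // may need the receiver component set on them as well).
    root.components.set(ImageBasedLightReceiverComponent(imageBasedLight: root))
}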
1
0
619
Nov ’25
ARKit: Keep USDZ node fixed after image tracking is lost (prevent drifting)
I'm using ARKit + SceneKit (Swift) with ARWorldTrackingConfiguration and detectionImages to place a 3D object (USDZ via SCNScene(named:)) when a reference image is detected. While the image is tracked, the object stays correctly aligned.
Goal: When the tracked image is no longer visible, I want the placed node to remain visible and fixed at its last known pose (no drifting) as I move the camera.
What works so far:
Detect image → add node → track updates
When the image disappears → keep showing the node at its last pose
Problem: After the image is no longer tracked, the node drifts as I move the device/camera. It looks like it's still influenced by the (now unreliable) image anchor or accumulating small world-tracking errors.
Question: What's the correct way in ARKit to "freeze" the node at its last known world transform once ARImageAnchor stops tracking, so it doesn't drift?
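The kind of "freeze" I have in mind is sketched below (assuming the placed node is a child of the image anchor's node; "ViewController" and "sceneView" are placeholders for the hosting view controller and its ARSCNView, and normal world-tracking drift would of course remain):

import ARKit
import SceneKit

// Sketch: once the image anchor stops tracking, detach the placed node from the anchor's
// node and re-parent it under the scene root at its last known world transform.
extension ViewController: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor,
              !imageAnchor.isTracked,                       // tracking was just lost
              let placedNode = node.childNodes.first else { return }

        // Capture the pose in world (scene root) coordinates before detaching.
        let frozenTransform = placedNode.worldTransform
        placedNode.removeFromParentNode()
        sceneView.scene.rootNode.addChildNode(placedNode)
        placedNode.transform = frozenTransform              // keep the last known pose
    }
}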
2
0
492
Oct ’25
Depth matrix accuracy with the iPhone 14 Pro and LiDAR
Hello Community, I’m currently working with the sample code “CapturingDepthUsingTheLiDARCamera” and using it to capture the depth map of an image taken with the iPhone 14 Pro. From this depth map, I generate a point cloud using the intrinsic camera parameters. I've noticed that objects not facing the camera directly appear distorted in the resulting point cloud. For example: An object with surfaces that are perpendicular to each other appears with a sharper angle in the point cloud — around 60° instead of 90°. My question is: Is this due to the general accuracy limitations of the LiDAR sensor? Or could it be related to the sample code? To obtain the depth map, I’m using: AVCapturePhoto.depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32) Thanks in advance for your help!
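For reference, my point-cloud generation boils down to the standard pinhole back-projection sketched below (assuming fx, fy, cx, cy come from cameraCalibrationData.intrinsicMatrix, rescaled from intrinsicMatrixReferenceDimensions to the depth map's resolution):

import simd

// Back-project one depth sample (u, v in pixels, depth in meters) to camera space
// using the pinhole model. fx, fy, cx, cy must match the depth map's resolution.
func unproject(u: Float, v: Float, depth: Float,
               fx: Float, fy: Float, cx: Float, cy: Float) -> SIMD3<Float> {
    let x = (u - cx) * depth / fx
    let y = (v - cy) * depth / fy
    return SIMD3<Float>(x, y, depth)
}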
0
0
117
Apr ’25
How to speed up build time when placing large USDZ files in RCP scenes
I'm currently developing a visionOS app that includes an RCP scene with a large USDZ file (around 2GB). Each time I make adjustments to the CG model in Blender, I export it as USDZ again, place it in the RCP scene, and then build the app using Xcode. However, because the USDZ file is quite large, the build process takes a long time, significantly slowing down my development speed.
For example, I'd like to know if there are any effective ways to:
Improve overall build performance
Reduce the time between updating the USDZ file and completing the build
Any advice or best practices for optimizing this workflow would be greatly appreciated.
Best regards, Sadao
1
0
217
Nov ’25
Apply mesh to real world people.
As far as I know, Apple hasn’t opened access to the Vision Pro camera for developers yet, so I’m trying to find possible workarounds within the current capabilities. I’m wondering if there’s any way to apply a mesh to a person in the scene in Vision Pro, or if there’s an alternative approach to roughly detect a human shape in front of the user?
2
0
108
Apr ’25
ManipulationComponent create parent/child crash
Hello,
If you add a ManipulationComponent to a RealityKit entity and then continue adding instructions, sooner or later you will encounter a crash with the following error message:
Attempting to move entity “%s” (%p) under “%s” (%p), but the new parent entity is currently being removed. Changing the parent/child entities of an entity in an event handler while that entity is already being reassigned is not supported.
CoreSimulator 1048 – Device: Apple Vision Pro 4K (B87DD32A-E862-4791-8B71-92E50CE6EC06) – Runtime: visionOS 26.0 (23M336) – Device Type: Apple Vision Pro
The problem occurs precisely with this code: ManipulationComponent.configureEntity(object)
I adapted Apple's ObjectPlacementExample and made the changes available via GitHub. The desired behavior is that I can add entities to the ManipulationComponent and RealityKit then runs stably and does not crash randomly.
GitHub Repo
Thanks
Andre
3
0
495
Oct ’25
mipmapsMode trade-off?
I am building a 360° photo viewer on visionOS 26 that lets the user choose a 2:1 equirectangular JPEG and renders it on a sphere mesh entity, loading the texture with TextureResource(contentsOf: url, options: options). I've noticed two situations depending on the mipmaps option.
When setting mipmapsMode: .none:
The graphic quality within the "gaze area" looks sharp and clear
The two poles (top and bottom) are rendered perfectly
Massive shimmer around the "gaze area"
When setting mipmapsMode: .allocateAndGenerateAll:
The graphic looks slightly blurrier than with .none within the "gaze area"
The two poles are very blurry and the texture is hard to recognize
Much less shimmer around the "gaze area"
My question: is there a way to get the sharp quality of .none without the massive shimmer? Thank you!
Screenshots: mipmapsMode: .none / mipmapsMode: .allocateAndGenerateAll
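A middle ground I'm thinking of trying is keeping mipmaps but raising the sampler's anisotropy on the material texture, roughly as sketched below (this assumes MaterialParameters.Texture.Sampler accepts an MTLSamplerDescriptor on my target; I haven't verified that it fixes the pole blur):

import RealityKit
import Metal

// Sketch: load the equirectangular texture with mipmaps, then sample it with
// anisotropic filtering to reduce both shimmer and the extra blur from mipmapping.
func makeSphereMaterial(url: URL) async throws -> UnlitMaterial {
    var options = TextureResource.CreateOptions(semantic: .color)
    options.mipmapsMode = .allocateAndGenerateAll
    let texture = try await TextureResource(contentsOf: url, options: options)

    let samplerDescriptor = MTLSamplerDescriptor()
    samplerDescriptor.minFilter = .linear
    samplerDescriptor.magFilter = .linear
    samplerDescriptor.mipFilter = .linear
    samplerDescriptor.maxAnisotropy = 16      // sample anisotropically across mip levels

    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture, sampler: .init(samplerDescriptor)))
    return material
}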
0
0
230
Oct ’25
RoomPlan crash on iOS 16 devices even when using available checks
I've encountered an unexpected crash with RoomPlan on iOS 16 devices. The odd part is that the code is protected by an availability check, since I'm using newer RoomPlan features.
Xcode error:
dyld[40588]: Symbol not found: _$s8RoomPlan08CapturedA0V16USDExportOptionsV5modelAEvgZ
I can repro using the Apple sample code: https://developer.apple.com/documentation/roomplan/create-a-3d-model-of-an-interior-room-by-guiding-the-user-through-an-ar-experience
Modify RoomCaptureViewController.swift as follows.
Remove:
try finalResults?.export(to: destinationURL, exportOptions: .parametric)
Add:
if #available(iOS 17.0, *) {
    try finalResults?.export(to: destinationURL, exportOptions: .model)
} else {
    try finalResults?.export(to: destinationURL, exportOptions: .parametric)
}
I would have expected this code to at least compile and run on older devices. When the app was targeting iOS 15, the availability checks worked as expected and the app was able to launch properly.
0
0
166
Apr ’25
Cursor display issue on attachment view in immersive space
While using Screen Mirroring in developer mode within my immersive space, I noticed an alignment issue with the computer cursor (the transparent circle). When I move it toward an attachment view, the cursor remains horizontal instead of aligning with the surface of the attachment view. It shows correctly on a 2D window; it is only wrong on an attachment view. Is this behavior a bug, or could it be caused by a missing or incorrect configuration on the attachment view? Any help would be appreciated, thanks.
1
0
94
Apr ’25
Materials and Shaders do not render correctly on visionOS devices when developing with Unity
Specifically: the materials display normally in the Unity editor, but after deploying to a physical Vision Pro, some materials are lost or the Shader effects are abnormal (for example, the transparency channel stops working and lighting calculations are wrong). This problem has affected development progress, and I hope to get help from technical support.
0
0
72
Apr ’25
version update in Vision Pro
Hi, I'm developing an app for Vision Pro using Xcode. After installing the latest update, things that worked in my app suddenly stopped working. In my app flow I'm tapping spheres to get their positions; for some reason there is an offset between where I tap and where the marker for that position shows up. Here's the part of the code that does that, plus the part responsible for the alignment that happens afterwards:

func loadMainScene(at position: SIMD3) async {
    guard let content = self.content else { return }
    do {
        let rootEntity = try await Entity(named: "surgery 16.09", in: realityKitContentBundle)
        rootEntity.scale = SIMD3<Float>(repeating: 0.5)
        rootEntity.generateCollisionShapes(recursive: true)
        self.modelRootEntity = rootEntity

        let bounds = rootEntity.visualBounds(relativeTo: nil)
        print("📏 Model bounds: center=\(bounds.center), extents=\(bounds.extents)")

        let pivotEntity = Entity()
        pivotEntity.addChild(rootEntity)
        self.pivotEntity = pivotEntity

        let modelAnchor = AnchorEntity(world: [1, 1.3, -0.8])
        modelAnchor.addChild(pivotEntity)
        content.add(modelAnchor)
        updateModelOpacity(0.5)
        self.modelAnchor = modelAnchor

        rootEntity.visit { entity in
            print("👀 Entity in model: \(entity.name)")
            if entity.name.lowercased().hasPrefix("focus") {
                entity.generateCollisionShapes(recursive: true)
                entity.components.set(InputTargetComponent())
                print("🎯 Made tappable: \(entity.name)")
            }
        }
        print("✅ Model loaded with collisions")

        guard let sphere = placementSphere else { return }
        let sphereWorldXform = sphere.transformMatrix(relativeTo: nil)
        var newXform = sphereWorldXform
        newXform.columns.3.y += 0.1 // move up by 20 cm
        let gridAnchor = AnchorEntity(world: newXform)
        self.gridAnchor = gridAnchor
        content.add(gridAnchor)

        let baseScene = try await Entity(named: "Scene", in: realityKitContentBundle)
        let gridSizeX = 18
        let gridSizeY = 10
        let gridSizeZ = 10
        let spacing: Float = 0.05
        let startX: Float = -Float(gridSizeX - 1) * spacing * 0.5 + 0.3
        let startY: Float = -Float(gridSizeY - 1) * spacing * 0.5 - 0.1
        let startZ: Float = -Float(gridSizeZ - 1) * spacing * 0.5

        for i in 0..<gridSizeX {
            for j in 0..<gridSizeY {
                for k in 0..<gridSizeZ {
                    if j < 2 || j > gridSizeY - 5 { continue } // remove 2 bottom, 4 top
                    let cell = baseScene.clone(recursive: true)
                    cell.name = "Sphere"
                    cell.scale = .one * 0.02
                    cell.position = [
                        startX + Float(i) * spacing,
                        startY + Float(j) * spacing,
                        startZ + Float(k) * spacing
                    ]
                    cell.generateCollisionShapes(recursive: true)
                    gridCells.append(cell)
                    gridAnchor.addChild(cell)
                }
            }
        }
        content.add(gridAnchor)
        print("✅ Grid added")
    } catch {
        print("❌ Failed to load: \(error)")
    }
}

private func handleModelOrGridTap(_ tappedEntity: Entity) {
    guard let modelRootEntity = modelRootEntity else { return }
    let localPosition = tappedEntity.position(relativeTo: modelRootEntity)
    let worldPosition = tappedEntity.position(relativeTo: nil)

    switch tapStep {
    case 0:
        modelPointA = localPosition
        modelAnchor?.addChild(createMarker(at: worldPosition, color: [1, 0, 0]))
        print("📍 Model point A: \(localPosition)")
        tapStep += 1
    case 1:
        modelPointB = localPosition
        modelAnchor?.addChild(createMarker(at: worldPosition, color: [1, 0.5, 0]))
        print("📍 Model point B: \(localPosition)")
        tapStep += 1
    case 2:
        targetPointA = worldPosition
        targetMarkerA = createMarker(at: worldPosition, color: [0, 1, 0])
        modelAnchor?.addChild(targetMarkerA!)
        print("✅ Target point A: \(worldPosition)")
        tapStep += 1
    case 3:
        targetPointB = worldPosition
        targetMarkerB = createMarker(at: worldPosition, color: [0, 0, 1])
        modelAnchor?.addChild(targetMarkerB!)
        print("✅ Target point B: \(worldPosition)")
        alignmentReady = true
        tapStep += 1
    default:
        print("⚠️ Unexpected tap on model helper at step \(tapStep)")
    }
}

func alignModel2Points() {
    guard let modelPointA = modelPointA,
          let modelPointB = modelPointB,
          let targetPointA = targetPointA,
          let targetPointB = targetPointB,
          let modelRootEntity = modelRootEntity,
          let pivotEntity = pivotEntity,
          let modelAnchor = modelAnchor else {
        print("❌ Missing data for alignment")
        return
    }

    let modelVec = modelPointB - modelPointA
    let targetVec = targetPointB - targetPointA
    let modelLength = length(modelVec)
    let targetLength = length(targetVec)
    let scale = targetLength / modelLength
    let modelDir = normalize(modelVec)
    let targetDir = normalize(targetVec)

    var axis = cross(modelDir, targetDir)
    let axisLength = length(axis)
    var rotation = simd_quatf()
    if axisLength < 1e-6 {
        if dot(modelDir, targetDir) > 0 {
            rotation = simd_quatf(angle: 0, axis: [0, 1, 0])
        } else {
            let up: SIMD3<Float> = [0, 1, 0]
            axis = cross(modelDir, up)
            if length(axis) < 1e-6 {
                axis = cross(modelDir, [1, 0, 0])
            }
            rotation = simd_quatf(angle: .pi, axis: normalize(axis))
        }
    } else {
        let dotProduct = dot(modelDir, targetDir)
        let clampedDot = max(-1.0, min(dotProduct, 1.0))
        let angle = acos(clampedDot)
        rotation = simd_quatf(angle: angle, axis: normalize(axis))
    }

    modelRootEntity.scale = .one * scale
    modelRootEntity.orientation = rotation
    let transformedPointA = rotation.act(modelPointA * scale)
    pivotEntity.position = -transformedPointA
    modelAnchor.position = targetPointA
    alignedModelPosition = modelAnchor.position
    print("✅ Aligned with scale \(scale), rotation \(rotation)")
2
0
317
Oct ’25
Share Extensions embedded in visionOS apps
I'm trying to add a feature to my app to allow a user to import items from other apps, like Safari, via the share sheet. I've done this many times on iOS/iPadOS easily with a Share Extension. From what I can tell, Xcode tells me share extensions are not available on visionOS, though my experience on device tells me differently (Reminders, Notes, and more seem to implement them somehow). I was finally able to get it working on device only, but I can now no longer test in the simulator and have not found a way to distribute this app.
When attempting to run on the simulator, I get this issue:
Please try again later. Appex bundle at /Users/jason/Library/Developer/CoreSimulator/Devices/09A70160-4F4F-4F5E-B679-F6F7D876D7EF/data/Library/Caches/com.apple.mobile.installd.staging/temp.6OAEZp/extracted/LaunchBar.app/PlugIns/LaunchBarShareExtension.appex with id co.swiftfox.LaunchBar.ShareExtension specifies a value (com.apple.share-services) for the NSExtensionPointIdentifier key in the NSExtension dictionary in its Info.plist that does not correspond to a known extension point.
When trying to archive and upload to TestFlight, I get this similar error:
Invalid Info.plist value. The value for the key 'DTPlatformName' in bundle LaunchBar.app/PlugIns/LaunchBarShareExtension.appex is invalid. (ID: 207610c7-b7e1-48be-959b-22a43cd32d16)
The app is for visionOS only, which I'm thinking might be the problem. The share extension is "Designed for iPhone" and requires me to include iPhone as a run destination. In the worst case I can build an iPhone UI for the app, but I'd rather not, as it is very specific to visionOS.
Has anyone successfully launched a share extension in a visionOS-only app? I have an iPad app with a share extension that shows up fine on visionOS, but the issue seems to be specifically with visionOS-only apps.
1
0
109
Oct ’25
VisionPro Enterprise.license file
I have read in the Apple documentation and on forums that in order to access the camera and capture images on Vision Pro, both an entitlement and an Enterprise.license file are required. I already have the entitlement, but I don't yet have the Enterprise.license. I would like to ask: is the Enterprise.license strictly required to gain camera access for capturing images? How can I obtain this file, and does it require an Enterprise account? Currently, my developer account is a regular $99 Developer account, not an Enterprise account.
2
0
411
Oct ’25