Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post

Replies

Boosts

Views

Activity

Request: More Fine-Grained Control in Object Capture (PhotogrammetrySession)
Hi Apple Team and Developers,

First of all, I'd like to express my appreciation for the incredible results achieved using PhotogrammetrySession. I've been developing a portrait scanning app using Object Capture, and in many tests (especially with human models) I've found the reconstructed body surfaces are remarkably smooth and clean, often outperforming tools like Metashape and RealityCapture in terms of aesthetic results.

However, I've encountered some challenges when working with complex areas like long hair overlapping the face. For instance, with female models where strands of hair partially occlude the face, the resulting mesh tends to merge the hair and facial geometry. This leads to distorted or "melted" facial features, likely due to ambiguity in the geometry estimation phase.

Feature suggestion: would it be possible to allow developers to supply two versions of the input images?
• One version (original) for texture generation
• A pre-processed version (e.g., contrast-enhanced or CLAHE filtered) to guide mesh reconstruction only

This would give us the flexibility to enhance edge features or shadow detail without affecting the final texture appearance. In other photogrammetry pipelines, applying image enhancement selectively before dense reconstruction improves geometry quality in low-contrast areas.

Questions: Is there any plan to support this kind of two-path workflow in future versions of PhotogrammetrySession? Or perhaps to expose more intermediate stages or tunable parameters to developers? Also, any hints on what we can expect from WWDC 2025 regarding improvements to Object Capture or related vision/3D technologies?

Thanks again for this powerful API. Looking forward to hearing insights from the team and other developers.

Warm regards,
KitCheng
0
0
73
May ’25
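For context on the single-path API the post above refers to, here is a minimal PhotogrammetrySession sketch (a hedged example, not Apple sample code; the folder and output URLs are placeholders). Today the session takes one image folder that drives both geometry estimation and texturing, which is why the requested two-input workflow would need new API:

import RealityKit

func reconstruct(imagesFolder: URL, outputURL: URL) async throws {
    // One input folder currently feeds both mesh estimation and texturing.
    let session = try PhotogrammetrySession(input: imagesFolder,
                                            configuration: PhotogrammetrySession.Configuration())
    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])
    for try await output in session.outputs {
        if case .requestComplete = output { print("Wrote \(outputURL.path)") }
    }
}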
Spaceship sample code does not compile
I'm getting the following error when compiling Apple's Spaceship game sample for Apple Vision Pro. I've already tried deleting derived data, resetting the package cache, and restarting Xcode, but I still get this error:

[xrsimulator] Exception thrown during compile: Cannot get rkassets content for path /Users/myoungkang/Downloads/CreatingASpaceshipGame/Packages/Studio/Sources/Studio/Studio.rkassets because 'The file “Studio.rkassets” couldn’t be opened because you don’t have permission to view it.'
error: Tool exited with code 1
0
1
64
Apr ’25
Custom Material half visible..?
I'm currently implementing 180° / 360° immersive video for my app. I implemented 360° easily by applying a VideoMaterial to a flipped sphere, but I'm stuck on 180°. I'm trying to apply a VideoMaterial to a hemisphere (half sphere): I want the VideoMaterial to be visible on the front half of the sphere and the back half to be transparent / clear. Is there any advice, information, or idea on how to implement this? Your help would be greatly appreciated.
0
0
102
Oct ’25
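One way to approach the 180° question above is to build the half-sphere geometry yourself and apply the VideoMaterial to it, so nothing exists behind the viewer at all. Below is a hedged sketch using MeshDescriptor; the UV layout assumes a 180° equirectangular video frame, and you may need to flip the triangle winding or mirror the X scale so the inside of the shell faces the camera:

import Foundation
import RealityKit
import simd

// Inward-facing hemisphere spanning 180° in front of the viewer.
func makeHemisphereMesh(radius: Float, slices: Int = 64, stacks: Int = 32) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for stack in 0...stacks {
        let v = Float(stack) / Float(stacks)      // 0 at the top pole, 1 at the bottom
        let phi = v * .pi
        for slice in 0...slices {
            let u = Float(slice) / Float(slices)  // sweep only 180° horizontally
            let theta = u * .pi - .pi / 2
            positions.append([radius * sin(phi) * sin(theta),
                              radius * cos(phi),
                              -radius * sin(phi) * cos(theta)])
            uvs.append([u, 1 - v])
        }
    }
    let columns = UInt32(slices + 1)
    for stack in 0..<UInt32(stacks) {
        for slice in 0..<UInt32(slices) {
            let a = stack * columns + slice
            let b = a + columns
            // Wound so the inner surface is the front face; flip if it renders inverted.
            indices.append(contentsOf: [a, b, a + 1, a + 1, b, b + 1])
        }
    }

    var descriptor = MeshDescriptor(name: "hemisphere")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.textureCoordinates = MeshBuffers.TextureCoordinates(uvs)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}

// Usage: entity.components.set(ModelComponent(mesh: mesh, materials: [videoMaterial]))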
Is it Possible to Place a 3D Model at the Exact Position of a QR Code in AR with ARKit?
I'm working on an iOS app using ARKit and RealityKit where I scan QR codes and want to place 3D models at the exact position of the QR code in the real world. Is it possible to accurately place a 3D model at the exact position of a QR code in AR using ARKit and RealityKit? Specifically, I want the model to appear at the precise location where the QR code is detected, rather than just somewhere in the AR space. If this is possible, could you point me in the right direction or recommend the best approach to achieve this? Thank you for your help!
0
0
105
May ’25
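One workable route for the question above, sketched here with hedges: ARKit has no built-in QR anchor, so this combines Vision barcode detection with an ARView raycast. The screen-point conversion below is deliberately simplified and ignores the camera image's aspect-fill cropping, so treat it as a starting point rather than a finished solution:

import ARKit
import RealityKit
import Vision

// Detect a QR code in the current frame and drop a model where a raycast
// through the code's center hits real-world geometry.
func placeModel(_ model: Entity, using frame: ARFrame, in arView: ARView) {
    let request = VNDetectBarcodesRequest { request, _ in
        guard let barcode = (request.results as? [VNBarcodeObservation])?.first else { return }
        // Vision returns a normalized bounding box with a bottom-left origin.
        let normalized = CGPoint(x: barcode.boundingBox.midX, y: 1 - barcode.boundingBox.midY)
        DispatchQueue.main.async {
            // Simplified mapping to view coordinates (ignores aspect-fill cropping).
            let point = CGPoint(x: normalized.x * arView.bounds.width,
                                y: normalized.y * arView.bounds.height)
            if let hit = arView.raycast(from: point, allowing: .estimatedPlane, alignment: .any).first {
                let anchor = AnchorEntity(world: hit.worldTransform)
                anchor.addChild(model)
                arView.scene.addAnchor(anchor)
            }
        }
    }
    request.symbologies = [.qr]
    try? VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, orientation: .right)
        .perform([request])
}

If the QR code artwork is known in advance, adding it as an ARReferenceImage and using ARImageAnchor generally gives more stable placement than raycasting through a detected bounding box.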
AR session fails with "Required sensor failed"
The AR-based app I am working on is experiencing an issue. Sometimes the AR session fails with a call to my ARSessionObserver's session(_ session: ARSession, didFailWithError error: Error) with the following error:

Error Domain=com.apple.arkit.error Code=102 "Required sensor failed." NSLocalizedFailureReason="A sensor failed to deliver the required input.," NSLocalizedRecoverySuggestion="Make sure that the application has the required privacy settings."

The underlying error seems to point to the CoreMotion framework:

Domain=CMErrorDomain Code=102 "(null)

Some people seem to have experienced this issue and solved it by making sure that the Compass Calibration switch is ON in Settings > Privacy > Location Services > System Services. For context, ARWorldTrackingConfiguration.worldAlignment is set to .gravity. The thing is, that switch is already ON when I experience this issue. I also noticed that the issue happens far more often on the iPhone 16e than on any other device. Has anyone had a similar experience? I am looking for a way to prevent this error from happening (ideally) or to handle it in a way that does not affect the user. Any help is appreciated.
0
1
196
Aug ’25
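Not a fix for the sensor failure itself, but one hedged mitigation is to detect ARError.sensorFailed in the delegate and restart the session after a short delay so the user only sees a brief interruption (the one-second delay below is an arbitrary choice):

import ARKit

// In the ARSessionObserver / ARSessionDelegate implementation:
func session(_ session: ARSession, didFailWithError error: Error) {
    guard (error as? ARError)?.code == .sensorFailed else { return }
    // Give CoreMotion a moment, then rerun the configuration with a fresh tracking state.
    DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
        let configuration = ARWorldTrackingConfiguration()
        configuration.worldAlignment = .gravity
        session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    }
}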
Vision Pro MR app issue: location reset
We use Unity 6 + visionOS 2.2 to develop an MR application (app mode: RealityKit with PolySpatial). In actual testing, we found that when I move more than roughly 80-100 meters away from the position where the application started, my current position is reset to Vector.zero, which makes the experience very poor. Is anyone experiencing the same problem? Is there a solution?
0
0
221
Jan ’25
Unexpected behavior when writing entities and loading .reality files
I have a simple visionOS app that creates an Entity, writes it to the device, and then attempts to load it. However, when the entity file gets overwritten, the app can no longer load it correctly. Here is my code for saving the entity:

import SwiftUI
import RealityKit
import UniformTypeIdentifiers

struct ContentView: View {
    var body: some View {
        VStack {
            ToggleImmersiveSpaceButton()
            Button("Save Entity") {
                Task {
                    // if let entity = await buildEntityHierarchy(from: urdfPath) {
                    let type = UTType.realityFile
                    let filename = "testing.\(type.preferredFilenameExtension ?? "bin")"
                    let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
                    let fileURL = documentsURL.appendingPathComponent(filename)
                    do {
                        let mesh = MeshResource.generateBox(size: 1, cornerRadius: 0.05)
                        let material = SimpleMaterial(color: .blue, isMetallic: true)
                        let modelComponent = ModelComponent(mesh: mesh, materials: [material])
                        let entity = Entity()
                        entity.components.set(modelComponent)
                        print("Writing \(fileURL)")
                        try await entity.write(to: fileURL)
                    } catch {
                        print("Failed writing")
                    }
                }
            }
        }
        .padding()
    }
}

Every time I press "Save Entity", I see a warning similar to:

Writing file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality
Failed to set dependencies on asset 1941054755064863441 because NetworkAssetManager does not have an asset entity for that id.

When I open the immersive space, I attempt to load the same file:

import SwiftUI
import RealityKit
import UniformTypeIdentifiers

struct ImmersiveView: View {
    @Environment(AppModel.self) private var appModel

    var body: some View {
        RealityView { content in
            guard let type = UTType.realityFile.preferredFilenameExtension else { return }
            let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
            let fileURL = documentsURL.appendingPathComponent("testing.\(type)")
            guard FileManager.default.fileExists(atPath: fileURL.path) else {
                print("❌ File does not exist at path: \(fileURL.path)")
                return
            }
            if let entity = try? await Entity(contentsOf: fileURL) {
                content.add(entity)
            }
        }
    }
}

I also get errors after I overwrite the entity (by pressing "Save Entity" after I have successfully loaded it once). The warnings that appear when the immersive space attempts to load the new entity are:

Asset 13277375032756336327 Mesh (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Mesh_0.compiledmesh failure: Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to open load stream for entry 'assets/Mesh_0.compiledmesh'.
Asset 8308977590385781534 Scene (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Scene_0.compiledscene failure: Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to read archive entry.
AssetLoadRequest failed because asset failed to load '13277375032756336327 Mesh (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Mesh_0.compiledmesh' (Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to open load stream for entry 'assets/Mesh_0.compiledmesh'.)
The order of operations to make this happen:
1. Launch the app
2. Press "Save Entity" to save the entity
3. "Open Immersive Space" to view the entity
4. Press "Save Entity" to overwrite the entity
5. "Open Immersive Space" to view the entity: the asset load request fails

Also:
1. Launch the app (the entity should still be saved from the last time the app ran)
2. "Open Immersive Space" to view the entity
3. Press "Save Entity" to overwrite the entity
4. "Open Immersive Space" to view the entity: the asset load request fails

NOTE: It appears I can get it to work slightly better by pressing the "Save Entity" button twice before attempting to view it again in the immersive space.
0
0
178
Aug ’25
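A hedged workaround idea for the overwrite case described above (untested against this exact loader behavior; the helper name is made up): write the new archive to a temporary file first and atomically swap it into place, so the loader never observes a partially written testing.reality.

import Foundation
import RealityKit

func saveEntityAtomically(_ entity: Entity, to destination: URL) async throws {
    let tempURL = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension(destination.pathExtension)
    // Write the full archive somewhere else first...
    try await entity.write(to: tempURL)
    // ...then replace the old file in a single step.
    _ = try FileManager.default.replaceItemAt(destination, withItemAt: tempURL)
}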
When running Unity applications in the shared space of visionOS, the Mask/RectMask2D of ScrollView does not function properly
Problem description: I am developing an application that runs in the Shared Space on Apple Vision Pro using Unity. When using the UI ScrollView component, I found that the Mask / RectMask2D does not function in the Shared Space. Scrolling content is not masked or cropped; it extends beyond the view boundary and is displayed directly. The same UI works correctly on platforms such as the Unity Editor, iOS, and macOS; the issue only occurs in the shared space on Vision Pro.

Reproduction steps:
1. Create a ScrollView in Unity.
2. Add a Mask or RectMask2D to the viewport.
3. Deploy the application to Apple Vision Pro and run it in Shared Space mode.
4. Scrolling content is not clipped by the mask; the masked area is entirely ineffective.

Expected behavior: the content of the ScrollView should be properly clipped by the Mask / RectMask2D and should not render outside the mask boundary.

Actual result: in the shared space on Vision Pro the mask is ineffective, causing scrolling content to extend beyond the designated area and resulting in severe UI distortion.

Environment:
Device: Apple Vision Pro
Mode: Shared Space
Unity version: 6000.0.40f1
visionOS version: visionOS 26.0
Unity PolySpatial version: 2.0.4

Impact: this issue causes Unity UI to fail to display correctly on Vision Pro, preventing ScrollView from properly clipping content, which affects the UI experience and interactions in real applications.

Expected result: when running a Unity app in the shared space of visionOS, the Mask / RectMask2D of ScrollView functions correctly.
0
0
81
1w
visionOS widget dimensions?
Is there any size guidance for the new WidgetKit integration on visionOS? The Widget HIG provides dimensions for all the widget size classes on iOS, iPadOS, and watchOS, but has not been updated for visionOS. https://developer.apple.com/design/human-interface-guidelines/widgets My potential widget use case is image-based, so I'm looking to better understand the optimal size, resolution, etc. that I would need, particularly for the new visionOS-specific extra-large widget size.
0
0
568
Jul ’25
360 image quality too low even with 72 MP: how to improve it or decrease the sphere size?
Using a 360° image that I have taken at 72 MP with an Insta360 X3, I would like to add those images to my Vision Pro app and see them surrounding me completely, as one expects of a 360° image. I was able to do this by following a tutorial. The problem is the quality: in my 2D window the image looks great. Here is the code:

import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    var body: some View {
        RealityView { content in
            content.add(createImmersivePicture(imageName: appModel.activeSpace))
        }
    }

    func createImmersivePicture(imageName: String) -> Entity {
        let sphereRadius: Float = 1000
        let modelEntity = Entity()
        let texture = try? TextureResource.load(named: imageName, options: .init(semantic: .raw, compression: .none))
        var material = UnlitMaterial()
        material.color = .init(texture: .init(texture!))
        modelEntity.components.set(
            ModelComponent(
                mesh: .generateSphere(radius: sphereRadius),
                materials: [material]
            )
        )
        modelEntity.scale = .init(x: -1, y: 1, z: 1)
        modelEntity.transform.translation += SIMD3<Float>(0.0, 10.0, 0.0)
        return modelEntity
    }
}

Since the quality is a problem, I thought about reducing the radius of the sphere or decreasing the scale. In both cases nothing changes. I have tried:

modelEntity.scale = .init(x: -0.5, y: 0.5, z: 0.5)

and also let sphereRadius: Float = 2000 and let sphereRadius: Float = 500, but nothing changes. I also get the warning:

IOSurface creation failed: e00002c2 parentID: 00000000 properties: {
    IOSurfaceAddress = 4651830624;
    IOSurfaceAllocSize = 35478941;
    IOSurfaceCacheMode = 0;
    IOSurfaceMapCacheAttribute = 1;
    IOSurfaceName = CMPhoto;
    IOSurfacePixelFormat = 1246774599;
}
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceCacheMode
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfacePixelFormat
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceMapCacheAttribute
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceAddress
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceAllocSize
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceName

Is there anything I can do to reduce the radius, or just to improve the quality itself?
0
0
375
Jan ’25
visionOS 26 new hand tracking: how can I use it to replace hands in a virtual environment?
I am still not finding resources on how to replace hands in a fully immersive space. I reached the goal by creating an ARKit session that detects the hand joints and connecting a USDZ hand mesh to the tracked joints, but I feel that's not the best solution. I really want to use RealityKit's potential to track and replace hands (with skinned USDZ ones) in an immersive environment, but the only resources I found are from November 2023. Can someone help me?
0
0
94
Jun ’25
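For reference, a minimal sketch of the lower-level ARKit route the post above mentions, with assumptions: it uses visionOS hand tracking via HandTrackingProvider, and driving a full skinned USDZ rig joint-by-joint is omitted; only the wrist transform is applied to a placeholder entity per hand.

import ARKit
import RealityKit

@MainActor
func runHandTracking(left: Entity, right: Entity) async throws {
    // Requires the hand-tracking usage description and user authorization.
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    try await session.run([handTracking])

    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor
        guard anchor.isTracked else { continue }
        let target = (anchor.chirality == .left) ? left : right
        // Place the replacement hand at the anchor's transform.
        target.setTransformMatrix(anchor.originFromAnchorTransform, relativeTo: nil)
    }
}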
Camera settings at intrinsic calibration time
Hi everyone, I am wondering which settings the camera(s) were using at the time their intrinsics were calibrated. One aspect that is easy to find is the reference resolution of the images used when calibrating the intrinsics, available through intrinsicMatrixReferenceDimensions; this ensures the principal point is referenced to the resolution in use when the calibration was performed. However, I recently noticed that there are focusing modes that can physically displace the lens, for example:

autoFocusRangeRestriction: none, near, far
setFocusModeLocked: locks the lens position at the specified value and sets the focus mode to a locked state.

My concern is the impact these lens displacements can have on the intrinsic matrix parameters, since the parameters may no longer describe the camera once the lens position has changed. In simple words: what focus mode/range were the cameras set to when they were calibrated for intrinsics?
0
0
510
Jan ’25
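Not an answer about the factory calibration state, but a hedged sketch of keeping your own capture consistent with the intrinsics you use: lock the lens at a known position and enable per-frame intrinsic matrix delivery, so each buffer carries the matrix that matches the current lens state (the 0.8 lens position is an arbitrary example).

import AVFoundation

func configure(device: AVCaptureDevice, connection: AVCaptureConnection) throws {
    try device.lockForConfiguration()
    if device.isLockingFocusWithCustomLensPositionSupported {
        // 0.0 = nearest, 1.0 = farthest; pick the position you actually capture with.
        device.setFocusModeLocked(lensPosition: 0.8, completionHandler: nil)
    }
    device.unlockForConfiguration()

    // Per-buffer intrinsics arrive as the kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix
    // attachment when delivery is enabled on the connection.
    if connection.isCameraIntrinsicMatrixDeliverySupported {
        connection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}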
Reality Composer Pro Performance on iOS
I need help wrapping my head around this... If I import the Reality Composer Pro package and load it into an ARView, I see 1.3 GB of memory usage and about 180-220% CPU usage. The frame rate starts at around 60 fps and eventually drops to around 30 fps. If I export the USDZ from Reality Composer Pro and load that into the same ARView, I see about 1 GB of memory usage and around 150% CPU usage; the frame rate holds at 60 fps longer but eventually drops. If I load that same USDZ into a QuickLook view, I see about 55 MB of memory usage, 9-11% CPU, and the frame rate stays locked at 116 fps. The only thing I notice is that the button I have is slightly less responsive, but it all still works fine. I don't understand. How can I make the ARView work as efficiently as QuickLook?
0
0
245
Mar ’25
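Part of the gap is that QuickLook runs a much leaner pipeline than a fully featured ARView. A hedged first step is to switch off the optional render features you don't need and measure which ones actually matter for your scene:

import RealityKit

func reduceRenderCost(of arView: ARView) {
    // Disable optional effects that add per-frame GPU/CPU cost.
    arView.renderOptions.formUnion([
        .disableMotionBlur,
        .disableDepthOfField,
        .disablePersonOcclusion,
        .disableGroundingShadows,
        .disableCameraGrain
    ])
    // Also inspect the Reality Composer Pro scene for heavy materials, particle
    // emitters, or high-poly meshes that the exported USDZ may not include.
}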
.usdz files not loading in iOS app
Hello everyone, I'm a new developer and I'm still learning the foundations of Swift and SwiftUI while building my first app. Today I wanted to ask how to implement AR Quick Look views inside my app. I want to be able to dynamically preview AR objects in a dedicated view; however, I don't seem to have understood where and how to locate AR objects inside my project. I tried including them in the Assets folder of the project, in a Resources folder, and in the main folder of my project alongside the MyAppApp.swift file. None of these approaches worked, in that none of the objects was ever located. I made sure to specify the path to the files every time, but somehow the location isn't recognized. I also tried giving no path so that the app would search for the files in their default location (which I apparently haven't grasped yet), but that attempt failed as well. I don't have the code sample on me at the moment, but I will write a follow-up comment on this post to show you what I wrote, in case anyone is interested in debugging my code. Meanwhile, if anyone would be so kind as to point me to the relevant support article, or to comment below with the sample code they used in their app, I would very much appreciate it, so that I can start debugging. Thank you for reading this, I appreciate you.
0
0
117
Jun ’25
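A hedged minimal pattern for the question above, assuming the .usdz is added to the app target so it ends up in the main bundle ("robot" is a placeholder file name): wrap QLPreviewController in UIViewControllerRepresentable and hand it an ARQuickLookPreviewItem.

import SwiftUI
import QuickLook
import ARKit

struct ARQuickLookView: UIViewControllerRepresentable {
    let modelName: String   // e.g. "robot" for robot.usdz in the main bundle

    func makeUIViewController(context: Context) -> QLPreviewController {
        let controller = QLPreviewController()
        controller.dataSource = context.coordinator
        return controller
    }
    func updateUIViewController(_ controller: QLPreviewController, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(modelName: modelName) }

    final class Coordinator: NSObject, QLPreviewControllerDataSource {
        let modelName: String
        init(modelName: String) { self.modelName = modelName }

        func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }
        func previewController(_ controller: QLPreviewController,
                               previewItemAt index: Int) -> QLPreviewItem {
            // The file must be a bundle resource (check Target Membership in Xcode).
            let url = Bundle.main.url(forResource: modelName, withExtension: "usdz")!
            return ARQuickLookPreviewItem(fileAt: url)
        }
    }
}

If Bundle.main.url comes back nil, the usual culprit is the file's Target Membership checkbox: a file shown in the Xcode navigator is not necessarily copied into the app bundle.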
Persisting Anchors in RealityView with ARMode on iOS
Platform: iOS 18
Tech: RealityView

Hi! I was wondering whether RealityView now provides a way for its session to persist anchor data in a world, such that the anchor locations from one session can be saved and loaded in another session to restore the exact same anchor positions. I know that ARWorldMap in ARKit does that, but I was not able to find a way to use it with RealityView. I think it's because RealityView has ARKit under the hood but does not expose the ARKit session info publicly to client code. So I was wondering whether there's a SwiftUI + RealityView approach that can help me achieve a similar goal: come back to the same location and see the object in exactly the same place. Thanks!
0
1
536
Jan ’25
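For reference, a hedged sketch of the classic route with ARView, which does expose its ARSession (unlike RealityView on iOS, as the post notes): save the ARWorldMap, archive it to disk, and feed it back as initialWorldMap on the next run so saved anchors reappear in place.

import ARKit
import RealityKit

func saveWorldMap(from arView: ARView, to url: URL) {
    arView.session.getCurrentWorldMap { map, error in
        guard let map else { print(error ?? "no map"); return }
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true) {
            try? data.write(to: url, options: .atomic)
        }
    }
}

func restoreWorldMap(into arView: ARView, from url: URL) {
    guard let data = try? Data(contentsOf: url),
          let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                            from: data) else { return }
    let config = ARWorldTrackingConfiguration()
    config.initialWorldMap = map
    arView.session.run(config, options: [.resetTracking, .removeExistingAnchors])
}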
How to implement the semi-transparent overlay effect in Immersive View?
I noticed that when I drag the menu window in an immersive view, the entities behind it become semi-transparent, and the boundary between virtual and real-world objects is very pronounced. How does visionOS implement this effect? Is there any API or technique I can use in my own code to enable the same semi-transparent overlay, even when I am not dragging the menu window?
0
0
116
May ’25
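I'm not aware of a public API that reproduces that exact system treatment, but as a hedged partial approach you can fade your own entities with RealityKit's OpacityComponent whenever you want a similar dimmed look (the 0.4 value below is arbitrary):

import RealityKit

// Toggle a dimmed look on an entity by adjusting how opaquely RealityKit renders it.
func setDimmed(_ dimmed: Bool, on entity: Entity) {
    entity.components.set(OpacityComponent(opacity: dimmed ? 0.4 : 1.0))
}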
Entity jittering when using hand anchor
Hello! I'm a developer of Ping Pong for Apple Vision Pro (Ping Pong Club). I'm experiencing an issue with collisions between the ball and the paddle: sometimes the ball unexpectedly flies away as if it had been hit too hard. I believe this happens because the hand anchors are jittering (the ping pong paddle has a hand anchor component), so at the moment of collision unwanted micro-motion is produced. I've built a demo app to demonstrate this: https://drive.google.com/file/d/1gBh-DsXuVZdvDrrD7gJcbJ_hg7tvVKUV/view?usp=share_link Video: https://drive.google.com/file/d/1poPbsJdBz87edqrWIfBt1eSVzfm5CTek/view?usp=sharing Is there a way to add some threshold to prevent the jittering?
0
0
89
Jun ’25
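One hedged mitigation is to stop parenting the paddle directly to the hand anchor and instead low-pass filter the tracked transform each frame before handing it to the physics entity; alpha trades responsiveness against smoothness (the class and parameter names below are made up):

import RealityKit
import simd

final class SmoothedFollow {
    private var position: SIMD3<Float>?
    private var rotation: simd_quatf?

    // Call once per frame, e.g. from a RealityKit System or a SceneEvents.Update subscription.
    func update(target: Entity, toFollow tracked: Entity, alpha: Float = 0.25) {
        let targetPos = tracked.position(relativeTo: nil)
        let targetRot = tracked.orientation(relativeTo: nil)
        // Exponential smoothing of position and rotation toward the tracked values.
        let newPos = position.map { simd_mix($0, targetPos, SIMD3<Float>(repeating: alpha)) } ?? targetPos
        let newRot = rotation.map { simd_slerp($0, targetRot, alpha) } ?? targetRot
        position = newPos
        rotation = newRot
        target.setPosition(newPos, relativeTo: nil)
        target.setOrientation(newRot, relativeTo: nil)
    }
}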