Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post · Replies · Boosts · Views · Activity

Gaze / eye-tracking responsive content
I am wondering whether it is possible to configure a 3D object to respond to a person's gaze, for example changing the color of the parts of the model where the gaze lands on its surface. Say there is a 3D model of a cool dragon 🐉 in the person's physical space, seen through the mixed-reality view of a Vision Pro: it would be really cool to change the color, or make specific parts of the dragon's skin shimmer, only in the areas where the person is looking. Is this possible, and is it doable with the eye tracking of Vision Pro? Any advice would be appreciated. I am new to iOS and visionOS development.
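Raw gaze data is not exposed to apps on visionOS for privacy reasons, so as a hedged sketch of the closest supported mechanism: an entity with input-target, collision, and hover-effect components gets a system-drawn highlight while the user looks at it. The asset name "Dragon" and the collision shape below are placeholders, not part of the original question.

```swift
import SwiftUI
import RealityKit

// Hedged sketch: visionOS does not hand raw eye-tracking data to apps, but an entity
// with InputTargetComponent + CollisionComponent + HoverEffectComponent gets a
// system-drawn highlight while the user looks at it. "Dragon" is a placeholder asset name.
struct DragonGazeView: View {
    var body: some View {
        RealityView { content in
            if let dragon = try? await Entity(named: "Dragon") {
                dragon.components.set(InputTargetComponent())
                dragon.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.5)]))
                dragon.components.set(HoverEffectComponent())   // highlight follows the user's gaze
                content.add(dragon)
            }
        }
    }
}
```

Note this gives a whole-entity highlight rather than a per-region shimmer; finer-grained looks would need material-level work beyond this sketch.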
1 reply · 0 boosts · 228 views · Mar ’25
CustomMaterial disable unlit tone mapping
Hi, since iOS 18, UnlitMaterial and ShaderGraphMaterial have an option to disable tone mapping, e.g. via https://developer.apple.com/documentation/realitykit/unlitmaterial/init(applypostprocesstonemap:). Is it possible to do the same for CustomMaterial? I tried initializing a CustomMaterial based on an UnlitMaterial with tone mapping disabled, like so:

let unlitMat = UnlitMaterial(applyPostProcessToneMap: false)
let customMaterial = try CustomMaterial(
    from: unlitMat,
    surfaceShader: surfaceShader,
    geometryModifier: geometryModifier
)

but that does not seem to work. The colors of my texture still look altered compared to a plain UnlitMaterial, or to a ShaderGraphMaterial where tone mapping is disabled. Any hints? Thank you!
1 reply · 0 boosts · 121 views · Jun ’25
Loading USDZ with particle system crashes on Intel Macs
Hello, we have a RealityKit app that also runs on macOS via Catalyst. For specific USD assets containing particle systems we have observed a reproducible crash. Steps to reproduce:
1. Open Reality Composer Pro and create a new file.
2. Create a simple particle system (the default one is fine).
3. Export it as USDZ.
4. Create a project in Xcode.
5. Call Entity.load(… and pass in your USD.
Running this on an Intel iMac with macOS Sequoia 15.3 leads to a crash. The console output is interleaved across threads, but the repeated assertion is:
validateWithDevice:4704: failed assertion `Render Pipeline Descriptor Validation depthAttachmentPixelFormat (MTLPixelFormatDepth32Float) and stencilAttachmentPixelFormat (MTLPixelFormatStencil8) must match.'
Xcode version: 16.2.0. iMac 2020, 3.8 GHz Intel Core i7, macOS Sequoia 15.3. FB16477373. It would be great if this could be fixed quickly or a workaround provided, since it affects our production app. Thank you!
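For completeness, a minimal sketch of the load call from step 5; "Particles" is a placeholder name for the USDZ exported from Reality Composer Pro and bundled with the app target.

```swift
import RealityKit

// Minimal repro sketch. "Particles" stands in for the USDZ exported from
// Reality Composer Pro and added to the app bundle.
func loadParticleAsset() throws -> Entity {
    try Entity.load(named: "Particles")
}
```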
1 reply · 0 boosts · 424 views · Mar ’25
How to programmatically update Model Position Offset of GeometryModifier?
Is it possible to dynamically update the ModelPositionOffset of a GeometryModifier with a depth map image? In my code I set up the parameter for the "DepthMapTexture" universal input node and tried setting the depth map on depthTextureResource. I have two DrawableQueues: one for setting InputTexture, and one for setting DepthMapTexture. This only shows the part that concerns setting DepthMapTexture; this is where I define the plane entity, and this is the shader graph. What I noticed with the GeometryModifier is that the depth map image has to have the same dimensions as the input image. When I applied this material to a USDZ file with a pre-assigned image and depth map in Reality Composer Pro, and loaded that entity from code, the depth map was applied correctly. What I am unsure about is whether it is impossible to define a model entity from code, apply a ShaderGraphMaterial from RCP, and dynamically update the image used in the GeometryModifier. Maybe I'm missing something when defining the entity, something that allows geometric modifications?
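As a hedged sketch of the runtime-update part of the question (not a confirmed answer): a ShaderGraphMaterial parameter can normally be replaced at runtime with setParameter, assuming the Reality Composer Pro material exposes a texture input named "DepthMapTexture" and the texture comes from the DrawableQueue described above.

```swift
import RealityKit

// Hedged sketch: push a new depth map into a ShaderGraphMaterial on an entity built
// in code. Assumes the RCP material exposes a texture input named "DepthMapTexture"
// and that depthTexture is a DrawableQueue-backed TextureResource.
func updateDepthMap(on entity: ModelEntity, with depthTexture: TextureResource) throws {
    guard var material = entity.model?.materials.first as? ShaderGraphMaterial else { return }
    try material.setParameter(name: "DepthMapTexture", value: .textureResource(depthTexture))
    entity.model?.materials = [material]
}
```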
1 reply · 0 boosts · 295 views · Mar ’25
Missing Properties in BillboardComponent
In an earlier beta, BillboardComponent had rotationAxis and upDirection properties, which allowed more fine-grained control over how an entity rotates toward the camera. Currently, it is only possible to orient the z axis of the entity. Looking at the robot in the documentation, the rotation of its z axis causes its feet to lift off the ground. Before, it was possible to constrain the rotation to one axis (y, for example) so that the robot's feet stayed on the ground, with:

billboard.upDirection = [0, 1, 0]
billboard.rotationAxis = [0, 1, 0]

Is there an alternative way to achieve this? Are these properties (or similar) coming back?
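There is no confirmed replacement for those properties here; as a hedged workaround sketch, a similar constraint can be approximated manually by yawing the entity toward the viewer around the y axis only. The viewer position is assumed to come from your own tracking code (for example a device anchor on visionOS); it is not a BillboardComponent property.

```swift
import Foundation
import RealityKit
import simd

// Hedged workaround sketch: face an entity toward a viewer position while rotating
// only around the world y axis, so the model's feet stay on the ground.
// `viewerPosition` is assumed to come from your own tracking code.
func yaw(_ entity: Entity, toward viewerPosition: SIMD3<Float>) {
    let entityPosition = entity.position(relativeTo: nil)
    var toViewer = viewerPosition - entityPosition
    toViewer.y = 0                                   // ignore the vertical offset
    guard simd_length(toViewer) > 1e-5 else { return }
    let angle = atan2(toViewer.x, toViewer.z)        // yaw that points +z at the viewer
    entity.setOrientation(simd_quatf(angle: angle, axis: [0, 1, 0]), relativeTo: nil)
}
```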
1 reply · 0 boosts · 294 views · Mar ’25
The folding and unfolding effect of the NBA sand table
I saw this magical sand table; its unfolding and folding effect is similar to spreading out cards, which is very interesting, but I don't know how to achieve it. I would like to know whether there is any way to achieve this effect, and any ideas would help. Can this effect be achieved with the existing APIs?
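There is no single built-in "unfold" effect to point to; as a hedged sketch, one way to approximate a card-spread unfold with the existing API is to animate each tile entity from its stacked pose to a laid-out target pose. The spacing and duration values below are invented for illustration.

```swift
import RealityKit
import simd

// Hedged sketch: approximate a card-spread "unfold" by animating each tile entity
// from wherever it currently is (stacked) to a laid-out target pose.
// Spacing and duration are made-up illustration values.
func unfold(tiles: [Entity], spacing: Float = 0.15) {
    for (index, tile) in tiles.enumerated() {
        var target = tile.transform
        target.translation = [Float(index) * spacing, 0, 0]   // spread along x
        target.rotation = simd_quatf(angle: 0, axis: [0, 1, 0])
        tile.move(to: target, relativeTo: tile.parent, duration: 0.6, timingFunction: .easeInOut)
    }
}
```

Folding back would be the same call toward a stacked target transform.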
1 reply · 0 boosts · 61 views · May ’25
Object Capture app getting rejected for not supporting non-Pro models
Hi, I created an app using the iOS Object Capture API, which works only on LiDAR-enabled phones; that is a limitation of the API provided by Apple itself. I submitted the app for review, but it has been rejected twice, saying it doesn't work on non-Pro models. Even though I explained that capturing needs LiDAR and is supported only on Pro models, it still gets rejected after being tested on non-Pro models. Is there a way out?
1 reply · 0 boosts · 350 views · Feb ’25
Is there any way I can use object tracking in an iOS (iPhone) AR app?
I have been referencing the object tracking tutorial from WWDC 2024 for visionOS, which shows how Create ML is used to create a reference object that can then be tracked in the ARKit session. I would like to build this feature into an AR app for iPhone; I am using an iPhone 13 Pro Max and have already created a couple of reference objects with Create ML.
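A hedged note: the Create ML reference objects from that visionOS tutorial are not, as far as this sketch assumes, directly consumable by ARKit on iOS; the closest iOS analogue is ARKit's own object detection with ARReferenceObject assets produced by the object-scanning flow. A minimal sketch, assuming an asset-catalog resource group named "DetectionObjects":

```swift
import ARKit

// Hedged sketch of ARKit object detection on iOS. "DetectionObjects" is an assumed
// AR resource group in the asset catalog containing scanned ARReferenceObject files.
func makeObjectDetectionConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    if let objects = ARReferenceObject.referenceObjects(inGroupNamed: "DetectionObjects", bundle: nil) {
        configuration.detectionObjects = objects
    }
    return configuration
}
```

The configuration would then be passed to the session's run method.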
1 reply · 0 boosts · 322 views · Mar ’25
Using AVAsynchronousKeyValueLoading.load() on an AVAssetTrack gives an error
I'm seeing this error while attempting to compile my visionOS app under Xcode 26. My existing code looks like:

let (naturalSize, formatDescriptions, mediaCharacteristics) = try? await videoTrack.load(.naturalSize, .formatDescriptions, .mediaCharacteristics)

This now gives a compiler error: "Type of expression is ambiguous without a type annotation". I don't see anything that was changed or deprecated in the latest version. Loading the properties individually works fine, i.e.:

let naturalSize = try? await videoTrack.load(.naturalSize)
let formatDescriptions = try? await videoTrack.load(.formatDescriptions)
let mediaCharacteristics = try? await videoTrack.load(.mediaCharacteristics)
1 reply · 0 boosts · 92 views · Jun ’25
Rendering Order with ModelSortGroup
I have a huge sphere with the camera inside it, and I turn on front-face culling on the ShaderGraphMaterial applied to that sphere so that I can place other 3D content inside. However, when it comes to attachments, object occlusion never works as I expect: specifically, my attachments are occluded by the sphere (some are not, so the behavior is not deterministic). I suspected a depth-testing issue, so I started using ModelSortGroup to reorder the rendering sequence, but it doesn't work. Searching the internet, I found a post whose comments say that ModelSortGroup simply doesn't work on attachments. How should I tackle this issue so that my attachments appear inside my sphere? OS/SDK: visionOS 2.3 / Xcode 16.3
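For reference, a minimal sketch of how ModelSortGroup is normally used to force a draw order between two entities; the question above reports that this has no effect on RealityView attachments, so treat it as the baseline technique rather than a confirmed fix.

```swift
import RealityKit

// Baseline sketch of ModelSortGroup usage (reported above not to affect attachments).
// Lower `order` values are drawn earlier within the same group.
func applySortOrder(sphere: Entity, attachment: Entity) {
    let group = ModelSortGroup(depthPass: .postPass)
    sphere.components.set(ModelSortGroupComponent(group: group, order: 0))
    attachment.components.set(ModelSortGroupComponent(group: group, order: 1))
}
```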
1 reply · 0 boosts · 494 views · Feb ’25
Implementing multi-pass rendering in visionOS
I’m working on a Vision Pro app using Metal and need to implement multi-pass rendering. Specifically, I want to render intermediate results to a texture, then use that texture in a second pass for post-processing before presenting the final output. What’s the best approach in visionOS? Should I use multiple render passes in a single command buffer or separate command buffers? Any insights on efficiently handling this in RealityKit or Metal? Thanks!
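As a hedged sketch of the "single command buffer, two passes" option: render the scene into an intermediate texture in the first pass, then sample that texture in a post-processing pass. Pipeline states, the drawable, and the actual draw calls are assumed to exist elsewhere; all names are illustrative.

```swift
import Metal

// Hedged sketch: two render passes in one command buffer. Pass 1 renders the scene
// into an intermediate texture; pass 2 samples it for post-processing into the
// final render target. Draw calls are omitted.
func encodeFrame(commandQueue: MTLCommandQueue,
                 intermediate: MTLTexture,
                 finalTarget: MTLTexture,
                 scenePipeline: MTLRenderPipelineState,
                 postPipeline: MTLRenderPipelineState) {
    guard let commandBuffer = commandQueue.makeCommandBuffer() else { return }

    // Pass 1: scene -> intermediate texture.
    let scenePass = MTLRenderPassDescriptor()
    scenePass.colorAttachments[0].texture = intermediate
    scenePass.colorAttachments[0].loadAction = .clear
    scenePass.colorAttachments[0].storeAction = .store
    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: scenePass) {
        encoder.setRenderPipelineState(scenePipeline)
        // ... encode scene geometry here ...
        encoder.endEncoding()
    }

    // Pass 2: post-process intermediate -> final target.
    let postPass = MTLRenderPassDescriptor()
    postPass.colorAttachments[0].texture = finalTarget
    postPass.colorAttachments[0].loadAction = .dontCare
    postPass.colorAttachments[0].storeAction = .store
    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: postPass) {
        encoder.setRenderPipelineState(postPipeline)
        encoder.setFragmentTexture(intermediate, index: 0)
        // ... encode a full-screen quad here ...
        encoder.endEncoding()
    }

    commandBuffer.commit()
}
```

Keeping both passes in one command buffer avoids extra CPU/GPU synchronization between them; separate command buffers are mainly useful when the passes need to be scheduled independently.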
1 reply · 0 boosts · 363 views · Mar ’25
ARKit: Prevent Asset Clipping
Hello Apple Team, I am working on a RealityKit project for iOS, where I need to place a 3D asset far away from the camera (approximately 15 to 30 meters). When enabling people occlusion, the 3D asset gets clipped when moved far away. Is it possible to enable people occlusion for assets at close range (less than 10 meters) while disabling it for assets farther away to prevent clipping? I understand that it is possible to switch configurations at runtime. However, I would like to place assets both close to and far from the camera simultaneously. Thank you for your help! Kind regards
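For reference, a hedged sketch of the runtime configuration switch mentioned above; note that frame semantics apply to the whole session, so this cannot keep occlusion on for near assets while turning it off for distant ones at the same time.

```swift
import ARKit

// Hedged sketch: toggle people occlusion for the whole session at runtime.
// This is global to the session; it cannot be enabled per asset.
func setPeopleOcclusion(_ enabled: Bool, on session: ARSession) {
    guard let configuration = session.configuration as? ARWorldTrackingConfiguration else { return }
    if enabled {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    } else {
        configuration.frameSemantics.remove(.personSegmentationWithDepth)
    }
    session.run(configuration)
}
```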
1 reply · 0 boosts · 519 views · Jan ’25
AR QuickLook Frame Rate Questions
When I've made an animated USDZ, at what frame rate will the animation be rendered in QuickLook? Is it the same across all devices (iPhone, Apple Vision Pro, etc.) and viewing environments (QuickLook, inside an ARView, etc.)? Suppose I export my file at 30 fps and the device draws at 60 fps: does the device interpolate between frames automatically, animate at a lower frame rate, or play it at twice the speed? What if it were 24 fps? My primary concern with understanding frame rates is a bit of trouble I've had making perfectly looping animations; there always seems to be the slightest stutter between iterations. Thanks in advance for any insights you're able to provide!
1 reply · 0 boosts · 249 views · Mar ’25
Disabling the pinch gesture for the left hand only
Hi, I have a question. In visionOS, when a user looks at a button and performs a pinch gesture with their index finger and thumb, the button responds. By default, this works with both the left and right hands. However, I want to disable the pinch gesture when performed with the left hand while keeping it functional with the right hand. I understand that the system settings allow users to configure input for both hands, the left hand only, or the right hand only. However, I would like to control this behavior within the app itself. Is this possible? Best regards.
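As a hedged sketch of app-level filtering (this does not change the system's standard look-and-pinch behavior on ordinary buttons): spatial events on visionOS carry an optional chirality, so custom gesture handling can ignore left-hand pinches.

```swift
import SwiftUI

// Hedged sketch: react only to right-hand pinches in the app's own gesture handling.
// SpatialEventGesture events expose an optional chirality on visionOS.
struct RightHandOnlyView: View {
    @State private var message = "Pinch with the right hand"

    var body: some View {
        Text(message)
            .padding()
            .glassBackgroundEffect()
            .gesture(
                SpatialEventGesture()
                    .onChanged { events in
                        for event in events where event.chirality == .right {
                            message = "Right-hand pinch detected"
                        }
                    }
            )
    }
}
```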
1 reply · 0 boosts · 276 views · Feb ’25
I want to understand the update behavior of `RealityView`.
Here are the code snippets.

import SwiftUI
import RealityKit

struct RealityViewTestView: View {
    @State private var texts: [String] = []

    var body: some View {
        RealityView { content, attachments in
        } update: { content, attachments in
            for text in texts {
                if let textEntity = attachments.entity(for: text) {
                    textEntity.position.x = Float.random(in: -0.1...0.1)
                    content.add(textEntity)
                }
            }
        } attachments: {
            ForEach(texts, id: \.self) { text in
                Attachment(id: text) {
                    Text(text)
                        .padding()
                        .glassBackgroundEffect()
                }
            }
        }
        .toolbar {
            ToolbarItem {
                Button("Add") {
                    texts.append(String(UUID().uuidString.prefix(6)))
                }
            }
            ToolbarItem {
                Button("Remove") {
                    texts.remove(at: Int.random(in: 0..<texts.count))
                }
            }
        }
    }
}

struct RealityViewTestView: View {
    @State private var texts: [String] = []
    @State private var entities: [Entity] = []

    var body: some View {
        RealityView { content, attachments in
        } update: { content, attachments in
            // for text in texts {
            //     if let textEntity = attachments.entity(for: text) {
            //         textEntity.position.x = Float.random(in: -0.1...0.1)
            //         content.add(textEntity)
            //     }
            // }
            for entity in entities {
                content.add(entity)
            }
        } attachments: {
            ForEach(texts, id: \.self) { text in
                Attachment(id: text) {
                    Text(text)
                        .padding()
                        .glassBackgroundEffect()
                }
            }
        }
        .toolbar {
            ToolbarItem {
                Button("Add") {
                    // texts.append(String(UUID().uuidString.prefix(6)))
                    let m = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                        materials: [SimpleMaterial(color: .white, isMetallic: false)])
                    m.position.x = Float.random(in: -0.2...0.2)
                    entities.append(m)
                }
            }
            ToolbarItem {
                Button("Remove") {
                    // texts.remove(at: Int.random(in: 0..<texts.count))
                    entities.removeLast()
                }
            }
        }
    }
}

Regarding the first snippet: when I remove an element from texts, why does content automatically remove the corresponding entity? And in the second snippet, content does not automatically remove the corresponding entity. I am very curious.
1 reply · 0 boosts · 433 views · Jan ’25
I need to loop my videoMaterial.
I need to loop my VideoMaterial and I don't know how to make it happen in my code. I have included an image of my VideoMaterial code. Any help making this happen will be greatly appreciated. Thank you, Christopher
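As a hedged, generic sketch (the file name is a placeholder, and this is not the poster's original code): one common way to loop a VideoMaterial is to back it with an AVQueuePlayer driven by an AVPlayerLooper.

```swift
import AVFoundation
import RealityKit

// Hedged sketch: a looping VideoMaterial backed by AVQueuePlayer + AVPlayerLooper.
// "clip.mp4" is a placeholder resource name. Keep the looper reference alive for
// as long as the material is in use.
func makeLoopingVideoMaterial() -> (VideoMaterial, AVPlayerLooper)? {
    guard let url = Bundle.main.url(forResource: "clip", withExtension: "mp4") else { return nil }
    let item = AVPlayerItem(url: url)
    let player = AVQueuePlayer()
    let looper = AVPlayerLooper(player: player, templateItem: item)
    let material = VideoMaterial(avPlayer: player)
    player.play()
    return (material, looper)
}
```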
1 reply · 0 boosts · 117 views · Jun ’25