Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post

Replies

Boosts

Views

Activity

AudioPlaybackController stops playing when a .plain window is closed
Suppose there is an ImmersiveSpace, and an Entity added to the space as a child of the content. This entity is responsible for playing background music: it calls prepareAudio, obtains a controller, and plays the music (see the basic code below). While the music is playing, a .plain window and the ImmersiveSpace are both presented. I believe the ImmersiveSpace holds the controller, so as long as the ImmersiveSpace stays open the music should keep playing. However, if I close the .plain window (with the system-level close button), the music stops even though the ImmersiveSpace is still open. If I then check controller.isPlaying, it is still true, but the music is no longer audible.

To reproduce: create a project from the visionOS App template, choosing Volume and Full immersive, and replace some code in ImmersiveView.swift with the code below. Then drag in any .mp3 file and change the AudioFileResource name to match.

RealityView { content in
    // Add the initial RealityKit content
    if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
        content.add(immersiveContentEntity)
        // Put skybox here. See example in World project available at
        // https://developer.apple.com/
        if let audioResource = try? await AudioFileResource(named: "anyMP3file.mp3") {
            let ent = Entity()
            immersiveContentEntity.addChild(ent)
            let controller = ent.prepareAudio(audioResource)
            controller.play()
        }
    }
}

I wonder why this happens, and how I can keep the music playing after closing the .plain window. Thanks!
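One direction worth trying (a minimal sketch, not a confirmed fix; AudioModel is a made-up name): hold the AudioPlaybackController in app-level state that is injected into both scenes, so it is not owned by any single window's view hierarchy.

```swift
import SwiftUI
import RealityKit
import RealityKitContent
import Observation

@Observable
final class AudioModel {
    // Held at app level so it outlives any single window.
    var controller: AudioPlaybackController?
}

struct ImmersiveView: View {
    @Environment(AudioModel.self) private var audio

    var body: some View {
        RealityView { content in
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
                if let resource = try? await AudioFileResource(named: "anyMP3file.mp3") {
                    let player = Entity()
                    immersiveContentEntity.addChild(player)
                    // Store the controller in shared state instead of a local.
                    audio.controller = player.prepareAudio(resource)
                    audio.controller?.play()
                }
            }
        }
    }
}
```

The model would be created once in the App struct and passed to both the WindowGroup and the ImmersiveSpace via .environment(_:); whether retaining the controller this way is enough to keep audio running after the .plain window closes is untested.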
1
1
535
Jan ’25
ARKit camera transform orientation vector doesn't match physical device heading (despite `.gravityAndHeading`)
Hi all, I'm working on an ARKit-based iOS app where I need to accurately determine the direction the device is facing in order to localize objects in the real world. I'm using:

let config = ARWorldTrackingConfiguration()
config.worldAlignment = .gravityAndHeading

So I would expect the world alignment to behave as described on the gravityAndHeading documentation page. The AR session is started only after verifying that CLLocationManager.headingAccuracy <= 20, and the compass appears to be calibrated. However, I'm seeing a major inconsistency.

When the rear camera is physically pointed toward true North, I would expect:

cameraTransform.columns.2.z ≈ -1   // (i.e. ARKit's -Z pointing North)

But instead, I consistently see:

cameraTransform.columns.2.z ≈ +0.97   // implies the camera is facing South

Meanwhile, the translation vector behaves as expected: as I physically move North, cameraTransform.columns.3.z becomes more negative, matching the world's +Z = South assumption.

For example, say the device is in landscapeRight (or landscapeLeft for UIDeviceOrientation), the rear camera is pointing toward true North, and I start moving toward true North. I get something like this:

Camera Transform = simd_float4x4(
    [ 0.98446155,  -0.030119859,  0.172998,    0.0],
    [ 0.023979114,  0.9990097,    0.037477385, 0.0],
    [-0.17395553,  -0.032746706,  0.98420894,  0.0],
    [ 0.024039675, -0.037087332, -0.22780673,  0.99999994]
)

As you can see, cameraTransform.columns.2.z is positive despite the rear camera pointing toward true North, while cameraTransform.columns.3.z is correctly negative as the device moves toward true North.

So here is my question: why is cameraTransform.columns.2.z positive when the rear camera is physically facing North? Any clarity would be deeply appreciated. I've read the documentation and tested with different heading accuracies and AR session resets, but I keep running into this orientation mismatch. Thanks in advance!
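For reference, a sketch of how the forward direction is usually derived, assuming the standard ARKit convention that the camera looks down its local -Z axis: columns.2 is the camera's backward direction, so with .gravityAndHeading (where world -Z is true north) a camera facing north gives columns.2.z near +1. The helper name below is mine, not an ARKit API.

```swift
import ARKit
import simd

// Derive a compass-style heading from the camera transform.
// Assumes worldAlignment == .gravityAndHeading: +X = east, +Y = up, -Z = true north.
func headingDegrees(from cameraTransform: simd_float4x4) -> Float {
    // columns.2 is the camera's local +Z axis in world space.
    // The camera looks down its local -Z, so forward is the negated column.
    let forward = -simd_make_float3(cameraTransform.columns.2)

    // Project onto the horizontal plane and measure the angle from north (-Z).
    let flat = simd_normalize(simd_float3(forward.x, 0, forward.z))
    let radians = atan2(flat.x, -flat.z)      // 0 = north, +90 degrees = east
    let degrees = radians * 180 / .pi
    return degrees < 0 ? degrees + 360 : degrees
}
```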
1
0
118
Jun ’25
How to trigger and timeline-control particle emitters in Reality Composer Pro (without Xcode or code)?
Hi, I'm currently using Reality Composer Pro on macOS, and I'm trying to work with particle emitters using only the built-in tools (no Xcode, no custom Swift code). Here's what I'm trying to achieve:

1. Is it possible to trigger a particle emitter with a tap gesture, directly inside Reality Composer Pro?
2. Can I add a particle emitter to a timeline, so that it emits at a specific time during an animation or behavior sequence?
3. Most importantly, is there a way to precisely control emission behavior, such as when particles start or stop, using visual and intuitive controls, without scripting?

My goal is to set up particle effects that are interactive and timeline-controlled, but managed entirely through Reality Composer Pro's GUI, not through code. Any suggestions, tips, or examples would be very helpful. Thanks in advance!
1
0
108
Jun ’25
How to handle tasks when the Vision Pro is taken off?
I have a gRPC server running inside a Task. When the user takes the headset off and later puts it back on, the gRPC server no longer works. I would like to detect when the headset is taken off so that I can cancel the task (which effectively shuts down the gRPC server). I also use a visual indicator to show whether the server is running, but it does not accurately reflect the server's state when the headset is removed and put back on.
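One approach worth sketching (not a confirmed pattern for this exact case): observe the scene phase and cancel or restart the server task when the scene stops being active, which is typically what happens when the headset is removed. startServer() is a placeholder for the gRPC setup.

```swift
import SwiftUI

struct ServerHostView: View {
    @Environment(\.scenePhase) private var scenePhase
    @State private var serverTask: Task<Void, Never>?
    @State private var isServerRunning = false

    var body: some View {
        Label(isServerRunning ? "Server running" : "Server stopped",
              systemImage: isServerRunning ? "checkmark.circle" : "xmark.circle")
            .onChange(of: scenePhase) { _, newPhase in
                switch newPhase {
                case .active:
                    // Headset back on (or app foregrounded): start a fresh server task.
                    serverTask?.cancel()
                    serverTask = Task {
                        isServerRunning = true
                        await startServer()      // placeholder for the gRPC server's lifetime
                        isServerRunning = false
                    }
                default:
                    // Headset removed / app no longer active: tear the server down.
                    serverTask?.cancel()
                    serverTask = nil
                    isServerRunning = false
                }
            }
    }

    func startServer() async { /* gRPC server would run here */ }
}
```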
1
0
290
Mar ’25
Independent gestures for multiple entities in RealityView
Hi, I'm developing an app for Apple Vision Pro. In the app, the user should be able to load objects from the web into the scene and then move them around (dragging and rotating) via gestures.

My question: I'm working with RealityKit and RealityView. I have no issues loading one object and making it interactive by adding gestures to the entire RealityView via the .gesture() modifier, and I have also succeeded in loading multiple objects into the scene. My problem is that I can't figure out how to attach my gestures to multiple objects independently.

I can't use Reality Composer, since I'm loading the objects into the scene dynamically. Using .gesture() as I have it doesn't work for multiple objects, because each gesture needs to be targeted to a specific entity and I have several entities. I also tried defining a GestureComponent and adding it to every newly loaded entity, but nothing happens, even though my gesture is targeted to every entity that has my GestureComponent. The solutions I found, such as installGestures, are not usable on visionOS / RealityView.

I also tried following this guide: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
But it feels like things are missing or contradictory: an extension of RealityView is mentioned but never shown, and the guide states that we don't need to store translation values for every entity, yet creates an EntityGestureState.swift file to store those values; that file is then never used, and a GestureStateComponent that was never introduced is used instead, which contradicts what was just said about not storing values per entity, since a new instance of it is created for every entity. In any case, following the guide didn't lead me to a working solution.
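For reference, a minimal sketch of the pattern usually suggested for this, assuming each dynamically loaded entity gets a CollisionComponent and an InputTargetComponent so it can receive input: a single targetedToAnyEntity gesture on the RealityView, which reads value.entity to decide which entity to move. modelURLs and the box collision shape are placeholders.

```swift
import SwiftUI
import RealityKit

struct MultiEntityDragView: View {
    let modelURLs: [URL]   // URLs of models downloaded from the web (placeholder)

    var body: some View {
        RealityView { content in
            for url in modelURLs {
                if let entity = try? await Entity(contentsOf: url) {
                    // Make the entity hit-testable; a real app would fit the shape to the model.
                    entity.components.set(CollisionComponent(
                        shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
                    entity.components.set(InputTargetComponent())
                    content.add(entity)
                }
            }
        }
        // One gesture for the whole view; value.entity is the entity that was hit.
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    let entity = value.entity
                    entity.position = value.convert(value.location3D,
                                                    from: .local,
                                                    to: entity.parent!)
                }
        )
    }
}
```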
1
0
336
Feb ’25
Dynamically assigning a texture resource to a ShaderGraphMaterial on visionOS
I implemented a ShaderGraphMaterial and tried to load it from my .usda scene with ShaderGraphMaterial.init(name: in: bundle). I want to set a TextureResource on that material dynamically, so I wanted to expose a texture as a uniform input of the ShaderGraphMaterial. But Reality Composer Pro's Shader Graph apparently doesn't support a texture input as a parameter (as the attached image shows). At the code level, ShaderGraphMaterial doesn't expose a way to set a TextureResource either; its parameterNames is an empty array if I haven't set any custom input parameters. The texture comes from my backend, so saving it to a file and loading it from disk again isn't a realistic option. Is there something I'm missing?
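In case it helps, this is the shape of the call I would expect to use, assuming the material does expose a promoted texture input under a known name. "Scene.usda", "/Root/MyMaterial", and "baseTexture" are placeholder names, and whether such an input can be promoted in Reality Composer Pro is exactly the open question here.

```swift
import RealityKit
import RealityKitContent
import CoreGraphics

// Sketch: assign an in-memory image to a ShaderGraphMaterial parameter at runtime.
func applyBackendTexture(_ cgImage: CGImage, to entity: Entity) async throws {
    var material = try await ShaderGraphMaterial(named: "/Root/MyMaterial",
                                                 from: "Scene.usda",
                                                 in: realityKitContentBundle)

    // Build a TextureResource directly from the CGImage; no file on disk needed.
    let texture = try TextureResource.generate(from: cgImage,
                                               options: .init(semantic: .color))

    // Only works if "baseTexture" exists as a promoted input on the material.
    try material.setParameter(name: "baseTexture",
                              value: .textureResource(texture))

    if var model = entity.components[ModelComponent.self] {
        model.materials = [material]
        entity.components.set(model)
    }
}
```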
1
0
545
Jan ’25
Xform called "Scene" breaks animations in Quick Look starting with iOS 15
Hello, we discovered that a bunch of our old animated models were no longer animated on iOS 15 and later. After a few days of playing spot-the-difference between .usda files, I noticed that all the broken models had an Xform called "Scene". Lo and behold, changing the name of that Xform fixed the issue on all the models; even a lowercase "scene" makes the animations work again. Is "Scene" a reserved keyword or something? What other keywords do we need to avoid so we can create more robust USDZ files? I'm surprised this issue isn't more widespread, considering Blender wraps models in a "Scene" node.

At the Drive link below you can find two animated cube USDZs. The only difference is the name of one of the Xforms. The one with a "Scene" Xform is not animated in Quick Look (replicated on an iPhone 13 with iOS 15.2, an iPhone 13 with iOS 18.3, and various devices on BrowserStack including an iPhone 16 with iOS 18.3).
https://drive.google.com/drive/folders/1dch1WaM9O6mbHy29S6NGWgnSHkZkPiBf?usp=sharing
1
0
358
Mar ’25
How to Achieve Realistic Colors and Textures in LiDAR Scanning with Swift
Hello, I'm developing a LiDAR-based scanning app in Swift, where I can successfully perform scans and export the results as .obj files. My goal is to have the scan's colors and textures closely resemble the real-world visuals captured by the camera, similar to the results shown in this repository. In the referenced repository the result is demonstrated with a single screenshot, but I want to display the textures and colors throughout the entire scanning process, not just in the final result. To clarify, I'm not focused on scanning individual objects, but rather larger environments like rooms, houses, or outdoor spaces such as streets. Here's what I'm aiming for:

- Realistic colors and textures that match what the camera sees during the scan.
- Continuous texture rendering during the scanning process, not just in the final exported model.

Could anyone share guidance, sample code, or point me to relevant documentation to achieve this? Any help would be greatly appreciated! Thank you!
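Not a complete answer, but a sketch of the per-frame data this kind of coloring usually starts from: every ARFrame carries the camera image plus the camera's intrinsics and pose, so mesh vertices can be projected into frame.capturedImage to sample colors while scanning. The sampling and rendering parts are omitted here.

```swift
import ARKit
import UIKit

// Sketch: project a world-space point (e.g. an ARMeshAnchor vertex) into the
// current camera image so its color can be sampled for texturing.
// viewportSize should match the resolution of frame.capturedImage.
func imageCoordinate(of worldPoint: simd_float3,
                     in frame: ARFrame,
                     viewportSize: CGSize) -> CGPoint {
    frame.camera.projectPoint(worldPoint,
                              orientation: .landscapeRight,
                              viewportSize: viewportSize)
}
```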
1
0
107
May ’25
Is `ParticleEmitterComponent` implemented for `RealityKit` on iOS?
Hi there, I was looking to add a particle emitter to an augmented reality app I'm developing with RealityKit, targeting iOS. The documentation for ParticleEmitterComponent appears to say that iOS 18.0+ is supported, but when I try to use ParticleEmitterComponent in my code in Xcode, I get an error that it isn't found. Furthermore, this StackOverflow post seems to indicate that particle systems are not available on iOS. Would it be possible to get clarification on this?
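For reference, a minimal availability-gated sketch; it assumes the documented iOS 18.0 availability is accurate and that the project is built with an iOS 18 SDK (building against an older SDK, or using the symbol without an availability check on an older deployment target, is a plausible cause of a "not found" error). The emitter values are illustrative, not tuned.

```swift
import RealityKit

@available(iOS 18.0, *)
func makeSparkEmitter() -> Entity {
    let entity = Entity()

    // A minimal emitter; configure further as needed.
    var emitter = ParticleEmitterComponent()
    emitter.mainEmitter.birthRate = 150
    emitter.mainEmitter.lifeSpan = 2
    entity.components.set(emitter)

    return entity
}
```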
1
0
135
May ’25
Apple Vision Pro Developer Strap: Video Out and Recording Time Limit
Any way to extend the video recording time in Reality Composer Pro from 3:00 to any longer value, such as editing preferences in Terminal or other workaround? Is there any way to use the strap and a USB-C cable as a live video stream input source that would mirror to Quicktime or some other video capture tool? I am assuming there is no online documentation or user manual for the strap, but please correct me if I'm wrong. Thank you.
1
0
141
Jun ’25
ARKit hand tracking
Hello, I am developing a visionOS application and am interested in obtaining detailed data about the user's hands through ARKit, including but not limited to the transform and rotation angle. I have reviewed the Happy Beam sample, but it appears to only show how to recognize specific gestures. Could you please advise on how to obtain the transform and rotation angle of the user's hand? Thank you.
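For reference, a sketch of the usual visionOS route: run a HandTrackingProvider with an ARKitSession and read each HandAnchor's originFromAnchorTransform (the wrist pose in world space) plus per-joint transforms from its handSkeleton; a rotation can then be extracted from the transform. Hand-tracking authorization and error handling are omitted.

```swift
import ARKit
import simd

// Sketch: stream hand transforms on visionOS.
func trackHands() async throws {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    try await session.run([handTracking])

    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor

        // Wrist transform relative to the world origin.
        let wristTransform = anchor.originFromAnchorTransform

        // Rotation extracted from the 4x4 matrix (assumes no scale).
        let rotation = simd_quatf(wristTransform)
        print("wrist position:", wristTransform.columns.3, "rotation:", rotation)

        // A specific joint, e.g. the index fingertip, composed into world space.
        if let tip = anchor.handSkeleton?.joint(.indexFingerTip) {
            let worldTip = wristTransform * tip.anchorFromJointTransform
            print("index fingertip:", worldTip.columns.3)
        }
    }
}
```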
1
0
501
Mar ’25
Summon gesture
Can you help me write code that can pick up an element that is a bit far from me, bring it near to me, let me flick it a bit, and then send it back to its original position when I release it? Thanks a lot, Christophe
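A rough sketch of one way to structure this (assuming the entity already has CollisionComponent and InputTargetComponent so it can receive gestures): remember the original transform when the drag starts, follow the hand while dragging, and animate back with move(to:relativeTo:duration:) on release. The flick/physics part is omitted.

```swift
import SwiftUI
import RealityKit

struct SummonView: View {
    @State private var originalTransform: Transform?

    var body: some View {
        RealityView { content in
            // The summonable entity is assumed to be added here with
            // CollisionComponent + InputTargetComponent already set.
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    let entity = value.entity
                    if originalTransform == nil {
                        // Remember where the entity came from.
                        originalTransform = entity.transform
                    }
                    // Follow the hand while the gesture is active.
                    entity.position = value.convert(value.location3D,
                                                    from: .local,
                                                    to: entity.parent!)
                }
                .onEnded { value in
                    // Send it back to its original position on release.
                    if let original = originalTransform {
                        value.entity.move(to: original,
                                          relativeTo: value.entity.parent,
                                          duration: 0.4)
                    }
                    originalTransform = nil
                }
        )
    }
}
```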
1
0
62
Apr ’25
What's the relation between SwiftUI frame sizes and RealityKit entity sizes?
I currently want to recreate, inside an ImmersiveSpace, a window that looks similar to a system window, but RealityKit only works in meters. I created a plane entity, and I don't know what size in meters to give it so that it matches the system window exactly. I would also like to know the y and z position of the system window in the immersive space.
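On the size question specifically, a sketch of the conversion that is usually suggested, assuming a GeometryReader3D wrapping the RealityView: convert the view's frame from SwiftUI points to scene meters with the content's convert method, then size the plane from the resulting bounding box. This does not expose the system window's position in the immersive space.

```swift
import SwiftUI
import RealityKit

struct WindowSizedPlaneView: View {
    var body: some View {
        GeometryReader3D { proxy in
            RealityView { content in
                // Convert the SwiftUI frame (points) into scene space (meters).
                let bounds = content.convert(proxy.frame(in: .local),
                                             from: .local,
                                             to: content)
                let size = bounds.extents   // width / height in meters

                let plane = ModelEntity(
                    mesh: .generatePlane(width: size.x, height: size.y),
                    materials: [SimpleMaterial(color: .white, isMetallic: false)]
                )
                content.add(plane)
            }
        }
    }
}
```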
1
0
342
Jan ’25
Significant deviation of depth map values captured in ARKit framework
I use ARKit to build an app that scans rooms to collect the spatial data of objects and reconstruct the 3D scene. The problem is that the depth map values captured in ARFrame deviate significantly, and nonlinearly, from the real distances: below 1.5 m the values are basically correct, but beyond 1.5 m they are smaller than the real values. For example, I read 1.9 m from the generated depthmap.tiff where the real distance is 3 meters. Below is my code for generating a TIFF file to record the depth map data:

Generated TIFF file (captured from ARKit): as shown above, the maximum distance is around 1.9 m, but the real distance to that wall is more than 3 meters. You can also see that the depth map captured in ARKit is quite blurry, particularly at far distances (> 2.0 m), where it is almost smeared out.

Generated TIFF file (captured from AVFoundation): in comparison, the depth map captured with traditional AVFoundation on the same hardware device is much clearer, though its values don't seem to be in meters.
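Not an explanation for the deviation, but for comparison purposes here is a sketch of reading values straight out of the frame's depth buffer (sceneDepth requires adding .sceneDepth to the configuration's frameSemantics; smoothedSceneDepth is the temporally filtered variant). Pixel coordinates are assumed to be in range.

```swift
import ARKit
import CoreVideo

// Sketch: read the depth (in meters) at a pixel of the current frame's
// scene depth map. Returns nil if scene depth isn't available.
func depthInMeters(atColumn x: Int, row y: Int, in frame: ARFrame) -> Float? {
    guard let depthMap = frame.sceneDepth?.depthMap else { return nil }

    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(depthMap)

    // The LiDAR depth map is single-channel 32-bit float, in meters.
    let rowPointer = base.advanced(by: y * bytesPerRow)
    return rowPointer.assumingMemoryBound(to: Float32.self)[x]
}
```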
1
0
493
Feb ’25