Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post · Replies · Boosts · Views · Activity

Is it possible to live render CMTaggedBuffer / MV-HEVC frames in visionOS?
Hey all, I'm working on a visionOS app that captures live frames from the left and right cameras of Apple Vision Pro using cameraFrame.sample(for: .left/.right).

Apple provides documentation on encoding side-by-side frames into MV-HEVC spatial video using CMTaggedBuffer: "Converting Side-by-Side 3D Video to MV-HEVC".

My question: is there any way to render tagged frames (e.g. CMTaggedBuffer with .stereoView(.leftEye/.rightEye)) live, directly to a surface in RealityKit or Metal, without saving them to a file? I'd like to create a true stereoscopic (spatial) live video preview, not just render two images side by side. Any advice or insights would be greatly appreciated!
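A minimal sketch of the tagged-buffer step from Apple's conversion flow, assuming the CMTaggedBuffer initializer used in Apple's sample; `left` and `right` stand in for the per-eye CVPixelBuffers sampled above. For a live preview, the MV-HEVC encode may not be needed at all, since the same per-eye pixel buffers could be uploaded directly to per-eye textures in Metal or RealityKit:

```swift
import CoreMedia
import CoreVideo

// Sketch: wrap per-eye pixel buffers in tagged buffers, as in Apple's
// side-by-side-to-MV-HEVC sample. Layer 0 is the left eye, layer 1 the right.
func makeStereoTaggedBuffers(left: CVPixelBuffer, right: CVPixelBuffer) -> [CMTaggedBuffer] {
    [
        CMTaggedBuffer(tags: [.videoLayerID(0), .stereoView(.leftEye)], pixelBuffer: left),
        CMTaggedBuffer(tags: [.videoLayerID(1), .stereoView(.rightEye)], pixelBuffer: right)
    ]
}
```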
2 replies · 0 boosts · 246 views · Aug ’25
How to request several models simultaneously
I am using HelloPhotogrammetry in Xcode. I can make one model with something like:

```swift
HelloPhotogrammetry.main([path_to_folder_of_images, path_to_output/model.usdz,
                          "-d", "medium", "-o", "unordered", "-f", "high"])
```

But how would I request several models simultaneously? I only want to vary the detail:

```swift
[
    ("/Users/you/Desktop/model_medium.usdz", detail: .medium),
    ("/Users/you/Desktop/model_full.usdz", detail: .full),
    ("/Users/you/Desktop/model_raw.usdz", detail: .raw)
]
```
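One possible approach, sketched against the underlying RealityKit API rather than the HelloPhotogrammetry wrapper: PhotogrammetrySession.process(requests:) accepts an array of requests, so a single session can produce several models that differ only in detail. The paths and input folder below are illustrative:

```swift
import Foundation
import RealityKit

// Sketch: one PhotogrammetrySession, several model requests varying only in detail.
let input = URL(fileURLWithPath: "/Users/you/Desktop/images", isDirectory: true)
let session = try PhotogrammetrySession(input: input)

let requests: [PhotogrammetrySession.Request] = [
    .modelFile(url: URL(fileURLWithPath: "/Users/you/Desktop/model_medium.usdz"), detail: .medium),
    .modelFile(url: URL(fileURLWithPath: "/Users/you/Desktop/model_full.usdz"), detail: .full),
    .modelFile(url: URL(fileURLWithPath: "/Users/you/Desktop/model_raw.usdz"), detail: .raw)
]

// All requests are processed in a single pass over the input images.
try session.process(requests: requests)

// Observe progress and per-request completion.
for try await output in session.outputs {
    if case let .requestComplete(request, result) = output {
        print("Finished \(request): \(result)")
    }
}
```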
2 replies · 0 boosts · 92 views · Apr ’25
spatial-backdrop feature available yet?
In the WWDC25 session "What's new for the spatial web", the presenter showed how to create an immersive environment for a web page by adding this to the page's HEAD section:

```html
<link rel="spatial-backdrop" href="office.usdz" environmentmap="lighting.hdr">
```

My first attempt failed, and I am trying to track down why. Before I search all the potential failure paths, I wanted to ask the community: is this feature available in the latest visionOS 26 beta? I haven't seen anyone talk about their use of the feature yet.
2 replies · 0 boosts · 211 views · Aug ’25
Converting a Stop Motion Animation to usdz
Hello everyone, I've been trying for a few weeks now to convert a sequential series of meshes into a stop-motion animation in USDZ format. In Unreal Engine I've already figured out how to turn the mesh sequence into a smooth animation using the node system and arrays. Unfortunately, that node logic cannot be exported as USDZ animation logic from either Unreal or Blender, so I have tried several other ways to bake the animation. Here's what I've tried so far:

- I attempted to create the animation in Blender with render/viewports and map it to keyframes. In my experience, however, viewports are not supported in the conversion.
- I tried aligning the vertices of the individual objects and merging the frames using the Shrinkwrap modifier in Blender, then setting up a morph animation with keyframes. Because the individual meshes are too different, this produces artifacts, and manually editing each mesh is more than I can handle.
- I placed all the meshes at the same position and animated them sequentially by scaling them from 0 to 100 with keyframes (frame 1 is visible for 10 frames, then scales down at frame 11, while frame 2 becomes visible at frame 11, and so on). I also set the keyframes to "constant" interpolation rather than the default Bezier or linear. I then converted this animation to .abc, and the result initially looked good; however, some information is lost in the OpenUSD conversion. The animation loses its intended jump-cut behavior in USDZ, and the scaling of the individual meshes becomes visible instead.
- I tried a Blender add-on (StepMotion) that can export the animation as .abc, but the result can only be read in Blender or Unreal. Even in the preview the animation is not displayed correctly, so converting the animation logic this way does not work either.

Unfortunately, I have no alternative way to create the animation, since the individual frames were provided to me as meshes. So far I haven't found a way to make this work. I would be very grateful for any tips or ideas, as I am running out of options. Thanks in advance!
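One avenue that might be worth testing (an untested sketch, not a confirmed solution): USD can express exactly this jump-cut behavior with time-sampled visibility, toggling each frame's mesh between "invisible" and "inherited" with no scaling involved. Token attributes hold their value between samples, so the cuts stay hard. Whether a given converter or the visionOS runtime honors time-sampled visibility is something you would need to verify:

```usda
#usda 1.0
(
    startTimeCode = 1
    endTimeCode = 30
    timeCodesPerSecond = 12
)

def Xform "StopMotion"
{
    def Mesh "Frame1"
    {
        # Visible for the first 10 time codes, then hidden.
        token visibility.timeSamples = {
            1: "inherited",
            11: "invisible",
        }
    }

    def Mesh "Frame2"
    {
        # Hidden until time code 11, then visible.
        token visibility.timeSamples = {
            1: "invisible",
            11: "inherited",
        }
    }
}
```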
2 replies · 0 boosts · 150 views · Apr ’25
Ornaments in Presentations
We can add ornaments to popovers shown by PresentationComponent, but I'm not sure if we should.

While working on the editor for entities in a volume-based app, I had the idea of adding ornaments to the presented views. The entire app exists inside a volume. A user can tap an item to present a popover UI to edit it, displayed using the new PresentationComponent in visionOS 26.

Ornaments have a new attachment anchor option this year: .parent().

```swift
.ornament(attachmentAnchor: .parent(.top), ornament: { ... })
```

This works well in the Simulator: we can add ornaments around this popover view just as we would with a window. Unfortunately, when I run this on device I get a different experience. Any part of the ornament that overlaps the popover content isn't rendered correctly; sometimes it disappears entirely, other times it becomes partially transparent.

We could use content alignment to try to keep the ornament from overlapping the popover content:

```swift
.ornament(attachmentAnchor: .parent(.top), contentAlignment: .bottom, ornament: { ... })
```

This works sometimes, but not all the time. It's not clear whether this is a bug, because I'm not sure we are even supposed to be able to use ornaments this way. Here is my hierarchy:

- An app opens as a volume.
- The volume presents a RealityView, with its own ornament using the .scene() anchor.
- Multiple entities with a PresentationComponent show an edit view.
- That view uses the .parent() anchor to add ornaments.

What makes me unsure is that other methods for drawing UI in a RealityView don't seem to work with ornaments. For example, if I add an attachment to show a view with an ornament, the ornament is anchored to the volume, not the attachment view, even when I use the .parent() anchor.

So what do we think? Is this a rendering bug? Are ornaments intended to work with attachments and presentations?
2 replies · 0 boosts · 352 views · Aug ’25
Accessing pupil diameter in visionOS
Previously I developed software for unique therapeutic and physiology applications using SMI eye trackers, both screen-mounted units and their mobile glasses. Sadly, after SMI was bought by Apple, their hardware and software were taken off the market, and it is now very difficult to find systems secondhand. The Apple Vision Pro integrates the SMI hardware. While I can use ARKit to get gaze position, I see no way to access information that the SMI hardware previously exposed, particularly dwell time and pupil diameter.

I am hoping (or asking) that, for a user with a properly set up Optic ID who opts in, it might become possible, in the present or a future version of visionOS, to access the data streams for dwell times and pupil diameter. Pupil diameter is particularly important because it is a very good physiological measure of how much stress a person is under, which is critical to some of the therapeutic applications for which we formerly used SMI hardware. Any ideas would be appreciated, or, if this is not currently possible, so would passing the proposal along to the visionOS team!
2 replies · 0 boosts · 248 views · Jul ’25
How to create a MultiplayerDelegate and use TabletopNetworkSession features?
When building a multiplayer Tabletop game, the documentation describes attaching a custom TabletopNetworkSessionCoordinator, which could be used in addition to TabletopGame.MultiplayerDelegate. So far, however, we have been unable to create these custom coordinators or get a delegate that works.

Our current setup with a generic GroupActivity works by passing the session to TabletopGame's coordinateWithSession method (as in the current sample project), but we haven't found a way to access and control, for example, the arbiter, seats, or player events, among other features mentioned at https://developer.apple.com/documentation/tabletopkit/tabletopnetworksession.

Is it correct to expect access to the participants, messenger, or journal without having to maintain a parallel coordinator? Possibly we are missing something here; any suggestions?
2 replies · 0 boosts · 235 views · Apr ’25
How to fix "Sample 0 missing LiDAR point cloud!" error?
I'm trying to run a PhotogrammetrySession based on photos taken in an AVCaptureSession and stored as .heic files. When I load the files, the error "Sample 0 missing LiDAR point cloud!" shows up for each individual sample. Debugging shows that sample.depthDataMap is populated, and the .heic contains depth data that can be extracted with, e.g., heif-convert on my Mac.

Comparing my .heic to one from an ObjectCaptureSession (which doesn't produce the LiDAR warning), I noticed the only difference is in the HEIC metadata (screenshot omitted). So my questions are:

- Is this the information missing from my manual capture that causes the warning?
- Can I somehow add this information in an AVCaptureSession?
- Does this information lead to better photogrammetry results?
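On the second question, a minimal sketch of how depth delivery and embedding are usually enabled in AVFoundation (this is standard API; whether it produces the exact auxiliary data PhotogrammetrySession expects is the open question):

```swift
import AVFoundation

// Sketch: enable depth delivery on the output, then embed depth in each HEIC.
// Whether this satisfies PhotogrammetrySession's LiDAR expectations is untested.
func configureOutput(_ photoOutput: AVCapturePhotoOutput) {
    if photoOutput.isDepthDataDeliverySupported {
        photoOutput.isDepthDataDeliveryEnabled = true
    }
}

func makeSettings(for photoOutput: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
    settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
    // Write the depth map into the photo file itself, not just the delegate callback.
    settings.embedsDepthDataInPhoto = true
    // Unfiltered depth keeps invalid samples but preserves real measurements.
    settings.isDepthDataFiltered = false
    return settings
}
```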
2 replies · 0 boosts · 286 views · 2d
Shared/GroupImmersive Space – Query Local Device Transform
Hi, I am in the process of implementing SharePlay in our app. The shared experience opens an immersive space, and we set:

```swift
systemCoordinator.configuration.supportsGroupImmersiveSpace = true
```

visionOS then establishes a shared coordinate space for the immersive space. From the docs: "To achieve consistent positioning of RealityKit entities across multiple devices in an immersive space during a SharePlay session".

There are cases where we want to position content in front of the user (independent of the shared session, and for each user individually). Normally we use the transform retrieved via worldTrackingProvider.queryDeviceAnchor(...).originFromAnchorTransform to position content in front of the user (plus some Z offset and smooth interpolation). This works fine in non-SharePlay instances, and the device transform is where I would expect it to be, but during a FaceTime call deviceAnchor.originFromAnchorTransform seems to use the shared origin of the immersive space, and I end up with a transform that might be offset.

Here is a video of the issue in action: https://streamable.com/205r2p

The blue rect is placed using AnchorEntity(.head, trackingMode: .continuous). This works regardless of the call, and the entity is always placed based on the head position. The green rect is adjusted on every frame using the transform I get from worldTrackingProvider.queryDeviceAnchor. As you can see, it is offset.

Is there any way I can query this transform locally for the user during a FaceTime call?

I would also like to know whether it is possible to disable this automatic entity transform syncing behavior. Setting entity.synchronization = nil results in the entity not showing up at all (https://developer.apple.com/documentation/realitykit/synchronizationcomponent). Is SynchronizationComponent only relevant for the legacy MultipeerConnectivity approach? Thank you!
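For reference, a minimal sketch of the local placement path described above (standard ARKit-for-visionOS API; the 0.7 m distance is illustrative). The open question is whether this transform can be made call-independent:

```swift
import ARKit
import QuartzCore
import simd

// Sketch: compute a transform ~0.7 m in front of the device, using the
// device anchor from a running WorldTrackingProvider.
let worldTracking = WorldTrackingProvider()
// ... an ARKitSession is assumed to be running with `worldTracking` ...

func inFrontOfDeviceTransform(distance: Float = 0.7) -> simd_float4x4? {
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    var transform = device.originFromAnchorTransform
    // The device looks down its local -Z axis; push the position along it.
    let forward = -simd_make_float3(transform.columns.2)
    let position = simd_make_float3(transform.columns.3) + forward * distance
    transform.columns.3 = simd_make_float4(position, 1)
    return transform
}
```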
2 replies · 0 boosts · 280 views · Oct ’25
CompositorServices Or RealityKit
I have been concentrating on developing visionOS applications. While I am quite familiar with RealityKit, CompositorServices has also captured my attention, though I have not yet learned it. Could you please clarify whether it is essential for me to learn CompositorServices? I would also appreciate insights into the respective advantages of RealityKit and CompositorServices.
2 replies · 0 boosts · 762 views · Mar ’25
ImpulseAction giving strange error
I am trying to apply an ImpulseAction to an entity, but every time entity.playAnimation(impulseAnimation) executes, the log says Cannot find a BindPoint for any bind path: "". I can't figure out what is wrong. Could someone please help me with this?

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle),
               let sphere = immersiveContentEntity.findEntity(named: "Sphere") {
                sphere.components.set(CollisionComponent(shapes: [ShapeResource.generateSphere(radius: 0.1)]))
                sphere.components.set(PhysicsBodyComponent(shapes: [ShapeResource.generateSphere(radius: 0.1)], mass: 1000))
                sphere.components[PhysicsBodyComponent.self]?.isAffectedByGravity = false
                sphere.position = [0, 1, -1]
                content.add(immersiveContentEntity)

                // Create an action to apply an impulse, forcing the object to move upwards.
                let impulseAction = ImpulseAction(linearImpulse: [0, 1, 0])

                // Create a small positive duration value.
                let duration: TimeInterval = 1 / 30.0

                // Create an animation for the action, which will start playing
                // after five seconds.
                do {
                    let impulseAnimation = try AnimationResource
                        .makeActionAnimation(for: impulseAction, duration: duration, delay: 5.0)
                    // Play the animation that runs the action.
                    sphere.playAnimation(impulseAnimation)
                } catch {
                    print("Error: \(error)")
                }
            }
        }
    }
}
```

All the logs:

```
Could not locate file 'default-binaryarchive.metallib' in bundle.
Error creating the CFMessagePort needed to communicate with PPT.
AddInstanceForFactory: No factory registered for id <CFUUID 0x6000029a5b80> F8BB1C28-BAE8-11D6-9C31-00039315CD46
cannot add handler to 0 from 1 - dropping
nw_socket_copy_info [C1:2] getsockopt TCP_INFO failed [102: Operation not supported on socket]
nw_socket_copy_info getsockopt TCP_INFO failed [102: Operation not supported on socket]
Registering library (/Library/Developer/CoreSimulator/Volumes/xrOS_22N840/Library/Developer/CoreSimulator/Profiles/Runtimes/xrOS 2.2.simruntime/Contents/Resources/RuntimeRoot/System/Library/PrivateFrameworks/CoreRE.framework/default.metallib) that already exists in shader manager. Library will be overwritten.
cannot add handler to 0 from 1 - dropping
Cannot find a BindPoint for any bind path: "", ""
Sync object without snapshot while removing view (id: 2816861686082450363, type: 6373420419761316588[SelectableSceneContentIdentifierComponent]).
```

But I think only Cannot find a BindPoint for any bind path: "", "" is relevant.
2 replies · 0 boosts · 634 views · Jan ’25
How to prevent system windows occlusion when using OcclusionMaterial
We use SceneReconstructionProvider to detect meshes in the surrounding environment and apply an OcclusionMaterial to them:

```swift
// Assuming `entity` represents one of the detected meshes in the environment
entity.components.set(ModelComponent(
    mesh: mesh,
    materials: [OcclusionMaterial()]
))
```

While this correctly occludes entities placed in the immersive space, it also occludes system windows. This becomes problematic when a window is dragged into an occluded area (before or after entering the immersive space), preventing interaction with its elements. In some cases it also makes it impossible to focus the window's drag handle, since that can become occluded as well after moving the window nearby. More generally, system windows can be occluded whenever they come into proximity with a model that has OcclusionMaterial applied.

I'm aware of a change introduced in visionOS 2 regarding how occlusions interact with UI elements (as noted in the release notes). I believe this change was intended to ensure windows do not remain visible when opened in another room. However, it also introduces some challenges, as described in the scenario above.

Is there a way to prevent system window occlusion while still allowing entities to be occluded by environmental features? Perhaps without using OcclusionMaterial at all?

Development environment: Xcode 16.2, macOS 15.2
Run-time configuration: visionOS 2.2 and 2.3
2 replies · 2 boosts · 324 views · Feb ’25
Summon gesture
Can you help me write code that can pick up an element a bit far from me, bring it near to me, let me flick it a bit, and then send it back to its original position when I release it? Thanks a lot, Christophe
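Not a complete solution, but a rough sketch of the pull-close and send-back parts using RealityKit's targeted drag gestures (assumptions: the entity already has CollisionComponent and InputTargetComponent, the positions and durations are illustrative, and the "flick" physics is left out):

```swift
import SwiftUI
import RealityKit

struct SummonView: View {
    // Remember where the entity started so we can send it back on release.
    @State private var homeTransform: Transform?

    var body: some View {
        RealityView { content in
            // ... add entities configured with CollisionComponent
            // and InputTargetComponent so they can receive gestures ...
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // On first contact, pull the entity close to the user.
                    if homeTransform == nil {
                        homeTransform = value.entity.transform
                        value.entity.move(
                            to: Transform(translation: [0, 1.2, -0.5]), // ~eye height, half a meter out
                            relativeTo: nil,                            // world space
                            duration: 0.3)
                    }
                }
                .onEnded { value in
                    // On release, animate the entity back to where it came from.
                    if let home = homeTransform {
                        value.entity.move(to: home, relativeTo: value.entity.parent, duration: 0.3)
                        homeTransform = nil
                    }
                }
        )
    }
}
```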
1 reply · 0 boosts · 62 views · Apr ’25
App crashes after requesting PhotoLibrary limited access
My visionOS app requires access to the user's personal photos. The trigger mechanism: when the user first opens a FooView, a task attached to that view calls let status = PHPhotoLibrary.authorizationStatus(for: .readWrite); if the status is .notDetermined, it then calls PHPhotoLibrary.requestAuthorization(for: .readWrite, handler: authCompletionHandler) so that visionOS pops up a window requesting photo access.

However, the app crashes every time the user selects Limited Access and the system tries to present the photo library picker. (Btw, I have set Prevent limited photos access alert to Yes, but I don't think it affects the behavior here.)

The debugger message:

```
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Presentations are not permitted within volumetric window scenes.'
```

However, the window this view belongs to is a .plain style window (though 3D objects appear in the other view of the same WindowGroup).

Here is my code snippet, in case it helps. checkAndUpdatePhotoAuthorization is just a wrapper around PHPhotoLibrary.authorizationStatus(for: .readWrite):

```swift
private func checkAndUpdatePhotoAuthorization() -> PHAuthorizationStatus {
    let currentStatus = PHPhotoLibrary.authorizationStatus(for: .readWrite)
    switch currentStatus {
    case .authorized:
        print("Photo library access authorized.")
        isPhotoGalleryAuthorized = true
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    case .limited:
        print("Photo library access limited.")
        isPhotoGalleryLimited = true
        isPhotoGalleryAuthorized = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    case .notDetermined:
        isPhotoGalleryDetermined = false
        print("Photo library access not determined.")
    case .denied:
        print("Photo library access denied.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        showSettingsAlert = true
        isPhotoGalleryDetermined = true
    case .restricted:
        print("Photo library access restricted.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = true
        showPhotoAuthExplainationAlert = true
        isPhotoGalleryDetermined = true
    @unknown default:
        print("Photo library Unknown authorization status.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    }
    return currentStatus
}
```

FooView then attaches a task that fires checkAndUpdatePhotoAuthorization():

```swift
var body: some View {
    EmptyView()
        .task {
            try? await Task.sleep(for: .seconds(1.0))
            let status = self.checkAndUpdatePhotoAuthorization()
            if status == .notDetermined {
                DispatchQueue.main.async {
                    PHPhotoLibrary.requestAuthorization(for: .readWrite, handler: authCompletionHandler)
                }
            }
        }
}
```

Another thing worth mentioning: SOMETIMES it won't crash when running a debug build, but it does crash in TestFlight builds. Any other ideas? Big thanks in advance.

Xcode version: 16.2 beta 3
visionOS version: 2.2
1 reply · 0 boosts · 661 views · Jan ’25