Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post · Replies · Boosts · Views · Activity

SharePlay on visionOS with remote participants
Hi everyone, I'm building a visualization app for Apple Vision Pro that uses SharePlay and GroupActivities to explore datasets collaboratively. I've successfully implemented the new SharedWorldAnchor feature, and everything works well with nearby, local participants. However, I'm stuck on one point: how can I share a world anchor with remote participants who join via FaceTime as spatial Personas? Apple's demo app (where multiple users move a plane model around) seems to suggest that this is possible. For context, I'm building an immersive app with Metal rendering. Any guidance or examples would be greatly appreciated! Thanks, Jens
2 replies · 1 boost · 374 views · Sep ’25
HoverEffectStyle in visionOS 26.0
This is no longer highlighting my entity when looking at it:

    RealityView { content in
        let hoverComponent = HoverEffectComponent(.spotlight(
            HoverEffectComponent.SpotlightHoverEffectStyle(
                color: .white,
                strength: 2.0
            )
        ))
        entity.components.set(hoverComponent)
    }

The entity is in a window; the same code works in an immersive view. The collision component and input type are set in RCP. It has also stopped working in my published app (built under visionOS 2.x) when run on my visionOS 26 device, but it works in a 2.x simulator. Is this a bug, or is there something I'm missing? Thanks.
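One thing worth ruling out (a hedged sketch, not a confirmed fix): the spotlight hover effect only fires on entities that also carry an InputTargetComponent and a non-empty CollisionComponent, so setting both in code alongside the hover effect can verify whether the RCP-configured components actually survive into the window scene. The collision shape below is a placeholder.

    // Hedged sketch: explicitly give the entity the input and collision components
    // that the spotlight hover effect requires, in case the RCP settings aren't applied.
    let hoverComponent = HoverEffectComponent(.spotlight(
        HoverEffectComponent.SpotlightHoverEffectStyle(color: .white, strength: 2.0)
    ))
    entity.components.set(hoverComponent)
    entity.components.set(InputTargetComponent())
    // Placeholder shape; match it to the entity's real bounds.
    entity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))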
2 replies · 0 boosts · 843 views · Oct ’25
How to get the floor plane with Spatial Tracking Session and Anchor Entity
In the WWDC session titled "Deep dive into volumes and immersive spaces", the presenters discussed adding a SpatialTrackingSession and an AnchorEntity to detect the floor, then glossed over some important details. They added a spatial tap gesture to let the user place content relative to the floor anchor, but they left out a lot of information.

    .gesture(
        SpatialTapGesture(coordinateSpace: .immersiveSpace)
            .targetedToAnyEntity()
            .onEnded { value in
                handleTapOnFloor(value: value)
            }
    )

My understanding is that an entity has to have input and collision components for gestures like this to work. How can we add a collision shape to an AnchorEntity when we don't know its size or shape? I've been trying for days to understand what is happening here and I just don't get it. It is even more frustrating that the example project that Apple released does not contain any of these features. I would like to be able to:

- Detect the floor plane
- Get the position/transform of the floor plane
- Add a collider to the floor plane
- Enable collisions and physics on the floor plane
- Enable gestures on the floor plane

It seems to me that the AnchorEntity is placed at an entirely arbitrary position. It has absolutely no relationship to the rectangle with the floor label that I can see in the Xcode visualization. It is just a point, not a plane or rect that I can use. I've tried manually calculating the collision shape after the anchor is detected, but nothing that I have tried works. I can't tap on the floor with gestures. I can't drop entities onto the floor. I can't seem to do anything at all with this floor anchor other than place an entity at a totally arbitrary location somewhere on the floor. Is there any way at all, with SpatialTrackingSession and AnchorEntity, to get the actual plane that was detected?

    struct FloorExample: View {
        @State var trackingSession: SpatialTrackingSession = SpatialTrackingSession()
        @State var subject: Entity?
        @State var floor: AnchorEntity?

        var body: some View {
            RealityView { content, attachments in
                let session = SpatialTrackingSession()
                let configuration = SpatialTrackingSession.Configuration(tracking: [.plane])
                _ = await session.run(configuration)
                self.trackingSession = session

                let floorAnchor = AnchorEntity(.plane(.horizontal, classification: .floor, minimumBounds: SIMD2(x: 0.1, y: 0.1)))
                floorAnchor.anchoring.physicsSimulation = .none
                floorAnchor.name = "FloorAnchorEntity"
                floorAnchor.components.set(InputTargetComponent())
                floorAnchor.components.set(CollisionComponent(shapes: .init()))
                content.add(floorAnchor)
                self.floor = floorAnchor

                // This is just here to let me see where visionOS decided to "place" the floor anchor.
                let floorPlaced = ModelEntity(
                    mesh: .generateSphere(radius: 0.1),
                    materials: [SimpleMaterial(color: .black, isMetallic: false)])
                floorAnchor.addChild(floorPlaced)

                if let scene = try? await Entity(named: "AnchorLabsFloor", in: realityKitContentBundle) {
                    content.add(scene)

                    if let subject = scene.findEntity(named: "StepSphereRed") {
                        self.subject = subject
                    }

                    // I can see when the anchor is added
                    _ = content.subscribe(to: SceneEvents.AnchoredStateChanged.self) { event in
                        event.anchor.generateCollisionShapes(recursive: true) // this doesn't seem to work
                        print("**anchor changed** \(event)")
                        print("**anchor** \(event.anchor)")
                    }

                    // place the reset button near the user
                    if let panel = attachments.entity(for: "Panel") {
                        panel.position = [0, 1, -0.5]
                        content.add(panel)
                    }
                }
            } update: { content, attachments in
            } attachments: {
                Attachment(id: "Panel") {
                    Button(action: {
                        print("**button pressed**")
                        if let subject = self.subject {
                            subject.position = [-0.5, 1.5, -1.5]
                            // Remove the physics body and assign a new one - hack to remove momentum
                            if let physics = subject.components[PhysicsBodyComponent.self] {
                                subject.components.remove(PhysicsBodyComponent.self)
                                subject.components.set(physics)
                            }
                        }
                    }, label: {
                        Text("Reset Sphere")
                    })
                }
            }
        }
    }
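A possible direction (a hedged sketch, not a confirmed answer): the AnchorEntity API doesn't expose the detected plane's geometry, but ARKit's PlaneDetectionProvider does. Running it in an immersive space yields PlaneAnchor updates whose extent can be turned into a collision shape sized to the real floor. World-sensing authorization is required, and the entity and shape choices below are illustrative.

    import ARKit
    import RealityKit

    // Hedged sketch: derive a floor-sized collision shape from PlaneAnchor geometry.
    func trackFloor(into root: Entity) async throws {
        let arSession = ARKitSession()
        let planeData = PlaneDetectionProvider(alignments: [.horizontal])
        try await arSession.run([planeData])

        let floorEntity = Entity()
        floorEntity.name = "DetectedFloor"
        root.addChild(floorEntity)

        for await update in planeData.anchorUpdates {
            let anchor = update.anchor
            guard anchor.classification == .floor else { continue }

            // Position the entity at the anchor's origin in world space.
            floorEntity.transform = Transform(matrix: anchor.originFromAnchorTransform)

            // Build a thin box matching the detected extent so taps and collisions hit the real floor.
            // (For full precision, also apply anchor.geometry.extent.anchorFromExtentTransform.)
            let extent = anchor.geometry.extent
            let shape = ShapeResource.generateBox(width: extent.width, height: 0.02, depth: extent.height)
            floorEntity.components.set(CollisionComponent(shapes: [shape]))
            floorEntity.components.set(InputTargetComponent())
            floorEntity.components.set(PhysicsBodyComponent(mode: .static))
        }
    }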
2 replies · 0 boosts · 827 views · Jan ’25
Can iOS and visionOS Devices Share the Same Spatial World in a Multiuser AR Session?
Hey everyone, I'm working on an object viewer where users can place objects in a real room using AR, and I want both visionOS (Apple Vision Pro) and iOS devices (iPad, iPhone) to participate in the same shared spatial experience. The idea is that a user with a Vision Pro can place an object, and peers using iPhones/iPads can see the same object in the same position in their AR view. I've looked into ARKit's Shared ARWorldMap and MultipeerConnectivity, but I'm not sure if this extends seamlessly to visionOS or if Apple has an official way to sync spatial data between visionOS and iOS devices. Has anyone tried sharing a spatial world between visionOS and iOS? Are there any built-in frameworks that allow for a shared multiuser AR session across these devices? If not, what would be the best way to sync object positions between them? Would love to hear if anyone has insights or experience with this! 🚀 Thanks!
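For what it's worth, ARWorldMap and its collaborative-session support are iOS ARKit APIs and, as far as I know, are not available in visionOS's ARKit, so there is no built-in cross-platform shared world map. One pragmatic pattern is to pick a shared origin both devices can find independently (for example, a printed image or marker each platform detects with its own tracking API) and then sync object transforms yourself. The sketch below only covers the transform-sync half over MultipeerConnectivity, which is available on both platforms; the class name and raw-bytes payload are made up for illustration, and peers are assumed to be connected via the usual advertiser/browser setup elsewhere.

    import MultipeerConnectivity
    import simd

    // Hedged sketch: broadcast and receive object transforms between peers.
    final class TransformSync: NSObject, MCSessionDelegate {
        let peerID = MCPeerID(displayName: "viewer")
        lazy var session = MCSession(peer: peerID, securityIdentity: nil, encryptionPreference: .required)
        var onTransform: ((simd_float4x4) -> Void)?

        override init() {
            super.init()
            session.delegate = self
        }

        // Send a placed object's transform, expressed relative to the agreed shared origin.
        func send(_ transform: simd_float4x4) {
            guard !session.connectedPeers.isEmpty else { return }
            let data = withUnsafeBytes(of: transform) { Data($0) }
            try? session.send(data, toPeers: session.connectedPeers, with: .reliable)
        }

        func session(_ session: MCSession, didReceive data: Data, fromPeer peerID: MCPeerID) {
            guard data.count == MemoryLayout<simd_float4x4>.size else { return }
            let transform = data.withUnsafeBytes { $0.loadUnaligned(as: simd_float4x4.self) }
            DispatchQueue.main.async { self.onTransform?(transform) }
        }

        // Remaining delegate requirements, unused in this sketch.
        func session(_ session: MCSession, peer peerID: MCPeerID, didChange state: MCSessionState) {}
        func session(_ session: MCSession, didReceive stream: InputStream, withName streamName: String, fromPeer peerID: MCPeerID) {}
        func session(_ session: MCSession, didStartReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, with progress: Progress) {}
        func session(_ session: MCSession, didFinishReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, at localURL: URL?, withError error: Error?) {}
    }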
2 replies · 0 boosts · 531 views · Mar ’25
ARKit: Keep USDZ node fixed after image tracking is lost (prevent drifting)
I'm using ARKit + SceneKit (Swift) with ARWorldTrackingConfiguration and detectionImages to place a 3D object (a USDZ loaded via SCNScene(named:)) when a reference image is detected. While the image is tracked, the object stays correctly aligned.

Goal: when the tracked image is no longer visible, I want the placed node to remain visible and fixed at its last known pose (no drifting) as I move the camera.

What works so far: detect image → add node → track updates; when the image disappears → keep showing the node at its last pose.

Problem: after the image is no longer tracked, the node drifts as I move the device/camera. It looks like it's still influenced by the (now unreliable) image anchor, or is accumulating small world-tracking errors.

Question: what's the correct way in ARKit to "freeze" the node at its last known world transform once the ARImageAnchor stops tracking, so it doesn't drift?
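One approach that fits this setup (a hedged sketch; the sceneView property and the child-node layout are assumptions about the poster's code): once the ARImageAnchor reports isTracked == false, capture the node's last world transform, re-parent it from the anchor's node to the scene's root node, and restore that transform. The node then stays fixed in world space instead of following the stale image anchor; any remaining drift is ordinary world-tracking error.

    // ARSCNViewDelegate callback, fired whenever the image anchor's node updates.
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor,
              let placedNode = node.childNodes.first,   // assumes the USDZ node is a child of the anchor node
              !imageAnchor.isTracked else { return }

        // Freeze: capture the last known world pose, detach from the image anchor,
        // and re-attach to the root node at that same pose.
        let lastWorldTransform = placedNode.worldTransform
        placedNode.removeFromParentNode()
        sceneView.scene.rootNode.addChildNode(placedNode)
        placedNode.setWorldTransform(lastWorldTransform)

        // Optionally remove the anchor so it stops generating updates entirely.
        sceneView.session.remove(anchor: imageAnchor)
    }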
2 replies · 0 boosts · 459 views · Oct ’25
How to update TextureResource with MTLTexture?
Hi, I have a monitoring app that takes input video from UVC, processes it with Metal, and ends up with an MTLTexture. The problem I'm facing is that I have to convert the MTLTexture to a CGImage and then call TextureResource.replace, which is super slow. The Metal processing keeps up with the input frame rate (50 fps), but the MTLTexture -> CGImage -> TextureResource path only manages about 7 fps. Is there any way I can make it faster?
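The usual way around the CPU round-trip (a hedged sketch; the exact pixel format and sizes depend on the pipeline, and both textures must match for the blit) is TextureResource.DrawableQueue: attach a drawable queue to the TextureResource once, then each frame blit the processed MTLTexture into the next drawable on the GPU and present it, with no CGImage conversion at all.

    import Metal
    import RealityKit

    // One-time setup: create a drawable queue and route the texture's contents through it.
    func attachDrawableQueue(to texture: TextureResource, width: Int, height: Int) throws -> TextureResource.DrawableQueue {
        let descriptor = TextureResource.DrawableQueue.Descriptor(
            pixelFormat: .bgra8Unorm,
            width: width,
            height: height,
            usage: [.renderTarget, .shaderRead, .shaderWrite],
            mipmapsMode: .none)
        let queue = try TextureResource.DrawableQueue(descriptor)
        texture.replace(withDrawables: queue)
        return queue
    }

    // Per frame: copy the processed Metal texture into the next drawable entirely on the GPU.
    func present(_ processed: MTLTexture, via queue: TextureResource.DrawableQueue, commandQueue: MTLCommandQueue) throws {
        let drawable = try queue.nextDrawable()
        guard let commandBuffer = commandQueue.makeCommandBuffer(),
              let blit = commandBuffer.makeBlitCommandEncoder() else { return }
        blit.copy(from: processed, to: drawable.texture)
        blit.endEncoding()
        commandBuffer.commit()
        drawable.present()
    }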
2 replies · 0 boosts · 409 views · Oct ’25
Metal Compositor Services & Personas (visionOS)
Hello, I'm currently trying to make a collaborative app, but it only works with RealityView; when I try to use a CompositorLayer as below, the Personas disappear.

    ImmersiveSpace(id: "ImmersiveSpace-Metal") {
        CompositorLayer(configuration: MetalLayerConfiguration()) { layerRenderer in
            SpatialRenderer_InitAndRun(layerRenderer)
        }
    }

Is there any potential solution to see Personas in a Metal view? Thanks in advance!
2 replies · 0 boosts · 747 views · Sep ’25
Volumetric window not sharing in SharePlay session on visionOS
I've been struggling with this for far too long, so I've decided to finally come here and see if anyone can point me to the documentation that I'm missing. I'm sure it's something simple, but I just can't figure it out. I can SharePlay our test app with my brother (device to device), but when I open a volumetric window, it says "not shared" under it. I assume fixing this will likely fix the video sharing problem we have as well. Everything else works so smoothly, but SharePlay has been such a struggle for me. It's the last piece of the puzzle before we can put it on the App Store.
2 replies · 0 boosts · 124 views · Sep ’25
Reading image files outside of app content
Hi there. Thanks to amazing help from you guys, I've managed to code a 360 image carousel where the user can browse 360 images located inside the project package. Is there a way to access the filesystem on Apple Vision Pro outside the app? I know about FileManager, and I can get access to the .documentsDirectory, but how do I access the Documents folder from the "Files" app on the AVP? My goal is to read images from a hardcoded folder location on the AVP, so that the user never has to select the images themselves. I know this may not be the "right" way to do this, but the app is supposed to be "foolproof" with a minimum of user interaction. The only way to change the images should be to change the contents of the hardcoded image folder. I hope this makes sense =) Thanks in advance! Regards, Kim
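One way to get close to this (a hedged sketch; the function name and extension list are arbitrary): expose the app's own Documents directory in the Files app by setting UIFileSharingEnabled and LSSupportsOpeningDocumentsInPlace in Info.plist, then read whatever images have been dropped into that folder at launch. Reading arbitrary folders elsewhere on the device isn't possible without user interaction (for example, a document picker).

    import Foundation

    // Hedged sketch: list image files in the app's Documents folder, which becomes visible
    // in the Files app when the Info.plist file-sharing keys above are enabled.
    func loadCarouselImageURLs() -> [URL] {
        let documents = URL.documentsDirectory
        let contents = (try? FileManager.default.contentsOfDirectory(
            at: documents,
            includingPropertiesForKeys: nil)) ?? []
        let imageExtensions: Set<String> = ["jpg", "jpeg", "png", "heic"]
        return contents
            .filter { imageExtensions.contains($0.pathExtension.lowercased()) }
            .sorted { $0.lastPathComponent < $1.lastPathComponent }
    }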
2 replies · 0 boosts · 275 views · Feb ’25
Ornaments in Presentations
We can add ornaments to popovers shown by PresentationComponent, but I'm not sure if we should. While working on the editor for entities in a volume-based app, I had the idea to add ornaments to the presented views. The entire app exists inside a volume. A user can tap an item to present a popover UI to edit it; this is displayed using the new PresentationComponent in visionOS 26. Ornaments have a new attachment anchor option this year: .parent().

    .ornament(attachmentAnchor: .parent(.top), ornament: {...})

This works well in the Simulator: we can add ornaments around this popover view just like we would with a window. Unfortunately, when I run this on device I get a different experience. Any part of the ornament that overlaps with the popover content isn't rendered correctly; sometimes it entirely disappears, other times it becomes partially transparent. We could use content alignment to try to make sure the ornament doesn't overlap the popover content:

    .ornament(attachmentAnchor: .parent(.top), contentAlignment: .bottom, ornament: {...})

This works sometimes, but not all the time. It's not clear whether this is a bug, because I'm not sure we are even supposed to be able to use ornaments in this way. Here is my hierarchy:

- An app opens as a volume
- The volume presents a RealityView, with its own ornament using the .scene() anchor
- Multiple entities with a PresentationComponent show an edit view
- The view uses the .parent() anchor to add ornaments

What makes me unsure is that other methods for drawing UI in a RealityView don't seem to work with ornaments. For example, if I add an attachment to show a view with the ornament, the ornament is anchored to the volume, not the attachment view, even when I use the .parent() anchor. So what do we think? Is this a rendering bug? Are ornaments intended to work with attachments and presentations?
2 replies · 0 boosts · 352 views · Aug ’25
Misaligned visionOS Simulator Home Position
Using Xcode 26 Beta 6 on macOS 26 Beta 25a5349a. When pressing the home button of the visionOS simulator, I am not positioned in the middle of the room like I normally would be. This occurred after moving around the space a lot to find an element added to an ImmersiveSpace. Workaround: restart the simulator device. See the attached pictures, visionOSSimulatorCorrectHomePosition and visionOSSimulatorMisallignedHomePosition.
2 replies · 0 boosts · 880 views · Sep ’25
Unexpected Behavior in Entity Movement System When Using AVAudioPlayer in visionOS Development
I am currently developing an app for visionOS and have encountered an issue involving a component and system that move an entity up and down within a specific Y-axis range. The system works as expected until I introduce sound playback using AVAudioPlayer. Whenever I use AVAudioPlayer to play sound, the entity exhibits unexpected behaviors, such as freezing or becoming unresponsive. The freezing of the entity's movement is particularly noticeable when playing the audio for the first time; after that it becomes less noticeable, but you can still feel it, especially when the audio is played in quick succession. The issue is also more noticeable on a real device than in the simulator.

    //
    // IssueApp.swift
    // Issue
    //
    // Created by Zhendong Chen on 2/1/25.
    //
    import SwiftUI

    @main
    struct IssueApp: App {
        var body: some Scene {
            WindowGroup {
                ContentView()
            }
            .windowStyle(.volumetric)
        }
    }

    //
    // ContentView.swift
    // Issue
    //
    // Created by Zhendong Chen on 2/1/25.
    //
    import SwiftUI
    import RealityKit
    import RealityKitContent

    struct ContentView: View {
        @State var enlarge = false

        var body: some View {
            RealityView { content, attachments in
                // Add the initial RealityKit content
                if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                    if let sphere = scene.findEntity(named: "Sphere") {
                        sphere.components.set(UpAndDownComponent(speed: 0.03, minY: -0.05, maxY: 0.05))
                    }
                    if let button = attachments.entity(for: "Button") {
                        button.position.y -= 0.3
                        scene.addChild(button)
                    }
                    content.add(scene)
                }
            } attachments: {
                Attachment(id: "Button") {
                    VStack {
                        Button {
                            SoundManager.instance.playSound(filePath: "apple_en")
                        } label: {
                            Text("Play audio")
                        }
                        .animation(.none, value: 0)
                        .fontWeight(.semibold)
                    }
                    .padding()
                    .glassBackgroundEffect()
                }
            }
            .onAppear {
                UpAndDownSystem.registerSystem()
            }
        }
    }

    //
    // SoundManager.swift
    // LinguaBubble
    //
    // Created by Zhendong Chen on 1/14/25.
    //
    import Foundation
    import AVFoundation

    class SoundManager {
        static let instance = SoundManager()
        private var audioPlayer: AVAudioPlayer?

        func playSound(filePath: String) {
            guard let url = Bundle.main.url(forResource: filePath, withExtension: ".mp3") else { return }
            do {
                audioPlayer = try AVAudioPlayer(contentsOf: url)
                audioPlayer?.play()
            } catch let error {
                print("Error playing sound. \(error.localizedDescription)")
            }
        }
    }

    //
    // UpAndDownComponent+System.swift
    // Issue
    //
    // Created by Zhendong Chen on 2/1/25.
    //
    import RealityKit

    struct UpAndDownComponent: Component {
        var speed: Float
        var axis: SIMD3<Float>
        var minY: Float
        var maxY: Float
        var direction: Float = 1.0 // 1 for up, -1 for down
        var initialY: Float?

        init(speed: Float = 1.0, axis: SIMD3<Float> = [0, 1, 0], minY: Float = 0.0, maxY: Float = 1.0) {
            self.speed = speed
            self.axis = axis
            self.minY = minY
            self.maxY = maxY
        }
    }

    struct UpAndDownSystem: System {
        static let query = EntityQuery(where: .has(UpAndDownComponent.self))

        init(scene: RealityKit.Scene) {}

        func update(context: SceneUpdateContext) {
            let deltaTime = Float(context.deltaTime) // Time between frames

            for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
                guard var component: UpAndDownComponent = entity.components[UpAndDownComponent.self] else { continue }

                // Ensure we have the initial Y value set
                if component.initialY == nil {
                    component.initialY = entity.transform.translation.y
                }

                // Calculate the current position
                let currentY = entity.transform.translation.y

                // Move the entity up or down
                let newY = currentY + (component.speed * component.direction * deltaTime)

                // If the entity moves out of the allowed range, reverse the direction
                if newY >= component.initialY! + component.maxY {
                    component.direction = -1.0 // Move down
                } else if newY <= component.initialY! + component.minY {
                    component.direction = 1.0 // Move up
                }

                // Apply the new position
                entity.transform.translation = SIMD3<Float>(entity.transform.translation.x, newY, entity.transform.translation.z)

                // Update the component with the new direction
                entity.components[UpAndDownComponent.self] = component
            }
        }
    }

Could someone help me with this?
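One workaround that tends to help with this kind of hitch (a hedged sketch; the likely cause is AVAudioPlayer being created and loading the file on the main thread at the moment of the tap): preload the audio once, or play it through RealityKit's own audio API so loading happens asynchronously and playback is attached to the entity. The resource name mirrors the post; the class is illustrative.

    import RealityKit

    // Hedged sketch: load the audio once with the async AudioFileResource initializer
    // and play it on the entity, instead of constructing an AVAudioPlayer on every tap.
    final class EntityAudio {
        private var resource: AudioFileResource?

        func preload() async {
            // Loading happens asynchronously, so the first play doesn't stall the render loop.
            resource = try? await AudioFileResource(named: "apple_en.mp3")
        }

        func play(on entity: Entity) {
            guard let resource else { return }
            // playAudio returns a playback controller; keep it if you need to stop or adjust playback.
            _ = entity.playAudio(resource)
        }
    }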
2 replies · 0 boosts · 350 views · Feb ’25