Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post

Replies

Boosts

Views

Activity

Hand Tracking Latency When UITextView Becomes Active in Vision Pro Immersive Space
I'm placing a sphere at the fingertip and updating its position as the hand moves. Finger joint tracking works correctly, but I've observed noticeable latency in hand tracking updates whenever a UITextView becomes active. This lag happens intermittently during app usage, lasting about 5–10 seconds, after which the latency disappears and the sphere follows the finger joints immediately. When I open the immersive space for the first time, the profiler shows a large performance spike, up to 328%. After that, it stabilizes and runs smoothly. Note: I don't observe any lag when CPU usage spikes to 300% (on immersive view load), yet the lag still occurs even when CPU usage remains below 100%. I'm using the following code for hand tracking:

private func processHandTrackingUpdates() async {
    for await update in handTracking.anchorUpdates {
        let handAnchor = update.anchor
        if handAnchor.isTracked {
            switch handAnchor.chirality {
            case .left:
                leftHandAnchor = handAnchor
                updateHandJoints(for: handAnchor, with: leftHandJointEntities)
            case .right:
                rightHandAnchor = handAnchor
                updateHandJoints(for: handAnchor, with: rightHandJointEntities)
            }
        } else {
            switch handAnchor.chirality {
            case .left:
                leftHandAnchor = nil
                hideAllJoints(in: leftHandJointEntities)
            case .right:
                rightHandAnchor = nil
                hideAllJoints(in: rightHandJointEntities)
            }
        }
        await MainActor.run {
            handTrackingData.processNewHandAnchors(
                leftHand: self.leftHandAnchor,
                rightHand: self.rightHandAnchor
            )
        }
    }
}

And here's the function I'm using to update the joint positions:

private func updateHandJoints(
    for handAnchor: HandAnchor,
    with jointEntities: [HandSkeleton.JointName: Entity]
) {
    guard handAnchor.isTracked else {
        hideAllJoints(in: jointEntities)
        return
    }

    // Check if the little finger tip and intermediate base are both tracked.
    if let tipJoint = handAnchor.handSkeleton?.joint(.littleFingerTip),
       let intermediateBaseJoint = handAnchor.handSkeleton?.joint(.littleFingerIntermediateTip),
       tipJoint.isTracked,
       intermediateBaseJoint.isTracked,
       let pinkySphere = jointEntities[.littleFingerTip] {
        // Convert joint transforms to world space.
        let tipTransform = handAnchor.originFromAnchorTransform * tipJoint.anchorFromJointTransform
        let intermediateBaseTransform = handAnchor.originFromAnchorTransform * intermediateBaseJoint.anchorFromJointTransform

        // Extract positions from the transforms.
        let tipPosition = SIMD3<Float>(tipTransform.columns.3.x, tipTransform.columns.3.y, tipTransform.columns.3.z)
        let intermediateBasePosition = SIMD3<Float>(intermediateBaseTransform.columns.3.x, intermediateBaseTransform.columns.3.y, intermediateBaseTransform.columns.3.z)

        // Calculate the midpoint.
        let midpointPosition = (tipPosition + intermediateBasePosition) / 2.0

        // Position the sphere at the midpoint and make it visible.
        pinkySphere.isEnabled = true
        pinkySphere.transform.translation = midpointPosition
    } else {
        // If either joint is not tracked, hide the sphere.
        jointEntities[.littleFingerTip]?.isEnabled = false
    }

    // Update the positions of all other hand joint spheres.
    for (jointName, entity) in jointEntities {
        if jointName == .littleFingerTip {
            // Already handled the pinky above.
            continue
        }
        guard let joint = handAnchor.handSkeleton?.joint(jointName), joint.isTracked else {
            entity.isEnabled = false
            continue
        }
        entity.isEnabled = true
        let jointTransform = handAnchor.originFromAnchorTransform * joint.anchorFromJointTransform
        entity.transform.translation = SIMD3<Float>(jointTransform.columns.3.x, jointTransform.columns.3.y, jointTransform.columns.3.z)
    }
}

I've attached both a profiler trace and video recordings from Vision Pro that clearly demonstrate the issue. Profiler: https://drive.google.com/file/d/1fDWyGj_fgxud2ngkGH_IVmuH_kO-z0XZ Vision Pro recordings: https://drive.google.com/file/d/17qo3U9ivwYBsbaSm26fjaOokkJApbkz- https://drive.google.com/file/d/1LxTxgudMvWDhOqKVuhc3QaHfY_1x8iA0 Has anyone else experienced this behavior? My suspicion is that background calculations at the OS level are causing this latency. Any guidance would be greatly appreciated. Thanks!
0
0
432
Sep ’25
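Editor's note on the hand-tracking post above: one thing worth ruling out before blaming the OS is the await MainActor.run hop inside the anchor loop. Every anchor update waits for a free slot on the main thread, so while keyboard and UITextView work saturate the main thread, updates queue up and the sphere lags. A minimal "latest value wins" sketch (LatestHandAnchors is a hypothetical helper, not an ARKit type): the tracking task publishes without awaiting, and main-thread code reads the newest snapshot once per frame, dropping stale poses instead of queueing them.

import ARKit
import Foundation

// Hypothetical "latest value wins" buffer: publishing never blocks on
// the main thread, and readers only ever see the newest pair of anchors.
final class LatestHandAnchors: @unchecked Sendable {
    private let lock = NSLock()
    private var left: HandAnchor?
    private var right: HandAnchor?

    func publish(left: HandAnchor?, right: HandAnchor?) {
        lock.lock(); defer { lock.unlock() }
        self.left = left
        self.right = right
    }

    func snapshot() -> (left: HandAnchor?, right: HandAnchor?) {
        lock.lock(); defer { lock.unlock() }
        return (left, right)
    }
}

The anchor loop then calls publish(...) instead of await MainActor.run, and whatever drives the per-frame scene update (for example a RealityKit SceneEvents.Update subscription) calls snapshot() and forwards the result to processNewHandAnchors. Dropped intermediate updates are harmless here because only the newest pose matters.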
Logitech Muse: OS-level support?
I've been experimenting with the Muse pen and understand that it can be accessed by my app through a SpatialTrackingSession, but is there any current or planned OS-level support for devices like this as general UI input, the way game controllers are supported? For example, using the button as a tap analogue for SwiftUI views.
0
0
111
Oct ’25
Multi-device coordination is cumbersome
During a live stream we have to operate the Vision Pro (capture), a Mac (streaming), and a control console (scene switching) at the same time. There is no unified control entry point; adjusting 3D models, calibrating picture quality, and similar operations require switching between devices, which is error-prone and inefficient. Request: for live-streaming scenarios, provide dedicated desktop control software that supports one-stop management of the Vision Pro's capture parameters, 3D model switching, and mixed-reality compositing effects, making multi-device coordination visual and convenient.
0
0
353
Dec ’25
How to add visual thickness to a glass background view
Hi guys! In visionOS, when using a ZStack decorated with .glassBackgroundEffect(), you can see the 3D glass background from the front, but when viewed from the side the view appears to have no thickness. However, I noticed that in an app built by Apple, a glass background view viewed from the side does appear to have thickness. I tried adding .frame(depth:) to a glass background view, but it appears as two separate layers spaced by the depth value. My question is: Is there a view modifier that adds visual thickness to a glass background view, as shown in the picture? Or, if not, how should I write a custom view modifier to achieve this effect? Thanks!
0
0
120
May ’25
RealityView content disappears when selecting Lock In Place on visionOS
Hi, I'm experiencing an issue where all RealityView content disappears when the user selects "Lock In Place" from the window management menu (long press on the close button). "Follow Me" works correctly. This happens in TestFlight builds only; it is not reproducible when I run locally. I have reproduced this with a minimal project containing nothing but a simple red cube — no custom anchors, no app state, no dependencies. Steps to Reproduce: Open an ImmersiveSpace. A red cube is placed 1m in front of the user via RealityView. Long press the X button on any floating window. Select "Lock In Place". The cube disappears immediately. Expected: Cube remains visible after the window is locked. Actual: Cube disappears. Minimal reproducible code:

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let cube = ModelEntity(
                mesh: .generateBox(size: 0.3),
                materials: [SimpleMaterial(color: .red, isMetallic: false)]
            )
            cube.setPosition(SIMD3<Float>(0, 1.5, -1), relativeTo: nil)
            content.add(cube)
        }
    }
}

Device: Apple Vision Pro. visionOS version: 26.2 (23N301). Xcode version: 26.3 (17C529). Is this a known issue? Is there a recommended workaround to preserve RealityView content during Lock In Place transitions? Thank you!
0
0
327
2d
Vision Pro stuck on "Retrieving configuration"
Hi, after upgrading to visionOS 2.4.1 (from 1.0), my Vision Pro gets stuck on the "Retrieving configuration" screen. The Apple Store didn't take my case since the unit was sold in the USA and the product still isn't available on the Italian market. I don't have a Developer Strap; how can I resolve the issue? Thank you
0
0
119
May ’25
Cannot reassign worldTracking / planeDetection providers in my PlacementManager when switching environments
Environment: Xcode 16.2, visionOS SDK 2.4, Swift 6.1. Target: Apple Vision Pro (immersive space). Frameworks: ARKit, RealityKit, SwiftUI.

What I'm Trying to Do
I have a view-model class PlacementManager that holds two AR providers:

private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider

I want to dynamically replace these providers in a setEnvironment(_:) method (so I can save/clear a JSON scene and restart ARKit).

What's Happening
If I declare them as:

private let worldTracking = WorldTrackingProvider()
private let planeDetection = PlaneDetectionProvider()

I get compile errors when I later do:

self.worldTracking = newWorldTracking // Cannot assign to property: 'worldTracking' is a 'let' constant

If I change them to uninitialized vars:

private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider

then in my init() I get:

self used in property access 'worldTracking' before all stored properties are initialized

Code snippet:

@Observable
final class PlacementManager: ObservableObject {
    private var worldTracking: WorldTrackingProvider
    private var planeDetection: PlaneDetectionProvider
    // … other props …

    @MainActor
    init() {
        // error: self.worldTracking used before init…
        planeAnchorHandler = PlaneAnchorHandler(rootEntity: root)
        persistenceManager = PersistenceManager(
            worldTracking: worldTracking,
            rootEntity: root
        )
        // …
    }

    @MainActor
    func setEnvironment(env: Environnement) async {
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider()
        try await appState!.arkitSession.run(
            [newWorldTracking, newPlaneDetection]
        )
        self.worldTracking = newWorldTracking
        self.planeDetection = newPlaneDetection
        // …
    }
}

What I've Tried
Giving them default values at declaration (= WorldTrackingProvider()).
Initializing them at the top of init() before any use.
Passing the new providers into arkitSession.run(...).

My Question
What is the recommended Swift-style pattern to declare and reassign these ARKit provider properties so that they're fully initialized before use in init(), and I can swap them out later in setEnvironment(...) without compiler errors? Any pointers (or links to forum threads / docs) would be greatly appreciated!
0
0
145
May ’25
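Editor's note on the PlacementManager question above: declaring the providers as vars with default values should satisfy both constraints, since a var can be reassigned later and a default value means the property is initialized before the init() body runs (the earlier attempt may have failed because let was kept alongside the default value). A minimal sketch with illustrative names; note that a data provider that has already been run cannot be run again, which is why fresh instances are created on each reset:

import ARKit

@Observable
@MainActor
final class ProviderHolder {
    // vars with default values: initialized before init() runs,
    // yet still reassignable later.
    private var worldTracking = WorldTrackingProvider()
    private var planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])

    func resetProviders(session: ARKitSession) async throws {
        // Providers can't be re-run once stopped, so build fresh ones
        // and swap them into the stored properties.
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])
        try await session.run([newWorldTracking, newPlaneDetection])
        worldTracking = newWorldTracking
        planeDetection = newPlaneDetection
    }
}

Separately, mixing @Observable with ObservableObject as in the posted snippet is usually unnecessary; picking one observation mechanism avoids confusion.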
visionOS 2.0 – Persist room‑fixed RealityKit entities on walls across app launches (WorldAnchor?)
I want to let users place 2D/3D "artworks" on detected walls and have them reappear in exactly the same real-world spot after quitting and relaunching the app (like widgets do, but for my own entities). Environment: Xcode 26, visionOS 2.0, RealityKit + ARKitSession/WorldTrackingProvider. Entities are parented to a holder that's aligned to a wall via plane/mesh raycasts. What I've tried: Create a WorldAnchor at placement, save UUID + full 4×4 transform. On next launch, re-create the WorldAnchor (or set the saved transform) and attach the entity. Gate restore on relocalization/mesh updates and disable all raycast/search after restore. Issue: After relaunch, placement still resolves relative to the current device pose, not the same wall position. Questions: Is there a public API in visionOS 2.0 to persist app-managed world anchors across sessions (room-fixed), e.g., AnchorStore or equivalent? If not, what's the recommended pattern to reliably restore wall-anchored content? Are the persistence features mentioned for widgets/windows available to third-party RealityKit entities?
0
0
193
Oct ’25
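Editor's note on the wall-anchor persistence question above: on visionOS the system persists world anchors added through a WorldTrackingProvider and redelivers them via anchorUpdates on later launches once the device relocalizes, so re-creating a WorldAnchor from a saved transform on every launch can work against that mechanism. A sketch of the usual store-the-UUID pattern (the WallArtRestorer type and the persistence of the dictionary are illustrative):

import ARKit
import RealityKit

@MainActor
final class WallArtRestorer {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()
    // Persist this UUID -> content mapping yourself (e.g., JSON on disk).
    var entityForAnchorID: [UUID: Entity] = [:]

    // Placement: add the anchor once; the system persists it.
    func place(_ entity: Entity, at originFromAnchor: simd_float4x4) async throws {
        let anchor = WorldAnchor(originFromAnchorTransform: originFromAnchor)
        try await worldTracking.addAnchor(anchor)
        entityForAnchorID[anchor.id] = entity
    }

    // On launch: don't re-create anchors; wait for the system to
    // redeliver the persisted ones, then re-attach content.
    func observeAnchors() async throws {
        try await session.run([worldTracking])
        for await update in worldTracking.anchorUpdates {
            guard let entity = entityForAnchorID[update.anchor.id] else { continue }
            entity.isEnabled = update.anchor.isTracked
            entity.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)
        }
    }
}

Only if a saved UUID never shows up again would falling back to fresh placement make sense.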
RealityView content disappears when selecting Lock In Place on visionOS
I'm experiencing an issue where all RealityView content disappears when the user selects "Lock In Place" from the window management menu (long press on the close button). "Follow Me" works correctly. This happens in TestFlight builds only; it is not reproducible when I run locally. I have reproduced this with a minimal project containing nothing but a simple red cube — no custom anchors, no app state, no dependencies. Steps to Reproduce: Open an ImmersiveSpace. A red cube is placed 1m in front of the user via RealityView. Long press the X button on any floating window. Select "Lock In Place". The cube disappears immediately. Expected: Cube remains visible after the window is locked. Actual: Cube disappears. Note: "Follow Me" does NOT reproduce this issue. Minimal reproducible code:

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let cube = ModelEntity(
                mesh: .generateBox(size: 0.3),
                materials: [SimpleMaterial(color: .red, isMetallic: false)]
            )
            cube.setPosition(SIMD3(0, 1.5, -1), relativeTo: nil)
            content.add(cube)
        }
    }
}

Device: Apple Vision Pro. visionOS version: 26.2 (23N301). Xcode version: 26.3 (17C529). Is this a known issue? Is there a recommended workaround to preserve RealityView content during Lock In Place transitions? Thank you!
0
0
281
2d
New Spatial Rendering App on macOS doesn't display on visionOS device
I created a new Spatial Rendering App from the template in Xcode 26.0.1. When I run the app, click 'Show Immersive Space' and select my Vision Pro from the pop-up dialog, the content in the dialog flickers (which seems to indicate something crashed) and nothing appears on my Vision Pro. I'm running the released macOS 26.0 (25A354) and visionOS 26.0 (23M336). Filed as FB20397093.
0
0
362
Sep ’25
visionOS 26 - Rendering Issues related to Transparency
Summary
After updating to visionOS 26, we've encountered severe transparency rendering issues in RealityKit that did not exist in visionOS 2.6 and earlier. These regressions affect applications that dynamically control scene opacity (via OpacityComponent). Our app renders ultra-realistic apartment environments in real time, where users can walk or teleport inside 3D spaces. When the user moves above a speed threshold, we apply a global transparency effect to prevent physical collisions with real-world objects. Everything worked perfectly in visionOS 2.6 — the problems appeared only after upgrading to 26.

Scene Setup Overview
The environment consists of multiple USDZ models (e.g., architecture, rooms, furniture). We manage LODs manually for performance (e.g., walls and floors always visible in full-res, while rooms swap between low/high-res versions based on user position and field of view). Transparency is achieved using OpacityComponent, applied dynamically when the user moves. Some meshes (e.g., portals to skyboxes, glass windows) use alpha materials. We also use OcclusionMaterials to prevent things from being seen through walls when the scene is transparent.

Observed Behavior by Scenario (I can share a video showing the results of each scenario if needed.)

Scenario 1 — Severe Flickering (Root Opacity)
Setup: OpacityComponent applied to the root entity; no ModelSortGroupComponent used.
Symptoms: strong flickering when transparency is active; triangles within the same mesh render at inconsistent opacity levels; it appears as if per-triangle alpha sorting is broken.
Workaround: moving the OpacityComponent from the root to each individual USDZ entity removes the per-triangle flicker.
Pros: no conflicts with portals or alpha materials.

Scenario 2 — Partially Stable, But Alpha Conflicts
Setup: OpacityComponent applied per USDZ entity; ModelSortGroupComponent(planarUIAlwaysBehind) applied to portal meshes; other entities have no ModelSortGroupComponent.
Symptoms: frequent alpha blending conflicts. Transparent surfaces behind other transparent surfaces flicker or disappear (example: wine glasses behind glass doors — sometimes neither is rendered, or only one). Even opaque meshes behind glass flicker due to depth buffer confusion. Alpha materials sometimes render portals or the real world behind them, ignoring other geometry entirely.
Analysis: appears related to internal changes in alpha sorting or depth pre-pass behavior introduced in visionOS 26.
Pros: most stable setup so far. Cons: still unreliable when OpacityComponent is active.

Scenario 3 — Layer Separation Attempt (Regression)
Setup: same as Scenario 2, but entities with alpha materials were moved to separate USDZs, with an explicit ModelSortGroupComponent order set (alpha surfaces rendered last).
Symptoms: transparent surfaces behind other transparent surfaces flicker or disappear; depth is completely broken when there's a large transparent surface; alpha materials sometimes render portals or the real world behind them, ignoring other geometry entirely.
Workaround attempt: re-ordering and further separating models did not solve it.
Pros: none — this setup makes transparency unusable.

Conclusion
There appears to be a regression in RealityKit's handling of transparency and sorting in visionOS 26, particularly when OpacityComponent is applied dynamically and scenes rely on multiple overlapping transparent materials. These issues did not exist prior to 26, and the same project (no code changes) behaves correctly on previous versions.
Request: We'd appreciate any insight or confirmation from Apple engineers regarding whether alpha sorting or opacity blending behavior changed in visionOS 26, whether there are new recommended practices for combining OpacityComponent with transparent materials, and whether a bug report already exists for this regression. Thanks in advance!
0
0
213
Nov ’25
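Editor's note for readers hitting the same visionOS 26 transparency issue: here is a compact sketch of the Scenario 2 mitigation described above, applying OpacityComponent per loaded USDZ entity instead of on the shared root. Per the report this reduces the per-triangle flicker; it is not a confirmed fix for the regression, and the function name is illustrative.

import RealityKit

@MainActor
func setSceneOpacity(_ opacity: Float, for usdzRoots: [Entity]) {
    for root in usdzRoots {
        if opacity >= 1.0 {
            // Removing the component when fully opaque keeps entities
            // out of the transparent render path entirely.
            root.components.remove(OpacityComponent.self)
        } else {
            root.components.set(OpacityComponent(opacity: opacity))
        }
    }
}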
Real Time Spatial Video Streaming with Vision Pro
Hello, I am trying to build an AVP app for real-time, "zero-latency" spatial video streaming, and I am trying to figure out, at a high level, the best way to do this. Currently my method is: the server sends stereo images via a WebRTC service (e.g., LiveKit); the WebRTC stream is converted to CVPixelBuffers, which are written to a file, played via AVPlayer, and displayed with a VideoMaterial on a plane entity. However, this is a bit hacky, and it seems like it won't be compatible with Apple's spatial experiences. To my understanding, Apple supports HLS streaming for spatial experiences and APMP content. However, HLS (and even Low-Latency HLS) introduces a second or more of latency, likely due to the segmented nature of HLS, so HLS will not work for us. One alternative I've considered is streaming the live video via WebRTC from the server to a local computer on the AVP's network, and then using LL-HLS to stream from the local computer to the Vision Pro. Still, it seems like this would introduce latency on the order of seconds. Is my current approach the best way to implement this? Or could anyone suggest a better way, perhaps something compatible with AVP's spatial experiences?
0
1
135
Dec ’25
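Editor's note on the streaming question above: for the display side, one lower-latency alternative to the write-to-file/AVPlayer detour is RealityKit's TextureResource.DrawableQueue, which lets each decoded frame be blitted straight into the texture a material samples. A sketch, assuming BGRA CVPixelBuffers that match the queue's size and an existing TextureResource already bound to the plane's material (LiveFramePresenter is an illustrative name, and this shows a single view, not a full stereo pipeline):

import RealityKit
import Metal
import CoreVideo

final class LiveFramePresenter {
    private let queue: TextureResource.DrawableQueue
    private let device = MTLCreateSystemDefaultDevice()!
    private let commandQueue: MTLCommandQueue
    private var textureCache: CVMetalTextureCache?

    init(width: Int, height: Int, target: TextureResource) throws {
        let descriptor = TextureResource.DrawableQueue.Descriptor(
            pixelFormat: .bgra8Unorm,
            width: width, height: height,
            usage: [.shaderRead, .renderTarget],
            mipmapsMode: .none)
        queue = try TextureResource.DrawableQueue(descriptor)
        target.replace(withDrawables: queue)   // material keeps sampling `target`
        commandQueue = device.makeCommandQueue()!
        CVMetalTextureCacheCreate(nil, nil, device, nil, &textureCache)
    }

    // Call per decoded frame; assumes BGRA and matching dimensions.
    func present(_ pixelBuffer: CVPixelBuffer) {
        guard let cache = textureCache,
              let drawable = try? queue.nextDrawable() else { return }
        var cvTexture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(
            nil, cache, pixelBuffer, nil, .bgra8Unorm,
            CVPixelBufferGetWidth(pixelBuffer),
            CVPixelBufferGetHeight(pixelBuffer), 0, &cvTexture)
        guard let cvTexture, let source = CVMetalTextureGetTexture(cvTexture),
              let commandBuffer = commandQueue.makeCommandBuffer(),
              let blit = commandBuffer.makeBlitCommandEncoder() else { return }
        blit.copy(from: source, to: drawable.texture)
        blit.endEncoding()
        commandBuffer.commit()
        commandBuffer.waitUntilCompleted()   // conservative; simplifies the sketch
        drawable.present()
    }
}

Blocking on waitUntilCompleted keeps the sketch simple; a real pipeline would present from a completion handler instead.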
Unity PolySpatial – Live handheld camera feed of graspable objects not rendering on Vision Pro
I am developing a Unity application for the Apple Vision Pro using PolySpatial and RealityKit integration. The goal is to create a graspable object (for example, a handheld cube) that includes a secondary camera. When the user grabs and moves the object, the secondary camera should render its view to a RenderTexture, which is displayed on a quad attached to the object, simulating a live camera screen. In the Unity Editor, this setup works correctly. The RenderTexture updates in real time, and the quad displays the camera's view as expected. However, when building and running the application on the Vision Pro, the quad only displays the clear background color of the secondary camera. No scene content appears. The graspable interaction itself works fine: the object can be grabbed and moved as intended. Steps I have taken: Created a new layer (CameraFeed) and assigned the relevant objects to it. Set the secondary camera's culling mask to render only the CameraFeed layer. Assigned the RenderTexture as the camera's target texture. Applied the RenderTexture to an Unlit/Texture material on a quad. Confirmed the camera is active and correctly positioned relative to the object. From my research, it appears that once objects are managed by RealityKit through PolySpatial (for example, made graspable), they are no longer rendered through Unity's normal camera pipeline. Only the main XR camera (managed by RealityKit) seems able to see these objects. Secondary Unity cameras cannot render RealityKit-synced content to a RenderTexture. If this is correct, it seems there is currently no way to implement a true live secondary camera feed showing graspable objects on Vision Pro using Unity PolySpatial. My questions are: Is there any official way to enable multiple-camera rendering of RealityKit-managed objects through PolySpatial? Are there known workarounds to simulate a live camera feed that still allows objects to be grabbed? Has anyone found alternative design patterns or methods for this kind of interaction? Environment: Unity 6.0, PolySpatial 2.2.4, Apple visionOS XR 2.2.4. Any insight or suggestions would be greatly appreciated. Thank you.
0
0
128
Apr ’25
Can't establish spatial connection after visionOS update
After updating to visionOS 26.2 Beta 2 (and Beta 3), I'm unable to establish a spatial connection to Vision Pro. This was working fine before the update. To test, I've created a fresh spatialApp project from the Xcode template with zero modifications, but I'm hitting the same issue: the Vision Pro is discovered but won't connect. Am I forgetting to update the config somewhere? Any ideas what might be causing this and how to fix it? Thanks!

Warning: -[NSWindow makeKeyWindow] called on <NSWindow: 0xa1f811900> windowNumber=1b9 which returned NO from -[NSWindow canBecomeKeyWindow].
((processConfiguration != nil && configuration != nil) || (processConfiguration == nil && configuration == nil)) - /AppleInternal/Library/BuildRoots/4~CBS0ugAIF7BrQZjLe6r0lhPXO4GJmNDTovxYoV0/Library/Caches/com.apple.xbs/Sources/ExtensionKit/ExtensionKit/Source/HostViewController/Internal/EXHostSessionDriver.m:80: `processConfiguration` and `configuration` must be both non-nil or both nil
Unable to obtain a task name port right for pid 415: (os/kern) failure (0x5)
CCContextDeviceGroup.mm(291):+[CCContextDeviceGroup checkBinaryArchivesForDevice:withBundle:]: Failed to find any binary shader archive
0
0
117
Nov ’25
Header Blur Effect on visionOS SwiftUI
Hi, I'm looking to build something similar to the header blur in the App Store and Apple TV app settings. Does anyone know the best way to achieve this? When there is nothing behind the header, it should look the same as the rest of the view background, but when content goes underneath it, it should have a blur effect. I've seen .scrollEdgeEffect on iOS 26; is there something similar for visionOS? Thanks!
0
0
141
Sep ’25
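Editor's note on the header question above: absent a visionOS .scrollEdgeEffect, one approach is to overlay a pinned header and switch its background to a material only once content has scrolled beneath it, using onScrollGeometryChange (available on visionOS 2). A sketch; the zero-offset threshold, 60-point header height, and .ultraThinMaterial choice are assumptions to tune:

import SwiftUI

struct BlurHeaderList: View {
    @State private var scrolled = false

    var body: some View {
        ScrollView {
            LazyVStack(alignment: .leading) {
                ForEach(0..<50) { i in Text("Row \(i)").padding() }
            }
            .padding(.top, 60) // reserve space under the pinned header
        }
        .onScrollGeometryChange(for: Bool.self) { geometry in
            geometry.contentOffset.y > 0   // true once content is under the header
        } action: { _, underHeader in
            withAnimation(.easeInOut(duration: 0.15)) { scrolled = underHeader }
        }
        .overlay(alignment: .top) {
            Text("Header")
                .frame(maxWidth: .infinity)
                .frame(height: 60)
                .background(scrolled ? AnyShapeStyle(.ultraThinMaterial)
                                     : AnyShapeStyle(.clear))
        }
    }
}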
360° video playback Issue
When rendering an equirectangular video on a sphere using VideoMaterial and MeshResource.generateSphere(), there is a visible black seam line running vertically on the sphere. It appears to be at the UV seam where the texture coordinates wrap from 1.0 back to 0.0. The same video file plays without any visible seam in other 360° video players on Vision Pro, so the issue is not with the video content itself. Here is the relevant code:

private func createVideoSphere(content: RealityViewContent, player: AVPlayer) {
    let sphere = MeshResource.generateSphere(radius: 1000)
    let material = VideoMaterial(avPlayer: player)
    let entity = ModelEntity(mesh: sphere, materials: [material])
    entity.scale *= .init(x: -1, y: 1, z: 1) // Flip to render on inside
    content.add(entity)
    player.play()
}

The setup is straightforward: MeshResource.generateSphere(radius: 1000) generates the sphere mesh; VideoMaterial(avPlayer:) provides the video texture; the X scale is flipped to -1 so the texture renders on the inside of the sphere; the video is a standard equirectangular 360° MP4 file. What I've tried: I attempted to create a custom sphere mesh using MeshDescriptor with duplicate vertices at the UV seam (longitude 0°/360°) to ensure proper UV continuity. However, VideoMaterial did not render any video on the custom mesh (only audio played), and the app eventually crashed. It seems VideoMaterial may have specific mesh requirements. Questions: Is the black seam a known limitation of MeshResource.generateSphere() when used with VideoMaterial for 360° video? Is there a recommended way to eliminate this UV seam — for example, a texture addressing mode or a specific mesh configuration that works with VideoMaterial? Is there an official sample project or code example for playing 360° equirectangular video in a fully immersive space on visionOS? That would be extremely helpful as a reference. Any guidance would be greatly appreciated. Thank you!
0
0
279
4w
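Editor's note on the 360° seam question above: for reference, here is the standard mesh-side construction, a UV sphere whose seam column of vertices is duplicated so texture coordinates run 0...1 without wrapping. Whether VideoMaterial accepts a MeshDescriptor-generated mesh is exactly what the post reports problems with, so treat this as the geometry half of the answer only:

import RealityKit
import Foundation

// A UV sphere with slices + 1 vertex columns: the first and last column
// share positions but carry u = 0 and u = 1, so UVs never wrap.
func makeUVSphere(radius: Float, slices: Int = 64, stacks: Int = 32) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var normals: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for stack in 0...stacks {
        let v = Float(stack) / Float(stacks)
        let phi = v * Float.pi                      // 0 at the top, pi at the bottom
        for slice in 0...slices {                   // note: slices + 1 columns
            let u = Float(slice) / Float(slices)
            let theta = u * 2 * Float.pi
            let dir = SIMD3<Float>(sinf(phi) * cosf(theta), cosf(phi), sinf(phi) * sinf(theta))
            positions.append(dir * radius)
            normals.append(-dir)                    // inward-facing for viewing from inside
            uvs.append(SIMD2<Float>(u, 1 - v))
        }
    }
    let cols = UInt32(slices + 1)
    for stack in 0..<UInt32(stacks) {
        for slice in 0..<UInt32(slices) {
            let a = stack * cols + slice
            let b = a + cols
            // Reverse each triangle's index order if faces render on the wrong side.
            indices += [a, a + 1, b, a + 1, b + 1, b]
        }
    }
    var descriptor = MeshDescriptor(name: "uvSphere")
    descriptor.positions = MeshBuffer(positions)
    descriptor.normals = MeshBuffer(normals)
    descriptor.textureCoordinates = MeshBuffer(uvs)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}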
Odd image placeholder appearing when dismissing an ImmersiveSpace with an ImagePresentationComponent
Hello, there are odd artifacts (one looks like an image placeholder) appearing when dismissing an immersive space that is displaying an ImagePresentationComponent. Both artifacts look like widgets. See below our simple code displaying the ImagePresentationComponent and the images of the odd artifacts that appear briefly when dismissing the immersive space.

import OSLog
import RealityKit
import SwiftUI

struct ImmersiveImageView: View {
    let logger = Logger(subsystem: AppConstant.SUBSYSTEM, category: "ImmersiveImageView")

    @Environment(AppModel.self) private var appModel

    var body: some View {
        RealityView { content in
            if let currentMedia = appModel.currentMedia,
               var imagePresentationComponent = currentMedia.imagePresentationComponent {
                let imagePresentationComponentEntity = Entity()
                switch currentMedia.type {
                case .iphoneSpatialMovie:
                    logger.info("\(#function) \(#line) spatial3DImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatial3DImmersive
                case .twoD:
                    logger.info("\(#function) \(#line) spatial3DImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatial3DImmersive
                case .visionProConvertedSpatialPhoto:
                    logger.info("\(#function) \(#line) spatialStereoImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatialStereoImmersive
                default:
                    logger.error("\(#function) \(#line) Unsupported media type \(currentMedia.type)")
                    assertionFailure("Unsupported media type \(currentMedia.type)")
                }
                imagePresentationComponentEntity.components.set(imagePresentationComponent)
                imagePresentationComponentEntity.position = AppConstant.Position.spacialImagePosition
                content.add(imagePresentationComponentEntity)
            }

            let toggleViewAttachmentComponent = ViewAttachmentComponent(rootView: ToggleImmersiveSpaceButton())
            let toggleViewAttachmentComponentEntity = Entity(components: toggleViewAttachmentComponent)
            toggleViewAttachmentComponentEntity.position = SIMD3<Float>(
                AppConstant.Position.spacialImagePosition.x + 1,
                AppConstant.Position.spacialImagePosition.y,
                AppConstant.Position.spacialImagePosition.z
            )
            toggleViewAttachmentComponentEntity.scale = AppConstant.Scale.attachments
            content.add(toggleViewAttachmentComponentEntity)
        }
    }
}
0
1
219
Jul ’25
Position and orientation of a window in an immersive space
Is it possible to retrieve the position and orientation of a window that is opened in an immersive space? The following code:

struct MyWindow: View {
    var body: some View {
        VStack {
            Text("Hello")
        }
        .onGeometryChange3D(for: Point3D.self) { proxy in
            try! proxy
                .coordinateSpace3D()
                .convert(value: Point3D.zero, to: .worldReference)
        } action: { point in
            print(point)
        }
    }
}

seems to work for the position, but I also need the orientation.
0
0
789
1w
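Editor's note on the window pose question above: Point3D only carries position, but the Spatial framework's Pose3D carries position plus rotation, so the natural thing to try is the same conversion with Pose3D. This is an untested sketch; it assumes convert(value:to:) accepts a Pose3D the way it accepts a Point3D, and if it does not, GeometryProxy3D's transform(in:) returning an AffineTransform3D would be the fallback to decompose.

import SwiftUI
import Spatial

struct MyWindow: View {
    var body: some View {
        VStack {
            Text("Hello")
        }
        .onGeometryChange3D(for: Pose3D.self) { proxy in
            // Assumption: the convert overload accepts Pose3D.
            (try? proxy
                .coordinateSpace3D()
                .convert(value: Pose3D.identity, to: .worldReference)) ?? .identity
        } action: { pose in
            // pose.position is the window origin, pose.rotation its orientation.
            print(pose.position, pose.rotation)
        }
    }
}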
Custom Material half visible?
I'm currently implementing 180°/360° immersive video for my app. I implemented 360° easily by applying a VideoMaterial to a flipped sphere, but I'm stuck at 180°. I'm trying to implement it by applying the VideoMaterial to a hemisphere (half sphere): I want the VideoMaterial to be visible on the front half of the sphere and the back half to be transparent/clear. Is there any advice, information, or ideas on how to implement this? Your help would be greatly appreciated.
0
0
138
Oct ’25
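Editor's note on the 180° question above: rather than making half of a full sphere transparent, it may be simpler to build a hemisphere so there is no back half at all. This is the same MeshDescriptor construction as a UV sphere, but with the longitude sweep limited to half a turn; a sketch (winding and UV orientation may need flipping for a given source):

import RealityKit
import Foundation

func makeHemisphere(radius: Float, slices: Int = 64, stacks: Int = 32) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var normals: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []
    for stack in 0...stacks {
        let v = Float(stack) / Float(stacks)
        let phi = v * Float.pi
        for slice in 0...slices {
            let u = Float(slice) / Float(slices)
            let theta = Float.pi + u * Float.pi   // half a turn: covers z < 0, in front of the viewer
            let dir = SIMD3<Float>(sinf(phi) * cosf(theta), cosf(phi), sinf(phi) * sinf(theta))
            positions.append(dir * radius)
            normals.append(-dir)                  // inward-facing
            uvs.append(SIMD2<Float>(u, 1 - v))    // u spans 0...1 over the half sphere
        }
    }
    let cols = UInt32(slices + 1)
    for stack in 0..<UInt32(stacks) {
        for slice in 0..<UInt32(slices) {
            let a = stack * cols + slice
            let b = a + cols
            // Reverse each triangle's index order if faces render on the wrong side.
            indices += [a, a + 1, b, a + 1, b + 1, b]
        }
    }
    var descriptor = MeshDescriptor(name: "hemisphere")
    descriptor.positions = MeshBuffer(positions)
    descriptor.normals = MeshBuffer(normals)
    descriptor.textureCoordinates = MeshBuffer(uvs)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}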
How to give spatial photo a custom corner radius?
Spatial photos in RealityView have a default corner radius. I made a parallax effect with spatial photos in a ScrollView (like Spatial Gallery), but the corner radius disappeared on the left and right spatial photos. I've tried the .clipShape and .mask modifiers, but they didn't work. How can I clip or mask a spatial photo with a corner-radius effect?
0
1
482
Aug ’25