Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic
Work with Reality Composer Pro content in Xcode
Is there a complete sample source project for the instructional video "Work with Reality Composer Pro content in Xcode"? I'd like to study it.
Replies: 0 · Boosts: 0 · Views: 76 · Activity: 10h
VirtualEnvironmentProbeComponent VS ImageBasedLightComponent
Hi. What is the difference between VirtualEnvironmentProbeComponent and ImageBasedLightComponent? It seems they can both achieve the same environment lighting and reflection effects.
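For context, a minimal sketch of the ImageBasedLightComponent path, assuming a bundled EnvironmentResource named "Skybox" (a hypothetical asset name). An IBL source/receiver pair is scoped to specific entities, whereas VirtualEnvironmentProbeComponent describes probes placed at locations in space, so the two can look similar while differing in scope.

import RealityKit

// Minimal image-based-light sketch: light one entity with an environment
// texture. "Skybox" is a hypothetical EnvironmentResource in the bundle.
func applyImageBasedLight(to entity: Entity) async throws {
    let environment = try await EnvironmentResource(named: "Skybox")
    // The source component defines the light...
    entity.components.set(ImageBasedLightComponent(source: .single(environment)))
    // ...and the receiver component opts this entity in to that light.
    entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))
}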
Replies: 0 · Boosts: 0 · Views: 254 · Activity: 2d
`ARCamera.unprojectPoint` and `ARCamera.TrackingState` behavior changes between iOS 26.3 and 26.4 under AR resource pressure
ARCamera.TrackingState questions:
- Did the threshold or sensitivity for transitioning ARCamera.TrackingState from .normal to .limited(.excessiveMotion) or .limited(.insufficientFeatures) change between iOS 26.3 and iOS 26.4?
- What does "ARWorldTrackingTechnique: resource constraints [33]" mean, and is it new in iOS 26.4? Does it correspond to a tracking state degradation?
- Is there a way for the client to detect or respond to ARKit entering a resource-constrained mode short of the full tracking state transition — for example, a lower-level notification or a flag on ARFrame — so that apps can take protective action without interpreting it as a full tracking failure?

ARCamera.unprojectPoint questions:
- Did the behavior of ARCamera.unprojectPoint(_:ontoPlane:orientation:viewportSize:) change between iOS 26.3 and iOS 26.4 for near-parallel geometry? Specifically, on iOS 26.3 this method returns nil when the camera ray is nearly parallel to the target plane (the denominator of the ray-plane intersection approaches 0 at ~90° of camera rotation). On iOS 26.4, with identical code and environment, it returns a large finite value instead — we observed z = −12.27 m. Since the method's optional return type implies nil is the documented signal for no valid intersection, this reads as a behavioral regression rather than an intentional change.
- If returning the computed value for near-parallel geometry is now the intended behavior, is there a recommended way for the caller to guard against it? For example, should we check abs(dot(rayDirection, planeNormal)) against a threshold before calling, and if so, is there a documented epsilon Apple uses internally?
- Alternatively, is there a newer API we should prefer over unprojectPoint(_:ontoPlane:) for this use case that handles degenerate geometry more gracefully — such as ARSession.raycast(_:)?

Are there any other ARKit API adjustments between iOS 26.3 and 26.4? We are using the same codebase, but it now behaves differently between these two OS versions. Thanks!
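A minimal sketch of the caller-side guard described above, assuming the camera's view direction is an acceptable proxy for the unprojection ray; the epsilon is illustrative, not a documented ARKit value.

import ARKit
import UIKit

// Reject near-parallel ray/plane configurations before trusting the
// unprojected point. Approximates the unprojection ray with the camera's
// forward vector, which is reasonable for points near the view center.
func guardedUnproject(camera: ARCamera,
                      point: CGPoint,
                      planeTransform: simd_float4x4,
                      orientation: UIInterfaceOrientation,
                      viewportSize: CGSize,
                      epsilon: Float = 0.01) -> simd_float3? {
    // ARKit planes use +Y in local space as their normal.
    let planeNormal = simd_normalize(simd_make_float3(planeTransform.columns.1))
    // The camera looks down its local -Z axis.
    let rayDirection = -simd_normalize(simd_make_float3(camera.transform.columns.2))
    // Near-parallel: the intersection is numerically unstable, so bail out.
    guard abs(simd_dot(rayDirection, planeNormal)) > epsilon else { return nil }
    return camera.unprojectPoint(point,
                                 ontoPlane: planeTransform,
                                 orientation: orientation,
                                 viewportSize: viewportSize)
}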
Replies: 0 · Boosts: 0 · Views: 223 · Activity: 2d
AVP Developer Strap
I'm trying to find where to buy the Vision Pro Developer Strap Gen 2. I've looked all around this site and cannot find it. Help
Replies: 0 · Boosts: 0 · Views: 374 · Activity: 3d
How do I dismiss a presented sheet?
I'm developing an app requiring data entry across several devices. My SwiftUI app runs on iOS and iPadOS but I also want to run it on visionOS. I'm using the visionOS simulator. When I enter data in one of my views I use a Form within a .sheet and this works perfectly well on iOS and iPadOS and I can dismiss the sheet by simply tapping the view behind the sheet. On visionOS I click my + button, the sheet appears, I enter the data as usual but after that there is no gesture in the app I can perform with keyboard or mouse that will make the sheet disappear! Do I have to add a "Close" button for visionOS or is there a way to enable the same interaction that works on iPadOS?
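An explicit Close button is the usual pattern here; a minimal sketch using the dismiss environment action, which works on iOS, iPadOS, and visionOS alike (EntryForm and its field are hypothetical stand-ins for the real form).

import SwiftUI

// Hypothetical sheet content: a Form with an explicit Close button so the
// sheet can be dismissed on visionOS, where there is no tap-behind gesture.
struct EntryForm: View {
    @Environment(\.dismiss) private var dismiss
    @State private var name = ""

    var body: some View {
        NavigationStack {
            Form {
                TextField("Name", text: $name)
            }
            .navigationTitle("New Entry")
            .toolbar {
                ToolbarItem(placement: .confirmationAction) {
                    // Explicit dismissal replaces the tap-behind gesture.
                    Button("Close") { dismiss() }
                }
            }
        }
    }
}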
Replies: 0 · Boosts: 0 · Views: 350 · Activity: 5d
RealityView Camera Target Error when set while Orbiting
When interacting with RealityView’s realityViewCameraControls .orbit and setting a new RealityViewCameraContent .cameraTarget, the resulting camera target and camera orbit are incorrect. This can be demonstrated when one finger is orbiting the RealityView and another pushes a button which changes the camera target. Instead of the camera facing the new target, some other point in the scene becomes the effective camera target and orbit point. This only occurs while an orbit interaction is taking place. If you stop interacting with the orbit, change the target, then start orbit interacting again, everything works as expected. Though this example uses two touches, any change of the camera target has this conflict with orbit interaction. This means interacting with orbit will result in the wrong camera view, which is unexpected for users and difficult to reconcile or detect for developers.

Expected: while orbiting the scene and setting a new camera target with the on-screen buttons (at the same time), the camera’s new target shows centred in view, and the orbit revolves around the new target and continues to match my gestures.

Reality: while orbiting the scene and setting a new camera target with the on-screen buttons (at the same time), the camera’s new target is not centred in view, and the camera is now orbiting an unexpected point in the scene that is not my expected target.

One imperfect workaround is to force a rebuild of the view after setting a new cameraTarget. This sets all targets correctly but results in a flicker and loss of orbit controls until re-touch, and is ultimately a poor user experience, but it is better than the wrong target being shown unexpectedly.

Code sample:

import SwiftUI
import RealityKit

struct RKOribtTarget: View {
    @State private var target: Int = 0
    @State private var rcContent: RealityViewCameraContent?
    @State private var rkID: UUID = UUID()

    let root = Entity()
    let center = ModelEntity(mesh: .generateSphere(radius: 0.05),
                             materials: [UnlitMaterial(color: UIColor(.gray.opacity(0.5)))])
    let red = ModelEntity(mesh: .generateBox(size: 0.1),
                          materials: [SimpleMaterial(color: .red, isMetallic: false)])
    let blue = ModelEntity(mesh: .generateBox(size: 0.1),
                           materials: [SimpleMaterial(color: .blue, isMetallic: false)])
    let green = ModelEntity(mesh: .generateBox(size: 0.1),
                            materials: [SimpleMaterial(color: .green, isMetallic: false)])

    var body: some View {
        VStack {
            RealityView { content in
                red.position.x = 0.5
                blue.position.z = 0.5
                green.position.y = 0.5
                center.position = .init(repeating: 0.25)
                content.cameraTarget = target == 0 ? root : blue
                root.addChild(red)
                root.addChild(blue)
                root.addChild(green)
                root.addChild(center)
                content.add(root)
            } update: { content in
                switch target {
                case 0: content.cameraTarget = root
                case 1: content.cameraTarget = blue
                case 2: content.cameraTarget = red
                case 3: content.cameraTarget = green
                default: content.cameraTarget = root
                }
            }
            .id(rkID)
            .realityViewCameraControls(.orbit)

            VStack {
                Text("Target")
                Button("Default") {
                    target = 0
                    // Force rebuilding the view resets the orbit target and rotation,
                    // but shows a flicker and interaction requires a touch reset.
                    // Not an ideal workaround.
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                Button("Blue") {
                    target = 1
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                .tint(.blue)
                Button("Red") {
                    target = 2
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                .tint(.red)
                Button("Green") {
                    target = 3
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                .tint(.green)
            }
        }
    }
}

Xcode Version: 26.0 (17A324)
iOS Version: iOS 26.5 (23F75)
Tested on devices: iPhone 12 Pro, iPhone 15 Pro
Replies: 2 · Boosts: 0 · Views: 454 · Activity: 6d
Spatial Audio: <<<< FigAudioSession(AV) >>>> signalled err=-19224 at <>:612
Trying to play Spatial Audio on my Vision Pro (visionOS 26.4, Xcode 26.4.1). Every attempt gives me the following error:

<<<< FigAudioSession(AV) >>>> signalled err=-19224 at <>:612

I have tried the sample code at https://developer.apple.com/documentation/visionos/playing-spatial-audio-in-visionos and it gives the same error.
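For reference, a minimal sketch of the RealityKit spatial-audio path being attempted, assuming a bundled audio file named "Ambience.wav" (a hypothetical asset).

import RealityKit

// Minimal spatial-audio sketch: attach a spatial source to an entity
// and play a bundled file. "Ambience.wav" is a hypothetical asset name.
func playAmbience(on entity: Entity) throws {
    let resource = try AudioFileResource.load(named: "Ambience.wav")
    entity.components.set(SpatialAudioComponent(gain: -6))  // gain in dB
    entity.playAudio(resource)
}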
Replies: 2 · Boosts: 0 · Views: 767 · Activity: 1w
WWDC25 Houdini VR Optimisation Toolkit Texture Baking
The texture baking section of the WWDC25 session "Optimize your custom environments for visionOS" (https://youtu.be/RELnRZmb02c?t=1485) moves very quickly and leaves a lot unexplained. Has anyone worked through this part of the toolkit in practice and can speak to what's actually going on, particularly around projection baking and how it addresses the reprojection artifacts the presenter briefly mentions? Thank you!
Replies: 0 · Boosts: 0 · Views: 774 · Activity: 1w
visionOS: AVFoundation cannot deliver simultaneous video from two external (UVC) cameras; no public USB fallback exists
Area: visionOS 26.4 · AVFoundation · AVCapture · External/UVC video
Classification: Suggestion / API Enhancement Request (also: Incorrect/Missing Documentation)
Device / OS: Apple Vision Pro, visionOS 26.x, Xcode 26.4.1, XROS26.4.sdk

Summary:
On visionOS, a third-party app cannot display two UVC USB cameras (connected through a powered USB-C hub) at the same time. Every AVFoundation path that would enable this on iPadOS is either unavailable or fails at runtime on visionOS, and there is no public non-AVFoundation fallback (no IOUSBHost, no DriverKit, no usable CoreMediaIO, no MFi path for generic UVC devices). This is a real capability gap relative to iPadOS and macOS, and Camo Studio on iPadOS (App Store ID 6450313385) demonstrates that the two-camera USB-hub use case is legitimate and valuable for spatial-video/hybrid-capture workflows on Vision Pro.

Steps to reproduce:
1. Connect a powered USB-C hub to Apple Vision Pro with two UVC webcams attached.
2. Build a visionOS app that uses AVCaptureDevice.DiscoverySession(deviceTypes: [.external], …).
3. Observe: both cameras are discovered and enumerate as distinct AVCaptureDevices.

Attempt A — two independent sessions (a code sketch follows this report): create two independent AVCaptureSessions, each with one AVCaptureDeviceInput and one AVCaptureVideoDataOutput, and start both. Result: only one session delivers sample buffers. The other stalls silently with no error and no interruption notification.

Attempt B — AVCaptureMultiCamSession with manual connections (the pattern that works on iPadOS 18+): the code does not compile. In XROS26.4.sdk:
- AVCaptureInputPort is API_UNAVAILABLE(visionos) (AVCaptureInput.h)
- AVCaptureInput.ports is API_UNAVAILABLE(visionos)
- AVCaptureDeviceInput.portsWithMediaType:sourceDeviceType:sourceDevicePosition: is API_UNAVAILABLE(macos, visionos)
Therefore AVCaptureConnection(inputPorts:output:) cannot be constructed. AVCaptureMultiCamSession itself is declared API_AVAILABLE(… visionos(2.1)), which is misleading because without input-port access the manual-connection path the class requires is unreachable.

Expected behavior — any of the following would resolve this, in order of preference:
1. Expose the missing API surface on visionOS. Make AVCaptureInputPort, AVCaptureInput.ports, and AVCaptureDeviceInput.portsWithMediaType:sourceDeviceType:sourceDevicePosition: available on visionOS so the documented iPadOS multi-cam pattern compiles and runs. AVCaptureMultiCamSession is already declared available — the supporting API surface should match.
2. Allow two concurrent plain AVCaptureSessions to each own a distinct external AVCaptureDevice. Each session binds a different hardware device, and the current serialization appears to be a software policy rather than a hardware constraint (a powered hub has bandwidth for both).
3. Document the limit explicitly and surface a clear error or interruption reason on the stalled session so apps can fail loudly instead of appearing to work.

Actual behavior:
AVCaptureMultiCamSession advertises visionos(2.1) availability, but the APIs required to wire its connections are marked unavailable on visionOS. Two concurrent AVCaptureSessions silently deliver frames to only one session; no error is reported on the other.

There is no public alternative framework on visionOS for raw UVC access to work around this:
- IOUSBHost.framework — not present in XROS26.4.sdk
- DriverKit — not present in XROS26.4.sdk
- IOKit — ships a stub (IOKit.tbd); no public USB device interfaces
- CoreMediaIO — headers are an apinotes stub on visionOS
- ExternalAccessory — MFi-only; generic UVC devices don't enumerate
This means there is no public path, AVFoundation or otherwise, for a third-party visionOS app to display two UVC cameras at once.

Impact / use cases:
Apple Vision Pro is uniquely suited to multi-camera monitoring and capture workflows — spatial creators, broadcast/AV producers, multi-angle reference during immersive authoring, clinical and field-recording use cases, and apps that combine a primary UVC cinema camera with a secondary UVC reference/overview angle. iPadOS already supports this via AVCaptureMultiCamSession (demonstrated shipping by Camo Studio). The current visionOS limitation pushes these workflows back to iPad or macOS and undermines Vision Pro's positioning as a pro capture/monitor environment.

References:
- iPadOS reference implementation: Apple sample "Displaying Video From Connected Devices" + AVCaptureMultiCamSession with manual AVCaptureConnection wiring — works on iPadOS 18+ with two UVC cameras via a powered hub.
- Shipping precedent: Camo Studio — two simultaneous UVC cameras via a USB hub on iPad — https://apps.apple.com/us/app/camo-studio-stream-record/id6450313385
- visionOS 26.4 SDK headers cited above (AVCaptureInput.h, AVCaptureSession.h)
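A minimal sketch of Attempt A, assuming two external devices are already attached; on iPadOS both sessions deliver frames, while on visionOS (per the report above) only one does.

import AVFoundation

// Attempt A: one plain AVCaptureSession per external UVC device.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.external],
    mediaType: .video,
    position: .unspecified
)

var sessions: [AVCaptureSession] = []
for device in discovery.devices.prefix(2) {
    let session = AVCaptureSession()
    guard let input = try? AVCaptureDeviceInput(device: device),
          session.canAddInput(input) else { continue }
    session.addInput(input)

    let output = AVCaptureVideoDataOutput()
    // A real app would set a sampleBufferDelegate here to receive frames.
    if session.canAddOutput(output) { session.addOutput(output) }

    session.startRunning()
    sessions.append(session)
}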
Replies: 1 · Boosts: 0 · Views: 1.2k · Activity: 3w
LowLevelInstanceData & animation
The OS 26 releases introduce LowLevelInstanceData, which can reduce CPU draw-call cost significantly through instancing. However, I have run into trouble animating each individual instance. Since I wanted low-level control, I'm using a custom System and LowLevelInstanceData.replace(using:) to update the transforms each frame. The update closure itself is extremely efficient (Xcode Instruments reports nearly no cost), but I am seeing extremely high run-loop time, reaching around 20 ms. Time Profiler shows the CPU blocked in kernel.release.t6401. I suspect this is caused by synchronization between the CPU and GPU; however, since I am already using an MTLCommandBuffer to coordinate it, I don't understand why I am still seeing such large CPU time.
Replies: 3 · Boosts: 0 · Views: 733 · Activity: 3w
How to renew visionOS Enterprise API entitlement / license?
Hi everyone, I’m currently using the visionOS Enterprise APIs, and I noticed the official documentation mentions that the license needs to be renewed. However, I couldn’t find any clear instructions on how to actually renew it. What is the correct process to renew a visionOS Enterprise API license? Do I need to submit a new entitlement request to renew? Is there any official step-by-step guide or documentation for renewal? Any advice or shared experience would be greatly appreciated 🙏 Thank you!
Replies: 1 · Boosts: 0 · Views: 1.3k · Activity: Mar ’26
Xcode fails to compile Blender-exported USDZ in .rkassets with misleading "permission" error — Xcode 26.3
The error: when building a RealityKitContent package that contains a USDZ file exported from Blender, Xcode throws the following error:

error: [xrsimulator] Exception thrown during compile: Cannot get rkassets content for path .../RealityKitContent.rkassets because 'The file "RealityKitContent.rkassets" couldn't be opened because you don't have permission to view it.'
error: Tool exited with code 1

The error message mentions "permission" — but permissions are not the issue. This appears to be a misleading error from realitytool masking a USD validation failure.

What I've ruled out:
- File permissions — all files are -rw-r--r--, and the user has Read & Write on the folder.
- Extended attributes / quarantine flag — other files with the same @ flag work fine.
- Corrupted archive — unzip -t confirms the USDZ is valid (board.usdc + textures).
- Stale build cache — deleted DerivedData and the com.apple.DeveloperTools cache; no change.

Key observations:
- The same file builds successfully on my colleague's machine running identical Xcode 26.3 / macOS 26.3.
- Other USDZ files in the same .rkassets bundle (downloaded from Sketchfab, or created in Reality Composer Pro) compile without any issue. Only USDZ files exported directly from Blender are affected.
- When the file is placed in Bundle.main and loaded via Entity(named:in:), it works perfectly — no errors.
- Reality Converter flags the file with two errors: UsdGeomPointInstancers not allowed, and the root layer must be .usdc with no external dependencies.

The confusing part: the same file compiles fine on an identical Xcode 26.3 setup with the same importing method. This suggests either a machine-specific difference in Xcode's validation behavior, or a cached .reality bundle on my colleague's machine that isn't being recompiled.

Current workaround: loading from Bundle.main instead of the RealityKitContent package bypasses realitytool entirely and works, but loses Reality Composer Pro integration:

if let entity = try? await Entity(named: "test", in: Bundle.main)
Replies: 1 · Boosts: 0 · Views: 1.3k · Activity: Mar ’26
Immersive API
After updating to visionOS 26.4, I went to Safari's Feature Flags page to reenable Website environment, and I found the Website environment switch had been moved from the top and into the list of switches below. In its place is "Immersive API". I could not find any documentation on this. Anyone know what it is for or can point me to documentation?
Replies: 1 · Boosts: 0 · Views: 352 · Activity: Mar ’26
Website environment disappears suddenly
After I updated to visionOS 26.4, I noticed my website environment would suddenly turn off occasionally while I was watching YouTube in Safari. My M2 AVP was still warm after the update. Is turning off a website environment expected behavior when the headset gets warm (e.g., perhaps to reduce load)? If not, does anyone have an idea why this might happen?
Replies: 1 · Boosts: 0 · Views: 482 · Activity: Mar ’26
Official visionOS sample "Creating an interactive 3D model in visionOS" fails to restore/show the car model when relaunched from Home
Environment:
- Device: Apple Vision Pro
- visionOS: 26.3.1
- Xcode: 26.2 (17C52)
- Sample: Creating an interactive 3D model in visionOS

Repro steps:
1. Build and run the official sample from Xcode.
2. Confirm the car model displays correctly.
3. Quit the app.
4. Relaunch the app from Home.
5. Observe that the official car model no longer appears / fails to restore correctly.

Expected: the official car model should display normally after relaunching from Home.
Actual: the sample works when launched from Xcode, but fails when relaunched from Home.
Replies: 1 · Boosts: 0 · Views: 98 · Activity: Mar ’26
Real world anchors
I’m trying to build a persistent world map of my college campus using ARKit, but it’s not very reliable. Anchors don’t consistently appear in the same place across sessions. I’ve tried using image anchors, but they didn’t improve accuracy much. How can I create a stable world map for a larger area and reliably relocalize anchors? Are there better approaches or recommended resources for this?
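A minimal sketch of the ARWorldMap save/restore approach commonly used for cross-session anchor persistence; the file name and URL are hypothetical, and for campus-scale areas several smaller per-region maps tend to relocalize more reliably than one large map.

import ARKit

// Hypothetical on-disk location for the serialized map.
let mapURL = FileManager.default.urls(for: .documentDirectory,
                                      in: .userDomainMask)[0]
    .appendingPathComponent("campus.worldmap")

func saveWorldMap(from session: ARSession) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let worldMap else { return }
        // The archive includes every ARAnchor added to the session.
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: worldMap,
                                                        requiringSecureCoding: true) {
            try? data.write(to: mapURL)
        }
    }
}

func restoreWorldMap(into session: ARSession) {
    guard let data = try? Data(contentsOf: mapURL),
          let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self,
                                                                 from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    // Relocalization succeeds only once the camera sees a region that was
    // well mapped in the original session.
    configuration.initialWorldMap = worldMap
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}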
Replies: 1 · Boosts: 0 · Views: 1.1k · Activity: Mar ’26
How do you collect eye gaze data from vision pro
Hello, I know that Apple prevents apps from accessing raw gaze data such as the gaze vector (x, y, z) or eye position. But for those of you doing research on gaze data, how did you collect it from Vision Pro? Is there any app that solves this problem?
Replies: 0 · Boosts: 0 · Views: 144 · Activity: Mar ’26
ARSession Error: Required sensor failed
Hi everyone, I’m currently using the RoomPlan API, which has been working reliably until recently. However, I’ve started encountering an intermittent error and I’m trying to understand what might be causing it. The error is triggered in the ARSession observer method session(_ session: ARSession, didFailWithError error: Error). It has occurred on at least two devices: iPhone 14 Pro and iPhone 17 Pro. Here’s the full error message:

ARSession failed domain=com.apple.arkit.error code=102 desc=Required sensor failed. userInfo=["NSLocalizedFailureReason": A sensor failed to deliver the required input., "NSUnderlyingError": Error Domain=AVFoundationErrorDomain Code=-11819 "Cannot Complete Action" UserInfo={NSLocalizedDescription=Cannot Complete Action, NSLocalizedRecoverySuggestion=Try again later.}, "NSLocalizedDescription": Required sensor failed.]

This seems to indicate that a required sensor (likely LiDAR or camera) failed to provide input, but I’m not sure what’s causing it or why it happens only occasionally. Has anyone experienced something similar or has insight into possible causes or fixes? Thanks in advance!
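A sketch of defensively detecting this case (ARError.Code.sensorFailed, code 102) in the delegate, assuming a retry is acceptable for the app; restartCapture() is a hypothetical hook.

import ARKit

// Detect the "Required sensor failed" error in the ARSessionObserver
// callback and schedule a retry, since the underlying AVFoundation error
// (-11819, "Try again later") suggests a transient condition.
final class CaptureDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didFailWithError error: Error) {
        guard let arError = error as? ARError, arError.code == .sensorFailed else { return }
        DispatchQueue.main.asyncAfter(deadline: .now() + 2) { [weak self] in
            self?.restartCapture()
        }
    }

    private func restartCapture() {
        // Hypothetical: stop the RoomCaptureSession and start a new one.
    }
}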
Replies: 0 · Boosts: 0 · Views: 355 · Activity: Mar ’26
RoomCaptureView runs on the latest system with a serious bug causing errors
I am using RoomCaptureView for house scanning and modeling. On the latest iOS 26, I encountered the following issues:
- The program runs well below iOS 26, but on iOS 26 and above the probability of scene-localization failure becomes abnormally high, and accurate indoor localization cannot be obtained. The failure rate when using RoomBuilder to merge models is also high.
- After compiling the program with Xcode 26 or above and running it on iOS 26, a consistently reproducible bug appears: RoomCaptureView is completely unable to run. The error message is {RoomCaptureSession.CaptureError 'Internal error'}, and the camera interface of RoomCaptureView turns into a garbled screen. The following debug assertion also fires:

-[MTLDebugRenderCommandEncoder validateCommonDrawErrors:]:5970: failed assertion `Draw Errors Validation Fragment Function(realitykit::fsSurfaceShadow): incorrect type of texture (MTLTextureType2D) bound at Texture binding at index 14 (expect MTLTextureType1D) for tonemapLUT[0].'

Programs compiled with Xcode versions below 26 and run on iOS 26 do not show this issue.
Replies: 1 · Boosts: 0 · Views: 370 · Activity: Mar ’26
RealityView content disappears when selecting Lock In Place on visionOS
Hi, I'm experiencing an issue where all RealityView content disappears when the user selects "Lock In Place" from the window management menu (long press on the close button). "Follow Me" works correctly. This happens in TestFlight builds only; it is not reproducible when I run locally. I have reproduced this with a minimal project containing nothing but a simple red cube — no custom anchors, no app state, no dependencies.

Steps to reproduce:
1. Open an ImmersiveSpace. A red cube is placed 1 m in front of the user via RealityView.
2. Long press the X button on any floating window.
3. Select "Lock In Place".
4. The cube disappears immediately.

Expected: cube remains visible after the window is locked.
Actual: cube disappears.

Minimal reproducible code:

var body: some View {
    RealityView { content in
        let cube = ModelEntity(
            mesh: .generateBox(size: 0.3),
            materials: [SimpleMaterial(color: .red, isMetallic: false)]
        )
        cube.setPosition(SIMD3<Float>(0, 1.5, -1), relativeTo: nil)
        content.add(cube)
    }
}

Device: Apple Vision Pro
visionOS version: 26.2 (23N301)
Xcode version: 26.3 (17C529)

Is this a known issue? Is there a recommended workaround to preserve RealityView content during Lock In Place transitions? Thank you!
Replies: 5 · Boosts: 0 · Views: 1.4k · Activity: Mar ’26