I've been struggling with this for far too long so I've decided to finally come here and see if anyone can point me to the documentation that I'm missing. I'm sure it's something so simple but I just can't figure it out.
I can SharePlay our test app with my brother (device to device), but when I open a volumetric window, it says "not shared" under it. I assume fixing this will likely fix the video-sharing problem we have as well. Everything else works so smoothly, but SharePlay has just been such a struggle for me. It's the last piece of the puzzle before we can put it on the App Store.
After updating to the latest visionOS beta, visionOS 26 Beta 4 (23M5300g), the ‘Presenting images in RealityKit’ sample from the following link no longer runs on device due to an error. https://developer.apple.com/documentation/RealityKit/presenting-images-in-realitykit
Expected / Previous:
Application builds and runs on device, working as described in the documentation.
Reality:
Application builds, but does not run on device due to an error (shown in screenshot): “Thread 1: EXC_BAD_ACCESS (code=1, address=0xb)”. The application still runs in the simulator, but not on device. When launching the app from Xcode, it builds and installs correctly but stops on that error. When launching the app from the Home Screen, the app does not load and immediately returns to the Home Screen.
This Xcode project previously ran with no changes to code; the only change was updating the visionOS system software to the latest version, visionOS 26 Beta 4 (23M5300g).
Is anyone else experiencing this issue?
I am building a 360° photo viewer on visionOS 26. It lets the user choose a 2:1 JPEG and renders it on a sphere mesh entity, loading the texture with TextureResource(contentsOf: url, options: options).
I noticed two different behaviors depending on the mipmaps option.
When setting "mipmapsMode: .none":
The graphic quality within the "gaze area" looks sharp and clear
The two poles (top and bottom) are perfectly rendered
Massive shimmer around the "gaze area"
When setting "mipmapsMode: .allocateAndGenerateAll":
The graphic looks slightly blurrier than in ".none" within the "gaze area"
The two poles are very blurry and hard to recognize the texture
Much less shimmer around the "gaze area"
My question would be: Is there a way to have the perfect graphic quality in ".none" without the massive shimmer?
Thank you!
Screenshots:
mipmapsMode: .none
mipmapsMode: .allocateAndGenerateAll
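For reference, here is a minimal sketch of how I'm loading the texture in the two modes (url is the chosen 2:1 JPEG); the tradeoff described above comes directly from this flag:

import RealityKit

func loadPanoramaTexture(from url: URL, useMipmaps: Bool) async throws -> TextureResource {
    // With .none the full-resolution level is always sampled (sharp but shimmering);
    // with .allocateAndGenerateAll lower mip levels are blended in (softer but stable).
    var options = TextureResource.CreateOptions(semantic: .color)
    options.mipmapsMode = useMipmaps ? .allocateAndGenerateAll : .none
    return try await TextureResource(contentsOf: url, options: options)
}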
How do I configure a Unity project for a fully immersive VR app on Apple Vision Pro using Metal Rendering, and add a simple pinch-to-teleport-where-looking feature? I've tried the available samples and docs, but they don't cover this clearly (to me).
So far, I've reviewed Unity XR docs, Apple dev guides, and tutorials, but most emphasize spatial apps. Metal examples exist but don't include teleportation. Specifically:
visionOS sample "XRI_SimpleRig" – Deploys to device/simulator, but no full immersion or teleport.
XRI Toolkit sample "XR Origin Hands (XR Rig)" – Pinch gestures detect, but not linked to movement.
visionOS "XR Plugin" sample "Metal Sample URP" – Metal setup works, but static scene without locomotion.
I'm new to Unity XR development and would appreciate a simple, standalone scene or document focused only on the essentials for "teleport to gaze on pinch" in VR mode—no extra features. I do have some experience in Unreal, WorldToolkit, Cosmo, etc. from the '90s, and I'm OK with code.
Please include steps for:
Setting up immersive VR (disabling spatial defaults if needed).
Integrating pinch detection with ray-based teleport.
Any config changes or basic scripts.
Project Configuration:
Unity Editor Version: 6000.2.5f1.2588.7373 (Revision: 6000.2/staging 43d04cd1df69)
Installed Packages:
Apple visionOS XR Plugin: 2.3.1
AR Foundation: 6.2.0
PolySpatial XR: 2.3.1
XR Core Utilities: 2.5.3
XR Hands: 1.6.1
XR Interaction Toolkit: 3.2.1
XR Legacy Input Helpers: 2.1.12
XR Plugin Management: 4.5.1
Imported Samples:
Apple visionOS XR Plugin 2.3.1: Metal Sample - URP
XR Hands 1.6.1
XR Interaction Toolkit 3.2.1: Hands Interaction Demo, Starter Assets, visionOS
Build Platform Settings:
Target: Apple visionOS
App Mode: Metal Rendering with Compositor Services
Selected Validation Profiles: visionOS Metal
Documentation: Enabled
Xcode Version: 26.01
visionOS SDK: 26
Mac Hardware: Apple M1 Max
Target visionOS Version: 20 or 26
Test Environment: Model: Apple Vision Pro, visionOS 26.0.1 (23M341), Apple M1 Max
No errors in builds so far; just missing the desired functionality.
Thanks for a complete response with actionable steps.
I'm just trying to set up some kind of visual on my iPad for testing and play purposes... I tried adding iPad as a destination to both Apple Vision apps.
I get this error for both
Building for 'iphoneos', but '12.0' must be >= '18.0'
Can you help me write code that picks up an entity a bit far from me, brings it near, lets me flick it a bit, and then sends it back to its original position when I release it?
Thanks a lot,
Christophe
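Not a definitive implementation, but here is a rough sketch of the direction I have in mind, using a RealityView drag gesture on visionOS (PickAndReturnView and the box entity are placeholders): remember the entity's original transform when the drag starts, let the drag pull it near the hand and flick it around, then animate it back on release.

import SwiftUI
import RealityKit

struct PickAndReturnView: View {
    @State private var originalTransform: Transform?

    var body: some View {
        RealityView { content in
            // A stand-in entity placed a bit away from the user.
            let box = ModelEntity(mesh: .generateBox(size: 0.2),
                                  materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            box.position = [0, 1.2, -2.0]
            box.components.set(InputTargetComponent())   // make it respond to gestures
            box.generateCollisionShapes(recursive: true) // needed for hit testing
            content.add(box)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    let entity = value.entity
                    if originalTransform == nil {
                        originalTransform = entity.transform   // remember where it started
                    }
                    // Following the drag brings the entity near the hand and
                    // lets the user flick it around.
                    if let parent = entity.parent {
                        entity.position = value.convert(value.location3D, from: .local, to: parent)
                    }
                }
                .onEnded { value in
                    // Send it back to its original position over half a second.
                    if let original = originalTransform {
                        value.entity.move(to: original, relativeTo: value.entity.parent,
                                          duration: 0.5, timingFunction: .easeInOut)
                    }
                    originalTransform = nil
                }
        )
    }
}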
Hi everyone,
I am wondering under which settings the camera(s) were set by the time they were calibrated.
For instance, one aspect that is easy to find is the reference resolution of the images used when calibrating the intrinsics, which is retrieved via intrinsicMatrixReferenceDimensions. This makes sure the principal point is referenced to the resolution that was in use while the calibration was performed.
However, recently I saw that there are focusing modes that potentially displace the lens' physical position.
Settings like:
AutoFocusRangeRestriction: none, near, far
setFocusModeLocked: Locks the lens position at the specified value, and sets the focus mode to a locked state.
My concern lies in the impact these focusing lens displacements can have on the intrinsic matrix parameters, since after the lens position changes those parameters may no longer describe the camera. In simple words: what focus mode/range were the cameras set to when they were calibrated for intrinsics?
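For context, here is what I'm experimenting with in the meantime (a minimal sketch; device is my already-configured AVCaptureDevice): locking the lens at a fixed position before capture, so the physical focus state is at least known and repeatable when I read intrinsicMatrix / intrinsicMatrixReferenceDimensions from AVCameraCalibrationData.

import AVFoundation

func lockLensForCalibration(_ device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    if device.isLockingFocusWithCustomLensPositionSupported {
        // 1.0 is the far end of the lens travel; pick the position you calibrate against.
        device.setFocusModeLocked(lensPosition: 1.0, completionHandler: nil)
    } else if device.isFocusModeSupported(.locked) {
        device.focusMode = .locked
    }
}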
I'm using Unity 2022.3.56f, with Apple VisionOS App Mode set to 'Virtual Reality - Fully Immersive Space'.
It seems that the render resolution of my game in the Apple Vision Pro when I build is well below the native resolution of the AVP displays.
I can't see a setting in XR Plug-in Management Apple visionOS options, or in Quality settings, to increase the render resolutions. Is this possible?
I tried setting:
UnityEngine.XR.XRSettings.eyeTextureResolutionScale = 2.0f
For example, but this doesn't seem to do anything to the render resolution in the build.
Hi there,
I was looking to add a particle emitter to the augmented reality app I'm developing using RealityKit, targeting iOS. The documentation for ParticleEmitterComponent indicates that iOS 18.0+ is supported, but when I try to use ParticleEmitterComponent in my code in Xcode, I get an error that it isn't found. Furthermore, this StackOverflow post seems to indicate that particle systems are not available for iOS. Would it be possible to get clarification on this?
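For reference, this is the kind of minimal, availability-gated usage I'm attempting (the emitter values are just placeholders):

import RealityKit

@available(iOS 18.0, *)
func makeSparkEmitter() -> Entity {
    let entity = Entity()
    var emitter = ParticleEmitterComponent()
    emitter.emitterShape = .sphere          // emit from a spherical volume
    emitter.mainEmitter.birthRate = 500     // particles per second
    entity.components.set(emitter)
    return entity
}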
According to the official documentation, the .blur(radius:) modifier applies a Gaussian blur to a view. However, when applied directly to a RealityView, nothing inside it (neither 2D attachments nor 3D entities) appears to be blurred.
Here’s the test code:
struct ContentView: View {
    var body: some View {
        VStack(spacing: 20) {
            Text("Above the RealityView")
                .font(.title)

            RealityView { content, attachments in
                if let text = attachments.entity(for: "2dView") {
                    text.position.y = 0.1
                    content.add(text)
                }
                let box = ModelEntity(
                    mesh: .generateBox(size: 0.1),
                    materials: [SimpleMaterial(color: .red, isMetallic: true)]
                )
                content.add(box)
            } attachments: {
                Attachment(id: "2dView") {
                    Text("Above the Box")
                        .font(.title)
                }
            }
            .frame(width: 300, height: 300)
            .border(.blue)
            .blur(radius: 99) // Has no visual effect

            Text("Below the RealityView")
                .font(.subheadline)
        }
        .padding()
    }
}
My question:
How can I make .blur(radius:) visually affect the content rendered in a RealityView?
Can you provide a working example where .blur() visually affects any part of a RealityView?
Thanks!
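The only partial workaround I've found so far is blurring inside the Attachment's view builder itself, which seems to affect the 2D attachment but obviously not the 3D entities (a sketch built on the same code as above):

RealityView { content, attachments in
    if let text = attachments.entity(for: "2dView") {
        text.position.y = 0.1
        content.add(text)
    }
} attachments: {
    Attachment(id: "2dView") {
        Text("Above the Box")
            .font(.title)
            .blur(radius: 4)   // blur applied inside the attachment's own content
    }
}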
Hi guys,
In visionOS, when using a ZStack decorated with .glassBackgroundEffect(), you can see the 3D glass background from the front, but when viewed from the side, the view appears to have no thickness.
However, I noticed that in an app built by Apple, when viewing a glass background view from the side, it appears to have thickness.
I tried adding .frame(depth:) to a glass background view, but it appears as two separate layers spaced by the depth value.
My question is:
Is there a view modifier that adds visual thickness to a glass background view, as shown in the picture?
Or, if not, how should I write a custom view modifier to achieve this effect? Thanks!
I'm seeing this error while attempting to compile my visionOS app under Xcode 26. My existing code looks like:
let (naturalSize, formatDescriptions, mediaCharacteristics) = try? await videoTrack.load(.naturalSize, .formatDescriptions, .mediaCharacteristics)
This now gives a compiler error: Type of expression is ambiguous without a type annotation
I don't see anything that was changed or deprecated in the latest version. Also, loading the properties individually works fine, i.e.:
let naturalSize = try? await videoTrack.load(.naturalSize)
let formatDescriptions = try? await videoTrack.load(.formatDescriptions)
let mediaCharacteristics = try? await videoTrack.load(.mediaCharacteristics)
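One workaround I'm considering, though I haven't confirmed it resolves the ambiguity, is spelling out the tuple type explicitly so the compiler doesn't have to infer it through try?/await (assuming the property types are CGSize, [CMFormatDescription], and [AVMediaCharacteristic]):

let loaded: (CGSize, [CMFormatDescription], [AVMediaCharacteristic])? =
    try? await videoTrack.load(.naturalSize, .formatDescriptions, .mediaCharacteristics)
if let (naturalSize, formatDescriptions, mediaCharacteristics) = loaded {
    // use the values here
}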
I am trying to launch a fully immersive game from Unity on a SwiftUI view. The game is using Metal Rendering with Compositor Services.
I added the Unity Xcode project to the workspace and added the necessary bridge code. When I click the button that calls ufw?.showUnityWindow(), it does not start, and I get the following in the console:
AR session failed to start after 5 seconds. Is the app configured to use an immersive space?
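My understanding is that the error points at a missing ImmersiveSpace scene. Here is a sketch of the general visionOS setup I'm trying (the space id, app/view names, and the Unity bridge call are placeholders, and the Unity/CompositorLayer wiring itself isn't shown):

import SwiftUI

@main
struct GameLauncherApp: App {
    var body: some Scene {
        WindowGroup {
            LauncherView()
        }

        // Compositor Services content has to run inside an immersive space.
        ImmersiveSpace(id: "UnityImmersiveSpace") {
            // The CompositorLayer / Unity rendering would be hosted here.
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}

struct LauncherView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter Game") {
            Task {
                // Open the immersive space before handing control to Unity.
                switch await openImmersiveSpace(id: "UnityImmersiveSpace") {
                case .opened:
                    break   // ufw?.showUnityWindow() would go here
                default:
                    print("Immersive space failed to open")
                }
            }
        }
    }
}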
I want to display a huge image in 3D space in a RealityView on Vision Pro. Of course, instead of one giant file I'm using a lot of big images.
To achieve this, I'm generating multiple planes exactly beside each other and putting one image on each. Although the planes are exactly beside each other, there is still a white gap between them (image below).
Does anybody know how to fix this issue?
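Here is a simplified sketch of the kind of layout I'm using (tileImages and tileSize are placeholders for my real data). Every tile's position is derived from the same tile size rather than accumulated offsets, so I don't think floating-point drift between edges explains the gap; I suspect it may be texture sampling at the tile edges instead.

import RealityKit

@MainActor
func makeTiledWall(tileImages: [[TextureResource]], tileSize: Float) -> Entity {
    let wall = Entity()
    for (row, rowImages) in tileImages.enumerated() {
        for (column, texture) in rowImages.enumerated() {
            var material = UnlitMaterial()
            material.color = .init(texture: .init(texture))
            let plane = ModelEntity(mesh: .generatePlane(width: tileSize, height: tileSize),
                                    materials: [material])
            // Every position comes from the same multiplication, so adjacent
            // planes share exactly the same edge coordinate.
            plane.position = [Float(column) * tileSize, -Float(row) * tileSize, 0]
            wall.addChild(plane)
        }
    }
    return wall
}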
Tags: RealityKit, Reality Composer Pro, Shader Graph Editor, visionOS
.glassEffect(.regular, in: .rect(cornerRadius: 24))
error: 'glassEffect(_:in:isEnabled:)' is unavailable in visionOS
This is not surprising, since visionOS already has a native glass interface that served as a model for the other OSes, but this error creates additional overhead for developers building multi-platform apps that include visionOS.
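For now I'm working around it with a small cross-platform helper along these lines (a sketch only; the exact styling of the two effects differs, and the modifier name is my own):

import SwiftUI

extension View {
    @ViewBuilder
    func adaptiveGlass(cornerRadius: CGFloat = 24) -> some View {
        #if os(visionOS)
        // visionOS keeps its native glass background.
        self.glassBackgroundEffect(in: .rect(cornerRadius: cornerRadius))
        #else
        if #available(iOS 26.0, macOS 26.0, *) {
            self.glassEffect(.regular, in: .rect(cornerRadius: cornerRadius))
        } else {
            // Older OS versions fall back to a material background.
            self.background(.ultraThinMaterial, in: .rect(cornerRadius: cornerRadius))
        }
        #endif
    }
}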
Hi there,
I’m building a workplace experience that requires using the virtual desktop. Is there a way to launch it from my code, so the user doesn’t have to do it manually?
Thanks in advance!
Hi, I am trying to update which entities are visible in my RealityView. After the SwiftData set is updated, I have to restart the app for the change to appear in the RealityView.
Also, the RealityView does not close when I move to a different tab. It keeps everything on and tracking, leaving the model in the same location I left it.
import SwiftUI
import RealityKit
import MountainLake
import SwiftData

struct RealityLakeView: View {
    @Environment(\.modelContext) private var context
    @Query private var items: [Item]

    var body: some View {
        RealityView { content in
            print("View Loaded")
            let lakeScene = try? await Entity(named: "Lake", in: mountainLakeBundle)
            let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))

            @MainActor func addEntity(name: String) {
                if let lakeEntity = lakeScene?.findEntity(named: name) {
                    // Add the named entity from the Lake scene to the anchor
                    anchor.addChild(lakeEntity)
                } else {
                    print(name + " entity not found in the Lake scene.")
                }
            }

            addEntity(name: "Island")
            for item in items {
                if item.enabled {
                    addEntity(name: item.value)
                }
            }

            // Add the horizontal plane anchor to the scene
            content.add(anchor)
            content.camera = .spatialTracking
        } placeholder: {
            ProgressView()
        }
        .edgesIgnoringSafeArea(.all)
    }
}

#Preview {
    RealityLakeView()
}
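One approach I'm experimenting with (a sketch that reuses Item, mountainLakeBundle, and the entity names from above; UpdatingLakeView is just a placeholder name) is moving the add/remove work into RealityView's update: closure, which runs again whenever the observed @Query results change:

import SwiftUI
import RealityKit
import MountainLake
import SwiftData

struct UpdatingLakeView: View {
    @Query private var items: [Item]
    @State private var lakeScene: Entity?

    var body: some View {
        RealityView { content in
            // One-time setup: load the scene and keep a reference for later updates.
            lakeScene = try? await Entity(named: "Lake", in: mountainLakeBundle)
            let anchor = AnchorEntity(.plane(.horizontal, classification: .any,
                                             minimumBounds: SIMD2<Float>(0.2, 0.2)))
            anchor.name = "LakeAnchor"
            content.add(anchor)
            content.camera = .spatialTracking
        } update: { content in
            // Runs whenever the view's observed state (the query results) changes.
            guard let anchor = content.entities.first(where: { $0.name == "LakeAnchor" }),
                  let lakeScene else { return }
            for item in items {
                let existing = anchor.findEntity(named: item.value)
                if item.enabled, existing == nil,
                   let entity = lakeScene.findEntity(named: item.value) {
                    anchor.addChild(entity)
                } else if !item.enabled {
                    existing?.removeFromParent()
                }
            }
        }
    }
}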
Tags: Swift Packages, RealityKit, Reality Composer Pro, SwiftData
Hello all, I saw this interesting VisionOS app: https://apps.apple.com/us/app/splitscreen-multi-display/id6478007837
I was wondering if there was any documentation on the Swift APIs that were used to create this app.
Following up on my previous question here: https://developer.apple.com/forums/thread/774262
Having solved the clipping problem, I am now trying to overlay some content in front of the RealityView. However, it looks like any content with transparency does not render in front of the RealityView, while opaque views seem to work; placing content with transparency like glassBackgroundEffect() behind the RealityView in a ZStack causes the entire window to flicker.
Additionally, my SwiftUI attachment placed in front of the stereoscopic image plane is invisible when the user looks at it straight on at 90 degrees. However, as the user views it from increasing angles to the side, the attachment gradually becomes visible again.
Are these behaviors expected? What is a recommended approach to overlay content in front of a RealityView? Thanks!
This might be a very silly question, anyway I tried many ways and didn't find a solution:
My Mac mini M4 is the base model with only 16GB of memory, which is very tight for developing a Vision Pro application and testing with the simulator (CleanMacX keeps warning me about low memory). I want to debug and test the application directly on the Vision Pro instead of the simulator (plus my app needs two-handed gestures, which the simulator might not support). Are there proper instructions on how to test/debug/run the app on the Vision Pro device directly, instead of in the simulator, in favor of speed?