I am working on a project that requires access to the main camera on the Vision Pro. My main account holder applied for the necessary enterprise entitlement; we were approved and received the Enterprise.license file by email. I have added the Enterprise.license file to my project and manually added the com.apple.developer.arkit.main-camera-access.allow entitlement to the entitlements file, setting it to true, since it was not available in the list when I tried to use the + Capability button in the Signing & Capabilities tab.
I am getting the error: Provisioning profile "iOS Team Provisioning Profile: " doesn't include the com.apple.developer.arkit.main-camera-access.allow entitlement. I have checked the provisioning profile settings online; there is no manual option for adding the main camera access entitlement, and the profile does not seem to be picking up the approval from the license.
Hello Community,
I am currently developing an experimental visionOS app to investigate the social effects of the new Spatial Persona feature for my bachelor thesis. My setup is a simple board game in which participants can engage with each other's Persona avatars.
I tried to use TabletopKit for this setup, but ran into issues when starting the SharePlay session. When I tested my app, I couldn't see the other Spatial Persona anymore, even though the green SharePlay button indicated that the session had started. The other person can see my actions on the board in their version of the app, but cannot interact with anything. Also, we are both seated in the default seat.
I tried removing the environment I added, because it doesn't seem to sync with the other player. When I tried the FaceTime feature in the simulator without the environment, I could then see the test robot avatar, but in a completely wrong place. It seems it isn't just my environment occluding the seats, but a flaw in the seating process as well.
When I tried the FaceTime feature in the simulator on the official test scene (TabletopKit Sample), I got the same incorrect placement and the warning "role(for:inSeatNumber:): The provided role identifier does not match a role in the current template."
So my questions are:
What needs to be changed so the TabletopKit can handle seating correctly?
How can I correctly use immersive scenes in combination with the TabletopKit?
I tried to keep my implementation as close as possible to the TabletopKit example, so I think it will be enough to look into this codebase for now.
I debugged the position of seats and they are placed correctly in front of their equipment. The personas are just not placed on them.
Dear Apple Team,
I’m a high school student (vocational upper secondary school) working on my final research project about LiDAR sensors in smartphones, specifically Apple’s iPhone implementation.
My current understanding (for context):
I understand Apple’s LiDAR uses dToF with SPAD detectors: A VCSEL laser emits pulses, a DOE splits the beam into a dot pattern, and each spot’s return time is measured separately → point cloud generation.
My specific questions:
How many active projection dots does the LiDAR projector have in the iPhone 15 Pro vs. iPhone 12 Pro?
Are the dots static or do they shift/move over time?
How many depth measurement points does the system deliver internally (after processing)?
What is the ranging accuracy (cm-level precision) of each measurement point?
Experimental background: Using an IR night vision camera, I counted approximately 111 dots on the 15 Pro vs. 576 dots on the 12 Pro. Do these match the internal specifications?
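For question 3, one way to see what the system delivers after processing is to read the scene depth buffer ARKit exposes; this is only a debugging sketch of my own (not an official spec readout), and it reports the processed depth map resolution rather than the raw dot count:
import ARKit

final class DepthProbe: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Scene depth requires a LiDAR-equipped device.
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else { return }
        let config = ARWorldTrackingConfiguration()
        config.frameSemantics.insert(.sceneDepth)
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depthMap = frame.sceneDepth?.depthMap else { return }
        // Width x height of the processed depth buffer ARKit delivers per frame.
        print("Depth map:", CVPixelBufferGetWidth(depthMap), "x", CVPixelBufferGetHeight(depthMap))
    }
}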
Photos of my measurements are available if helpful.
Contact request: I would be very grateful if you could connect me with an Apple engineer or ARKit specialist who works with LiDAR technology. I would love to ask follow-up questions directly and would be happy to provide my contact details for this purpose.
These specifications would be essential for my research paper. Thank you very much in advance!
Best regards,
Max!
Vocational Upper Secondary School Hans-Leipelt-Schule Donauwörth
Research Project: “LiDAR Sensor Technology in Smartphones”
Hi Apple Team,
We noticed the following exciting entry in the changelog for the latest macOS 26 beta:
A new algorithm significantly improves PhotogrammetrySession reconstruction quality of low-texture objects not captured with the ObjectCaptureSession front end. It will be downloaded and cached once in the background when the PhotogrammetrySession is used at runtime. If network isn’t available at that time, the old low quality model will be used until the new one can be downloaded. There is no code change needed to get this improved model. (145220451)
However, after trying this on the latest beta and running some tests, we do not see any difference on objects with low texture, such as single-coloured surfaces. Is there anything we are missing? The machine is definitely connected to the internet, but we have no way of knowing from the logs whether the new model is being used.
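For context, our tests are essentially the stock PhotogrammetrySession flow, roughly along these lines (the paths and detail level below are placeholders):
import RealityKit

// Minimal reconstruction run; input folder, output path, and detail level are placeholders.
func runReconstruction() async throws {
    let inputFolder = URL(fileURLWithPath: "/tmp/LowTextureObjectImages", isDirectory: true)
    let outputModel = URL(fileURLWithPath: "/tmp/LowTextureObject.usdz")

    let session = try PhotogrammetrySession(input: inputFolder,
                                            configuration: PhotogrammetrySession.Configuration())
    try session.process(requests: [.modelFile(url: outputModel, detail: .full)])

    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Reconstruction finished")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        default:
            break
        }
    }
}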
thanks
I have a scene that has been assembled in RCP but I'm losing the correct hierarchy and transforms when running the scene in the headset or the simulator.
This is in RCP:
This is at runtime with the debugger:
As you can see the "MAIN_WAGON" entity is gone and part of the hierarchy are now children of "TRAIN_ROOT" instead.
Another issue is that not only does part of the hierarchy disappear, the transforms also revert to their default values instead of what is set in RCP:
This is in RCP:
This is in the simulator/headset:
I'm filing a feedback ticket too and will post the number here.
Anyone had a similar issue and found a fix or workaround?
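In case anyone wants to compare against their own scene, a quick helper along these lines (just debugging code, not part of the RCP scene) can dump the runtime hierarchy and translations next to what RCP shows:
import RealityKit

// Recursively print entity names and local translations to compare against RCP.
func dumpHierarchy(_ entity: Entity, indent: String = "") {
    let name = entity.name.isEmpty ? "<unnamed>" : entity.name
    print("\(indent)\(name) translation: \(entity.transform.translation)")
    for child in entity.children {
        dumpHierarchy(child, indent: indent + "  ")
    }
}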
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: Reality Composer, RealityKit, Reality Composer Pro
When I show a window while a sky sphere is displayed, the handles to drag/close/resize the window are hidden. The colliders still work, so the handles are there; only their visuals are hidden. I already know from another project that this also happens to volumes.
They only appear once you get closer to the window or if the sky sphere gets removed.
Is this a known issue or is there a fix for that?
.persistentSystemOverlays(.visible) does not fix it.
Xcode 16.3.0 Beta, visionOS 2.4
I exported some USD assets from IsaacSim, but they are not showing up correctly on my Apple Vision Pro.
Even though the mesh looks to be the correct color in Finder and the Diffuse Color appears correct, the object still renders as plain gray. It should be green!
I use ARKit for motion tracking: I get the skeleton joint coordinates and use them for animation. I didn't make any changes to the code, but after updating from iOS 18 to iOS 26, modelTransform now always returns nil.
https://developer.apple.com/documentation/arkit/arskeleton3d/modeltransform(for:)
For example
bodyAnchor.skeleton.modelTransform(for: .init(rawValue: "head_joint"))
bodyAnchor is an ARBodyAnchor.
I see the default skeleton on the screen, but now I can't get the coordinates out of it.
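This is the kind of minimal check that might help narrow down whether it's just the raw identifier or the whole skeleton that's affected (the .head joint name and the jointNames dump are debugging additions, not from the original sample):
import ARKit

final class BodyTrackingDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            let skeleton = bodyAnchor.skeleton
            // Print the joint identifiers this OS version actually exposes.
            print(skeleton.definition.jointNames)
            // Predefined joint name instead of a raw "head_joint" string.
            if let head = skeleton.modelTransform(for: .head) {
                print("head (model space):", head.columns.3)
            } else {
                print("modelTransform(for: .head) returned nil")
            }
        }
    }
}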
I'm using an example from Apple's WWDC presentation.
https://developer.apple.com/documentation/arkit/capturing-body-motion-in-3d
Are there any changes in the API, or is this just a bug?
The new Mac virtual display feature on visionOS 2 offers a curved/panoramic window. I was wondering if this is simply a property that can be applied to a window, or if it involves an immersive mode or SceneKit/RealityKit?
I am testing the Gen 2 developer strap on my Vision Pro (M2), and I have only been able to get USB 2 speeds when connecting it to my M3 Max MacBook Pro. I used the official Apple Thunderbolt 4 cable, which does reach Thunderbolt speeds with my T7 Touch drive. Has anyone figured out a solution for this issue?
The Gen 2 developer strap does advertise 20 Gb/s speeds.
Topic: Spatial Computing
SubTopic: General
I'm having a heck of a time getting this to work. I'm trying to add an event notification at the end of a timeline animation to trigger something in code, but I'm not receiving the notification from RC Pro. I've watched the "Compose interactive 3D content" video quite a few times now and have tried many different ways. RC Pro has the correct ID names on the notifications. I'm not a programmer at all, just a lowly 3D artist. Here is my code...
import SwiftUI
import RealityKit
import RealityKitContent

extension Notification.Name {
    static let button1Pressed = Notification.Name("button1pressed")
    static let button2Pressed = Notification.Name("button2pressed")
    static let button3Pressed = Notification.Name("button3pressed")
}

struct MainButtons: View {
    @State private var transitionToNextSceneForButton1 = false
    @State private var transitionToNextSceneForButton2 = false
    @State private var transitionToNextSceneForButton3 = false
    @Environment(AppModel.self) var appModel
    @Environment(\.dismissWindow) var dismissWindow

    // Notification publishers for each button
    private let button1PressedReceived = NotificationCenter.default.publisher(for: .button1Pressed)
    private let button2PressedReceived = NotificationCenter.default.publisher(for: .button2Pressed)
    private let button3PressedReceived = NotificationCenter.default.publisher(for: .button3Pressed)

    var body: some View {
        ZStack {
            RealityView { content in
                // Load your RC Pro scene that contains the 3D buttons.
                if let immersiveContentEntity = try? await Entity(named: "MainButtons", in: realityKitContentBundle) {
                    content.add(immersiveContentEntity)
                }
            }
            // Optionally attach a gesture if you want to debug a generic tap:
            .gesture(
                TapGesture().targetedToAnyEntity().onEnded { value in
                    print("3D Object tapped")
                    _ = value.entity.applyTapForBehaviors()
                    // Do not post a test notification here; rely on RC Pro timeline events.
                }
            )
        }
        .onAppear {
            dismissWindow(id: "main")
            // Remove any test notification posting code.
        }
        // Listen for distinct button notifications.
        .onReceive(button1PressedReceived) { output in
            print("Button 1 pressed notification received")
            transitionToNextSceneForButton1 = true
        }
        .onReceive(button2PressedReceived.receive(on: DispatchQueue.main)) { _ in
            print("Button 2 pressed notification received")
            transitionToNextSceneForButton2 = true
        }
        .onReceive(button3PressedReceived.receive(on: DispatchQueue.main)) { _ in
            print("Button 3 pressed notification received")
            transitionToNextSceneForButton3 = true
        }
        // Present next scenes for each button as needed. For example, for button 1:
        .fullScreenCover(isPresented: $transitionToNextSceneForButton1) {
            FacilityTour()
                .environment(appModel)
        }
        // You can add additional fullScreenCover modifiers for button 2 and 3 transitions.
    }
}
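Based on the "Compose interactive 3D content" material, RC Pro timeline Notification actions seem to arrive as a single RealityKit notification whose identifier is carried in userInfo, rather than as custom Notification.Name values like button1pressed; a sketch of that receiving pattern follows (the notification name and userInfo key strings are taken from Apple's sample material, so please verify them against your setup):
import SwiftUI
import Combine

// Sketch: receive RC Pro timeline Notification actions via the RealityKit trigger
// notification and read the identifier configured in RC Pro from userInfo.
struct TimelineNotificationListener: ViewModifier {
    let onTrigger: (String) -> Void

    private let publisher = NotificationCenter.default
        .publisher(for: Notification.Name("RealityKit.NotificationTrigger"))
        .receive(on: DispatchQueue.main)

    func body(content: Content) -> some View {
        content.onReceive(publisher) { notification in
            if let id = notification.userInfo?["RealityKit.NotificationTrigger.Identifier"] as? String {
                onTrigger(id) // e.g. "button1pressed", "button2pressed", ...
            }
        }
    }
}

// Usage in MainButtons, replacing the three custom onReceive modifiers:
// .modifier(TimelineNotificationListener { id in
//     if id == "button1pressed" { transitionToNextSceneForButton1 = true }
// })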
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: Graphics and Games, Xcode, SwiftUI, Reality Composer Pro
I have been experimenting with the Hello World sample app from https://developer.apple.com/documentation/visionos/world and I came across behavior that appears inconsistent with user-facing documentation describing the device controls at https://support.apple.com/en-gb/guide/apple-vision-pro/tan1e2a29e00/visionos
I tried pressing the simulator's "Home" button while the "Objects in Orbit" immersive space was presented alongside the main application window. According to the user documentation, pressing the Digital Crown should take the user directly to the Home View. In my test, a single press only dismissed the immersive space; I needed another press to "exit" the app and go to the Home View.
Is this behavior expected? I am assuming that the "Home" button in the simulator behaves as if the user pressed the Digital Crown on the device; I don't have access to the actual hardware.
This is related to the WWDC presentation "What's new in Metal rendering for immersive apps."
Specifically, the macOS spatial streaming to visionOS feature. For reference: the page in the docs.
The presentation demonstrates it using a full immersive space and Metal rendering using compositor services.
I'd like clarity on a few things:
Is the remote device wireless, or must the visionOS device be connected via a wired connection?
Is there a limit to the number of remote devices, and if not, could macOS render different things per remote device simultaneously?
Can I also use mixed mode with passthrough enabled, instead of just a fully-immersive mode?
Can I use RealityKit instead of Metal? If so, could someone point me to an example?
I am experimenting with RealityKit to set up a portal. Everything works, but I was wondering where the scene's origin is with respect to the front of the portal window?
From experiments, the origin's X and Y appear to be at the center of the portal window, while the origin's Z appears to be about a meter behind the portal window.
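A crude way to probe this is to drop a small marker entity at known coordinates inside the portal's world; the sphere below is purely a debugging aid, not part of the standard template:
import RealityKit
import UIKit

// Drop a small marker at the portal world's origin to see where (0, 0, 0) lands
// relative to the portal window.
func addOriginMarker(to portalWorld: Entity) {
    let marker = ModelEntity(
        mesh: .generateSphere(radius: 0.02),
        materials: [SimpleMaterial(color: .red, isMetallic: false)]
    )
    marker.position = .zero
    portalWorld.addChild(marker)
}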
Is this (at least roughly) correct? Is it documented anywhere?
PS. I began with the standard visionOS app and edited the Reality Composer Pro file to create the scene.
What is the reason the hand-tracking joints have these axes? I'm trying to create a virtual hand model, and it's a mess.
I want to add a grounding shadow to my Entity in a RealityView on Vision Pro. However, it seems the shadow can only appear on another Entity, so I'm using plane detection in ARKit and adding a transparent plane to render the shadow on.
let planeEntity = ModelEntity(mesh: .generatePlane(width: anchor.geometry.extent.width, height: anchor.geometry.extent.height), materials: [material])
planeEntity.components.set(OpacityComponent(opacity: 0.0))
But sometimes there is a border around my Entity on the plane.
I do not know why this happens, and I want to remove the border.
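For completeness, the grounding shadow itself is enabled on the model with the standard component; a minimal sketch with a placeholder entity name:
import RealityKit

// Sketch: cast a grounding shadow from the model entity; "model" is a placeholder
// for the Entity loaded in the RealityView.
func enableGroundingShadow(on model: ModelEntity) {
    model.components.set(GroundingShadowComponent(castsShadow: true))
}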
Hi, I am trying to load files from the Apple Vision Pro's storage into a Unity app (using the Apple visionOS XR Plugin, not the PolySpatial package). So far, I've tried UnitySimpleFileBrowser and UnityStandaloneFileBrowser (neither is made for the Vision Pro, and neither works there), and then implemented my own naive file browser that at least lets me view the directories visible from the App Sandbox. This is of course very limited:
Gray folders can't be accessed, and the only three accessible ones don't contain anything where a user would put files through the Files app.
I know that an app can request access to these "Files & Folders":
So my question is: Is there a way to request this access for a Unity-built app at the moment? If yes, what do I need to do? I've looked into the generated Xcode project's "Capabilities", but did not find anything related to file access. Any help is appreciated!
Hello,
I'm working with the new PortalComponent introduced in visionOS 2.0, and I've encountered some issues when transitioning entities between virtual and real-world spaces using crossingMode.
Specifically:
Lighting inconsistency: When CG content (ModelEntities with PhysicallyBasedMaterial) crosses the portal from virtual space into the real environment, the way light reflects on the objects changes noticeably. This causes a jarring visual effect, as the same material appears differently depending on the space it's in.
Unnatural transition visuals: During the transition, the CG models often appear to "emerge from the wall," especially when crossing from virtual to real. This ruins the immersive illusion and feels visually unnatural.
IBL adjustment attempts: I’ve tried adding an ImageBasedLightComponent to the world entity, and while it slightly improves the lighting consistency, the issue still remains to a noticeable degree.
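For reference, the IBL attempt is roughly the following, applying one shared image-based light to the world entity; also lighting the crossing entities with the same source is my own variation, and the resource name is a placeholder:
import RealityKit

// Sketch: share one image-based light between the portal world and the entities
// that cross it, so the lighting doesn't jump at the boundary.
func applySharedIBL(world: Entity, crossingEntities: [Entity]) async throws {
    let environment = try await EnvironmentResource(named: "SkyEnvironment") // placeholder resource

    let lightSource = Entity()
    lightSource.components.set(ImageBasedLightComponent(source: .single(environment)))
    world.addChild(lightSource)

    for entity in [world] + crossingEntities {
        entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: lightSource))
    }
}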
My goal is to create a seamless visual experience when CG entities cross between spaces, without sudden lighting shifts or immersion-breaking geometry reveals.
Has anyone else experienced similar issues?
Is there a recommended setup or workaround to better control lighting and visual fidelity when using crossingMode with portals in visionOS 2.0?
Any guidance would be greatly appreciated.
Thank you!
error: [xrsimulator] Component Compatibility: EnvironmentLightingConfiguration not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults")
error: Tool exited with code 1
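The error itself points at the platforms array in the Reality Composer Pro content package; the kind of change it seems to be asking for looks roughly like this (package name, tools version, and the exact visionOS version are placeholders, the point being that the target must be newer than visionOS 1.0 for this component):
// swift-tools-version:6.0
import PackageDescription

let package = Package(
    name: "RealityKitContent",          // placeholder: your RCP package name
    platforms: [
        .visionOS(.v2)                  // raised from .v1 so newer components compile
    ],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"])
    ],
    targets: [
        .target(name: "RealityKitContent")
    ]
)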
I'm developing a visionOS panorama viewer app where I need to implement an auto-hiding floating menu in immersive space. The menu should:
Show for 3 seconds when entering immersive mode
Auto-hide after 3 seconds
Reappear when the user taps anywhere (using SpatialTapGesture)
Buttons should respond to gaze + pinch interaction
The Problem:
When I add .windowStyle(.plain) to achieve transparent window background for the auto-hide effect, all buttons in the menu become completely unresponsive to gaze + pinch interaction. The buttons only respond to direct finger touch (poking).
Without .windowStyle(.plain): Buttons work correctly with gaze + pinch, but I cannot achieve transparent window background for hiding.
With .windowStyle(.plain): Window can be transparent, but buttons lose gaze + pinch interaction.
Code:
App.swift:
@main
struct MyApp: App {
    @StateObject private var model = AppModel()

    var body: some Scene {
        WindowGroup(id: "MainWindow") {
            ContentView()
                .environmentObject(model)
        }
        .defaultSize(width: 900, height: 700)
        .windowResizability(.contentSize)
        .windowStyle(.plain) // <-- This causes the interaction issue

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
                .environmentObject(model)
        }
    }
}
ContentView.swift (simplified):
struct ContentView: View {
    @EnvironmentObject var model: AppModel
    @State private var isMenuVisible: Bool = true

    var body: some View {
        VStack {
            if model.isImmersiveViewActive {
                if isMenuVisible {
                    // This menu's buttons don't respond to gaze+pinch
                    immersiveControlMenu
                }
            } else {
                mainMenuButtons
            }
        }
        .glassBackgroundEffect()
    }

    private var immersiveControlMenu: some View {
        HStack {
            Button("Exit") {
                exitImmersiveSpace()
            }
            .buttonStyle(.bordered) // Also tried .plain, same issue
        }
        .padding()
        .glassBackgroundEffect()
    }
}
ImmersiveView.swift:
struct ImmersiveView: View {
    @EnvironmentObject var model: AppModel

    var body: some View {
        RealityView { content in
            // Panorama sphere
            let sphere = ModelEntity(mesh: .generateSphere(radius: 1000), materials: [material])
            content.add(sphere)

            // Tap detector for menu toggle
            let tapDetector = Entity()
            tapDetector.components.set(CollisionComponent(shapes: [.generateSphere(radius: 900)]))
            tapDetector.components.set(InputTargetComponent())
            content.add(tapDetector)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { _ in
                    model.shouldShowMenu = true
                }
        )
    }
}
Environment:
Xcode 26.2
visionOS 26.3
Vision Pro device
Questions:
Is .windowStyle(.plain) expected to affect button interaction behavior?
What is the recommended approach to achieve a transparent/hidden window in immersive mode while maintaining button interactivity?
Is there an alternative to .windowStyle(.plain) for hiding window chrome in visionOS?
Thank you for any guidance!
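One possible variation (not something in the code above) is to skip the separate plain window entirely and host the menu as a RealityView attachment inside the immersive space, which avoids window chrome while keeping gaze + pinch on the buttons; a trimmed sketch, with placeholder IDs, positions, and menu content:
import SwiftUI
import RealityKit

// Sketch: host the auto-hiding menu as an attachment inside the immersive space
// instead of a separate .plain window. IDs, positions, and the menu view are placeholders.
struct ImmersiveViewWithMenu: View {
    @State private var isMenuVisible = true

    var body: some View {
        RealityView { content, attachments in
            if let menu = attachments.entity(for: "menu") {
                menu.position = [0, 1.4, -1.5] // roughly eye height, 1.5 m in front
                content.add(menu)
            }
        } update: { _, attachments in
            // Toggle visibility without tearing down the attachment.
            attachments.entity(for: "menu")?.isEnabled = isMenuVisible
        } attachments: {
            Attachment(id: "menu") {
                Button("Exit") { /* dismiss the immersive space here */ }
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}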