Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post | Replies | Boosts | Views | Activity
Buttons become unresponsive after using .windowStyle(.plain) with auto-hiding menu
I'm developing a visionOS panorama viewer app where I need to implement an auto-hiding floating menu in immersive space. The menu should:

Show for 3 seconds when entering immersive mode
Auto-hide after 3 seconds
Reappear when the user taps anywhere (using SpatialTapGesture)
Respond to gaze + pinch interaction on its buttons

The problem: when I add .windowStyle(.plain) to get a transparent window background for the auto-hide effect, all buttons in the menu become completely unresponsive to gaze + pinch interaction. The buttons only respond to direct finger touch (poking).

Without .windowStyle(.plain): buttons work correctly with gaze + pinch, but I cannot achieve a transparent window background for hiding.
With .windowStyle(.plain): the window can be transparent, but buttons lose gaze + pinch interaction.

Code:

App.swift:

@main
struct MyApp: App {
    @StateObject private var model = AppModel()

    var body: some Scene {
        WindowGroup(id: "MainWindow") {
            ContentView()
                .environmentObject(model)
        }
        .defaultSize(width: 900, height: 700)
        .windowResizability(.contentSize)
        .windowStyle(.plain) // <-- This causes the interaction issue

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
                .environmentObject(model)
        }
    }
}

ContentView.swift (simplified):

struct ContentView: View {
    @EnvironmentObject var model: AppModel
    @State private var isMenuVisible: Bool = true

    var body: some View {
        VStack {
            if model.isImmersiveViewActive {
                if isMenuVisible {
                    // This menu's buttons don't respond to gaze + pinch
                    immersiveControlMenu
                }
            } else {
                mainMenuButtons
            }
        }
        .glassBackgroundEffect()
    }

    private var immersiveControlMenu: some View {
        HStack {
            Button("Exit") {
                exitImmersiveSpace()
            }
            .buttonStyle(.bordered) // Also tried .plain, same issue
        }
        .padding()
        .glassBackgroundEffect()
    }
}

ImmersiveView.swift:

struct ImmersiveView: View {
    @EnvironmentObject var model: AppModel

    var body: some View {
        RealityView { content in
            // Panorama sphere
            let sphere = ModelEntity(mesh: .generateSphere(radius: 1000), materials: [material])
            content.add(sphere)

            // Tap detector for menu toggle
            let tapDetector = Entity()
            tapDetector.components.set(CollisionComponent(shapes: [.generateSphere(radius: 900)]))
            tapDetector.components.set(InputTargetComponent())
            content.add(tapDetector)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { _ in
                    model.shouldShowMenu = true
                }
        )
    }
}

Environment:

Xcode 26.2
visionOS 26.3
Vision Pro device

Questions:

Is .windowStyle(.plain) expected to affect button interaction behavior?
What is the recommended approach to achieve a transparent/hidden window in immersive mode while maintaining button interactivity?
Is there an alternative to .windowStyle(.plain) for hiding window chrome in visionOS?

Thank you for any guidance!
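One direction worth sketching (an assumption on my part, not a confirmed answer from this thread): host the control menu as a RealityView attachment inside the immersive space instead of in a separate window, so no .plain-style window is needed at all and the menu can be hidden with ordinary SwiftUI opacity. The view name ImmersiveViewWithMenu, the placement, and the 3-second timing below are illustrative; the rest uses standard RealityView attachment API.

import SwiftUI
import RealityKit

// Sketch only: hosts the control menu as an attachment in the immersive space
// instead of in a separate .plain-style window.
struct ImmersiveViewWithMenu: View {
    @State private var isMenuVisible = true

    var body: some View {
        RealityView { content, attachments in
            // Place the menu attachment roughly at eye height, 1 m in front of the origin.
            if let menu = attachments.entity(for: "controlMenu") {
                menu.position = [0, 1.3, -1]
                content.add(menu)
            }
        } update: { _, _ in
        } attachments: {
            Attachment(id: "controlMenu") {
                HStack {
                    Button("Exit") {
                        // dismiss the immersive space here
                    }
                    .buttonStyle(.bordered)
                }
                .padding()
                .glassBackgroundEffect()
                .opacity(isMenuVisible ? 1 : 0) // hide/show without touching any window
            }
        }
        .task {
            // Auto-hide after 3 seconds, mirroring the behavior described above.
            try? await Task.sleep(for: .seconds(3))
            withAnimation { isMenuVisible = false }
        }
    }
}

With the menu living in the immersive space, the main WindowGroup can keep the default window style, so its buttons keep normal gaze + pinch behavior; the SpatialTapGesture from the post could then simply set isMenuVisible back to true.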
4
0
272
1d
Manipulation stops working when changing rooms
This post documents an issue I reported in feedback FB19610114; I also want to see if anyone knows of a workaround. Here is a copy of the feedback.

Short version

Manipulation (SwiftUI or RealityKit) fails to translate entities after changing rooms. By changing rooms, I mean a person wearing an Apple Vision Pro leaving one room and entering another. Once this issue occurs, it impacts all apps that use these features. A device restart is the only fix I have found.

Feedback FB19610114

This is an odd one. I'm using the new ManipulationComponent in visionOS 26. Most of the time this works well. Sometimes it stops working, and when it does, the only way to get it working again is to reboot the headset. When this happens, I can continue to rotate and scale items, but translation no longer works. It is as if the item is stuck to a fixed point in the parent scene (window, volume, etc.).

When this bug occurs, it affects every app across the entire operating system that is using manipulation, including the RealityKit component AND the SwiftUI version. It is not limited to one app and is not limited to apps that I am working on. Once this error occurs, it affects literally any application across the operating system that is using this API, including apps from Apple.

I won't speculate on the cause, but I do know one way to always make it happen. Here is how to reproduce it:

1. Make an Xcode project with a single entity that uses the ManipulationComponent. There is no need to customize the configuration of this component; the default implementation will work.
2. Build and run this app on device. You can keep running from device, or quit and launch the app normally on device.
3. Open the app and manipulate the entity - it should work as expected.
4. Physically walk into another room. It is vital that you leave the current room entirely and enter a different room.
5. Use the Digital Crown to recenter your view and bring your window or volume to you.
6. Test the manipulation on the entity again - it should still be working as expected at this point.
7. Physically move yourself and your headset back into the original room where you started.
8. Use the Digital Crown to recenter your view and bring your window or volume to you.
9. Test the manipulation on the entity again - you should now see the issue.

When I follow the steps above, manipulation translation stops working 100% of the time at this point. It will impact any application using this API. The only way to fix it is to restart my headset.

A few points to keep in mind:

It does not matter if an app is actively being run from Xcode.
When this occurs, it impacts every single app, not just one.
When this occurs, rotation and scaling continue to work, but the entity/view cannot be translated.
This impacts BOTH the SwiftUI version and the RealityKit version.
When this occurs, the only way to "fix" it is to reboot the device.
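For anyone trying to reproduce this, a minimal sketch of the step-1 entity is below. This is my own illustration, not code from the feedback; it assumes the default ManipulationComponent configuration and adds the collision/input components manually. Sizes, colors, and the view name are illustrative.

import RealityKit
import SwiftUI

// Minimal repro sketch: one entity with a default ManipulationComponent.
struct ManipulationReproView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(
                mesh: .generateBox(size: 0.2),
                materials: [SimpleMaterial(color: .orange, isMetallic: false)]
            )
            // Manipulation relies on collision shapes and input targeting.
            box.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
            box.components.set(InputTargetComponent())
            box.components.set(ManipulationComponent()) // default configuration
            box.position = [0, 1.2, -0.8]
            content.add(box)
        }
    }
}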
6
3
916
1w
visionOS – Starting GroupActivity FaceTime Call dismisses Immersive Space
Hello, I am in the process of implementing SharePlay support in my visionOS app. Everything runs fine when I test locally, but when my app is distributed via TestFlight, calling try await activity.activate() shows the SharePlay dialog as usual; when I then start a new FaceTime call, my ImmersiveSpace gets dismissed. This only happens when the app is distributed via TestFlight; when I run it locally, the ImmersiveSpace stays active as expected. Looking at the console on my Mac I found this log:

Invalid initial client settings class: UIApplicationSceneClientSettings; expected class: MRUISharedApplicationSceneClientSettings; bundle ID: com.apple.facetime; scene ID: com.apple.facetime:SFBSystemService-DDA8C751-C0C4-487E-AD85-59EF4E6C6050

Does anyone have an idea how I can fix this? It's driving me nuts; I've spent over a day looking for a workaround but have so far been unsuccessful. Thanks!
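For context, the activate() call mentioned above normally sits in a flow like the sketch below. The activity type, identifier, and metadata are hypothetical stand-ins, not taken from the post; only the GroupActivities calls themselves are standard API.

import GroupActivities

// Hypothetical activity definition, for illustration only.
struct SharedViewingActivity: GroupActivity {
    static let activityIdentifier = "com.example.shared-viewing"

    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "View Together"
        metadata.type = .generic
        return metadata
    }
}

// Activation as referenced in the post: shows the SharePlay dialog,
// then the session starts once a FaceTime call is active.
func startSharePlay() async {
    do {
        _ = try await SharedViewingActivity().activate()
    } catch {
        print("SharePlay activation failed: \(error)")
    }
}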
7
2
1k
Dec ’25
Please provide access to face tracking blendshapes on visionOS
Apple, please provide access to face tracking blend shapes on visionOS, just like you do on iOS. You have the best eye and face tracking implementation on the market; please let us use it. There is a sizable audience who will buy the headset just for it. I personally know multiple people who are not buying the headset simply because you locked those features out. No raw camera access is needed, just abstracted blend shape values. You will make the headset so much more useful if you do this simple thing.
1
1
139
Jun ’25
Issue in TabletopKit Sample scene + shared immersive space
Hello Community, I am currently developing an experimental visionOS app to investigate the social effects of the new Spatial Persona feature for my bachelor thesis. My setup includes a simple board game for the participants, in which they can engage with their persona avatars. I tried to use TabletopKit for this setup, but ran into issues when starting the SharePlay session. When I tested my app, I couldn't see the other spatial persona anymore, despite the green SharePlay button indicating the session had started. The other person can see my actions on the board in their version of the app, but cannot interact with anything. Also, we are both seated at the default seat. I tried to remove the environment I added, because it doesn't seem to sync with the other player. When I tried the FaceTime feature in the simulator without the environment, I could then see the test robot avatar, but at a totally wrong place. It seems like it isn't just my environment occluding the seats, but a flaw in the seating process as well. When I tried the FaceTime feature in the simulator on the official test scene (TabletopKit Sample), I got the same incorrect placement and the warning "role(for:inSeatNumber:): The provided role identifier does not match a role in the current template."

So my questions are:

What needs to be changed so that TabletopKit can handle seating correctly?
How can I correctly use immersive scenes in combination with TabletopKit?

I tried to keep my implementation as close as possible to the TabletopKit example, so I think it will be enough to look into that codebase for now. I debugged the position of the seats and they are placed correctly in front of their equipment. The personas are just not placed on them.
6
4
663
Mar ’25
Why VideoMaterial can't show transparency on Apple Vision Pro
https://developer.apple.com/documentation/realitykit/videomaterial

The documentation says: "Video materials support transparency if the source video's file format also supports transparency." I have a transparent video (Hand.mov, HEVC with alpha). I can show the video with a transparent background correctly in the Vision Pro Simulator, but on the physical device the video has a black background. I'm sure the video format is fine, because I can get the texture from the video and display it on an UnlitMaterial. How can I show the transparent video correctly with RealityKit/VideoMaterial?
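For reference, a typical VideoMaterial setup looks roughly like the sketch below; the file name Hand.mov comes from the post, while the plane size and function name are illustrative. This does not by itself explain why alpha works in the simulator but not on device.

import AVFoundation
import RealityKit

// Sketch: play an HEVC-with-alpha movie through VideoMaterial on a plane.
func makeVideoPlane() -> ModelEntity? {
    guard let url = Bundle.main.url(forResource: "Hand", withExtension: "mov") else { return nil }
    let player = AVPlayer(url: url)
    let material = VideoMaterial(avPlayer: player)
    let plane = ModelEntity(
        mesh: .generatePlane(width: 0.6, height: 0.4),
        materials: [material]
    )
    player.play()
    return plane
}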
2
0
258
Dec ’25
How to Achieve Volumetric Lighting (Light Shafts) in RealityKit on visionOS?
Hello everyone, I am currently developing an experience for visionOS using RealityKit and I would like to achieve volumetric light effects, such as visible light rays or shafts through fog or dust. I found this GitHub project: https://github.com/robcupisz/LightShafts, which demonstrates the kind of visual style I am aiming for. I would like to know if there is a way to create similar effects using RealityKit on visionOS. So far, I have experimented with DirectionalLight, SpotLight, ImageBasedLight, and custom materials (e.g., additive blending on translucent meshes), but none of these approaches can replicate the volumetric light shaft look shown in the repository above.

Questions:

Is there a recommended technique or workaround in RealityKit to simulate light shafts or volumetric lighting?
Is creating a custom mesh (e.g., cone or volume geometry with gradient alpha and additive blending) the only feasible method?
Are there any examples, best practices, or sample projects from Apple or other developers that showcase a similar visual style?

Any advice or hints would be greatly appreciated. Thank you in advance!
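To make the custom-mesh idea from the second question concrete, here is a minimal sketch of a faked shaft: a cone with a semi-transparent unlit material. The dimensions, color, and opacity are illustrative assumptions, and this only approximates the look geometrically rather than doing real volumetric scattering.

import RealityKit
import UIKit

// Sketch: approximate a light shaft with a cone mesh and a semi-transparent unlit material.
func makeLightShaft() -> ModelEntity {
    var material = UnlitMaterial(color: UIColor.white.withAlphaComponent(0.15))
    material.blending = .transparent(opacity: .init(floatLiteral: 0.15))

    let shaft = ModelEntity(
        mesh: .generateCone(height: 2.0, radius: 0.4),
        materials: [material]
    )
    // Flip the cone so the wide end sits at the "light" source above.
    shaft.orientation = simd_quatf(angle: .pi, axis: [1, 0, 0])
    return shaft
}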
9
1
885
Aug ’25
Bouncy ball in RealityKit - game
I'm developing a visionOS app with bouncing ball physics and struggling to achieve natural bouncing behavior using RealityKit's physics system. Despite following Apple's recommended parameters, the ball loses significant energy on each bounce and doesn't behave like a real basketball, tennis ball, or football would. With identical physics parameters (restitution = 1.0), RealityKit shows significant energy loss. I've had to implement a custom physics system to compensate, but I want to use native RealityKit physics. It's impossible to make it work by applying custom impulses.

Ball Physics Setup (Following Apple Forum Recommendations)

// From PhysicsManager.swift
private func createBallEntityRealityKit() -> Entity {
    let ballRadius: Float = 0.05
    let ballEntity = Entity()
    ballEntity.name = "bouncingBall"

    // Mesh and material
    let mesh = MeshResource.generateSphere(radius: ballRadius)
    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(tint: .cyan)
    material.roughness = .float(0.3)
    material.metallic = .float(0.8)
    ballEntity.components.set(ModelComponent(mesh: mesh, materials: [material]))

    // Physics setup from Apple Developer Forums
    let physics = PhysicsBodyComponent(
        massProperties: .init(mass: 0.624), // Seems too heavy for 5cm ball
        material: PhysicsMaterialResource.generate(
            staticFriction: 0.8,
            dynamicFriction: 0.6,
            restitution: 1.0 // Perfect elasticity, yet still loses energy
        ),
        mode: .dynamic
    )
    ballEntity.components.set(physics)
    ballEntity.components.set(PhysicsMotionComponent())

    // Collision setup
    let collisionShape = ShapeResource.generateSphere(radius: ballRadius)
    ballEntity.components.set(CollisionComponent(shapes: [collisionShape]))

    return ballEntity
}

Ground Plane Physics

// From GroundPlaneView.swift
let groundPhysics = PhysicsBodyComponent(
    massProperties: .init(mass: 1000),
    material: PhysicsMaterialResource.generate(
        staticFriction: 0.7,
        dynamicFriction: 0.6,
        restitution: 1.0 // Perfect bounce
    ),
    mode: .static
)
entity.components.set(groundPhysics)

Wall Physics

// From WalledBoxManager.swift
let wallPhysics = PhysicsBodyComponent(
    massProperties: .init(mass: 1000),
    material: PhysicsMaterialResource.generate(
        staticFriction: 0.7,
        dynamicFriction: 0.6,
        restitution: 0.85 // Slightly less than ground
    ),
    mode: .static
)
wall.components.set(wallPhysics)

Collision Detection

// From GroundPlaneView.swift
content.subscribe(to: CollisionEvents.Began.self) { event in
    guard physicsMode == .realityKit else { return }
    let currentTime = Date().timeIntervalSince1970
    guard currentTime - lastCollisionTime > 0.1 else { return }

    if event.entityA.name == "bouncingBall" || event.entityB.name == "bouncingBall" {
        let normal = event.collision.normal
        // Distinguish between wall and ground collisions
        if abs(normal.y) < 0.3 {
            // Wall bounce
            print("Wall collision detected")
        } else if normal.y > 0.7 {
            // Ground bounce
            print("Ground collision detected")
        }
        lastCollisionTime = currentTime
    }
}

Issues Observed

Energy loss: despite restitution = 1.0 (perfect elasticity), the ball loses ~20-30% energy per bounce.
Wall sliding: the ball tends to slide down walls instead of bouncing naturally.
No damping control: comments mention damping values, but they don't seem to affect the physics.
Changing the mass also doesn't do much.

Custom Physics System (Workaround)

I've implemented a custom physics system that manually calculates velocities and applies more realistic restitution values:

// From BouncingBallComponent.swift
struct BouncingBallComponent: Component {
    var velocity: SIMD3<Float> = .zero
    var angularVelocity: SIMD3<Float> = .zero
    var bounceState: BounceState = .idle
    var lastBounceTime: TimeInterval = 0
    var bounceCount: Int = 0
    var peakHeight: Float = 0
    var totalFallDistance: Float = 0

    enum BounceState {
        case idle
        case falling
        case justBounced
        case bouncing
        case settled
    }
}

Questions:

Is this energy loss expected behavior in RealityKit, even with perfect restitution (1.0)?
Are there additional physics parameters (damping, solver iterations, etc.) that could improve bounce behavior?
Would switching to Unity be necessary for more realistic ball physics, or am I missing something in RealityKit?

Even in the last video here: https://stepinto.vision/example-code/collisions-physics-physics-material/ the bounce of the ball is very unnatural - it stops after 3-4 bounces. I apply custom impulses, but then if I have walls around the ball, it's almost impossible to make it look natural. I also saw this post https://developer.apple.com/forums/thread/759422 and the ball is still not bouncing naturally.
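One parameter worth ruling out, sketched below as my own suggestion rather than anything confirmed in this thread: PhysicsBodyComponent exposes linearDamping and angularDamping, and any nonzero damping will drain velocity over time regardless of restitution. The mass, friction, and restitution numbers here are illustrative, not recommendations.

import RealityKit

// Sketch: set damping explicitly so restitution is the only place energy is lost.
func configureBallPhysics(on ball: ModelEntity, radius: Float = 0.05) {
    var physics = PhysicsBodyComponent(
        massProperties: .init(mass: 0.1),
        material: .generate(staticFriction: 0.8, dynamicFriction: 0.6, restitution: 0.9),
        mode: .dynamic
    )
    physics.linearDamping = 0.0  // damping removes velocity over time, independent of restitution
    physics.angularDamping = 0.0
    ball.components.set(physics)
    ball.components.set(PhysicsMotionComponent())
    ball.components.set(CollisionComponent(shapes: [.generateSphere(radius: radius)]))
}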
9
0
912
Nov ’25
RoomCaptureSession with ARSCNView crashes when scanning multiple hotspots across different rooms
We're developing an iOS application that integrates RoomCaptureSession with ARSCNView for room scanning. Our implementation differs from the standard RoomCaptureView because we need custom UI guidance, with 3D dots placed in the scanning environment to guide users through the capture process.

Bug description: the application crashes when users attempt to scan multiple rooms or apartments in sequence. The crash occurs with the following pattern:

User successfully scans the first room with multiple hotspots (working correctly)
User stops scanning and moves to a new room
In the new room, the first 1-2 hotspots work correctly
Application crashes when attempting to scan additional hotspots

Technical details:

Error: SLAM Anchor assertion failure in SlamAnchor.cpp:37 : HasValidPose()
Crash occurs in Thread 27 with CAPIDetectionOutputFwdNode
The error suggests invalid positioning when placing AR anchors

Steps to reproduce:

Start a room scan
Complete multiple hotspot captures in the first room
Stop scanning
Start a new room scan
Capture 1-2 hotspots successfully
Attempt additional hotspot captures -> crashes

Attempted solutions:

Implemented anchor cleanup between sessions
Added position validation before anchor placement
Implemented ARSession error handling
Added proper thread management for AR operations

Environment:

Device: iPhone 14 Pro (LiDAR equipped)
iOS version: 18.1.1 (22B91)
Testing through TestFlight

Crash log details:

Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Triggered by Thread: 27

Thread 27 Crashed:
0 libsystem_kernel.dylib 0x00000001f0cc91d4 __pthread_kill + 8
1 libsystem_pthread.dylib 0x0000000228e12ef8 pthread_kill + 268
2 libsystem_c.dylib 0x00000001a86bbad8 abort + 128
3 AppleCV3D 0x0000000234d71a28 cv3d::vio::capi::SlamAnchor::SlamAnchor

Question: is there a recommended approach for handling multiple room captures with custom ARSCNView integration? The standard RoomCaptureView implementation doesn't show this behavior, but we need the custom guidance functionality that ARSCNView provides. Code and full crash logs can be provided if needed.
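For the "anchor cleanup between sessions" item, one possible shape of that reset is sketched below. This is purely illustrative and unverified against this particular SLAM assertion; it assumes the app shares a single ARSession between the RoomCaptureSession and the ARSCNView, and the function name is hypothetical.

import ARKit
import RoomPlan

// Sketch: stop the capture session between rooms, then reset the shared ARSession
// so anchors and tracking state from the previous room are discarded.
func resetBetweenRooms(captureSession: RoomCaptureSession, arSession: ARSession) {
    captureSession.stop()

    let configuration = ARWorldTrackingConfiguration()
    arSession.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}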
2
1
684
Feb ’25
With manipulation component, once you let go, how to prevent the entity from disappearing while animating it back into the volume
So with the new ManipulationComponent, we can choose "stay", and then if you drag the entity out of your volume, once you let go it will instantly disappear. We can "animate" it back to inside the volume, e.g.:

content.subscribe(to: ManipulationEvents.WillRelease.self) { event in
    Entity.animate(
        .easeInOut(duration: 1),
        body: { event.entity.position = [0, 0.2, 0] },
        completion: {}
    )
}

However, for the duration that it travels outside of the volume, it's invisible the whole time. In this Apple video, it seems to be visible when dragging and when letting go, but perhaps that's not a volume they're dragging it out of? https://youtu.be/VtenPKrvPOU?si=y1zoZOs2IMyDzOm6&t=1748

Does anyone know how to keep the entity visible after letting it go, while you animate it back toward the inside of your volume?
1
1
880
6d
Spaceship sample code does not compile
I'm getting the following error message when compiling Apple's provided sample, the Spaceship game for Apple Vision Pro. I've already tried deleting the derived data, resetting the package cache, and restarting Xcode, but I'm still getting the following error:

[xrsimulator] Exception thrown during compile: Cannot get rkassets content for path /Users/myoungkang/Downloads/CreatingASpaceshipGame/Packages/Studio/Sources/Studio/Studio.rkassets because 'The file “Studio.rkassets” couldn’t be opened because you don’t have permission to view it.'
error: Tool exited with code 1
0
1
73
Apr ’25
Odd image placeholder appearing when dismissing an ImmersiveSpace with an ImagePresentationComponent
Hello, there are odd artifacts (one looks like an image placeholder) appearing when dismissing an immersive space that is displaying an ImagePresentationComponent. Both artifacts look like widgets. See below for our simple code displaying the ImagePresentationComponent, and the images of the odd artifacts that appear briefly when dismissing the immersive space.

import OSLog
import RealityKit
import SwiftUI

struct ImmersiveImageView: View {
    let logger = Logger(subsystem: AppConstant.SUBSYSTEM, category: "ImmersiveImageView")
    @Environment(AppModel.self) private var appModel

    var body: some View {
        RealityView { content in
            if let currentMedia = appModel.currentMedia,
               var imagePresentationComponent = currentMedia.imagePresentationComponent {
                let imagePresentationComponentEntity = Entity()

                switch currentMedia.type {
                case .iphoneSpatialMovie:
                    logger.info("\(#function) \(#line) spatial3DImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatial3DImmersive
                case .twoD:
                    logger.info("\(#function) \(#line) spatial3DImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatial3DImmersive
                case .visionProConvertedSpatialPhoto:
                    logger.info("\(#function) \(#line) spatialStereoImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatialStereoImmersive
                default:
                    logger.error("\(#function) \(#line) Unsupported media type \(currentMedia.type)")
                    assertionFailure("Unsupported media type \(currentMedia.type)")
                }

                imagePresentationComponentEntity.components.set(imagePresentationComponent)
                imagePresentationComponentEntity.position = AppConstant.Position.spacialImagePosition
                content.add(imagePresentationComponentEntity)
            }

            let toggleViewAttachmentComponent = ViewAttachmentComponent(rootView: ToggleImmersiveSpaceButton())
            let toggleViewAttachmentComponentEntity = Entity(components: toggleViewAttachmentComponent)
            toggleViewAttachmentComponentEntity.position = SIMD3<Float>(
                AppConstant.Position.spacialImagePosition.x + 1,
                AppConstant.Position.spacialImagePosition.y,
                AppConstant.Position.spacialImagePosition.z
            )
            toggleViewAttachmentComponentEntity.scale = AppConstant.Scale.attachments
            content.add(toggleViewAttachmentComponentEntity)
        }
    }
}
0
1
209
Jul ’25