Hello All,
We're going to build a scene transition, something like a time-travel door: when the user selects a scene, they pass through the door into that scene. The transition in the middle needs to feel natural, and it would be even better if the user could walk through it into an immersive space...
There is very little information on this right now. How can I get started? Is there any material I can refer to?
Thanks
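From what I've gathered so far, RealityKit's portal support might be the starting point: the destination scene lives under an entity with a WorldComponent and is only visible through a surface carrying a PortalComponent. A minimal, untested sketch (all names and sizes are placeholders):
import RealityKit

// Untested sketch: a doorway that reveals a separate "world" through a portal.
func makePortalScene() -> Entity {
    let root = Entity()

    // The destination scene lives in its own world and is only visible
    // through the portal surface.
    let world = Entity()
    world.components.set(WorldComponent())
    // ... add the destination scene's content as children of `world` ...
    root.addChild(world)

    // A door-sized plane that renders the world behind a portal material.
    let door = Entity()
    door.components.set(ModelComponent(
        mesh: .generatePlane(width: 1.0, height: 2.0),
        materials: [PortalMaterial()]
    ))
    door.components.set(PortalComponent(target: world))
    root.addChild(door)

    return root
}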
We use SceneReconstructionProvider to detect meshes in the surrounding environment and apply an OcclusionMaterial to them.
// Assuming `entity` represents one of the detected meshes in the environment
entity.components.set(ModelComponent(
    mesh: mesh,
    materials: [OcclusionMaterial()]
))
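For context, the surrounding setup looks roughly like this (a simplified sketch: it only handles newly added anchors, and the conversion of the anchor's geometry into a MeshResource is elided behind a hypothetical makeMeshResource(from:) helper):
import ARKit
import RealityKit

let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()

func runSceneReconstruction(addingTo root: Entity) async throws {
    try await session.run([sceneReconstruction])
    for await update in sceneReconstruction.anchorUpdates {
        // Simplified: anchor updates and removals are ignored here.
        guard update.event == .added else { continue }
        let meshAnchor = update.anchor

        let entity = Entity()
        entity.setTransformMatrix(meshAnchor.originFromAnchorTransform, relativeTo: nil)

        // Hypothetical helper that builds a MeshResource from the anchor's geometry.
        let mesh = try await makeMeshResource(from: meshAnchor)
        entity.components.set(ModelComponent(
            mesh: mesh,
            materials: [OcclusionMaterial()]
        ))
        root.addChild(entity)
    }
}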
While this correctly occludes entities placed in the immersive space, it also occludes system windows. This becomes problematic when a window is dragged into an occluded area (before or after entering the immersive space), preventing interaction with its elements. In some cases, it also makes it impossible to focus on the window's drag handle, since the handle itself can become occluded after the window is moved nearby. More generally, system windows can be occluded whenever they come into proximity with a model that has OcclusionMaterial applied.
I'm aware of a change introduced in visionOS 2 regarding how occlusions interact with UI elements (as noted in the release notes). I believe this change was intended to ensure windows do not remain visible when opened in another room. However, this also introduces some challenges, as described in the scenario above.
Is there a way to prevent system window occlusion while still allowing entities to be occluded by environmental features? Perhaps not using OcclusionMaterial at all?
Development environment: Xcode 16.2, macOS 15.2
Run-time configuration: visionOS 2.2 and 2.3
I'm facing an issue with CustomHoverEffect. My view has a long title that gets truncated; when the user hovers over it, the title should scroll. I have already implemented the scrolling effect, but I'm unsure how to trigger the scroll on hover. How should I approach this?
Hi,
we've watched Explore Object Tracking for visionOS and worked through the sample code ExploringObjectTrackingWithARKit.
What we'd really like to see is Object Tracking for iOS using devices with either LiDAR or the TrueDepth/RGB cameras.
I am using HelloPhotogrammetry in Xcode
I can make one model with something like HelloPhotogrammetry.main([path_to_folder_of_images, path_to_output/model.usdz, "-d", "medium", "-o", "unordered", "-f", "high"])
But how would I request several models simultaneously? I only want to vary the detail.
[ ("/Users/you/Desktop/model_medium.usdz", detail: .medium), ("/Users/you/Desktop/model_full.usdz", detail: .full), ("/Users/you/Desktop/model_raw.usdz", detail: .raw ]
Hello everyone,
I've been trying for a few weeks now to convert a sequential series of meshes into a stop-motion animation in USDZ format.
In Unreal Engine, I’ve already figured out how to transform the sequential series of individual meshes into a smooth animation using the node system and arrays.
Unfortunately, the node system cannot be exported as USDZ animation logic in either Unreal or Blender.
Because of this, I have tried several other methods to incorporate the animation logic. Here’s what I’ve tried so far:
I attempted to create the animation in Blender by keyframing render/viewport visibility. However, in my experience, these visibility toggles are not carried over in the conversion.
I tried aligning the vertices of individual objects and merging the frames using the Shrinkwrap modifier in Blender, then setting up a morph animation with keyframes. However, because the individual meshes are too different, this results in artifacts, and manually editing each mesh is too difficult for me to handle.
I placed all individual meshes at the same position and animated them sequentially by scaling them from 0 to 100 in keyframes (Frame 1 is visible for 10 frames, then scales down at frame 11, while Frame 2 becomes visible at frame 11, and so on). I also adjusted the keyframes so that the scaling happens in a "constant" manner rather than the default Bezier or linear interpolation. I then converted this animation to .abc, and the result initially looked good. However, some information is lost when converting it with OpenUSD. The animation does not maintain its intended jump-like behavior in USDZ format, and instead, the scaling of individual files is visible in the animation.
I tried using a Blender add-on (StepMotion), which allows the animation to be exported as .abc, but it can only be read in Blender or Unreal. Even in the preview, the animation is not displayed correctly, so converting the animation logic does not work either.
Unfortunately, I have no alternative way to create the animation, as the individual frames have been provided to me as meshes. So far, I haven’t found a way to implement this successfully.
I would be very grateful for any tips or ideas, as I am running out of options on how to make this work.
Thanks in advance!
So I am exporting a .usdc file from Blender that already has some morph animations. The animations play well in Blender, but when I export, I cannot play them in RealityKit or RCP.
Entity.availableAnimations is an empty array.
None of the child objects in the entity hierarchy has an AnimationLibraryComponent either.
Maybe I am exporting it wrong; I have tried multiple combinations, but none seems to work.
Here are my export settings in Blender:
The original file I purchased is an FBX file that has the animation, but when I try to bring it directly into Reality Converter, it doesn't seem to play the animations.
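For reference, this is the kind of diagnostic I'm using to check what actually got imported (plain RealityKit API): walk the hierarchy and print each entity's animation count.
import RealityKit

// Walks the imported hierarchy and reports which entities carry animations.
func dumpAnimations(_ entity: Entity, depth: Int = 0) {
    let indent = String(repeating: "  ", count: depth)
    print("\(indent)\(entity.name): \(entity.availableAnimations.count) animation(s)")
    for child in entity.children {
        dumpAnimations(child, depth: depth + 1)
    }
}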
What is the recommended best practice for importing a Blender 3D file into RCP? I assume as a .usdz file? Is there a WWDC24 session or other Apple resource that best explains this? I want to make sure I provide the right format/file to RCP from Blender.
With the new ImagePresentationComponent in visionOS 26, how can text/overlays be shown on top of the image as seen in the Spatial Gallery app?
Hi
I'm trying to make a 360° stereo viewer, and I have made a ShaderGraphMaterial in Reality Composer Pro.
I'm trying to use that material on an inverted sphere, which is generated in Swift.
When I try to attach the material, I get the error "Type of expression is ambiguous without a type annotation".
Here is the code (sorry, I'm a noob =) ):
import SwiftUI
import RealityKit
import RealityKitContent
import PhotosUI

struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            guard let skyBoxEntity = await createSkybox() else {
                return
            }
            content.add(skyBoxEntity)
        }
    }
}

@MainActor
private func createSkybox() async -> Entity? {
    // `try?` makes the material Optional; using it unwrapped in the materials
    // array (plus the stray period after `set(...)`) is what triggered
    // "Type of expression is ambiguous without a type annotation".
    guard let matX = try? await ShaderGraphMaterial(
        named: "/Root/Mat_Stereo360",
        from: "360Stereo.usda",
        in: realityKitContentBundle
    ) else {
        return nil
    }
    let sphere = MeshResource.generateSphere(radius: 1000)
    let entity = Entity()
    entity.components.set(ModelComponent(mesh: sphere, materials: [matX]))
    // Flip the sphere inside out so the material faces the viewer.
    entity.scale *= .init(x: -1, y: 1, z: 1)
    return entity
}
I hope someone can help me =)
Best regards,
Kim
Hello,
I was looking back into downloading the Tracking geographic locations in AR sample app from https://developer.apple.com/documentation/arkit/tracking-geographic-locations-in-ar
Unfortunately, the Download link points to the .zip of the DisplayingAPointCloudUsingSceneDepth sample project instead.
The exact same issue occurs when trying to download the sample code from https://developer.apple.com/documentation/ARKit/creating-a-fog-effect-using-scene-depth
I'm wondering whether those links are deliberately broken because of possible deprecations.
Thanks to any Apple Engineer willing to look into that.
Hello, I am trying to develop an app that broadcasts what the user sees via Apple Vision Pro. I am a graduate student studying at a university.
I have two questions:
If I want to use passthrough in screen capture (in visionOS), do I have to join the Apple Developer Enterprise Program to get the Enterprise API?
And can I buy the Apple Developer Enterprise Program (Enterprise API) membership with my university account?
Have any of you been able to do this?
Thank you
Hi folks, I'm new to the Vision Pro stack and still trying to learn all the nuances. Here is a problem I can't seem to find an answer to.
I placed Entity A (a small, 0.02-radius sphere) inside Entity B (a 0.1-size box). Both entities have a HoverEffectComponent, and both have an InputTargetComponent set to .direct. Entity A is NOT a child of Entity B. When I direct-touch Entity B, I notice that Entity A's hover effect fires as well. This only happens when Entity A's position is inside Entity B. A gesture targeted only at Entity A doesn't work either. I double-checked Entity A's collider, which sits inside Entity B's collider; my direct touch shouldn't have triggered its hover effect. Does having one collider inside another produce unpredictable behavior? Thanks in advance 🙏🙏🙏
Context: I'm trying to create an invisible bound around Entity A, so that when my hand approaches the bound to grab Entity A, a nice spotlight hover effect fires on the bound before my hand reaches Entity A.
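A minimal repro sketch of the setup described above (standard components; sizes as mentioned, positions illustrative):
import RealityKit

// Entity B: a 0.1 box with direct input and a hover effect.
let entityB = ModelEntity(mesh: .generateBox(size: 0.1))
entityB.components.set(InputTargetComponent(allowedInputTypes: .direct))
entityB.components.set(HoverEffectComponent())
entityB.components.set(CollisionComponent(shapes: [.generateBox(size: [0.1, 0.1, 0.1])]))

// Entity A: a 0.02-radius sphere placed inside B, but NOT a child of B.
let entityA = ModelEntity(mesh: .generateSphere(radius: 0.02))
entityA.components.set(InputTargetComponent(allowedInputTypes: .direct))
entityA.components.set(HoverEffectComponent())
entityA.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.02)]))
entityA.position = entityB.position // overlapping colliders

// Observed: a direct touch on B also fires A's hover effect, and gestures
// targeted only at A don't work while A's collider sits inside B's.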
Hi 26 beta guys,
I have apps using ARKit.
In iPadOS 26 beta, ARKit stops working after switching to other apps.
Steps to reproduce:
Enable windowed mode in iPadOS 26
Launch my app and start an ARSession
Switch to another app (the Settings app, etc.)
Switch back to my app
The AR camera feed stops updating.
I added debug prints in my ARSessionDelegate and found that
after sessionWasInterrupted was called, sessionInterruptionEnded was never called.
sessionInterruptionEnded is called when window mode is disabled.
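For reference, these are the delegate callbacks I'm printing from (standard ARSessionDelegate methods):
// The relevant ARSessionDelegate callbacks, with the logging described above.
func sessionWasInterrupted(_ session: ARSession) {
    print("sessionWasInterrupted") // fires when switching away from the app
}

func sessionInterruptionEnded(_ session: ARSession) {
    print("sessionInterruptionEnded") // with window mode enabled, this never fires
}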
Is this just a bug in the 26 beta?
I suspect there is a similar problem with the non-AR camera.
Any ideas?
I am trying to apply an ImpulseAction to an entity, but every time entity.playAnimation(impulseAnimation) is executed, the log says Cannot find a BindPoint for any bind path: "". I can't figure out what is wrong. Could someone please help me with this?
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle),
               let sphere = immersiveContentEntity.findEntity(named: "Sphere") {
                sphere.components.set(CollisionComponent(shapes: [ShapeResource.generateSphere(radius: 0.1)]))
                sphere.components.set(PhysicsBodyComponent(shapes: [ShapeResource.generateSphere(radius: 0.1)], mass: 1000))
                sphere.components[PhysicsBodyComponent.self]?.isAffectedByGravity = false
                sphere.position = [0, 1, -1]
                content.add(immersiveContentEntity)

                // Create an action to apply an impulse, forcing the object to move upwards.
                let impulseAction = ImpulseAction(linearImpulse: [0, 1, 0])

                // Create a small positive duration value.
                let duration: TimeInterval = 1 / 30.0

                // Create an animation for the action, which will start playing
                // after five seconds.
                do {
                    let impulseAnimation = try AnimationResource
                        .makeActionAnimation(for: impulseAction,
                                             duration: duration,
                                             delay: 5.0)
                    // Play the animation that runs the action.
                    sphere.playAnimation(impulseAnimation)
                } catch {
                    print("Error: \(error)")
                }
            }
        }
    }
}
All the logs:
Could not locate file 'default-binaryarchive.metallib' in bundle.
Error creating the CFMessagePort needed to communicate with PPT.
AddInstanceForFactory: No factory registered for id <CFUUID 0x6000029a5b80> F8BB1C28-BAE8-11D6-9C31-00039315CD46
cannot add handler to 0 from 1 - dropping
nw_socket_copy_info [C1:2] getsockopt TCP_INFO failed [102: Operation not supported on socket]
nw_socket_copy_info getsockopt TCP_INFO failed [102: Operation not supported on socket]
Registering library (/Library/Developer/CoreSimulator/Volumes/xrOS_22N840/Library/Developer/CoreSimulator/Profiles/Runtimes/xrOS 2.2.simruntime/Contents/Resources/RuntimeRoot/System/Library/PrivateFrameworks/CoreRE.framework/default.metallib) that already exists in shader manager. Library will be overwritten.
cannot add handler to 0 from 1 - dropping
Cannot find a BindPoint for any bind path: "", ""
Sync object without snapshot while removing view (id: 2816861686082450363, type: 6373420419761316588[SelectableSceneContentIdentifierComponent]).
But I think only Cannot find a BindPoint for any bind path: "", "" is relevant.
In visionOS, once an immersive space is opened, the background color is solid black. Is it possible to make this background transparent?
FYI, the immersive space uses Compositor Services for drawing 3D content.
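For comparison, a plain SwiftUI/RealityKit immersive space keeps passthrough visible when it uses the .mixed immersion style; a sketch (this only illustrates the immersion-style side, not a full Compositor Services setup):
import SwiftUI
import RealityKit

@main
struct MyImmersiveApp: App {
    // .mixed renders content over passthrough instead of a black background.
    @State private var immersionStyle: ImmersionStyle = .mixed

    var body: some Scene {
        ImmersiveSpace(id: "Immersive") {
            RealityView { _ in
                // Scene content goes here.
            }
        }
        .immersionStyle(selection: $immersionStyle, in: .mixed, .full)
    }
}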
I sketched an idea for a project in Reality Composer on my iPad, thinking that when I had a chance to sit down I would work it up in Xcode.
However, when I got back to my computer, I discovered I cannot open a file created in Reality Composer (or the exported Reality file) in Reality Composer Pro.
Am I missing something obvious here? Because this seems like a huge oversight.
If anyone can let me know how to open a file created in Reality Composer in Reality Composer Pro, I would greatly appreciate it, partly because there seem to be objects available in Reality Composer that are not in Reality Composer Pro.
Thanks
Stan
Hello experts, and question seekers,
I have been trying to get Gaussian splats working with RealityKit, but it hasn't worked out for me so far.
The library I use for Gaussian splatting: https://github.com/scier/MetalSplatter
My idea was to use the renderers provided by RealityKit (aka RealityRenderer) https://developer.apple.com/documentation/realitykit/realityrenderer and the renderer provided by MetalSplatter (aka. SplatRenderer) https://github.com/scier/MetalSplatter/blob/main/MetalSplatter/Sources/SplatRenderer.swift
Then, with a custom render pipeline, I would be able to compose the outputs of the two renderers. This would enable, for example, building immersive scenery from realistic environment scans as Gaussian splats, with RealityKit providing the features to build extra scenery around them, e.g. dynamic 3D models placed inside the splats.
However, the problem is that I am unable to do this with the current implementation of RealityRenderer.
First, RealityRenderer appears to be an API intended only to render colour information onto a texture, which at first glance might be useful, but it misses important information such as depth and stencil.
Second, even with that in mind, I am currently unable to execute RealityRenderer.updateAndRender due to the following error messages:
Could not resolve material name 'engine:BuiltinRenderGraphResources/Common/realityRendererBackground.rematerial' in bundle at '/Users//Library/Developer/CoreSimulator/Devices//data/Containers/Bundle/Application//.app'. Loading via asset path.
exiting spatial tracking service update thread because wait returned 37”
I was able to build a custom Metal view with UIViewRepresentable, MTKView, and MTKViewDelegate, enabling me to build a custom rendering pipeline by utilising some of the Metal developer workflows.
Reference: https://developer.apple.com/documentation/xcode/metal-developer-workflows/
Inside draw(in view: MTKView), in a class conforming to MTKViewDelegate:
guard let currentDrawable = view.currentDrawable else {
    return
}
// Note: created here each frame only for brevity; a real implementation
// would create the RealityRenderer once and reuse it.
let realityRenderer = try! RealityRenderer()
try! realityRenderer.updateAndRender(
    deltaTime: 0.0,
    cameraOutput: .init(.singleProjection(colorTexture: currentDrawable.texture)),
    whenScheduled: { realityRenderer in
        print("Rendering scheduled")
    },
    onComplete: { realityRenderer in
        print("Rendering completed")
    }
)
Can you please tell me what I am doing wrong?
Is there any solution that enables me to use RealityKit with, for example, Gaussian splats?
Any help is greatly appreciated.
All the best,
Ethem Kurt
We have a project which is currently being built as an XCFramework.
The framework contains a custom component to be used with entities in Reality Composer Pro.
I have tried to set the RCP Package.swift file to reference the framework package in its dependencies.
Nothing that I do with the folder path to reference the code is working.
Do I need to change the project to use Swift source code instead of an XCFramework?
The component needs to stay in the framework, as there is a class in the framework that works directly with the custom component.
I am able to reference the XCFramework as a Swift package from other projects.
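In case it matters, here is roughly how I'm trying to wire it up; a minimal Package.swift sketch where the XCFramework is exposed to the RCP package as a binary target (all names and paths are placeholders):
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [.visionOS(.v1)],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"])
    ],
    targets: [
        // The prebuilt framework that defines the custom component.
        .binaryTarget(
            name: "MyComponentsFramework",
            path: "../Frameworks/MyComponentsFramework.xcframework"
        ),
        .target(
            name: "RealityKitContent",
            dependencies: ["MyComponentsFramework"]
        )
    ]
)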
With Xcode 26, loading resources with RealityKit is extremely slow.
Here my project takes almost 50 seconds to load.
I also get multiple "Hang detected" messages in the console.
When I uncheck "Debug executable" in the scheme, the same project loads in 2 seconds.
I'm using RealityKit asynchronous loading:
private static func loadFromRealityComposerPro(
    named entityName: String,
    fromSceneNamed sceneName: String
) async -> Entity? {
    var entity: Entity?
    do {
        let scene = try await Entity(
            named: sceneName,
            in: visionPetsContentBundle
        )
        entity = scene.findEntity(named: entityName)
    } catch {
        print(
            "Error loading \(entityName) from scene \(sceneName): \(error.localizedDescription)"
        )
    }
    return entity
}
Anyone having the same problem?