Delve into the world of graphics and game development. Discuss creating stunning visuals, optimizing game mechanics, and sharing resources for game developers.

Posts under Graphics & Games topic

Post

Replies

Boosts

Views

Activity

Combining render encoders
When I take a frame capture of my application in Xcode, it shows a warning that reads "Your application created separate command encoders which can be combined into a single encoder. By combining these encoders you may reduce your application's load/store bandwidth usage." In the minimal reproduction case I've identified for this warning, I have two render pipeline states: the first writes to the current drawable, the depth buffer, and a secondary color buffer; the second writes only to the current drawable. Because these write to different sets of outputs, I was initially creating two separate render command encoders to handle the draws under each of these states. My understanding is that Xcode is telling me I could create just one. However, when I try to do that, I get runtime asserts when applying the second render pipeline state, since it doesn't have a matching attachment configured for the second color buffer or for the depth buffer, so I can't simply combine the encoders. Is the only solution here to detect and propagate forward the color/depth attachments from the first state into the creation of the second state? Is there any way to suppress this specific warning in Xcode?
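For reference, a minimal sketch of the attachment-propagation approach described above. It assumes a bgra8Unorm drawable, an rgba16Float secondary color buffer, and a depth32Float depth buffer (all placeholder formats): give the second pipeline state the same attachment formats as the shared render pass, and mask off the attachments it does not write.

import Metal

// Sketch only: formats and shader function names are placeholders.
func makeSecondPipeline(device: MTLDevice, library: MTLLibrary) throws -> MTLRenderPipelineState {
    let descriptor = MTLRenderPipelineDescriptor()
    descriptor.vertexFunction = library.makeFunction(name: "vertex_second")     // hypothetical
    descriptor.fragmentFunction = library.makeFunction(name: "fragment_second") // hypothetical
    descriptor.colorAttachments[0].pixelFormat = .bgra8Unorm   // drawable, still written
    descriptor.colorAttachments[1].pixelFormat = .rgba16Float  // must match the pass...
    descriptor.colorAttachments[1].writeMask = []              // ...but writes are disabled
    descriptor.depthAttachmentPixelFormat = .depth32Float      // matches the pass
    return try device.makeRenderPipelineState(descriptor: descriptor)
}

Pairing this with a depth-stencil state whose isDepthWriteEnabled is false keeps the second set of draws from touching the depth buffer, so both pipeline states can share one encoder.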
1
0
310
Jul ’25
Handling Z-Up Blender USDZ Models in RealityKit (visionOS) for Transform Updates
Hello everyone, I'm working on a visionOS application using RealityKit and am encountering a common coordinate system challenge when integrating 3D models created in Blender. My goal is to display and dynamically update the Transform (position, rotation, scale) of models created in Blender within RealityKit. The issue arises because Blender's default coordinate system is Z-up, and while exporting to USD/USDZ, I don't have a reliable "Y-up" export option that correctly reorients the model and its transform data for RealityKit's Y-up convention. This means I'm essentially exporting models with their "up" direction along the Z-axis. When I load these Z-up exported models into RealityKit, they are often oriented incorrectly. To then programmatically update their Transform (e.g., move them, rotate them based on game logic, or apply physics), I need to ensure that the Transform values I set align with RealityKit's Y-up system, even though the original model data was authored in a Z-up context. My questions are: What is the recommended transformation process (e.g., using simd_quatf or simd_float4x4) to convert a Transform that was conceptually defined in a Z-up coordinate system to RealityKit's Y-up coordinate system? Specifically, when I have a Transform (or its translation, rotation, scale components) from a Z-up context, how should I apply this to a RealityKit Entity so it appears and behaves correctly in a Y-up world? Are there any existing convenience APIs or helper functions within RealityKit, simd, or other Apple frameworks that simplify this Z-up to Y-up Transform conversion process? Or is a manual application of a transformation quaternion (e.g., simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])) the standard approach? Any guidance, code examples, or best practices from those who have faced similar challenges would be greatly appreciated! Thank you.
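A minimal sketch of the manual conversion the question mentions, assuming the Transform components were authored in a Z-up frame (helper names are illustrative):

import RealityKit
import simd

// Rotate -90 degrees about X so Blender's +Z-up frame maps onto RealityKit's +Y-up frame.
let zUpToYUp = simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])

func yUpTransform(from zUp: Transform) -> Transform {
    var t = zUp
    t.rotation = zUpToYUp * t.rotation           // reorient the rotation into Y-up
    t.translation = zUpToYUp.act(t.translation)  // rotate the position into Y-up
    // Scale applies in the entity's local frame, which rotates along with the
    // rotation above, so it passes through unchanged.
    return t
}

// Alternative: parent the model under a fix-up entity and keep authoring in Z-up.
let fixUp = Entity()
fixUp.orientation = zUpToYUp

The fix-up-parent alternative keeps game logic in the model's native frame at the cost of one extra node in the hierarchy.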
1
1
491
Jul ’25
GCController.shouldMonitorBackgroundEvents = true broken?
I suspect that setting GCController.shouldMonitorBackgroundEvents = true does not actually make game controller inputs accessible to the app when it is in the background. About this value, the official documentation says: "A Boolean value that indicates whether the app needs to respond to controller events when it isn't the frontmost app." The behavior now is that when the app is in focus, the user's inputs are correctly recognized, but as soon as the app enters the background, no inputs are recognized. The controller is not reported as disconnecting and still works, for example, in Launchpad. I am sure that about two months ago, when I first used this, it worked as one would expect. I have also seen that an app which lets users execute certain actions using their controller has stopped working recently, adding to my suspicion that the feature is broken. Here is a minimal reproducible example:

import SwiftUI
import GameController

@main
struct TestingControllerConnectionApp: App {
    @NSApplicationDelegateAdaptor(AppDelegate.self) var appDelegate

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

class AppDelegate: NSObject, NSApplicationDelegate {
    var statusItem: NSStatusItem?
    var controller: GCController?

    func applicationDidFinishLaunching(_ notification: Notification) {
        setupMenuBar()
        GCController.shouldMonitorBackgroundEvents = true
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(controllerDidConnect),
            name: .GCControllerDidConnect,
            object: nil
        )
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(controllerDidDisconnect),
            name: .GCControllerDidDisconnect,
            object: nil
        )
    }

    @objc private func setupMenuBar() {
        let menu = NSMenu()
        menu.addItem(NSMenuItem(title: "Quit", action: #selector(quitApp), keyEquivalent: "q"))
        statusItem = NSStatusBar.system.statusItem(withLength: NSStatusItem.variableLength)
        statusItem?.button?.image = NSImage(resource: .controllerBar)
        statusItem?.menu = menu
    }

    @objc private func quitApp() {
        NSApp.terminate(nil)
    }

    @objc private func controllerDidConnect(_ notification: Notification) {
        if let controller = notification.object as? GCController {
            print("Controller connected")
            self.controller = controller
            if let gamepad = controller.extendedGamepad {
                gamepad.buttonA.pressedChangedHandler = { _, _, pressed in
                    print("Button A pressed: \(pressed)")
                }
            }
        }
    }

    @objc private func controllerDidDisconnect(_ notification: Notification) {
        print("Controller disconnected")
    }
}

This was created in a completely fresh Xcode project, and NSHumanInterfaceDeviceUsageDescription has been added. I am using a PS5 controller and a Mac running macOS 15.4.1 that has been restarted, with only Xcode and the app opened. I have tested this with a multitude of different entitlements and capabilities, including: NSHumanInterfaceDeviceUsageDescription; Supports Controller User Interaction; Required background modes -> App communicates with an accessory; com.apple.security.device.bluetooth; com.apple.security.device.hid; com.apple.security.device.usb. I have also set this value at different points in the code with no change in effect. Does anybody see a fault in my code or in my understanding of the effect of 'shouldMonitorBackgroundEvents'? Or is this functionality actually broken on Apple's part?
1
0
213
Apr ’25
Can you delete an MTLLibrary once shaders are placed into a pipeline?
Hello, I am quite new to using the Metal API and was wondering whether it is common (or even possible) to release the library that was used to reference the shaders, if you know that, once a pipeline is created, you will never need to make another one with the same shaders again. I'm only asking because this is possible in other APIs, but Apple never mentions (as far as I have found) whether this is safe to do.
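A minimal sketch of what release-after-creation looks like in ARC-managed Swift (the kernel name is hypothetical). The expectation is that the pipeline state retains whatever compiled artifacts it still needs, so dropping your own library reference should be safe; treat this as an assumption to verify rather than a documented guarantee.

import Metal

func makePipeline(device: MTLDevice) throws -> MTLComputePipelineState {
    guard let library = device.makeDefaultLibrary(),
          let function = library.makeFunction(name: "myKernel") else { // hypothetical name
        fatalError("Missing default library or kernel function.")
    }
    return try device.makeComputePipelineState(function: function)
    // `library` and `function` go out of scope here; under ARC your references
    // are released, and the returned pipeline state remains usable.
}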
1
0
398
Oct ’25
GameKit Achievements Won't Unhide
I added achievements to my approved app for the next release version, which I am running in the simulator. When I look at the Achievements page, I can see that there are 17 achievements available (correct), but they all show as hidden, despite checking the "No" box in App Store Connect.
1
1
161
May ’25
Why does Game Mode not get triggered for my app?
I think I really have tried everything, and I did everything according to the official documentation to support Game Mode on iOS and iPadOS, but no matter what I do, it just doesn't get triggered. Funnily enough, it works during development when I install the app via Xcode, but as soon as it is live on the store and I install it from there, Game Mode doesn't get triggered anymore. What I have at the moment: I have added (even though it is deprecated)

<key>GCSupportsGameMode</key>
<true/>

I have set (though it seems to be supported only on macOS)

<key>LSApplicationCategoryType</key>
<string>public.app-category.games</string>

I have added

<key>LSSupportsGameMode</key>
<true/>

It just doesn't work. Is there anything else that needs to be done? Shouldn't the LSSupportsGameMode flag normally be enough? The reason this is so annoying is that my app is a real-time streaming app, and I want to benefit from minimized background activity for smoother gameplay and more consistent frame rates, as mentioned in the documentation.
1
0
740
Nov ’25
moveCharacter reports collision with itself
I'm running into an issue with collisions between two entities that have a character controller component. In the collision handler for moveCharacter, the collision has both hitEntity and characterEntity set to the same object: the entity that was moved with moveCharacter(). The example below configures three objects: a stationary sphere with a character controller, a falling sphere with a character controller, and a stationary cube with a collision component. If the falling sphere hits the stationary sphere, the collision handler reports both hitEntity and characterEntity to be the falling sphere. I would expect hitEntity to be the stationary sphere and characterEntity to be the falling sphere. If the falling sphere hits the cube with a collision component, then the hit entity is the cube and the characterEntity is the falling sphere, as expected. Is this the expected behavior? The entities act as expected visually; however, if I want the spheres to react differently depending on which character they collided with, I am not getting the expected results, i.e., if a player-controlled character collides with an NPC, exchange a resource with the NPC; if the player collides with an enemy, take damage.

import SwiftUI
import RealityKit

struct ContentView: View {
    @State var root: Entity = Entity()
    @State var stationary: Entity = createCharacter(named: "stationary", radius: 0.05, color: .blue)
    @State var falling: Entity = createCharacter(named: "falling", radius: 0.05, color: .red)
    @State var collisionCube: Entity = createCollisionCube(named: "cube", size: 0.1, color: .green)

    //relative to root
    @State var fallFrom: SIMD3<Float> = [0, 0.5, 0]

    var body: some View {
        RealityView { content in
            content.add(root)
            root.position = [0, -0.5, 0.0]
            root.addChild(stationary)
            stationary.position = [0, 0.05, 0]
            root.addChild(falling)
            falling.position = fallFrom
            root.addChild(collisionCube)
            collisionCube.position = [0.2, 0, 0]
            collisionCube.components.set(InputTargetComponent())
        }
        .gesture(SpatialTapGesture().targetedToAnyEntity().onEnded { tap in
            let tapPosition = tap.entity.position(relativeTo: root)
            falling.components.remove(FallComponent.self)
            falling.teleportCharacter(to: tapPosition + fallFrom, relativeTo: root)
        })
        .toolbar {
            ToolbarItemGroup(placement: .bottomOrnament) {
                HStack {
                    Button("Drop") {
                        falling.components.set(FallComponent(speed: 0.4))
                    }
                    Button("Reset") {
                        falling.components.remove(FallComponent.self)
                        falling.teleportCharacter(to: fallFrom, relativeTo: root)
                    }
                }
            }
        }
    }
}

@MainActor
func createCharacter(named name: String, radius: Float, color: UIColor) -> Entity {
    let character = ModelEntity(mesh: .generateSphere(radius: radius),
                                materials: [SimpleMaterial(color: color, isMetallic: false)])
    character.name = name
    character.components.set(CharacterControllerComponent(radius: radius, height: radius))
    return character
}

@MainActor
func createCollisionCube(named name: String, size: Float, color: UIColor) -> Entity {
    let cube = ModelEntity(mesh: .generateBox(size: size),
                           materials: [SimpleMaterial(color: color, isMetallic: false)])
    cube.name = name
    cube.generateCollisionShapes(recursive: true)
    return cube
}

struct FallComponent: Component {
    let speed: Float
}

struct FallSystem: System {
    static let predicate: QueryPredicate<Entity> = .has(FallComponent.self) && .has(CharacterControllerComponent.self)
    static let query: EntityQuery = .init(where: predicate)
    let down: SIMD3<Float> = [0, -1, 0]

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        let deltaTime = Float(context.deltaTime)
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            let speed = entity.components[FallComponent.self]?.speed ?? 0.5
            entity.moveCharacter(by: down * speed * deltaTime, deltaTime: deltaTime, relativeTo: nil) { collision in
                if collision.hitEntity == collision.characterEntity {
                    print("hit entity has collided with itself")
                }
                print("\(collision.characterEntity.name) collided with \(collision.hitEntity.name)")
            }
        }
    }
}

#Preview(windowStyle: .volumetric) {
    ContentView()
}
1
0
181
Aug ’25
How to obtain the frame rate for iOS ProMotion devices
With the release of ProMotion devices, the system may switch frame rates in certain scenarios, so data collected through CADisplayLink callbacks under the assumption of a fixed 60Hz frame rate loses its reference value. We cannot distinguish whether a slow CADisplayLink callback is due to a stutter or to a system switch in frame rate. I know about Hitch Time Ratio, but I can't use that scheme for various reasons. How can I distinguish between a stutter and a frame-rate gear shift in the CADisplayLink callback? In iOS 15, CADisplayLink.preferredFrameRateRange.preferred always returns 0, while minimum and maximum do change. Can I use these minimum and maximum range values as criteria to distinguish between frame-rate switching and stuttering?
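A sketch of one heuristic (an assumption, not an official technique): compare the interval the system intended for this frame (targetTimestamp - timestamp) against the interval actually observed since the previous callback. If the observed interval tracks the intended one, the display likely shifted rates; if it overshoots it, you likely hitched. The threshold below is illustrative.

import QuartzCore

final class FrameObserver {
    private var lastTimestamp: CFTimeInterval?

    @objc func step(_ link: CADisplayLink) {
        let intended = link.targetTimestamp - link.timestamp  // the system's planned frame duration
        if let last = lastTimestamp {
            let observed = link.timestamp - last              // what actually elapsed
            if observed > intended * 1.5 {
                print("Likely hitch: \(observed)s observed vs \(intended)s intended")
            } else {
                print("On pace (possibly at a new rate): \(observed)s")
            }
        }
        lastTimestamp = link.timestamp
    }
}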
1
0
158
May ’25
Help Request! How to Render Models with SubMeshes Using Metal 4?
Hi, I'm a beginner with Metal 4 and Model I/O 🥺. I can render simple models with just one mesh, but when I try to render models with submeshes, nothing shows up on screen. Can anyone help me figure out how to properly render models with multiple submeshes? I think I'm not iterating through them correctly, or maybe I'm missing some buffer setup. Here's what I have so far: https://www.icloud.com.cn/iclouddrive/0a6x_NLwlWy-herPocExZ8g3Q#LoadModel
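In case it helps, a sketch of the classic per-submesh draw loop using MetalKit (this is the Metal 3-style encoder API; Metal 4 encodes bindings differently, but the iteration is the same idea). It assumes `meshes` is an [MTKMesh] array loaded via MTKMesh.newMeshes(asset:device:).

import MetalKit

func draw(meshes: [MTKMesh], encoder: MTLRenderCommandEncoder) {
    for mesh in meshes {
        // Bind every vertex buffer laid out by the mesh's vertex descriptor.
        for (index, vertexBuffer) in mesh.vertexBuffers.enumerated() {
            encoder.setVertexBuffer(vertexBuffer.buffer, offset: vertexBuffer.offset, index: index)
        }
        // One draw call per submesh, each using its own index buffer slice.
        for submesh in mesh.submeshes {
            encoder.drawIndexedPrimitives(type: submesh.primitiveType,
                                          indexCount: submesh.indexCount,
                                          indexType: submesh.indexType,
                                          indexBuffer: submesh.indexBuffer.buffer,
                                          indexBufferOffset: submesh.indexBuffer.offset)
        }
    }
}

The common mistake this guards against is issuing a single draw for the whole mesh: each submesh carries its own index range and primitive type, so skipping any of them leaves geometry undrawn.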
1
0
300
Nov ’25
Metal: Intersection results unstable when reusing Instance Acceleration Structures
Hi all, I'm encountering an issue with Metal ray tracing on my M5 MacBook Pro regarding instance acceleration structures (IAS): intersection tests suddenly stop working after a certain point in the sampling loop.

Situation: I implemented an offline GPU path tracer that runs the same kernel multiple times per pixel (sampleCount) using metal::raytracing. Intersection tests are performed using an IAS. Since this is an offline path tracer, the geometry inside the IAS never changes across samples (no transforms or updates). As sampleCount increases, there comes a point where the number of intersections drops to zero and remains zero for all subsequent samples. Here's a code sketch:

let sampleCount: UInt16 = 1024
for sampleIndex: UInt16 in 0..<sampleCount {
    // ...
    do {
        let commandBuffer = commandQueue.makeCommandBuffer()
        // Dispatch the intersection kernel.
        await commandBuffer.completed()
    }
    do {
        let commandBuffer = commandQueue.makeCommandBuffer()
        // Use the intersection test results from the previous command buffer.
        await commandBuffer.completed()
    }
    // ...
}

kernel void intersectAlongRay(
    const metal::uint32_t threadIndex [[thread_position_in_grid]],
    // ...
    const metal::raytracing::instance_acceleration_structure accelerationStructure [[buffer(2)]],
    // ...
) {
    // ...
    const auto result = intersector.intersect(ray, accelerationStructure);
    switch (result.type) {
        case metal::raytracing::intersection_type::triangle: {
            // Write intersection result to device buffers.
            break;
        }
        default:
            break;
    }
}

Observations:
Encoding both the intersection kernel and the subsequent result usage in the same command buffer does not resolve the problem.
Switching from the IAS to a primitive acceleration structure (PAS) fixes the problem.
Rebuilding the IAS for each sample also resolves the issue.
Intersections produce inconsistent results even though the IAS and rays are identical: Image 1 shows a hit, while Image 2 shows a miss.

Questions:
Am I misusing the IAS in some way?
Could this be a Metal bug?
Any guidance or confirmation would be greatly appreciated.
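One avenue worth checking, offered as an assumption rather than a confirmed diagnosis: with an instance acceleration structure, the primitive acceleration structures it references are used indirectly, so they must be marked resident on every encoder that intersects against the IAS. A sketch of that residency call (names are hypothetical):

import Metal

func encodeIntersection(commandBuffer: MTLCommandBuffer,
                        pipeline: MTLComputePipelineState,
                        instanceAS: MTLAccelerationStructure,
                        primitiveStructures: [MTLAccelerationStructure]) {
    let encoder = commandBuffer.makeComputeCommandEncoder()!
    encoder.setComputePipelineState(pipeline)
    encoder.setAccelerationStructure(instanceAS, bufferIndex: 2) // matches [[buffer(2)]]
    for primitiveAS in primitiveStructures {
        encoder.useResource(primitiveAS, usage: .read) // keep indirectly referenced structures resident
    }
    // ... dispatch threadgroups as before ...
    encoder.endEncoding()
}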
1
1
349
Dec ’25
Krazy Krownz multiplayer option not working for Game Center!
When trying to play with friends, Krazy Krownz doesn't allow me to click multiplayer, even though my Apple Game Center account is connected and my friends' Game Center accounts are connected as well. I even tried sending an invite from Apple Game Center to friends, and Krazy Krownz doesn't even show up on the list of available multiplayer games. I've signed out and back in, and the same issue remains. I've tried to contact the game developer, but the website doesn't work.
1
0
308
Jan ’26
GKLeaderboard.LoadLeaderboards() returns empty when testing in Xcode with a local .gamekit file (Unity)
var allLeaderboards = await GKLeaderboard.LoadLeaderboards();
Log(allLeaderboards.Count); // returns 0

What am I missing?

What doesn't work:
await GKGameActivityDefinition.LoadGameActivityDefinitions() → count = 0
await GKLeaderboard.LoadLeaderboards() (no args) → 0 leaderboards
await GKLeaderboard.LoadLeaderboards("MY ID") → returns 0
GkGameActivity.SetScoreOnLeaderboard(Leaderboard, score, context) returns an error, since my Leaderboard is null.
Activities and leaderboards are defined in GameCenterResources.gamekit in Xcode. Achievements that I add locally in the .gamekit file do not appear at runtime either; only App Store Connect live ones show.

What works:
Xcode → Debug → GameKit → Manage Game Progress: I can submit new scores with the plus button and see the notification on my device.
await GKLocalPlayer.Authenticate() succeeds.
await GKAchievement.LoadAchievements() returns the list of achievements configured in App Store Connect, but not any new local achievements created in Xcode in GameCenterResources.gamekit.

Environment:
Device/OS: iPhone on iOS 26.0 beta (Game Center sandbox)
Xcode: 26.0 beta 6
Unity: 2022.3.21
Apple GameKit Unity plugin: 2025-beta1 (GameKit package)
Signing: Game Center capability enabled; using a development provisioning profile
GameKit resources: GameCenterResources.gamekit in the project (Target: Unity-iPhone); appears under Build Phases → Copy Bundle Resources
1
0
772
Sep ’25
RealityKit - How to change the camera target in response to a touch event?
Hello, I’m porting my UIKit/SceneKit app to SwiftUI/RealityKit and I’m wondering how to change the camera target programmatically. I created a simple scene in Reality Composer Pro with two spheres. My goal is straightforward: when the user taps a sphere, the camera should look at it as the main target. Following Apple’s videos, I implemented the .gesture modifier and it is printing the tapped sphere correctly, but updating my targetEntity state doesn’t change anything, so the camera won't update its target. Is there a way to access the scene content at that level? Or what else should I do? Here’s my current code implementation: Thanks!
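A minimal sketch of one approach, assuming the scene uses an explicitly managed camera entity (for example, a PerspectiveCamera added to the RealityView content; names here are illustrative):

import RealityKit

// Point an explicitly managed camera entity at the tapped target.
func retarget(camera: Entity, at target: Entity) {
    camera.look(at: target.position(relativeTo: nil),
                from: camera.position(relativeTo: nil),
                relativeTo: nil)
}

// Inside the tap gesture handler, something like:
// .gesture(TapGesture().targetedToAnyEntity().onEnded { value in
//     retarget(camera: cameraEntity, at: value.entity)
// })

If the scene relies on a system-provided camera rather than an entity you own, there is no mutable target to point, which could explain why updating a targetEntity state alone has no visible effect.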
1
0
329
Sep ’25
Compute kernel fails to compile when calling texture.read()
If I compile a compute kernel with a call to texture.read(), it fails with the following error: "Error Domain=AGXMetalG13X Code=3 "Encountered unlowered function call to air.get_read_sampler" UserInfo={NSLocalizedDescription=Encountered unlowered function call to air.get_read_sampler}". This error occurs on both macOS and iOS 26 Beta 5, but not when running on a simulator or in a playground. It does not occur on a macOS Sequoia VM. It occurs whether I use the old Metal 3 or the new Metal 4 compilation method. A workaround would be to use a sampler, but according to the feature tables, all platforms support reading from textures of all formats. Below is a minimal example which produces the error:

let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()!
let computeFunction = library.makeFunction(name: "compute_test")!
do {
    let pipeline = try device.makeComputePipelineState(function: computeFunction)
    debugPrint(pipeline)
} catch {
    debugPrint("Metal 3 failed with error:\n\(error)")
}

#import <metal_stdlib>
using namespace metal;

kernel void compute_test(uint2 gid [[thread_position_in_grid]],
                         texture2d<float, access::read> in [[texture(0)]],
                         texture2d<float, access::write> out [[texture(1)]]) {
    out.write(in.read(gid), gid);
}

I filed feedback FB19530049.
1
0
219
Aug ’25
How to Configure angularLimitInYZ for PhysicsSphericalJoint in RealityKit (Pendulum/Swing Behavior)
Hello RealityKit developers, I'm currently working on physics simulations in my visionOS app and am trying to adapt the concepts from the official sample Simulating physics joints in your RealityKit app. In the sample, a sphere is connected to the ceiling using a PhysicsRevoluteJoint to create a hinge-like simulation. I've successfully modified this setup to use a PhysicsSphericalJoint instead. The basic replacement works as expected: pin1 (attached to the sphere) rotates freely around pin0 (attached to the ceiling), much like a ball-and-socket joint should, removing all translational degrees of freedom. My challenge lies with the PhysicsSphericalJoint's angularLimitInYZ property. The documentation mentions that this property allows limiting the rotation around the Y and Z axes, defining an "elliptical cone shape around the x-axis of pin0." However, I'm struggling to understand how to specify these values to achieve a desired rotational limit. If I have a sphere that is currently capable of rotating 360 degrees around pin0 (like a free-spinning ball on a string), how would I use angularLimitInYZ to restrict its rotation to a certain height or angular range, preventing it from completing a full circle? Specifically, I'm trying to achieve a swing-like behavior where the sphere oscillates back and forth but cannot rotate completely overhead or underfoot. What values or approach should I use for the angularLimitInYZ tuple to define such a restricted pendulum-like motion? Any insights, code examples, or explanations on how to properly configure angularLimitInYZ for this kind of behavior would be incredibly helpful! The following code is modified from the sample.

extension MainView {
    func addPinsTo(ballEntity: Entity, attachmentEntity: Entity) throws {
        let hingeOrientation = simd_quatf(from: [1, 0, 0], to: [0, 0, 1])
        let attachmentPin = attachmentEntity.pins.set(
            named: "attachment_hinge",
            position: .zero,
            orientation: hingeOrientation
        )
        let relativeJointLocation = attachmentEntity.position(
            relativeTo: ballEntity
        )
        let ballPin = ballEntity.pins.set(
            named: "ball_hinge",
            position: relativeJointLocation,
            orientation: hingeOrientation
        )
        // Create a PhysicsSphericalJoint between the two pins.
        let revoluteJoint = PhysicsSphericalJoint(pin0: attachmentPin, pin1: ballPin)
        try revoluteJoint.addToSimulation()
    }
}

The following image is a screenshot of the operation when changing to PhysicsSphericalJoint. Thank you in advance for your assistance.
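A heavily hedged sketch of one direction to try (the exact initializer and tuple signature should be verified against the current RealityKit documentation; the limit values are illustrative): pass symmetric Y/Z limits so the cone around pin0's x-axis caps the swing below vertical.

// Assumed API shape; verify the parameter label and tuple form before relying on it.
let swingLimit: Float = .pi / 4  // allow roughly ±45 degrees of swing
let sphericalJoint = PhysicsSphericalJoint(
    pin0: attachmentPin,
    pin1: ballPin,
    angularLimitInYZ: (swingLimit, swingLimit)
)
try sphericalJoint.addToSimulation()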
1
0
277
Jul ’25
PhotogrammetrySession fails with internal errors 4011 / 4012 when using iOS Object Capture (Area Mode) images
Hi all, I'm running into an issue when trying to reconstruct a 3D model using PhotogrammetrySession on macOS from a set of images captured via the iOS Object Capture sample app, specifically in Area mode. When I attempt to create the model from these images (using the raw Images/ folder exported directly from the capture session), I get the following errors:

ERROR cv3dapi.pg: Internal error codes (2): 4011 4012
WARN cv3dapi.pg: Internal warning codes (1): 4507
Output error with code = -15
requestError: CoreOC.PhotogrammetrySession.Error.processError

I use the "Images" directory directly exported from Object Capture with my iPhone 12 Pro Max (which has lidar) set to "area mode" in the Object Capture app. Here is example metadata from a HEIC image in the sequence:

heif-info Images/00044.869568833.HEIC
MIME type: image/heic
main brand: heic
compatible brands: mif1, MiHE, MiPr, miaf, MiHB, heic
image: 3024x4032 (id=49), primary
  tiles: 6x8, tile size: 512x512
  colorspace: YCbCr, 4:2:0
  bit depth: 8
thumbnail: 240x320
color profile: nclx
alpha channel: no
depth channel: yes
  size: 192x256
  bits per pixel: 8
  z-near: 1.173828
  z-far: 2.552734
  d-min: undefined
  d-max: undefined
  representation: uniform Z
metadata:
  Exif: 960 bytes
  uri /tag:apple.com,2023:ObjectCapture#CameraTrackingState: 4 bytes
  uri /tag:apple.com,2023:ObjectCapture#CameraCalibrationData: 1015 bytes
  uri /tag:apple.com,2023:ObjectCapture#ObjectTransform: 48 bytes
  uri /tag:apple.com,2023:ObjectCapture#ObjectBoundingBox: 48 bytes
  uri /tag:apple.com,2023:ObjectCapture#RawFeaturePoints: 832 bytes
  uri /tag:apple.com,2023:ObjectCapture#PointCloudData: 23984 bytes
  uri /tag:apple.com,2023:ObjectCapture#BundleVersion: 5 bytes
  uri /tag:apple.com,2023:ObjectCapture#SegmentID: 4 bytes
  uri /tag:apple.com,2024:ObjectCapture#SessionUUID: 36 bytes
  uri /tag:apple.com,2024:ObjectCapture#CaptureMode: 4 bytes
  uri /tag:apple.com,2023:ObjectCapture#Feedback: 4 bytes
  uri /tag:apple.com,2023:ObjectCapture#WideToDepthCameraTransform: 48 bytes
  uri /tag:apple.com,2023:ObjectCapture#TemporalDepthPointClouds: 864026 bytes
transformations:
  angle (ccw): 270
region annotations: none
properties:
  camera intrinsic matrix:
    focal length: 2813.695557; 2813.695557
    principal point: 1522.338502; 2002.843018
    skew: 0.000000
  camera extrinsic matrix:
    rotation matrix:
      -0.695  0.344 -0.632
       0.007 -0.875 -0.483
      -0.719 -0.340  0.606

Questions:
• What do internal error codes 4011 and 4012 refer to?
• Is there something specific about Area mode captures that requires preprocessing before they're compatible with PhotogrammetrySession?
• Has anyone successfully reconstructed a model from an Area mode session using the stock Apple tools?

NOTE: I can provide the folder of images for debugging if that would help!
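For context, a sketch of the baseline macOS reconstruction call being described (paths are hypothetical, and the snippet assumes an async context):

import Foundation
import RealityKit

let inputFolder = URL(fileURLWithPath: "/path/to/Images", isDirectory: true)
let outputFile = URL(fileURLWithPath: "/path/to/model.usdz")

let session = try PhotogrammetrySession(
    input: inputFolder,
    configuration: PhotogrammetrySession.Configuration()
)
try session.process(requests: [.modelFile(url: outputFile, detail: .medium)])

for try await output in session.outputs {
    switch output {
    case .processingComplete:
        print("Reconstruction finished.")
    case .requestError(_, let error):
        print("Request failed: \(error)")  // where the processError above surfaces
    default:
        break
    }
}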
1
2
1k
Jul ’25
How can I assign priorities to my app’s GPU workloads?
My app has a number of heterogeneous GPU workloads that all run concurrently. Some of these should be executed with the highest priority, because the app's responsiveness depends on them, while others are triggered by file imports and the like, which should have a low priority. If this were running on the CPU, I'd assign the former User Interactive QoS and the latter Utility QoS. Is there an equivalent to this for GPU work?
1
0
943
Jan ’26
RealityKit generates an excessive amount of logging
During regular use, RealityKit generates an excessive amount of internal logging that is not actionable by third-party developers. When developing an iOS RealityKit/ARKit app, this makes the Xcode console challenging to use for regular work. (FB19173812) Xcode does have an option for filtering out logging from specific SDKs, but enabling this feature to suppress the logging of RealityKit and related SDKs like PHASE is something developers have to do dozens of times each day. After a year of developing a RealityKit app, this process becomes frustrating. If SDKs like Foundation, UIKit, and SwiftUI generated as much logging as RealityKit and related SDKs, Xcode's console would be unusable. Is there any way to disable the logging of RealityKit and PHASE permanently? Thank you for any help you provide.
1
0
515
Jul ’25
Why is there no Metal on Apple Watch?
As the title asks: why is there no Metal on Apple Watch? And if there is none, how are the beautiful system watch faces made, with smoke effects and other particles?
1
0
323
Oct ’25
Roblox very laggy on iOS 26.1 23b5044i
I play Roblox, and ever since I got this update, the graphics quality, stability, Internet ping, and a lot of other things have been drastically worse.
1
0
784
Oct ’25