How can one match the walls and floor of a given CapturedRoom?
The transform.eulerAngles of a floor are always 0 for z and y!
And the polygons seem to have a different orientation than the walls.
So how do I figure out the rotation and match it to the walls?
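For illustration, here is a minimal Swift sketch of one way to compare orientations: instead of relying on eulerAngles, read the basis vectors out of each surface's 4x4 transform and derive a heading around the world Y axis. It assumes the iOS 17 floors array on CapturedRoom and is a sketch, not a confirmed answer.

import RoomPlan
import simd

// Extract a heading (rotation about the world Y axis) from a surface transform
// so floor and wall orientations can be compared directly.
func heading(of surface: CapturedRoom.Surface) -> Float {
    let t = surface.transform
    // Column 0 is the surface's local X axis expressed in world space.
    let xAxis = simd_float3(t.columns.0.x, t.columns.0.y, t.columns.0.z)
    // Project onto the XZ plane and measure the angle around Y.
    return atan2(xAxis.z, xAxis.x)
}

func compareFloorToWalls(in room: CapturedRoom) {
    for floor in room.floors {
        let floorHeading = heading(of: floor)
        for wall in room.walls {
            let wallHeading = heading(of: wall)
            // Wrap the difference into (-pi, pi] before comparing.
            let delta = abs(atan2(sin(floorHeading - wallHeading),
                                  cos(floorHeading - wallHeading)))
            print("floor vs wall heading delta: \(delta) rad")
        }
    }
}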
Hello Apple, through this message I want to draw your attention to some problems with GPTK and Rosetta. Some games, like Marvel's Spider-Man 2, have broken animations and T-pose issues, and others, like Uncharted and The Last of Us, have severe memory leak issues. So it's my request: please fix it ASAP.
I have two devices (iPod, iPhone), each using a different Apple ID. I have an existing game to which I'm adding TBM. When the iPod invites the iPhone, it sends an iMessage invite to the iPhone; when I click on that message, I get "Retrieving", then Game Center in Settings is opened, not my App (same version installed on both devices). I start my App on the iPhone and that match is not shown in the Matchmaker View Controller.
When I send an invite from the iPhone to the iPod and I click on the iMessage invite, the app starts but the match isn't listed in the MatchMaker ViewController on the iPod (but is on the iPhone).
In addition, when I click on the info circle on the iPhone, it shows the two players and "App Store" under the Game Center name. However, when I do the same on the iPod, it has a "Play your turn" there.
Any ideas?
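For anyone comparing setups, this is roughly the listener registration that the invite and turn flow depends on. It is a minimal, hypothetical sketch (class names are illustrative), not a claim about what is missing in the project.

import GameKit

// Hypothetical minimal listener; fires when the local player accepts an invite
// or a turn becomes active for a match.
final class TurnEventListener: NSObject, GKLocalPlayerListener {
    func player(_ player: GKPlayer,
                receivedTurnEventFor match: GKTurnBasedMatch,
                didBecomeActive: Bool) {
        print("Turn event for \(match.matchID), didBecomeActive: \(didBecomeActive)")
    }
}

final class GameCenterBootstrap {
    // Kept alive for the app's lifetime.
    private let listener = TurnEventListener()

    func start() {
        GKLocalPlayer.local.authenticateHandler = { [weak self] _, error in
            guard let self, GKLocalPlayer.local.isAuthenticated else { return }
            GKLocalPlayer.local.register(self.listener)
        }
    }
}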
I have a visionOS app that I'm adding iOS support to, and I would like to keep using RealityView.
I know there are the following modifiers to add some navigation
.realityViewCameraControls(.orbit)
.realityViewCameraControls(.dolly)
.realityViewCameraControls(.pan)
But how can I add more than one? For example, I would like to orbit with one finger, pan with two fingers, and dolly by pinching. Is this possible, and if so, can someone share some sample code on how to achieve that?
Thanks,
Guillermo
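One workable alternative while that question is open: skip the built-in controls and drive a PerspectiveCamera entity from your own SwiftUI gestures. The sketch below is assumption-laden (the .virtual camera mode and the gesture-to-motion mapping are my choices, not an official recipe), and a true two-finger pan would likely still need a UIGestureRecognizer bridge, since SwiftUI's DragGesture doesn't expose touch count.

import SwiftUI
import RealityKit

// A sketch of custom camera navigation for RealityView on iOS.
struct CustomCameraView: View {
    @State private var yaw: Float = 0
    @State private var pitch: Float = 0.3
    @State private var distance: Float = 2.0
    @State private var camera = PerspectiveCamera()

    var body: some View {
        RealityView { content in
            content.camera = .virtual          // request the non-AR camera (assumption)
            content.add(camera)
            updateCamera()
        } update: { _ in
            updateCamera()                     // re-run when yaw/pitch/distance change
        }
        // One-finger drag: orbit around the origin.
        .gesture(DragGesture().onChanged { value in
            yaw -= Float(value.translation.width) * 0.005
            pitch -= Float(value.translation.height) * 0.005
        })
        // Pinch: dolly in and out.
        .gesture(MagnifyGesture().onChanged { value in
            distance = min(10, max(0.5, 2.0 / Float(value.magnification)))
        })
    }

    private func updateCamera() {
        let rotation = simd_quatf(angle: yaw, axis: [0, 1, 0]) *
                       simd_quatf(angle: pitch, axis: [1, 0, 0])
        let position = rotation.act(SIMD3<Float>(0, 0, distance))
        camera.look(at: .zero, from: position, relativeTo: nil)
    }
}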
Hi all,
I've developed some code that enables an arcball camera interaction with my scene. I've done this using components and systems. The implementation feels a bit messy, as I've got gesture code on my RealityView and then a bunch of other code that uses those gesture inputs in my component and system.
Is there a demo app, or some example code, that shows a nice way to encapsulate these things into one item for custom cameras, something like Apple's .realityViewCameraControls(.orbit)?
If not, can anyone recommend an approach to take?
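For comparison, here is one way the split is commonly structured so the gesture code only writes input into a component and a System consumes it each frame. Everything below is an illustrative sketch with made-up names, not a recommended Apple pattern.

import RealityKit
import simd

// Gesture handlers write deltas into this component; nothing else.
struct ArcballCameraComponent: Component {
    var target: SIMD3<Float> = .zero
    var distance: Float = 2
    var yaw: Float = 0
    var pitch: Float = 0.3
    var pendingDelta: SIMD2<Float> = .zero   // written by gestures, consumed by the system
}

struct ArcballCameraSystem: System {
    static let query = EntityQuery(where: .has(ArcballCameraComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard var cam = entity.components[ArcballCameraComponent.self] else { continue }
            cam.yaw -= cam.pendingDelta.x * 0.005
            cam.pitch -= cam.pendingDelta.y * 0.005
            cam.pendingDelta = .zero
            let rotation = simd_quatf(angle: cam.yaw, axis: [0, 1, 0]) *
                           simd_quatf(angle: cam.pitch, axis: [1, 0, 0])
            let position = cam.target + rotation.act(SIMD3<Float>(0, 0, cam.distance))
            entity.look(at: cam.target, from: position, relativeTo: nil)
            entity.components.set(cam)
        }
    }
}

// Registered once at app startup:
// ArcballCameraComponent.registerComponent()
// ArcballCameraSystem.registerSystem()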
We are seeing crashes in Xcode organizer. So far we are not able to reproduce them locally. They affect multiple app releases (some older, built with Xcode 15.x and newer built with Xcode 16.0). They only affect iOS 18.5.
Is there anything that changed in the latest iOS? It's hard to tell what exactly is causing this crash, because setting a symbolic breakpoint on CA::Render::Image::new_image(unsigned int, unsigned int, unsigned int, unsigned int, CGColorSpace*, void const*, unsigned long const*, void (*)(void const*, void*), void*) triggers the breakpoint all the time, but not necessarily with the previous stack frames exactly matching the crash report.
Is it a known issue?
crash.crash
Thank you.
The following minimal snippet segfaults with SDK 26.0 and 26.1. It won't crash if I remove async from the enclosing function signature, but that's impractical in a real project.
import Metal
import MetalPerformanceShaders

let SEED = UInt64(0x0)
typealias T = Float16

/* Why ran in async context? Because global GPU object,
   and async makeMTLFunction,
   and async makeMTLComputePipelineState.
   Nevertheless, can trigger the bug without using global
@MainActor
let myGPU = MyGPU()
*/

@main
struct CMDLine {
    static func main() async {
        let ptr = UnsafeMutablePointer<T>.allocate(capacity: 0)
        async let future: Void = randomFillOnGPU(ptr, count: 0)
        print("Main thread is playing around")
        await future
        print("Successfully reached the end.")
    }

    static func randomFillOnGPU(_ buf: UnsafeMutablePointer<T>, count destbufcount: Int) async {
        // let (device, queue) = await (myGPU.device, myGPU.commandqueue)
        let myGPU = MyGPU()
        let (device, queue) = (myGPU.device, myGPU.commandqueue)
        // Init MTLBuffer, async let makeFunction, makeComputePipelineState, etc.
        let tempDataType = MPSDataType.uInt32
        let randfiller = MPSMatrixRandomMTGP32(device: device, destinationDataType: tempDataType, seed: Int(bitPattern: UInt(SEED)))
        print("randomFillOnGPU: successfully created MPSMatrixRandom.")
        // try await computePipelineState
        // ^ Crashes before this could return
        // Or in this minimal case, after randomFillOnGPU() returns
        // make encoder, set pso, dispatch, commit...
    }
}

actor MyGPU {
    let device: MTLDevice
    let commandqueue: MTLCommandQueue

    init() {
        guard let dev: MTLDevice = MPSGetPreferredDevice(.skipRemovable),
              let cq = dev.makeCommandQueue(),
              dev.supportsFamily(.apple6) || dev.supportsFamily(.mac2)
        else { print("Unable to get Metal Device! Exiting"); exit(EX_UNAVAILABLE) }
        print("Selected device: \(String(format: "%llX", dev.registryID))")
        self.device = dev
        self.commandqueue = cq
        print("myGPU: initialization complete.")
    }
}
See FB20916929. Apparently the Objective-C autorelease pool is releasing the wrong address during a context switch (across suspension points). I wonder why such an obvious case has not been caught before.
After watching the WWDC 2025 session "Combine Metal 4 machine learning and graphics", I decided to give it a shot and integrate the latest MTL4MachineLearningCommandEncoder into my existing render pipeline. After a lot of trial and error, I managed to set up the pipeline and get the app to compile.
However, I am now stuck on creating a MTLLibrary with .mtlpackage.
Here is the code I have to create an MTLLibrary, according to the WWDC session https://developer.apple.com/videos/play/wwdc2025/262/?time=550:
let coreMLFilePath = bundle.path(forResource: "my_model", ofType: "mtlpackage")!
let coreMLURL = URL(string: coreMLFilePath)!
do {
    let library = try metalDevice.makeLibrary(URL: coreMLURL)
} catch {
    print("error: \(error)")
}
With the above code, I am getting error:
Error Domain=MTLLibraryErrorDomain Code=1 "Invalid metal package" UserInfo={NSLocalizedDescription=Invalid metal package}
What is the correct way to create an MTLLibrary from a .mtlpackage? Do I see this error because the .mtlpackage I am using is incorrect? How should I go about debugging this?
I'd really appreciate if I could get some help on this as I have been stuck with it for some time now. Thanks in advance!
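One thing worth double-checking, though it may not be the cause of the "Invalid metal package" error: URL(string:) with a bare file path does not produce a file URL. A sketch of the file-URL variant, assuming my_model.mtlpackage really is in the app bundle:

import Foundation
import Metal

// A sketch only; whether this resolves the error is not confirmed.
func loadMLLibrary(device: MTLDevice) -> MTLLibrary? {
    // Build a proper file URL instead of URL(string:) on a plain path.
    guard let packageURL = Bundle.main.url(forResource: "my_model", withExtension: "mtlpackage") else {
        print("my_model.mtlpackage not found in the bundle")
        return nil
    }
    do {
        return try device.makeLibrary(URL: packageURL)
    } catch {
        print("makeLibrary failed: \(error)")
        return nil
    }
}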
Hi everyone,
I'm using the Vision framework’s ImageAestheticsScoresObservation class (https://developer.apple.com/documentation/vision/imageaestheticsscoresobservation).
I noticed that the overallScore returned sometimes gives negative values. Could someone confirm whether the expected range of the score is from -1.0 to 1.0?
The documentation doesn’t explicitly state the possible score range, so I’d appreciate any clarification or insights.
Thanks in advance!
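For context, this is roughly how the score is being read. A minimal sketch, assuming the Swift-only Vision API that ImageAestheticsScoresObservation belongs to and its URL-based perform overload (both of which are assumptions here).

import Foundation
import Vision

// A sketch only; the open question is the documented range of overallScore.
func aestheticsScore(for imageURL: URL) async throws -> Float {
    let request = CalculateImageAestheticsScoresRequest()
    let observation = try await request.perform(on: imageURL)
    return observation.overallScore   // observed to be negative sometimes
}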
In the process of using ARKit's image tracking, we found that different images differ significantly in how reliably they are recognized. For this situation, how can we judge the quality of an image for ARKit image tracking?
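One concrete check that may help: ARKit can validate a candidate reference image at runtime and report problems such as insufficient detail. A short sketch (the input names are illustrative); Xcode's AR Resource Group asset catalog shows similar quality warnings at build time.

import ARKit

// Ask ARKit itself whether a candidate image is suitable for tracking.
func checkTrackability(cgImage: CGImage, physicalWidthMeters: CGFloat) {
    let reference = ARReferenceImage(cgImage, orientation: .up, physicalWidth: physicalWidthMeters)
    reference.validate { error in
        if let error {
            print("Image may track poorly: \(error.localizedDescription)")
        } else {
            print("Image passed ARKit's validation checks.")
        }
    }
}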
Hi. I'm a 3D designer, using Blender for most of my work. The most recent Blender conference discussed utilizing the Open Shading Language (OSL) in their latest versions, which allows designers to write custom shaders for their workflows.
At the moment, only NVIDIA GPUs (via OptiX) can utilize this language for rendering (from what I understand), but the Blender developers stated they are waiting on other GPU manufacturers to implement this feature as well. I'm not sure if there are any licensing issues here, but would this be something Apple could implement in Metal to make their hardware more attractive to the 3D design community?
Any help or knowledge on this topic would be greatly appreciated.
Hi everyone,
I’ve been developing a custom, end-to-end 3D rendering engine called Crescent from scratch using C++20 and Metal-cpp (targeting macOS and visionOS). My primary goal is to build a zero-bottleneck, GPU-driven pipeline that maximizes the potential of Apple Silicon’s Unified Memory and TBDR architecture.
While the fundamental systems are stable, I am looking for architectural feedback from Metal framework engineers regarding specific synchronization and latency challenges.
Current Core Implementations:
GPU-Driven Instance Culling: High-performance occlusion culling using a Hierarchical Z-Buffer (HZB) approach via Compute Shaders.
Clustered Forward Shading: Support for high-count dynamic lights through view-space clustering.
Temporal Stability: Custom TAA with history rejection and Motion Blur resolve.
Asset Infrastructure: Robust GUID-based scene serialization and a JSON-driven ECS hierarchy.
The Architectural Challenge:
I am currently seeing slight synchronization overhead when generating the HZB mip-chain. On Apple Silicon, I am evaluating the cost of encoder transitions versus cache-friendly barriers.
    && m_hzbInitPipeline && m_hzbDownsamplePipeline && !m_hzbMipViews.empty();
if (canBuildHzb) {
    MTL::ComputeCommandEncoder* hzbInit = commandBuffer->computeCommandEncoder();
    hzbInit->setComputePipelineState(m_hzbInitPipeline);
    hzbInit->setTexture(m_depthTexture, 0);
    hzbInit->setTexture(m_hzbMipViews[0], 1);
    if (m_pointClampSampler) {
        hzbInit->setSamplerState(m_pointClampSampler, 0);
    } else if (m_linearClampSampler) {
        hzbInit->setSamplerState(m_linearClampSampler, 0);
    }
    const uint32_t hzbWidth = m_hzbMipViews[0]->width();
    const uint32_t hzbHeight = m_hzbMipViews[0]->height();
    const uint32_t threads = 8;
    MTL::Size tgSize = MTL::Size(threads, threads, 1);
    MTL::Size gridSize = MTL::Size((hzbWidth + threads - 1) / threads * threads,
                                   (hzbHeight + threads - 1) / threads * threads,
                                   1);
    hzbInit->dispatchThreads(gridSize, tgSize);
    hzbInit->endEncoding();

    for (size_t mip = 1; mip < m_hzbMipViews.size(); ++mip) {
        MTL::Texture* src = m_hzbMipViews[mip - 1];
        MTL::Texture* dst = m_hzbMipViews[mip];
        if (!src || !dst) {
            continue;
        }
        MTL::ComputeCommandEncoder* downEncoder = commandBuffer->computeCommandEncoder();
        downEncoder->setComputePipelineState(m_hzbDownsamplePipeline);
        downEncoder->setTexture(src, 0);
        downEncoder->setTexture(dst, 1);
        const uint32_t mipWidth = dst->width();
        const uint32_t mipHeight = dst->height();
        MTL::Size downGrid = MTL::Size((mipWidth + threads - 1) / threads * threads,
                                       (mipHeight + threads - 1) / threads * threads,
                                       1);
        downEncoder->dispatchThreads(downGrid, tgSize);
        downEncoder->endEncoding();
    }

    if (m_instanceCullHzbPipeline) {
        dispatchInstanceCulling(m_instanceCullHzbPipeline, true);
    }
}
My Questions:
Encoder Synchronization: Would you recommend moving this loop into a single ComputeCommandEncoder using MTLBarrier between dispatches to maintain L2 cache residency, or is the overhead of separate encoders negligible for depth-downsampling on TBDR?
visionOS Bindless Latency: For stereo rendering on visionOS, what are the best practices for managing MTL4ArgumentTable updates at 90Hz+? I want to ensure that updating bindless resources for each eye doesn't introduce unnecessary CPU-to-GPU latency.
Memory Management: Are there specific hints for Memoryless textures that could be applied to intermediate HZB levels to save bandwidth during this process?
I’ve attached a screenshot of a scene rendered with the engine (PBR, SSR, and IBL).
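On the first question (encoder synchronization), for reference this is the single-encoder structure being weighed, written here as a Swift sketch with placeholder names; whether it actually wins over separate encoders on TBDR is exactly what's being asked.

import Metal

// One compute encoder for the whole HZB mip chain, with a texture-scope
// memory barrier between dispatches instead of ending the encoder each time.
func encodeHZBDownsample(commandBuffer: MTLCommandBuffer,
                         downsamplePipeline: MTLComputePipelineState,
                         mipViews: [MTLTexture]) {
    guard let encoder = commandBuffer.makeComputeCommandEncoder() else { return }
    encoder.setComputePipelineState(downsamplePipeline)
    let tgSize = MTLSize(width: 8, height: 8, depth: 1)

    for mip in 1..<mipViews.count {
        encoder.setTexture(mipViews[mip - 1], index: 0)
        encoder.setTexture(mipViews[mip], index: 1)
        let grid = MTLSize(width: mipViews[mip].width, height: mipViews[mip].height, depth: 1)
        encoder.dispatchThreads(grid, threadsPerThreadgroup: tgSize)
        // Make the just-written mip visible to the next dispatch's reads.
        encoder.memoryBarrier(scope: .textures)
    }
    encoder.endEncoding()
}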
Can the App Store app "浮遊時計 Premium" (Floating Clock Premium) measure the refresh rate at 1 Hz or 10 Hz intervals, or at three or more levels?
I have an odd bug: if I use initWithFrame as the init routine for an NSView subclass that uses layers, I don't see this bug.
But if I embed this view in a storyboard with a .nib file and use initWithCoder, I need to return true from
(BOOL) contentsAreFlipped
in the NSView subclass.
If I don't, the CALayer actually renders from 0,0 of the view upwards and off the window.
The frame sizes for the NSView and the CALayer are good when I see them in updateLayer.
Obviously I have a fix, but I would like to understand why.
I use Unity 2020.3.48f1 to develop a game. To implement Apple services integration, I use the Apple Unity plug-ins (https://github.com/apple/unityplugins). With the latest version of the plug-ins, I get an error in the Unity project after importing the plug-in; it says "Not allowed platform VisionOS". When I tried an older version of the plug-ins, I got a runtime error when calling "var fetchItemsResponse = await GKLocalPlayer.Local.FetchItems();" on line 42: it drops an EXC_BAD_ACCESS (code=257, address=0x0000...) error. I tried different commits from the official repository and even custom branches of the Apple Unity plug-ins, such as https://github.com/muZZkat/unityplugins/tree/muzzkat/fix-fetch-items, but it did not help.
Here is the whole script that tries to use the Apple Unity plug-ins:
using System.Threading.Tasks;
using UnityEngine;
using System.Collections;
using System;
using Apple.GameKit;
using UnityEngine.UI;

public class TheScript : MonoBehaviour
{
    [SerializeField]
    InputField otp;

    string Signature;
    string TeamPlayerID;
    string Salt;
    string PublicKeyUrl;
    string Timestamp;

    void Start()
    {
        StartCoroutine(Call());
    }

    private IEnumerator Call()
    {
        yield return new WaitForSeconds(5);
        Login();
    }

    public async Task Login()
    {
        otp.text += $"Loginig... ";
        if (!Apple.GameKit.GKLocalPlayer.Local.IsAuthenticated)
        {
            try
            {
                var player = await GKLocalPlayer.Authenticate();
                var localPlayer = GKLocalPlayer.Local;
                TeamPlayerID = localPlayer.TeamPlayerId;

                var fetchItemsResponse = await GKLocalPlayer.Local.FetchItems();
                Signature = Convert.ToBase64String(fetchItemsResponse.GetSignature());
                PublicKeyUrl = fetchItemsResponse.PublicKeyUrl;

                otp.text += $"Team Player ID: {TeamPlayerID} ";
                otp.text += $"PublicKeyUrl: {PublicKeyUrl} ";
            }
            catch (Exception e)
            {
                otp.text += $"Error: " + e.Message;
            }
        }
        else
        {
            Debug.Log("AppleGameCenter player already logged in.");
        }
    }

    async Task SignInWithAppleGameCenterAsync(string signature, string teamPlayerId, string publicKeyURL, string salt, ulong timestamp)
    {
    }
}
Hey, I'm using the CIDepthBlurEffect Core Image filter in my app. It seems to work OK, but I get these errors in the console when calling the filter:
CoreImage Metal library does not contain function for name: sparserendering_xhlrb_scan
CoreImage Metal library does not contain function for name: sparserendering_xhlrb_diffuse
CoreImage Metal library does not contain function for name: sparserendering_xhlrb_copy_back
CoreImage Metal library does not contain function for name: plain_or_sRGB_copy
Am I missing some sort of import to gain these Metal functions? I am using my own custom shaders, but I assume you'd be able to use them alongside the built-in ones.
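For reference, this is roughly how the filter is being set up. A minimal sketch, where the parameter keys reflect my understanding of CIDepthBlurEffect's inputs (treat them as assumptions); whether the console messages are related to this setup at all is unclear.

import CoreImage

// A plain CIDepthBlurEffect setup, no custom shaders involved.
func depthBlur(image: CIImage, disparity: CIImage) -> CIImage? {
    guard let filter = CIFilter(name: "CIDepthBlurEffect") else { return nil }
    filter.setValue(image, forKey: kCIInputImageKey)
    filter.setValue(disparity, forKey: "inputDisparityImage")
    filter.setValue(6.0, forKey: "inputAperture")   // f-stop style depth-of-field control
    return filter.outputImage
}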
Hello!
I have a question about how thread groups work with tile shading. When running "traditional" compute, I get to choose both the thread group size and the grid size. However, when using a tile shading kernel I only have the dispatchThreadsPerTile method; this controls how many threads will be run in each tile. So far so good, but what about thread groups?
The examples in video "Tile Shading on A11" seem to suggest that there will be only one thread group per tile. In the video, [[thread_index_in_threadgroup]] is called "local_id" and it is used to access the image block.
I assume this is the default configuration. So when one does the following:
Creates MTLRenderPassDescriptor with tileWidth set to W and tileHeight set to H
Fires up the tile shading kernel using dispatchThreadsPerTile with MTLSize size = { W, H, 1 }
I understand that the result is 1-to-1 mapping between the tile "pixels" and kernel threads. Now, what I would like to do is to have more than one thread group there. I want this for performance reasons: I have a certain compute kernel which I know executes very well with small thread group size. In fact, { 32, 1, 1 } seems to be the fastest. My understanding is that even if I set tile size to 16x16, and so I am executing 256 threads there, there will only be one SIMD group active in a thread group. Meaning that this SIMD group has to execute 8 times over the tile.
Is it possible somehow? Or perhaps the limitations of the API are pointing at the limitations of hardware itself, and if I want to execute with SIMD group sized thread groups I have to use "traditional" compute encoder?
Will be grateful for help.
Michał
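For reference, the setup described above in code form: a Swift sketch with placeholder pipeline and texture names, showing a 16x16 tile size and one kernel thread per tile pixel. Whether more than one threadgroup can be made to run per tile is exactly the open question.

import Metal

// Tile dispatch matching the description above: W x H tile, W x H threads per tile.
func encodeTilePass(commandBuffer: MTLCommandBuffer,
                    tilePipeline: MTLRenderPipelineState,
                    colorTarget: MTLTexture) {
    let pass = MTLRenderPassDescriptor()
    pass.colorAttachments[0].texture = colorTarget
    pass.colorAttachments[0].loadAction = .clear
    pass.colorAttachments[0].storeAction = .store
    pass.tileWidth = 16
    pass.tileHeight = 16
    // Image block storage declared by the tile function, if any.
    pass.imageblockSampleLength = tilePipeline.imageblockSampleLength

    guard let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: pass) else { return }
    encoder.setRenderPipelineState(tilePipeline)
    // One thread per tile pixel; the threadgroup shape is not chosen explicitly here.
    encoder.dispatchThreadsPerTile(MTLSize(width: pass.tileWidth, height: pass.tileHeight, depth: 1))
    encoder.endEncoding()
}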
Hi everyone,
I’m currently learning about ParticleEmitterComponent and exploring the sample app provided in the Simulating particles in your visionOS app documentation.
In the sample app, when I set the EmitterPreset to fireworks from the settings panel on the left side of the window and choose SystemImage, I noticed two issues:
The image applied to mainEmitter appears clipped or cropped.
The image on spawnedEmitter does not update to the selected SystemImage.
What I want to achieve:
Apply the same SystemImage to both mainEmitter and spawnedEmitter so that it displays correctly without clipping.
Remove the animation that changes the size of spawnedEmitter over time and keep it at a constant size.
Could someone explain which properties should be adjusted to achieve this behavior? Any guidance or examples would be greatly appreciated.
Thanks in advance!
After updating iPad/iPhone devices from iOS 18 to iOS 26, PhotogrammetrySession intermittently crashes during photogrammetry processing. The same workflow was stable on iOS 18 with no code changes to the app.
Environment:
OS versions: Works on OS 18, crashes on OS 26
Device: iPad/iPhone (reproducible across devices)
Source images: ~170-200 JPG files at 2160 x 3840 resolution
Reproduction:
The crash occurs consistently on the second or third sequential run of the photogrammetry session with the same image set. First run typically succeeds.
Crash details:
Xcode shows an uncaught exception during image processing:
terminating due to uncaught exception of type std::bad_alloc: std::bad_alloc
VTPixelTransferSession 420f sid 269 (2160.00 x 3840.00) [0.00 0.00 2160 3840]
rowbytes( 2160, 2160 ) Color( (null), 0x0, (null), (null), ITU_R_601_4 )
=> 24 sid 19 (2160.00 x 3840.00) [0.00 0.00 2160 3840] rowbytes( 6528 )
Color( 0x0, (null), (null), (null) )
This appears to be a memory allocation failure in VTPixelTransferSession during color space conversion. Has anyone else experienced similar crashes with CorePhotogrammetry on iOS 26, or found workarounds?
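For context, this is roughly how each run is structured, with the session scoped to a single function so it (and its buffers) can be released between sequential runs. Treat that scoping as an untested mitigation idea rather than a fix; the paths and detail level below are illustrative.

import RealityKit

// One photogrammetry run, fully scoped so the session can be deallocated afterwards.
func runPhotogrammetry(imagesFolder: URL, output: URL) async throws {
    let configuration = PhotogrammetrySession.Configuration()
    let session = try PhotogrammetrySession(input: imagesFolder, configuration: configuration)
    try session.process(requests: [.modelFile(url: output, detail: .reduced)])

    for try await result in session.outputs {
        switch result {
        case .processingComplete:
            print("Run finished: \(output.lastPathComponent)")
        case .requestError(_, let error):
            print("Request failed: \(error)")
        default:
            break
        }
    }
}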
It's a Broadcast Extension issue: on the iOS 26.1 beta, the extension never launches. After you tap "Start Broadcast" in the system picker, the countdown disappears after 3 s and no broadcast starts, so every live-streaming app (and all other non-system apps that use a Broadcast Extension) fails to go live; only the native screen recording to Photos still works. Is this a known regression, or is a new entitlement required?