After watching the WWDC 2025 session "Combine Metal 4 machine learning and graphics", I decided to try integrating the new MTL4MachineLearningCommandEncoder into my existing render pipeline. After a lot of trial and error, I managed to set up the pipeline and get the app to compile.
However, I am now stuck on creating a MTLLibrary with .mtlpackage.
Here is the code I have to create a MTLLibrary, following the WWDC session https://developer.apple.com/videos/play/wwdc2025/262/?time=550:
let coreMLFilePath = bundle.path(forResource: "my_model", ofType: "mtlpackage")!
let coreMLURL = URL(string: coreMLFilePath)!
do {
    let library = try metalDevice.makeLibrary(URL: coreMLURL)
} catch {
    print("error: \(error)")
}
With the above code, I am getting the following error:
Error Domain=MTLLibraryErrorDomain Code=1 "Invalid metal package" UserInfo={NSLocalizedDescription=Invalid metal package}
What is the correct way to create a MTLLibrary from a .mtlpackage? Am I seeing this error because the .mtlpackage I am using is invalid? How should I go about debugging this?
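For reference, here is a variant I plan to try next, in case the issue is how the URL is built rather than the package itself (just a sketch, not a confirmed fix):
```
import Metal

// Sketch: build a file URL (rather than URL(string:)) and load the package as a library.
// "my_model" is the package name from my project; adjust for your bundle.
if let packageURL = Bundle.main.url(forResource: "my_model", withExtension: "mtlpackage"),
   let metalDevice = MTLCreateSystemDefaultDevice() {
    do {
        let library = try metalDevice.makeLibrary(URL: packageURL)
        print("functions: \(library.functionNames)")
    } catch {
        print("error: \(error)")
    }
}
```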
I'd really appreciate it if I could get some help on this, as I have been stuck on it for some time now. Thanks in advance!
Deterministic RNG behaviour across Mac M1 CPU and Metal GPU – BigCrush pass & structural diagnostics
Hello,
I am currently working on a research project under ENINCA Consulting, focused on advanced diagnostic tools for pseudorandom number generators (structural metrics, multi-seed stability, cross-architecture reproducibility, and complementary indicators to TestU01).
To validate this diagnostic framework, I prototyped a small non-linear 64-bit PRNG (not as a goal in itself, but simply as a vehicle to test the methodology).
During these evaluations, I observed something interesting on Apple Silicon (Mac M1):
• bit-exact reproducibility between M1 ARM CPU and M1 Metal GPU,
• full BigCrush pass on both CPU and Metal backends,
• excellent p-values,
• stable behaviour across multiple seeds and runs.
This was not the intended objective (the goal was mainly to validate the diagnostic concepts), but these results raised some questions about deterministic compute behaviour in Metal.
My question: Is there any official guidance on achieving (or expecting) deterministic RNG or compute behaviour across CPU ↔ Metal GPU on Apple Silicon? More specifically:
• Are deterministic compute kernels expected or guaranteed on Metal for scientific workloads?
• Are there recommended patterns or best practices to ensure reproducibility across GPU generations (M1 → M2 → M3 → M4)?
• Are there known Metal features that can introduce non-determinism?
I am not sharing the internal recurrence (this work is proprietary), but I can discuss the high-level diagnostic observations if helpful.
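For illustration only (this is not the proprietary recurrence), a splitmix64-style step shows the kind of integer-only arithmetic that tends to reproduce bit-exactly, since it uses nothing but 64-bit adds, shifts, XORs, and multiplies; the equivalent Metal kernel would use ulong with the same operators:
```
// splitmix64 (public domain, Sebastiano Vigna): integer-only, so CPU and GPU
// implementations should agree bit for bit. &+ / &* are Swift's wrapping operators,
// matching the modulo-2^64 behaviour of ulong arithmetic in Metal Shading Language.
func splitmix64(_ state: inout UInt64) -> UInt64 {
    state &+= 0x9E37_79B9_7F4A_7C15
    var z = state
    z = (z ^ (z >> 30)) &* 0xBF58_476D_1CE4_E5B9
    z = (z ^ (z >> 27)) &* 0x94D0_49BB_1331_11EB
    return z ^ (z >> 31)
}
```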
Thank you for any insight, very interested in how the Metal engineering team views deterministic compute patterns on Apple Silicon.
Pascal ENINCA Consulting
Topic:
Graphics & Games
SubTopic:
Metal
Tags:
ML Compute
Metal
Metal Performance Shaders
Apple Silicon
We want to integrate Game Center into a game app. Users can create multiple characters in the game. Suppose a user creates two characters, Character 1 and Character 2:
After the user binds Character 1 to Game Center, its data is reported to Game Center. The player then wants to unbind Character 1 from Game Center and, after unbinding, bind Character 2 instead. At that point, will Character 1's data remain in Game Center, or will it be removed?
Hi all,
I have been trying to get AssistiveTouch's Snap to Item to work for a Unity game built using Apple's Core & Accessibility API. Switch Control recognises these buttons; however, eye tracking will not snap to them. The case in which it needs to snap is when an external eye-tracking device is connected and uses AssistiveTouch and AssistiveTouch's Snap to Item.
All buttons in the game have an AccessibilityNode with the trait 'Button' and an appropriate label, which, following the documentation and comments on the developer forum, should allow them to be recognised by Snap to Item.
This is not the case: devices (iPads and iPhones) do not recognise the buttons as snap targets.
Does anyone know why this is the case, and if this is a bug?
I'm trying to add support for the PS5 DualSense controller.
When I try to use the API from here:
https://developer.apple.com/documentation/gamecontroller/gcdualsenseadaptivetrigger?language=objc
none of it works. Have I missed anything?
The code is like this:
if ( [ controller.extendedGamepad isKindOfClass:[ GCDualSenseGamepad class ] ] )
{
GCDualSenseGamepad * dualSenseGamePad = ( GCDualSenseGamepad * )controller.extendedGamepad;
auto funcSetEffectTrigger = []( TriggerEffectParams& params, GCDualSenseAdaptiveTrigger *trigger ) {
if ( params.m_mode == TriggerEffectMode::Off )
{
[ trigger setModeOff ];
NSLog(@"setModeOff trigger.mode:%d", trigger.mode );
}
else if ( params.m_mode == TriggerEffectMode::Feedback )
{
[ trigger setModeFeedbackWithStartPosition: 0.2f resistiveStrength: 0.5f ];
}
else if ( params.m_mode == TriggerEffectMode::Weapon )
{
[ trigger setModeWeaponWithStartPosition: 0.2f endPosition: 0.4f resistiveStrength: 0.5f ];
}
else if ( params.m_mode == TriggerEffectMode::Vibration )
{
[ trigger setModeVibrationWithStartPosition: position amplitude: amplitude frequency: frequency ];
}
};
if ( L2 )
{
funcSetEffectTrigger( params, dualSenseGamePad.leftTrigger );
}
if ( R2 )
{
funcSetEffectTrigger( params, dualSenseGamePad.rightTrigger );
}
}
I've also tried adding the "Game Controllers" capability to the target, but it's still not working.
I can't find anything else in the documentation or the forums.
I have no idea what I need to do.
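For comparison, here is the Swift equivalent I'd expect to work, based only on the GCDualSenseAdaptiveTrigger documentation (positions and strengths are placeholder values, not a confirmed fix):
```
import GameController

// Hypothetical sketch: apply trigger effects once a DualSense controller connects.
func applyTriggerEffects(to controller: GCController) {
    // The adaptive-trigger API only exists on the DualSense profile.
    guard let pad = controller.extendedGamepad as? GCDualSenseGamepad else { return }

    // Resistance starting at 20% of trigger travel, at half strength.
    pad.leftTrigger.setModeFeedback(startPosition: 0.2, resistiveStrength: 0.5)

    // "Weapon"-style effect on the right trigger.
    pad.rightTrigger.setModeWeapon(startPosition: 0.2, endPosition: 0.4, resistiveStrength: 0.5)
}

// Effects can only be applied to a connected controller.
NotificationCenter.default.addObserver(forName: .GCControllerDidConnect,
                                       object: nil, queue: .main) { note in
    guard let controller = note.object as? GCController else { return }
    applyTriggerEffects(to: controller)
}
```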
My iOS app generates PDF files.
Every time my users open a generated PDF file, the AutoFill popup appears, but my PDF files are not meant to be interactive.
Is there a way to mark my PDF files as "not a form", for example in the metadata or anywhere else?
It looks like it is something to do with the code, and that is why my images are distorted. Please help me reach the right person so I can get this issue resolved.
Topic:
Graphics & Games
SubTopic:
General
Hello,
The macOS 26 betas are limiting games (noticeably, games that use Java) to the refresh rate of the MacBook Pro's built-in display (120 Hz). Even connecting an external display does not change this. I have submitted a bug report, but I have not had any response to it yet. I am looking to see if anyone may have an answer or a fix for this issue.
Thanks!
Topic:
Graphics & Games
SubTopic:
General
I am trying to use a Broadcast Upload Extension, but the broadcast picker starts its countdown and then stops (SwiftUI).
Steps I followed:
added a BroadcastUploadExtension target
used the same app group for the main app and the extension
added packages using SPM
It seems the extension's functions are not getting triggered; I also checked UIScreen.main.isCaptured, which always comes back false.
I tried using logs, which never appeared.
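For reference, my setup is roughly the sketch below (the bundle identifier is a placeholder). As far as I understand, preferredExtension has to match the extension's bundle ID, and the RPBroadcastSampleHandler subclass has to live in the extension target:
```
import ReplayKit
import CoreMedia
import UIKit

// In the host app: the system broadcast picker. preferredExtension must be the
// broadcast extension's bundle identifier (the one below is a placeholder).
let picker = RPSystemBroadcastPickerView(frame: CGRect(x: 0, y: 0, width: 60, height: 60))
picker.preferredExtension = "com.example.MyApp.BroadcastUploadExtension"
picker.showsMicrophoneButton = false

// In the extension target: the handler whose callbacks should fire after the countdown.
class SampleHandler: RPBroadcastSampleHandler {
    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        // Called when the countdown finishes and capture actually begins.
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        // Video/audio sample buffers arrive here while the broadcast is live.
    }
}
```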
Hello,
I found an issue with the Games app on macOS 26 (Tahoe) when viewing achievements:
In App Store Connect, each achievement has different values set for the pre-earned description and the post-earned description.
When testing with GameKit directly (GKAchievementDescription), both values are returned correctly.
However, in the macOS Games app, the post-earned description is shown even before the achievement is earned.
This seems to be a display issue specific to the Games app on macOS.
Could you confirm if this is a known bug in the Games app, or if there is a reason why pre-earned descriptions are not being shown?
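For reference, this is roughly how I verified the two strings with GameKit (a quick sketch, not the app's production code):
```
import GameKit

// Load every achievement description and print both strings for comparison.
GKAchievementDescription.loadAchievementDescriptions { descriptions, error in
    guard let descriptions = descriptions else { return }
    for d in descriptions {
        // unachievedDescription = pre-earned text, achievedDescription = post-earned text
        print(d.identifier,
              "pre:", d.unachievedDescription ?? "-",
              "post:", d.achievedDescription ?? "-")
    }
}
```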
Thank you.
Hey there,
I tried to install GPTK again, since I had to reinstall the OS for unrelated reasons. But every time I try to install the toolkit, it gives me the Error: apple/apple/game-porting-toolkit 1.1 did not build error. Before that error occurred, I had the OpenSSL error, which I fixed with the rbenv version of OpenSSL. Is there any way to fix this error? Down below you'll find the full error message it gave me. The specs of my Mac (if they are helpful in any way): M1 Pro MBP 14" with 16 GB RAM and a 512 GB SSD.
Thanks!
```
Error: apple/apple/game-porting-toolkit 1.1 did not build
Logs:
  /Users/myuser/Library/Logs/Homebrew/game-porting-toolkit/00.options.out
  /Users/myuser/Library/Logs/Homebrew/game-porting-toolkit/01.configure
  /Users/myuser/Library/Logs/Homebrew/game-porting-toolkit/01.configure.cc
  /Users/myuser/Library/Logs/Homebrew/game-porting-toolkit/wine64-build
If reporting this issue please do so at (not Homebrew/brew or Homebrew/homebrew-core):
  https://github.com/apple/homebrew-apple/issues
```
Hello, when I try to customize the icons on my phone, the apps that are inside groups (folders) are replaced with all-black images. Changing the color of the apps in a group has no effect; they just stay black, and I don't understand what is happening.
Topic:
Graphics & Games
SubTopic:
General
Hi,
Apple’s documentation on Order-Independent Transparency (OIT) describes an approach using image blocks, where an array of size 4 is allocated per fragment to store depth and color in a tile shading compute pass.
However, when increasing the scene’s depth complexity by adding more overlapping quads, the OIT implementation fails due to the fixed array size.
Is there a way to dynamically allocate storage for fragments based on actual depth complexity encountered during rasterization, rather than using a fixed-size array? Specifically, can an adaptive array of fragments be maintained and sorted by depth, where the size grows as needed instead of being limited to 4 entries?
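One alternative I have been considering is the classic per-pixel linked-list approach (not the image-block technique from Apple's sample): store fragments in a device buffer sized for an estimated average overdraw, bump-allocate nodes with an atomic counter in the fragment shader, and sort each pixel's list by depth in a resolve pass. Only the host-side allocation is sketched here, with illustrative names and sizes:
```
import Metal

// Illustrative node layout; the Metal-side struct must use a matching layout.
struct OITNode {
    var colorAndDepth: SIMD4<Float>   // packed color + depth
    var next: UInt32                  // index of the next node for this pixel, or a sentinel
}

func makeOITBuffers(device: MTLDevice, width: Int, height: Int,
                    averageOverdraw: Int = 8)
    -> (nodes: MTLBuffer, heads: MTLBuffer, counter: MTLBuffer)? {
    let pixelCount = width * height
    let nodeCount = pixelCount * averageOverdraw           // grow if the allocator ever overflows
    guard
        let nodes = device.makeBuffer(length: nodeCount * MemoryLayout<OITNode>.stride,
                                      options: .storageModePrivate),
        let heads = device.makeBuffer(length: pixelCount * MemoryLayout<UInt32>.stride,
                                      options: .storageModePrivate),   // per-pixel head indices
        let counter = device.makeBuffer(length: MemoryLayout<UInt32>.stride,
                                        options: .storageModePrivate)  // atomic bump allocator
    else { return nil }
    return (nodes, heads, counter)
}
```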
Any insights or alternative approaches would be greatly appreciated.
Thank you!
Hello,
I am experiencing an issue with programmatically capturing a GPU trace using MTLCaptureManager. The .gputrace file that is generated appears to be corrupted, and I'm looking for guidance or a solution.
Description of the Problem:
I am using MTLCaptureManager.sharedCaptureManager to capture a Metal frame and save it to disk.
The generated .gputrace file is consistently reported as 0 bytes in size by the file system.
Crucially, when I compress this 0-byte .gputrace file into a .zip archive, the resulting archive contains the full, expected data. After unzipping, the file can be opened and viewed correctly in Xcode.
However, when inspecting the file's contents using NSFileManager in Objective-C (treating it as a directory), the internal structure is different from that of a .gputrace file captured directly from Xcode's Metal Debugger.
(Screenshots attached: capture from Xcode vs. programmatic capture to file.)
Finally, when capturing multiple frames programmatically, the first captured frame contains valid buffer data. However, for subsequent frames (starting from the second frame), the corresponding buffer contents are all zero-filled.
Frame 1: All MTLBuffer data is correctly captured and populated.
Frame 2 and onward: The same MTLBuffer objects are present in the trace, but their contents are entirely 0 (i.e., the data is not captured or is corrupted).
In this case the on-screen rendering is normal and a frame captured directly in Xcode is correct; only the frame captured programmatically to a file is abnormal.
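For reference, my capture code is essentially the standard descriptor-based flow below (the output path is a placeholder). One observation: a .gputrace appears to be a package (directory) rather than a flat file, which may be why the file system reports 0 bytes, though that would not explain the zero-filled buffers from the second frame onward.
```
import Metal

// Minimal sketch of the programmatic capture flow; the output path is a placeholder.
func captureFrame(commandQueue: MTLCommandQueue, encodeFrame: () -> Void) throws {
    let manager = MTLCaptureManager.shared()
    guard manager.supportsDestination(.gpuTraceDocument) else { return }

    let descriptor = MTLCaptureDescriptor()
    descriptor.captureObject = commandQueue                             // could also be the MTLDevice or a capture scope
    descriptor.destination = .gpuTraceDocument
    descriptor.outputURL = URL(fileURLWithPath: "/tmp/frame.gputrace")  // placeholder path

    try manager.startCapture(with: descriptor)
    encodeFrame()          // encode and commit the frame's command buffers between start/stop
    manager.stopCapture()
}
```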
Topic:
Graphics & Games
SubTopic:
Metal
Hello, I'm the developer of the "StepSquad" app. Our app uses the Game Center achievement feature, but we've been encountering a problem: the "Global Players" metric always shows 0%, even though friends have already earned these achievements. Initially, I thought it might be because the app was newly launched. However, it's now been over two months since release, and it's still showing 0%. If anyone has any insight into this issue, please leave a comment.
Hello, I'm tracking down a bug where useResource doesn't seem to apply proper synchronization when a resource is produced by a render pass and then consumed by a compute pass, but when I use an MTLFence to signal and wait between the render and compute encoders, the artifact goes away.
The resource is created with MTLHazardTrackingModeTracked and useResource is called on the compute encoder after the render pass. Metal API Validation doesn't report any warnings/errors.
Am I misunderstanding the difference between the two APIs? I dug through the Metal documentation, and it looks like useResource should handle synchronization given that the resource uses MTLHazardTrackingModeTracked; on the other hand, MTLFence should be used to ensure proper synchronization between command encoders. Can someone clarify the difference between the two APIs and when to use each?
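For context, here is a stripped-down sketch of the two patterns I am comparing (pipeline setup omitted, names are placeholders):
```
import Metal

// Pattern A: hazard-tracked resource + useResource on the consuming compute encoder.
// useResource declares residency for indirectly referenced resources (e.g. via argument
// buffers); my understanding is that ordering should still come from hazard tracking.
// Pattern B: explicit ordering with an MTLFence between the two encoders.
func encodePasses(device: MTLDevice,
                  commandBuffer: MTLCommandBuffer,
                  renderPassDesc: MTLRenderPassDescriptor,
                  computePipeline: MTLComputePipelineState,
                  producedBuffer: MTLBuffer,
                  useFence: Bool) {
    let fence = device.makeFence()!

    let render = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDesc)!
    // ... draws that write producedBuffer ...
    if useFence {
        render.updateFence(fence, after: .fragment)   // Pattern B: signal once fragment work is done
    }
    render.endEncoding()

    let compute = commandBuffer.makeComputeCommandEncoder()!
    if useFence {
        compute.waitForFence(fence)                   // Pattern B: block until the render pass finishes
    }
    compute.setComputePipelineState(computePipeline)
    compute.useResource(producedBuffer, usage: .read) // Pattern A: residency declaration
    // ... dispatch that reads producedBuffer ...
    compute.endEncoding()
}
```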
When running on my iPhone SE 3 under iOS 18.4.1, achievement banners show as expected. With the same code running on my iPad Air 2 under iOS 15.8.4, achievement banners do not show, but the achievements are accepted (as shown in the GameCenterViewController). The banners also don't show when running in the simulator on an iPhone 16 Pro Max under iOS 18.2 or an iPhone SE 3 under iOS 18.3. I haven't tried others. [Note that I clear the achievements each run during testing so that I can reproduce this.]
Topic:
Graphics & Games
SubTopic:
GameKit
The flushContextInternal function in glr_sync.mm:262 called abort internally. What caused this? Was it due to high device temperature or some other reason?
Date/Time: 2024-08-29 09:20:09.3102 +0800
Launch Time: 2024-08-29 08:53:11.3878 +0800
OS Version: iPhone OS 16.7.10 (20H350)
Release Type: User
Baseband Version: 8.50.04
Report Version: 104
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Triggered by Thread: 0
Thread 0 name:
Thread 0 Crashed:
0 libsystem_kernel.dylib 0x00000001ed053198 __pthread_kill + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001fc5e25f8 pthread_kill + 208 (pthread.c:1670)
2 libsystem_c.dylib 0x00000001b869c4b8 abort + 124 (abort.c:118)
3 AppleMetalGLRenderer 0x00000002349f574c GLDContextRec::flushContextInternal() + 700 (glr_sync.mm:262)
4 DiSpecialDriver 0x000000010824b07c Di::RHI::onRenderFrameEnd() + 184 (RHIDevice.cpp:118)
5 DiSpecialDriver 0x00000001081b85f8 Di::Client::drawFrame() + 120 (Client.cpp:155)
2024-08-27_14-44-10.8104_+0800-07d9de9207ce4c73289507e608e5de4320d02ccf.crash
Topic:
Graphics & Games
SubTopic:
Metal
My app has a number of heterogeneous GPU workloads that all run concurrently. Some of these should be executed with the highest priority because the app’s responsiveness depends on them, while others are triggered by file imports and the like which should have a low priority. If this was running on the CPU I’d assign the former User Interactive QoS and the latter Utility QoS. Is there an equivalent to this for GPU work?
Create the QRCode
CIFilter<CIQRCodeGenerator> *f = CIFilter.QRCodeGeneratorFilter;
f.message = [@"Message" dataUsingEncoding:NSASCIIStringEncoding];
f.correctionLevel = @"Q"; // increase level
CIImage *qrcode = f.outputImage;
Overlay the icon
CIImage *icon = [CIImage imageWithURL:url];
CGAffineTransform t = CGAffineTransformMakeTranslation(
(qrcode.extent.width-icon.extent.width)/2.0,
(qrcode.extent.height-icon.extent.height)/2.0);
icon = [icon imageByApplyingTransform:t];
qrcode = [icon imageByCompositingOver:qrcode];
Round off the corners
static dispatch_once_t onceToken;
static CIWarpKernel *k;
dispatch_once(&onceToken, ^ {
k = [CIWarpKernel kernelWithFunctionName:@"bend_corners"
                    fromMetalLibraryData:metalLibData() // helper returning the compiled .ci.metal library data
                                   error:nil];
});
qrcode = [k applyWithExtent:qrcode.extent
roiCallback:^CGRect(int i, CGRect r) {
return CGRectInset(r, -radius, -radius); }
inputImage:qrcode
arguments:@[[CIVector vectorWithCGRect:qrcode.extent], @(radius)]];
…and this code for the kernel should go in a separate .ci.metal source file:
#include <CoreImage/CoreImage.h>

extern "C" { namespace coreimage {

float2 bend_corners (float4 extent, float s, destination dest)
{
float2 p, dc = dest.coord();
float ratio = 1.0;
// Round lower left corner
p = float2(extent.x+s,extent.y+s);
if (dc.x < p.x && dc.y < p.y) {
float2 d = abs(dc - p);
ratio = min(d.x,d.y)/max(d.x,d.y);
ratio = sqrt(1.0 + ratio*ratio);
return (dc - p)*ratio + p;
}
// Round lower right corner
p = float2(extent.x+extent.z-s, extent.y+s);
if (dc.x > p.x && dc.y < p.y) {
float2 d = abs(dc - p);
ratio = min(d.x,d.y)/max(d.x,d.y);
ratio = sqrt(1.0 + ratio*ratio);
return (dc - p)*ratio + p;
}
// Round upper left corner
p = float2(extent.x+s,extent.y+extent.w-s);
if (dc.x < p.x && dc.y > p.y) {
float2 d = abs(dc - p);
ratio = min(d.x,d.y)/max(d.x,d.y);
ratio = sqrt(1.0 + ratio*ratio);
return (dc - p)*ratio + p;
}
// Round upper right corner
p = float2(extent.x+extent.z-s, extent.y+extent.w-s);
if (dc.x > p.x && dc.y > p.y) {
float2 d = abs(dc - p);
ratio = min(d.x,d.y)/max(d.x,d.y);
ratio = sqrt(1.0 + ratio*ratio);
return (dc - p)*ratio + p;
}
return dc;
}

}} // end namespace coreimage / extern "C"