Hi, I'm currently using Metal Performance Shaders Graph (MPSGraphExecutable) to run neural network inference as part of a Metal rendering pipeline.
I also tried to profile Neural Engine usage while running inference through MPSGraphExecutable, but the trace shows no sign of Neural Engine activity. However, when I inspected the Core ML model in Xcode and ran a performance report, it was able to use the ANE.
Does MPSGraphExecutable automatically utilize the Apple Neural Engine (ANE) when running inference, or does it only execute on the GPU?
My model (a Core ML package) was converted from a PyTorch model using coremltools with the ML Program type, and it supports iOS 17.0+.
Any insights or documentation references would be greatly appreciated!
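For comparison, this is roughly the Core ML loading path that the Xcode performance report exercises. A minimal sketch: "Model.mlpackage" is a placeholder for my converted package, and .all only permits ANE use rather than guaranteeing it.
import CoreML

// Minimal sketch: load the converted package through Core ML with all
// compute units enabled, the path where the ANE can be scheduled.
// "Model.mlpackage" is a placeholder; assumes a throwing context.
let config = MLModelConfiguration()
config.computeUnits = .all  // permit CPU, GPU, and Neural Engine

let compiledURL = try MLModel.compileModel(at: URL(fileURLWithPath: "Model.mlpackage"))
let model = try MLModel(contentsOf: compiledURL, configuration: config)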
I have an app that uses a couple of mlmodels (word tagger and gazetteer) and I’m trying to encrypt them before publishing.
The models are part of a package. I understand that Xcode can’t automatically handle the encryption for a model in a package the way it can within a traditional app structure.
Given that, I’ve generated the Apple MLModel encryption key from Xcode and am encrypting via the command line with:
xcrun coremlcompiler compile Gazetteer.mlmodel GazetteerENC.mlmodelc --encrypt Gazetteerkey.mlmodelkey
In the package manifest, I’ve listed the encrypted models as .copy resources for my target and have verified the URL to that file is good.
When I try to load the encrypted .mlmodelc file (on a physical device) with the line:
gazetteer = try NLGazetteer(contentsOf: gazetteerURL!)
I get the error:
Failed to open file: /…/Scanner.bundle/GazetteerENC.mlmodelc/coremldata.bin. It is not a valid .mlmodelc file.
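For what it's worth, Apple's encryption documentation says encrypted models must be loaded with the asynchronous MLModel API, since the decryption key is fetched at load time. A minimal sketch of that path (whether NLGazetteer uses it internally is part of my question):
import CoreML

// Minimal sketch: encrypted .mlmodelc files are documented to require the
// asynchronous loader, because Core ML fetches the decryption key on demand.
// `gazetteerURL` is the same bundle URL as above; assumes an async context.
let mlModel = try await MLModel.load(contentsOf: gazetteerURL!,
                                     configuration: MLModelConfiguration())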
So my questions are:
Does the NLGazetteer class support encrypted MLModel files?
Given that my models are in a package, do I have the right general approach?
Thanks for any help or thoughts.
Topic:
Machine Learning & AI
SubTopic:
Core ML
We are trying to implement a custom encryption scheme for our Core ML models. Our goal is to bundle encrypted models, decrypt them into memory at runtime, and instantiate the MLModel without the unencrypted model file ever touching the disk.
We have looked into the native Apple encryption described at https://developer.apple.com/documentation/coreml/encrypting-a-model-in-your-app, but it has limitations: it doesn't work on Intel Macs or without SIP, and it doesn't work when loading from a dylib.
It seems like most of the Core ML APIs require a file path. There are the MLModelAsset APIs, but I think they just write a compiled model back to disk while compiling, and I can't find any information confirming that (I'm also concerned that this seems to be an older API, and it means we need to compile at runtime).
I am aware that the native encryption would be much more secure, but we'd like not to have the models in readable form on disk.
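For concreteness, this is the in-memory path I mean with MLModelAsset. A minimal sketch, assuming the decrypted bytes are a raw .mlmodel specification rather than a compiled .mlmodelc; whether Core ML spills the compiled model to disk internally is exactly what I can't confirm:
import CoreML

// Minimal sketch: MLModelAsset accepts a model specification as in-memory
// Data and compiles it at runtime, so the plaintext never needs its own file.
// `decryptedSpec` is assumed to hold decrypted .mlmodel protobuf bytes.
func loadModel(from decryptedSpec: Data) async throws -> MLModel {
    let asset = try MLModelAsset(specification: decryptedSpec)
    return try await MLModel.load(asset: asset, configuration: MLModelConfiguration())
}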
Does anyone know if this is possible, or any alternatives for obfuscating Core ML models? Thanks.
I am using Gemini 2.5 Flash with SwiftUI. How can I receive the response as JSON?
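This is what I have tried so far. A minimal sketch, assuming the google-generative-ai-swift package and that its GenerationConfig supports a responseMIMEType option; worth verifying against your SDK version:
import GoogleGenerativeAI

// Minimal sketch, assuming the google-generative-ai-swift SDK and that its
// GenerationConfig supports responseMIMEType; verify against your SDK version.
let model = GenerativeModel(
    name: "gemini-2.5-flash",
    apiKey: "YOUR_API_KEY",
    generationConfig: GenerationConfig(responseMIMEType: "application/json")
)
let response = try await model.generateContent("Return three colors as a JSON array of strings.")
print(response.text ?? "")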
Topic:
Machine Learning & AI
SubTopic:
General
The documentation for the Create ML tool ("Building an object detector data source") mentions that there are options for using normalized values instead of pixels and also different anchor point origins ("MLBoundingBoxCoordinatesOrigin") instead of always using "center". However, the JSON format for these does not appear in any examples. Does anyone know the format for these options?
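For reference, the documented pixel-based, center-origin annotation format looks like the following; what I can't find is how the normalized-coordinates and alternate-origin options are expressed:
[
  {
    "image": "image1.png",
    "annotations": [
      {
        "label": "carrots",
        "coordinates": { "x": 120, "y": 164, "width": 230, "height": 119 }
      }
    ]
  }
]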
Topic:
Machine Learning & AI
SubTopic:
Create ML
I have been working on a small CV program that uses a fine-tuned U2Netp model converted from PyTorch with coremltools 8.3.0.
It works well on my iPhone (iOS 18.5) and my MacBook (macOS 15.3.1), but it fails to load after I upgraded the MacBook to macOS 15.5.
I have attached the console log from loading the model.
Unable to load MPSGraphExecutable from path /Users/yongzhang/Library/Caches/swiftmetal/com.apple.e5rt.e5bundlecache/24F74/E051B28C6957815C140A86134D673B5C015E79A1460E9B54B8764F659FDCE645/16FA8CF2CDE66C0C427F4B51BBA82C38ACC44A514CCA396FD7B281AAC087AB2F.bundle/H14C.bundle/main/main_mps_graph/main_mps_graph.mpsgraphpackage @ GetMPSGraphExecutable
E5RT: Unable to load MPSGraphExecutable from path /Users/yongzhang/Library/Caches/swiftmetal/com.apple.e5rt.e5bundlecache/24F74/E051B28C6957815C140A86134D673B5C015E79A1460E9B54B8764F659FDCE645/16FA8CF2CDE66C0C427F4B51BBA82C38ACC44A514CCA396FD7B281AAC087AB2F.bundle/H14C.bundle/main/main_mps_graph/main_mps_graph.mpsgraphpackage (13)
Unable to load MPSGraphExecutable from path /Users/yongzhang/Library/Caches/swiftmetal/com.apple.e5rt.e5bundlecache/24F74/E051B28C6957815C140A86134D673B5C015E79A1460E9B54B8764F659FDCE645/16FA8CF2CDE66C0C427F4B51BBA82C38ACC44A514CCA396FD7B281AAC087AB2F.bundle/H14C.bundle/main/main_mps_graph/main_mps_graph.mpsgraphpackage @ GetMPSGraphExecutable
E5RT: Unable to load MPSGraphExecutable from path /Users/yongzhang/Library/Caches/swiftmetal/com.apple.e5rt.e5bundlecache/24F74/E051B28C6957815C140A86134D673B5C015E79A1460E9B54B8764F659FDCE645/16FA8CF2CDE66C0C427F4B51BBA82C38ACC44A514CCA396FD7B281AAC087AB2F.bundle/H14C.bundle/main/main_mps_graph/main_mps_graph.mpsgraphpackage (13)
Failure translating MIL->EIR network: Espresso exception: "Network translation error": MIL->EIR translation error at /Users/yongzhang/CLionProjects/ImageSimilarity/models/compiled/u2netp.mlmodelc/model.mil:1557:12: Parameter binding for axes does not exist.
[Espresso::handle_ex_plan] exception=Espresso exception: "Network translation error": MIL->EIR translation error at /Users/yongzhang/CLionProjects/ImageSimilarity/models/compiled/u2netp.mlmodelc/model.mil:1557:12: Parameter binding for axes does not exist. status=-14
Failed to build the model execution plan using a model architecture file '/Users/yongzhang/CLionProjects/ImageSimilarity/models/compiled/u2netp.mlmodelc/model.mil' with error code: -14.
Topic:
Machine Learning & AI
SubTopic:
Create ML
Hi, I'm currently creating a model to identify car plates (object detection). I use asitop to monitor my MacBook Pro, and I see that only the CPU is used for the training. I wanted to know why.
Hi! I noticed that on my father's M1 Max MacBook Pro (64gb ram) there's an option for style transfer which I don't see on my M1 MacBook Air (16gb ram). I am running macOS Tahoe and he is running macOS Sequoia.
Topic:
Machine Learning & AI
SubTopic:
Create ML
Hello,
Do you have any information on the handling of sparse matrices with MPS and PyTorch? A release date? ...
Hello, I have to create a Swift app that scans an NFC identity card, extracts its data, and converts it to human-readable form. I do it with the code below:
import CoreNFC

class NFCIdentityCardReader: NSObject, NFCTagReaderSessionDelegate {

    func tagReaderSessionDidBecomeActive(_ session: NFCTagReaderSession) {
        print("\(session.description)")
    }

    func tagReaderSession(_ session: NFCTagReaderSession, didInvalidateWithError error: any Error) {
        print("NFC Error: \(error.localizedDescription)")
    }

    var session: NFCTagReaderSession?

    func beginScanning() {
        guard NFCTagReaderSession.readingAvailable else {
            print("NFC is not supported on this device")
            return
        }
        session = NFCTagReaderSession(pollingOption: .iso14443, delegate: self, queue: nil)
        session?.alertMessage = "Hold your NFC identity card near the device."
        session?.begin()
    }

    func tagReaderSession(_ session: NFCTagReaderSession, didDetect tags: [NFCTag]) {
        guard let tag = tags.first else {
            session.invalidate(errorMessage: "No tag detected")
            return
        }
        session.connect(to: tag) { (error) in
            if let error = error {
                session.invalidate(errorMessage: "Connection error: \(error.localizedDescription)")
                return
            }
            switch tag {
            case .miFare(let miFareTag):
                self.readMiFareTag(miFareTag, session: session)
            case .iso7816(let iso7816Tag):
                self.readISO7816Tag(iso7816Tag, session: session)
            case .iso15693, .feliCa:
                session.invalidate(errorMessage: "Unsupported tag type")
            @unknown default:
                session.invalidate(errorMessage: "Unknown tag type")
            }
        }
    }

    private func readMiFareTag(_ tag: NFCMiFareTag, session: NFCTagReaderSession) {
        // Read from MiFare card, assuming it's formatted as an identity card
        let command: [UInt8] = [0x30, 0x04] // Example: Read command for block 4
        let requestData = Data(command)
        tag.sendMiFareCommand(commandPacket: requestData) { (response, error) in
            if let error = error {
                session.invalidate(errorMessage: "Error reading MiFare: \(error.localizedDescription)")
                return
            }
            let readableData = String(data: response, encoding: .utf8) ?? response.map { String(format: "%02X", $0) }.joined()
            session.alertMessage = "ID Card Data: \(readableData)"
            session.invalidate()
        }
    }

    private func readISO7816Tag(_ tag: NFCISO7816Tag, session: NFCTagReaderSession) {
        let selectAppCommand = NFCISO7816APDU(instructionClass: 0x00, instructionCode: 0xA4, p1Parameter: 0x04, p2Parameter: 0x00, data: Data([0xA0, 0x00, 0x00, 0x02, 0x47, 0x10, 0x01]), expectedResponseLength: -1)
        tag.sendCommand(apdu: selectAppCommand) { (response, sw1, sw2, error) in
            if let error = error {
                session.invalidate(errorMessage: "Error reading ISO7816: \(error.localizedDescription)")
                return
            }
            let readableData = response.map { String(format: "%02X", $0) }.joined()
            session.alertMessage = "ID Card Data: \(readableData)"
            session.invalidate()
        }
    }
}
But I got null. I think these data are encrypted. How can I convert them to readable data without the MRZ? Is that possible?
I need to get personal information from the identity card via Core NFC.
Thanks in advance.
Best regards
I'm on Tahoe 26.1 with an M3 MacBook Air. I'm using VNDetectFaceRectanglesRequest as properly as I can, as in the minimal command-line program below. For some reason, I always get:
MLE5Engine is disabled through the configuration
printed. I couldn't find anything in the developer docs saying that VNDetectFaceRectanglesRequest cannot use the Apple Neural Engine. I assume something is wrong with my code, but I wasn't able to find any hint in the documentation of where it might be, and I couldn't find the message above online either. I would appreciate your help a lot and thank you in advance.
The code below accesses video from AVCaptureDevice.DeviceType.builtInWideAngleCamera. It currently picks the 0th format, which has the largest resolution (Full HD on my M3 MBA) and 4:2:0 chroma subsampling in video range ("420v").
After accessing the video, it performs a VNDetectFaceRectanglesRequest. It prints "VNDetectFaceRectanglesRequest completion Handler called" many times, then prints the message above, then continues printing the completion message until the user quits.
To run it in Xcode: File > New Project > macOS command-line tool, paste in the code below, then click the root file > Targets > Signing & Capabilities > Hardened Runtime > Resource Access > Camera.
A possible explanation could be that Apple's internal Core ML code for this request runs on GPU/CPU only, or that it doesn't accept 420v as supplied by the MacBook Air camera.
import AVKit
import Vision

var videoDataOutput: AVCaptureVideoDataOutput = AVCaptureVideoDataOutput()
var detectionRequests: [VNDetectFaceRectanglesRequest]?
var videoDataOutputQueue: DispatchQueue = DispatchQueue(label: "queue")

class XYZ: /*NSViewController or NSObject*/ NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func viewDidLoad() {
        //super.viewDidLoad()
        let session = AVCaptureSession()
        let inputDevice = try! self.configureFrontCamera(for: session)
        self.configureVideoDataOutput(for: inputDevice.device, resolution: inputDevice.resolution, captureSession: session)
        self.prepareVisionRequest()
        session.startRunning()
    }

    fileprivate func highestResolution420Format(for device: AVCaptureDevice) -> (format: AVCaptureDevice.Format, resolution: CGSize)? {
        let deviceFormat = device.formats[0]
        print(deviceFormat)
        let dims = CMVideoFormatDescriptionGetDimensions(deviceFormat.formatDescription)
        let resolution = CGSize(width: CGFloat(dims.width), height: CGFloat(dims.height))
        return (deviceFormat, resolution)
    }

    fileprivate func configureFrontCamera(for captureSession: AVCaptureSession) throws -> (device: AVCaptureDevice, resolution: CGSize) {
        let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: .video, position: AVCaptureDevice.Position.unspecified)
        let device = deviceDiscoverySession.devices.first!
        let deviceInput = try! AVCaptureDeviceInput(device: device)
        captureSession.addInput(deviceInput)
        let highestResolution = self.highestResolution420Format(for: device)!
        try! device.lockForConfiguration()
        device.activeFormat = highestResolution.format
        device.unlockForConfiguration()
        return (device, highestResolution.resolution)
    }

    fileprivate func configureVideoDataOutput(for inputDevice: AVCaptureDevice, resolution: CGSize, captureSession: AVCaptureSession) {
        videoDataOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
        captureSession.addOutput(videoDataOutput)
    }

    fileprivate func prepareVisionRequest() {
        let faceDetectionRequest: VNDetectFaceRectanglesRequest = VNDetectFaceRectanglesRequest(completionHandler: { (request, error) in
            print("VNDetectFaceRectanglesRequest completion Handler called")
        })
        // Start with detection
        detectionRequests = [faceDetectionRequest]
    }

    // MARK: AVCaptureVideoDataOutputSampleBufferDelegate
    // Handle delegate method callback on receiving a sample buffer.
    public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        var requestHandlerOptions: [VNImageOption: AnyObject] = [:]
        let cameraIntrinsicData = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, attachmentModeOut: nil)
        if cameraIntrinsicData != nil {
            requestHandlerOptions[VNImageOption.cameraIntrinsics] = cameraIntrinsicData
        }
        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
        // No tracking object detected, so perform initial detection
        let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                                        orientation: CGImagePropertyOrientation.up,
                                                        options: requestHandlerOptions)
        try! imageRequestHandler.perform(detectionRequests!)
    }
}

let X = XYZ()
X.viewDidLoad()
sleep(9999999)
In this WWDC25 session, it is explicitly mentioned that apps should support AttributedString for text parameters to their App Intents.
However, I have not gotten this to work. Whenever I pass rich text (either generated by the new "Use Model" intent or created manually, for example with "Make Rich Text from Markdown"), my intent receives an AttributedString with the correct characters but with all attributes stripped (so, in effect, plain text).
struct TestIntent: AppIntent {
    static var title = LocalizedStringResource(stringLiteral: "Test Intent")
    static var description = IntentDescription("Tests Attributed Strings in Intent Parameters.")

    @Parameter
    var text: AttributedString

    func perform() async throws -> some IntentResult & ReturnsValue<AttributedString> {
        return .result(value: text)
    }
}
Is there anything else I am missing?
Hello, I’m seeking clarification on whether Apple provides any framework or API that enables deep integration between Siri and advanced AI assistants (such as ChatGPT), including system-level functions like voice interaction, navigation, cross-platform syncing, and operational access similar to Siri’s own capabilities. If no such option exists today, I would appreciate guidance on the recommended path or approved third-party solutions for building a unified, voice-first experience across Apple’s ecosystem. Thank you for your time and insight.
I am running some experiments with WebGPU using the wgpu crate in Rust. I have some buffers already allocated on the GPU.
Is it possible to use those existing buffers directly as inputs to a predict call in Core ML? I want to avoid GPU-to-CPU download time as much as possible.
Or are there other ways to do something like this? Is it only possible using the new Tensor object that came out with Metal 4?
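For context, the closest zero-copy path I've found on the Core ML side is an IOSurface-backed CVPixelBuffer, which Core ML can consume without a trip through CPU memory. A minimal sketch (whether a wgpu buffer can be bridged into one is exactly my question):
import CoreML
import CoreVideo

// Minimal sketch: an IOSurface-backed pixel buffer that Core ML can take as
// an image input without copying through CPU memory. Bridging this to a
// wgpu/Metal buffer is the unresolved part.
func makeSharedInput(width: Int, height: Int) -> MLFeatureValue? {
    var pixelBuffer: CVPixelBuffer?
    let attrs = [kCVPixelBufferIOSurfacePropertiesKey: [:] as CFDictionary] as CFDictionary
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_32BGRA, attrs, &pixelBuffer)
    guard let buffer = pixelBuffer else { return nil }
    return MLFeatureValue(pixelBuffer: buffer)
}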
I keep looking around the documentation and sample code, but I still haven't found an example of how this type of network regressor is used.
Does it take special parameters to perform on the ANE? What size and format of DataFrame?
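For context, the closest documented pattern I can find uses the tabular regressors with a TabularData DataFrame. A minimal sketch with MLLinearRegressor as a stand-in, since I can't find docs for the network regressor itself; file and column names are made up:
import CreateML
import TabularData

// Minimal sketch with MLLinearRegressor standing in for the regressor in
// question. "training.csv" and "price" are made-up placeholders.
let frame = try DataFrame(contentsOfCSVFile: URL(fileURLWithPath: "training.csv"))
let regressor = try MLLinearRegressor(trainingData: frame, targetColumn: "price")
try regressor.write(to: URL(fileURLWithPath: "Price.mlmodel"))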
I’m trying to follow Apple’s “WWDC24: Bring your machine learning and AI models to Apple Silicon” session to convert the Mistral-7B-Instruct-v0.2 model into a Core ML package, but I’ve run into a roadblock that I can’t seem to overcome. I’ve uploaded my full conversion script here for reference:
https://pastebin.com/T7Zchzfc
When I run the script, it progresses through tracing and MIL conversion but then fails at the backend_mlprogram stage with this error:
https://pastebin.com/fUdEzzKM
The core of the error is:
ValueError: Op "keyCache_tmp" (op_type: identity) Input x="keyCache" expects list, tensor, or scalar but got state[tensor[1,32,8,2048,128,fp16]]
I’ve registered my KV-cache buffers in a StatefulMistralWrapper subclass of nn.Module, matching the keyCache and valueCache state names in my ct.StateType definitions, but Core ML’s backend pass reports the state tensor as an invalid input. I’m using Core ML Tools 8.3.0 on Python 3.9.6, targeting iOS18, and forcing CPU conversion (MPS wasn’t available). Any pointers on how to satisfy the handle_unused_inputs pass or properly declare/cache state for GQA models in Core ML would be greatly appreciated!
Thanks in advance for your help,
Usman Khan
Topic:
Machine Learning & AI
SubTopic:
Core ML
Tags:
Metal
Metal Performance Shaders
Core ML
tensorflow-metal
Hi everyone,
I'm experiencing inconsistent behavior with the Translation framework on iOS 18. The LanguageAvailability.status() API reports language models as .installed, but translation fails with Code 16.
Setup:
Using translationTask modifier with TranslationSession
Batch translation with explicit source/target languages
Languages: Portuguese→English, German→English
Issue:
let status = await LanguageAvailability().status(from: sourceLang, to: targetLang) // Returns: .installed
// But translation fails:
let responses = try await session.translations(from: requests)
// Error: TranslationErrorDomain Code=16 "Offline models not available"
Logs:
Language model installed: pt -> en
Language model installed: de -> en
Starting translation: de -> en
Error Domain=TranslationErrorDomain Code=16 "Translation failed" NSLocalizedFailureReason=Offline models not available for language pair
What I've tried:
Re-downloading languages in Settings
Using source: nil for auto-detection
Fresh TranslationSession.Configuration each time
Questions:
Is there a way to force model re-validation/re-download programmatically?
Should translationTask show download popup when Code 16 occurs?
Has anyone found a reliable workaround?
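For question 1, the closest API I've found is TranslationSession.prepareTranslation(), which should surface the download prompt if models are missing. A minimal sketch, assuming the session comes from the translationTask modifier; whether it re-validates a stale install is what I'm unsure of:
import Translation

// Minimal sketch: ask the session to ensure models are present before
// translating. Unclear whether this re-validates an install that the
// system already (wrongly) reports as .installed.
func translate(_ requests: [TranslationSession.Request],
               in session: TranslationSession) async throws -> [TranslationSession.Response] {
    try await session.prepareTranslation()
    return try await session.translations(from: requests)
}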
I've seen similar reports in threads 791357 and 777113. Any guidance appreciated!
Thanks!
Topic:
Machine Learning & AI
SubTopic:
General
I'm working on my first model, which detects bowling score screens, and I have it working with pictures no problem. But when it comes to video, I have a sizing issue.
I added the model to a small app I wrote for taking a picture of a bowling score screen, where the model frames the screens in the video feed from the camera. The model works, but my boxes are about 2/3 the size of the screens being detected. I don't understand the geometry of the video stream the camera is feeding me; I don't want to just tweak my rectangles larger, and I'm not sure whether the video feed is larger than what I'm detecting in code.
Questions I have: is the video feed a certain resolution, like 1920 by something, or a much higher resolution in the 12-megapixel range?
On a static image of, say, 1920 by something, my alignment is perfect.
AI tools suggest it's my model training: that I'm training on square images but video is 16:9, or that I'm producing 4:3 images in a 16:9 environment.
I'm missing something here but not sure what it is. I already wrote code to force the boxes to fit, but I reverted back to trying for a natural fit.
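In case it helps frame an answer, my current suspicion is the conversion between Vision's normalized rectangles and the preview layer's aspect-filled coordinates. A minimal sketch of the conversion I mean, assuming an AVCaptureVideoPreviewLayer named previewLayer:
import AVFoundation
import Vision

// Minimal sketch: map a Vision observation (normalized, bottom-left origin)
// into preview-layer coordinates, letting the layer account for videoGravity
// (e.g. aspect fill) and the feed's actual resolution.
func layerRect(for observation: VNDetectedObjectObservation,
               in previewLayer: AVCaptureVideoPreviewLayer) -> CGRect {
    let box = observation.boundingBox
    // Flip to AVFoundation's top-left-origin metadata space.
    let metadataRect = CGRect(x: box.origin.x,
                              y: 1 - box.origin.y - box.height,
                              width: box.width,
                              height: box.height)
    return previewLayer.layerRectConverted(fromMetadataOutputRect: metadataRect)
}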
Topic:
Machine Learning & AI
SubTopic:
Core ML
I'm sure someone has thought about this already, but let's have an ecosystem where Apple Intelligence uses your most capable (Apple) hardware first and the cloud service second.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Download the Foundation Models Adaptor Training Toolkit
Hi, after I clicked the download button, I was redirected to this page, https://developer.apple.com, and the toolkit did not download.