Hello,
I posted an issue on the coremltools GitHub about my Core ML models not performing as well on iOS 17 vs iOS 16 but I'm posting it here just in case.
TL;DR
The same model on the same device/chip performs far slower (doesn't use the Neural Engine) on iOS 17 compared to iOS 16.
Longer description
The following screenshots show the performance of the same model (a PyTorch computer vision model) on an iPhone SE 3rd gen and iPhone 13 Pro (both use the A15 Bionic).
iOS 16 - iPhone SE 3rd Gen (A15 Bionic)
iOS 16 uses the ANE and results in fast prediction, load and compilation times.
iOS 17 - iPhone 13 Pro (A15 Bionic)
iOS 17 doesn't seem to use the ANE, thus the prediction, load and compilation times are all slower.
Code To Reproduce
The following is the code I'm using to export my PyTorch vision model with coremltools.
I've used the same code for the past few months with sensational results on iOS 16.
import coremltools as ct

# Convert to Core ML using the Unified Conversion API
coreml_model = ct.convert(
    model=traced_model,
    inputs=[image_input],
    outputs=[ct.TensorType(name="output")],
    classifier_config=ct.ClassifierConfig(class_names),
    convert_to="neuralnetwork",
    # compute_precision=ct.precision.FLOAT16,
    compute_units=ct.ComputeUnit.ALL
)
System environment:
Xcode version: 15.0
coremltools version: 7.0.0
OS (e.g. macOS version or Linux type): Linux Ubuntu 20.04 (for exporting), macOS 13.6 (for testing in Xcode)
Any other relevant version information (e.g. PyTorch or TensorFlow version): PyTorch 2.0
Additional context
This happens with both "neuralnetwork" and "mlprogram" model types: neither uses the ANE on iOS 17, but both use it on iOS 16.
If anyone has a similar experience, I'd love to hear more.
Otherwise, if I'm doing something wrong when exporting models for iOS 17+, please let me know.
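For reference, this is roughly how the model is loaded on-device (a minimal Swift sketch; the resource name is a placeholder for my actual model):

import CoreML

// Minimal sketch: load the compiled model and explicitly request all compute
// units so the Neural Engine can be used when available.
// "FoodVision" is a placeholder name for the bundled .mlmodelc.
let config = MLModelConfiguration()
config.computeUnits = .all

let modelURL = Bundle.main.url(forResource: "FoodVision", withExtension: "mlmodelc")!
let model = try MLModel(contentsOf: modelURL, configuration: config)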
Thank you!
While building an app with large language model inferencing on device, I got gibberish output. After carefully examining every detail, I found it was caused by the fused scaledDotProductAttention operation. I switched back to the discrete operations and the problem was solved. To reproduce the bug, please check https://github.com/zhoudan111/MPSGraph_SDPA_bug
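In case it helps others, this is roughly the decomposition I switched to (a minimal sketch, assuming a [batch, heads, sequence, headDim] float16 layout; names and axes are my own):

import MetalPerformanceShadersGraph

// Minimal sketch of attention built from discrete MPSGraph ops instead of the
// fused scaledDotProductAttention. Assumes [batch, heads, sequence, headDim]
// float16 tensors; adjust the transpose/softmax axes and dtype for your layout.
func discreteSDPA(graph: MPSGraph,
                  query: MPSGraphTensor,
                  key: MPSGraphTensor,
                  value: MPSGraphTensor,
                  headDim: Double) -> MPSGraphTensor {
    // scores = (Q x K^T) * 1/sqrt(headDim)
    let keyT = graph.transposeTensor(key, dimension: 2, withDimension: 3, name: "keyT")
    let scores = graph.matrixMultiplication(primary: query, secondary: keyT, name: "scores")
    let scale = graph.constant(1.0 / headDim.squareRoot(), dataType: .float16)
    let scaled = graph.multiplication(scores, scale, name: "scaledScores")

    // weights = softmax over the key axis (the last axis in this layout)
    let weights = graph.softMax(with: scaled, axis: 3, name: "weights")

    // output = weights x V
    return graph.matrixMultiplication(primary: weights, secondary: value, name: "attention")
}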
Topic:
Machine Learning & AI
SubTopic:
General
Hi everyone,
I am using Xcode 16.4 on macOS Sequoia 15.5 with Apple Intelligence turned on.
The following code gives the error message in the title:
import NaturalLanguage
@available(iOS 18.0, *)
func testSystemModel() {
    let model = SystemLanguageModel.default
    print(model)
}
What am I missing?
Hi, I just upgraded to macOS Tahoe Beta 2 and now I'm getting this error when I try to initialize my Foundation Models session:
Error Resource (Local Sanitizer Asset) unavailable error.
import FoundationModels
#Playground {
    let session = LanguageModelSession()
    do {
        let result = try await session.respond(to: "Tell me 3 colors")
        print(result.content)
    } catch {
        print("Error", error)
    }
}
I couldn't find any resource guiding me on how to solve this. Any help/workaround?
Thank you!
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I didn't run benchmarks before the update, but it seems at least 5x slower. Of course, all the LLM work happens on remote servers, so it's not intuitive to me why this should be happening.
I updated macOS and Xcode to the 26.1 RC at the same time, so I can't even say whether I think it's macOS or Xcode.
Before the update, the progress indicator for each piece of code might seem to get stuck at the very end, and toggling between the Navigators and the Coding Assistant in the Xcode UI seemed to refresh the UI and confirm the coding was complete... but now progress races to 50%, then often gets stuck at 75%, well earlier than it used to get stuck. And it seems like something is legitimately processing, not just a UI glitch.
I'm wondering if this is somehow tied to the visual rendering of the code in the little white window? Cmd-Tab into Xcode seems laggy, and Xcode is pinning a CPU core. Why, when this is all remote LLM work?
MacBook Pro 2021, M1, 64 GB RAM. Went from 26.01 to the 26.1 RC. Didn't touch any of the betas until RC1.
Our app is downloading a zip of an .mlpackage file, which is then compiled into an .mlmodelc file using MLModel.compileModel(at:). This model is then run using a VNCoreMLRequest.
Two users (and this is after a very small rollout) are reporting issues running the VNCoreMLRequest. The error message from their logs:
Error Domain=com.apple.CoreML Code=0 "Failed to build the model execution plan using a model architecture file '/private/var/mobile/Containers/Data/Application/F93077A5-5508-4970-92A6-03A835E3291D/Documents/SKDownload/Identify-image-iOS/mobile_img_eu_v210.mlmodelc/model.mil' with error code: -5."
The URL there is to a file inside the compiled model. The error happens when the perform function of the VNImageRequestHandler is run (i.e., the model itself compiled without an error).
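For context, this is roughly the flow (a minimal sketch; the function and URLs are placeholders for our real code):

import CoreML
import Vision

// Minimal sketch of the described flow: compile the downloaded .mlpackage,
// wrap it for Vision, and run the request. Names and URLs are placeholders.
func classify(imageURL: URL, packageURL: URL) throws {
    let compiledURL = try MLModel.compileModel(at: packageURL)
    let mlModel = try MLModel(contentsOf: compiledURL)
    let vnModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: vnModel)
    let handler = VNImageRequestHandler(url: imageURL)
    try handler.perform([request])   // this is the call that fails for those two users
}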
Has anyone else seen this issue? It's only picked up in a few web results, and none of them are directly relevant or have a fix.
I know that a Core ML error Code=0 is a generic error, but does anyone know what error code -5 is? I'm not even sure which framework it's coming from.
import FoundationModels

@Generable
enum Breakfast {
    case waffles
    case pancakes
    case bagels
    case eggs
}

do {
    let session = LanguageModelSession()
    let userInput = "I want something sweet."
    let prompt = "Pick the ideal breakfast for request: \(userInput)"
    let response = try await session.respond(to: prompt, generating: Breakfast.self)
    print(response.content)
} catch let error {
    print(error)
}
I want to test the @Generable demo but get the error below:
decodingFailure(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "Failed to convert text into into GeneratedContent\nText: waffles", underlyingErrors: [Swift.DecodingError.dataCorrupted(Swift.DecodingError.Context(codingPath: [], debugDescription: "The given data was not valid JSON.", underlyingError: Optional(Error Domain=NSCocoaErrorDomain Code=3840 "Unexpected character 'w' around line 1, column 1." UserInfo={NSJSONSerializationErrorIndex=0, NSDebugDescription=Unexpected character 'w' around line 1, column 1.})))]))
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
iOS 18.2 includes a new feature called Visual Intelligence. If I hold down the Camera Control on my iPhone, I can take a photo of an object and use Google to look up items similar to what I've photographed.
Is there a way to programmatically open this interface within my app? If so, can I see which result the user selects?
I've checked on pypi.org and it appears to only have arm64 packages; has x86_64 (AMD64) support been deprecated?
I get the following dyld error on an iPad Pro with Xcode 26 beta 4:
Symbol not found: _$s16FoundationModels20LanguageModelSessionC7prewarm12promptPrefixyAA6PromptVSg_tF
Any advice?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hello
It seems the Content Tagging model doesn't obey when I define the type of tag I want in the instructions parameter; the output is always the main topics.
The only way to get other types of tags, like emotions, is to use Generable + Guided types. The documentation says using instructions is recommended but not mandatory.
Maybe I'm setting the instructions wrongly, but take a look at the attached snapshot. I copied the definition for tagging emotions from the official documentation. The upper example uses Generable and it works, but in the example at the bottom I set the same description of emotions as the instructions and it doesn't work. I tried other statements, more and less verbose, and it never outputs emotions.
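For reference, this is roughly the Generable setup that does return emotions for me (a minimal sketch; the property name, guide text, and sample prompt are my own):

import FoundationModels

// Minimal sketch: the content-tagging model plus a Generable type guided toward emotions.
@Generable
struct EmotionTags {
    @Guide(description: "The main emotions expressed in the text")
    let emotions: [String]
}

let model = SystemLanguageModel(useCase: .contentTagging)
let session = LanguageModelSession(model: model)
let response = try await session.respond(
    to: "I finally finished the marathon and I can't stop smiling!",
    generating: EmotionTags.self
)
print(response.content.emotions)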
Could you provide a working example using instructions? Is the current version of the model not working with instructions?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hey dear developers!
This post is meant for future Siri updates and improvements, and also for wishes, so that everyone in this forum can share their opinions and ideas. Please stay friendly and have fun! I had already thought about developing a demo app to demonstrate my idea for a better Siri.
One change of many:
Wish Update: Siri's language recognition capabilities have been significantly enhanced. Instead of manually setting the language, Siri can now automatically recognize the language you intend to use, making language switching much more efficient. Simply speak the language you want to communicate in, and Siri will automatically recognize it and respond accordingly. Whether you speak English, German, or Japanese, Siri will respond in the language you choose.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Tags:
iPhone
Siri Event Suggestions Markup
Siri and Voice
Apple Intelligence
In the playground I'm trying to bias my language model to use a tool I registered, but I don't see it actually calling the tool. I'm following the developer video on the landmarks itinerary generation tutorial almost verbatim. Is this a prompt-engineering thing I'm missing? Or is it possible that I'm wiring up my tool wrong?
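This is roughly how I'm wiring it up (a minimal sketch with placeholder names, assuming the ToolOutput-based Tool API shown in the landmarks tutorial; later seeds may use a different return type):

import FoundationModels

// Minimal sketch of the tool wiring (placeholder names; follows the
// landmarks itinerary tutorial's pattern).
struct FindRestaurantsTool: Tool {
    let name = "findRestaurants"
    let description = "Finds restaurants near a given landmark."

    @Generable
    struct Arguments {
        @Guide(description: "The landmark to search near")
        let landmark: String
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        // A real implementation would query a data source here.
        ToolOutput("Three highly rated restaurants near \(arguments.landmark).")
    }
}

let session = LanguageModelSession(
    tools: [FindRestaurantsTool()],
    instructions: "Use the findRestaurants tool whenever the user asks about food."
)
let answer = try await session.respond(to: "Where should I eat near the Golden Gate Bridge?")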
I’ve been testing silent Siri engagement via typing on iOS 18 and also on iOS 26 beta 1 and beta 2. While normal typing works perfectly in type-to-Siri mode, I’ve noticed that swipe-to-type gestures don’t work within Siri’s input field. Interestingly, you still feel the usual haptic feedback associated with swipe typing, but no text appears in the Siri text box. Swipe-to-type continues to work flawlessly in other apps like Messages and Notes, so this seems to be an issue specific to Siri’s typing input handler in these betas. Hopefully, it will be fixed in the next release because swipe typing is essential to my silent Siri workflow.
Topic:
Machine Learning & AI
SubTopic:
Core ML
I am working on a lung cancer scanning app for iOS with a Core ML model, and when I test the app on a physical device, the model returns the same prediction 100% of the time. I even changed the names around and it still resulted in the same case. I have listed my labels as cases, and it's just stuck on the same case (case 1).
My code is below:
https://github.com/ShivenKhurana1/Detect-to-Protect-App/blob/main/DetectToProtect/SecondView.swift
I couldn't add the code here as it was too long, so I hope the GitHub link is fine!
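For debugging, this is roughly the check I'm adding to see whether the confidences actually change between images (a minimal sketch; the names are placeholders and the real code is in the repo above):

import CoreML
import Vision
import UIKit

// Minimal sketch: print every class confidence so a "stuck" prediction is
// easy to spot, instead of only switching on the top label.
func printConfidences(for image: UIImage, mlModel: MLModel) throws {
    guard let cgImage = image.cgImage else { return }
    let vnModel = try VNCoreMLModel(for: mlModel)
    let request = VNCoreMLRequest(model: vnModel)
    let handler = VNImageRequestHandler(cgImage: cgImage)
    try handler.perform([request])

    let observations = request.results as? [VNClassificationObservation] ?? []
    for observation in observations.sorted(by: { $0.confidence > $1.confidence }) {
        print(observation.identifier, observation.confidence)
    }
}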
Hey
I tried using a few regular expressions and they all fail with an error:
Unhandled error streaming response: A generation guide with an unsupported pattern was used.
Is there a list of supported features? I don't see it in the docs, and the API takes a Regex.
Anything with e.g. [A-Z] fails.
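For reference, this is roughly what I'm trying (a minimal sketch; the struct and pattern are simplified stand-ins for my real ones):

import FoundationModels

// Minimal sketch of the failing setup: a pattern guide containing a character
// class like [A-Z] triggers the "unsupported pattern" error for me.
@Generable
struct TrackingCode {
    @Guide(.pattern(#/[A-Z]{3}-[0-9]{4}/#))
    let code: String
}

let session = LanguageModelSession()
let response = try await session.respond(
    to: "Generate a sample tracking code.",
    generating: TrackingCode.self
)
print(response.content.code)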
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Can't import data into a Create ML word tagging project.
The training data is 100% correct, I guarantee it.
I mean, look, this one has only one entry in it:
[
  {
    "tokens": ["a", "august", "gruters"],
    "labels": ["BUILDER", "BUILDER", "BUILDER"]
  }
]
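As a workaround, roughly the same JSON can be used to train programmatically with the CreateML framework instead of importing it into the Create ML app (a minimal sketch; the file paths are placeholders):

import CreateML
import Foundation

// Minimal sketch (placeholder paths): train an MLWordTagger directly from the
// same JSON file instead of importing it into the Create ML app.
let trainingURL = URL(fileURLWithPath: "/path/to/builders.json")
let table = try MLDataTable(contentsOf: trainingURL)
let tagger = try MLWordTagger(trainingData: table,
                              tokenColumn: "tokens",
                              labelColumn: "labels")
try tagger.write(to: URL(fileURLWithPath: "/path/to/BuilderTagger.mlmodel"))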
Topic:
Machine Learning & AI
SubTopic:
Create ML
I've encountered a few times when the answer gets "stuck" (I am now on beta 6).
This is an example.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hello,
I am studying macOS 26 Apple Intelligence features.
I have created a basic Swift program with Xcode. This program sends prompts to FoundationModels.LanguageModelSession.
It works fine, but this model is not trained for programming or code completion.
Xcode has an AI code completion feature. It is called the "Predictive Code Completion" model.
So, are there multiple on-device models on macOS 26?
Are there others?
Is there a way for me to send prompts to this "Predictive Code Completion" model from my program?
Thanks
While testing the "Bringing advanced speech-to-text capabilities to your app" sample app, which demonstrates the use of the iOS 26 SpeechAnalyzer, I noticed that the language model for the English locale had presumably already been downloaded. Upon checking the documentation for AssetInventory, I found that the language model can indeed be preinstalled on the system.
Can someone from the dev team share more info about what assets are preinstalled by the system? For example, can we safely assume that the English language model will almost certainly be already preinstalled by the OS if the phone has the English locale?
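In the meantime, this is roughly the runtime check I'm relying on instead of assuming (a minimal sketch; the option sets are left empty for brevity and the function name is my own):

import Speech

// Minimal sketch: check whether the transcription assets for a locale are
// already installed (possibly preinstalled by the OS), and request them if not.
func ensureTranscriberAssets(for locale: Locale) async throws {
    let transcriber = SpeechTranscriber(locale: locale,
                                        transcriptionOptions: [],
                                        reportingOptions: [],
                                        attributeOptions: [])

    let installed = await SpeechTranscriber.installedLocales
    if installed.contains(where: { $0.identifier(.bcp47) == locale.identifier(.bcp47) }) {
        return // assets already on device, whether preinstalled or previously downloaded
    }

    // Not installed yet: download and install the supporting assets.
    if let request = try await AssetInventory.assetInstallationRequest(supporting: [transcriber]) {
        try await request.downloadAndInstall()
    }
}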