Is there any way to ensure that iOS apps we develop using Foundation Models can only be purchased/downloaded from the App Store by folks with capable devices? I would've thought there would be a Required Device Capabilities key that the App Store would hook into, but I don't see one in the documentation here: https://developer.apple.com/documentation/bundleresources/information-property-list/uirequireddevicecapabilities
The closest seems to be iphone-performance-gaming-tier, which appears to target all M1-and-above chips on iPhone & iPad. There is an ipad-minimum-performance-m1 that would more reliably ensure Foundation Models is likely available, but that doesn't help with iPhone. So far, the only path seems to be setting the Minimum Deployment target to iOS 26 and adding iphone-performance-gaming-tier as a required capability, but I'm a bit worried that capability might diverge in the future from what's Foundation Models / Apple Intelligence capable.
While I understand that the majority of apps will want to selectively add Apple Intelligence features so they remain usable by folks whose devices don't support them, the app experience I'm building doesn't make sense without Foundation Models being available, and I'd rather not have a large number of users download the app only to be told "Sorry, you're not Apple Intelligence capable."
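A runtime check is of course possible (a rough sketch using SystemLanguageModel's availability is below), but that only helps after someone has already downloaded the app:

import FoundationModels

switch SystemLanguageModel.default.availability {
case .available:
    // Safe to create a LanguageModelSession.
    break
case .unavailable(.deviceNotEligible):
    // The hardware can't run Apple Intelligence; exactly the users
    // I'd rather filter out at the App Store level.
    break
case .unavailable(let reason):
    // Apple Intelligence turned off, model still downloading, etc.
    print("Model unavailable: \(reason)")
}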
Hey,
I've been trying to write an AI agent for OpenAI's GPT-5, but using the @Generable Tool types from the FoundationModels framework, which is super awesome btw!
I'm having trouble implementing the tool calling, though. When I receive a tool call from the OpenAI api, I do the following:
1. Find the tool in my [any Tool] array via the tool name I get from the model:
if let tool = tools.first(where: { $0.name == functionCall.name }) {
// ...
}
2. Parse the arguments of the tool call via GeneratedContent(json:):
let generatedContent = try GeneratedContent(json: functionCall.arguments)
3. Pass the tool and arguments to a function that calls tool.call(arguments: arguments) and returns the tool's output type:
private func execute<T: Tool>(_ tool: T, with generatedContent: GeneratedContent) async throws -> T.Output {
let arguments = try T.Arguments.init(generatedContent)
return try await tool.call(arguments: arguments)
}
Up to this point, everything is working as expected. However, the tool's output type is any PromptRepresentable and I have no idea how to turn that into something that I can encode and send back to the model. I assumed there might be a way to turn it into a GeneratedContent but there is no fitting initializer.
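One direction I've looked at is constraining the output type myself, which only helps when a tool's Output also conforms to ConvertibleToGeneratedContent (something the Tool protocol doesn't require), so it's just a sketch of the idea rather than a general solution:

// Sketch: only works when T.Output is also ConvertibleToGeneratedContent.
private func executeAndSerialize<T: Tool>(
    _ tool: T,
    with generatedContent: GeneratedContent
) async throws -> String where T.Output: ConvertibleToGeneratedContent {
    let arguments = try T.Arguments(generatedContent)
    let output = try await tool.call(arguments: arguments)
    // jsonString should give me something I can send back to the OpenAI API.
    return output.generatedContent.jsonString
}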
Am I missing something, or is this not supported? Without a way to return the output to an external provider, it wouldn't really be possible to use the FoundationModels Tool type this way, I think. That would be unfortunate, because it's implemented so elegantly.
Thanks!
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I have exported a Pytorch model into a CoreML mlpackage file and imported the model file into my iOS project. The model is a Music Source Separation model - running prediction on audio-spectrogram blocks and returning separated audio source spectrograms.
Model produces correct results vs. desktop+GPU+Python, but the inference on iPhone 15 Pro Max is really, really slow. Using the Xcode model Performance tool I can see that the inference isn't automatically managed between compute units: all of it runs on the CPU. The Performance tool notation hints that all ops should be supported by both the GPU and the Neural Engine.
One thing to note: when initializing the model with the MLModelConfiguration option .cpuAndGPU or .cpuAndNeuralEngine, there is an error in the Xcode console:
`Error(s) occurred compiling MIL to BNNS graph:
[CreateBnnsGraphProgramFromMIL]: Failed to determine convolution kernel at location at /private/var/containers/Bundle/Application/2E3C4AFF-1FA4-4C95-AAE4-ECEBC0FB0BF9/mymss.app/mymss.mlmodelc/model.mil:2453:12
@ CreateBnnsGraphProgramFromMIL`
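For context, the initialization that triggers this looks roughly like the following (the generated model class name here is a placeholder):

import CoreML

let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine  // .cpuAndGPU produces the same error

do {
    // Placeholder name; the real class is generated from the .mlpackage.
    let model = try MyMSSModel(configuration: config)
    // ... run prediction on spectrogram blocks ...
} catch {
    print("Model load failed: \(error)")
}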
Before going back to hammering on the model in Python, are there any tips or strategies I could try in the Core ML Tools export phase, or in configuring the model for prediction on iOS?
My export toolchain is currently Linux with CoreMLTools v8.1, export target iOS16.
Hi everyone! 👋
I'm working on a C++ project using TensorFlow Lite and was wondering if anyone has a prebuilt TensorFlow Lite C++ library (libtensorflowlite) for macOS (Apple Silicon M1/M2) that they’d be willing to share.
I’m looking specifically for the TensorFlow Lite C++ API — something that lets me use tflite::Interpreter, tflite::FlatBufferModel, etc. Building it from source using Bazel on macOS has been quite challenging and time-consuming, so a ready-to-use .dylib or .a build along with the required headers would be incredibly helpful.
TensorFlow Lite version: v2.18.0 preferred
Target: macOS arm64 (Apple Silicon)
What I need:
libtensorflowlite.dylib or .a
Corresponding headers (ideally organized in a clean include/ folder)
If you have one available or know where I can find a reliable prebuilt version, I’d be super grateful. Thanks in advance! 🙏
Just tried to write a very simple test using Foundation Models, but it gave me an error like this:
"ModelManager received unentitled request. Expected entitlement com.apple.modelmanager.inference
establishment of session failed with Missing entitlement: com.apple.modelmanager.inference"
The simple code is listed below:
let session: LanguageModelSession = LanguageModelSession()
let response = try? await session.respond(to: "What is the capital of France?")
print("Response: (response)")
So what's the problem of this one?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I've been struggling to add Siri support to an application I have developed. I keep getting this error when I run it on macOS:
Failed to refresh AppShortcut parameters with error: Error Domain=Foundation._GenericObjCError Code=0 "(null)"
So I found AppIntentsSampleApp, downloaded and built it, and I get a similar, but larger, error:
Failed to refresh AppShortcut parameters with error: Error Domain=RBSServiceErrorDomain Code=1 "(originator doesn't have entitlement com.apple.private.xpc.launchd.app-server AND originator doesn't have entitlement com.apple.assertiond.system-shell AND originator doesn't have entitlement com.apple.runningboard.launchprocess)" UserInfo={NSLocalizedFailureReason=(originator doesn't have entitlement com.apple.private.xpc.launchd.app-server AND originator doesn't have entitlement com.apple.assertiond.system-shell AND j
And it goes on and on.
What am I missing? I'm using Xcode 16. I don't see an option to add a Siri framework. I have tried adding both the Intents and App Intents frameworks, which does not seem to make a difference.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
When I use ChatGPT in Xcode, the following error is displayed:
It was working fine before, but it suddenly started failing like this without any configuration changes. Why?
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Dear Apple Developer Team,
I am writing to request the addition of GS1 DataBar Stacked (both regular and expanded variants) to the barcode symbologies supported by the Vision framework (VNBarcodeSymbology) and VisionKit's DataScannerViewController.
Currently, Vision supports several GS1 DataBar formats, such as:
VNBarcodeSymbology.gs1DataBar
VNBarcodeSymbology.gs1DataBarExpanded
VNBarcodeSymbology.gs1DataBarLimited
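For illustration, a Vision request today can only be restricted to the existing DataBar constants; there is no Stacked variant to add here (simplified example):

import Vision

let request = VNDetectBarcodesRequest { request, error in
    guard let barcodes = request.results as? [VNBarcodeObservation] else { return }
    for barcode in barcodes {
        print(barcode.symbology, barcode.payloadStringValue ?? "")
    }
}
request.symbologies = [.gs1DataBar, .gs1DataBarExpanded, .gs1DataBarLimited]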
However, GS1 DataBar Stacked is widely used in industries such as retail, pharmaceuticals, and logistics, where space constraints prevent the use of the standard GS1 DataBar format. Many businesses rely on this symbology to encode GTINs and other product data, but Apple's barcode scanning API does not explicitly support it.
Why This Feature Matters:
Essential for Small Packaging: GS1 DataBar Stacked is commonly used on small product labels where a standard linear barcode does not fit.
Widespread Industry Adoption: Many point-of-sale (POS) systems and inventory management tools require this symbology.
Improves iOS Adoption for Enterprise Use: Adding support would make Apple’s Vision framework a more viable solution for businesses that currently rely on third-party barcode scanning SDKs.
Feature Request:
Please add GS1 DataBar Stacked and GS1 DataBar Expanded Stacked to the recognized symbologies in:
VNBarcodeSymbology (for Vision framework)
DataScannerViewController (for VisionKit)
This addition would enhance the versatility of Apple’s barcode scanning tools and reduce the need for third-party libraries.
I appreciate your consideration of this request and would be happy to provide more details or test implementations if needed.
Thank you for your time and support!
Best regards
In this thread, I asked about adding parameters to App Shortcuts. The conclusion that I've drawn so far is that for App Shortcuts, there cannot be any parameters in the prompt, otherwise the system cannot find the AppShortcutsProvider. While this is fine for Shortcuts and non-voice interaction, I'd like to find a way to add parameters to the prompt. Here is the scenario:
My app controls a device that displays some content on "pages." The pages are defined in an AppEnum, which I use for Shortcuts integration via App Intents. The App Intent functions as expected, and is able to change the page based on the user selection within Shortcuts (or it prompts for one if run via the App Shortcut). What I'd like to do is allow the user to be able to say "Siri, open <my app> with <page>."
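For concreteness, the shape I'm after is roughly the following (the type names are placeholders mirroring what I already have):

import AppIntents

// Placeholder types mirroring my existing AppEnum and App Intent.
enum Page: String, AppEnum {
    case home, settings

    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Page"
    static var caseDisplayRepresentations: [Page: DisplayRepresentation] = [
        .home: "Home", .settings: "Settings"
    ]
}

struct OpenPageIntent: AppIntent {
    static var title: LocalizedStringResource = "Open Page"

    @Parameter(title: "Page")
    var page: Page

    func perform() async throws -> some IntentResult {
        // ... tell the device to switch pages ...
        return .result()
    }
}

// The part I'm unsure about: a parameterized phrase on the App Shortcut itself.
struct MyAppShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: OpenPageIntent(),
            phrases: ["Open \(\.$page) in \(.applicationName)"],
            shortTitle: "Open Page",
            systemImageName: "rectangle.stack"
        )
    }
}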
So far, the closest I've come to understanding how this works is through the .intentdefinition file you can create (and SiriKit in general); however, the part that really confused me there is a button in the File Editor that says "Convert to App Intent." To me, this means that I should be able to use the App Intent I've already authored and hook that into Siri, rather than making an entirely new function/code block that does exactly the same thing. Ideally, that's what I want to do.
What's the right way to define this behavior?
p.s. If I had to pick an intent schema in the context of AssistantSchemas, I'd say it's closest to the "Open File" one, if that helps. I'd ultimately like to make the "pages" user-customizable so in the long run, that would be what I'd do.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Tags:
Siri and Voice
SiriKit
App Intents
Apple Intelligence
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Apple Intelligence.
Can I integrate writing tools in my own text editor?
UITextView, NSTextView, and SwiftUI TextEditor automatically get Writing Tools on devices that support Apple Intelligence. For custom text editors, check out Enhancing your custom text engine with Writing Tools.
Given that Foundation Models are on-device, how will Apple update the models over time? And how should we test our app against the model updates?
Model updates are in sync with OS updates. As for testing with updated models, watch our WWDC session about prompt engineering and safety, and read the Human Interface Guidelines to understand best practices in prompting the on-device model.
What is the context size of a session in Foundation Models Framework? How to handle the error if a session runs out of the context size?
Currently the context size is about 4,000 tokens. If it’s exceeded, developers can catch the .exceededContextWindowSize error at runtime. As discussed in one of our WWDC25 sessions, when the context window is exceeded, one approach is to trim and summarize a transcript, and then start a new session.
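A minimal sketch of catching that error and starting over (the summarization step is elided):

import FoundationModels

func respondWithRecovery(to prompt: String) async throws -> String {
    let session = LanguageModelSession()
    do {
        return try await session.respond(to: prompt).content
    } catch LanguageModelSession.GenerationError.exceededContextWindowSize {
        // Trim and summarize the previous transcript here, then seed a
        // fresh session with that summary before retrying.
        let freshSession = LanguageModelSession()
        return try await freshSession.respond(to: prompt).content
    }
}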
Can I do image generation using the Foundation Models Framework or is only text generation supported?
Foundation Models do not generate images, but you can use the Foundation Models framework to generate prompts for ImageCreator in the Image Playground framework. Developers can also take advantage of Tools in Foundation Models framework, if appropriate for their app.
My app currently uses a third party server-based LLM. Can I use the Foundation Models Framework as well in the same app? Any guidance here?
The Foundation Models framework is optimized for a subset of tasks like summarization, extraction, classification, and tagging. It’s also on-device, private, and free. But at 3 billion parameters it isn’t designed for advanced reasoning or world knowledge, so for some tasks you may still want to use a larger server-based model.
Should I use the AFM for my language translation features given it does text translation, or is the Translation API still the preferred approach?
The Translation API is still preferred. Foundation Models is great for tasks like text summarization and data generation. It’s not recommended for general world knowledge or translation tasks.
Will the TranslationSession class introduced in iOS 18 get any new improvements in performance or reliability with the new live translation abilities in iOS/iPadOS/macOS 26? Essentially, will we get access to live translation in a similar way, and if so, how?
There's new API in LiveCommunicationKit to take advantage of live translation in your communication apps.
The Translate framework is using the same models as used by Live Communication and can be combined with the new SpeechAnalyzer API to translate your own audio.
How do I set a default value for an App Intent parameter that is otherwise required?
You can implement a default value as part of your parameter declaration by using the @Parameter(defaultValue:) form of the property wrapper.
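A minimal sketch (hypothetical intent, and assuming the default: spelling of the argument label):

import AppIntents

struct StartTimerIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Timer"

    // Because this parameter has a default, the system won't prompt for it.
    @Parameter(title: "Minutes", default: 5)
    var minutes: Int

    func perform() async throws -> some IntentResult {
        // ... start a timer for `minutes` ...
        return .result()
    }
}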
How long can an App Intent run?
On macOS there is no limit to how long app intents can run. On iOS, there is a limit of 30 seconds. This time limit is paused when waiting for user interaction.
How do I vary the options for a specific parameter of an App Intent, not just based on the type?
Implement a DynamicOptionsProvider on that parameter. You can add suggestedEntities() to suggest options.
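For example, a hypothetical options provider for a string parameter might look like this:

import AppIntents

struct PageOptionsProvider: DynamicOptionsProvider {
    func results() async throws -> [String] {
        ["Home", "Now Playing", "Settings"]
    }
}

struct OpenDevicePageIntent: AppIntent {
    static var title: LocalizedStringResource = "Open Device Page"

    @Parameter(title: "Page", optionsProvider: PageOptionsProvider())
    var page: String

    func perform() async throws -> some IntentResult {
        return .result()
    }
}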
What if there is not a schema available for what my app is doing?
If an app intent schema matching your app’s functionality isn’t available, take a look to see if there’s a SiriKit domain that meets your needs, such as for media playback and messaging apps. If your app’s functionality doesn’t match any of the available schemas, you can define a custom app intent, and integrate it with Siri by making it an App Shortcut. Please file enhancement requests via Feedback Assistant for new App intent schemas that would benefit your app.
Are you adding any new app intent domains this year?
In addition to all the app intent domains we announced last year, this year at WWDC25 we announced that Visual Intelligence will be added to iOS 26 and macOS Tahoe.
When my App Intent doesn't show up as an action in Shortcuts, where do I start in figuring out what went wrong?
App Intents are statically extracted. You can check the ExtractMetadata info in Xcode's build log.
What do I need to do to make sure my App Intents work well with Spotlight+?
Check out our WWDC25 sessions on App Intents, including Explore new advances in App Intents and Develop for Shortcuts and Spotlight with App Intents. Mostly, make sure that your intent can run from the parameter summary alone: there should be no required parameters without default values that aren't already in the parameter summary.
Does Spotlight+ on macOS support App Shortcuts?
Not directly, but it shows the App Intents your App Shortcuts are sitting on top of.
I’m wondering if the on-device Foundation Models framework API can be integrated into an app to act strictly as an app in-universe AI assistant, responding only within the boundaries of the app’s fictional context. Is such controlled, context-limited interaction supported?
FM API runs inside the process of your app only and does not automatically integrate with any remaining part of the system (unless you choose to implement your own tool and utilize tool calling). You can provide any instructions and prompts you want to the model.
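For example, confining the model to an in-universe persona is a matter of the instructions and prompts the app supplies (a rough sketch with a made-up persona):

import FoundationModels

// The persona and wording here are made up for illustration.
let session = LanguageModelSession(instructions: """
    You are the ship's computer in this game. Only answer questions about \
    the ship, its crew, and the current mission. Politely refuse anything else.
    """)

func ask(_ question: String) async throws -> String {
    try await session.respond(to: question).content
}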
If a country does not support Apple Intelligence yet, can the Foundation Models framework work?
FM API works on Apple Intelligence-enabled devices in supported regions, and won't work in regions where Apple Intelligence is not yet supported.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Almost everywhere else you see Apple Intelligence, you get to select whether it's on device, private cloud compute, or ChatGPT. Is there a way to do that via code in the Foundation Model? I searched through the docs and couldn't find anything, but maybe I missed it.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Overview
I'm experiencing a critical issue where TensorFlow-metal and PyArrow seem to be incompatible when installed together in the same environment. Whenever both packages are present, TensorFlow crashes and the kernel dies during execution.
Environment Details
macOS Version: 15.3.2
Mac Model: MacBook Pro Max M3
Python Version: 3.11
TensorFlow Version: 2.19
PyArrow Version: 19.0.0
Issue Description:
When both TensorFlow-metal and PyArrow are installed in the same Python environment, any attempt to use TensorFlow results in immediate kernel crashes. The issue appears to be a compatibility problem between these two packages rather than a problem with either package individually.
Steps to Reproduce
Create a new Python environment:
conda create -n tf-metal python=3.11
Install TensorFlow-metal:
pip install tensorflow tensorflow-metal
Install PyArrow: pip install pyarrow
Run the following minimal example:
import numpy as np
import tensorflow as tf

# Create a simple model
model = tf.keras.Sequential([
tf.keras.layers.Input(shape=(2,)),
tf.keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')
model.summary() # This works fine
# Generate some dummy data
X = np.random.random((100, 2))
y = np.random.random((100, 1))
# The crash happens exactly at this line
model.fit(X, y, epochs=5, batch_size=32) # CRASH: Kernel dies here
Result: Kernel crashes with no error message
What I've Tried
Reinstalling both packages in different orders
Using different versions of both packages
Creating isolated environments
Checking system logs for additional error information
The only workaround I've found is to use separate environments for each package, which isn't practical for my workflow as I need both libraries for my data processing and machine learning pipeline.
Questions
Has anyone else encountered this specific compatibility issue? Are there known workarounds that allow both packages to coexist? Is this a known issue that's being addressed in upcoming releases?
Any insights, suggestions, or assistance would be greatly appreciated. I'm happy to provide any additional information that might help diagnose this problem. Thank you in advance for your help!
Topic:
Machine Learning & AI
SubTopic:
Core ML
As described in the title, a model I have built works fine on iPhone 15 (A16 Bionic), but it does not run on iPhone 16 (A18) and fails with the following error message.
E5RT encountered an STL exception. msg = MILCompilerForANE error: failed to compile ANE model using ANEF. Error=_ANECompiler : ANECCompile() FAILED.
E5RT: MILCompilerForANE error: failed to compile ANE model using ANEF. Error=_ANECompiler : ANECCompile() FAILED (11)
It consumes 1.5 to 1.6 GB of RAM while loading the model, after which consumption drops to less than 100 MB on both the iPhone 15 and 16. After that, only on the iPhone 16, the above error appears in the Xcode log, memory consumption surges to 5 to 6 GB, and the system kills the app. It works well only on the iPhone 15.
This model is built with Core ML Tools. So far I have tried deployment targets iOS 16 through 18 and the compute units CPU_AND_NE and ALL, but none of these has resolved the issue. What kind of fix should I try? My current export settings are:
minimum_deployment_target = ct.target.iOS18
compute_units = ct.ComputeUnit.ALL
compute_precision = ct.precision.FLOAT16
Hi everyone 😊, I want to implement facial recognition in my app. I was planning to use Create ML's image classification, but it seems like a lot of hassle to implement (the JSON file, etc.). Are there other easy-to-implement options that don't involve advanced coding? Thanks, Oliver
Topic:
Machine Learning & AI
SubTopic:
General
Hello,
I have a CoreML model and I want to convert it to a PyTorch model.
Any ideas if this is possible and if so how?
Topic:
Machine Learning & AI
SubTopic:
Core ML
We are using VNRecognizeTextRequest to detect text in documents, and we have noticed that even in some very clear, well-formatted documents there are still instances where text blocks are missed. Live Text has the same issue.
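For reference, a representative setup (simplified, not our exact code) looks like this:

import Vision

func recognizeText(in cgImage: CGImage) throws -> [String] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate
    request.usesLanguageCorrection = true

    let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
    try handler.perform([request])

    let observations = request.results ?? []
    return observations.compactMap { $0.topCandidates(1).first?.string }
}

Even with the accurate recognition level, some blocks in otherwise clean documents are not returned.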
I have a question. In China, long-pressing a picture in the Photos album can segment out the subject. Is this model an on-device model? Is there any documentation about it? Can developers use it?
While running the Apple Foundation Model in the iPhone simulator, I got this error:
IPC error: Underlying connection interrupted
What does this mean? Is it related to the foundation model?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hi everyone,
I'm trying to use VNDetectTextRectanglesRequest to detect text rectangles in an image. Here's my current code:
guard let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
    return
}

let textDetectionRequest = VNDetectTextRectanglesRequest { request, error in
    if let error = error {
        print("Text detection error: \(error)")
        return
    }
    guard let observations = request.results as? [VNTextObservation] else {
        print("No text rectangles detected.")
        return
    }
    print("Detected \(observations.count) text rectangles.")
    for observation in observations {
        print(observation.boundingBox)
    }
}
textDetectionRequest.revision = VNDetectTextRectanglesRequestRevision1
textDetectionRequest.reportCharacterBoxes = true

let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
do {
    try handler.perform([textDetectionRequest])
} catch {
    print("Vision request error: \(error)")
}
The request completes without error, but no text rectangles are detected — the observations array is empty (count = 0). Here's a sample image I'm testing with:
I expected VNTextObservation results, but I'm not getting any. Is there something I'm missing in how this API works? Or could it be a limitation of this request or revision?
Thanks for any help!
Hello, I am thinking of buying the MacBook Pro 14" with the M4 Pro, mostly for ML/AI/NLP tasks. Since I have only used Windows before, I am wondering whether it is compatible with libraries like PyTorch and TensorFlow, or whether people have experienced problems with installation... Thank you!
Topic:
Machine Learning & AI
SubTopic:
General