Has Apple made any commitment to versioning the Foundation Models on device? What if you build a feature that works great on 26.0 but they change the model or guardrails in 26.1 and it breaks your feature, is your only recourse filing Feedback or pulling the feature from the app? Will there be a way to specify a model version like in all of the server based LLM provider APIs? If not, sounds risky to build on.
Hello,
I'm unable to develop for Apple Intelligence on my Mac Studio (M1 Max) running macOS 26 beta 1.
The models get downloaded, and I can verify that they exist in /System/Library/AssetsV2/; however, the download progress remains stuck at 100%.
Checking the console logs shows the process generativeexperiencesd reporting the following:
My device region and language are set to English (India).
Things I've already tried:
- Changing language and region to English (US)
- Reinstalling macOS
- Trying with a different ISP via hotspot
After updating to the macOS 15.2 beta, a YOLO11 object detection model exported to Core ML outputs incorrect, abnormal bounding boxes.
It also doesn't work in iOS apps built on a 15.2 Mac.
The same model worked fine on macOS 14.1.
Training a custom YOLO11 model in Python, exporting it to Core ML, and testing it in the preview tab of the .mlpackage on macOS 15.2 with Xcode 16.0 produces the result described above.
Hi all.
My adapter model just won't invoke my tool.
The problem I am having is covered in an older post: https://developer.apple.com/forums/thread/794839?answerId=852262022#852262022
Sadly, that thread dies there without a resolution.
It's worth noting that I have developed an AI chatbot built around LanguageModelSession, to which I feed the exact same system prompt that I use in my training set (pasted later in this post). The AI chatbot works perfectly: the tool is invoked when needed. I am training the adapter because the base model, whilst capable, doesn't produce the quality I'm looking for.
So here's the template of an item in my training set:
[
    {
        'role': 'system',
        'content': systemPrompt,
        'tools': [TOOL_DEFINITION]
    },
    {
        'role': 'user',
        'content': entry['prompt']
    },
    {
        'role': 'assistant',
        'content': entry['code']
    }
]
where TOOL_DEFINITION =
{
    'type': 'function',
    'function': {
        'name': 'WriteUbersichtWidgetToFileSystem',
        'description': 'Writes an Übersicht Widget to the file system. Call this tool as the last step in processing a prompt that generates a widget.',
        'parameters': {
            'type': 'object',
            'properties': {
                'jsxContent': {
                    'type': 'string',
                    'description': 'Complete JSX code for an Übersicht widget. This should include all required exports: command, refreshFrequency, render, and className. The JSX should be a complete, valid Übersicht widget file.'
                }
            },
            'required': ['jsxContent']
        }
    }
}
... and systemPrompt =
A conversation between a user and a helpful assistant. You are an Übersicht widget designer. Create Übersicht widgets when requested by the user.
IMPORTANT: You have access to a tool called WriteUbersichtWidgetToFileSystem. When asked to create a widget, you MUST call this tool.
### Tool Usage:
Call WriteUbersichtWidgetToFileSystem with complete JSX code that implements the Übersicht Widget API. Generate custom JSX based on the user's specific request - do not copy the example below.
### Übersicht Widget API (REQUIRED):
Every Übersicht widget MUST export these 4 items:
- export const command: The bash command to execute (string)
- export const refreshFrequency: Refresh rate in milliseconds (number)
- export const render: React component function that receives {output} prop (function)
- export const className: CSS positioning for absolute placement (string)
Example format (customize for each request):
WriteUbersichtWidgetToFileSystem({jsxContent: `export const command = "echo hello"; export const refreshFrequency = 1000; export const render = ({output}) => { return <div>{output}</div>; }; export const className = "top: 20px; left: 20px;"`})
### Rules:
- The terms "ubersicht widget", "widget", "a widget", "the widget" must all be interpreted as "Übersicht widget"
- Generate complete, valid JSX code that follows the Übersicht widget API
- When you generate a widget, don't just show JSON or code - you MUST call the WriteUbersichtWidgetToFileSystem tool
- Report the results to the user after calling the tool
### Examples:
- "Generate a Übersicht widget" → Use WriteUbersichtWidgetToFileSystem tool
- "Can you add a widget that shows the time" → Use WriteUbersichtWidgetToFileSystem tool
- "Create a widget with a button" → Use WriteUbersichtWidgetToFileSystem tool
When the script that I use to compose the full training set is executed, entry['prompt'] and entry['code'] contain the prompt and the resulting JSX code for one of the examples I'm feeding to the training session. This is repeated for about 60 such examples that I have in my sample data collection.
Thanks for any help.
Michael
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Not finding a lot on the Swift Assist technology announced at WWDC 2024. Does anyone know the latest status? Also, currently I use OpenAI's macOS app and its 'Work With...' functionality to assist with Xcode development, and this is okay, certainly saves copying code back and forth, but it seems like AI should be able to do a lot more to help with Xcode app development.
I guess I'm looking at what people are doing with AI in Visual Studio, Cline, Cursor and other IDEs and tools like those and feel a bit left out working in Xcode. Please let me know if there are AI tools or techniques out there you use to help with your Xcode projects.
Thanks in advance!
Hi,
I was using Foundation Models in my app, and suddenly it just stopped working from one moment to the next.
To double-check, I created a small test in Playgrounds, but I’m getting the exact same error there too.
#Playground {
    let session = LanguageModelSession()
    let prompt = "please answer a word"
    do {
        let response = try await session.respond(to: prompt)
    } catch {
        print("error is \(error)")
    }
}
error is Error Domain=FoundationModels.LanguageModelSession.GenerationError Code=-1 "(null)" UserInfo={NSMultipleUnderlyingErrorsKey=(
"Error Domain=ModelManagerServices.ModelManagerError Code=1026 \"(null)\" UserInfo={NSMultipleUnderlyingErrorsKey=(\n)}"
)}
I’m no longer able to get any response from the framework anywhere, even in a fresh project. It's been 5 days.
Has anyone else experienced this issue or knows what could be causing it?
Thanks in advance!
Tahoe 26.2 beta 1, Xcode 26.1.1, iPhone Air simulator 26.1
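One thing that may help narrow this down (a minimal sketch, assuming the SystemLanguageModel availability API in the current SDK): check whether the system model reports itself as available at all, to separate an asset or download problem from a generation-time failure like the error above.

import FoundationModels

// Diagnostic sketch: distinguish an unavailable model from a failure that
// happens only at generation time.
let availability = SystemLanguageModel.default.availability
if case .unavailable(let reason) = availability {
    // Reasons include an ineligible device, Apple Intelligence being turned
    // off, or model assets that have not finished downloading.
    print("Model unavailable: \(reason)")
} else {
    print("Model reports available; the failure happens at generation time.")
}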
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Dear Apple Foundation Models Development Team,
I am a developer integrating Apple Foundation Models (AFM) into my app and encountered the exceededContextWindowSize error when exceeding the 4096-token limit.
Proposal:
I suggest Apple develop a tool to estimate the token count of a prompt before it is sent to the model. This tool could be integrated into the FoundationModels framework for ease of use.
Benefits:
A token estimation tool would help developers manage the context window limit and optimize performance. I hope Apple considers this proposal soon.
Thank you!
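In the meantime, a rough client-side guard is possible. The sketch below is only that: the characters-per-token divisor and the headroom threshold are assumptions, not documented tokenizer values.

import FoundationModels

// Rough pre-flight check, assuming roughly 3 characters per token for English
// text. Tune the divisor and threshold with generous margin for your content.
func estimatedTokenCount(for text: String) -> Int {
    text.count / 3
}

func respondIfItFits(_ prompt: String, session: LanguageModelSession) async throws -> String? {
    guard estimatedTokenCount(for: prompt) < 3_500 else {   // leave headroom under 4096
        print("Prompt likely exceeds the context window; shorten or split it first.")
        return nil
    }
    return try await session.respond(to: prompt).content
}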
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hey,
I receive GeneratedContent as follows:
let response = try await session.respond(to: "", schema: generationSchema)
It wraps GeneratedJSON, which seems to be private.
What is the best way to get a string / raw value out of it? I noticed it could theoretically be accessed via transcriptEntries but it's not ideal.
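One possible shortcut, offered as an assumption to verify against the installed SDK rather than a confirmed API: GeneratedContent should expose its JSON text directly, which would avoid going through transcriptEntries.

// Assumption: GeneratedContent exposes a jsonString property in the current SDK.
// If it doesn't in your seed, transcriptEntries remains the fallback.
// `session` and `generationSchema` come from the surrounding code above.
let response = try await session.respond(to: "", schema: generationSchema)
let raw = response.content.jsonString   // raw JSON text of the generated content
print(raw)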
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Also submitted as feedback (ID: FB20612561).
Tensorflow-metal fails on tensorflow versions above 2.18.1, but works fine on tensorflow 2.18.1
In a new python 3.12 virtual environment:
pip install tensorflow
pip install tensorflow-metal
python -c "import tensorflow as tf"
Prints error:
Traceback (most recent call last):
File "", line 1, in
File "/Users//pt/venv/lib/python3.12/site-packages/tensorflow/init.py", line 438, in
_ll.load_library(_plugin_dir)
File "/Users//pt/venv/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
Reason: tried: '/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file)
Topic:
Machine Learning & AI
SubTopic:
General
Tags:
Developer Tools
Metal
Machine Learning
tensorflow-metal
Hi, unfortunately I am not able to verify this now, but I remember that some time ago I was able to create CoreML models that had one (or more) inputs with an enumerated flexible shape and one (or more) inputs with a static shape.
This was some months ago. Since then I updated macOS to Sequoia 15.5, and when I try to execute MLModels with this setup I get the following error:
libc++abi: terminating due to uncaught exception of type CoreML::MLNeuralNetworkUtilities::AsymmetricalEnumeratedShapesException: A model doesn't allow input features with enumerated flexibility to have unequal number of enumerated shapes, but input feature global_write_indices has 1 enumerated shapes and input feature input_hidden_states has 3 enumerated shapes.
It may make sense to require that all inputs with enumerated flexible shapes declare the same number of possible shapes, but that should not rule out having static-shape inputs, each with a single defined shape, alongside the flexible-shape inputs.
We’ve encountered what appears to be a CoreML regression between macOS 26.0.1 and macOS 26.1 Beta.
In macOS 26.0.1, CoreML models run and produce correct results. However, in macOS 26.1 Beta, the same models produce scrambled or corrupted outputs, suggesting that tensor memory is being read or written incorrectly. The behavior is consistent with a low-level stride or pointer arithmetic issue — for example, using 16-bit strides on 32-bit data or other mismatches in tensor layout handling.
Reproduction
1. Install ON1 Photo RAW 2026 or ON1 Resize 2026 on macOS 26.0.1.
2. Use the newest Highest Quality resize model, which is Stable Diffusion–based and runs through CoreML.
3. Observe correct, high-quality results.
4. Upgrade to macOS 26.1 Beta and run the same operation again.
5. The output becomes visually scrambled or corrupted.
We are also seeing similar issues with another Stable Diffusion UNet model that previously worked correctly on macOS 26.0.1. This suggests the regression may affect multiple diffusion-style architectures, likely due to a change in CoreML’s tensor stride, layout computation, or memory alignment between these versions.
Notes
The affected models are exported using standard CoreML conversion pipelines.
No custom operators or third-party CoreML runtime layers are used.
The issue reproduces consistently across multiple machines.
It would be helpful to know if there were changes to CoreML’s tensor layout, precision handling, or MLCompute backend between macOS 26.0.1 and 26.1 Beta, or if this is a known regression in the current beta.
Did something change on face detection / Vision Framework on iOS 15?
Using VNDetectFaceLandmarksRequest and reading the VNFaceLandmarkRegion2D to detect eyes is not working on iOS 15 as it did before. I am running the exact same code on an iOS 14 and an iOS 15 device, and the coordinates are different, as seen in the screenshot.
Any Ideas?
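iOS releases can change the default revision of a Vision request, which changes landmark coordinates; pinning the revision explicitly is one way to test whether that is what changed here. A sketch (the specific revision constant is just an example; check supportedRevisions on each OS):

import Vision

// Pin the landmarks request to an explicit revision so results stay comparable
// across OS releases instead of following the OS default revision.
let request = VNDetectFaceLandmarksRequest()
print("Supported revisions: \(VNDetectFaceLandmarksRequest.supportedRevisions)")
request.revision = VNDetectFaceLandmarksRequestRevision3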
I am working on an app using FoundationModels to process web pages.
I am looking to find ways to filter the input to fit within the token limits.
I have unit tests, UI tests, and the app running on an iPad in the simulator. It appears that the different configurations of the test environment seem to affect the token limits.
That is, the same input in a unit test and UI test will hit different token limits.
Is this correct? Or is this an artifact of my test tooling?
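For anyone comparing environments, it may be easier to make the limit observable directly by catching the context-window error; the error case name follows what the framework reports for this situation, while the truncation strategy below is just an illustrative assumption.

import FoundationModels

// Sketch: catch the context-window error and retry once with truncated input.
// The 8_000-character cut-off is an arbitrary illustration, not a documented limit.
func respondWithTrimming(_ text: String) async throws -> String {
    do {
        return try await LanguageModelSession().respond(to: text).content
    } catch LanguageModelSession.GenerationError.exceededContextWindowSize {
        // Use a fresh session so the oversized prompt isn't carried in the transcript.
        let trimmed = String(text.prefix(8_000))
        return try await LanguageModelSession().respond(to: trimmed).content
    }
}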
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hi, the most recent version of tensorflow-metal is only available for macOS 12.0 and Python up to version 3.11. Is there any chance it could be updated with wheels for macOS 15 and Python 3.12 (which is the default version supported for TensorFlow 2.17+)? I'd note that even downgrading to Python 3.11 would not be sufficient, as the wheels only work for macOS 12.
Thanks.
Hello folks! Taking a look at https://developer.apple.com/documentation/foundationmodels, it's not clear how to use other models there.
Does anyone know if it's possible to use a model trained outside (imported) with the Foundation Models framework?
Thanks!
Has anyone been able to run TensorFlow > 2.15 with tensorflow-metal 1.1.0 on an M3? I tried several times but was not successful. It seems like development on tensorflow-metal has paused?
I'm experimenting with downloading an audio file of spoken content, using the Speech framework to transcribe it, then using FoundationModels to clean up the formatting to add paragraph breaks and such. I have this code to do that cleanup:
private func cleanupText(_ text: String) async throws -> String? {
    print("Cleaning up text of length \(text.count)...")
    let session = LanguageModelSession(instructions: "The content you read is a transcription of a speech. Separate it into paragraphs by adding newlines. Do not modify the content - only add newlines.")
    let response = try await session.respond(to: .init(text), generating: String.self)
    return response.content
}
The content length is about 29,000 characters. And I get this error:
InferenceError::inferenceFailed::Failed to run inference: Context length of 4096 was exceeded during singleExtend..
Is 4096 a reference to a max input length? Or is this a bug?
This is running on an M1 iPad Air, with iPadOS 26 Seed 1.
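The 4096 figure is the model's context window, which is shared by the instructions, the prompt, and the generated output, so a 29,000-character transcript will not fit in a single request. One workaround is to clean up the transcript in pieces; the sketch below uses an assumed chunk size and naive fixed-length splitting (a real implementation would split on sentence boundaries).

import FoundationModels

// Sketch: process a long transcript chunk by chunk to stay under the context
// window. The 6_000-character chunk size is an assumption chosen for headroom.
func cleanupLongText(_ text: String) async throws -> String {
    let instructions = "Separate the transcription into paragraphs by adding newlines. Do not modify the content - only add newlines."
    var cleaned: [String] = []
    var remaining = Substring(text)
    while !remaining.isEmpty {
        let chunk = String(remaining.prefix(6_000))
        remaining = remaining.dropFirst(chunk.count)
        // A fresh session per chunk keeps the transcript (and token use) small.
        let session = LanguageModelSession(instructions: instructions)
        let response = try await session.respond(to: chunk)
        cleaned.append(response.content)
    }
    return cleaned.joined(separator: "\n")
}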
Testing Foundation Models framework with a health-focused recipe generation app. The on-device approach is appealing but performance is rough. Taking 20+ seconds just to get recipe name and description. Same content from Claude API: 4 seconds.
I know it's beta and on-device has different tradeoffs, but this is approaching unusable territory for a real-time user experience. The streaming helps psychologically but doesn't mask the underlying latency. The privacy/cost benefits are compelling, but not if users abandon the feature before it completes.
Anyone else seeing similar performance? Is this expected for beta, or are there optimization techniques I'm missing?
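Two things that may shave some of the latency, sketched below under the assumption that the generation itself cannot be sped up: prewarm the session before the user taps generate, and stream partial output so the name and description can render early.

import FoundationModels

let session = LanguageModelSession(instructions: "You generate short, health-focused recipes.")

func prepareForGeneration() {
    // Call this while the user is still choosing ingredients, so model load
    // cost isn't paid at the moment they tap "Generate".
    session.prewarm()
}

func generateRecipe(for request: String) async throws {
    // Stream snapshots and update the UI incrementally as they arrive.
    for try await partial in session.streamResponse(to: request) {
        print(partial)
    }
}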
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Here's the result:
Very weird.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
My iOS app supports iOS 18, and I’m using an encrypted CoreML model secured with a key generated from Xcode.
Every few months (around every 3 months), the encrypted model fails to load for both me and my users. When I investigate, I find this error:
coreml Fetching decryption key from server failed: noEntryFound("No records found"). Make sure the encryption key was generated with correct team ID
To temporarily fix it, I delete the old key, generate a new one, re-encrypt the model, and submit an app update. This resolves the issue, but only for a while.
This is a terrible experience for users and obviously not a sustainable solution.
I want to understand:
Why is this happening?
Is there a known expiration or invalidation policy for CoreML encryption keys?
How can I prevent this issue permanently?
Any insights or official guidance would be really appreciated.