Did something change in face detection / the Vision framework on iOS 15?
Using VNDetectFaceLandmarksRequest and reading the VNFaceLandmarkRegion2D to detect eyes no longer works on iOS 15 the way it did before. I am running the exact same code on an iOS 14 device and an iOS 15 device, and the coordinates are different, as seen in the screenshot.
Any ideas?
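For comparison, here is a minimal sketch (the image and its orientation are assumed) of how the eye landmarks are read; printing both the normalized points and the image-space points on each OS version makes it easier to see which conversion step changed:
import Vision
func logEyeLandmarks(in cgImage: CGImage) {
    let request = VNDetectFaceLandmarksRequest { request, _ in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            guard let leftEye = face.landmarks?.leftEye else { continue }
            // normalizedPoints are relative to the face bounding box
            print("left eye (normalized):", leftEye.normalizedPoints)
            // pointsInImage(imageSize:) converts to image coordinates
            let size = CGSize(width: cgImage.width, height: cgImage.height)
            print("left eye (image space):", leftEye.pointsInImage(imageSize: size))
        }
    }
    // If the default request revision changed between OS releases, pinning
    // request.revision to a specific VNDetectFaceLandmarksRequestRevision
    // constant can help rule that out.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}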
Dear Apple Foundation Models Development Team,
I am a developer integrating Apple Foundation Models (AFM) into my app and encountered the exceededContextWindowSize error when exceeding the 4096-token limit.
Proposal:
I suggest Apple develop a tool to estimate the token count of a prompt before it is sent to the model. This tool could be integrated into the FoundationModels framework for ease of use.
Benefits:
A token estimation tool would help developers manage the context window limit and optimize performance. I hope Apple considers this proposal soon.
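In the meantime, a rough client-side heuristic is possible (this is purely an illustration, not an Apple API; the real tokenizer will count differently):
import Foundation

// Purely illustrative heuristic, assuming roughly 3-4 characters per token for
// English text; the model's actual tokenizer will produce different counts.
func estimatedTokenCount(for prompt: String) -> Int {
    Int((Double(prompt.count) / 3.5).rounded(.up))
}

let prompt = "…"
if estimatedTokenCount(for: prompt) > 3_500 {
    // Leave headroom below the 4096-token context window:
    // trim or summarize the prompt before calling LanguageModelSession.respond(to:).
}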
Thank you!
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
After updating to the macOS 15.2 beta, a YOLO11 object detection model exported to Core ML outputs incorrect, abnormal bounding boxes.
It also doesn't work in iOS apps built on a macOS 15.2 Mac.
The same model worked fine on macOS 14.1.
When I train a custom YOLO11 model in Python, export it to Core ML, and test it in the preview tab of the mlpackage with Xcode 16.0 on macOS 15.2, I get the result described above.
Hello everyone,
I am trying to train with Create ML version 6.0 beta (146.1), using the Image Feature Print v2 feature extractor.
I am using 100K images, roughly 4 GB in total, on my M3 Max with 48 GB of RAM (macOS 15.0 beta (24A5279h)).
The images seem to be read and visualized correctly in the Data Source section (there don't appear to be any images with corrupted data).
When I start training, everything is fine for the first 6k-7k pictures; then I receive the following error:
Failed to create CVPixelBufferPool. Width = 0, Height = 0, Format = 0x00000000
This is the first time I am using it, so I don't have much experience with it.
Could you help me understand what the problem could be?
Thanks a lot
Hi, unfortunately I am not able to verify this, but I remember that some time ago I was able to create Core ML models that had one (or more) inputs with an enumerated shape and one (or more) inputs with a static shape.
This was some months ago. Since then I have updated my macOS to Sequoia 15.5, and when I try to execute MLModels with this setup I get the following error:
libc++abi: terminating due to uncaught exception of type CoreML::MLNeuralNetworkUtilities::AsymmetricalEnumeratedShapesException: A model doesn't allow input features with enumerated flexibility to have unequal number of enumerated shapes, but input feature global_write_indices has 1 enumerated shapes and input feature input_hidden_states has 3 enumerated shapes.
It may make sense (though even that is debatable) to verify that all inputs with flexible enumerated shapes have the same number of possible shapes, but this should not rule out also having static-shape inputs, each with a single defined shape, alongside the flexible-shape inputs.
Hello,
I'm unable to develop for Apple Intelligence on my Mac Studio, M1 Max running macOS 26 beta 1.
The models get downloaded, and I can verify that they exist in /System/Library/AssetsV2/; however, the download progress remains stuck at 100%.
Checking console logs shows the process generativeexperiencesd reporting the following:
My device region and language is set to English (India).
Things I've already tried:
Changing language and region to English (US)
Reinstalling macOS
Trying with a different ISP via hotspot.
Also submitted as feedback (ID: FB20612561).
tensorflow-metal fails on TensorFlow versions above 2.18.1, but works fine with TensorFlow 2.18.1.
In a new python 3.12 virtual environment:
pip install tensorflow
pip install tensorflow-metal
python -c "import tensorflow as tf"
Prints error:
Traceback (most recent call last):
File "", line 1, in
File "/Users//pt/venv/lib/python3.12/site-packages/tensorflow/init.py", line 438, in
_ll.load_library(_plugin_dir)
File "/Users//pt/venv/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
Reason: tried: '/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users//pt/venv/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file)
Topic:
Machine Learning & AI
SubTopic:
General
Tags:
Developer Tools
Metal
Machine Learning
tensorflow-metal
Hi all.
My adapter model just won't invoke my tool.
The problem I am having is covered in an older post: https://developer.apple.com/forums/thread/794839?answerId=852262022#852262022
Sadly the thread dies there and no resolution is seen in that thread.
It's worth noting that I have developed an AI chatbot built around LanguageModelSession, to which I feed the exact same system prompt that I feed to my training set (pasted further down in this post). The AI chatbot works perfectly; the tool is invoked when needed. I am training the adapter model because the base model, whilst capable, doesn't produce the quality I'm looking for.
So here's the template of an item in my training set:
[
    {
        'role': 'system',
        'content': systemPrompt,
        'tools': [TOOL_DEFINITION]
    },
    {
        'role': 'user',
        'content': entry['prompt']
    },
    {
        'role': 'assistant',
        'content': entry['code']
    }
]
where TOOL_DEFINITION =
{
    'type': 'function',
    'function': {
        'name': 'WriteUbersichtWidgetToFileSystem',
        'description': 'Writes an Übersicht Widget to the file system. Call this tool as the last step in processing a prompt that generates a widget.',
        'parameters': {
            'type': 'object',
            'properties': {
                'jsxContent': {
                    'type': 'string',
                    'description': 'Complete JSX code for an Übersicht widget. This should include all required exports: command, refreshFrequency, render, and className. The JSX should be a complete, valid Übersicht widget file.'
                }
            },
            'required': ['jsxContent']
        }
    }
}
... and systemPrompt =
A conversation between a user and a helpful assistant. You are an Übersicht widget designer. Create Übersicht widgets when requested by the user.
IMPORTANT: You have access to a tool called WriteUbersichtWidgetToFileSystem. When asked to create a widget, you MUST call this tool.
### Tool Usage:
Call WriteUbersichtWidgetToFileSystem with complete JSX code that implements the Übersicht Widget API. Generate custom JSX based on the user's specific request - do not copy the example below.
### Übersicht Widget API (REQUIRED):
Every Übersicht widget MUST export these 4 items:
- export const command: The bash command to execute (string)
- export const refreshFrequency: Refresh rate in milliseconds (number)
- export const render: React component function that receives {output} prop (function)
- export const className: CSS positioning for absolute placement (string)
Example format (customize for each request):
WriteUbersichtWidgetToFileSystem({jsxContent: `export const command = "echo hello"; export const refreshFrequency = 1000; export const render = ({output}) => { return <div>{output}</div>; }; export const className = "top: 20px; left: 20px;"`})
### Rules:
- The terms "ubersicht widget", "widget", "a widget", "the widget" must all be interpreted as "Übersicht widget"
- Generate complete, valid JSX code that follows the Übersicht widget API
- When you generate a widget, don't just show JSON or code - you MUST call the WriteUbersichtWidgetToFileSystem tool
- Report the results to the user after calling the tool
### Examples:
- "Generate a Übersicht widget" → Use WriteUbersichtWidgetToFileSystem tool
- "Can you add a widget that shows the time" → Use WriteUbersichtWidgetToFileSystem tool
- "Create a widget with a button" → Use WriteUbersichtWidgetToFileSystem tool
When the script that I use to compose the full training set is executed, entry['prompt'] and entry['code'] contain the prompt and the resulting JSX code for one of the examples I'm feeding to the training session. This is repeated for about 60 such examples that I have in my sample data collection.
Thanks for any help.
Michael
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I'm not finding a lot on the Swift Assist technology announced at WWDC 2024. Does anyone know the latest status? Currently I use OpenAI's macOS app and its 'Work With...' functionality to assist with Xcode development, and this is okay (it certainly saves copying code back and forth), but it seems like AI should be able to do a lot more to help with Xcode app development.
I see what people are doing with AI in Visual Studio, Cline, Cursor, and other IDEs and tools like those, and I feel a bit left out working in Xcode. Please let me know if there are AI tools or techniques out there you use to help with your Xcode projects.
Thanks in advance!
Hi,
I was using Foundation Models in my app, and suddenly it just stopped working from one moment to the next.
To double-check, I created a small test in Playgrounds, but I’m getting the exact same error there too.
#Playground {
let session = LanguageModelSession()
let prompt = "please answer a word"
do {
let response = try await session.respond(to: prompt)
} catch {
print("error is \(error)")
}
}
error is Error Domain=FoundationModels.LanguageModelSession.GenerationError Code=-1 "(null)" UserInfo={NSMultipleUnderlyingErrorsKey=(
"Error Domain=ModelManagerServices.ModelManagerError Code=1026 \"(null)\" UserInfo={NSMultipleUnderlyingErrorsKey=(\n)}"
)}
I’m no longer able to get any response from the framework anywhere, even in a fresh project. It's been 5 days.
Has anyone else experienced this issue, or does anyone know what could be causing it?
Thanks in advance!
Tahoe 26.2 beta 1, Xcode 26.1.1, iPhone Air simulator 26.1
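One quick check that can narrow this down (a minimal sketch using the public availability API) is to ask the system model directly why it is unavailable, rather than waiting for respond(to:) to fail:
import FoundationModels

// Reports whether the on-device model is usable and, if not, the reason
// (e.g. device not eligible, Apple Intelligence not enabled, model not ready).
switch SystemLanguageModel.default.availability {
case .available:
    print("Model is available")
case .unavailable(let reason):
    print("Model unavailable: \(reason)")
}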
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hey,
I receive GenerableContent as follows:
let response = try await session.respond(to: "", schema: generationSchema)
It wraps GeneratedJSON, which seems to be private.
What is the best way to get a string / raw value out of it? I noticed it could theoretically be accessed via transcriptEntries, but that's not ideal.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I'm using Numbers to build a spreadsheet that I'm exporting as a CSV. I then import this file into Create ML to train a word tagger model. Everything has been working fine for all the models I've trained so far, but now I'm coming across a use case that has been breaking the import process: commas within the training data. This is a case that none of Apple's examples show.
My project takes Navajo text that has been tokenized by syllables and labels the parts-of-speech.
Case that works...
Raw text:
Naaltsoos yídéeshtah.
Tokens column:
Naal,tsoos, ,yí,déesh,tah,.
Labels column:
NObj,NObj,Space,Verb,Verb,VStem,Punct
Case that breaks...
Raw text:
óola, béésh łigaii, tłʼoh naadą́ą́ʼ, wáin, akʼah, dóó á,shįįh
Tokens column with tokenized text (commas quoted):
óo,la,",", ,béésh, ,łi,gaii,",", ,tłʼoh, ,naa,dą́ą́ʼ,",", ,wáin,",", ,a,kʼah,",", ,dóó, ,á,shįįh
(Create ML reports mismatched columns)
Tokens column with tokenized text (commas escaped):
óo,la,\,, ,béésh, ,łi,gaii,\,, ,tłʼoh, ,naa,dą́ą́ʼ,\,, ,wáin,\,, ,a,kʼah,\,, ,dóó, ,á,shįįh
(Create ML reports mismatched columns)
Tokens column with tokenized text (commas escape-quoted):
óo,la,\",\", ,béésh, ,łi,gaii,\",\", ,tłʼoh, ,naa,dą́ą́ʼ,\",\", ,wáin,\",\", ,a,kʼah,\",\", ,dóó, ,á,shįįh
(record not detected by Create ML)
Tokens column with tokenized text (commas escape-quoted):
óo,la,"","", ,béésh, ,łi,gaii,"","", ,tłʼoh, ,naa,dą́ą́ʼ,"","", ,wáin,"","", ,a,kʼah,"","", ,dóó, ,á,shįįh
(Create ML reports mismatched columns)
Labels column:
NSub,NSub,Punct,Space,NSub,Space,NSub,NSub,Punct,Space,NSub,Space,NSub,NSub,Punct,Space,NSub,Punct,Space,NSub,NSub,Punct,Space,Conj,Space,NSub,NSub
Sample From Spreadsheet
Solution Needed
It's simple enough to escape commas within CSV files, but the format needed by Create ML essentially combines an entire CSV record into a single column, so I end up needing a CSV record that contains a mixture of commas used as delimiters and commas used as character literals. That's where this gets complicated.
For this particular use case (which seems like it would frequently arise when training a word tagger model), how should I properly escape a comma literal?
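One possible workaround, offered as a sketch rather than a confirmed answer: Create ML's word tagger can also read a JSON data source in which the tokens and labels are arrays, so the column delimiter never collides with literal comma tokens. Assuming a file named navajo-training.json (a hypothetical name) with "tokens" and "labels" keys, training might look like this:
import CreateML
import Foundation

// navajo-training.json would contain records such as:
// [
//   { "tokens": ["óo", "la", ",", " ", "béésh"],
//     "labels": ["NSub", "NSub", "Punct", "Space", "NSub"] }
// ]
let trainingURL = URL(fileURLWithPath: "/path/to/navajo-training.json")
let trainingData = try MLDataTable(contentsOf: trainingURL)
let tagger = try MLWordTagger(trainingData: trainingData,
                              tokenColumn: "tokens",
                              labelColumn: "labels")
try tagger.write(to: URL(fileURLWithPath: "/path/to/NavajoTagger.mlmodel"))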
Topic:
Machine Learning & AI
SubTopic:
Create ML
Tags:
Natural Language
Machine Learning
Create ML
TabularData
Here's the result:
Very weird.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I am using Foundation Models for the first time and no response is being provided to me.
Code
import Playgrounds
import FoundationModels
#Playground {
let session = LanguageModelSession()
let result = try await session.respond(to: "List all the states in the USA")
print(result.content)
}
Canvas Output
What I did
New file
Code
Canvas refreshes but nothing happens
Am I missing a step or some setup here? Please help. Something so basic is not working, and I do not know what to do.
Running a MacBook Pro with 40 GPU cores and 16 CPU cores; iOS 26 / Xcode beta 2 / Tahoe, with 8 CPUs and 48 GB of memory allocated in a Parallels VM.
Settings for Playgrounds in Xcode
Thank you for your help in advance.
I've been following along with "App Shortcuts" development but cannot get Siri to run my Intent. The intent on its own works in Shortcuts, along with a couple of others that aren't in the AppShortcutsProvider structure.
I keep getting the following two errors, but cannot figure out why this is occurring with documentation or other forum posts.
No ConnectionContext found for 12909953344
Attempted to fetch App Shortcuts, but couldn't find the AppShortcutsProvider.
Here are the relevant snippets of code -
(1) The AppIntent definition
struct SetBrightnessIntent: AppIntent {
static var title = LocalizedStringResource("Set Brightness")
static var description = IntentDescription("Set Glass Display Brightness")
@Parameter(title: "Level")
var level: Int?
static var parameterSummary: some ParameterSummary {
Summary("Set Brightness to \(\.$level)%")
}
func perform() async throws -> some IntentResult {
guard let level = level else {
throw $level.needsValueError("Please provide a brightness value")
}
if level > 100 || level <= 0 {
throw $level.needsValueError("Brightness must be between 1 and 100")
}
// do stuff with level
return .result()
}
}
(2) The AppShortcutsProvider (defined in my iOS app target, there are no other targets)
struct MyAppShortcuts: AppShortcutsProvider {
static var shortcutTileColor: ShortcutTileColor = .grayBlue
@AppShortcutsBuilder
static var appShortcuts: [AppShortcut] {
AppShortcut(
intent: SetBrightnessIntent(),
phrases: [
"set \(.applicationName) brightness to \(\.$level)",
"set \(.applicationName) brightness to \(\.$level) percent"
],
shortTitle: LocalizedStringResource("Set Glass Brightness"),
systemImageName: "sun.max"
)
}
}
Does anything here look wrong? Is there some magical key that I need to specify in Info.plist to get Siri to recognize the AppShortcutsProvider?
On Xcode 16.2 and iOS 18.2 (non-beta).
Hello folks! Taking a look at https://developer.apple.com/documentation/foundationmodels, it's not clear how to use other models there.
Does anyone know if it's possible to use a model trained outside (imported) with the Foundation Models framework?
Thanks!
My iOS app supports iOS 18, and I’m using an encrypted CoreML model secured with a key generated from Xcode.
Every few months (around every 3 months), the encrypted model fails to load for both me and my users. When I investigate, I find this error:
coreml Fetching decryption key from server failed: noEntryFound("No records found"). Make sure the encryption key was generated with correct team ID
To temporarily fix it, I delete the old key, generate a new one, re-encrypt the model, and submit an app update. This resolves the issue, but only for a while.
This is a terrible experience for users and obviously not a sustainable solution.
I want to understand:
Why is this happening?
Is there a known expiration or invalidation policy for CoreML encryption keys?
How can I prevent this issue permanently?
Any insights or official guidance would be really appreciated.
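While waiting for guidance, one defensive measure (a minimal sketch, assuming the compiled encrypted model ships in the app bundle; "MyEncryptedModel" is a stand-in name) is to load the model with the async MLModel.load API so a failed key fetch surfaces as a catchable error the app can report and retry, instead of failing silently:
import CoreML

func loadEncryptedModel() async -> MLModel? {
    guard let modelURL = Bundle.main.url(forResource: "MyEncryptedModel", withExtension: "mlmodelc") else {
        return nil
    }
    do {
        // The async load fetches the decryption key from Apple's servers when needed.
        return try await MLModel.load(contentsOf: modelURL, configuration: MLModelConfiguration())
    } catch {
        // Key-fetch failures such as "noEntryFound" land here, so the app can
        // log them and offer the user a retry instead of silently breaking.
        print("Encrypted model failed to load: \(error)")
        return nil
    }
}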
I am working on an app using FoundationModels to process web pages.
I am looking to find ways to filter the input to fit within the token limits.
I have unit tests, UI tests, and the app running on an iPad in the simulator. It appears that the different configurations of the test environment affect the token limits.
That is, the same input in a unit test and UI test will hit different token limits.
Is this correct? Or is this an artifact of my test tooling?
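One way to make the behaviour comparable across targets (a sketch under the assumption that the page text is already extracted into a string) is to catch the context-window error explicitly and retry with a shorter input, so every environment exercises the same code path:
import FoundationModels

// Sketch: summarize extracted page text, trimming once if the prompt exceeds
// the model's context window.
func summarize(_ pageText: String) async throws -> String {
    let session = LanguageModelSession()
    do {
        return try await session.respond(to: "Summarize: \(pageText)").content
    } catch LanguageModelSession.GenerationError.exceededContextWindowSize {
        // Hypothetical fallback: halve the input and try again in a fresh session.
        let trimmed = String(pageText.prefix(pageText.count / 2))
        return try await LanguageModelSession().respond(to: "Summarize: \(trimmed)").content
    }
}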
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hello,
I am developing an app for the Swift Student Challenge; however, I keep encountering an error when using ClassifyImageRequest from the Vision framework in Xcode:
VTEST: error: perform(_:): inside 'for await result in resultStream' error: internalError("Error Domain=NSOSStatusErrorDomain Code=-1 \"Failed to create espresso context.\" UserInfo={NSLocalizedDescription=Failed to create espresso context.}")
It works perfectly when testing it on a physical device, and I saw on another thread that ClassifyImageRequest doesn't work on simulators. Will this cause problems with my submission to the challenge?
Thanks
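As a side note, if you want the project to keep running in the Simulator while you develop, one hedged workaround is to branch on the environment (a sketch using the Swift Vision API; the empty fallback is purely illustrative):
import Vision

// Returns classifications on device; in the Simulator, where the request can fail
// to create its inference ("espresso") context, returns an empty placeholder.
func classify(_ image: CGImage) async throws -> [ClassificationObservation] {
#if targetEnvironment(simulator)
    return []
#else
    let request = ClassifyImageRequest()
    return try await request.perform(on: image)
#endif
}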
Topic:
Machine Learning & AI
SubTopic:
General
Tags:
Swift Student Challenge
Swift
Swift Playground
Vision
Has anyone been able to run TensorFlow > 2.15 with tensorflow-metal 1.1.0 on an M3? I tried several times but was not successful. It seems like development on tensorflow-metal has paused?