I am using Foundation Models for the first time, and I am not getting any response.
Code
import Playgrounds
import FoundationModels

#Playground {
    let session = LanguageModelSession()
    let result = try await session.respond(to: "List all the states in the USA")
    print(result.content)
}
Canvas Output
(Screenshot: the canvas shows no result.)
What I did:
1. Created a new file.
2. Added the code above.
3. The canvas refreshes, but nothing happens.
Am I missing a step or some setup here? Please help; something this basic is not working and I do not know what to do.
Hardware: MacBook Pro (16-core CPU, 40-core GPU). Running iOS 26 / Xcode beta 2 / macOS Tahoe in a Parallels VM with 8 CPUs and 48 GB of memory allocated.
(Screenshot: settings for Playgrounds in Xcode.)
Thank you for your help in advance.
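One thing worth ruling out first (a minimal sketch, using the availability API that also appears in later posts on this page; note that Apple Intelligence, which Foundation Models depends on, may not be supported inside a virtual machine such as Parallels):

import Playgrounds
import FoundationModels

#Playground {
    // Check whether the on-device model can run at all before prompting.
    let model = SystemLanguageModel.default
    switch model.availability {
    case .available:
        print("Model is available")
    case .unavailable(let reason):
        // In a VM this will likely report an unavailable reason.
        print("Model unavailable: \(String(describing: reason))")
    }
}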
Testing the Foundation Models framework with a health-focused recipe-generation app. The on-device approach is appealing, but performance is rough: it takes 20+ seconds just to get a recipe name and description. The same content from the Claude API takes 4 seconds.
I know it's beta and on-device has different tradeoffs, but this is approaching unusable territory for a real-time user experience. The streaming helps psychologically but doesn't mask the underlying latency. The privacy/cost benefits are compelling, but not if users abandon the feature before it completes.
Anyone else seeing similar performance? Is this expected for beta, or are there optimization techniques I'm missing?
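One mitigation worth trying (a sketch, assuming the prewarm() API on LanguageModelSession is present in your SDK seed, and run inside an async context): prewarming asks the framework to load model resources before the first request, which can cut cold-start latency, though it will not speed up token generation itself.

let session = LanguageModelSession(
    instructions: "You generate short recipe names and descriptions."
)
session.prewarm()  // hint to load model resources ahead of the first request

let stream = session.streamResponse(to: "Suggest a heart-healthy dinner recipe.")
for try await partial in stream {
    print(partial)  // render incrementally while generation continues
}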
In working with Apple's foundation models, we often want to provide as much context as possible. However, since the model has a context size limit of 4096 tokens, is there a way to estimate the number of tokens beforehand?
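As far as I know the framework does not expose a public tokenizer, so a rough client-side heuristic is the usual fallback (a sketch; the 3.5 characters-per-token figure is an assumption that holds loosely for English and is worse for code or other languages):

// Rough estimate: English text averages roughly 3-4 characters per token.
func estimatedTokenCount(_ text: String, charactersPerToken: Double = 3.5) -> Int {
    Int((Double(text.count) / charactersPerToken).rounded(.up))
}

let context = "…your document text…"
if estimatedTokenCount(context) > 3_500 {  // leave headroom under 4096 for instructions and output
    // trim or summarize before prompting
}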
I am working on an app using FoundationModels to process web pages.
I am looking to find ways to filter the input to fit within the token limits.
I have unit tests, UI tests, and the app running on an iPad in the simulator. It appears that the different configurations of the test environment seem to affect the token limits.
That is, the same input in a unit test and UI test will hit different token limits.
Is this correct? Or is this an artifact of my test tooling?
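Rather than relying on the test environment's behavior, one option is to probe the limit empirically (a sketch, assuming the exceededContextWindowSize case on LanguageModelSession.GenerationError; session and fullText are assumed to exist, and this runs in an async throwing context):

var input = fullText
while true {
    do {
        let response = try await session.respond(to: input)
        print(response.content)
        break
    } catch let error as LanguageModelSession.GenerationError {
        guard case .exceededContextWindowSize = error else { throw error }
        input = String(input.prefix(input.count * 3 / 4))  // drop the tail and retry
    }
}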
I am watching a few WWDC sessions on Foundation Models and its usage, and it looks pretty cool.
I was wondering if it is possible to perform RAG on the user's documents on the device, and eventually on iCloud.
Let's say I have a lot of Pages documents about me, and I want the foundation model to access the information in those documents to answer questions about me.
How can this be done?
Thanks
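One possible direction (a sketch, not an official recipe): expose your own search over the documents as a Tool, so the model pulls in only the passages it needs instead of the whole corpus. The Tool protocol, @Generable, and @Guide are from FoundationModels as shown in the WWDC sessions; DocumentSearchTool and myIndex are hypothetical names, and the ToolOutput return type may differ across seeds.

import FoundationModels

struct DocumentSearchTool: Tool {
    let name = "searchDocuments"
    let description = "Searches the user's documents and returns relevant passages."

    @Generable
    struct Arguments {
        @Guide(description: "A search query describing the information needed")
        var query: String
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        // Your own index (keyword or embedding search) goes here.
        let passages = myIndex.search(arguments.query)  // hypothetical helper
        return ToolOutput(passages.joined(separator: "\n"))
    }
}

let session = LanguageModelSession(
    tools: [DocumentSearchTool()],
    instructions: "Answer questions about the user with the searchDocuments tool."
)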
I am excited to try Foundation Models during WWDC, but it doesn't work at all for me. When running on my iPad Pro M4 with iPadOS 26 seed 1, I get the following error even when running the simplest query:
let session = LanguageModelSession()
let prompt = "How are you?"
let stream = session.streamResponse(to: prompt)
for try await partial in stream {
    self.answer = partial
    self.resultString = partial
}
In the Xcode console, I see the following error:
assetsUnavailable(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "Model is unavailable", underlyingErrors: []))
I have verified that Apple Intelligence is enabled on my iPad. Any tips on how I can get it working? I have also submitted this feedback: FB17896752
I'm trying the Foundation Models framework, and when I run several sessions in a loop I get a thrown error saying I've hit a rate limit.
Are these rate limits documented? What's the best practice here?
I'm trying to run the model against new content downloaded from a web service, where a given download might contain ~200 items. They're relatively small, but there can be that many items to process in a loop.
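A defensive pattern (a sketch, assuming the rateLimited case on LanguageModelSession.GenerationError; downloadedItems stands in for your ~200-item batch, and this runs in an async throwing context) is to process items sequentially and back off when the error is thrown. For long batches you may also want to start a fresh session periodically so the growing transcript does not hit the context window.

let session = LanguageModelSession()
for item in downloadedItems {
    var done = false
    while !done {
        do {
            let response = try await session.respond(to: "Summarize: \(item)")
            print(response.content)
            done = true
        } catch let error as LanguageModelSession.GenerationError {
            guard case .rateLimited = error else { throw error }
            try await Task.sleep(for: .seconds(5))  // back off, then retry
        }
    }
}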
I've run into an issue with a small Foundation Models test using Generable: I'm getting a strange error message with this type, although simpler Generable types work.
Is this because the Generable is recursive with a property of [HTMLDiv]?
The error message is:
FoundationModels/SchemaAugmentor.swift:209: Fatal error: 'try!' expression unexpectedly raised an error: FoundationModels.GenerationSchema.SchemaError.undefinedReferences(schema: Optional("SafeResponse<HTMLDiv>"), references: ["HTMLDiv"], context: FoundationModels.GenerationSchema.SchemaError.Context(debugDescription: "Undefined types: [HTMLDiv]", underlyingErrors: []))
The code is:
import FoundationModels
import Playgrounds

@Generable
struct HTMLDiv {
    @Guide(description: "Optional named ID, useful for nicknames")
    var id: String? = nil

    @Guide(description: "Optional visible HTML text")
    var textContent: String? = nil

    @Guide(description: "Any child elements", .count(0...10))
    var children: [HTMLDiv] = []

    static var sample: HTMLDiv {
        HTMLDiv(
            id: "profileToolbar",
            children: [
                HTMLDiv(textContent: "Log in"),
                HTMLDiv(textContent: "Sign up"),
            ]
        )
    }
}

#Playground {
    do {
        let session = LanguageModelSession {
            "Your job is to generate simple HTML markup"
            "Here is an example response to the prompt: 'Make a profile toolbar':"
            HTMLDiv.sample
        }
        let response = try await session.respond(
            to: "Make a sign up form",
            generating: HTMLDiv.self
        )
        print(response.content)
    } catch {
        print(error)
    }
}
In the name of God, please allow initializing GeneratedContent from an array of key-value pairs. It’s literally the same thing KeyValuePairs uses internally, but it would let us initialize structure-like GeneratedContent from dynamic data without resorting to unsafeBitCast hacks.
I am using a contacts tool to fetch contacts from my address book, but the model isn't invoking my tool's call method. I even tried with a simple tool and the outcome is the same: my simple tool is not being invoked.
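Two things tend to matter here (a sketch; ContactsTool stands in for your existing tool type, and this runs in an async throwing context): the tool must be passed to the session's initializer, and in practice instructions that spell out when to use it make invocation much more likely.

let session = LanguageModelSession(
    tools: [ContactsTool()],
    instructions: """
        You help the user look up people. Whenever the user asks about a \
        person, call the contacts tool rather than answering from memory.
        """
)
let response = try await session.respond(to: "What is John's phone number?")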
I'm interested in using Foundation Models to act as an AI support agent for our extensive in-app documentation. We have many pages of in-app documents, which the user can currently search, but it would be great to use Foundation Models to let the user get answers to arbitrary questions.
Is this possible with the current version of Foundation Models? It seems like the way to add new context to the model is with the instructions parameter on LanguageModelSession. As I understand it, the combined instructions and prompt need to consume less than 4096 tokens.
That definitely wouldn't be enough for the amount of documentation I want the agent to be able to refer to. Is there another way of doing this, maybe as a series of recursive queries? If there is a solution based on multiple queries, should I expect this to be fast enough for interactive use?
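One multi-step approach (a sketch under the assumption that you run your own search first; searchDocs and userQuestion are hypothetical names): retrieve the top few relevant excerpts yourself, then send a single prompt that stays under the context limit. Since each query is one retrieval plus one generation, latency should stay close to that of a single respond call.

let excerpts = searchDocs(query: userQuestion, topK: 3)  // your own search index
let session = LanguageModelSession(instructions: """
    Answer questions using only the documentation excerpts provided in the \
    prompt. If the excerpts do not contain the answer, say so.
    """)
let response = try await session.respond(to: """
    Documentation excerpts:
    \(excerpts.joined(separator: "\n---\n"))

    Question: \(userQuestion)
    """)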
I have a Generable type with many elements. I am using a streamed response to incrementally process the output (Generable.PartiallyGenerated?) content.
At the end, I want to pass the final version (not partially generated) to another function.
I cannot seem to find a good way to convert from a MyGenerable.PartiallyGenerated to a MyGenerable.
Am I missing some functionality in the APIs?
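If your SDK seed includes ResponseStream.collect() (an assumption worth verifying against your headers), it consumes the stream and returns the completed response, whose content is the fully generated value rather than a snapshot:

let stream = session.streamResponse(to: prompt, generating: MyGenerable.self)
let final: MyGenerable = try await stream.collect().content
process(final)  // process is your own downstream function expecting MyGenerable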
I'm using Xcode 26 Beta 5 and get errors on any generation I try, however harmless, when wrapped in the #Playground macro.
#Playground {
    let session = LanguageModelSession()
    let topic = "pandas"
    let prompt = "Write a safe and respectful story about \(topic)."
    let response = try await session.respond(to: prompt)
    print(response.content)
}
I'm not seeing any issues on the simulator or on a device. Anyone else seeing this, or have any ideas?
Thanks for any help!
Version 26.0 beta 5 (17A5295f)
macOS 26.0 Beta (25A5316i)
Hello,
I am testing the sample project provided here: Bringing advanced speech-to-text capabilities to your app.
On both macOS 26.0 beta and iOS 26.0 beta, the app crashes immediately on launch with a dyld "Symbol not found" error related to FoundationModels.framework.
It feels like the sample may have been tested primarily on newer Apple silicon devices, as I am seeing consistent crashes on an Intel MacBook and on an older iPhone.
I would appreciate any insight, confirmation, or guidance on whether this is a known limitation or if there is a workaround. Is it planned to be resolved soon?
Environment
macOS:
Device: MacBook Pro (Intel)
Processor: 2 GHz Quad-Core Intel Core i5
Graphics: Intel Iris Plus Graphics 1536 MB
Memory: 16 GB 3733 MHz LPDDR4X
OS: macOS Tahoe Version 26.0 Beta (25A5338b)
iOS:
Device: iPhone 11
Model Number: MHDD3HN/A
OS: iOS 26.0
Xcode:
Version: 26.0 beta 3 (17A5276g)
Crash (macOS)
Abort signal received. Excerpt from crash dump:
dyld`__abort_with_payload:
0x7ff80e3ad4a0 <+0>: movl $0x2000209, %eax
0x7ff80e3ad4a5 <+5>: movq %rcx, %r10
0x7ff80e3ad4a8 <+8>: syscall
-> 0x7ff80e3ad4aa <+10>: jae 0x7ff80e3ad4b4
Console:
dyld[9819]: Symbol not found: _$s16FoundationModels20LanguageModelSessionC5model10guardrails5tools12instructionsAcA06SystemcD0C_AC10GuardrailsVSayAA4Tool_pGAA12InstructionsVSgtcfC
Referenced from: /Users/userx/Library/Developer/Xcode/DerivedData/SwiftTranscriptionSampleApp-*/Build/Products/Debug/SwiftTranscriptionSampleApp.app/Contents/MacOS/SwiftTranscriptionSampleApp.debug.dylib
Expected in: /System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels
Crash (iOS)
Abort signal received. Excerpt from crash dump:
dyld`__abort_with_payload:
0x18f22b4b0 <+0>: mov x16, #0x209
0x18f22b4b4 <+4>: svc #0x80
-> 0x18f22b4b8 <+8>: b.lo 0x18f22b4d8
Console
dyld[2080]: Symbol not found: _$s16FoundationModels20LanguageModelSessionC5model10guardrails5tools12instructionsAcA06SystemcD0C_AC10GuardrailsVSayAA4Tool_pGAA12InstructionsVSgtcfC
Referenced from: /private/var/containers/Bundle/Application/.../SwiftTranscriptionSampleApp.app/SwiftTranscriptionSampleApp.debug.dylib
Expected in: /System/Library/Frameworks/FoundationModels.framework/FoundationModels
Question
Is this crash expected on Intel Macs and older iPhone models with the beta SDKs?
Is there an official statement on whether macOS 26.x releases support Intel, or does support exist only until macOS 26.1?
Any suggested workarounds for testing this sample project on current hardware?
Is this a known limitation for the 26.0 beta, and if so, should we expect a fix in 26.0 or only in subsequent releases?
Attaching screenshots for reference.
Thank you in advance.
Hi, I am a new iOS developer trying to learn to integrate the Apple Foundation Models framework.
my set up is:
Mac M1 Pro
MacOS 26 Beta
Version 26.0 beta 3
Apple Intelligence & Siri --> On
Here is the code:
func generate() {
    Task {
        isGenerating = true
        output = "⏳ Thinking..."
        do {
            let session = LanguageModelSession(instructions: """
                Extract time from a message. Example
                Q: Golfing at 6PM
                A: 6PM
                """)
            let response = try await session.respond(to: "Go to gym at 7PM")
            output = response.content
        } catch {
            output = "❌ Error: \(error)"
            print(output)
        }
        isGenerating = false
    }
}
and I get this error:
guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "Prompt may contain sensitive or unsafe content", underlyingErrors: [Asset com.apple.gm.safety_embedding_deny.all not found in Model Catalog]))
Can you help me get through this?
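The underlying error here points at a missing safety asset rather than your prompt content, which may clear up once Apple Intelligence finishes downloading its model assets. Independent of that, a sketch for telling guardrail failures apart from other generation errors (assuming the guardrailViolation case on LanguageModelSession.GenerationError):

do {
    let response = try await session.respond(to: "Go to gym at 7PM")
    output = response.content
} catch let error as LanguageModelSession.GenerationError {
    if case .guardrailViolation = error {
        output = "Blocked by safety guardrails; try rewording the prompt."
    } else {
        output = "Generation failed: \(error)"
    }
}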
Hi
For certain tasks, such as qualitative analysis or tagging, it is advisable to give the AI the option to respond with a joker / wild-card answer when it has difficulty tagging or scoring. For instance, you can include this slot in the prompt as follows:
output must be "no data to score" when there is not enough information to score.
In the absence of these kinds of slots, the AI tends to provide a solution even when there is insufficient information.
Foundation Models is said to work best with simple prompts, so I wonder: is it recommended to keep this slot even though it adds verbosity? Is the best place for it the description of a guided attribute? Any other tips?
Another use case is when you want the AI to stick to the information provided in the prompt and not draw on its training data. What is the best approach for this purpose?
Thanks in advance for any suggestion.
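One way to encode the wild-card slot in the schema itself rather than in prompt prose (a sketch; the type and property names are hypothetical): make the guided property optional and state in its guide when to leave it empty, then put the "use only the provided text" constraint in the session instructions.

@Generable
struct TagResult {
    @Guide(description: "The assigned tag, or nil when the text contains no information to tag")
    var tag: String?

    @Guide(description: "A brief justification quoting only the provided text, never outside knowledge")
    var rationale: String
}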
Hello!
I'm following the Foundation Models adapter training guide (https://developer.apple.com/apple-intelligence/foundation-models-adapter/) on my NVIDIA DGX Spark box. I'm able to train on my own data but the example notebook fails when I try to export the artifact as an fmadapter. I get the following error for the code block I'm trying to run. I haven't touched any of the code in the export folder. I tried exporting it on my Mac too and got the same error as well (given below). Would appreciate some more clarity around this. Thank you.
Code Block:
from export.export_fmadapter import Metadata, export_fmadapter

metadata = Metadata(
    author="3P developer",
    description="An adapter that writes play scripts.",
)

export_fmadapter(
    output_dir="./",
    adapter_name="myPlaywritingAdapter",
    metadata=metadata,
    checkpoint="adapter-final.pt",
    draft_checkpoint="draft-model-final.pt",
)
Error:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[10], line 1
----> 1 from export.export_fmadapter import Metadata, export_fmadapter
3 metadata = Metadata(
4 author="3P developer",
5 description="An adapter that writes play scripts.",
6 )
8 export_fmadapter(
9 output_dir="./",
10 adapter_name="myPlaywritingAdapter",
(...) 13 draft_checkpoint="draft-model-final.pt",
14 )
File /workspace/export/export_fmadapter.py:11
8 from typing import Any
10 from .constants import BASE_SIGNATURE, MIL_PATH
---> 11 from .export_utils import AdapterConverter, AdapterSpec, DraftModelConverter, camelize
13 logger = logging.getLogger(__name__)
16 class MetadataKeys(enum.StrEnum):
File /workspace/export/export_utils.py:15
13 import torch
14 import yaml
---> 15 from coremltools.libmilstoragepython import _BlobStorageWriter as BlobWriter
16 from coremltools.models.neural_network.quantization_utils import _get_kmeans_lookup_table_and_weight
17 from coremltools.optimize._utils import LutParams
ModuleNotFoundError: No module named 'coremltools.libmilstoragepython'
Hi,
I have trained a basic adapter using the adapter training toolkit. I am trying a very basic example of loading it and running inference with it, but am getting the following error:
Passing along InferenceError::inferenceFailed::loadFailed::Error Domain=com.apple.TokenGenerationInference.E5Runner Code=0 "Failed to load model: ANE adapted model load failure: createProgramInstanceWithWeights:modelToken:qos:baseModelIdentifier:owningPid:numWeightFiles:error:: Program load new instance failure (0x170006)." UserInfo={NSLocalizedDescription=Failed to load model: ANE adapted model load failure: createProgramInstanceWithWeights:modelToken:qos:baseModelIdentifier:owningPid:numWeightFiles:error:: Program load new instance failure (0x170006).} in response to ExecuteRequest
Any ideas / direction?
For testing I am including the .fmadapter file inside the app bundle. This is where I load it:
@State private var session: LanguageModelSession?

func loadAdapter() async throws {
    if let assetURL = Bundle.main.url(forResource: "qasc---afm---4-epochs-adapter", withExtension: "fmadapter") {
        print("Asset URL: \(assetURL)")
        let adapter = try SystemLanguageModel.Adapter(fileURL: assetURL)
        let adaptedModel = SystemLanguageModel(adapter: adapter)
        session = LanguageModelSession(model: adaptedModel)
        print("Loaded adapter and updated session")
    } else {
        print("Asset not found in the main bundle.")
    }
}
This seems to work fine, as I get to the log Loaded adapter and updated session. However, when the inference code below runs, I get the aforementioned error:
func sendMessage(_ msg: String) {
    self.loading = true
    if let session = session {
        Task {
            do {
                let modelResponse = try await session.respond(to: msg)
                DispatchQueue.main.async {
                    self.response = modelResponse.content
                    self.loading = false
                }
            } catch {
                print("Error: \(error)")
                DispatchQueue.main.async {
                    self.loading = false
                }
            }
        }
    }
}
I'm testing Foundation Models on my iPad Pro (5th gen) running iOS 26. As of late this morning, I can no longer load SystemLanguageModel.default. I'm not doing anything interesting; something as basic as the code below only reports unavailable, specifically unavailable reason: modelNotReady.
let model = SystemLanguageModel.default
// ...
switch model.availability {
case .available:
    print("LM available")
case .unavailable(let reason):
    print("unavailable reason: ", String(describing: reason))
}
I also ran the FoundationModelsTripPlanner app; same thing. It was working yesterday, and I have not modified that project either.
Why is the Model not ready? How do I fix this? Yes, I tried restarting both my laptop and iPad, no luck.