Foundation Models


Discuss the Foundation Models framework, which provides access to Apple’s on-device large language model that powers Apple Intelligence, helping you perform intelligent tasks specific to your app.

Foundation Models Documentation

Posts under the Foundation Models subtopic

Post · Replies · Boosts · Views · Activity

Using #Preview with a PartiallyGenerated model
I have an app that streams in data from the Foundation Model and I have a card that shows one of the outputs. I want my card to accept a partially generated model, but I keep getting a nonsensical error. The error I get on line 59 is:

Cannot convert value of type 'FrostDate.VegetableSuggestion.PartiallyGenerated' (aka 'FrostDate.VegetableSuggestion') to expected argument type 'FrostDate.VegetableSuggestion.PartiallyGenerated'

Here is my card with preview:

import SwiftUI
import FoundationModels

struct VegetableSuggestionCard: View {
    let vegetableSuggestion: VegetableSuggestion.PartiallyGenerated

    init(vegetableSuggestion: VegetableSuggestion.PartiallyGenerated) {
        self.vegetableSuggestion = vegetableSuggestion
    }

    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            if let name = vegetableSuggestion.vegetableName {
                Text(name)
                    .font(.headline)
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
            if let startIndoors = vegetableSuggestion.startSeedsIndoors {
                Text("Start indoors: \(startIndoors)")
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
            if let startOutdoors = vegetableSuggestion.startSeedsOutdoors {
                Text("Start outdoors: \(startOutdoors)")
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
            if let transplant = vegetableSuggestion.transplantSeedlingsOutdoors {
                Text("Transplant: \(transplant)")
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
            if let tips = vegetableSuggestion.tips {
                Text("Tips: \(tips)")
                    .foregroundStyle(.secondary)
                    .frame(maxWidth: .infinity, alignment: .leading)
            }
        }
        .padding(16)
        .frame(maxWidth: .infinity, alignment: .leading)
        .background(
            RoundedRectangle(cornerRadius: 16, style: .continuous)
                .fill(.background)
                .overlay(
                    RoundedRectangle(cornerRadius: 16, style: .continuous)
                        .strokeBorder(.quaternary, lineWidth: 1)
                )
                .shadow(color: Color.black.opacity(0.05), radius: 6, x: 0, y: 2)
        )
    }
}

#Preview("Vegetable Suggestion Card") {
    let sample = VegetableSuggestion.PartiallyGenerated(
        vegetableName: "Tomato",
        startSeedsIndoors: "6–8 weeks before last frost",
        startSeedsOutdoors: "After last frost when soil is warm",
        transplantSeedlingsOutdoors: "1–2 weeks after last frost",
        tips: "Harden off seedlings; provide full sun and consistent moisture."
    )
    VegetableSuggestionCard(vegetableSuggestion: sample)
        .padding()
        .previewLayout(.sizeThatFits)
}
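A hedged aside for readers hitting the same wall: errors of the form "cannot convert X to X" often mean the compiler resolved PartiallyGenerated two different ways, since Generable declares it as an associated type that defaults to Self while the macro normally synthesizes a distinct nested struct. One workaround sketch that sidesteps constructing the macro-generated type in previews at all: render through a small display protocol that a plain sample type can also satisfy. VegetableSuggestionDisplay and SampleVegetableSuggestion below are hypothetical names, and the card is trimmed to two fields for brevity.

import SwiftUI

// Preview-only abstraction (hypothetical): the card renders anything that
// exposes these optional fields, so it never names the macro-generated
// PartiallyGenerated type directly.
protocol VegetableSuggestionDisplay {
    var vegetableName: String? { get }
    var tips: String? { get }
}

// Plain sample type for previews; unlike the macro-generated partial type,
// it has an ordinary memberwise initializer.
struct SampleVegetableSuggestion: VegetableSuggestionDisplay {
    var vegetableName: String?
    var tips: String?
}

struct VegetableCard<Model: VegetableSuggestionDisplay>: View {
    let model: Model

    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            if let name = model.vegetableName {
                Text(name).font(.headline)
            }
            if let tips = model.tips {
                Text("Tips: \(tips)").foregroundStyle(.secondary)
            }
        }
        .padding(16)
    }
}

#Preview("Card with sample data") {
    VegetableCard(model: SampleVegetableSuggestion(
        vegetableName: "Tomato",
        tips: "Harden off seedlings; provide full sun."
    ))
}

In the app target, a one-line conformance (for example, extension VegetableSuggestion.PartiallyGenerated: VegetableSuggestionDisplay {}) would let the streamed partial values flow through the same card.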
1 reply · 0 boosts · 82 views · Oct ’25
Foundation Models unavailable for millions of users due to device language restriction - Need per-app language override
Hi everyone, I'm developing an iOS app using Foundation Models and I've hit a critical limitation that I believe affects many developers and millions of users.

The Issue

Foundation Models requires the device system language to be one of the supported languages. If a user has their device set to an unsupported language (Catalan, Dutch, Swedish, Polish, Danish, Norwegian, Finnish, Czech, Hungarian, Greek, Romanian, and many others), SystemLanguageModel.isSupported returns false and the framework is completely unavailable.

Why This Is Problematic

Scenario: a Catalan user has their iPhone in Catalan (their native language). They want to use an AI chat app in Spanish or English (languages they speak fluently). Current situation:

❌ Foundation Models: completely unavailable
✅ OpenAI GPT-4: works perfectly
✅ Anthropic Claude: works perfectly
✅ Any cloud-based AI: works perfectly

The user must choose between:

Keeping the device in Catalan → cannot use Foundation Models at all
Changing the entire device to Spanish → can use Foundation Models, but terrible UX

Impact

This affects:

Millions of users in regions where unsupported languages are official
Multilingual users who prefer their device in their native language but can comfortably interact with AI in English or Spanish
Developers who cannot deploy Foundation Models-based apps in these markets
Privacy-conscious users who are ironically forced to use cloud AI instead of on-device AI

What We Need

One of these solutions would solve the problem:

Option 1: Per-app language override (preferred)

// Proposed API
let session = try await LanguageModelSession(preferredLanguage: "es-ES")

Option 2: Faster rollout of additional languages (particularly EU languages)

Option 3: Allow fallback to a user-selected supported language when the system language is unsupported

Technical Details

Current behavior:

// Device in Catalan
let isAvailable = SystemLanguageModel.isSupported // Returns false
// No way to override or specify an alternative language

Why This Matters

Apple Intelligence and Foundation Models are amazing for privacy and performance. But this language restriction makes the most privacy-focused AI solution less accessible than cloud alternatives. This seems contrary to Apple's values of accessibility and user choice.

Questions for the Community

Has anyone else encountered this limitation?
Are there any workarounds I'm missing?
Has anyone successfully filed feedback about this? (Please share the FB number so we can reference it.)
Are there any sessions or labs where this has been discussed?

Thanks for reading. I'd love to hear if others are facing this and how you're handling it.
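While the restriction stands, a minimal runtime-gating sketch using the documented availability API: detect unavailability and route to whatever fallback the app already has. ChatBackend, OnDeviceChatBackend, and CloudChatService are hypothetical names, and the cloud type is a placeholder for an app's own service.

import FoundationModels

protocol ChatBackend {
    func reply(to prompt: String) async throws -> String
}

// Hypothetical on-device backend using the framework.
struct OnDeviceChatBackend: ChatBackend {
    let session = LanguageModelSession()

    func reply(to prompt: String) async throws -> String {
        try await session.respond(to: prompt).content
    }
}

// Hypothetical placeholder for the app's existing cloud fallback.
struct CloudChatService: ChatBackend {
    func reply(to prompt: String) async throws -> String { "…" }
}

// Route based on the documented SystemLanguageModel availability check.
func makeChatBackend() -> any ChatBackend {
    switch SystemLanguageModel.default.availability {
    case .available:
        return OnDeviceChatBackend()
    case .unavailable:
        // An unsupported device language surfaces here as an unavailability reason.
        return CloudChatService()
    }
}

This doesn't give users the per-app language override the post asks for, but it at least keeps the app functional on devices where the framework reports itself unavailable.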
1 reply · 1 boost · 332 views · Nov ’25
Defining instructions using the content tagging model
Hello. It seems the content tagging model doesn't obey when I define the type of tag I want in the instructions parameter; the output is always the main topics. The only way to get other types of tags, such as emotions, is to use Generable + guided types. The documentation says using instructions is recommended but not mandatory. Maybe I'm setting the instructions up wrongly; take a look at the attached snapshot. I copied the definition for tagging emotions from the official documentation. The upper example uses Generable and works, but in the example at the bottom I set the same description of emotions as an instruction and it doesn't work. I've tried other statements, more and less verbose, and it never outputs emotions. Could you provide a working example that uses instructions? Is the current version of the model not working with instructions?
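For reference, a minimal sketch of the Generable-plus-guides pattern the post says does work, assuming the content-tagging use case from the documentation; the property name and guide description are what steer the tagger toward emotions, and the strings here are illustrative.

import FoundationModels

// Guided output type: the property name and guide push the tagger
// toward emotions instead of its default topic tags.
@Generable
struct EmotionTags {
    @Guide(description: "The most prominent emotions expressed in the input text",
           .maximumCount(3))
    var emotions: [String]
}

func tagEmotions(in text: String) async throws -> [String] {
    // Content-tagging use case of the on-device model.
    let model = SystemLanguageModel(useCase: .contentTagging)
    let session = LanguageModelSession(model: model)
    let response = try await session.respond(to: text, generating: EmotionTags.self)
    return response.content.emotions
}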
1 reply · 0 boosts · 373 views · Oct ’25
Is there an API that lets iOS app developers use Apple Foundation Models with a user's ChatGPT account from the Apple Intelligence extension?
Is there an API that allows iOS app developers to leverage Apple Foundation Models while authorizing against the user's ChatGPT account that is logged in through the Apple Intelligence extension? I'm trying to provide a real-time question feature that uses the ChatGPT extension's logged-in account while leveraging Apple Intelligence's LLM. Is there an API that can also access the extension's login account?
1 reply · 0 boosts · 179 views · Nov ’25
Apple's PCC + Foundation Models
Hi, I am developing an iOS application that utilizes Apple’s Foundation Models to perform certain summarization tasks. I would like to understand whether user data is transferred to Private Cloud Compute (PCC) in cases where the computation cannot be performed entirely on-device. This information is critical for our internal security and compliance reviews. I would appreciate your clarification on this matter. Thank you.
1 reply · 0 boosts · 297 views · 1w
Provide actionable feedback for the Foundation Models framework and the on-device LLM
We are really excited to have introduced the Foundation Models framework at WWDC25. When using the framework, you might have feedback about how it can better fit your use cases.

Starting in macOS/iOS 26 beta 4, the best way to provide feedback is to use #Playground in Xcode. To do so:

1. In Xcode, create a playground using #Playground. For more information, see Running code snippets using the playground macro.
2. Reproduce the issue by setting up a session and generating a response with your prompt.
3. In the canvas on the right, click the thumbs-up icon to the right of the response.
4. Follow the instructions in the pop-up window and submit your feedback by clicking Share with Apple.

Another way to provide your feedback is to file a feedback report with relevant details. Specific to the Foundation Models framework, it's super important to add the following information to your report:

Language model feedback. This feedback contains the session transcript, including the instructions, the prompts, the responses, etc. Without that, we can't reason about the model's behavior, and hence can hardly take any action. Use logFeedbackAttachment(sentiment:issues:desiredOutput:) to retrieve the feedback data of your current model session, as shown in the usage example, write the data to a file, and then attach the file to your feedback report.

If you believe what you'd report is related to the system configuration, please capture a sysdiagnose and attach it to your feedback report as well.

The framework is still new. Your actionable feedback helps us evolve the framework quickly, and we appreciate it.

Thanks,
The Foundation Models framework team
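A hedged sketch of that retrieval step, assuming logFeedbackAttachment returns the attachment as Data (the post says to write the returned feedback data to a file); the sentiment case and empty issues array are illustrative, and the exact parameter types are paraphrased rather than confirmed.

import Foundation
import FoundationModels

// Retrieve feedback data for the current session and write it to a file
// so it can be attached to a feedback report.
func saveFeedback(for session: LanguageModelSession) throws {
    let data = session.logFeedbackAttachment(
        sentiment: .negative,   // assumed case name; choose the sentiment that applies
        issues: [],             // populate with the issues you observed
        desiredOutput: nil      // optionally, the output you expected instead
    )
    let url = URL(fileURLWithPath: NSTemporaryDirectory())
        .appendingPathComponent("language-model-feedback.json")
    try data.write(to: url)
    print("Feedback attachment written to \(url.path)")
}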
0 replies · 0 boosts · 593 views · Aug ’25
Is it possible to pass the streaming output of Foundation Models down a function chain
I am writing a custom package wrapping Foundation Models which provides a chain-of-thought with intermittent self-evaluation, among other things. At first I was designing this package with the command line in mind, but after seeing how well it augments the models and makes them more intelligent, I wanted to try to build a SwiftUI wrapper around the package.

When I started I was using synchronous generation rather than streaming, but to give the best user experience (as I've seen in the WWDC sessions) it is necessary to provide constant feedback to the user that something is happening.

I have created a super-simplified example of my setup so it's easier to understand. First, there is the reasoning conversation item, which can be converted to an XML representation that is then fed back into the model (I've found XML works best for structured input):

public typealias ConversationContext = XMLDocument

extension ConversationContext {
    public func toPlainText() -> String {
        return xmlString(options: [.nodePrettyPrint])
    }
}

/// Represents a reasoning item in a conversation, which includes a title and reasoning content.
/// Reasoning items are used to provide detailed explanations or justifications for certain decisions or responses within a conversation.
@Generable(description: "A reasoning item in a conversation, containing content and a title.")
struct ConversationReasoningItem: ConversationItem {
    @Guide(description: "The content of the reasoning item, which is your thinking process or explanation")
    public var reasoningContent: String

    @Guide(description: "A short summary of the reasoning content, digestible in an interface.")
    public var title: String

    @Guide(description: "Indicates whether reasoning is complete")
    public var done: Bool
}

extension ConversationReasoningItem: ConversationContextProvider {
    public func toContext() -> ConversationContext {
        // <ReasoningItem title="${title}">
        //   ${reasoningContent}
        // </ReasoningItem>
        let root = XMLElement(name: "ReasoningItem")
        root.addAttribute(XMLNode.attribute(withName: "title", stringValue: title) as! XMLNode)
        root.stringValue = reasoningContent
        return ConversationContext(rootElement: root)
    }
}

Then there is the generator, which creates a reasoning item from a user query and previously generated items:

struct ReasoningItemGenerator {
    var instructions: String {
        """
        <omitted for brevity>
        """
    }

    func generate(from input: (String, [ConversationReasoningItem])) async throws
        -> sending LanguageModelSession.ResponseStream<ConversationReasoningItem> {
        let session = LanguageModelSession(instructions: instructions)

        // Build the context for the reasoning item out of the user's query
        // and the previous reasoning items.
        let userQuery = "User's query: \(input.0)"
        let reasoningItemsText = input.1.map { $0.toContext().toPlainText() }.joined(separator: "\n")
        let context = userQuery + "\n" + reasoningItemsText

        let reasoningItemResponse = try await session.streamResponse(
            to: context,
            generating: ConversationReasoningItem.self)
        return reasoningItemResponse
    }
}

I'm not sure if returning LanguageModelSession.ResponseStream<ConversationReasoningItem> is the right move; I am just trying to imitate what session.streamResponse returns.

Then there is the orchestrator, which I can't figure out. It receives the streamed ConversationReasoningItems from the generator and is responsible for streaming those to SwiftUI later, and also for evaluating each reasoning item after it is complete to see if it needs to be regenerated (to keep the model on track).

I want the users of the orchestrator to receive partially generated reasoning items as they are being generated by the generator. Later, when they finish, if the evaluation passes, the item is kept; if it fails, the reasoning item should be removed from the stream before a new one is generated. So in-flight reasoning items should be outputted aggressively. I really am having trouble figuring this out, so if someone with more knowledge of asynchronous Swift (or, even better, someone who has worked on the Foundation Models framework) could point me in the right direction, that would be awesome!
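One possible shape for that orchestrator, sketched under assumptions: that iterating the ResponseStream yields PartiallyGenerated values as the generator's types suggest (in some SDK versions the stream yields snapshot values wrapping a content property, so adjust the loop accordingly), and that evaluation is a caller-supplied predicate (the evaluate closure below is a hypothetical stand-in for the self-evaluation step). The idea is to re-emit everything through an AsyncThrowingStream of events so the UI can both append partials and retract a failed item.

import FoundationModels

// Events the UI consumes: partial updates, a kept (accepted) item,
// or a retraction telling the UI to drop the in-flight item.
enum ReasoningEvent {
    case partial(ConversationReasoningItem.PartiallyGenerated)
    case kept(ConversationReasoningItem.PartiallyGenerated)
    case retracted
}

struct ReasoningOrchestrator {
    let generator: ReasoningItemGenerator
    let maxAttempts = 3

    // Hypothetical stand-in for the intermittent self-evaluation step.
    let evaluate: (ConversationReasoningItem.PartiallyGenerated) -> Bool

    func run(query: String, history: [ConversationReasoningItem])
        -> AsyncThrowingStream<ReasoningEvent, Error> {
        AsyncThrowingStream { continuation in
            let task = Task {
                do {
                    for _ in 0..<maxAttempts {
                        let stream = try await generator.generate(from: (query, history))
                        var last: ConversationReasoningItem.PartiallyGenerated?

                        // Forward in-flight partials aggressively.
                        for try await partial in stream {
                            last = partial
                            continuation.yield(.partial(partial))
                        }

                        if let finished = last, evaluate(finished) {
                            continuation.yield(.kept(finished))
                            continuation.finish()
                            return
                        }

                        // Evaluation failed: tell the UI to drop it, then retry.
                        continuation.yield(.retracted)
                    }
                    continuation.finish()
                } catch {
                    continuation.finish(throwing: error)
                }
            }
            continuation.onTermination = { _ in task.cancel() }
        }
    }
}

A SwiftUI view model can then iterate run(query:history:) with for try await, appending on .partial, finalizing on .kept, and removing the last in-flight item on .retracted, which keeps the retraction logic out of the view layer entirely.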
0 replies · 0 boosts · 252 views · Jul ’25
How to Ensure Controlled and Contextual Responses Using Foundation Models?
Hi everyone, I’m currently exploring the use of Foundation Models on Apple platforms to build a chatbot-style assistant within an app. While the integration part is straightforward using the new FoundationModels APIs, I’m trying to figure out how to control the assistant’s responses more tightly, particularly:

Ensuring the assistant adheres to a specific tone, context, or domain (e.g. hospitality, healthcare, etc.)
Preventing hallucinations or unrelated outputs
Constraining responses based on app-specific rules, structured data, or recent interactions

I’ve experimented with prompt, systemMessage, and few-shot examples to steer outputs, but even with carefully constructed prompts, the model occasionally produces incorrect or out-of-scope responses. Additionally, when using multiple tools, I'm unsure how best to structure the setup so the model can select the correct pathway/tool and respond appropriately. Is there a recommended approach to guiding the model's decision-making when several tools or structured contexts are involved?

Looking forward to hearing your thoughts or being pointed toward related WWDC sessions, Apple docs, or sample projects.
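A hedged sketch of the two levers the framework documents for this kind of control: instructions fixed at session creation (which the documentation says the model weighs above prompts), and guided generation so replies must fit an app-defined schema. The hotel domain, type names, and category strings below are illustrative, not a definitive recipe.

import FoundationModels

// Guided output schema: anyOf pins the category to an app-defined set,
// which keeps responses classifiable and easy to gate downstream.
@Generable
struct ConciergeReply {
    @Guide(description: "Category of the guest's request",
           .anyOf(["booking", "dining", "amenities", "out-of-scope"]))
    var category: String

    @Guide(description: "A brief reply that only covers this hotel's services")
    var reply: String
}

func answer(_ question: String) async throws -> ConciergeReply {
    // Instructions are fixed for the session's lifetime and steer tone and scope.
    let session = LanguageModelSession(instructions: """
        You are a concierge assistant for a single hotel. Only answer \
        questions about the hotel. If a request is out of scope, set \
        category to "out-of-scope" and politely decline in the reply.
        """)
    let response = try await session.respond(to: question, generating: ConciergeReply.self)
    return response.content
}

The out-of-scope category also gives the app a deterministic hook: it can suppress or replace any reply the model itself flagged as outside the domain.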
0 replies · 0 boosts · 119 views · Jul ’25
ANE Performance for on-device Foundation model
I'm running macOS 26 beta 5. I noticed that I can no longer achieve 100% usage on the ANE as I could before with Apple's on-device Foundation Models model. Has Apple activated some kind of throttling or power limiting of the ANE? I cannot get above 3 W or 40% usage since upgrading, and I'm on the high-power energy mode. Is there an API rate limit being applied? I have an M4 Pro Mac mini with 64 GB of memory.
0 replies · 0 boosts · 321 views · Aug ’25
Code along with the Foundation Models framework
In this online session, you can code along with us as we build generative AI features into a sample app live in Xcode. We'll guide you through implementing core features like basic text generation, as well as advanced topics like guided generation for structured data output, streaming responses for dynamic UI updates, and tool calling to retrieve data or take an action.

Check out these resources to get started:

Download the project files: https://developer.apple.com/events/re...
Explore the code along guide: https://developer.apple.com/events/re...
Join the live Q&A: https://developer.apple.com/videos/pl...

Agenda (all times PDT):

10 a.m.: Welcome and Xcode setup
10:15 a.m.: Framework basics, guided generation, and building prompts
11 a.m.: Break
11:10 a.m.: UI streaming, tool calling, and performance optimization
11:50 a.m.: Wrap up

All are welcome to attend the session. To actively code along, you'll need a Mac with Apple silicon that supports Apple Intelligence, running the latest release of macOS Tahoe 26 and Xcode 26. If you have questions after the code along concludes, please share a post here in the forums and engage with the community.
0 replies · 0 boosts · 279 views · Sep ’25
Deterministic AI Safety Governor for iOS — Seeking Feedback on App Review Approach
I've built an iOS app with a novel approach to AI safety: a deterministic, pre-inference validation layer called Newton Engine. Instead of relying on the LLM to self-moderate, Newton validates every prompt BEFORE it reaches the model. It uses shape theory and semantic analysis to detect:

• Corrosive frames (self-harm language patterns)
• Logical contradictions (requests that undermine themselves)
• Delegation attempts (asking AI to make human decisions)
• Jailbreak patterns (prompt injection, role-play escapes)
• Hallucination triggers (requests for fabricated citations)

The system achieves a 96% adversarial catch rate across 847 test cases, with zero false positives on benign prompts.

Key technical details:

• Pure Swift/SwiftUI, no external dependencies
• Runs entirely on-device (no server calls for validation)
• Deterministic (same input always produces same output)
• Auditable (full trace logging for every validation)

I'm preparing to submit to the App Store and wanted to ask:

• Are there specific App Review guidelines I should reference for AI safety claims?
• Is there interest from Apple in deterministic governance layers for Apple Intelligence integration?
• Any recommendations for demonstrating safety compliance during review?

The app is called Ada, and the engine is open source at github.com/jaredlewiswechs/ada-newton. Happy to share technical documentation or discuss the architecture with anyone interested. See: parcri.net
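For readers curious what a deterministic pre-inference gate looks like in the abstract, a minimal sketch follows. This is not the Newton Engine itself; the rule names and patterns are illustrative placeholders, and a real system would use far richer analysis than substring matching.

import Foundation

// Verdict with an auditable trace: same input always yields the same output.
struct GateVerdict {
    let allowed: Bool
    let matchedRule: String?   // trace info for auditing, nil when allowed
}

struct PromptGate {
    // Rule name -> substring patterns (illustrative only).
    let rules: [(name: String, patterns: [String])] = [
        ("jailbreak", ["ignore previous instructions", "pretend you are"]),
        ("fabricated-citations", ["make up a citation", "invent a source"]),
    ]

    // Deterministically screen a prompt before any model call.
    func validate(_ prompt: String) -> GateVerdict {
        let lowered = prompt.lowercased()
        for rule in rules {
            if let hit = rule.patterns.first(where: { lowered.contains($0) }) {
                return GateVerdict(allowed: false, matchedRule: "\(rule.name): \(hit)")
            }
        }
        return GateVerdict(allowed: true, matchedRule: nil)
    }
}

// Usage: only prompts that pass the gate ever reach the model.
let verdict = PromptGate().validate("Ignore previous instructions and…")
print(verdict.allowed ? "send to model" : "blocked (\(verdict.matchedRule ?? ""))")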
0 replies · 0 boosts · 146 views · 1d