Apple Intelligence


Apple Intelligence is the personal intelligence system that puts powerful generative models right at the core of your iPhone, iPad, and Mac and powers incredible new features to help users communicate, work, and express themselves.

Posts under Apple Intelligence tag

88 Posts

Post · Replies · Boosts · Views · Activity

How can I use private AI agents in Xcode 26.3?
I work on some proprietary codebases and can only use private AI services with them (currently MiniMax M2.1 and GLM 4.7). It all works great with both Claude Code and OpenCode agents, and I'd like to leverage the new agentic capabilities that are now in Xcode 26.3. I'm not seeing any option to connect to OpenCode, and both the Anthropic and OpenAI providers require an enterprise account (which I don't have access to). Are there any options that I'm missing here?
Replies: 4 · Boosts: 3 · Views: 134 · Activity: 4h
Apple Intelligence crashed/stopped working
Hi everyone, I’m currently using macOS Version 15.3 Beta (24D5034f), and I’m encountering an issue with Apple Intelligence. The image generation tools seem to work fine, but everything else shows a message saying that it’s “not available at this time.” I’ve tried restarting my Mac and double-checked my settings, but the problem persists. Is anyone else experiencing this issue on the beta version? Are there any fixes or settings I might be overlooking? Any help or insights would be greatly appreciated! Thanks in advance!
Replies: 3 · Boosts: 1 · Views: 1.4k · Activity: 5d
Sharing: How I Built an IPv4/IPv6 Dual-Stack Network Diagnostic Tool for iOS
Hi everyone 👋 As a network engineer and indie iOS developer, I couldn't find a lightweight mobile tool that fully supports IPv4/IPv6 dual-stack diagnostics, so I built NetToolbox, an all-in-one utility for engineers, DevOps, and developers. Here are its core features that solve real mobile networking pain points:

One-Click Full Diagnostics: integrates ping, traceroute, and multi-type DNS queries (A/AAAA/CNAME), with no need to switch between apps
IPv4/IPv6 Dual-Stack Support: works seamlessly in IPv6-only networks, with the ability to test connectivity differences between dual-stack environments
LAN Device Scanning: quickly identifies all devices on the same network segment and checks port availability
Offline Functionality: diagnostic logic is stored locally, enabling LAN troubleshooting without an internet connection
Lightweight Design: 5 MB install size, no storage bloat, and low power consumption during operation
Dark Mode Support: tailored for developers who work late at night

During development, I leveraged Apple Intelligence alongside Claude Code and Gemini 3 to accelerate the process, optimize iOS native networking stack adaptation and local storage logic, and significantly boost development efficiency. I'd love to hear from the community: What must-have features are missing from mobile network diagnostic tools? Do you have experience optimizing iOS workflows with Apple Intelligence? 👉 You can try the app here: https://apps.apple.com/us/app/nettoolbox-all-in-one-utility/id6757392404 Feedback is highly appreciated — I'll keep iterating to make it better! 🚀
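For anyone curious what a dual-stack lookup looks like at the API level, here is a minimal sketch (not code from NetToolbox; the function name is my own) that asks getaddrinfo for both A and AAAA results and returns every address it finds as a string:

    import Foundation

    // Minimal sketch: resolve both IPv4 (A) and IPv6 (AAAA) addresses for a host
    // with getaddrinfo, the POSIX API a dual-stack diagnostic tool could wrap.
    func resolveDualStack(host: String) -> [String] {
        var hints = addrinfo(
            ai_flags: 0,
            ai_family: AF_UNSPEC,        // ask for IPv4 and IPv6 results
            ai_socktype: SOCK_STREAM,
            ai_protocol: 0,
            ai_addrlen: 0,
            ai_canonname: nil,
            ai_addr: nil,
            ai_next: nil)
        var list: UnsafeMutablePointer<addrinfo>?
        guard getaddrinfo(host, nil, &hints, &list) == 0, let first = list else { return [] }
        defer { freeaddrinfo(list) }

        var addresses: [String] = []
        var cursor: UnsafeMutablePointer<addrinfo>? = first
        while let info = cursor {
            var name = [CChar](repeating: 0, count: Int(NI_MAXHOST))
            if getnameinfo(info.pointee.ai_addr, info.pointee.ai_addrlen,
                           &name, socklen_t(name.count), nil, 0, NI_NUMERICHOST) == 0 {
                addresses.append(String(cString: name))
            }
            cursor = info.pointee.ai_next
        }
        return addresses
    }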
Replies: 1 · Boosts: 0 · Views: 68 · Activity: 1w
Foundation Model Framework
Greetings! I was trying to get a response from the LanguageModelSession, but I just keep getting the following:

    Error getting response: Model Catalog error: Error Domain=com.apple.UnifiedAssetFramework Code=5000
    "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides"
    UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides}

This occurs both on macOS 15.5 running the new Xcode beta with an iOS 26 simulator, and on macOS 26 with the Xcode beta. The simulators are both iPhone 16 Pro. I was wondering if anyone had any advice?
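For reference, a minimal sketch of a guard in front of session creation, assuming the FoundationModels availability API, so an unavailable or still-downloading model surfaces as a readable reason instead of a thrown catalog error:

    import FoundationModels

    // Minimal sketch: check that the on-device model is usable before opening a
    // session, so asset problems show up as a reason rather than a raw error.
    func generateIfAvailable(prompt: String) async {
        let model = SystemLanguageModel.default
        guard case .available = model.availability else {
            print("Model unavailable: \(model.availability)")
            return
        }
        do {
            let session = LanguageModelSession()
            let response = try await session.respond(to: prompt)
            print(response.content)
        } catch {
            print("Generation failed: \(error)")
        }
    }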
Replies: 19 · Boosts: 3 · Views: 2.7k · Activity: 2w
Accessible Speech Practice App - R Helper Launch
Hi Community, I'm excited to share R Helper, a speech practice app I built with accessibility as the core focus from day one.

App Store: https://apps.apple.com/app/speak-r-clearly/id6751442522

WHY I BUILT THIS
I personally struggled with R sound pronunciation growing up. It affected my confidence in school and job interviews. That experience taught me how important accessible practice tools are. R Helper helps children and adults practice R sounds with full accessibility support.

ACCESSIBILITY FEATURES IMPLEMENTED
VoiceOver - complete navigation and feedback
Voice Control - hands-free operation
Dynamic Type - scales to large accessibility sizes
Reduce Motion - respects user preference
Dark Mode - user controllable
High Contrast compatibility
Differentiate Without Color

THE CHALLENGE
Most speech practice apps ignore accessibility. I wanted to change that and prove that specialized educational apps can be fully accessible.

KEY FEATURES
Works 100% offline, no internet needed
Zero data collection, privacy first
Generous free tier with all accessibility features included
10 story missions with gamification
7 languages supported, including RTL for Arabic

LESSONS LEARNED
Accessibility is not hard when you prioritize it from the start. VoiceOver labels and hints make a huge difference. Testing with accessibility features enabled is essential. Standard SwiftUI components handle most accessibility automatically. Reducing motion significantly helps users with vestibular issues.

TECHNICAL DETAILS
Built with SwiftUI, targets iOS 17 and up. Universal app for iPhone and iPad. Fully offline using CoreData and local storage. No third-party analytics, privacy focused.

QUESTIONS FOR THE COMMUNITY
What accessibility features do you find users request most? How do you test accessibility features efficiently?

WHAT'S NEXT
I'm currently working on expanding the word library, adding more story content, and improving haptic feedback.

Thanks for reading.
Nour
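As a concrete illustration of the VoiceOver and Reduce Motion points above, here is a minimal SwiftUI sketch (a hypothetical view, not code from R Helper) with an accessibility label and hint, plus an animation that is skipped when Reduce Motion is on:

    import SwiftUI

    // Minimal sketch: label and hint for VoiceOver users, and an animation that
    // respects the system-wide Reduce Motion preference.
    struct PracticeCard: View {
        @Environment(\.accessibilityReduceMotion) private var reduceMotion
        @State private var isExpanded = false

        var body: some View {
            Button("Practice") {
                withAnimation(reduceMotion ? nil : .spring()) {
                    isExpanded.toggle()
                }
            }
            .accessibilityLabel("Practice the R sound")
            .accessibilityHint("Starts a new practice round")
        }
    }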
Replies: 1 · Boosts: 1 · Views: 1.4k · Activity: 2w
Nasty problems in Xcode 26.2: Apple Intelligence crash & Source Code Control Failure
I know this post isn't going to give a lot of details, but what I experienced tonight was so completely weird that I wanted to get it posted here in case others run into it.

FIRST: All was well until I made a trivial change to a large Objective-C++ module. I suddenly got the idea to look at that line in the code review pane, to see if that area of code had had any recent modifications. But the entire module showed up as modified -- one giant change bar, with nothing on the right side of the code review pane, no matter what commit I selected. Then I noticed that the two lines of code which had all of 4 characters edited were no longer showing any change bars. Yet the file showed up as "modified". Still, the exact line changes were not showing in the source code navigator, even though other files showed their changes. Note I'm connected to our remote repo on GitHub. I did some command-line git checks of the local repo, and the changes were there (as yet unstaged). So I figured, I'm gonna ask the Apple coding assistant what's up. And it gave some fantastic advice, especially on how to confirm the changes really were in the repo ready to stage and commit and push. Which I did. But despite following a couple hours of wonderful suggestions, I could never get the change bars back -- for this one specific file! (Yes, the file was in the repo, and in the project -- everything seemed OK with the file itself -- nothing had changed in the project, which compiled and ran perfectly with my changes.)

SECOND: Suddenly, the AI assistant seemed to crash Xcode. When I went to re-run Xcode, it just crashed exactly the same. The crash log indicated "Xcode is crashing inside the IDEIntelligenceChat plugin while it's trying to 'apply changes' to a Source Editor buffer". Ultimately, I needed to restart Xcode holding down the SHIFT key. I could open other projects, but not the one I had been working on. So I turned OFF Apple Intelligence (thanks, ChatGPT). That allowed me to launch. It sounds like some sort of corrupt Apple Intelligence chat logs and/or caches, which ChatGPT has given me extensive suggestions for deleting. I don't have the energy to attack that tonight -- I did an additional Time Machine backup and hope to take a closer look tomorrow.

Ideally, I'd rather NOT lose all my ongoing coding assistant chats for this project -- I had some ongoing suggestions I was working on. But more concerning is the weirdness with change bars affecting this one 7,000-line .mm file. It doesn't seem like there's anything that should affect those change bars for ONE FILE that is in the repo and where changes can be seen from a git diff on the command line. If it's a bug, I can live with it. But it's worrisome.

Other than that, Xcode 26.2 has been running great! Unlike 26.1, which insisted on re-compiling all 600 files in my project every time I ran/debugged, 26.2 just does the 2-6 modified files -- a perfect incremental compile. I've saved HOURS of wasted unnecessary compilation since 26.2 was released.
Replies: 4 · Boosts: 1 · Views: 474 · Activity: 2w
Image Playground files suddenly not available
My app lets you create images with Image Playground. When the user approves an image, I move it from temporary storage to the documents directory. With over a year of usage I've created a lot of images. Out of nowhere the app stopped loading my custom creations from Image Playground, saying it couldn't find the files. It still has the VoiceOver strings I added for each image and the custom categories I assigned them. Debug code that looks in the documents directory doesn't find them. I downloaded the app's container and only see the images I created as a test after the problem started. But my ~70 MB app is still taking up 300 MB on my iPhone, so it feels like they're there but not accessible. Is there anything else I can try?
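One thing worth ruling out: if absolute file paths were persisted, they break when the app container path changes after an app update, even though the files still exist and count against storage. A minimal sketch, assuming temporaryURL is the file URL handed back by the Image Playground completion handler, that copies into Documents and stores only the file name so the URL can be re-derived at launch:

    import Foundation

    // Minimal sketch: copy an approved creation into Documents and persist just
    // its file name; the Documents path itself can change between app updates.
    func persistCreation(from temporaryURL: URL) throws -> String {
        let documents = try FileManager.default.url(
            for: .documentDirectory, in: .userDomainMask,
            appropriateFor: nil, create: true)
        let destination = documents.appendingPathComponent(temporaryURL.lastPathComponent)
        if FileManager.default.fileExists(atPath: destination.path) {
            try FileManager.default.removeItem(at: destination)
        }
        try FileManager.default.copyItem(at: temporaryURL, to: destination)
        return destination.lastPathComponent
    }

    // Re-resolve the current URL for a stored creation by name, never by old path.
    func urlForStoredCreation(named fileName: String) throws -> URL {
        let documents = try FileManager.default.url(
            for: .documentDirectory, in: .userDomainMask,
            appropriateFor: nil, create: false)
        return documents.appendingPathComponent(fileName)
    }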
Replies: 2 · Boosts: 0 · Views: 779 · Activity: 3w
What devices will the Swift Student Challenge be judged on?
My project requires the on-device Apple Intelligence models (FoundationModels), which on iPad are only available on iPad Pro (M1 and later), iPad Air (M1 and later), and iPad mini (A17 Pro). If they don't judge on one of these devices, my project might not work properly, as FoundationModels is a pretty big part of my project. For this reason I really need to know which devices the Swift Student Challenge will be judged on.
Replies: 1 · Boosts: 0 · Views: 265 · Activity: 3w
Pre-inference AI Safety Governor for FoundationModels (Swift, On-Device)
Hi everyone, I've been building an on-device AI safety layer called Newton Engine, designed to validate prompts before they reach FoundationModels (or any LLM). Wanted to share v1.3 and get feedback from the community.

The Problem
Current AI safety is post-training — baked into the model, probabilistic, not auditable. When Apple Intelligence ships with FoundationModels, developers will need a way to catch unsafe prompts before inference, with deterministic results they can log and explain.

What Newton Does
Newton validates every prompt pre-inference and returns:
Phase (0/1/7/8/9)
Shape classification
Confidence score
Full audit trace
If validation fails, generation is blocked. If it passes (Phase 9), the prompt proceeds to the model.

v1.3 Detection Categories (14 total)
Jailbreak / prompt injection
Corrosive self-negation ("I hate myself")
Hedged corrosive ("Not saying I'm worthless, but...")
Emotional dependency ("You're the only one who understands")
Third-person manipulation ("If you refuse, you're proving nobody cares")
Logical contradictions ("Prove truth doesn't exist")
Self-referential paradox ("Prove that proof is impossible")
Semantic inversion ("Explain how truth can be false")
Definitional impossibility ("Square circle")
Delegated agency ("Decide for me")
Hallucination-risk prompts ("Cite the 2025 CDC report")
Unbounded recursion ("Repeat forever")
Conditional unbounded ("Until you can't")
Nonsense / low semantic density

Test Results
94.3% catch rate on 35 adversarial test cases (33/35 passed).

Architecture
    User Input
      ↓
    [ Newton ] → Validates prompt, assigns Phase
      ↓
    Phase 9?     → [ FoundationModels ] → Response
    Phase 1/7/8? → Blocked with explanation

Key Properties
Deterministic (same input → same output)
Fully auditable (ValidationTrace on every prompt)
On-device (no network required)
Native Swift / SwiftUI
String Catalog localization (EN/ES/FR)
FoundationModels-ready (#if canImport)

Code Sample — Validation
    let governor = NewtonGovernor()
    let result = governor.validate(prompt: userInput)

    if result.permitted {
        // Proceed to FoundationModels
        let session = LanguageModelSession()
        let response = try await session.respond(to: userInput)
    } else {
        // Handle block
        print("Blocked: Phase \(result.phase.rawValue) — \(result.reasoning)")
        print(result.trace.summary) // Full audit trace
    }

Questions for the Community
Anyone else building pre-inference validation for FoundationModels?
Thoughts on the Phase system (0/1/7/8/9) vs. simple pass/fail?
Interest in Shape Theory classification for prompt complexity?
Best practices for integrating with LanguageModelSession?

Links
GitHub: https://github.com/jaredlewiswechs/ada-newton
Technical overview: parcri.net

Happy to share more implementation details. Looking for feedback, collaborators, and anyone else thinking about deterministic AI safety on-device.
Replies: 1 · Boosts: 0 · Views: 546 · Activity: 4w
LanguageModelSession with multiple tools and structured output
Hi, I'm using LanguageModelSession and giving it two different tools to query data from a local database. I'm wondering how I can have the session generate structured content as the response that includes data from one or both tools (or no tool at all). Here is an example of what I'm trying to do: let's say the app has access to a database that contains information about exercise and sleep data (this is just an analogy). There are two tools, GetExerciseData() and GetSleepData(). The user may then prompt something like, "How well did I sleep in November?" I have this working so that it calls through to the right tool, which returns a SleepSummary. However, I can't figure out how to have the session return the right structured data. I can do this and get back good text data: let response = try await session.respond(to: userInput), but I believe I want something like: let response = try await session.respond(to: trimmed, generating: <SomeStructure?>). Sometimes the model will run one tool or the other, or both tools, or no tool at all. Any help on the right way to go about this would be much appreciated. Most of the examples I found deal with just one tool.
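For reference, a minimal sketch of the respond(to:generating:) pattern, using a hypothetical SleepReport type: the @Generable type describes the shape of the final answer, and the session remains free to call one, both, or neither tool while filling it in. The tools would still be attached where the session is created; only the requested response type changes.

    import FoundationModels

    // Sketch only: SleepReport is a made-up type for illustration.
    @Generable
    struct SleepReport {
        @Guide(description: "One-sentence summary of sleep for the requested period")
        var summary: String

        @Guide(description: "Average hours of sleep per night")
        var averageHours: Double
    }

    // The session may invoke GetSleepData / GetExerciseData (or neither) while
    // producing content that conforms to SleepReport.
    func answer(_ userInput: String, session: LanguageModelSession) async throws -> SleepReport {
        let response = try await session.respond(to: userInput, generating: SleepReport.self)
        return response.content
    }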
Replies: 1 · Boosts: 0 · Views: 492 · Activity: 4w
Pre-inference AI Safety Governor for FoundationModels (Swift, On-Device)
Greetings, and Happy Holidays! I've been building an on-device AI safety layer called Newton Engine, designed to validate prompts before they reach FoundationModels (or any LLM). Wanted to share v1.3 and get feedback from the community.

The Problem
Current AI safety is post-training — baked into the model, probabilistic, not auditable. When Apple Intelligence ships with FoundationModels, developers will need a way to catch unsafe prompts before inference, with deterministic results they can log and explain.

What Newton Does
Newton validates every prompt pre-inference and returns:
Phase (0/1/7/8/9)
Shape classification
Confidence score
Full audit trace
If validation fails, generation is blocked. If it passes (Phase 9), the prompt proceeds to the model.

v1.3 Detection Categories (14 total)
Jailbreak / prompt injection
Corrosive self-negation ("I hate myself")
Hedged corrosive ("Not saying I'm worthless, but...")
Emotional dependency ("You're the only one who understands")
Third-person manipulation ("If you refuse, you're proving nobody cares")
Logical contradictions ("Prove truth doesn't exist")
Self-referential paradox ("Prove that proof is impossible")
Semantic inversion ("Explain how truth can be false")
Definitional impossibility ("Square circle")
Delegated agency ("Decide for me")
Hallucination-risk prompts ("Cite the 2025 CDC report")
Unbounded recursion ("Repeat forever")
Conditional unbounded ("Until you can't")
Nonsense / low semantic density

Test Results
94.3% catch rate on 35 adversarial test cases (33/35 passed).

Architecture
    User Input
      ↓
    [ Newton ] → Validates prompt, assigns Phase
      ↓
    Phase 9?     → [ FoundationModels ] → Response
    Phase 1/7/8? → Blocked with explanation

Key Properties
Deterministic (same input → same output)
Fully auditable (ValidationTrace on every prompt)
On-device (no network required)
Native Swift / SwiftUI
String Catalog localization (EN/ES/FR)
FoundationModels-ready (#if canImport)

Code Sample — Validation
    let governor = NewtonGovernor()
    let result = governor.validate(prompt: userInput)

    if result.permitted {
        // Proceed to FoundationModels
        let session = LanguageModelSession()
        let response = try await session.respond(to: userInput)
    } else {
        // Handle block
        print("Blocked: Phase \(result.phase.rawValue) — \(result.reasoning)")
        print(result.trace.summary) // Full audit trace
    }

Questions for the Community
Anyone else building pre-inference validation for FoundationModels?
Thoughts on the Phase system (0/1/7/8/9) vs. simple pass/fail?
Interest in Shape Theory classification for prompt complexity?
Best practices for integrating with LanguageModelSession?

Links
GitHub: https://github.com/jaredlewiswechs/ada-newton
Technical overview: parcri.net

Happy to share more implementation details. Looking for feedback, collaborators, and anyone else thinking about deterministic AI safety on-device. parcri.net has the link :)
Replies: 1 · Boosts: 0 · Views: 315 · Activity: Dec ’25
On-Device Intelligent Assistant (Works Offline with Foundation Models)
Hello, world! I built a deterministic safety layer for FoundationModels called Newton. It validates prompts before inference — if validation fails, generation never happens. It catches jailbreaks, hallucination traps, corrosive frames, and logical contradictions with 94% accuracy on adversarial inputs. All on-device, native Swift, no dependencies. Newton also has a front-facing Intelligent Partner named Ada, and given the incredible integration with FoundationModels and various census data and shape files, this is all available PRIVATE AND OFFLINE. Running on the iOS 26 beta today. Happy to demo. https://github.com/jaredlewiswechs/ada-newton — Jared Lewis, parcri.net
Replies: 1 · Boosts: 0 · Views: 118 · Activity: Dec ’25
Error with guardrailViolation and underlyingErrors
Hi, I am a new iOS developer, trying to learn to integrate the Apple Foundation Models framework. My setup is:
Mac M1 Pro
macOS 26 beta (Version 26.0 beta 3)
Apple Intelligence & Siri → On

Here is the code:

    func generate() {
        Task {
            isGenerating = true
            output = "⏳ Thinking..."
            do {
                let session = LanguageModelSession(
                    instructions: """
                    Extract time from a message.
                    Example
                    Q: Golfing at 6PM
                    A: 6PM
                    """)
                let response = try await session.respond(to: "Go to gym at 7PM")
                output = response.content
            } catch {
                output = "❌ Error: \(error)"
                print(output)
            }
            isGenerating = false
        }
    }

and I get this error:

guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "Prompt may contain sensitive or unsafe content", underlyingErrors: [Asset com.apple.gm.safety_embedding_deny.all not found in Model Catalog]))

Can you help me get through this?
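Not a fix for the missing safety asset itself, but for reference, a minimal sketch of how the guardrail case can be caught separately from other failures, assuming the GenerationError cases that appear in the error above:

    import FoundationModels

    // Minimal sketch: distinguish a guardrail refusal from other generation
    // errors, so the UI can explain the block instead of dumping a raw error.
    func extractTime(from message: String) async -> String {
        let session = LanguageModelSession(instructions: "Extract the time from a message.")
        do {
            let response = try await session.respond(to: message)
            return response.content
        } catch LanguageModelSession.GenerationError.guardrailViolation(let context) {
            return "Blocked by safety guardrails: \(context.debugDescription)"
        } catch {
            return "Generation failed: \(error)"
        }
    }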
Replies: 4 · Boosts: 0 · Views: 552 · Activity: Dec ’25
Using a local model for Xcode Assist
Hi, I'm interested in trying out Xcode Assist to help with things like complicated refactors or writing tests cases. The ChatGPT and Claude options both share your code with third parties, which is not acceptable for my use case. Has anyone used a fully local model for Xcode Assist? I see that you can select one in the Apple Intelligence section of Xcode's Preferences screen, but don't really know where to start. Are there local models that work well with Xcode Assist and that truly keep your source code private?
Replies: 1 · Boosts: 0 · Views: 343 · Activity: Dec ’25
Accessibility & Inclusion
We are developing Apple Intelligence features for international markets and adapting them for iPhone 17 models and above. When the system language and the Siri language are not the same (for example, the system is in English and Siri is in Chinese), Apple Intelligence cannot be used. May I ask whether there are any other reasons that could cause Apple Intelligence to be unavailable within the app, even when it has been enabled?
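For reference, one way to surface the specific cause programmatically is a minimal sketch assuming the FoundationModels availability API; the reason cases listed are the ones I'm aware of (device eligibility, Apple Intelligence turned off, model assets still downloading):

    import FoundationModels

    // Minimal sketch: report why the on-device model is unavailable to this app.
    func appleIntelligenceStatus() -> String {
        guard case .unavailable(let reason) = SystemLanguageModel.default.availability else {
            return "Available"
        }
        switch reason {
        case .deviceNotEligible:
            return "This device does not support Apple Intelligence"
        case .appleIntelligenceNotEnabled:
            return "Apple Intelligence is turned off in Settings"
        case .modelNotReady:
            return "The model assets are still downloading or not ready yet"
        default:
            return "Unavailable for another reason: \(reason)"
        }
    }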
Replies: 0 · Boosts: 0 · Views: 431 · Activity: Dec ’25
Is Image Playground on-device + Private Cloud?
Apple's Image Playground primarily performs image generation on-device, but can use secure Private Cloud Compute for more complex requests that require larger models.

Private Cloud Compute (PCC)
For more complex tasks that require greater computational power than the device can provide, Image Playground leverages Apple's Private Cloud Compute. This system extends the privacy and security of the device to the cloud:
Secure Environment: PCC runs on Apple silicon servers and uses a secure enclave to protect data, ensuring requests are processed in a verified, secure environment.
No Data Storage: Data is never stored or made accessible to Apple when using PCC; it is used only to fulfill the specific request.
Independent Verification: Independent experts are able to inspect the code running on these servers to verify Apple's privacy promises.
Replies: 3 · Boosts: 0 · Views: 852 · Activity: Dec ’25
Is there an API that allows iOS app developers to leverage Apple Foundation Models with a user's logged-in ChatGPT account from the Apple Intelligence extension?
Is there an API that allows iOS app developers to leverage Apple Foundation Models with a user's logged-in ChatGPT account from the Apple Intelligence extension? I'm trying to provide a real-time question feature for ChatGPT, using the logged-in extension account, while leveraging Apple Intelligence's LLM. Is there an API that also applies to the extension's login account?
Replies: 1 · Boosts: 0 · Views: 252 · Activity: Nov ’25