Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

All subtopics
Posts under Machine Learning & AI topic

TensorFlow Metal 1.2.0 on M2 fails to converge on common toy models
I've been trying to get some basic models to work on an M2 with tensorflow-metal 1.2 and Keras 2.15 and 2.18, and they all fail to work as expected. I'm running models copied from common tutorials, such as the Jason Brownlee Machine Learning Mastery object classification tutorial using CIFAR-10. When run on the GPU I can't get any reasonable results. Under Keras 2.15 the best validation accuracy ends up around 10-15%. Under Keras 2.18, validation goes off the rails around epoch 5, with wildly low accuracy and loss values reported as "nan":

Epoch 4/25 782/782: 19s 24ms/step - accuracy: 0.3450 - loss: 2.8925 - val_accuracy: 0.2992 - val_loss: 1.9869
Epoch 5/25 782/782: 19s 24ms/step - accuracy: 0.2553 - loss: nan - val_accuracy: 0.0000e+00 - val_loss: nan

Running the same code on the CPU under Keras 2.15, using tf.config.experimental.set_visible_devices([], 'GPU'), yields a reasonable result with validation accuracy around 75% as expected. Running the same code with Keras 2.15 on a Linux instance, CPU only, gives similar results. The tutorial can be found here: https://machinelearningmastery.com/object-recognition-convolutional-neural-networks-keras-deep-learning-library/

The only place I've deviated from the provided tutorial is using:

sgd = tf.keras.optimizers.legacy.SGD(learning_rate=lrate, momentum=0.9, nesterov=False)

I did this on the advice of the warning:

WARNING:absl:At this time, the v2.11+ optimizer `tf.keras.optimizers.SGD` runs slowly on M1/M2 Macs, please use the legacy Keras optimizer instead, located at `tf.keras.optimizers.legacy.SGD`.

Is there something special I need to do to make this work? I've followed the instructions here: https://developer.apple.com/metal/tensorflow-plugin/ and I've purged the venv a few times and started from scratch, but always with similarly terrible results.

Here are my platform details:
Chip: Apple M2
Memory: 16 GB
macOS: Sequoia 15.2
Python venv: 3.11
JupyterLab version: 4.3.3
TensorFlow versions: 2.15, 2.18
tensorflow-metal: 1.2.0

Thanks for any assistance or advice.
Replies: 8 · Boosts: 3 · Views: 881 · Activity: Mar ’25
iOS 18 App Intents while supporting iOS 17
Hello, I have an existing app that supports iOS 17. I already have three App Intents, but I would like to add some of the new iOS 18 App Intents like ShowInAppSearchResultsIntent. However, I am having a hard time using #available or @available to limit ShowInAppSearchResultsIntent to iOS 18 only while still supporting iOS 17.

Obviously, ShowInAppSearchResultsIntent needs to use @AssistantIntent, which is iOS 18 only, so I mark that struct as @available(iOS 18, *). That works as expected. It is when I need to add this "SearchSnippetIntent" intent to the AppShortcutsProvider that I run into trouble. See the code below:

struct SnippetsShortcutsAppShortcutsProvider: AppShortcutsProvider {
    @AppShortcutsBuilder static var appShortcuts: [AppShortcut] {
        // iOS 17+
        AppShortcut(intent: SnippetsNewSnippetShortcutsAppIntent(),
                    phrases: ["Create a New Snippet in \(.applicationName) Studio"],
                    shortTitle: "New Snippet",
                    systemImageName: "rectangle.fill.on.rectangle.angled.fill")
        AppShortcut(intent: SnippetsNewLanguageShortcutsAppIntent(),
                    phrases: ["Create a New Language in \(.applicationName) Studio"],
                    shortTitle: "New Language",
                    systemImageName: "curlybraces")
        AppShortcut(intent: SnippetsNewTagShortcutsAppIntent(),
                    phrases: ["Create a New Tag in \(.applicationName) Studio"],
                    shortTitle: "New Tag",
                    systemImageName: "tag.fill")
        // iOS 18 only
        AppShortcut(intent: SearchSnippetIntent(),
                    phrases: ["Search \(.applicationName) Studio", "Search \(.applicationName)"],
                    shortTitle: "Search",
                    systemImageName: "magnifyingglass")
    }

    let shortcutTileColor: ShortcutTileColor = .blue
}

The iOS 18-only AppShortcut shows the following error, but none of the suggested fixes seem to work. Maybe I am going about it the wrong way.

'SearchSnippetIntent' is only available in iOS 18 or newer
- Add 'if #available' version check
- Add @available attribute to enclosing static property
- Add @available attribute to enclosing struct

Thanks in advance for your help.
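For reference, a minimal sketch of the usual @available/#available pairing on ordinary declarations. Whether AppShortcutsBuilder itself accepts such a check is exactly the open question in this post, so treat this as the general Swift pattern rather than a confirmed fix; the intent body below is a hypothetical stub standing in for the poster's intent.

import AppIntents

// Hypothetical stand-in for the poster's iOS 18-only intent.
@available(iOS 18.0, *)
struct SearchSnippetIntent: AppIntent {
    static var title: LocalizedStringResource = "Search Snippets"
    func perform() async throws -> some IntentResult { .result() }
}

// In code compiled for an iOS 17 deployment target, the iOS 18-only type
// can only be constructed behind a runtime availability check.
func searchIntentIfAvailable() -> (any AppIntent)? {
    if #available(iOS 18.0, *) {
        return SearchSnippetIntent()
    }
    return nil
}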
Replies: 4 · Boosts: 3 · Views: 2.1k · Activity: Jan ’25
Apple's PCC + Foundation Models
Hi, I am developing an iOS application that utilizes Apple’s Foundation Models to perform certain summarization tasks. I would like to understand whether user data is transferred to Private Cloud Compute (PCC) in cases where the computation cannot be performed entirely on-device. This information is critical for our internal security and compliance reviews. I would appreciate your clarification on this matter. Thank you.
Replies: 1 · Boosts: 0 · Views: 288 · Activity: 1w
Foundation Model Framework
Greetings! I was trying to get a response from the LanguageModelSession, but I just keep getting the following:

Error getting response: Model Catalog error: Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides}

This occurs both on macOS 15.5 running the new Xcode beta with an iOS 26 simulator, and also on macOS 26 with the Xcode beta. The simulators are both iPhone 16 Pros. I was wondering if anyone had any advice?
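Not part of the original post: a minimal sketch of checking on-device model availability before creating a session, assuming the SystemLanguageModel.default.availability API in FoundationModels behaves as described in its documentation. This is illustrative, not a confirmed fix for the asset-catalog error above.

import FoundationModels

@available(iOS 26.0, macOS 26.0, *)
func checkModelAvailability() {
    // Inspect availability before constructing a LanguageModelSession.
    switch SystemLanguageModel.default.availability {
    case .available:
        print("On-device model is ready; safe to create a LanguageModelSession.")
    case .unavailable(let reason):
        // Covers cases such as an ineligible device, Apple Intelligence
        // being switched off, or model assets not yet downloaded.
        print("Model unavailable: \(reason)")
    }
}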
Replies: 15 · Boosts: 3 · Views: 1.3k · Activity: Jun ’25
Foundation Models not working: "Model is unavailable" error on iPad Pro M4
I am excited to try Foundation Models during WWDC, but it doesn't work at all for me. When running on my iPad Pro M4 with iPadOS 26 seed 1, I get the following error even when running the simplest query:

let prompt = "How are you?"
let stream = session.streamResponse(to: prompt)
for try await partial in stream {
    self.answer = partial
    self.resultString = partial
}

In the Xcode console, I see the following error:

assetsUnavailable(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "Model is unavailable", underlyingErrors: []))

I have verified that Apple Intelligence is enabled on my iPad. Any tips on how I can get it working? I have also submitted this feedback: FB17896752
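Not from the post: a hedged sketch of guarding the streaming call behind an availability check and a do/catch, so assetsUnavailable-style failures surface with a reason instead of failing mid-stream. Any names not in the post are assumptions.

import FoundationModels

@available(iOS 26.0, *)
func streamAnswer(to prompt: String) async {
    // Check availability first; the unavailable reason is often more useful
    // than the generation error thrown later.
    let availability = SystemLanguageModel.default.availability
    guard case .available = availability else {
        print("Model unavailable: \(availability)")
        return
    }

    let session = LanguageModelSession()
    do {
        let stream = session.streamResponse(to: prompt)
        for try await partial in stream {
            print(partial) // partial snapshots of the growing response
        }
    } catch {
        // assetsUnavailable and other GenerationError cases land here.
        print("Generation failed: \(error)")
    }
}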
Replies: 4 · Boosts: 3 · Views: 992 · Activity: Sep ’25
Foundation Models not working in Simulator?
I'm attempting to run a basic Foundation Model prototype in Xcode 26, but I'm getting the error below, using the iPhone 16 simulator with iOS 26. Should these models be working yet? Do I need to be running macOS 26 for these to work? (I hope that's not it)

Error:

Passing along Model Catalog error: Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides} in response to ExecuteRequest

Playground to reproduce:

#Playground {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "What's happening?")
    } catch {
        let error = error
    }
}
Replies: 14 · Boosts: 3 · Views: 2.0k · Activity: Jul ’25
Guardrail configuration options?
Is anything configurable for LanguageModelSession.Guardrails besides the default? I'm prototyping a camping app, and it's constantly slamming into guardrail errors when I use the new foundation model interface. Any subjects relating to fishing, survival, etc. won't generate. For example the prompt "How can I kill deer ticks using a clothing treatment?" returns a generation error. The results that I get are great when it works, but so far the local model sessions are extremely unreliable.
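As far as I know there is no public guardrail configuration. The sketch below only shows how a guardrail refusal might be caught and handled gracefully, assuming LanguageModelSession.GenerationError exposes a guardrailViolation case; that case name and the .content accessor are assumptions, not confirmed by the post.

import FoundationModels

@available(iOS 26.0, *)
func respondSafely(to prompt: String) async -> String? {
    let session = LanguageModelSession()
    do {
        return try await session.respond(to: prompt).content
    } catch let error as LanguageModelSession.GenerationError {
        // Distinguish guardrail refusals from other generation failures
        // so the app can fall back to canned copy for sensitive topics.
        if case .guardrailViolation = error {
            return nil
        }
        print("Generation failed: \(error)")
        return nil
    } catch {
        print("Unexpected error: \(error)")
        return nil
    }
}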
Replies: 2 · Boosts: 2 · Views: 224 · Activity: Jul ’25
coreml Fetching decryption key from server failed
My iOS app supports iOS 18, and I'm using an encrypted Core ML model secured with a key generated from Xcode. Every few months (around every 3 months), the encrypted model fails to load for both me and my users. When I investigate, I find this error:

coreml Fetching decryption key from server failed: noEntryFound("No records found"). Make sure the encryption key was generated with correct team ID

To temporarily fix it, I delete the old key, generate a new one, re-encrypt the model, and submit an app update. This resolves the issue, but only for a while. This is a terrible experience for users and obviously not a sustainable solution. I want to understand:

Why is this happening?
Is there a known expiration or invalidation policy for Core ML encryption keys?
How can I prevent this issue permanently?

Any insights or official guidance would be really appreciated.
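This doesn't explain the recurring key invalidation, but here is a hedged sketch of loading the encrypted model with the async Core ML API plus a small retry, so transient key-fetch failures can be distinguished from a permanently dead key. The model name and retry policy are made up for illustration.

import CoreML

// Hypothetical bundled, compiled model name; retry count is arbitrary.
func loadEncryptedModel() async throws -> MLModel {
    guard let url = Bundle.main.url(forResource: "MyEncryptedModel", withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let config = MLModelConfiguration()
    var lastError: Error?
    for attempt in 1...3 {
        do {
            // The async load is the variant that fetches the decryption key,
            // so network or key-server failures surface here.
            return try await MLModel.load(contentsOf: url, configuration: config)
        } catch {
            lastError = error
            print("Encrypted model load attempt \(attempt) failed: \(error)")
            try? await Task.sleep(nanoseconds: 1_000_000_000) // back off 1s
        }
    }
    throw lastError ?? CocoaError(.fileReadUnknown)
}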
Replies: 5 · Boosts: 2 · Views: 548 · Activity: Jul ’25
tensorflow-metal
Using TensorFlow on Apple silicon gives inaccurate results when compared to a Google Colab GPU (9-15% differences). Here are my install versions for 4 Anaconda envs. I understand that floating point precision, batch size, and activation functions can be an issue, but how do you rectify an issue that has persisted for the past 3 years?

1.) TF: 2.12.0, Python: 3.10.13, tensorflow-deps: 2.9.0, tensorflow-metal: 1.2.0, h5py: 3.6.0, keras: 2.12.0
2.) TF: 2.19.0, Python: 3.11.0, tensorflow-metal: 1.2.0, h5py: 3.13.0, keras: 3.9.2, jax: 0.6.0, jax-metal: 0.1.1, jaxlib: 0.6.0, ml_dtypes: 0.5.1
3.) Python: 3.10.13, tensorflow: 2.19.0, tensorflow-metal: 1.2.0, h5py: 3.13.0, keras: 3.9.2, ml_dtypes: 0.5.1
4.) TF: 2.16.2, tensorflow-deps: 2.9.0, Python: 3.10.16, tensorflow-macos: 2.16.2, tensorflow-metal: 1.2.0, h5py: 3.13.0, keras: 3.9.2, ml_dtypes: 0.3.2

Install of each env, with a common example:

Create env: conda create --name TF_Env_V2 --no-default-packages
Start env: source TF_Env_Name

ENV 1) conda install -c apple tensorflow-deps, conda install tensorflow, pip install tensorflow-metal, conda install ipykernel
ENV 2) conda install pip python==3.11, pip install tensorflow, pip install tensorflow-metal, conda install ipykernel
ENV 3) conda install pip python 3.10.13, pip install tensorflow, pip install tensorflow-metal, conda install ipykernel
ENV 4) conda install -c apple tensorflow-deps, pip install tensorflow-macos, pip install tensorflow-metal, conda install ipykernel

Example used in all 4 envs:

import tensorflow as tf

cifar = tf.keras.datasets.cifar100
(x_train, y_train), (x_test, y_test) = cifar.load_data()
model = tf.keras.applications.ResNet50(
    include_top=True,
    weights=None,
    input_shape=(32, 32, 3),
    classes=100,
)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
model.compile(optimizer="adam", loss=loss_fn, metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=64)
Replies: 5 · Boosts: 2 · Views: 1.1k · Activity: Oct ’25
Support for Content Exclusion Files in Apple Intelligence
I am writing to inquire about content exclusion capabilities within Apple Intelligence, particularly regarding the use of configuration files such as .aiignore or .aiexclude, similar to what exists in other AI-assisted coding tools. These mechanisms are highly valuable in managing what content AI systems can access, especially in environments that involve sensitive code or proprietary frameworks. I would appreciate it if anyone could clarify whether Apple Intelligence currently supports any exclusion configuration for AI-assisted features. If so, could you kindly provide documentation or guidance on how developers can implement these controls? If not, is there any plan to include such a feature in future updates?
Replies: 4 · Boosts: 0 · Views: 677 · Activity: Nov ’25
Group AppIntents’ Searchable DynamicOptionsProvider in Sections
I’m trying to group my EntityPropertyQuery selection into sections as well as making it searchable. I know that EntityStringQuery is used to perform the text search via entities(matching string: String). That works well enough and results in this modal:

Though, when I’m using a DynamicOptionsProvider to section my EntityPropertyQuery, it doesn’t allow searching anymore and simply opens the sectioned list in a menu like so:

How can I combine both? I’ve seen it in other apps, but can’t figure out why my code doesn’t allow me to section the results and make them searchable. Any ideas?

My code (simplified):

struct MyIntent: AppIntent {
    @Parameter(title: "Meter", optionsProvider: MyOptionsProvider())
    var meter: MyIntentEntity?
    // …
}

struct MyOptionsProvider: DynamicOptionsProvider {
    func results() async throws -> ItemCollection<MyIntentEntity> {
        // Get all data
        let allData = try IntentsDataHandler.shared.getEntities()

        // Create arrays for sections
        let fooEntities = allData.filter { $0.type == .foo }
        let barEntities = allData.filter { $0.type == .bar }

        return ItemCollection(sections: [
            ItemSection("Foo", items: fooEntities),
            ItemSection("Bar", items: barEntities)
        ])
    }
}

struct MeterIntentQuery: EntityStringQuery {
    // entities(for identifiers: [UUID]) and suggestedEntities() functions omitted

    func entities(matching string: String) async throws -> [MyIntentEntity] {
        // Fetch all data
        let allData = try IntentsDataHandler.shared.getEntities()

        // Filter data by string
        let matchingData = allData.filter { data in
            data.title.localizedCaseInsensitiveContains(string)
        }

        return matchingData
    }
}
Replies: 0 · Boosts: 2 · Views: 568 · Activity: Mar ’25
Setting Required Capabilities for Foundation Models
Is there any way to ensure that iOS apps we develop using Foundation Models can only be purchased or downloaded from the App Store by folks with capable devices? I would've thought there would be a required capability that the App Store could hook into, but I don't see one in the documentation here: https://developer.apple.com/documentation/bundleresources/information-property-list/uirequireddevicecapabilities

The closest seems to be iphone-performance-gaming-tier, as that appears to target all M1 and above chips on iPhone and iPad. There is an ipad-minimum-performance-m1 that would more reasonably seem to ensure Foundation Models is likely available, but that doesn't help with iPhone. So far, the only path seems to be to set the minimum deployment target to iOS 26 and add iphone-performance-gaming-tier as a required capability, but I'm a bit worried that capability might diverge in the future from what is Foundation Models / Apple Intelligence capable.

I understand that the majority of apps will want to selectively add Apple Intelligence features so they remain usable by folks whose devices don't support them, but the app experience I'm building doesn't make sense without Foundation Models being available, and I'd rather not have a large number of users downloading the app only to be told "Sorry, you're not Apple Intelligence capable".
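Pending an official capability key, here is a hedged sketch of the runtime fallback: gate the whole UI on model availability at launch. The view names are hypothetical, and this does not prevent ineligible users from downloading the app in the first place.

import FoundationModels
import SwiftUI

// Hypothetical placeholder views.
struct MainView: View {
    var body: some View { Text("Full experience") }
}

struct UnsupportedDeviceView: View {
    let reason: String
    var body: some View { Text("Apple Intelligence is not available: \(reason)") }
}

// Route to an "unsupported" screen when the on-device model is unavailable.
@available(iOS 26.0, *)
struct RootView: View {
    var body: some View {
        switch SystemLanguageModel.default.availability {
        case .available:
            MainView()
        case .unavailable(let reason):
            UnsupportedDeviceView(reason: "\(reason)")
        }
    }
}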
Replies: 2 · Boosts: 2 · Views: 234 · Activity: Aug ’25
Does Foundation Models ever do off-device computation?
I want to use Foundation Models in a project, but I know my users will want to avoid environmentally intensive AI work in data centers. Does Foundation Models ever use Private Compute Cloud or any other kind of cloud-based AI system? I'd like to be able to assure my users that the LLM usage is relatively environmentally friendly. It would be great to be able to cite a specific Apple page explaining that Foundation Models work is always done locally. If there's any chance that work can be done in the cloud, is there a way to opt out of that?
Replies: 2 · Boosts: 0 · Views: 272 · Activity: Oct ’25
Safety Guardrail errors for tiny prompt (dropped into large app)
I was able to open a new project and play around with the Foundation Models framework, but when I dropped this class into a production app (with a lot of files) I started running into safety guardrail errors for this very small prompt. Specifically it's "Safety guardrail was triggered after consecutive failures during streaming." Does it have something to do with the size of the app? I don't know what else to try to get it to work.

import FoundationModels
import Playgrounds

@available(iOS 26.0, *)
#Playground {
    Task {
        do {
            let session = LanguageModelSession()
            let prompt = "Write a short story about a talking cat."
            let response = try await session.respond(to: prompt)
            print(response)
        } catch {
            print("Error: \(error)")
        }
    }
}
Replies: 3 · Boosts: 2 · Views: 240 · Activity: Jun ’25
Using RAG on local documents from Foundation Model
I am watching a few WWDC sessions on Foundation Models and their usage, and it looks pretty cool. I was wondering if it is possible to perform RAG (retrieval-augmented generation) over the user's documents on the device, and eventually on iCloud. Let's say I have a lot of Pages documents about me, and I want the foundation model to access the information in those documents so it can answer questions about me that can be retrieved from them. How can this be done? Thanks
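There is no built-in RAG API that I know of; below is a rough sketch of the common pattern, retrieving the most relevant chunks with NLEmbedding from the NaturalLanguage framework and passing them to the on-device model as prompt context. The chunking, the top-3 cutoff, and the .content accessor are assumptions for illustration, not Apple-documented RAG support.

import FoundationModels
import NaturalLanguage

// Rank document chunks by embedding distance to the question, then pass the
// best chunks to the model as context inside the prompt.
@available(iOS 26.0, *)
func answer(question: String, documentChunks: [String]) async throws -> String {
    guard let embedding = NLEmbedding.sentenceEmbedding(for: .english) else {
        throw NSError(domain: "RAGSketch", code: 1)
    }

    // Smaller cosine distance means more similar; keep the top 3 chunks.
    let ranked = documentChunks.sorted {
        embedding.distance(between: question, and: $0, distanceType: .cosine) <
        embedding.distance(between: question, and: $1, distanceType: .cosine)
    }
    let context = ranked.prefix(3).joined(separator: "\n---\n")

    let session = LanguageModelSession()
    let prompt = """
    Using only the context below, answer the question.

    Context:
    \(context)

    Question: \(question)
    """
    return try await session.respond(to: prompt).content
}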
Replies: 4 · Boosts: 2 · Views: 411 · Activity: Jun ’25
Conforming an existing AppIntent to the photos domain schema
I have an image-based app with albums, except in my app, albums are known as galleries. When I tried to conform my existing OpenGalleryIntent with @AssistantIntent(schema: .photos.openAlbum), I had to change my existing gallery parameter to be called target in order to fit the predefined shape of this domain.

Previously, my intent was configured to display as “Open Gallery” with the description “Opens the selected Gallery” in the Shortcuts app. After conforming to the photos domain, it displays as “Open Album” with the description “Opens the Provided Album”. Shortcuts is ignoring my configured title and description now. My code builds, but with the following build warnings:

Parameter argument title of a required Assistant schema intent parameter target should not be overridden
Implementation of the property title of an AppIntent conforming to AssistantSchemaIntent should not be overridden
Implementation of the property description of an AppIntent conforming to AssistantSchemaIntent should not be overridden

Is my only option to change the concept of a Gallery inside of my app into an Album? I don't want to do this... Conceptually, my app aligns well with what this domain does, but I didn't consider that conforming to the shape of an AI schema intent would also dictate exactly how it's presented to the user. FB16283840
Replies: 2 · Boosts: 2 · Views: 780 · Activity: Jan ’25
Create ML model shows wrong output or predictions in Xcode
I am working on a Core ML image classification model in Xcode which takes a 299x299 image and attempts to classify hand-drawn sketches. The model was trained using Create ML and works perfectly when tested in the Create ML preview. However, when used in my Xcode app, the classification results are incorrect.

I have already verified that the image is correctly resized to 299x299 pixels, matching the model's input size. The classification always returns incorrect results, even for images that were correctly classified during training. I originally used kCVPixelFormatType_32ARGB, but I read that Core ML typically expects the BGRA format, so I updated my conversion function to use kCVPixelFormatType_32BGRA and CGImageAlphaInfo.premultipliedLast; the issue persists. This makes me suspect that either the pixel format is still incorrect or that something went wrong during the .mlmodelc compilation.
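Not from the post, but one way to rule out the hand-written pixel-buffer conversion entirely is to run the classifier through Vision, which handles resizing and pixel-format conversion itself. A minimal sketch, assuming a hypothetical generated Create ML model class named SketchClassifier.

import CoreML
import Vision
import UIKit

// Vision converts and scales the input image for the model, removing the
// custom CVPixelBuffer code as a possible source of wrong predictions.
// `SketchClassifier` is a hypothetical generated model class name.
func classifySketch(_ image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage,
          let coreMLModel = try? SketchClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion(nil)
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        let best = (request.results as? [VNClassificationObservation])?.first
        completion(best?.identifier)
    }
    // Match how the training images were framed (assumption).
    request.imageCropAndScaleOption = .scaleFill

    let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up)
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}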
Replies: 1 · Boosts: 2 · Views: 469 · Activity: Jan ’25