Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and discover what's possible for your app.

Posts under Machine Learning & AI topic

FoundationModels and Core Data
Hi, I have an app that uses Core Data to store user information and display it in various views. I want to know if it's possible to easily integrate this setup with FoundationModels to make it easier for the user to query and manipulate the information, and if so, how would I go about it? Can the model be pointed to the database schema file and the SQLite file sitting in the user's app group container to parse out the information needed? And/or should the NSManagedObjects be made @Generable for better output? Any guidance about this would be useful.
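As far as I can tell, FoundationModels has no facility for pointing the model at a schema file or SQLite store; the workable pattern is to fetch values through Core Data yourself and hand the model structured types. A minimal sketch, assuming a hypothetical "Trip" entity with a "name" attribute (an NSManagedObject subclass itself is unlikely to work with @Generable, but a plain struct mirror can):

import CoreData
import FoundationModels

// Plain struct the model can fill in; @Generable gives the framework
// a schema for guided generation.
@Generable
struct TripSummary {
    @Guide(description: "A one-sentence summary of the user's trips")
    var overview: String
}

func summarizeTrips(in context: NSManagedObjectContext) async throws -> TripSummary {
    // Fetch plain values from Core Data; the model never touches the SQLite file.
    let request = NSFetchRequest<NSManagedObject>(entityName: "Trip")
    let names = try context.fetch(request).compactMap { $0.value(forKey: "name") as? String }

    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Summarize these trips: \(names.joined(separator: ", "))",
        generating: TripSummary.self
    )
    return response.content
}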
1 reply · 0 boosts · 199 views · Jun ’25
Cannot use Language Model in Xcode beta
I've downloaded the Xcode beta and run the sample project "FoundationModelsTripPlanner", but I get this error when trying to generate the response:

InferenceError::inferenceFailed::Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.modelcatalog" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.modelcatalog}

Device: M1 Pro
Question: Is it because the M1 doesn't support this feature?
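For what it's worth, this asset-catalog error typically reflects model availability (Apple Intelligence not enabled, assets not yet downloaded, or an ineligible device or OS) rather than the M1 itself. A sketch of the availability check, assuming the unavailability reasons listed here match the current SDK:

import FoundationModels

switch SystemLanguageModel.default.availability {
case .available:
    break // Safe to create a LanguageModelSession and generate.
case .unavailable(.deviceNotEligible):
    print("This hardware does not support Apple Intelligence")
case .unavailable(.appleIntelligenceNotEnabled):
    print("Enable Apple Intelligence in Settings")
case .unavailable(.modelNotReady):
    print("Model assets are still downloading; try again later")
case .unavailable(let reason):
    print("Unavailable: \(reason)")
}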
1 reply · 1 boost · 282 views · Jun ’25
Apple's PCC + Foundation Models
Hi, I am developing an iOS application that utilizes Apple’s Foundation Models to perform certain summarization tasks. I would like to understand whether user data is transferred to Private Cloud Compute (PCC) in cases where the computation cannot be performed entirely on-device. This information is critical for our internal security and compliance reviews. I would appreciate your clarification on this matter. Thank you.
1 reply · 0 boosts · 298 views · 1w
Foundation Models unavailable for millions of users due to device language restriction - Need per-app language override
Hi everyone, I'm developing an iOS app using Foundation Models and I've hit a critical limitation that I believe affects many developers and millions of users.

The Issue
Foundation Models requires the device system language to be one of the supported languages. If a user has their device set to an unsupported language (Catalan, Dutch, Swedish, Polish, Danish, Norwegian, Finnish, Czech, Hungarian, Greek, Romanian, and many others), SystemLanguageModel.isSupported returns false and the framework is completely unavailable.

Why This Is Problematic
Scenario: A Catalan user has their iPhone in Catalan (their native language). They want to use an AI chat app in Spanish or English (languages they speak fluently).

Current situation:
❌ Foundation Models: Completely unavailable
✅ OpenAI GPT-4: Works perfectly
✅ Anthropic Claude: Works perfectly
✅ Any cloud-based AI: Works perfectly

The user must choose between:
- Keep the device in Catalan → cannot use Foundation Models at all
- Change the entire device to Spanish → can use Foundation Models, but terrible UX

Impact
This affects:
- Millions of users in regions where unsupported languages are official
- Multilingual users who prefer their device in their native language but can comfortably interact with AI in English/Spanish
- Developers who cannot deploy Foundation Models-based apps in these markets
- Privacy-conscious users who are ironically forced to use cloud AI instead of on-device AI

What We Need
One of these solutions would solve the problem:

Option 1: Per-app language override (preferred)

// Proposed API
let session = try await LanguageModelSession(preferredLanguage: "es-ES")

Option 2: Faster rollout of additional languages (particularly EU languages)

Option 3: Allow fallback to a user-selected supported language when the system language is unsupported

Technical Details
Current behavior:

// Device in Catalan
let isAvailable = SystemLanguageModel.isSupported // Returns false
// No way to override or specify an alternative language

Why This Matters
Apple Intelligence and Foundation Models are amazing for privacy and performance. But this language restriction makes the most privacy-focused AI solution less accessible than cloud alternatives. This seems contrary to Apple's values of accessibility and user choice.

Questions for the Community
- Has anyone else encountered this limitation?
- Are there any workarounds I'm missing?
- Has anyone successfully filed feedback about this? (Please share the FB number so we can reference it.)
- Are there any sessions or labs where this has been discussed?

Thanks for reading. I'd love to hear if others are facing this and how you're handling it.
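As a stopgap, a runtime check that at least detects the situation so the app can fall back to another provider; this sketch assumes a supportedLanguages property on SystemLanguageModel, so verify it against the current SDK before relying on it:

import FoundationModels

// Sketch: returns true only if the on-device model is available
// and claims support for the given language.
func canUseOnDeviceModel(for language: Locale.Language) -> Bool {
    let model = SystemLanguageModel.default
    guard case .available = model.availability else { return false }
    return model.supportedLanguages.contains { $0.isEquivalent(to: language) }
}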
1 reply · 1 boost · 332 views · Nov ’25
App Shortcuts Limit (10 per app) — Can This Be Increased?
Hi Apple team, When using AppShortcutsProvider, I hit the hard limit: Each app may have at most 10 App Shortcuts. This feels limiting for apps that offer multiple workflows and would benefit from deeper Siri integration. Could this cap be raised — ideally to 30 — to support broader use of AppIntents, enhance Siri automation, and unlock more system-level capabilities? AppShortcuts are a fantastic tool. Increasing the limit would make them even more powerful. Thanks!
1 reply · 0 boosts · 157 views · Jun ’25
Create ML app seems to stop testing without error
I have a smallish image classifier I've been working on using the Create ML app. For a while everything was going fine, but lately, as the dataset has gotten larger, Create ML seems to stop during the testing phase with no error or test results. There is no score in the result box, even though there are testing-started and testing-completed messages. No error message is shown in the Create ML app, but I do see these messages in the log:

default 14:25:36.529887-0500 MLRecipeExecutionService [0x6000012bc000] activating connection: mach=false listener=false peer=false name=com.apple.coremedia.videodecoder
default 14:25:36.529978-0500 MLRecipeExecutionService [0x41c5d34c0] activating connection: mach=false listener=true peer=false name=(anonymous)
default 14:25:36.530004-0500 MLRecipeExecutionService [0x41c5d34c0] Channel could not return listener port.
default 14:25:36.530364-0500 MLRecipeExecutionService [0x429a88740] activating connection: mach=false listener=false peer=true name=com.apple.xpc.anonymous.0x41c5d34c0.peer[1167].0x429a88740
default 14:25:36.534523-0500 MLRecipeExecutionService [0x6000012bc000] invalidated because the current process cancelled the connection by calling xpc_connection_cancel()
default 14:25:36.534537-0500 MLRecipeExecutionService [0x41c5d34c0] invalidated because the current process cancelled the connection by calling xpc_connection_cancel()
default 14:25:36.534544-0500 MLRecipeExecutionService [0x429a88740] invalidated because the current process cancelled the connection by calling xpc_connection_cancel()
error 14:25:36.558788-0500 MLRecipeExecutionService CreateWithURL:342: *** ERROR: err=24 (Too many open files) - could not open '<CFURL 0x60000079b540 [0x1fdd32240]>{string = file:///Users/kevin/Library/Mobile%20Documents/com~apple~CloudDocs/Binary%20Formations/Under%20My%20Roof/Core%20ML%20Training%20Data/Household%20Items/Output/2025.01.23_12.55.16/Test/Stove/Test480.webp, encoding = 134217984, base = (null)}'
default 14:25:36.559030-0500 MLRecipeExecutionService Error: <private>
default 14:25:36.559077-0500 MLRecipeExecutionService Error: <private>

Of particular interest is the "Too many open files" message from MLRecipeExecutionService referencing one of the test images. There are 2,555 test images in total, which I wouldn't think would be a very large set, and the system doesn't seem to be running out of memory or anything like that. Near the end of the test run, MLRecipeExecutionService had 2,934 file descriptors open according to lsof. Has anyone else run into this or know of a workaround? So far I've tried rebooting and recreating the Create ML project. Currently using Create ML Version 6.1 (150.3) on macOS 15.2 (24C101) running on a Mac Studio.
1 reply · 0 boosts · 489 views · Jan ’25
Error when using Image Feature Print v2
Hi all, I'm working on an app to classify dog breeds via Core ML, but when I try training a model using Image Feature Print v2, I get the following error:

Failed to create CVPixelBufferPool. Width = 0, Height = 0, Format = 0x00000000

Strangely, when I switch back to Image Feature Print v1, the model trains perfectly fine. I've verified that there aren't any invalid or broken images in my dataset. Is there a fix for this? Thanks!
1 reply · 1 boost · 479 views · Jan ’25
Inference Provider crashed with 2:5
I am trying to create a slightly different version of the content tagging code in the documentation: https://developer.apple.com/documentation/foundationmodels/systemlanguagemodel/usecase/contenttagging In the playground I am getting an "Inference Provider crashed with 2:5" error. I have no idea what that means or how to address the error. Any assistance would be appreciated.
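For comparison, here is roughly what the content-tagging setup looks like; the TopicTags shape is my own simplification, not the exact documentation sample:

import FoundationModels

@Generable
struct TopicTags {
    @Guide(description: "The most important topics in the input text")
    var topics: [String]
}

func tag(_ text: String) async throws -> [String] {
    // The contentTagging use case selects the specialized tagging adapter.
    let model = SystemLanguageModel(useCase: .contentTagging)
    let session = LanguageModelSession(model: model)
    let response = try await session.respond(to: text, generating: TopicTags.self)
    return response.content.topics
}

If a shape this minimal also crashes the inference provider in a playground, that points at the asset/installation side rather than your code.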
1 reply · 0 boosts · 535 views · Jul ’25
Xcode Beta 1 and FoundationModels access
I downloaded Xcode Beta 1 on my Mac (I did not upgrade the OS). The iOS 26 target and the iOS 26 device simulator are downloaded, and the simulator is selected as the run destination. When I try a simple Playground (#Playground) in Xcode, I get a session error.

#Playground {
    let avail = SystemLanguageModel.default.availability
    if avail != .available {
        print("SystemLanguageModel not available")
        return
    }
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "Create a recipe for apple pie")
    } catch {
        print(error)
    }
}

The error I get is:

Asset com.apple.gm.safety_deny_input.foundation_models.framework.api not found in Model Catalog

Is there a way to test drive the FoundationModels code without upgrading to macOS 26?
1 reply · 1 boost · 348 views · Jun ’25
About VisionKit DataScannerViewController
Hi, I'm having a problem with DataScannerViewController. I'm using the volume barcode scanning feature in my app; prior to this I was using an AVCaptureDevice with the ultra-wide-angle camera selected. After discovering DataScannerViewController, we planned to replace the previous, now-obsolete code with it. Altogether that went fine, but when I wanted to set the ultra-wide angle I didn't know where to start: I tried to read minZoomFactor and got 0.0, and I tried to set zoomFactor to 1.0 and found the assignment was not valid. Note: inside the delegate method func dataScannerDidZoom(_ dataScanner: DataScannerViewController), reading minZoomFactor and setting zoomFactor do work. What should I do next? I want to use only DataScannerViewController and still get the ultra-wide angle. Thanks a lot.
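A sketch of the pattern that appears to be required: the zoom properties only become meaningful once the capture session is live, so set zoomFactor after startScanning() (or from the delegate callback, as you found). The sub-1.0 factor for ultra-wide is my assumption, mirroring AVCaptureDevice behavior:

import UIKit
import VisionKit

@MainActor
func presentScanner(from host: UIViewController) throws {
    let scanner = DataScannerViewController(
        recognizedDataTypes: [.barcode()],
        isHighlightingEnabled: true
    )
    host.present(scanner, animated: true)
    try scanner.startScanning()
    // Only meaningful once the capture session is running; before that,
    // minZoomFactor reads 0.0 and assignments appear to be ignored.
    scanner.zoomFactor = max(scanner.minZoomFactor, 0.5)
}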
1 reply · 0 boosts · 636 views · Jan ’25
CoreML: Model loading utilities
Hello, we find that models sometimes load very fast (<< 1 second) and sometimes encounter very long load times (>> 120 seconds). During the slow loads, the model is being compiled. We would greatly appreciate the ability to check cache validity via Core ML and detect that we are about to encounter a long load, so that we can mitigate it and provide a good user experience.

A secondary issue: sometimes the cache is corrupted (typically a .mpsgraphpackage yielding Metal cold asserts). This yields load failures and OS errors that persist between launches, and we have to manually nuke the cache (~/Library/..../my-app/...) for the Core ML assets. A Core ML API for clearing caches, and hardening against asserts across the load paths, would be appreciated.
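Until something like that exists, one mitigation is to load asynchronously off the main thread and treat a conspicuously slow load as a cold or invalidated cache; a sketch, with an arbitrary threshold:

import CoreML

func loadModel(at compiledURL: URL) async throws -> MLModel {
    let start = ContinuousClock.now
    // The async loader keeps the UI responsive if Core ML decides to recompile.
    let model = try await MLModel.load(contentsOf: compiledURL,
                                       configuration: MLModelConfiguration())
    if start.duration(to: ContinuousClock.now) > .seconds(5) {
        print("Slow load; cache was probably cold or invalidated")
    }
    return model
}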
1 reply · 0 boosts · 118 views · Jun ’25
How to confirm whether Create ML is training
I am currently training a tabular classification model in Create ML. The dataset comprises 30 features, with 1,000,000 training data points and 1,000,000 validation data points. Could you estimate the approximate training time on an M4 Max MacBook Pro? During training, Create ML displays a "Processing" status with no progress bar, and I would like to be able to tell whether training is still ongoing; I have often suspected that it has stalled.
1 reply · 0 boosts · 643 views · Jan ’25
Train adapter with tool calling
The documentation on adapter training lacks any details about training on a dataset that includes tool calling, and the page about tool calling itself only explains how to use it from Swift, without the internal details that would be useful for training. My question: what should the schema look like for including tool calls in a training dataset?
1 reply · 0 boosts · 237 views · Jun ’25
Problems creating a PipelineRegressor from a PyTorch converted model
I am trying to create a Pipeline with 3 sub-models: a Feature Vectorizer -> a NN regressor converted from PyTorch -> a Feature Extractor (to convert the output tensor to a Double value). The pipeline works fine when I use just a Vectorizer and an Extractor; this is the code:

vectorizer = models.feature_vectorizer.create_feature_vectorizer(
    input_features=["windSpeed", "theoreticalPowerCurve", "windDirection"],  # Multiple input features
    output_feature_name="input"
)
preProc_spec = vectorizer[0]
ct.utils.convert_double_to_float_multiarray_type(preProc_spec)

extractor = models.array_feature_extractor.create_array_feature_extractor(
    input_features=[("input", datatypes.Array(3,))],  # Multiple input features
    output_name="output",
    extract_indices=1
)
ct.utils.convert_double_to_float_multiarray_type(extractor)

pipeline_network = pipeline.PipelineRegressor(
    input_features=["windSpeed", "theoreticalPowerCurve", "windDirection"],
    output_features=["output"]
)
pipeline_network.add_model(preProc_spec)
pipeline_network.add_model(extractor)
ct.utils.convert_double_to_float_multiarray_type(pipeline_network.spec)
ct.utils.save_spec(pipeline_network.spec, "Final.mlpackage")

This model works OK. I created a regression NN using PyTorch and converted it to Core ML:

import torch
import torch.nn as nn

class TurbinePowerModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(3, 4)
        self.activation1 = nn.ReLU()
        #self.linear2 = nn.Linear(5, 4)
        #self.activation2 = nn.ReLU()
        self.output = nn.Linear(4, 1)

    def forward(self, x):
        #x = F.normalize(x, dim=0)
        x = self.linear1(x)
        x = self.activation1(x)
        #x = self.linear2(x)
        #x = self.activation2(x)
        x = self.output(x)
        return x

    def forward_inference(self, windSpeed, theoreticalPowerCurve, windDirection):
        input_tensor = torch.tensor([windSpeed, theoreticalPowerCurve, windDirection], dtype=torch.float32)
        return self.forward(input_tensor)

model = torch.load('TurbinePowerRegression-1layer.pt', weights_only=False)

import coremltools as ct
print(ct.__version__)

import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.read_csv('T1_clean.csv', delimiter=';')
X = df[['WindSpeed', 'TheoreticalPowerCurve', 'WindDirection']]
y = df[['ActivePower']]

scaler = StandardScaler()
X = scaler.fit_transform(X)
y = scaler.fit_transform(y)

X_tensor = torch.tensor(X, dtype=torch.float32)
y_tensor = torch.tensor(y, dtype=torch.float32)

traced_model = torch.jit.trace(model, X_tensor[0])
mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(name="input", shape=X_tensor[0].shape)],
    classifier_config=None  # Optional, for classification tasks
)
mlmodel.save("TurbineBase.mlpackage")

This model has a MultiArray (Float32 3) as input and a MultiArray (Float32 1) as output. When I try to include it in the middle of the pipeline (adjusting the output and input types of the other models accordingly), the process runs OK, but I get an error when opening the generated model in Xcode. What is missing from the models? How can I set or adjust this metadata properly? Thanks!
1 reply · 0 boosts · 640 views · Dec ’24
VNCoreMLTransform - request failed
I keep getting this error: "Prediction Failed: The VNCoreMLTransform request failed". I have tried a Picker for File and the Photo Library; both give the same result. I have debugged the resize to 360x360 but still hit this error. The model I'm trying to implement was created with CreateMLComponents, following the WWDC 2022 banana ripeness example, and I have used an index for each .jpg. Is there a possible way to solve this, or is the error somewhere in the training of the model?
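In case the issue is input scaling rather than training, a minimal sketch of the standard Vision setup that lets Vision do the 360x360 scaling itself via imageCropAndScaleOption ("RipenessClassifier" is a placeholder for your generated model class):

import Vision
import CoreML

func classify(_ cgImage: CGImage) throws -> [VNClassificationObservation] {
    // Placeholder: substitute the class Xcode generates for your model.
    let coreMLModel = try RipenessClassifier(configuration: MLModelConfiguration()).model
    let vnModel = try VNCoreMLModel(for: coreMLModel)
    let request = VNCoreMLRequest(model: vnModel)
    // Let Vision scale/crop the image to the model's expected input size.
    request.imageCropAndScaleOption = .centerCrop
    try VNImageRequestHandler(cgImage: cgImage).perform([request])
    return request.results as? [VNClassificationObservation] ?? []
}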
1 reply · 0 boosts · 462 views · Mar ’25
FoundationModels Content Sanitizer Blocking Legitimate Text Processing
I'm developing a macOS application using the FoundationModels framework (LanguageModelSession) and encountering issues with the content sanitizer blocking legitimate text input.

Issue Description:
The content sanitizer is flagging text strings that contain certain substrings, even when they represent legitimate technical content. For example:
- F_SEEL_SEX1S.wav (sE Electronics SEX1S microphone model)
- Technical product identifiers
- Serial numbers and version codes

Broader Concern:
The content sanitizer appears to be applying restrictions that seem inappropriate for user-owned content. Even if a filename were something like "human sex.wav", users should have the right to process their own legitimate files on their own devices without content filtering interference.

Error Messages:
SensitiveContentSettings: Sanitizer model found unsafe content in value
FoundationModels.LanguageModelSession.GenerationError error 2

Questions:
1. Is there a way to disable content sanitization for processing user-owned content?
2. What's the recommended approach for applications that need to handle arbitrary user text?
3. Are there APIs to process personal content without filtering restrictions?

Environment:
macOS 26.0
FoundationModels framework
LanguageModelSession

Any guidance would be appreciated.
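As far as I know there is no switch to disable the guardrails for user-owned content, but you can at least catch the violation and degrade gracefully; a sketch assuming the guardrailViolation case on LanguageModelSession.GenerationError:

import FoundationModels

func summarize(_ userText: String) async -> String? {
    let session = LanguageModelSession()
    do {
        return try await session.respond(to: "Summarize: \(userText)").content
    } catch LanguageModelSession.GenerationError.guardrailViolation {
        // The sanitizer rejected the input or output; fall back rather than fail.
        return nil
    } catch {
        return nil
    }
}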
1 reply · 0 boosts · 311 views · Jun ’25