Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore what's possible for your app here.

Posts under Machine Learning & AI topic
Insufficient memory for Foundation Model adapter training
I have a MacBook Pro M3 Pro with 18 GB of RAM and was following the instructions to fine-tune the foundation model given here: https://developer.apple.com/apple-intelligence/foundation-models-adapter/

However, while following the code sample in the example Jupyter notebook, my Mac hangs on the second code cell. Specifically:

    from examples.generate import generate_content, GenerationConfiguration
    from examples.data import Message

    output = generate_content(
        [[
            Message.from_system("A conversation between a user and a helpful assistant. Taking the role as a play writer assistant for a kids' play."),
            Message.from_user("Write a script about penguins."),
        ]],
        GenerationConfiguration(temperature=0.0, max_new_tokens=128),
    )
    output[0].response

After some debugging, I was getting the following error:

    RuntimeError: MPS backend out of memory (MPS allocated: 22.64 GB, other allocations: 5.78 MB, max allowed: 22.64 GB). Tried to allocate 52.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).

So is my machine not capable enough to adapter-train Apple's Foundation Model? If so, what is the recommended spec, and could it be documented somewhere? Thanks!
Replies: 8 · Boosts: 1 · Views: 267 · Activity: Jul ’25
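
A minimal sketch of the workaround the error message itself suggests: lift the MPS high-watermark limit and clear cached MPS allocations before generating. This is a stopgap rather than a fix (disabling the limit can destabilize the system if memory truly runs out), and the environment variable has to be set before torch initializes the MPS allocator:

    import os

    # Must be set before torch is imported, per the error message's own hint.
    os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.0"

    import torch

    # Release any cached MPS blocks left over from earlier notebook cells.
    torch.mps.empty_cache()
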
Apple Intelligence stuck on "preparing" for 6 days.
On the October 28 release day of Apple Intelligence I opted in. My iPhone and iPad immediately went to "waitlist" and within 2 to 3 hours were ready to initialize Apple Intelligence. My MacBook Pro 14" with M3 Pro processor and 18 GB of RAM has been stuck on "preparing" since release day (6 days now). I've tried numerous workarounds that I found on forums, as well as talking to Apple Support, who basically had me repeat the workarounds I'd already found. I've changed the region to an area that does not have Apple Intelligence and then back to the US, changed the Siri language to an unsupported one and back to a supported one, disabled background/startup apps, and disabled and re-enabled Siri. I've also restarted a bunch and left the Mac alone for hours at a time. I've noticed that my selected Siri voice never seems to finish downloading. Finally, after several chats and calls with Apple Support, I was told that it's beta software, they can't help me, and I should try the developer forums... so here I am. Any advice?
Replies: 9 · Boosts: 1 · Views: 3.0k · Activity: Feb ’25
The YOLO11 object detection model I exported to Core ML stopped working in the macOS 15.2 beta.
After updating to macOS 15.2 beta, the YOLO11 object detection model I exported to Core ML outputs incorrect, abnormal bounding boxes. It also doesn't work in iOS apps built on a 15.2 Mac. The same model worked fine on macOS 14.1. The issue reproduces when I train a custom YOLO11 model in Python, export it to Core ML, and test it in the preview tab of the mlpackage on macOS 15.2 with Xcode 16.0.
Replies: 6 · Boosts: 1 · Views: 1.4k · Activity: Feb ’25
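
For context, a typical export path of the kind described above looks roughly like the sketch below (assuming the Ultralytics package; the weights filename is a placeholder). The regression described would then show up when the resulting .mlpackage is opened in the Xcode or Finder preview on macOS 15.2:

    from ultralytics import YOLO

    # Load the custom-trained YOLO11 weights (placeholder filename).
    model = YOLO("yolo11_custom.pt")

    # Export to a Core ML .mlpackage; nms=True bakes non-maximum suppression
    # into the model so the preview tab can draw bounding boxes directly.
    model.export(format="coreml", nms=True)
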
no tensorflow-metal past tf 2.18?
Hi, we're on TensorFlow 2.20, which finally has support for Python 3.13. tensorflow-metal still only supports 2.18, which is over a year old. When can we expect tensorflow-metal support for TF 2.20 (or later)? I bought a Mac thinking I would be able to get great performance from the M-series processors, but here I am using my CPU for my ML projects. If it's taking so long to release updates, why not open-source the plugin so the community can keep it up to date? Cheers, Matt
Replies: 1 · Boosts: 1 · Views: 297 · Activity: Nov ’25
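
As a quick sanity check (not a fix), you can confirm whether the Metal plugin is actually registering a GPU device in a given environment; with a TensorFlow/tensorflow-metal version mismatch this list typically comes back empty and everything runs on the CPU:

    import tensorflow as tf

    # With a compatible tensorflow-metal installed, this should list an
    # Apple GPU device; an empty list means TensorFlow is CPU-only.
    print(tf.config.list_physical_devices("GPU"))
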
LLDB issues with Vision
Hi, I've been modifying the Camera sample app found here: https://developer.apple.com/tutorials/sample-apps/capturingphotos-camerapreview

In the preview-image processing, I call into the Vision APIs to detect either a person or an object, then use the segmentation mask to extract the person and composite them onto a different background with some other filters. I'm using Core Image to filter the CIImages, then converting and displaying the result as a SwiftUI Image. When running on my iPhone, it works fine. When running on my iPhone with the debugger attached, it crashes within a few seconds. Attached is a screenshot. At the top is an EXC_BAD_ACCESS in:

    libRPAC.dylib`std::__1::__hash_table<std::__1::__hash_value_type<long, qos_info_t>, std::__1::__unordered_map_hasher<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::hash, std::__1::equal_to, true>, std::__1::__unordered_map_equal<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::equal_to, std::__1::hash, true>, std::__1::allocator<std::__1::__hash_value_type<long, qos_info_t>>>::__emplace_unique_key_args<long, std::__1::piecewise_construct_t const&, std::__1::tuple<long const&>, std::__1::tuple<>>

This was working fine a couple of days ago, and I'm not sure why it's popping up now. Am I correct in interpreting this as an LLDB issue? How do I fix it?
Replies: 3 · Boosts: 1 · Views: 116 · Activity: May ’25
iOS 26 beta breaking my model
I just recently updated to iOS 26 beta (23A5336a) to test an app I am developing that runs an MLModel loaded from a .mlmodelc file. On the current iOS version 18.6.2 the model runs as expected with no issues. However, on iOS 26 I now get an error when trying to run inference with the model, where I pass a camera frame into it. Below is the error I see when I attempt an inference; near the bottom it says "Failed with status=0x1d : statusType=0x9: Program Inference error status=-1 Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model". Does this indicate I need to convert my model, or something else? I don't understand, since it runs normally on iOS 18. Any help getting this running again would be greatly appreciated. Thank you.

    processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: Could not process request ret=0x1d lModel=_ANEModel: { modelURL=file:///var/containers/Bundle/Application/04F01BF5-D48B-44EC-A5F6-3C7389CF4856/RizzCanvas.app/faceParsing.mlmodelc/ : sourceURL=(null) : UUID=46228BFC-19B0-45BF-B18D-4A2942EEC144 : key={"isegment":0,"inputs":{"input":{"shape":[512,512,1,3,1]}},"outputs":{"var_633":{"shape":[512,512,1,19,1]},"94_argmax_out_value":{"shape":[512,512,1,1,1]},"argmax_out":{"shape":[512,512,1,1,1]},"var_637":{"shape":[512,512,1,19,1]}}} : identifierSource=1 : cacheURLIdentifier=01EF2D3DDB9BA8FD1FDE18C7CCDABA1D78C6BD02DC421D37D4E4A9D34B9F8181_93D03B87030C23427646D13E326EC55368695C3F61B2D32264CFC33E02FFD9FF : string_id=0x00000000 : program=_ANEProgramForEvaluation: { programHandle=259022032430 : intermediateBufferHandle=13949 : queueDepth=127 } : state=3 : [Espresso::ANERuntimeEngine::__forward_segment 0] evaluate[RealTime]WithModel returned 0; code=8 err=Error Domain=com.apple.appleneuralengine Code=8 "processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error" UserInfo={NSLocalizedDescription=processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error} [Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": ANEF error: /private/var/containers/Bundle/Application/04F01BF5-D48B-44EC-A5F6-3C7389CF4856/RizzCanvas.app/faceParsing.mlmodelc/model.espresso.net, processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error status=-1 Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1). Error Domain=com.apple.Vision Code=3 "The VNCoreMLTransform request failed" UserInfo={NSLocalizedDescription=The VNCoreMLTransform request failed, NSUnderlyingError=0x114d92940 {Error Domain=com.apple.CoreML Code=0 "Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1)." UserInfo={NSLocalizedDescription=Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).}}}
Replies: 1 · Boosts: 0 · Views: 1k · Activity: Sep ’25
Issues with using ClassifyImageRequest() on an Xcode simulator
Hello, I am developing an app for the Swift Student Challenge; however, I keep encountering an error when using ClassifyImageRequest from the Vision framework in Xcode:

    VTEST: error: perform(_:): inside 'for await result in resultStream' error: internalError("Error Domain=NSOSStatusErrorDomain Code=-1 \"Failed to create espresso context.\" UserInfo={NSLocalizedDescription=Failed to create espresso context.}")

It works perfectly when testing it on a physical device, and I saw on another thread that ClassifyImageRequest doesn't work on simulators. Will this cause problems with my submission to the challenge? Thanks
Replies: 5 · Boosts: 1 · Views: 773 · Activity: Feb ’25
Regression in EnumeratedShapes support in recent macOS release
Hi, unfortunately I am not able to verify this, but I remember that some months ago I was able to create Core ML models that had one (or more) inputs with an enumerated shape and one (or more) inputs with a static shape. Since then I updated my macOS to Sequoia 15.5, and when I try to execute MLModels with this setup I get the following error:

    libc++abi: terminating due to uncaught exception of type CoreML::MLNeuralNetworkUtilities::AsymmetricalEnumeratedShapesException: A model doesn't allow input features with enumerated flexibility to have unequal number of enumerated shapes, but input feature global_write_indices has 1 enumerated shapes and input feature input_hidden_states has 3 enumerated shapes.

It may (arguably) make sense to require that all inputs with flexible enumerated shapes declare the same number of possible shapes, but this should not rule out also having static-shape inputs, with a single defined shape, alongside the flexible-shape inputs.
Replies: 6 · Boosts: 1 · Views: 199 · Activity: May ’25
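
For reference, a minimal coremltools sketch of the mixed setup being described: one input with an enumerated (flexible) shape alongside one input with a single static shape. The input names are taken from the error message above; the shapes and the traced_model placeholder are made up for illustration:

    import coremltools as ct

    # Flexible input: several allowed shapes enumerated up front.
    flexible_input = ct.TensorType(
        name="input_hidden_states",
        shape=ct.EnumeratedShapes(shapes=[(1, 128, 768), (1, 256, 768), (1, 512, 768)]),
    )

    # Static input: exactly one shape, no flexibility.
    static_input = ct.TensorType(name="global_write_indices", shape=(1, 128))

    # traced_model is a placeholder for e.g. a traced PyTorch module.
    mlmodel = ct.convert(traced_model, inputs=[flexible_input, static_input])
    mlmodel.save("mixed_shapes.mlpackage")
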
Foundation Model crash on macOS 15 (iPad app compatibility)
I have integrated Apple’s Foundation Models framework into my iOS application. As is known, Foundation Models is only supported starting from iOS 26 on compatible devices. To maintain compatibility with older iOS versions, I wrapped the API calls in an if #available(iOS 26, *) check. The application works normally on an iPad running iOS 18 and on a Mac running macOS 26. However, when the same build runs on a MacBook Air M1 (macOS 15) through iPad app compatibility, the app crashes immediately on launch. The main issue is that I cannot debug directly on macOS 15, since the app can only be built on macOS 26 with the Xcode beta; I have to distribute it via TestFlight and download it on the MacBook Air M1 for testing. This makes identifying the detailed cause of the crash very difficult and time-consuming. Nevertheless, I have confirmed that the crash is caused by the Foundation Models APIs.
Replies: 1 · Boosts: 0 · Views: 937 · Activity: Aug ’25
TAMM toolkit v0.2.0 targets a base model older than the one in macOS 26 beta 4
Problem: We trained a LoRA adapter for Apple's FoundationModels framework using their TAMM (Training Adapter for Model Modification) toolkit v0.2.0 on macOS 26 beta 4. The adapter trains successfully but fails to load with: "Adapter is not compatible with the current system base model."

TAMM v0.2.0 contains export/constants.py with:

    BASE_SIGNATURE = "9799725ff8e851184037110b422d891ad3b92ec1"

Findings:

Adapter export process, in export_fmadapter.py:

    def write_metadata(...):
        self_dict[MetadataKeys.BASE_SIGNATURE] = BASE_SIGNATURE  # Hardcoded value

The compatibility check:
- When loading an adapter, Apple's system compares the adapter's baseModelSignature with the current system model.
- If they don't match: compatibleAdapterNotFound error.
- The error doesn't reveal the expected signature.

Questions:
- How is BASE_SIGNATURE derived from the base model?
- Is it SHA-1 of base-model.pt or some other computation?
- Can we compute the correct signature for beta 4?
- Or do we need Apple to release TAMM v0.3.0 with an updated signature?
Replies: 1 · Boosts: 0 · Views: 540 · Activity: Aug ’25
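
To test the SHA-1 hypothesis raised in the questions above, one quick check is to hash the toolkit's base checkpoint and compare it against BASE_SIGNATURE. This is purely a sketch: the base-model.pt filename and the idea that the signature is a plain file hash are assumptions taken from the post, not confirmed behavior.

    import hashlib

    def sha1_of_file(path, chunk_size=1 << 20):
        """Stream the file so large checkpoints don't have to fit in memory."""
        digest = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Compare against the hardcoded value in export/constants.py.
    print(sha1_of_file("base-model.pt") == "9799725ff8e851184037110b422d891ad3b92ec1")
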
Proposal: Develop a Token Estimation Tool for Foundation Models
Dear Apple Foundation Models development team, I am a developer integrating Apple Foundation Models (AFM) into my app, and I encountered the exceededContextWindowSize error when exceeding the 4096-token limit.

Proposal: I suggest Apple develop a tool to estimate the token count of a prompt before it is sent to the model. This tool could be integrated into the FoundationModels framework for ease of use.

Benefits: A token estimation tool would help developers manage the context window limit and optimize performance.

I hope Apple considers this proposal soon. Thank you!
Replies: 6 · Boosts: 0 · Views: 346 · Activity: Aug ’25
Ways I can leverage AI when the user asks Siri, "What does this word mean"
I'm the creator of an app that helps users learn Arabic. Inside the app, users can save words, engage in lessons specific to certain grammar concepts, etc. I'm looking for a way for Siri to 'suggest' my app when the user asks to define any Arabic word. There are other questions I would like Siri to suggest my app for, but I figure that's a good start. What framework am I looking for here? I think App Intents? I remember I played with it for a bit last year but didn't get far. Any suggestions would be great. Would the new Foundation Models framework be any help here?
Replies: 2 · Boosts: 0 · Views: 118 · Activity: Jun ’25
Crash when testing Speech sample app with FoundationModels on macOS 26.0 beta and iOS 26.0 beta
Hello, I am testing the sample project provided here: Bringing advanced speech-to-text capabilities to your app. On both macOS 26.0 beta and iOS 26.0 beta, the app crashes immediately on launch with a dyld "Symbol not found" error related to FoundationModels.framework. It feels like this may be related to the samples being tested primarily on newer Apple Silicon devices, as I am seeing consistent crashes on an Intel MacBook and on an older iPhone. I would appreciate any insight, confirmation, or guidance on whether this is a known limitation or whether there is a workaround. Is it planned to be resolved soon?

Environment

macOS:
- Device: MacBook Pro (Intel)
- Processor: 2 GHz Quad-Core Intel Core i5
- Graphics: Intel Iris Plus Graphics 1536 MB
- Memory: 16 GB 3733 MHz LPDDR4X
- OS: macOS Tahoe Version 26.0 Beta (25A5338b)

iOS:
- Device: iPhone 11
- Model Number: MHDD3HN/A
- OS: iOS 26.0

Xcode:
- Version: 26.0 beta 3 (17A5276g)

Crash (macOS)

Abort signal received. Excerpt from crash dump:

    dyld`__abort_with_payload:
        0x7ff80e3ad4a0 <+0>:  movl $0x2000209, %eax
        0x7ff80e3ad4a5 <+5>:  movq %rcx, %r10
        0x7ff80e3ad4a8 <+8>:  syscall
    ->  0x7ff80e3ad4aa <+10>: jae 0x7ff80e3ad4b4

Console:

    dyld[9819]: Symbol not found: _$s16FoundationModels20LanguageModelSessionC5model10guardrails5tools12instructionsAcA06SystemcD0C_AC10GuardrailsVSayAA4Tool_pGAA12InstructionsVSgtcfC
    Referenced from: /Users/userx/Library/Developer/Xcode/DerivedData/SwiftTranscriptionSampleApp-*/Build/Products/Debug/SwiftTranscriptionSampleApp.app/Contents/MacOS/SwiftTranscriptionSampleApp.debug.dylib
    Expected in: /System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels

Crash (iOS)

Abort signal received. Excerpt from crash dump:

    dyld`__abort_with_payload:
        0x18f22b4b0 <+0>: mov x16, #0x209
        0x18f22b4b4 <+4>: svc #0x80
    ->  0x18f22b4b8 <+8>: b.lo 0x18f22b4d8

Console:

    dyld[2080]: Symbol not found: _$s16FoundationModels20LanguageModelSessionC5model10guardrails5tools12instructionsAcA06SystemcD0C_AC10GuardrailsVSayAA4Tool_pGAA12InstructionsVSgtcfC
    Referenced from: /private/var/containers/Bundle/Application/.../SwiftTranscriptionSampleApp.app/SwiftTranscriptionSampleApp.debug.dylib
    Expected in: /System/Library/Frameworks/FoundationModels.framework/FoundationModels

Questions
- Is this crash expected on Intel Macs and older iPhone models with the beta SDKs?
- Is there an official statement on whether macOS 26.x releases support Intel, or does that support exist only until macOS 26.1?
- Are there suggested workarounds for testing this sample project on current hardware?
- Is this a known limitation of the 26.0 beta, and if so, should we expect a fix in 26.0 or only in subsequent releases?

Attaching screenshots for reference. Thank you in advance.
Replies: 4 · Boosts: 0 · Views: 529 · Activity: Aug ’25
Swift playgrounds (.swiftpm) and CoreML
Hey guys, I've been having difficulties transferring my Xcode project to a Swift playground (.swiftpm) for the Swift Student Challenge. I keep getting the errors below, and none of the views can find the model in scope:

    TrashDetector 1.mlmodel: No predominant language detected. Set COREML_CODEGEN_LANGUAGE to preferred language.

    Unexpected duplicate tasks: Target 'TrashQuest' (project 'TrashQuest') has write command with output /Users/kmcph3/Library/Developer/Xcode/DerivedData/TrashQuest-glvzskunedgtakfrdmsxdoplondj/Build/Intermediates.noindex/TrashQuest.build/Debug-iphonesimulator/TrashQuest.build/0a4ef2429d66360920ddb4f16e65e233.sb

I've gone through multiple posts with these exact problems, but they all seem to be about ".playground" files because of the "Resources" folder (mind you, I did try exactly what they said). Is there anyone who can help? (Quick side note: why does it need to be a .swiftpm file for the SSC? Why can't we just send a zip of our Xcode project?)
Replies: 2 · Boosts: 0 · Views: 780 · Activity: Feb ’25
Metal GPU Work Won't Stop
Is there any way to stop GPU work that has been scheduled with Metal? Long shader calculations don't stop when the application is stopped in Xcode; they continue to take up GPU time and affect the display. Why is this functionality not available when Swift tasks can be canceled?
Replies: 2 · Boosts: 0 · Views: 743 · Activity: Feb ’25
Can't apply compression techniques to my Core ML object detection model.
    import coremltools as ct
    from coremltools.models.neural_network import quantization_utils

    # load full precision model
    model_fp32 = ct.models.MLModel(modelPath)

    model_fp16 = quantization_utils.quantize_weights(model_fp32, nbits=16)
    model_fp16.save("reduced-model.mlmodel")

I'm testing this with the model from one of Apple's sample projects (GameBoardDetector), and it works fine, reducing the model size by half. But there are several problems with my own model (trained in the Create ML app using Full Network):

- Quantizing to float 16 does not work (a new file gets created, but it is reduced by only 0.1 MB).
- Quantizing to fewer than 16 bits causes errors, and no file gets created.

Here are the additional metadata and precisions of the models.

Working model's additional metadata and precision:

Mine's additional metadata and precision:
Replies: 2 · Boosts: 0 · Views: 622 · Activity: Jan ’25
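
One possible explanation (an assumption, not confirmed by the post): Create ML object detectors trained with the Full Network option are typically exported as pipeline models (a detector network followed by a non-maximum-suppression stage), and weight quantization applied to the top-level model may not reach the large sub-model, which would match a size reduction of only 0.1 MB. A quick way to inspect what the .mlmodel actually contains:

    import coremltools as ct

    model = ct.models.MLModel(modelPath)  # modelPath as in the snippet above
    spec = model.get_spec()

    # Top-level model type; Create ML object detectors are often "pipeline".
    print(spec.WhichOneof("Type"))

    # If it is a pipeline, list the sub-models and their types to see which
    # part actually holds the network weights.
    if spec.WhichOneof("Type") == "pipeline":
        for i, sub in enumerate(spec.pipeline.models):
            print(i, sub.WhichOneof("Type"))
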