Hi everyone,
I'm working with VNFeaturePrintObservation in Swift to compute the similarity between images. The computeDistance function allows me to calculate the distance between two images, and I want to cluster similar images based on these distances.
Current Approach
Right now, I'm using a brute-force approach where I compare every image against every other image in the dataset. This results in an O(n^2) complexity, which quickly becomes a bottleneck. With 5000 images, it takes around 10 seconds to complete, which is too slow for my use case.
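Roughly, the brute-force pass looks like this (a simplified sketch; it assumes the feature prints have already been computed into an observations array, one VNFeaturePrintObservation per image):

import Vision

func pairwiseDistances(_ observations: [VNFeaturePrintObservation]) throws -> [[Float]] {
    let n = observations.count
    var distances = Array(repeating: Array(repeating: Float(0), count: n), count: n)
    for i in 0..<n {
        for j in (i + 1)..<n {
            var d: Float = 0
            // computeDistance fills `d` with the distance between the two feature prints.
            try observations[i].computeDistance(&d, to: observations[j])
            distances[i][j] = d
            distances[j][i] = d
        }
    }
    return distances
}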
Question
Are there any efficient algorithms or data structures I can use to improve performance?
If anyone has experience with optimizing feature vector clustering or has suggestions on how to scale this efficiently, I'd really appreciate your insights. Thanks!
The Core ML developer guide recommends saving reusable compiled Core ML models to a permanent location to avoid unnecessary rebuilds when creating a Core ML model instance.
However, there is no location that remains consistent across app updates, since each update changes the UUID in the app's resources path:
/var/mobile/Containers/Data/Application/<UUID>/Library/Application Support/
As a result, Core ML rebuilds models even if they are unchanged and located in the same relative directory within the app’s file structure.
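For reference, a simplified sketch of the save-and-reuse pattern the guide recommends, which is roughly what is in question here (MyModel is a placeholder name; the compiled model is stored under Application Support by a relative name and reused when present):

import CoreML

func loadOrCompileModel(from sourceURL: URL) async throws -> MLModel {
    let fileManager = FileManager.default
    // Resolve Application Support at launch; only the relative part is stable across updates.
    let supportDirectory = try fileManager.url(for: .applicationSupportDirectory,
                                               in: .userDomainMask,
                                               appropriateFor: nil,
                                               create: true)
    let compiledURL = supportDirectory.appendingPathComponent("MyModel.mlmodelc")
    if !fileManager.fileExists(atPath: compiledURL.path) {
        // Compile once, then copy the result to the permanent location.
        let temporaryCompiledURL = try await MLModel.compileModel(at: sourceURL)
        try fileManager.copyItem(at: temporaryCompiledURL, to: compiledURL)
    }
    return try MLModel(contentsOf: compiledURL)
}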
Topic:
Machine Learning & AI
SubTopic:
Core ML
The specific context is that I would like to build an agent that monitors my phone calls (with customer support, for example), simply identifies whether or not I'm still on hold, and notifies me when I'm not.
Currently, after reading the docs, I don't think it's possible yet, but I'm so annoyed by customer support calls that I'm willing to go the distance and see if there's any way.
I have been able to train an adapter on Google's Colaboratory.
I am able to start a LanguageModelSession and load it with my adapter.
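Roughly, the adapter-loading setup in question looks like this (a sketch; the Adapter(fileURL:) initializer name and the file location are assumptions and may differ from the exact Foundation Models API):

import FoundationModels

// Assumed adapter-loading flow; the adapter URL is hypothetical.
func makeAdapterSession(adapterURL: URL) throws -> LanguageModelSession {
    let adapter = try SystemLanguageModel.Adapter(fileURL: adapterURL)
    let model = SystemLanguageModel(adapter: adapter)
    return LanguageModelSession(model: model)
}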
The problem is that after one simple prompt, the context window is 90% full.
If I start the session without the adapter, the same simple prompt consumes only 1% of the context window.
Has anyone encountered this? I asked Claude AI and it seems to think that my training script needs adjusting. Grok on the other hand is (wrongly, I tried) convinced that I just need to tweak some parameters of LanguageModelSession or SystemLanguageModel.
Thanks for any tips.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Incident Identifier: 4C22F586-71FB-4644-B823-A4B52D158057
CrashReporter Key: adc89b7506c09c2a6b3a9099cc85531bdaba9156
Hardware Model: Mac16,10
Process: PRISMLensCore [16561]
Path: /Applications/PRISMLens.app/Contents/Resources/app.asar.unpacked/node_modules/core-node/PRISMLensCore.app/PRISMLensCore
Identifier: com.prismlive.camstudio
Version: (null) ((null))
Code Type: ARM-64
Parent Process: ? [16560]
Date/Time: (null)
OS Version: macOS 15.4 (24E5228e)
Report Version: 104
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x00000000 at 0x0000000000000000
Crashed Thread: 34
Application Specific Information:
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[__NSArrayM insertObject:atIndex:]: object cannot be nil'
Thread 34 Crashed:
0 CoreFoundation 0x000000018ba4dde4 0x18b960000 + 974308 (__exceptionPreprocess + 164)
1 libobjc.A.dylib 0x000000018b512b60 0x18b4f8000 + 109408 (objc_exception_throw + 88)
2 CoreFoundation 0x000000018b97e69c 0x18b960000 + 124572 (-[__NSArrayM insertObject:atIndex:] + 1276)
3 Portrait 0x0000000257e16a94 0x257da3000 + 473748 (-[PTMSRResize addAdditionalOutput:] + 604)
4 Portrait 0x0000000257de91c0 0x257da3000 + 287168 (-[PTEffectRenderer initWithDescriptor:metalContext:useHighResNetwork:faceAttributesNetwork:humanDetections:prevTemporalState:asyncInitQueue:sharedResources:] + 6204)
5 Portrait 0x0000000257dab21c 0x257da3000 + 33308 (__33-[PTEffect updateEffectDelegate:]_block_invoke.241 + 164)
6 libdispatch.dylib 0x000000018b739b2c 0x18b738000 + 6956 (_dispatch_call_block_and_release + 32)
7 libdispatch.dylib 0x000000018b75385c 0x18b738000 + 112732 (_dispatch_client_callout + 16)
8 libdispatch.dylib 0x000000018b742350 0x18b738000 + 41808 (_dispatch_lane_serial_drain + 740)
9 libdispatch.dylib 0x000000018b742e2c 0x18b738000 + 44588 (_dispatch_lane_invoke + 388)
10 libdispatch.dylib 0x000000018b74d264 0x18b738000 + 86628 (_dispatch_root_queue_drain_deferred_wlh + 292)
11 libdispatch.dylib 0x000000018b74cae8 0x18b738000 + 84712 (_dispatch_workloop_worker_thread + 540)
12 libsystem_pthread.dylib 0x000000018b8ede64 0x18b8eb000 + 11876 (_pthread_wqthread + 292)
13 libsystem_pthread.dylib 0x000000018b8ecb74 0x18b8eb000 + 7028 (start_wqthread + 8)
Topic:
Machine Learning & AI
SubTopic:
General
Good morning all, has anyone encountered the issue of Siri reverting to her original user interface on iOS 26? I'm trying to figure out the cause. I've sent feedback via the Feedback app. Just seeing if anyone else has the same issue.
Some of my users are experiencing crashes on instantiation of a CoreML model I've bundled with my app. I haven't been able to reproduce the crash on any of my devices. Crashes happen across all iOS 18 releases. Seems like something internal in CoreML is causing an issue.
Full stack trace:
6646631296fb42128ddc340b2d4322f7-symbolicated.crash
Topic:
Machine Learning & AI
SubTopic:
Core ML
Hello Apple Developer Community,
I'm exploring the integration of Apple Intelligence features into my mobile application and have a couple of questions regarding the current and upcoming API capabilities:
Custom Prompt Support: Is there a way to pass custom prompts to Apple Intelligence to generate specific inferences? For instance, can we provide a unique prompt to the Writing Tools or Image Playground APIs to obtain tailored outputs?
Direct Inference Capabilities: Beyond the predefined functionalities like text rewriting or image generation, does Apple Intelligence offer APIs that allow for more generalized inference tasks based on custom inputs?
I understand that Apple has provided APIs such as Writing Tools, Image Playground, and Genmoji. However, I'm interested in understanding the extent of customization and flexibility these APIs offer, especially concerning custom prompts and generalized inference.
Additionally, are there any plans or timelines for expanding these capabilities, perhaps with the introduction of new SDKs or frameworks that allow deeper integration and customization?
Any insights, documentation links, or experiences shared would be greatly appreciated.
Thank you in advance for your assistance!
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I installed Xcode 26.0 beta and downloaded the generative models sample from here:
https://developer.apple.com/documentation/foundationmodels/adding-intelligent-app-features-with-generative-models
But when I run it in the iOS 26.0 simulator, I get the error shown here. What's going wrong?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hi everyone,
I'm currently developing an object detection model that should identify up to seven classes in an image. While I usually do this kind of development in plain Python with the Ultralytics library, I thought I would give Create ML a shot. The experience is actually very nice, except that the model doesn't seem to be using the ANE or GPU (MPS) for accelerated training.
On https://developer.apple.com/machine-learning/create-ml/ it states: "On-device training Train models blazingly fast right on your Mac while taking advantage of CPU and GPU."
Am I doing something wrong?
I'm running the training on:
Apple M1 Pro, 16 GB
macOS 26.1 (Tahoe)
Xcode 26.1 (Build version 17B55)
It would be super nice to get some feedback or instructions.
Thank you in advance!
The developer tutorial for Visual Intelligence indicates that the way to detect and handle taps on a displayed entity from the Search section is via an "OpenIntent" associated with your entity.
However, running this intent executes code from within my app. If I have the perform() method display UI, it always displays UI from within my app.
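For context, a minimal sketch of that pattern (LandmarkEntity is a hypothetical stand-in for your own AppEntity type):

import AppIntents

struct OpenLandmarkIntent: OpenIntent {
    static var title: LocalizedStringResource = "Open Landmark"

    @Parameter(title: "Landmark")
    var target: LandmarkEntity   // hypothetical AppEntity surfaced to Visual Intelligence

    func perform() async throws -> some IntentResult {
        // Any UI presented here appears inside the app, not inside Visual Intelligence.
        return .result()
    }
}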
I noticed that the Google app's integration with Visual Intelligence behaves differently: tapping on an entity does not take you to the Google app; instead, a web view is presented sheet-style within the Visual Intelligence environment (see below).
How is that accomplished?
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I've created a "Transfer Learning BERT Embeddings" model with the default "Latin" language family and "Automatic" Language setting. This model performs exceptionally well against the test data set and functions as expected when I preview it in Create ML. However, when I add it to the Xcode project of the application to which I am deploying it, I am getting runtime errors that suggest it can't find the embedding resources:
Failed to locate assets for 'mul_Latn' - '5C45D94E-BAB4-4927-94B6-8B5745C46289' embedding model
Note, I am adding the model to the app project the same way that I added an earlier "Maximum Entropy" model. That model had no runtime issues. So it seems there is an issue getting hold of the embeddings at runtime.
For now, "runtime" means in the Simulator. I intend to deploy my application to iOS devices once GM 26 is released (the app also uses AFM).
I'm developing on Tahoe 26 beta, running on iOS 26 beta, using Xcode 26 beta.
Is this a known/expected issue? Are the embeddings expected to be a resource in the model? Is there a workaround?
I did try opening the model in Xcode and saving it as an mlpackage, then adding that to my app project, but that also didn't resolve the issue.
I'm running macOS 26 beta 5. I noticed that I can no longer achieve 100% usage on the ANE as I could before with the Apple Foundation Models on-device model. Has Apple activated some kind of throttling or power limiting of the ANE? I cannot get above 3 W or 40% usage since upgrading. I'm on the high-power energy mode. Is there an API rate limit being applied?
I have an M4 Pro Mac mini with 64 GB of memory.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I have been using "apple" to test foundation models.
I thought this is local, but today the answer changed - half way through explanation, suddenly guardrailViolation error was activated! And yesterday, all reference to "Apple II", "Apple III" now refers me to consult apple.com!
Does foundation models connect to Internet for answer? Using beta 3.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I would like to use Create ML to classify a motion. However, it seems to require at least two classes to train or test. What should I do, given that I only have one class (the target motion)?
Also, how should I interpret the 'Recall' and 'F1 Score' metrics?
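(For reference, the standard definitions behind those metrics, where TP, FP, and FN are true positives, false positives, and false negatives:)

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 Score = 2 * Precision * Recall / (Precision + Recall)

Recall measures how many of the actual positives were found; F1 balances precision and recall in a single number.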
Topic:
Machine Learning & AI
SubTopic:
Create ML
Hello Apple Developer Community,
I'm investigating Core ML model loading behavior and noticed that even when the compiled model path remains unchanged after an app update, the first run still triggers an "uncached load". This seems to impact user experience with unnecessary delays.
Question: Does Core ML provide any public API to check whether a compiled model (from a specific .mlmodelc path) is already cached in the system?
If such an API exists, we'd like to use it for pre-loading decision logic: only perform a background pre-load when the model isn't cached.
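For illustration, the kind of background pre-load we have in mind (minus the cache check, which is exactly the missing piece; the compiled model URL is assumed to be known):

import CoreML

func preloadModel(at compiledModelURL: URL) {
    Task.detached(priority: .background) {
        // Best-effort warm-up; failures are simply ignored here.
        _ = try? await MLModel.load(contentsOf: compiledModelURL,
                                    configuration: MLModelConfiguration())
    }
}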
Has anyone encountered similar scenarios or found official solutions? Any insights would be greatly appreciated!
Hello,
I was successfully able to compile TKDKid1000/TinyLlama-1.1B-Chat-v0.3-CoreML using Core ML, and it's working well. However, I’m now trying to compile the same model using Swift Transformers.
With the limited documentation available on the swift-chat and Hugging Face repositories, I’m finding it difficult to understand the correct process for compiling a model via Swift Transformers. I attempted the following approach, but I’m fairly certain it’s not the recommended or correct method.
Could someone guide me on the proper way to compile and use models like TinyLlama with Swift Transformers? Any official workflow, example, or best practice would be very helpful.
Thanks in advance!
This is the approach I have used:
import Foundation
import CoreML
import Tokenizers

@main
struct HopeApp {
    static func main() async {
        print(" Running custom decoder loop...")
        do {
            let tokenizer = try await AutoTokenizer.from(pretrained: "PY007/TinyLlama-1.1B-Chat-v0.3")
            var inputIds = tokenizer("this is the test of the prompt")
            print("🧠 Prompt token IDs:", inputIds)

            let model = try float16_model(configuration: .init())
            let maxTokens = 30

            for _ in 0..<maxTokens {
                // Build fixed-length (128) input and attention-mask arrays, zero-padded.
                let input = try MLMultiArray(shape: [1, 128], dataType: .int32)
                let mask = try MLMultiArray(shape: [1, 128], dataType: .int32)
                for i in 0..<inputIds.count {
                    input[i] = NSNumber(value: inputIds[i])
                    mask[i] = 1
                }
                for i in inputIds.count..<128 {
                    input[i] = 0
                    mask[i] = 0
                }

                let output = try model.prediction(input_ids: input, attention_mask: mask)
                let logits = output.logits // shape: [1, seqLen, vocabSize]

                // Greedy decoding: pick the highest-scoring token at the last filled position.
                let lastIndex = inputIds.count - 1
                let lastLogitsStart = lastIndex * 32003 // vocab size = 32003
                var nextToken = 0
                var maxLogit: Float32 = -Float.greatestFiniteMagnitude
                for i in 0..<32003 {
                    let logit = logits[lastLogitsStart + i].floatValue
                    if logit > maxLogit {
                        maxLogit = logit
                        nextToken = i
                    }
                }

                inputIds.append(nextToken)
                if nextToken == 32002 { break } // end-of-sequence token

                let partialText = try await tokenizer.decode(tokens: inputIds)
                print(partialText)
            }
        } catch {
            print("❌ Error: \(error)")
        }
    }
}
Topic:
Machine Learning & AI
SubTopic:
Core ML
I'm the creator of an app that helps users learn Arabic. Inside the app, users can save words, engage in lessons specific to certain grammar concepts, etc. I'm looking for a way for Siri to 'suggest' my app when the user asks to define any Arabic word. There are other questions I would like Siri to suggest my app for, but I figure that's a good start. What framework am I looking for here? I think App Intents? I remember I played with it for a bit last year but didn't get far. Any suggestions would be great.
Would the new Foundation Models framework be any help here?
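For what it's worth, a minimal App Intents sketch of the kind of entry point Siri can surface (the intent name, phrase, and dialog below are hypothetical placeholders, not a specific recommendation):

import AppIntents

struct DefineArabicWordIntent: AppIntent {
    static var title: LocalizedStringResource = "Define an Arabic Word"

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // Hand off to the app's dictionary feature here.
        return .result(dialog: "Opening the Arabic dictionary…")
    }
}

struct ArabicAppShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(intent: DefineArabicWordIntent(),
                    phrases: ["Define an Arabic word in \(.applicationName)"],
                    shortTitle: "Define Word",
                    systemImageName: "character.book.closed")
    }
}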
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Hey everyone, I am a beginner with developing and using Artificial Intelligence models.
How do I integrate my Create ML image classification model with Swift?
I already have an ML model and I want to integrate it into a SwiftUI app.
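A minimal sketch of the usual Vision + Core ML route, assuming your model is added to the app target and Xcode generated a class named MyImageClassifier (substitute your model's actual name):

import CoreML
import Vision
import UIKit

func classify(_ image: UIImage) throws -> String? {
    guard let cgImage = image.cgImage else { return nil }
    // Wrap the generated Core ML model for use with Vision.
    let coreMLModel = try MyImageClassifier(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)
    let request = VNCoreMLRequest(model: visionModel)
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    // The top classification label, if any.
    return (request.results as? [VNClassificationObservation])?.first?.identifier
}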
If anyone could help, that would be great.
Thank you, O3DP
Hey guys, I've been having difficulties transferring my Xcode project to a Swift playground (.swiftpm) for the Swift Student Challenge. I keep getting these errors as well as none of the views being able to find the model in scope:
"TrashDetector 1.mlmodel: No predominant language detected. Set COREML_CODEGEN_LANGUAGE to preferred language."
Unexpected duplicate tasks: Target 'TrashQuest' (project 'TrashQuest') has write command with output /Users/kmcph3/Library/Developer/Xcode/DerivedData/TrashQuest-glvzskunedgtakfrdmsxdoplondj/Build/Intermediates.noindex/TrashQuest.build/Debug-iphonesimulator/TrashQuest.build/0a4ef2429d66360920ddb4f16e65e233.sb
I've gone through multiple posts with these exact problems, but they all seem to be talking about ".playground" files due to the "Resources" folder (mind you, I did try exactly what they said). Is there anyone who can help???
(Quick side note, why does it need to be a swiftpm file for the SSC??? Like why can't we just send the zip of our Xcode project??)
Topic:
Machine Learning & AI
SubTopic:
Core ML