Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under Machine Learning & AI topic

Post · Replies · Boosts · Views · Activity

Apple Intelligence stuck on "preparing" for 6 days.
On the October 28 release day of Apple Intelligence, I opted in. My iPhone and iPad immediately went to "waitlist" and within 2 to 3 hours were ready to initialize Apple Intelligence. My MacBook Pro 14" with an M3 Pro processor and 18 GB of RAM has been stuck on "preparing" since release day (6 days now). I've tried numerous workarounds that I found on forums, as well as talking to Apple Support, who basically had me repeat the same workarounds. I've tried changing the region to one that does not have Apple Intelligence and then back to the US, changed the Siri language to an unsupported one and back to a supported one, disabled background/startup apps, and disabled and re-enabled Siri. I've also restarted a bunch of times and left the Mac alone for hours at a time. I've noticed that my selected Siri voice never seems to finish downloading. Finally, after several chats and calls with Apple Support, I was told that it's beta software, they can't help me, and I should try the developer forums... so here I am. Any advice?
9 replies · 1 boost · 3.0k views · Feb ’25
The YOLO11 object detection model I exported to Core ML stopped working in the macOS 15.2 beta
After updating to the macOS 15.2 beta, the YOLO11 object detection model I exported to Core ML outputs incorrect, abnormal bounding boxes. It also doesn't work in iOS apps built on a 15.2 Mac. The same model worked fine on macOS 14.1. When I train a YOLO11 custom model in Python, export it to Core ML, and test it in the preview tab of the mlpackage on macOS 15.2 with Xcode 16.0, I get the result described above.
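For comparing results across OS versions, below is a minimal sketch of running an exported mlpackage through Vision and logging the raw bounding boxes; "yolo11.mlpackage" and "test.jpg" are placeholder paths, and it assumes the export includes the usual NMS pipeline so Vision returns VNRecognizedObjectObservation results.

import Foundation
import CoreML
import Vision

// Sketch: load the exported mlpackage and run it with Vision so the exact same
// code path can be compared on macOS 14.x and the macOS 15.2 beta.
do {
    let compiledURL = try MLModel.compileModel(at: URL(fileURLWithPath: "yolo11.mlpackage")) // placeholder path
    let vnModel = try VNCoreMLModel(for: MLModel(contentsOf: compiledURL))

    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
        for obs in observations {
            // boundingBox is normalized (0...1, lower-left origin); log it so the
            // values can be diffed between OS versions.
            print(obs.labels.first?.identifier ?? "?", obs.confidence, obs.boundingBox)
        }
    }
    request.imageCropAndScaleOption = .scaleFill

    try VNImageRequestHandler(url: URL(fileURLWithPath: "test.jpg")).perform([request]) // placeholder path
} catch {
    print("Failed to run model: \(error)")
}

Logging the same image on macOS 14.x and 15.2 beta with this code makes it easy to attach concrete numbers to a bug report.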
6 replies · 1 boost · 1.4k views · Feb ’25
Issues with using ClassifyImageRequest() on an Xcode simulator
Hello, I am developing an app for the Swift Student Challenge; however, I keep encountering an error when using ClassifyImageRequest from the Vision framework in Xcode:

VTEST: error: perform(_:): inside 'for await result in resultStream' error: internalError("Error Domain=NSOSStatusErrorDomain Code=-1 \"Failed to create espresso context.\" UserInfo={NSLocalizedDescription=Failed to create espresso context.}")

It works perfectly when testing it on a physical device, and I saw on another thread that ClassifyImageRequest doesn't work on simulators. Will this cause problems with my submission to the challenge? Thanks
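A minimal sketch of guarding the call so the app still runs in the simulator might look like this; it assumes the newer Swift-only Vision API where ClassifyImageRequest.perform(on:) returns [ClassificationObservation], and that an empty result is acceptable for simulator previews.

import CoreImage
import Vision

// Sketch: skip classification in the simulator, where the backend used by
// ClassifyImageRequest is unavailable, and run the real request only on device.
func classify(_ image: CIImage) async throws -> [ClassificationObservation] {
    #if targetEnvironment(simulator)
    // Assumption: an empty (or canned) result is enough to preview the UI.
    return []
    #else
    let request = ClassifyImageRequest()
    return try await request.perform(on: image)
    #endif
}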
5 replies · 1 boost · 773 views · Feb ’25
Efficient Clustering of Images Using VNFeaturePrintObservation.computeDistance
Hi everyone, I'm working with VNFeaturePrintObservation in Swift to compute the similarity between images. The computeDistance function allows me to calculate the distance between two images, and I want to cluster similar images based on these distances.

Current Approach
Right now, I'm using a brute-force approach where I compare every image against every other image in the dataset. This results in O(n^2) complexity, which quickly becomes a bottleneck. With 5000 images, it takes around 10 seconds to complete, which is too slow for my use case.

Question
Are there any efficient algorithms or data structures I can use to improve performance? If anyone has experience with optimizing feature vector clustering or has suggestions on how to scale this efficiently, I'd really appreciate your insights. Thanks!
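One commonly suggested direction, sketched below under the assumption that a fixed distance threshold is acceptable, is greedy "leader" clustering: each new feature print is compared only against one representative per existing cluster instead of against every image, which is usually far fewer than n comparisons per image.

import Vision

// Sketch: greedy leader clustering over VNFeaturePrintObservation values.
// Returns clusters as arrays of indices into the input array.
func cluster(_ prints: [VNFeaturePrintObservation], threshold: Float) -> [[Int]] {
    var representatives: [VNFeaturePrintObservation] = []
    var clusters: [[Int]] = []

    for (index, featurePrint) in prints.enumerated() {
        var assigned = false
        for (clusterIndex, representative) in representatives.enumerated() {
            var distance: Float = 0
            // computeDistance writes the result into the inout Float parameter.
            if (try? representative.computeDistance(&distance, to: featurePrint)) != nil,
               distance < threshold {
                clusters[clusterIndex].append(index)
                assigned = true
                break
            }
        }
        if !assigned {
            // Start a new cluster with this image as its representative.
            representatives.append(featurePrint)
            clusters.append([index])
        }
    }
    return clusters
}

For very large sets, another option is to extract the raw feature vectors and use an approximate-nearest-neighbor index instead of exact pairwise distances.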
0 replies · 0 boosts · 524 views · Feb ’25
Swift playgrounds (.swiftpm) and CoreML
Hey guys, I've been having difficulties transferring my Xcode project to a Swift playground (.swiftpm) for the Swift Student Challenge. I keep getting these errors, and none of my views can find the model in scope:

"TrashDetector 1.mlmodel: No predominant language detected. Set COREML_CODEGEN_LANGUAGE to preferred language."

Unexpected duplicate tasks: Target 'TrashQuest' (project 'TrashQuest') has write command with output /Users/kmcph3/Library/Developer/Xcode/DerivedData/TrashQuest-glvzskunedgtakfrdmsxdoplondj/Build/Intermediates.noindex/TrashQuest.build/Debug-iphonesimulator/TrashQuest.build/0a4ef2429d66360920ddb4f16e65e233.sb

I've gone through multiple posts with these exact problems, but they all seem to be talking about ".playground" files because of the "Resources" folder (mind you, I did try exactly what they said). Is there anyone who can help? (Quick side note: why does it need to be a .swiftpm file for the SSC? Why can't we just send a zip of our Xcode project?)
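A sketch of one workaround that's often suggested for .swiftpm projects, where Core ML code generation doesn't run: bundle the raw .mlmodel as a resource, compile it at runtime, and load it with MLModel directly instead of relying on a generated class. The resource name below is taken from the error message and may need adjusting, and depending on how the resource is declared, Bundle.module may be needed instead of Bundle.main.

import Foundation
import CoreML

// Sketch of a workaround, assuming the raw .mlmodel is added as a package resource:
// compile it at runtime and load it with MLModel directly, so no generated class
// (and therefore no COREML_CODEGEN_LANGUAGE step) is needed.
func loadTrashDetector() throws -> MLModel {
    // "TrashDetector 1" is taken from the error message above.
    guard let rawURL = Bundle.main.url(forResource: "TrashDetector 1", withExtension: "mlmodel") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let compiledURL = try MLModel.compileModel(at: rawURL)
    return try MLModel(contentsOf: compiledURL)
}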
2 replies · 0 boosts · 780 views · Feb ’25
Metal GPU Work Won't Stop
Is there any way to stop GPU work that has been scheduled using Metal? Long shader calculations don't stop when the application is stopped in Xcode; they continue to take up GPU time and affect the display. Why is this functionality not available when Swift Tasks can be cancelled?
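As far as I know, Metal has no API to cancel a command buffer once it has been committed, so the usual mitigation, sketched below under the assumption that the long computation can be split into bounded chunks, is to submit one command buffer per chunk and stop scheduling new ones when a flag is set.

import Metal

// Sketch: chunked submission so that, once cancelled, no further work reaches the GPU.
final class CancellableGPUJob {
    var isCancelled = false
    private let queue: MTLCommandQueue

    init?(device: MTLDevice? = MTLCreateSystemDefaultDevice()) {
        guard let queue = device?.makeCommandQueue() else { return nil }
        self.queue = queue
    }

    // encodeChunk encodes one bounded slice of the overall computation.
    func run(chunks: Int, encodeChunk: (MTLComputeCommandEncoder, Int) -> Void) {
        for chunk in 0..<chunks {
            if isCancelled { break } // nothing new is scheduled after this point
            guard let buffer = queue.makeCommandBuffer(),
                  let encoder = buffer.makeComputeCommandEncoder() else { return }
            encodeChunk(encoder, chunk)
            encoder.endEncoding()
            buffer.commit()
            buffer.waitUntilCompleted() // or addCompletedHandler(_:) to stay asynchronous
        }
    }
}

Stopping the process in Xcode still can't recall work that is already queued on the GPU, but chunking bounds how much of it is outstanding at any moment.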
2 replies · 0 boosts · 743 views · Feb ’25
Using the Apple Neural Engine for MLTensor operations
Based on the documentation, it appears that MLTensor can be used to perform tensor operations on the ANE (Apple Neural Engine) by wrapping the tensor operations in withMLTensorComputePolicy with an MLComputePolicy initialized with MLComputeUnits.cpuAndNeuralEngine (it can also be initialized with MLComputeUnits.all to let the OS spread the load between the Neural Engine, GPU, and CPU). However, when using the Instruments app, it appears that the tensor operations never get executed on the Neural Engine. It would be helpful if someone could guide me on the correct way to ensure that the Neural Engine is used to perform the tensor operations (not as part of a Core ML model file). Based on this example, I've written some simple code to try it:

import Foundation
import CoreML

print("Starting...")

let semaphore = DispatchSemaphore(value: 0)

Task {
    await withMLTensorComputePolicy(.init(MLComputeUnits.cpuAndNeuralEngine)) {
        let v1 = MLTensor([1.0, 2.0, 3.0, 4.0])
        let v2 = MLTensor([5.0, 6.0, 7.0, 8.0])
        let v3 = v1.matmul(v2)
        await v3.shapedArray(of: Float.self) // is 70.0

        let m1 = MLTensor(shape: [2, 3], scalars: [1, 2, 3, 4, 5, 6], scalarType: Float.self)
        let m2 = MLTensor(shape: [3, 2], scalars: [7, 8, 9, 10, 11, 12], scalarType: Float.self)
        let m3 = m1.matmul(m2)
        let result = await m3.shapedArray(of: Float.self) // is [[58, 64], [139, 154]]

        // Supports broadcasting
        let m4 = MLTensor(randomNormal: [3, 1, 1, 4], scalarType: Float.self)
        let m5 = MLTensor(randomNormal: [4, 2], scalarType: Float.self)
        let m6 = m4.matmul(m5)

        print("Done")
        return result
    }
    semaphore.signal()
}

semaphore.wait()

Here's what I get in the Instruments app: notice how the Neural Engine line shows no usage. I've run this test on an M1 Max MacBook Pro.
2 replies · 4 boosts · 762 views · Mar ’25
Creating .mlmodel with Create ML Components
I have rewatched the WWDC22 session a few times, but I'm still not fully understanding how to get a .mlmodel file out of Create ML Components. The banana-ripeness example is cool, but what needs to be added to actually output a .mlmodel? Is there full sample code somewhere for this kind of modular project? The code is from https://developer.apple.com/videos/play/wwdc2022/10019

import CoreImage
import CreateMLComponents

struct ImageRegressor {
    static let trainingDataURL = URL(fileURLWithPath: "~/Desktop/bananas")
    static let parametersURL = URL(fileURLWithPath: "~/Desktop/parameters")

    static func train() async throws -> some Transformer<CIImage, Float> {
        let estimator = ImageFeaturePrint()
            .appending(LinearRegressor())

        // File name example: banana-5.jpg
        let data = try AnnotatedFiles(labeledByNamesAt: trainingDataURL, separator: "-", index: 1, type: .image)
            .mapFeatures(ImageReader.read)
            .mapAnnotations({ Float($0)! })

        let (training, validation) = data.randomSplit(by: 0.8)
        let transformer = try await estimator.fitted(to: training, validateOn: validation)
        try estimator.write(transformer, to: parametersURL)
        return transformer
    }
}

I have tried to run it in a macOS command-line app and in SwiftUI, but the most I got as output was a .pkg containing "pipeline.json, parameters, optimizer.json, optimizer".
3 replies · 0 boosts · 541 views · Mar ’25
missing CreateML frameworks
I have reinstalled everything, including the command line tools, but the CreateML frameworks fail to install. I need the framework so that I can train my auto-categorization model, which predicts a category based on descriptions. I need that framework because I want to use revision 4. Please suggest how I should proceed.
4 replies · 0 boosts · 739 views · Mar ’25
Making a model in MLLinearRegressor works with Sonoma, but on upgrading to 15.3.1 it no longer does "anything"
I was generating models using this code:

import Foundation
import CreateML
import TabularData
import CoreML
....

func makeTheModel(columntopredict: String, training: DataFrame, colstouse: [String], numberofmodels: Int) -> [MLLinearRegressor] {
    var returnmodels = [MLLinearRegressor]()
    var result = 0.0
    for i in 0...numberofmodels {
        let pms = MLLinearRegressor.ModelParameters(validation: .split(strategy: .automatic))
        do {
            let tm = try MLLinearRegressor(trainingData: training, targetColumn: columntopredict)
            returnmodels.append(tm)
        } catch let error as NSError {
            print("Error: \(error.localizedDescription)")
        }
    }
    return returnmodels
}

This worked absolutely fine on Sonoma, but after upgrading the OS to 15.3.1 it does absolutely nothing. I get no error messages; the code just pauses. If I look at CPU usage, as soon as it hits the line

let tm = try MLLinearRegressor(trainingData: training, targetColumn: columntopredict)

the CPU usage drops to 0%. What am I doing wrong? Is there a flag I need to set somewhere in Xcode? This is on an M1 MacBook Pro. Any help would be greatly appreciated.
2 replies · 1 boost · 455 views · Mar ’25
TensorFlow Metal 1.2.0 on M2 fails to converge on common toy models
I've been trying to get some basic models to work on an M2 with tensorflow-metal 1.2.0 and Keras 2.15 and 2.18, and they all fail to work as expected. I'm running models copy/pasted from common tutorials, like the Jason Brownlee ML Mastery object classification tutorial using CIFAR-10. When run on the GPU I can't get any reasonable results. Under Keras 2.15 the best validation accuracy ends up being around 10-15%. Under Keras 2.18, the validation goes off the rails around epoch 5, with wildly low accuracy and loss values reported as "nan":

Epoch 4/25
782/782: 19s 24ms/step - accuracy: 0.3450 - loss: 2.8925 - val_accuracy: 0.2992 - val_loss: 1.9869
Epoch 5/25
782/782: 19s 24ms/step - accuracy: 0.2553 - loss: nan - val_accuracy: 0.0000e+00 - val_loss: nan

Running the same code on the CPU with Keras 2.15, using

tf.config.experimental.set_visible_devices([], 'GPU')

yields a reasonable result, with validation accuracy around 75% as expected. Running the same code with Keras 2.15 on a Linux instance with just the CPU gives similar results. The tutorial can be found here: https://machinelearningmastery.com/object-recognition-convolutional-neural-networks-keras-deep-learning-library/

The only place I've deviated from the provided tutorial is using

sdg = tf.keras.optimizers.legacy.SGD(learning_rate=lrate, momentum=0.9, nesterov=False)

which I did at the advice of the warning:

WARNING:absl:At this time, the v2.11+ optimizer `tf.keras.optimizers.SGD` runs slowly on M1/M2 Macs, please use the legacy Keras optimizer instead, located at `tf.keras.optimizers.legacy.SGD`.

Is there something special that I need to do to make this work? I've followed the instructions here: https://developer.apple.com/metal/tensorflow-plugin/ and I've purged the venv a few times and started from scratch, but all with similarly terrible results.

Here are my platform details:
Chip: Apple M2
Memory: 16 GB
macOS: Sequoia 15.2
Python venv: 3.11
Jupyter Lab version: 4.3.3
TensorFlow versions: 2.15, 2.18
tensorflow-metal: 1.2.0

Thanks for any assistance or advice.
8 replies · 3 boosts · 881 views · Mar ’25
Xcode AI Coding Assistance Option(s)
Not finding a lot on the Swift Assist technology announced at WWDC 2024. Does anyone know the latest status? Also, I currently use OpenAI's macOS app and its 'Work With...' functionality to assist with Xcode development. This is okay and certainly saves copying code back and forth, but it seems like AI should be able to do a lot more to help with Xcode app development. I guess I'm looking at what people are doing with AI in Visual Studio, Cline, Cursor, and other IDEs and tools like those, and I feel a bit left out working in Xcode. Please let me know if there are AI tools or techniques out there you use to help with your Xcode projects. Thanks in advance!
6 replies · 0 boosts · 11k views · Mar ’25
Group AppIntents’ Searchable DynamicOptionsProvider in Sections
I’m trying to group my EntityPropertyQuery selection into sections as well as making it searchable. I know that EntityStringQuery is used to perform the text search via entities(matching string: String). That works well enough and results in this modal:

Though, when I’m using a DynamicOptionsProvider to section my EntityPropertyQuery, it doesn’t allow for searching anymore and simply opens the sectioned list in a menu like so:

How can I combine both? I’ve seen it in other apps, but can’t figure out why my code doesn’t allow me to section the results and make them searchable. Any ideas?

My code (simplified):

struct MyIntent: AppIntent {
    @Parameter(title: "Meter", optionsProvider: MyOptionsProvider())
    var meter: MyIntentEntity?
    // …
}

struct MyOptionsProvider: DynamicOptionsProvider {
    func results() async throws -> ItemCollection<MyIntentEntity> {
        // Get all data
        let allData = try IntentsDataHandler.shared.getEntities()

        // Create arrays for sections
        let fooEntities = allData.filter { $0.type == .foo }
        let barEntities = allData.filter { $0.type == .bar }

        return ItemCollection(sections: [
            ItemSection("Foo", items: fooEntities),
            ItemSection("Bar", items: barEntities)
        ])
    }
}

struct MeterIntentQuery: EntityStringQuery {
    // entities(for identifiers: [UUID]) and suggestedEntities() functions omitted

    func entities(matching string: String) async throws -> [MyIntentEntity] {
        // Fetch all data
        let allData = try IntentsDataHandler.shared.getEntities()

        // Filter data by string
        let matchingData = allData.filter { data in
            data.title.localizedCaseInsensitiveContains(string)
        }
        return matchingData
    }
}
0 replies · 2 boosts · 568 views · Mar ’25