Hello, I'm using the VideoToolbox super-resolution API on macOS 26: https://developer.apple.com/documentation/videotoolbox/vtsuperresolutionscalerconfiguration/downloadconfigurationmodel(completionhandler:)?language=objc. When using Swift it works fine, but when using Objective-C I get an error while downloading the model with downloadConfigurationModelWithCompletionHandler::
[Auto] MA-auto{_failedLockContent} | failure reported by server | error:[com.apple.MobileAssetError.AutoAsset:MissingReference(6111)]
[Auto] MA-auto{_failedLockContent} | failure reported by server | error:[com.apple.MobileAssetError.AutoAsset:UnderlyingError(6107)_1_com.apple.MobileAssetError.Download:47]
Download completion handler called with error: The operation couldn't be completed. (VTFrameProcessorErrorDomain error -19743.)
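For reference, here is roughly the Swift path that works for me. This is only a minimal sketch: `config` is assumed to be an already-created VTSuperResolutionScalerConfiguration, and the completion handler signature is written from memory.

import VideoToolbox

// Minimal sketch, not verified: `config` is an already-created
// VTSuperResolutionScalerConfiguration; the handler signature may differ slightly.
func downloadModel(for config: VTSuperResolutionScalerConfiguration) {
    config.downloadConfigurationModel { error in
        if let error {
            print("Download failed: \(error)")
        } else {
            print("Super-resolution model downloaded")
        }
    }
}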
Issue type: Bug
TensorFlow metal version: 1.1.1
TensorFlow version: 2.18
OS platform and distribution: macOS 15.2
Python version: 3.11.11
GPU model and memory: Apple M2 Max GPU 38-cores
Standalone code to reproduce the issue:
import tensorflow as tf
if __name__ == '__main__':
    gpus = tf.config.experimental.list_physical_devices('GPU')
    print(gpus)
Current behavior
The Apple silicon GPU with tensorflow-metal==1.1.0 and Python 3.11 works fine with tensorboard==2.17.0.
This is normal output:
/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/bin/python /Users/mspanchenko/VSCode/cryptoNN/ml/core_second_window/test_tensorflow_gpus.py
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
Process finished with exit code 0
But if I upgrade TensorFlow to 2.18, I get this error:
/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/bin/python /Users/mspanchenko/VSCode/cryptoNN/ml/core_second_window/test_tensorflow_gpus.py
Traceback (most recent call last):
File "/Users/mspanchenko/VSCode/cryptoNN/ml/core_second_window/test_tensorflow_gpus.py", line 1, in <module>
import tensorflow as tf
File "/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow/__init__.py", line 437, in <module>
_ll.load_library(_plugin_dir)
File "/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: __ZN3tsl8internal10LogMessageC1EPKcii
Referenced from: <D2EF42E3-3A7F-39DD-9982-FB6BCDC2853C> /Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow-plugins/libmetal_plugin.dylib
Expected in: <2814A58E-D752-317B-8040-131217E2F9AA> /Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so
Process finished with exit code 1
Bear with me, please, and make sure a highly skilled technical person reads and understands this.
I want to describe my vision for (AI/Algorithmically) Optimised Operating Systems. To explain it properly, I will describe the process to build it (pseudo).
Required Knowledge (no particular order): Processor Logic Circuits, LLM models, LLM tool usage, Python OO coding, Procedural vs OO, NLP fuzzy matching, benchmarking, canvas/artefacts/dynamic HTML interfaces, concepts of how AI models are vastly compressed and miniaturised forms of full data, Algorithmic vs AI.
First, take all OO Python code (as an example language) on GitHub (as an example source), then separate each function from each object into its own procedure (procedural logic) by making a logical procedural list of the actions needed to perform only that function, based on its entire dependency chain (i.e. all other objects it relies on). Relate all of the compiled functions using, for example, fuzzy matching on the name or AI-based functional profiling, to get multiple instances of each function.
Starting with the most used function, test each one against the others that perform the same task for bugs and completeness. Determine the fastest, most optimal version of that function (and every function). Add a single instance of each most optimal function to the centralised tool codebase, which will later be utilised by the language models. This ensures we rely only on the most optimised function for each and every use case — with every program using one shared instance of that function instead of compiling it separately.
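To make the selection step concrete, here is a minimal, purely illustrative sketch (in Swift; the candidate implementations and the timing method are assumptions) of benchmarking several implementations of the same function and keeping only the fastest one in the centralised tool codebase:

import Foundation

// Purely illustrative: time each candidate implementation of the "same"
// function on a shared workload and keep only the fastest.
func measure(_ f: ([Int]) -> Int, on input: [Int]) -> TimeInterval {
    let start = Date()
    _ = f(input)
    return Date().timeIntervalSince(start)
}

let candidates: [(name: String, impl: ([Int]) -> Int)] = [
    ("sumLoop", { xs in
        var s = 0
        for x in xs { s += x }
        return s
    }),
    ("sumReduce", { xs in xs.reduce(0, +) })
]

let workload = Array(1...1_000_000)
let timed = candidates.map { ($0.name, measure($0.impl, on: workload)) }
let best = timed.min { $0.1 < $1.1 }!
print("Keep \(best.0) as the single shared instance (\(best.1) s)")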
In this way, we optimise all existing (Python OO) code based on maximum availability of certain resources. Use the optimised code to teach the AI how to map each user task directly to optimised code (i.e. each user-facing function in unoptimised program code to its optimised version). Later, teach the codebase how to optimise these functions directly to the logic circuits on the processor — as an Operating System — using essentially the same process. Eventually, use the most optimised form for tasks via a dynamic canvas/artefacts UI interface generated by the Start Menu using the language model.
Allow custom interfaces to emerge dynamically from the available functions based on language understanding. Grow and optimise the algorithmic codebase until the entire operating system — with versioning — evolves based on the AI-generated user interfaces (compressing them into new versions of the tools codebase).
Please don’t give up until you understand this concept completely and nothing in the description is beyond your full comprehension. It’s a highly significant step forward.
Hello,
We have been encountering a persistent crash in our application, which is deployed exclusively on iPad devices. The crash occurs in the following code block:
let requestHandler = ImageRequestHandler(paddedImage)
var request = CoreMLRequest(model: model)
request.cropAndScaleAction = .scaleToFit
let results = try await requestHandler.perform(request)
The client using this code is wrapped inside an actor, following Swift concurrency principles.
The issue has been consistently reproduced across multiple iPadOS versions, including:
iPadOS 18.4.0
iPadOS 18.4.1
iPadOS 18.5.0
This is the crash log:
Crashed: com.apple.VN.detectorSyncTasksQueue.VNCoreMLTransformer
0 libobjc.A.dylib 0x7b98 objc_retain + 16
1 libobjc.A.dylib 0x7b98 objc_retain_x0 + 16
2 libobjc.A.dylib 0xbf18 objc_getProperty + 100
3 Vision 0x326300 -[VNCoreMLModel predictWithCVPixelBuffer:options:error:] + 148
4 Vision 0x3273b0 -[VNCoreMLTransformer processRegionOfInterest:croppedPixelBuffer:options:qosClass:warningRecorder:error:progressHandler:] + 748
5 Vision 0x2ccdcc __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_5 + 132
6 Vision 0x14600 VNExecuteBlock + 80
7 Vision 0x14580 __76+[VNDetector runSuccessReportingBlockSynchronously:detector:qosClass:error:]_block_invoke + 56
8 libdispatch.dylib 0x6c98 _dispatch_block_sync_invoke + 240
9 libdispatch.dylib 0x1b584 _dispatch_client_callout + 16
10 libdispatch.dylib 0x11728 _dispatch_lane_barrier_sync_invoke_and_complete + 56
11 libdispatch.dylib 0x7fac _dispatch_sync_block_with_privdata + 452
12 Vision 0x14110 -[VNControlledCapacityTasksQueue dispatchSyncByPreservingQueueCapacity:] + 60
13 Vision 0x13ffc +[VNDetector runSuccessReportingBlockSynchronously:detector:qosClass:error:] + 324
14 Vision 0x2ccc80 __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_4 + 336
15 Vision 0x14600 VNExecuteBlock + 80
16 Vision 0x2cc98c __119-[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke_3 + 256
17 libdispatch.dylib 0x1b584 _dispatch_client_callout + 16
18 libdispatch.dylib 0x6ab0 _dispatch_block_invoke_direct + 284
19 Vision 0x2cc454 -[VNDetector internalProcessUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:] + 632
20 Vision 0x2cd14c __111-[VNDetector processUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:]_block_invoke + 124
21 Vision 0x14600 VNExecuteBlock + 80
22 Vision 0x2ccfbc -[VNDetector processUsingQualityOfServiceClass:options:regionOfInterest:warningRecorder:error:progressHandler:] + 340
23 Vision 0x125410 __swift_memcpy112_8 + 4852
24 libswift_Concurrency.dylib 0x5c134 swift::runJobInEstablishedExecutorContext(swift::Job*) + 292
25 libswift_Concurrency.dylib 0x5d5c8 swift_job_runImpl(swift::Job*, swift::SerialExecutorRef) + 156
26 libdispatch.dylib 0x13db0 _dispatch_root_queue_drain + 364
27 libdispatch.dylib 0x1454c _dispatch_worker_thread2 + 156
28 libsystem_pthread.dylib 0x9d0 _pthread_wqthread + 232
29 libsystem_pthread.dylib 0xaac start_wqthread + 8
We found an issue similar to ours: https://developer.apple.com/forums/thread/770771.
But the crash logs are quite different, so we believe this warrants further investigation to better understand the root cause and potential mitigation strategies.
Please let us know if any additional information would help diagnose this issue.
Hey,
It would be great to have an equivalent of toolCallId for both toolCall and toolResult in the transcript. Otherwise, it is hard to connect tool calls with their respective responses when there are multiple parallel calls to the same tool.
Thanks!
Topic: Machine Learning & AI, SubTopic: Foundation Models
Somehow I'm not able to decrypt our ML models on my machine. It does not matter whether:
I clean the build / delete the build folder
It's a local build or a build downloaded from our build server
I log in as a different user
I reboot my system (15.4.1 (24E263))
I use a different network
I re-generate the encryption keys
I'm the only one on my team confronted with this issue. Using the encrypted models works fine for everyone else.
As soon as our application tries to load the bundled ML model, the following error is logged and returned:
Could not create persistent key blob for CD49E04F-1A42-4FBE-BFC1-2576B89EC233 : error=Error Domain=com.apple.CoreML Code=9 "Failed to generate key request for CD49E04F-1A42-4FBE-BFC1-2576B89EC233 with error: -42908"
Error code 9 points to a decryption issue, but offers no useful pointers and suggests that some sort of network request needs to be made in order to decrypt our models.
/*! Core ML throws/returns this error when the framework encounters an error in the model
decryption subsystem.
The typical cause for this error is in the key server configuration and the client application
cannot do much about it.
For example, a model loading method will throw/return the error when it uses incorrect model
decryption key.
*/
MLModelErrorModelDecryption API_AVAILABLE(macos(11.0), ios(14.0), watchos(7.0), tvos(14.0)) = 9,
I could not find a reference to error '-42908' anywhere.
ChatGPT just lied to me, as usual...
How can I resolve this, or diagnose it further?
Thanks.
No matter what, the LanguageModelSession always returns very lengthy / verbose responses. I set the maximumResponseTokens option to various small numbers, but it doesn't appear to have any effect. I've even used an instructions format that asks for responses between 3-8 words (roughly as sketched below), but it returns multiple paragraphs. Is there a way to manage LLM response length? Thanks.
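For reference, this is roughly my setup (a minimal sketch; the exact initializer and parameter labels are from memory and may differ between betas):

import FoundationModels

// Minimal sketch of what I'm describing: instructions asking for 3-8 word
// answers plus a small maximumResponseTokens value, yet replies are still
// multiple paragraphs. Labels are from memory and may differ.
func shortAnswer(to prompt: String) async throws -> String {
    let session = LanguageModelSession(
        instructions: "Answer in 3 to 8 words. Do not elaborate."
    )
    let options = GenerationOptions(maximumResponseTokens: 32)
    let response = try await session.respond(to: prompt, options: options)
    return response.content
}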
My sample app has been working with the following code:
func call(arguments: Arguments) async throws -> ToolOutput {
    var temp: Int
    switch arguments.city {
    case .singapore: temp = Int.random(in: 30..<40)
    case .china: temp = Int.random(in: 10..<30)
    }
    let content = GeneratedContent(temp)
    let output = ToolOutput(content)
    return output
}
However, in the 26 beta 5 SDK, ToolOutput is no longer available. Please advise what has changed.
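For context, this is the direction I'm experimenting with as a workaround. I'm assuming (not confirmed) that in beta 5 the call requirement can return a PromptRepresentable value such as GeneratedContent directly, in place of the removed ToolOutput wrapper:

// Assumption, not confirmed: `call` can now return GeneratedContent (or another
// PromptRepresentable value) directly instead of wrapping it in ToolOutput.
func call(arguments: Arguments) async throws -> GeneratedContent {
    var temp: Int
    switch arguments.city {
    case .singapore: temp = Int.random(in: 30..<40)
    case .china: temp = Int.random(in: 10..<30)
    }
    return GeneratedContent(temp)
}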
Hi,
I've been modifying the Camera sample app found here: https://developer.apple.com/tutorials/sample-apps/capturingphotos-camerapreview. In the preview-image processing, I am calling into the Vision APIs to either detect a person or an object, then using the segmentation mask to extract the person and composite them onto a different background with some other filters. I am using Core Image to filter the CIImages, and converting and displaying the result as a SwiftUI Image. When running on my iPhone, it works fine. When running on my iPhone with the debugger, it crashes within a few seconds. Attached is a screenshot. At the top is an EXC_BAD_ACCESS in libRPAC.dylib`std::__1::__hash_table<std::__1::__hash_value_type<long, qos_info_t>, std::__1::__unordered_map_hasher<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::hash, std::__1::equal_to, true>, std::__1::__unordered_map_equal<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::equal_to, std::__1::hash, true>, std::__1::allocator<std::__1::__hash_value_type<long, qos_info_t>>>::__emplace_unique_key_args<long, std::__1::piecewise_construct_t const&, std::__1::tuple<long const&>, std::__1::tuple<>>:
This was working fine a couple of days ago. I'm not sure why it's popping up now. Am I correct in interpreting this as an LLDB issue? How do I fix it?
Hello Apple Developer Community,
I'm investigating Core ML model loading behavior and noticed that even when the compiled model path remains unchanged after an app update, the first run still triggers an "uncached load" process. This seems to impact user experience with unnecessary delays.
Question: Does Core ML provide any public API to check whether a compiled model (from a specific .mlmodelc path) is already cached in the system?
If such API exists, we'd like to use it for pre-loading decision logic - only perform background pre-load when the model isn't cached.
Has anyone encountered similar scenarios or found official solutions? Any insights would be greatly appreciated!
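For reference, in the absence of a cache-check API this is the kind of background pre-load we're considering; it's only a sketch, and the model URL plus the timing heuristic are assumptions:

import CoreML

// Sketch only: we aren't aware of a public API that reports whether a compiled
// model is cached, so we simply warm it up in the background. A long first
// load time is treated as a rough signal that the load was uncached.
func preloadModel(at modelURL: URL) {
    Task.detached(priority: .background) {
        do {
            let start = Date()
            _ = try await MLModel.load(contentsOf: modelURL, configuration: MLModelConfiguration())
            print("Model pre-load took \(Date().timeIntervalSince(start)) s")
        } catch {
            print("Model pre-load failed: \(error)")
        }
    }
}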
I'm the creator of an app that helps users learn Arabic. Inside of the app users can save words, engage in lessons specific to certain grammar concepts, etc. I'm looking for a way for Siri to 'suggest' my app when the user asks to define any Arabic words. There are other questions that I would like Siri to suggest my app for, but I figure that's a good start. What framework am I looking for here? I think App Intents? I remember I played with it for a bit last year but didn't get far. Any suggestions would be great.
Would the new Foundation Models framework be any help here?
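For reference, this is roughly the shape of what I tried with App Intents last year; the type name and lookup logic are just placeholders:

import AppIntents

// Placeholder sketch of an App Intent exposing the app's "define a word"
// feature to the system; the name and lookup logic are hypothetical.
struct DefineArabicWordIntent: AppIntent {
    static var title: LocalizedStringResource = "Define Arabic Word"

    @Parameter(title: "Word")
    var word: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // Placeholder: look the word up in the app's saved-words store.
        let definition = "definition goes here"
        return .result(dialog: "\(word): \(definition)")
    }
}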
Topic: Machine Learning & AI, SubTopic: Apple Intelligence
Hello everyone, I have a visual convolutional model and a video that has been decoded into many frames. When I perform inference on each frame in a loop, the speed is a bit slow, so I started 4 threads, each running inference simultaneously. But I found that the overall speed is the same as serial inference, and every single forward pass is slower. I used the mactop tool to check the GPU utilization, and it was only around 20%. Is this normal? How can I accelerate it?
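For reference, one alternative I'm considering is batching frames into a single batch prediction instead of spawning threads; this is only a sketch, and the input feature name "image" and the already-loaded `model` are assumptions about my setup:

import CoreML
import CoreVideo

// Sketch: batch many frames into one predictions(fromBatch:options:) call,
// which often keeps the GPU/ANE busier than running parallel threads.
// The "image" input feature name and `model` are assumptions.
func predictBatch(model: MLModel, frames: [CVPixelBuffer]) throws -> [MLFeatureProvider] {
    let inputs = try frames.map { frame in
        try MLDictionaryFeatureProvider(dictionary: ["image": MLFeatureValue(pixelBuffer: frame)])
    }
    let batch = MLArrayBatchProvider(array: inputs)
    let results = try model.predictions(fromBatch: batch, options: MLPredictionOptions())
    return (0..<results.count).map { results.features(at: $0) }
}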
Hey guys, I've been having difficulties transferring my Xcode project to a Swift playground (.swiftpm) for the Swift Student Challenge. I keep getting these errors, and none of the views can find the model in scope:
"TrashDetector 1.mlmodel: No predominant language detected. Set COREML_CODEGEN_LANGUAGE to preferred language."
Unexpected duplicate tasks: Target 'TrashQuest' (project 'TrashQuest') has write command with output /Users/kmcph3/Library/Developer/Xcode/DerivedData/TrashQuest-glvzskunedgtakfrdmsxdoplondj/Build/Intermediates.noindex/TrashQuest.build/Debug-iphonesimulator/TrashQuest.build/0a4ef2429d66360920ddb4f16e65e233.sb
I've gone through multiple posts with these exact problems, but they all seem to be talking about ".playground" files due to the "Resources" folder (mind you, I did try exactly what they said). Is there anyone who can help???
(Quick side note: why does it need to be a .swiftpm file for the SSC??? Why can't we just send a zip of our Xcode project??)
Topic: Machine Learning & AI, SubTopic: Core ML
I couldn't find information about this in the documentation. Could someone clarify if this API is available and how to access it?
May I know the bundle identifier for Apple Intelligence?
Topic: Machine Learning & AI, SubTopic: Apple Intelligence
Is anything configurable for LanguageModelSession.Guardrails besides the default? I'm prototyping a camping app, and it's constantly slamming into guardrail errors when I use the new foundation model interface. Any subjects relating to fishing, survival, etc. won't generate.
For example the prompt "How can I kill deer ticks using a clothing treatment?" returns a generation error.
The results that I get are great when it works, but so far the local model sessions are extremely unreliable.
Topic: Machine Learning & AI, SubTopic: Foundation Models
Is there any way to stop GPU work running that is scheduled using metal?
Long shader calculations don't stop when the application is stopped in Xcode; they continue to take up GPU time and affect the display.
Why is this functionality not available, when Swift Tasks can be cancelled?
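The only workaround I'm aware of is to split long GPU work into many small command buffers and stop committing new ones once a flag is set; here is a rough sketch (already-committed buffers still run to completion, and the chunking is an assumption about how the workload can be divided):

import Metal

// Rough sketch: divide a long computation into small chunks, one command
// buffer each, and check a cancellation flag before committing the next chunk.
// Work that has already been committed still runs to completion.
final class CancellableGPUJob {
    private let queue: MTLCommandQueue
    private var isCancelled = false

    init?(device: MTLDevice) {
        guard let queue = device.makeCommandQueue() else { return nil }
        self.queue = queue
    }

    func cancel() { isCancelled = true }

    func run(chunks: Int, encodeChunk: (MTLComputeCommandEncoder, Int) -> Void) {
        for index in 0..<chunks {
            if isCancelled { return } // skip any work not yet committed
            guard let buffer = queue.makeCommandBuffer(),
                  let encoder = buffer.makeComputeCommandEncoder() else { return }
            encodeChunk(encoder, index) // caller encodes one small slice of the job
            encoder.endEncoding()
            buffer.commit()
        }
    }
}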
Topic: Machine Learning & AI, SubTopic: General
Hey everyone,
I'm Manish Mehta, field CTO at Centific. I recently read Apple's white paper, The Illusion of Thinking, and it got me thinking about the current state of AI reasoning. Who here has read it?
The paper highlights how LLMs often rely on pattern recognition rather than genuine understanding. When faced with complex tasks, their performance can degrade significantly.
To move beyond this problem, I think we need to explore approaches that combine deeper reasoning architectures for true cognitive capability with deep human partnership to guide AI toward better judgment and understanding.
The first part means fundamentally rewiring AI to reason. This involves advancing deeper architectures like World Models, which can build internal simulations to understand real-world scenarios, and neurosymbolic systems, which combine neural networks with symbolic reasoning for deeper self-verification.
Additionally, we need to look at deep human partnership and scalable oversight. An AI cannot learn certain things from data alone; it lacks real-world judgment. Among other things, deep domain-expert human partners are needed to instill this wisdom, validate the AI's entire reasoning process, build its ethical guardrails, and act as skilled adversaries to find hidden flaws before they can cause harm.
What do you all think? Is this focus on a deeper partnership between advanced AI reasoning and deep human judgment the right path forward?
Agree? Disagree?
Thanks
Topic: Machine Learning & AI, SubTopic: Foundation Models
import coremltools as ct
from coremltools.models.neural_network import quantization_utils

# load full precision model
model_fp32 = ct.models.MLModel(modelPath)

# quantize weights to 16-bit floats and save the reduced model
model_fp16 = quantization_utils.quantize_weights(model_fp32, nbits=16)
model_fp16.save("reduced-model.mlmodel")
I'm testing it with the model from one of Apple's sample projects (GameBoardDetector), and it works fine; it reduces the model size by half.
But there are several problems with my model (trained in the Create ML app using Full Network):
Quantizing to float 16 does not work (a new file gets created, but it is only about 0.1 MB smaller).
Quantizing to fewer than 16 bits causes errors, and no file gets created.
Here are the additional metadata and precision details of both models.
Working model's additional metadata and precision:
Mine's additional metadata and precision:
Based on the documentation, it appears that MLTensor can be used to perform tensor operations using the ANE (Apple Neural Engine) by wrapping the tensor operations with withMLTensorComputePolicy with a MLComputePolicy initialized with MLComputeUnits.cpuAndNeuralEngine (it can also be initialized with MLComputeUnits.all to let the OS spread the load between the Neural Engine, GPU and CPU).
However, when using the Instruments app, it appears that the tensor operations never get executed on the Neural Engine.
It would be helpful if someone could guide me on the correct way to ensure that the Neural Engine is used to perform the tensor operations (not as part of a Core ML model file).
Based on this example, I've created some simple code to try it:
import Foundation
import CoreML

print("Starting...")

let semaphore = DispatchSemaphore(value: 0)

Task {
    await withMLTensorComputePolicy(.init(MLComputeUnits.cpuAndNeuralEngine)) {
        let v1 = MLTensor([1.0, 2.0, 3.0, 4.0])
        let v2 = MLTensor([5.0, 6.0, 7.0, 8.0])
        let v3 = v1.matmul(v2)
        await v3.shapedArray(of: Float.self) // is 70.0

        let m1 = MLTensor(shape: [2, 3], scalars: [
            1, 2, 3,
            4, 5, 6
        ], scalarType: Float.self)
        let m2 = MLTensor(shape: [3, 2], scalars: [
            7, 8,
            9, 10,
            11, 12
        ], scalarType: Float.self)
        let m3 = m1.matmul(m2)
        let result = await m3.shapedArray(of: Float.self) // is [[58, 64], [139, 154]]

        // Supports broadcasting
        let m4 = MLTensor(randomNormal: [3, 1, 1, 4], scalarType: Float.self)
        let m5 = MLTensor(randomNormal: [4, 2], scalarType: Float.self)
        let m6 = m4.matmul(m5)

        print("Done")
        return result
    }
    semaphore.signal()
}
semaphore.wait()
Here's what I get on the Instruments app:
Notice how the Neural Engine line shows no usage.
I've run this test on an M1 Max MacBook Pro.