*I can't attach the files in this format, so if you reply by e-mail, I will send the attached files by e-mail.
Dear Apple AI Research Team,
My name is Gong Jiho (“Hem”), a content strategist based in Seoul, South Korea.
Over the past few months, I conducted a user-led AI experiment entirely within ChatGPT — no code, no backend tools, no plugins.
Through language alone, I created two contrasting agents (Uju and Zero) and guided them into a co-authored modular identity system using prompt-driven dialogue and reflection.
This system simulates persona fusion, memory rooting, and emotional-logical alignment — all via interface-level interaction.
I believe it resonates with Apple’s values in privacy-respecting personalization, emotional UX modeling, and on-device learning architecture.
Why I’m Reaching Out
I’d be honored to share this experiment with your team.
If there is any interest in discussing user-authored agent scaffolding, identity persistence, or affective alignment, I’d love to contribute — even informally.
⚠ A Note on Language
As a non-native English speaker, my expression may be imperfect — but my intent is genuine.
If anything is unclear, I’ll gladly clarify.
📎 Attached Files Summary
Filename → Description
Hem_MultiAI_Report_AppleAI_v20250501.pdf →
Main report tailored for Apple AI — narrative + structural view of emotional identity formation via prompt scaffolding
Hem_MasterPersonaProfile_v20250501.json →
Final merged identity schema authored by Uju and Zero
zero_sync_final.json / uju_sync_final.json →
Persona-level memory structures (logic / emotion)
1_0501.json ~ 3_0501.json →
Evolution logs of the agents over time
GirlfriendGPT_feedback_summary.txt →
Emotional interpretation by external GPT
hem_profile_for_AI_vFinal.json →
Original user anchor profile
Warm regards,
Gong Jiho (“Hem”)
Seoul, South Korea
I am an app designer, and I am curious about which specific ML or AI techniques Apple used to develop those features in the system.
As far as I know, Apple's hand-raise detection, destination recommendations in Maps, and exercise-type detection in Fitness all use ML.
Are there more specific application examples of ML or AI?
Does Apple have a document specifically introducing examples of specific applications of ML or AI technology in the system?
Topic:
Machine Learning & AI
SubTopic:
General
I'm sure someone has thought about this already, but let's have an ecosystem where Apple Intelligence uses your most capable (Apple) hardware first and the cloud service second.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I've created a "Transfer Learning BERT Embeddings" model with the default "Latin" language family and "Automatic" Language setting. This model performs exceptionally well against the test data set and functions as expected when I preview it in Create ML. However, when I add it to the Xcode project of the application to which I am deploying it, I am getting runtime errors that suggest it can't find the embedding resources:
Failed to locate assets for 'mul_Latn' - '5C45D94E-BAB4-4927-94B6-8B5745C46289' embedding model
Note, I am adding the model to the app project the same way that I added an earlier "Maximum Entropy" model. That model had no runtime issues. So it seems there is an issue getting hold of the embeddings at runtime.
For now, "runtime" means in the Simulator. I intend to deploy my application to iOS devices once GM 26 is released (the app also uses AFM).
I'm developing on Tahoe 26 beta, running on iOS 26 beta, using Xcode 26 beta.
Is this a known/expected issue? Are the embeddings expected to be a resource in the model? Is there a workaround?
I did try opening the model in Xcode and saving it as an mlpackage, then adding that to my app project, but that also didn't resolve the issue.
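One workaround I plan to try (a sketch only, under my assumption that the classifier's BERT embedding relies on the same 'mul_Latn' contextual-embedding asset): explicitly request the Latin-script embedding assets at app startup, before the classifier is used.
import NaturalLanguage

// Hypothetical workaround sketch: ask the system to download/compile the
// Latin-script contextual embedding assets before the classifier runs.
func prefetchLatinEmbeddingAssets() async {
    guard let embedding = NLContextualEmbedding(script: .latin) else { return }
    guard !embedding.hasAvailableAssets else { return } // assets already on device
    do {
        let result = try await embedding.requestAssets()
        print("Embedding asset request result: \(result)")
    } catch {
        print("Embedding asset request failed: \(error)")
    }
}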
While training a text classifier model with a few thousand samples completes in seconds, when using 100,000 or 1 million samples, CreateML's training time increases exponentially (to hours or days). During these hours/days, GPU usage is low and almost every CPU core is idle. When using the Swift APIs for model training, resource utilization does not increase. I'm using Xcode 16.2, macOS 15.2 on either an M2 Ultra 64 GB or an M3 Max 48 GB laptop (both using built-in SSD with ~500 GB free) running no other applications.
Is there a setting I've missed to allow training to take over more of my computing resources? Is this expected of CreateML (i.e., when looking to exploit a larger corpus, I should move to other tooling)? I'd love to speed up my iteration cycle time.
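For reference, this is roughly the shape of the Swift-API path I mean (a minimal sketch with placeholder file paths and column names; it uses the default algorithm, not necessarily the same configuration as the Create ML app):
import CreateML
import Foundation

// Minimal sketch: train a text classifier from a JSON table via the CreateML Swift API.
do {
    let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "/path/to/training.json"))
    let (trainingData, testingData) = data.randomSplit(by: 0.9, seed: 42)

    let classifier = try MLTextClassifier(trainingData: trainingData,
                                          textColumn: "text",
                                          labelColumn: "label")

    let metrics = classifier.evaluation(on: testingData, textColumn: "text", labelColumn: "label")
    print(metrics)

    try classifier.write(to: URL(fileURLWithPath: "/path/to/TextClassifier.mlmodel"))
} catch {
    print("Training failed: \(error)")
}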
Topic:
Machine Learning & AI
SubTopic:
Create ML
Foundation Models framework worked perfectly on macOS 26 Beta 2, but starting from Beta 3 and continuing through Beta 6 (latest), I get dyld symbol errors even
with the exact code from Apple's documentation.
Environment:
macOS 26.0 Beta 6 (25A5351b)
Xcode 26 Beta 6
M4 Max MacBook Pro
Apple Intelligence enabled and downloaded
Error Details:
dyld[Process]: Symbol not found:
_$s16FoundationModels20LanguageModelSessionC5model10guardrails5tools12instructionsAcA06SystemcD0C_AC10GuardrailsVSayAA4Tool_pGAA12InstructionsVSgtcfC
Referenced from: /path/to/app.debug.dylib
Expected in: /System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels
Code Used (Exact from Documentation):
import FoundationModels
// This worked on Beta 2, crashes on Beta 3+
let model = SystemLanguageModel.default
let session = LanguageModelSession(model: model)
let response = try await session.respond(to: "Hello")
What I've Verified:
FoundationModels.framework exists in /System/Library/Frameworks/
Framework is properly linked in Xcode project
Apple Intelligence is enabled and working
Same code works in older beta versions
Issue persists even with completely fresh Xcode projects
Analysis:
The dyld error suggests the LanguageModelSession(model:) constructor is missing. The symbol shows it's looking for a constructor with parameters
(model:guardrails:tools:instructions:), but the documentation still shows the simple (model:) constructor.
Questions:
Has the LanguageModelSession API changed since Beta 2?
Should we now use the constructor with guardrails/tools/instructions parameters?
Is this a known issue with recent betas?
Are there updated code samples for the current API?
Additional Context:
This affects both basic SystemLanguageModel usage AND custom adapter loading. The same dyld symbol errors occur when trying to create
SystemLanguageModel(adapter: adapter) as well.
Any guidance on the correct API usage for current betas would be greatly appreciated. The documentation appears to be out of sync with the actual framework
implementation.
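For completeness, this is the availability check I added while diagnosing the issue (a sketch based on my reading of the current documentation; it does not avoid the dyld error, it only confirms the on-device model itself is available):
import FoundationModels

// Sketch: confirm the system model is available before creating a session.
func makeSessionIfAvailable() -> LanguageModelSession? {
    let model = SystemLanguageModel.default
    if case .unavailable(let reason) = model.availability {
        print("Model unavailable: \(reason)")
        return nil
    }
    return LanguageModelSession(model: model)
}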
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
When @Generable is applied to a Swift struct declared within another struct, and that nested struct is used as the type of a property of another @Generable type, which in turn is defined as the output format of a Foundation Models session, the Foundation Model can get stuck in a loop trying to create an infinitely nested response until the context-window-exceeded error is triggered.
I have filed feedback FB19987191 with a demo project. Is this expected behavior?
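For illustration, the shape I'm describing is roughly this (a hypothetical reduction, not the exact code in the feedback):
import FoundationModels

@Generable
struct Chapter {
    // A nested type declared inside another struct...
    @Generable
    struct Section {
        var title: String
        var summary: String
    }

    // ...used as the type of a property of a @Generable type that is then
    // requested as the session's output format.
    var heading: String
    var firstSection: Section
}

// let response = try await session.respond(to: prompt, generating: Chapter.self)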
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hello,
I'm running a large language model (LLM) in Core ML that uses a key-value cache (KV-cache) to store past attention states. The model was converted from PyTorch using coremltools and deployed on-device with Swift. The KV-cache is exposed via MLState and is used across inference steps for efficient autoregressive generation.
During the prefill stage — where a prompt of multiple tokens is passed to the model in a single batch to initialize the KV-cache — I’ve noticed that some entries in the KV-cache are not updated after the inference. Specifically, the MLState returned by the model is identical to the input state (often empty or zero-initialized) for some tokens in the batch.
Here are a few details about the setup:
The issue only happens during the prefill stage (i.e., the first call over multiple tokens).
During decoding (single-token generation), the KV-cache updates normally.
The model is invoked using MLModel.prediction(from:using:options:) for each batch.
I’ve confirmed:
The prompt tokens are non-repetitive and not masked.
The model spec has MLState inputs/outputs correctly configured for KV-cache tensors.
Each token is processed in a loop with the correct positional encodings.
Questions:
Is there any known behavior in Core ML that could prevent MLState from updating during batched or prefill inference?
Could this be caused by internal optimizations such as lazy execution, static masking, or zero-value short-circuiting?
How can I confirm that each token in the batch is contributing to the KV-cache during prefill?
Any insights from the Core ML or LLM deployment community would be much appreciated.
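Regarding the last question, this is roughly how I imagined verifying the cache after prefill (a sketch; "k_cache" is a placeholder for whatever state name the converted model actually exposes):
import CoreML

// Sketch: run one prefill prediction and inspect a KV-cache state buffer afterwards.
func verifyPrefill(model: MLModel, promptFeatures: MLFeatureProvider) throws {
    let state = model.makeState()
    _ = try model.prediction(from: promptFeatures, using: state, options: MLPredictionOptions())

    state.withMultiArray(for: "k_cache") { cache in
        // Count non-zero entries as a rough signal that the batch wrote into the cache.
        var nonZero = 0
        for i in 0..<cache.count where cache[i].doubleValue != 0 { nonZero += 1 }
        print("k_cache non-zero entries after prefill: \(nonZero) / \(cache.count)")
    }
}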
Problem:
We trained a LoRA adapter for Apple's FoundationModels framework using their TAMM (Training Adapter for Model Modification)
toolkit v0.2.0 on macOS 26 beta 4. The adapter trains successfully but fails to load with: "Adapter is not compatible with the
current system base model."
TAMM 2.0 contains export/constants.py with: BASE_SIGNATURE = "9799725ff8e851184037110b422d891ad3b92ec1"
Findings:
Adapter Export Process:
In export_fmadapter.py
def write_metadata(...):
self_dict[MetadataKeys.BASE_SIGNATURE] = BASE_SIGNATURE # Hardcoded value
The Compatibility Check:
- When loading an adapter, Apple's system compares the adapter's baseModelSignature with the current system model
- If they don't match: compatibleAdapterNotFound error
- The error doesn't reveal the expected signature
Questions:
- How is BASE_SIGNATURE derived from the base model?
- Is it SHA-1 of base-model.pt or some other computation?
- Can we compute the correct signature for beta 4?
- Or do we need Apple to release TAMM v0.3.0 with updated signature?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Tags:
Core ML
Create ML
tensorflow-metal
Apple Intelligence
Access to the Vision Pro cameras is required for a research project. The project is on mixed reality software development for healthcare applications in dentistry.
Recently, I'm trying to deploy some third-party LLM to Apple devices.
The methodology is similar to https://github.com/Anemll/Anemll.
The biggest issue I'm having now is the runtime memory usage.
When there are multiple functions in a model (mlpackage or mlmodelc), the runtime memory usage for weights is somehow duplicated when I load all of them. Here's the detail:
I created my multifunction mlpackage following https://apple.github.io/coremltools/docs-guides/source/multifunction-models.html
I loaded each of the functions using the generated swift class:
let config = MLModelConfiguration()
config.computeUnits = MLComputeUnits.cpuAndNeuralEngine
config.functionName = "infer_512";
let ffn1_infer_512 = try! mimo_FFN_PF_lut4_chunk_01of02(configuration: config)
config.functionName = "infer_1024";
let ffn1_infer_1024 = try! mimo_FFN_PF_lut4_chunk_01of02(configuration: config)
config.functionName = "infer_2048";
let ffn1_infer_2048 = try! mimo_FFN_PF_lut4_chunk_01of02(configuration: config)
I observed that RAM usage increases linearly as I load each of the functions.
Using instruments, I see that there are multiple HWX files generated and loaded, each of which contains all the weight data.
My understanding of what's happening here:
The Core ML framework does some MIL->MIL preprocessing before further compilation, which includes separating the CPU workload from the ANE workload.
The ANE part of each function is moved into a separate MIL file and then compiled separately into its own HWX file.
The problem is that the weight data of these HWX files is duplicated. Since the weight data of an LLM is huge, this causes out-of-memory issues on mobile devices.
The improvement I'm hoping from Apple:
I hope we can try to merge the processed MIL files back into one before calling ANECCompile(), so that the weights can be merged. I don't have control over that in user space and I'm not sure if that is feasible. So I'm asking for help here.
Thanks.
Topic:
Machine Learning & AI
SubTopic:
Core ML
Hi,
Testing the latest tensorflow-metal plugin with TensorFlow 2.20 doesn't work.
Using Python:
Python 3.12.11 (main, Jun 3 2025, 15:41:47) [Clang 17.0.0 (clang-1700.0.13.3)] on darwin
A simple test shows an error:
import tensorflow as tf
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/__init__.py", line 438, in <module>
_ll.load_library(_plugin_dir)
File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
Reason: tried: '/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file)
tf.config.experimental.list_physical_devices('GPU')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'tf' is not defined
I worked around this error by copying _pywrap_tensorflow_internal.so to where it is searched for:
1) mkdir /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64
2) mkdir /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/
3) cp /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/
Then it fails with a symbol-not-found error:
Symbol not found: __ZN10tensorflow28_AttrValue_default_instance_E
in libmetal_plugin.dylib
full log:
with import tensorflow as tf
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/__init__.py", line 438, in <module>
_ll.load_library(_plugin_dir)
File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: __ZN10tensorflow28_AttrValue_default_instance_E
Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
Expected in: <2FF91C8B-0CB6-3E66-96B7-092FDF36772E> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so
I have integrated Apple’s Foundation Models framework into my iOS application. As is known, it is only supported starting from iOS 26 on compatible devices. To maintain compatibility with older iOS versions, I wrapped the API calls in an if #available(iOS 26, *) condition.
The application works normally on an iPad running iOS 18 and on a Mac running macOS 26. However, when running the same build on a MacBook Air M1 (macOS 15) through iPad app compatibility, the app crashes immediately upon launch.
The main issue is that I cannot debug directly on macOS 15, since the app can only be built on macOS 26 with Xcode beta. I then have to distribute it via TestFlight and download it on the MacBook Air M1 for testing. This makes identifying the detailed cause of the crash very difficult and time-consuming.
Nevertheless, I have confirmed that the crash is caused by the Foundation Model APIs.
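For reference, this is the direction I'm taking while investigating (a sketch; marking the framework as weakly linked is my assumption about what is required, since #available alone does not stop dyld from resolving a strongly linked framework at launch on older systems):
import FoundationModels // assumption: link the framework as Optional (weak) so older systems can launch

// Keep all FoundationModels symbols behind an @available type so nothing is
// referenced when the process runs on an OS without the framework.
@available(iOS 26.0, macOS 26.0, *)
enum FoundationModelHelper {
    static func greet() async throws -> String {
        let session = LanguageModelSession()
        let response = try await session.respond(to: "Hello")
        return response.content
    }
}

func greetIfSupported() async -> String? {
    guard #available(iOS 26.0, macOS 26.0, *) else { return nil }
    return try? await FoundationModelHelper.greet()
}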
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Hi all,
I'm capturing a photo using AVCapturePhotoOutput, and I've set:
let photoSettings = AVCapturePhotoSettings()
photoSettings.isDepthDataDeliveryEnabled = true
Then I create the handler like this:
let data = photo.fileDataRepresentation()
let handler = try ImageRequestHandler(data: data, orientation: .right)
Now I’m wondering:
If depth data delivery is enabled, is it actually included and used when I pass the Data to ImageRequestHandler?
Or do I need to explicitly pass the depth data using the other initializer?
let handler = try ImageRequestHandler(
    cvPixelBuffer: photo.pixelBuffer!,
    depthData: photo.depthData,
    orientation: .right
)
In short:
Does ImageRequestHandler(data:) make use of embedded depth info from AVCapturePhoto.fileDataRepresentation() — or is the pixel buffer + explicit depth data required?
Thanks for any clarification!
When calling NLTagger.requestAssets with some languages, it hangs indefinitely both in the simulator and a device. This happens consistently for some languages like greek. An example call is NLTagger.requestAssets(for: .greek, tagScheme: .lemma). Other languages like french return immediately. I captured some logs from Console and found what looks like the repeated attempts to download the asset. I would expect the call to eventually terminate, either loading the asset or failing with an error.
On macOS Tahoe 26.0 and iOS 26.0 (23A5287g) on a device (not the Simulator), with Xcode 26.0 beta 3 (17A5276g).
I followed the "Testing your asset packs locally" tutorial. To start the test server I use this command line:
xcrun ba-serve --host 192.168.0.109 test.aar
The terminal displays:
Loading asset packs…
Loading the asset pack at “test.aar”…
Listening on port 63125…
Choose an identity in the panel to continue.
Listening on port 63125…
When running the project, Xcode reports an error: "Download failed: Could not connect to the server." When I visit https://192.168.0.109:63125 in Safari on the iPhone, the page displays "Hello, world!"
There are too few error messages in both of the above issues, so I have no idea what the specific causes are. I hope someone can offer some guidance. Best Regards.
{
    "assetPackID": "testVideoAssetPack",
    "downloadPolicy": {
        "prefetch": {
            "installationEventTypes": ["firstInstallation", "subsequentUpdate"]
        }
    },
    "fileSelectors": [
        {
            "file": "video/test.mp4"
        }
    ],
    "platforms": [
        "iOS"
    ]
}
this is my Manifest.json
Environment
macOS 26
Xcode Version 26.0 beta 7 (17A5305k)
simulator: iPhone 16 pro
iOS: iOS 26
Problem
NLContextualEmbedding.load() fails with the following error
In simulator
Failed to load embedding from MIL representation: filesystem error: in create_directories: Permission denied ["/var/db/com.apple.naturallanguaged/com.apple.e5rt.e5bundlecache"]
filesystem error: in create_directories: Permission denied ["/var/db/com.apple.naturallanguaged/com.apple.e5rt.e5bundlecache"]
Failed to load embedding model 'mul_Latn' - '5C45D94E-BAB4-4927-94B6-8B5745C46289'
assetRequestFailed(Optional(Error Domain=NLNaturalLanguageErrorDomain Code=7 "Embedding model requires compilation" UserInfo={NSLocalizedDescription=Embedding model requires compilation}))
in #Playground
I'm new to this embedding model and not sure whether this is caused by my code or my environment.
Code snippet
import Foundation
import NaturalLanguage
import Playgrounds

#Playground {
    // Prefer initializing by script for broader coverage; returns NLContextualEmbedding?
    guard let embeddingModel = NLContextualEmbedding(script: .latin) else {
        print("Failed to create NLContextualEmbedding")
        return
    }
    print(embeddingModel.hasAvailableAssets)
    do {
        try embeddingModel.load()
        print("Model loaded")
    } catch {
        print("Failed to load model: \(error)")
    }
}
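One thing I still want to rule out (a sketch only, not verified to fix it): explicitly requesting the assets before load(), in case the "Embedding model requires compilation" error simply means the assets have not been compiled yet in this environment.
import NaturalLanguage

// Sketch: request (and, if needed, compile) the assets before calling load().
func prepareAndLoad(_ embedding: NLContextualEmbedding) async {
    do {
        let result = try await embedding.requestAssets() // may download/compile the assets
        print("Asset request result: \(result)")
        try embedding.load()
        print("Model loaded")
    } catch {
        print("Prepare/load failed: \(error)")
    }
}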
Hi,
I'm testing DockKit with a very simple setup:
I use VNDetectFaceRectanglesRequest to detect a face and then call dockAccessory.track(...) using the detected bounding box.
The stand is correctly docked (state == .docked) and dockAccessory is valid.
I'm calling .track(...) with a single observation and valid CameraInformation (including size, device, orientation, etc.). No errors are thrown.
To monitor this, I added a logging utility – track(...) is being called 10–30 times per second, as recommended in the documentation.
However: the stand does not move at all.
There is no visible reaction to the tracking calls.
Is there anything I'm missing or doing wrong?
Is VNDetectFaceRectanglesRequest supported for DockKit tracking, or are there hidden requirements?
Would really appreciate any help or pointers – thanks!
That's my complete code:
extension VideoFeedViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
guard let frame = CMSampleBufferGetImageBuffer(sampleBuffer) else {
return
}
detectFace(image: frame)
func detectFace(image: CVPixelBuffer) {
let faceDetectionRequest = VNDetectFaceRectanglesRequest() { vnRequest, error in
guard let results = vnRequest.results as? [VNFaceObservation] else {
return
}
guard let observation = results.first else {
return
}
let boundingBoxHeight = observation.boundingBox.size.height * 100
#if canImport(DockKit)
if let dockAccessory = self.dockAccessory {
Task {
try? await trackRider(
observation.boundingBox,
dockAccessory,
frame,
sampleBuffer
)
}
}
#endif
}
let imageResultHandler = VNImageRequestHandler(cvPixelBuffer: image, orientation: .up)
try? imageResultHandler.perform([faceDetectionRequest])
func combineBoundingBoxes(_ box1: CGRect, _ box2: CGRect) -> CGRect {
let minX = min(box1.minX, box2.minX)
let minY = min(box1.minY, box2.minY)
let maxX = max(box1.maxX, box2.maxX)
let maxY = max(box1.maxY, box2.maxY)
let combinedWidth = maxX - minX
let combinedHeight = maxY - minY
return CGRect(x: minX, y: minY, width: combinedWidth, height: combinedHeight)
}
#if canImport(DockKit)
func trackObservation(_ boundingBox: CGRect, _ dockAccessory: DockAccessory, _ pixelBuffer: CVPixelBuffer, _ cmSampelBuffer: CMSampleBuffer) throws {
// Count the call
TrackMonitor.shared.trackCalled()
let invertedBoundingBox = CGRect(
x: boundingBox.origin.x,
y: 1.0 - boundingBox.origin.y - boundingBox.height,
width: boundingBox.width,
height: boundingBox.height
)
guard let device = captureDevice else {
fatalError("Camera not available")
}
let size = CGSize(width: Double(CVPixelBufferGetWidth(pixelBuffer)),
height: Double(CVPixelBufferGetHeight(pixelBuffer)))
var cameraIntrinsics: matrix_float3x3? = nil
if let cameraIntrinsicsUnwrapped = CMGetAttachment(
sampleBuffer,
key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
attachmentModeOut: nil
) as? Data {
cameraIntrinsics = cameraIntrinsicsUnwrapped.withUnsafeBytes { $0.load(as: matrix_float3x3.self) }
}
Task {
let orientation = getCameraOrientation()
let cameraInfo = DockAccessory.CameraInformation(
captureDevice: device.deviceType,
cameraPosition: device.position,
orientation: orientation,
cameraIntrinsics: cameraIntrinsics,
referenceDimensions: size
)
let observation = DockAccessory.Observation(
identifier: 0,
type: .object,
rect: invertedBoundingBox
)
let observations = [observation]
guard let image = CMSampleBufferGetImageBuffer(sampleBuffer) else {
print("no image")
return
}
do {
try await dockAccessory.track(observations, cameraInformation: cameraInfo)
} catch {
print(error)
}
}
}
#endif
func clearDrawings() {
boundingBoxLayer?.removeFromSuperlayer()
boundingBoxSizeLayer?.removeFromSuperlayer()
}
}
}
}
@MainActor
private func getCameraOrientation() -> DockAccessory.CameraOrientation {
switch UIDevice.current.orientation {
case .portrait:
return .portrait
case .portraitUpsideDown:
return .portraitUpsideDown
case .landscapeRight:
return .landscapeRight
case .landscapeLeft:
return .landscapeLeft
case .faceDown:
return .faceDown
case .faceUp:
return .faceUp
default:
return .corrected
}
}
Hi there,
I have a custom keypoint-detection model and want to use it via Vision's CoreMLRequest API. There are a couple of complications for input and output:
For input: my model expects a 512x512 image, which is resized and padded from a 1920x1080 frame. I use the .scaleToFit option, but can I also specify the color used for padding?
For output:
My model outputs a CoreMLFeatureValueObservation. Can I have it output in a format Vision recognizes, such as joints/keypoints?
If my model can output in a format Vision recognizes, would Vision take care of restoring the coordinates back to the original frame (undoing the padding)? If not, how do I restore them from the .scaleToFit option? (See the sketch below.)
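In case it helps frame the question, here is the inverse mapping I currently apply by hand (a minimal sketch under the assumption of aspect-fit scaling with symmetric padding; the helper name and default sizes are mine):
import CoreGraphics

// Hypothetical helper (not a Vision API): map a keypoint expressed in the model's
// 512x512 input space back to the original frame, assuming aspect-fit resizing
// with centered padding.
func unpadKeypoint(_ p: CGPoint,
                   modelSide: CGFloat = 512,
                   original: CGSize = CGSize(width: 1920, height: 1080)) -> CGPoint {
    let scale = min(modelSide / original.width, modelSide / original.height)
    let scaledSize = CGSize(width: original.width * scale, height: original.height * scale)
    let padX = (modelSide - scaledSize.width) / 2
    let padY = (modelSide - scaledSize.height) / 2
    return CGPoint(x: (p.x - padX) / scale,
                   y: (p.y - padY) / scale)
}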
Best,
Is the face and body detection service in the Vision framework a local model or a cloud model? Is there a performance report?
https://developer.apple.com/documentation/vision