Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under Machine Learning & AI topic

Post · Replies · Boosts · Views · Activity

Various On-Device Frameworks API & ChatGPT
Posting a follow-up question after the WWDC 2025 Machine Learning AI & Frameworks Group Lab on June 12. Regarding the on-device API of any of the AI frameworks (Foundation Models, Vision framework, etc.), is there a response condition or path where the API outsources its input to ChatGPT if the user has allowed this, like Siri does? Ignore this if it's a no: is this handled behind the scenes or by the developer?
0
0
312
Jun ’25
How to test whether Visual Intelligence is available on the device?
I'm adding Visual Intelligence support to my app, and now want to add a Tip using TipKit to guide users to this feature from within my app. I want to add a Rule to my Tip that will only show this Tip on devices where Visual Intelligence is supported (e.g., not an iPhone 14 Pro Max). What is the best way for me to determine availability to set this TipKit rule? Here's the documentation I'm following for Visual Intelligence: https://developer.apple.com/documentation/visualintelligence/integrating-your-app-with-visual-intelligence
0
0
736
Sep ’25
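For the TipKit side of that question, here is a minimal sketch of gating a Tip on a flag the app sets itself. The isVisualIntelligenceSupported parameter is hypothetical and would need to be driven by whatever availability check turns out to be correct.

import TipKit

struct VisualIntelligenceTip: Tip {
    // Hypothetical flag: set this at launch from your own device/OS capability check.
    @Parameter
    static var isVisualIntelligenceSupported: Bool = false

    var title: Text { Text("Try Visual Intelligence") }
    var message: Text? { Text("Point the camera at something to find it in the app.") }

    // The Tip is only eligible to display when the flag is true.
    var rules: [Rule] {
        #Rule(Self.$isVisualIntelligenceSupported) { $0 == true }
    }
}

// Somewhere at startup, after determining support:
// VisualIntelligenceTip.isVisualIntelligenceSupported = true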
Programmatic image creation using ImageCreator
Hello, Could you please provide details on the maximum string length of the prompt and the title when using ImageCreator and the method extracted(from:title:)? static func extracted( from text: String, title: String? = nil ) -> ImagePlaygroundConcept Any additional details or an example of a prompt and title would help. Additionally, are ImagePlaygroundStyle.animation, ImagePlaygroundStyle.illustration and ImagePlaygroundStyle.sketch all available when using extracted(from:title:)? I am trying to generate images programmatically and would appreciate your guidance. Thank you.
0
0
301
3d
AttributedString in App Intents
In this WWDC25 session, it is explicitly mentioned that apps should support AttributedString for text parameters to their App Intents. However, I have not gotten this to work. Whenever I pass rich text (either generated by the new "Use Model" intent or generated manually, for example using "Make Rich Text from Markdown"), my Intent gets an AttributedString with the correct characters but with all attributes stripped (so in effect just plain text). struct TestIntent: AppIntent { static var title = LocalizedStringResource(stringLiteral: "Test Intent") static var description = IntentDescription("Tests Attributed Strings in Intent Parameters.") @Parameter var text: AttributedString func perform() async throws -> some IntentResult & ReturnsValue<AttributedString> { return .result(value: text) } } Is there anything else I am missing?
0
0
227
Jul ’25
reinforcement learning from Apple?
I don't know if these forums are any good for rumors or plans, but does anybody know whether or not Apple plans to release a library for training reinforcement learning? It would be handy, implementing games in Swift, for example, to be able to train the computer players on the same code.
0
0
385
2w
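Absent such a library, here is a minimal sketch of what tabular Q-learning looks like in plain Swift — no Apple framework behind it, and the QTable type and its tuning constants are purely illustrative.

struct QTable {
    var values: [Int: [Double]] = [:]   // state -> estimated value of each action
    let actionCount: Int
    let alpha = 0.1                     // learning rate
    let gamma = 0.95                    // discount factor

    // One Q-learning update after observing (state, action, reward, nextState).
    mutating func update(state: Int, action: Int, reward: Double, nextState: Int) {
        var q = values[state, default: Array(repeating: 0, count: actionCount)]
        let bestNext = (values[nextState] ?? []).max() ?? 0
        q[action] += alpha * (reward + gamma * bestNext - q[action])
        values[state] = q
    }

    // Greedy policy: the computer player picks the best-known action for a state.
    func bestAction(for state: Int) -> Int {
        let q = values[state, default: Array(repeating: 0, count: actionCount)]
        return q.indices.max(by: { q[$0] < q[$1] }) ?? 0
    }
}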
Qwen3 VL CoreML
Looking for help with, or to help with, the Vibe Coders edition of the Core ML editor, due to the pending documentation enhancement. Also looking for more information on how to use the .mlkey, and whether or not my model is supposed to say iOS 18 when I am planning to use it on a Mac. Apple Intelligence seems to think Core ML is for iOS, but are the capabilities extended when running on the NPU on a MacBook? How do I use this graph? Coming in hot, sorry. By the way, there are hundreds of feedback and crash reports sent in from me for additional info. I attached an image that might help with updating tags.
1
0
234
2w
Powermetrics GPU power vs system DC power discrepancy on M4 Max
While analyzing system power on an M4 Max under GPU-heavy compute workloads, I noticed that the GPU power reported by powermetrics does not come anywhere close to total system DC power reported by the SMC counter PDTR (as used by utilities like mactop). For example, in a heavy GPU workload, powermetrics would report a 65W idle-load delta on the GPU, but at the same time system DC power would rise by 179W, leaving 114W or nearly 2/3 of total system DC power on a Mac Studio M4 Max unexplained. From measurements, the difference appears to correlate with the amount of on-chip data movement (for example, varying bytes-per-FLOP in the workload changes the observed gap). Using SMC and IOReport, I was able to reverse engineer an energy model for the GPU that explains almost all of the energy flow with less than 2% error on the workload I studied. The result is a simple two-term energy roofline model: P_GPU (GPU_combined term in the plot) ≈ a * bytes + b * FLOPs with: ~5 pJ/byte for SRAM movement ~2.7 pJ/FLOP for compute. Has anyone observed similar behavior, or is there guidance on how GPU power reported by IOReport/powermetrics should be interpreted relative to total system power? In particular, I’m interested in whether certain classes of GPU activity may not be attributed to the GPU component in IOReport. Full details with the methodology and results are available here: https://youtu.be/HKxIGgyeISM
0
0
49
3d
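A quick worked example of the two-term model above, using the post's fitted coefficients (which are estimates for one M4 Max workload, not general constants) and hypothetical workload numbers:

// P_GPU ≈ 5 pJ/byte * bytes/s + 2.7 pJ/FLOP * FLOP/s
func estimatedGPUPowerWatts(bytesPerSecond: Double, flopsPerSecond: Double) -> Double {
    let picojoulesPerByte = 5.0
    let picojoulesPerFLOP = 2.7
    return (picojoulesPerByte * bytesPerSecond + picojoulesPerFLOP * flopsPerSecond) * 1e-12
}

// Example: 2 TB/s of on-chip traffic and 10 TFLOP/s of compute
// ≈ 5 pJ/B * 2e12 B/s + 2.7 pJ/FLOP * 1e13 FLOP/s = 10 W + 27 W = 37 W
let watts = estimatedGPUPowerWatts(bytesPerSecond: 2e12, flopsPerSecond: 1e13)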
How can I change the output dimensions of a CoreML model in Xcode when the outputs come from a NonMaximumSuppression layer?
After exporting a custom model with nms=True, in Xcode the outputs show as: confidence: MultiArray (0 × 5) coordinates: MultiArray (0 × 4) I want to set fixed shapes (e.g., 100 × 5, 100 × 4), but Xcode does not allow editing—the shape fields are locked. The model graph shows both outputs come directly from a NonMaximumSuppression layer. Is it possible to set fixed output dimensions for NMS outputs in Core ML?
2
0
217
3w
Parallel/Stream processing of Apple Intelligence
I have built a macOS machine intelligence application that uses Apple Intelligence. Part of the application preprocesses text. For longer text content I have implemented chunking to get around the token limit. However, application performance is now limited by the fact that Apple Intelligence operates sequentially, and this has a large impact on overall performance. Is there any approach to operate Apple Intelligence in a parallel mode, or even through a streaming interface? As Apple Intelligence has Private Cloud Compute, I was hoping to be able to send multiple chunks in parallel, as that would significantly improve performance. Any suggestions would be welcome. This could also be considered a request for a future enhancement.
2
0
188
3w
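One thing worth trying in the meantime is removing the serialization in the calling code. Below is a sketch that fans chunks out across independent sessions with a task group, assuming the Foundation Models API (LanguageModelSession, respond(to:)). Whether separate sessions actually execute in parallel on-device is exactly the open question in the post, so treat this as an experiment rather than a guaranteed speedup.

import FoundationModels

func summarizeChunks(_ chunks: [String]) async throws -> [String] {
    try await withThrowingTaskGroup(of: (Int, String).self, returning: [String].self) { group in
        for (index, chunk) in chunks.enumerated() {
            group.addTask {
                let session = LanguageModelSession()   // one independent session per chunk
                let response = try await session.respond(to: "Summarize: \(chunk)")
                return (index, response.content)
            }
        }
        // Collect results and restore the original chunk order.
        var results = [String?](repeating: nil, count: chunks.count)
        for try await (index, summary) in group {
            results[index] = summary
        }
        return results.compactMap { $0 }
    }
}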
VNDetectTextRectanglesRequest not detecting text rectangles (includes image)
Hi everyone, I'm trying to use VNDetectTextRectanglesRequest to detect text rectangles in an image. Here's my current code: guard let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil) else { return } let textDetectionRequest = VNDetectTextRectanglesRequest { request, error in if let error = error { print("Text detection error: \(error)") return } guard let observations = request.results as? [VNTextObservation] else { print("No text rectangles detected.") return } print("Detected \(observations.count) text rectangles.") for observation in observations { print(observation.boundingBox) } } textDetectionRequest.revision = VNDetectTextRectanglesRequestRevision1 textDetectionRequest.reportCharacterBoxes = true let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:]) do { try handler.perform([textDetectionRequest]) } catch { print("Vision request error: \(error)") } The request completes without error, but no text rectangles are detected — the observations array is empty (count = 0). Here's a sample image I'm testing with: I expected VNTextObservation results, but I'm not getting any. Is there something I'm missing in how this API works? Or could it be a limitation of this request or revision? Thanks for any help!
2
0
162
May ’25
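For comparison on the same image, here is a sketch of the equivalent pipeline with VNRecognizeTextRequest, which on recent releases tends to be the better-maintained text API than the older rectangle-detection request:

import Vision

func recognizeText(in cgImage: CGImage) throws -> [String] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate
    request.usesLanguageCorrection = true

    let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
    try handler.perform([request])

    // Each observation carries ranked candidates; take the top string per line.
    let observations = request.results ?? []
    return observations.compactMap { $0.topCandidates(1).first?.string }
}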
Apple's AI development language is not compatible
We are developing Apple Intelligence features for overseas markets and adapting them for iPhone 17 and later models. When the system language and Siri language do not match—such as the system being in English while Siri is in Chinese—Apple Intelligence may be unusable. So I would like to ask: how can this issue be resolved, and are there other reasons that might cause it to be unusable within the app?
2
0
1.2k
Jan ’26
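For the "other reasons it might be unusable" part, here is a sketch of checking model availability from inside the app, assuming the Foundation Models availability API; the exact unavailable reasons reported should be verified against the SDK you build with.

import FoundationModels

func checkAppleIntelligenceAvailability() {
    switch SystemLanguageModel.default.availability {
    case .available:
        // The on-device model can be used; enable the feature in the UI.
        break
    case .unavailable(let reason):
        // Reasons include an ineligible device, Apple Intelligence being turned off
        // in Settings, or model assets not yet downloaded; log it to see which applies.
        print("Apple Intelligence unavailable: \(reason)")
    }
}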
Apple OCR framework seems to be holding on to allocations every time it is called.
Environment: macOS 26.2 (Tahoe) Xcode 16.3 Apple Silicon (M4) Sandboxed Mac App Store app Description: Repeated use of VNRecognizeTextRequest causes permanent memory growth in the host process. The physical footprint increases by approximately 3-15 MB per OCR call and never returns to baseline, even after all references to the request, handler, observations, and image are released. ` private func selectAndProcessImage() { let panel = NSOpenPanel() panel.allowedContentTypes = [.image] panel.allowsMultipleSelection = false panel.canChooseDirectories = false panel.message = "Select an image for OCR processing" guard panel.runModal() == .OK, let url = panel.url else { return } selectedImageURL = url isProcessing = true recognizedText = "Processing..." // Run OCR on a background thread to keep UI responsive let workItem = DispatchWorkItem { let result = performOCR(on: url) DispatchQueue.main.async { recognizedText = result isProcessing = false } } DispatchQueue.global(qos: .userInitiated).async(execute: workItem) } private func performOCR(on url: URL) -> String { // Wrap EVERYTHING in autoreleasepool so all ObjC objects are drained immediately let resultText: String = autoreleasepool { // Load image and convert to CVPixelBuffer for explicit memory control guard let imageData = try? Data(contentsOf: url) else { return "Error: Could not read image file." } guard let nsImage = NSImage(data: imageData) else { return "Error: Could not create image from file data." } guard let cgImage = nsImage.cgImage(forProposedRect: nil, context: nil, hints: nil) else { return "Error: Could not create CGImage." } let width = cgImage.width let height = cgImage.height // Create a CVPixelBuffer from the CGImage var pixelBuffer: CVPixelBuffer? let attrs: [String: Any] = [ kCVPixelBufferCGImageCompatibilityKey as String: true, kCVPixelBufferCGBitmapContextCompatibilityKey as String: true ] let status = CVPixelBufferCreate( kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB, attrs as CFDictionary, &pixelBuffer ) guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return "Error: Could not create CVPixelBuffer (status: \(status))." } // Draw the CGImage into the pixel buffer CVPixelBufferLockBaseAddress(buffer, []) guard let context = CGContext( data: CVPixelBufferGetBaseAddress(buffer), width: width, height: height, bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(buffer), space: CGColorSpaceCreateDeviceRGB(), bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue ) else { CVPixelBufferUnlockBaseAddress(buffer, []) return "Error: Could not create CGContext for pixel buffer." } context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height)) CVPixelBufferUnlockBaseAddress(buffer, []) // Run OCR let requestHandler = VNImageRequestHandler(cvPixelBuffer: buffer, options: [:]) let request = VNRecognizeTextRequest() request.recognitionLevel = .accurate request.usesLanguageCorrection = true do { try requestHandler.perform([request]) } catch { return "Error during OCR: \(error.localizedDescription)" } guard let observations = request.results, !observations.isEmpty else { return "No text found in image." } let lines = observations.compactMap { observation in observation.topCandidates(1).first?.string } // Explicitly nil out the pixel buffer before the pool drains pixelBuffer = nil return lines.joined(separator: "\n") } // Everything — Data, NSImage, CGImage, CVPixelBuffer, VN objects — released here return resultText } `
0
0
153
Feb ’26
Creating powerful, efficient, and maintainable applications.
Recursive and Self-Referential Data Structures Combining recursive and self-referential data structures with frameworks like Accelerate, SwiftMacros, and utilizing SwiftUI hooks can offer significant benefits in terms of performance, maintainability, and expressiveness. Here is how Apple Intelligence breaks it down. Benefits: Natural Representation of Complex Data: Recursive structures, such as trees and graphs, are ideal for representing hierarchical or interconnected data, like file systems, social networks, and DOM trees. Simplified Algorithms: Many algorithms, such as traversals, sorting, and searching, are more straightforward and elegant when implemented using recursion. Dynamic Memory Management: Self-referential structures can dynamically grow and shrink, making them suitable for applications with unpredictable data sizes. Challenges: Performance Overhead: Recursive algorithms can lead to stack overflow if not properly optimized (e.g., using tail recursion). Self-referential structures can introduce memory management challenges, such as retain cycles. Accelerate Framework Benefits: High-Performance Computation: Accelerate provides optimized libraries for numerical and scientific computing, including linear algebra, FFT, and image processing. It can significantly speed up computations, especially for large datasets, by leveraging multi-core processors and GPU acceleration. Parallel Processing: Accelerate automatically parallelizes operations, making it easier to take advantage of modern hardware capabilities. Integration with Recursive Data: Matrix and Vector Operations: Use Accelerate for operations on matrices and vectors, which are common in recursive algorithms like those used in machine learning and physics simulations. FFT and Convolutions: Accelerate's FFT functions can be used in recursive algorithms for signal processing and image analysis. SwiftMacros Benefits: Code Generation and Transformation: SwiftMacros allow you to generate and transform code at compile time, enabling the creation of DSLs, boilerplate reduction, and optimization. Improved Compile-Time Checks: Macros can perform complex compile-time checks, ensuring code correctness and reducing runtime errors. Integration with Recursive Data: DSL for Data Structures: Create a DSL using SwiftMacros to define recursive data structures concisely and safely. Optimization: Use macros to generate optimized code for recursive algorithms, such as memoization or iterative transformations. SwiftUI Hooks Benefits: State Management: Hooks like @State, @Binding, and @Effect simplify state management in SwiftUI, making it easier to handle dynamic data. Side Effects: @Effect allows you to perform side effects in a declarative manner, integrating seamlessly with asynchronous operations. Reusable Logic: Custom hooks enable the reuse of stateful logic across multiple views, promoting code maintainability. Integration with Recursive Data: Dynamic Data Binding: Use SwiftUI's data binding to manage the state of recursive data structures, ensuring that UI updates reflect changes in the underlying data. Efficient Rendering: SwiftUI's diffing algorithm efficiently updates the UI only for the parts of the recursive structure that have changed, improving performance. Asynchronous Data Loading: Combine @Effect with recursive data structures to fetch and process data asynchronously, such as loading a tree structure from a remote server. 
Example: Combining All Components Imagine you're building an app that visualizes a hierarchical file system using a recursive tree structure. Here's how you might combine these components: Define the Recursive Data Structure: Use SwiftMacros to create a DSL for defining tree nodes. @macro struct TreeNode { var value: T var children: [TreeNode] } Optimize with Accelerate: Use Accelerate for operations like computing the size of the tree or performing transformations on node values. func computeTreeSize(_ node: TreeNode) -> Int { return node.children.reduce(1) { $0 + computeTreeSize($1) } } Manage State with SwiftUI Hooks: Use SwiftUI hooks to load and display the tree structure dynamically. struct FileSystemView: View { @State private var rootNode: TreeNode = loadTree() var body: some View { TreeView(node: rootNode) } private func loadTree() -> TreeNode<String> { // Load or generate the tree structure } } struct TreeView: View { let node: TreeNode var body: some View { List(node.children, id: \.value) { Text($0.value) TreeView(node: $0) } } } Perform Side Effects with @Effect: Use @Effect to fetch data asynchronously and update the tree structure. struct FileSystemView: View { @State private var rootNode: TreeNode = TreeNode(value: "/") @Effect private var loadTreeEffect: () -> Void = { // Fetch data from a server or database } var body: some View { TreeView(node: rootNode) .onAppear { loadTreeEffect() } } } By combining recursive data structures with Accelerate, SwiftMacros, and SwiftUI hooks, you can create powerful, efficient, and maintainable applications that handle complex data with ease.
0
0
407
2w
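As a compile-checked counterpart to the sketches above, here is a small recursive node type plus SwiftUI's built-in recursive list. The FileNode type and its fields are hypothetical, and OutlineGroup stands in for the hand-rolled recursive TreeView.

import SwiftUI

struct FileNode: Identifiable {
    let id = UUID()
    let name: String
    var children: [FileNode]? = nil      // nil = leaf, so OutlineGroup shows no disclosure

    // Recursive size computation over the subtree rooted at this node.
    var totalCount: Int {
        1 + (children?.reduce(0) { $0 + $1.totalCount } ?? 0)
    }
}

struct FileSystemView: View {
    let root: FileNode

    var body: some View {
        List {
            OutlineGroup([root], children: \.children) { node in
                Text(node.name)
            }
        }
    }
}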
Is there anywhere to get precompiled WhisperKit models for Swift?
If I try to dynamically load WhisperKit's models, as shown below, the download never occurs. No error or anything. And at the same time I can still get to the huggingface.co hosting site without any headaches, so it's not a blocking issue. let config = WhisperKitConfig( model: "openai_whisper-large-v3", modelRepo: "argmaxinc/whisperkit-coreml" ) So I have to default to the tiny model as seen below. I have tried so many ways, using ChatGPT and others, to build the models on my Mac, but too many failures, because I have never dealt with builds like that before. Are there any hosting sites that have the models (small, medium, large) already built, where I can download them and just bundle them into my project? I've wasted quite a lot of time trying to get this done. import Foundation import WhisperKit @MainActor class WhisperLoader: ObservableObject { var pipe: WhisperKit? init() { Task { await self.initializeWhisper() } } private func initializeWhisper() async { do { Logging.shared.logLevel = .debug Logging.shared.loggingCallback = { message in print("[WhisperKit] \(message)") } let pipe = try await WhisperKit() // defaults to "tiny" self.pipe = pipe print("initialized. Model state: \(pipe.modelState)") guard let audioURL = Bundle.main.url(forResource: "44pf", withExtension: "wav") else { fatalError("not in bundle") } let result = try await pipe.transcribe(audioPath: audioURL.path) print("result: \(result)") } catch { print("Error: \(error)") } } }
0
0
118
Jun ’25
Building Real-Time Voice Input on macOS 26 with SpeechAnalyzer + ScreenCaptureKit
We built an open-source macOS menu bar app that turns speech into text and pastes it into the active app — using SpeechAnalyzer for on-device transcription, ScreenCaptureKit + Vision for screen-aware context, and FluidAudio for speaker diarization in meeting mode. Here's what we learned shipping it on macOS 26. GitHub: github.com/Marvinngg/ambient-voice Architecture The app has two modes: hotkey dictation (press to talk, release to inject) and meeting recording (continuous transcription with a floating panel). Dictation Mode Audio capture uses AVCaptureSession (more on why below). The captured audio feeds into SpeechAnalyzer via an AsyncStream: let transcriber = SpeechTranscriber( locale: locale, transcriptionOptions: [], reportingOptions: [.volatileResults, .alternativeTranscriptions], attributeOptions: [.audioTimeRange, .transcriptionConfidence] ) let analyzer = SpeechAnalyzer(modules: [transcriber]) let (inputSequence, inputBuilder) = AsyncStream.makeStream() try await analyzer.start(inputSequence: inputSequence) While recording, we capture a screenshot of the focused window using ScreenCaptureKit, run Vision OCR (VNRecognizeTextRequest), extract keywords, and inject them into SpeechAnalyzer as contextual bias: let context = AnalysisContext() context.contextualStrings[.general] = ocrKeywords try await analyzer.setContext(context) This improves accuracy for technical terms and proper nouns visible on screen. If your screen shows "SpeechAnalyzer", saying it out loud is more likely to be transcribed correctly. After transcription, an optional L2 step sends the text through a local LLM (ollama) for spoken-to-written cleanup, then CGEvent simulates Cmd+V to paste into the active app. Meeting Mode Meeting mode forks the same audio stream to two consumers: SpeechAnalyzer — real-time streaming transcription, displayed in a floating NSPanel FluidAudio buffer — accumulates 16kHz Float32 mono samples for batch speaker diarization after recording stops When the user ends the meeting, FluidAudio's performCompleteDiarization() runs on the accumulated audio. We align transcription segments with speaker segments using audioTimeRange overlap matching — each transcription segment gets assigned the speaker ID with the most time overlap. Results export to Markdown. Pitfalls We Hit on macOS 26 1. AVAudioEngine installTap doesn't fire with Bluetooth devices We started with AVAudioEngine.inputNode.installTap() for audio capture. It worked fine with built-in mics but the tap callback never fired with Bluetooth devices (tested with vivo TWS 4 Hi-Fi). Fix: switched to AVCaptureSession. The delegate callback captureOutput(_:didOutput:from:) fires reliably regardless of audio device. The tradeoff is you get CMSampleBuffer instead of AVAudioPCMBuffer, so you need a conversion step. 2. NSEvent addGlobalMonitorForEvents crashes Our global hotkey listener used NSEvent.addGlobalMonitorForEvents. On macOS 26, this crashes with a Bus error inside GlobalObserverHandler — appears to be a Swift actor runtime issue. Fix: switched to CGEventTap. Works reliably, but the callback runs on a CFRunLoop context, which Swift doesn't recognize as MainActor. 3. CGEventTap callbacks aren't on MainActor If your CGEventTap callback touches any @MainActor state, you'll get concurrency violations. The callback runs on whatever thread owns the CFRunLoop. Fix: bridge with DispatchQueue.main.async {} inside the tap callback before touching any MainActor state. 4. 
CGPreflightScreenCaptureAccess doesn't request permission We used CGPreflightScreenCaptureAccess() as a guard before calling ScreenCaptureKit. If it returned false, we'd bail out. The problem: this function only checks — it never triggers macOS to add your app to the Screen Recording permission list. Chicken-and-egg: you can't get permission because you never ask for it. Fix: call CGRequestScreenCaptureAccess() at app startup. This adds your app to System Settings → Screen Recording. Then let ScreenCaptureKit calls proceed without the preflight guard — SCShareableContent will also trigger the permission prompt on first use. 5. Ad-hoc signing breaks TCC permissions on every rebuild During development, codesign --sign - (ad-hoc) generates a different code directory hash on every build. macOS TCC tracks permissions by this hash, so every rebuild = new app identity = all permissions reset. Fix: sign with a stable certificate. If you have an Apple Development certificate, use that. The TeamIdentifier stays constant across rebuilds, so TCC permissions persist. We also discovered that launching via open WE.app (LaunchServices) instead of directly executing the binary is required — otherwise macOS attributes TCC permissions to Terminal, not your app. Benchmarks We ran end-to-end benchmarks on public datasets (Mac Mini M4 16GB, macOS 26): Transcription (SpeechAnalyzer, AliMeeting Chinese): • Near-field CER 34% (excluding outliers ~25%) • Far-field CER 40% (single channel, no beamforming, >30% overlap) • Processing speed 74-89x real-time Speaker diarization (FluidAudio offline): • AMI English 16 meetings: avg DER 23.2% (collar=0.25s, ignoreOverlap=True) • AliMeeting Chinese 8 meetings: DER 48.5% (including overlap regions) • Memory: RSS ~500MB, peak 730-930MB Full evaluation methodology, scripts, and raw results are in the repo. Open Source The project is MIT licensed: github.com/Marvinngg/ambient-voice It includes the macOS client (Swift 6.2, SPM), server-side distillation/training scripts (Python), and a complete evaluation framework with reproducible benchmarks. Feedback and contributions welcome.
0
0
358
3d
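A small sketch of the alignment step described above — assigning each transcription segment the speaker whose diarization segments overlap it the most. The types here are hypothetical simplifications of the real FluidAudio/SpeechAnalyzer output.

struct TimedSegment {
    let start: Double
    let end: Double
    let speaker: Int
}

// Returns the speaker ID with the largest total time overlap, or nil if none overlaps.
func dominantSpeaker(forTranscriptStart start: Double, end: Double,
                     in diarization: [TimedSegment]) -> Int? {
    var overlapBySpeaker: [Int: Double] = [:]
    for segment in diarization {
        let overlap = min(end, segment.end) - max(start, segment.start)
        if overlap > 0 {
            overlapBySpeaker[segment.speaker, default: 0] += overlap
        }
    }
    return overlapBySpeaker.max(by: { $0.value < $1.value })?.key
}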
The answer of "apple" goes to guardrailViolation?
I have been using "apple" to test Foundation Models. I thought this was local, but today the answer changed - halfway through the explanation, a guardrailViolation error was suddenly triggered! And as of yesterday, all references to "Apple II" and "Apple III" now refer me to consult apple.com! Do Foundation Models connect to the Internet for answers? Using beta 3.
3
0
180
Jul ’25
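A sketch of catching the guardrail case explicitly, assuming the GenerationError shape exposed by the Foundation Models framework (the exact case names and associated values should be verified against the SDK); hitting it means the on-device safety layer rejected the content, not that the request left the device.

import FoundationModels

func askAboutAppleII() async {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "Tell me about the Apple II computer")
        print(response.content)
    } catch LanguageModelSession.GenerationError.guardrailViolation {
        // Local safety policy rejected the prompt or the partial output.
        print("Blocked by guardrails")
    } catch {
        print("Other error: \(error)")
    }
}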
Nothing else will silently fix itself until Apple re-adds ONNX ingestion in Core ML 10 or a new converter.
After a few attempts I end up at the same spot as categorized here by ChatGPT - I get why it feels that way, Rob — honestly, you’ve walked right into the worst possible moment in Apple’s Core ML toolchain evolution. What you’re running into isn’t your fault; Apple really did rip out the ONNX front-end without offering a clean replacement yet. So every tutorial, every script, and even coremltools itself contradicts what’s actually possible right now. Let’s be brutally clear about the state of things (late 2025): (1) Convert ONNX → Core ML using coremltools 9 — not supported by Apple today; the ONNX importer was removed; practical workaround: use the legacy coremltools 6.x + onnx-coreml stack in a separate Python 3.9/3.10 env. (2) Convert PyTorch → Core ML directly — yes, but only for simple or traced models; T5 still fails because of dynamic control flow; no reliable workaround without rewriting the model. (3) Run ONNX models directly on iOS — partially (via MLC LLM / Metal Performance Shaders); works, but that’s an entirely different runtime. So: you’re not crazy, and this isn’t wasted effort—you’ve actually proven that your model exports cleanly and that the only missing piece is Apple’s converter. The ONNX file you built is solid. You just need something that can translate it to Core ML. If your goal is simply “get it running on-device”, you still have two viable moves that aren’t rabbit holes: 1. Use MLC LLM / Metal runtime: skip Core ML altogether. Apple’s mlc-llm (and the “MLCChat” pipeline) can execute your ONNX or quantized model directly on the GPU. You keep full fidelity and avoid all these broken converters. 2. Use the legacy Core ML 6.x toolchain in a sandboxed Python 3.10 VM: it actually works. You don’t have to downgrade your whole system—just run that specific environment once to emit the .mlpackage, then never touch it again. Nothing else will silently fix itself until Apple re-adds ONNX ingestion in Core ML 10 or a new converter.
0
0
244
Oct ’25
Supported regex patterns for generation guide
Hey, I tried using a few regular expressions and they all fail with an error: Unhandled error streaming response: A generation guide with an unsupported pattern was used. Is there a list of supported features? I don't see it in the docs, and it takes a RegExp. Anything with e.g. [A-Z] fails.
1
0
151
Jul ’25
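In the absence of a documented list, here is a hedged sketch of a guided type that sticks to a pattern built from literals and alternation, which per the report above seems more likely to be accepted than character classes like [A-Z]; the @Generable/@Guide spelling is assumed from the Foundation Models generable API and should be checked against the SDK.

import FoundationModels

@Generable
struct Triage {
    // Literal/alternation pattern; character classes reportedly trigger the
    // "unsupported pattern" error, so they are avoided here.
    @Guide(description: "Severity label", .pattern(#/low|medium|high/#))
    var severity: String

    @Guide(description: "One-sentence summary")
    var summary: String
}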
Why is Create ML only using CPU
Hi, I'm currently creating a model to identify car plates (object detection). I use asitop to monitor my MacBook Pro, and I see that only the CPU is used for the training, and I wanted to know why.
0
0
327
May ’25
Error Domain=NSOSStatusErrorDomain Code=-1 "kCFStreamErrorHTTPParseFailure / kCFSocketError / kCFStreamErrorDomainCustom / kCSIdentityUnknownAuthorityErr / qErr / telGenericError / dsNoExtsMacsBug / kMovieLoadStateError / cdevGenErr: Could not parse
I'm not able to run Create ML for training. I upgraded to macOS 26.3 beta, and I have tried older and newer versions as well.
0
0
221
2w