Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under Audio subtopic

Post

Replies

Boosts

Views

Activity

Core Audio Tap: per-device attenuation vs. number of stereo output pairs — how to get unattenuated “raw” app streams?
Hi all, I’ve implemented the new Core Audio Tap API (AudioHardwareCreateProcessTap with CATapDescription) and I’m seeing consistent level attenuation that scales with the number of stereo output pairs exposed by the target device.

What I observe:
- Device with 4 stereo pairs (8 outs) → tap shows −12.04 dB relative to source.
- True 2-ch devices (built-in speakers, AirPods) → ~0 dB attenuation.

The attenuation appears regardless of whether I:
- Create a global (default-output) tap via initStereoGlobalTapButExcludeProcesses:
- Or create a per-process/per-device tap via initWithProcesses:andDeviceUID:withStream:

Additionally, the routing choice inside the sending app matters:
- App output to “System/Default Output” → I often see no attenuation.
- App output directly to a multi-out interface (e.g., RME Fireface) → I see the pair-count-scaled attenuation.

I can query Core Audio for the number of output channels/pairs and gain-compensate (+20·log10(N_pairs) dB), and that matches my measurements for many cases. However, this compensation is not universally correct, because it seems to depend on where each process routes its audio (Default Output vs. direct device), even when those processes are included in the same tap aggregate.

Question:
- Is there a supported way to obtain the raw, unattenuated streams for all processes through the Tap API, i.e., to bypass this automatic headroom/attenuation behavior entirely?
- If this attenuation is expected by design: is there a documented rule for when it applies (global vs. device taps, per-process taps, stream selection, etc.)? Is there a property/flag to disable it, or a reliable, official method to compute the exact compensation (beyond counting stereo pairs)?
- Any guidance on ensuring consistent levels when multiple processes route differently (Default Output vs. direct device) but are captured by the same tap?

Environment:
- API: AudioHardwareCreateProcessTap + CATapDescription
- Devices: built-in output (2-ch), RME Fireface (8+ outs / 4+ stereo pairs)
- Behavior reproducible with both global and per-process/per-device tap descriptions.
- Attenuation example: 4 stereo pairs → −12.04 dB observed.

Happy to provide a minimal sample, measurements, and device logs. Thanks! — David
0
0
379
Nov ’25
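For reference, the compensation the poster describes can be computed directly from a device's output stream configuration. The following is a minimal Swift sketch, assuming (as the post does) that the number of stereo pairs is simply the output channel count divided by two; it reproduces the +20·log10(N_pairs) figure (12.04 dB for 4 pairs) but does not address the routing-dependent cases the poster raises.

import CoreAudio
import Foundation

// Total number of output channels a device exposes, or nil on failure.
func outputChannelCount(of deviceID: AudioObjectID) -> Int? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyStreamConfiguration,
        mScope: kAudioObjectPropertyScopeOutput,
        mElement: kAudioObjectPropertyElementMain)
    var dataSize: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(deviceID, &address, 0, nil, &dataSize) == noErr else { return nil }
    let raw = UnsafeMutableRawPointer.allocate(byteCount: Int(dataSize),
                                               alignment: MemoryLayout<AudioBufferList>.alignment)
    defer { raw.deallocate() }
    guard AudioObjectGetPropertyData(deviceID, &address, 0, nil, &dataSize, raw) == noErr else { return nil }
    let buffers = UnsafeMutableAudioBufferListPointer(raw.assumingMemoryBound(to: AudioBufferList.self))
    return buffers.reduce(0) { $0 + Int($1.mNumberChannels) }
}

// Gain in dB to add back, mirroring the poster's pair-count observation.
func compensationDB(outputChannels: Int) -> Double {
    let stereoPairs = max(1, outputChannels / 2)   // assumption: pairs = channels / 2
    return 20.0 * log10(Double(stereoPairs))
}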
In Speech framework is SFTranscriptionSegment timing supposed to be off and speechRecognitionMetadata nil until isFinal?
I'm working in Swift/SwiftUI, running Xcode 16.3 on macOS 15.4, and I've seen this when running in the iOS simulator and in a macOS app run from Xcode. I've also seen this behaviour with 3 different audio files. Nothing in the documentation says that the speechRecognitionMetadata property on an SFSpeechRecognitionResult will be nil until isFinal, but that's the behaviour I'm seeing. I've stripped my class down to the following:

private var isAuthed = false

// I call this in a .task {} in my SwiftUI View
public func requestSpeechRecognizerPermission() {
    SFSpeechRecognizer.requestAuthorization { authStatus in
        Task {
            self.isAuthed = authStatus == .authorized
        }
    }
}

public func transcribe(from url: URL) {
    guard isAuthed else { return }
    let locale = Locale(identifier: "en-US")
    let recognizer = SFSpeechRecognizer(locale: locale)
    let recognitionRequest = SFSpeechURLRecognitionRequest(url: url)
    // the behaviour occurs whether I set this to true or not; I recently set
    // it to true to see if it made a difference
    recognizer?.supportsOnDeviceRecognition = true
    recognitionRequest.shouldReportPartialResults = true
    recognitionRequest.addsPunctuation = true
    recognizer?.recognitionTask(with: recognitionRequest) { (result, error) in
        guard result != nil else { return }
        if result!.isFinal {
            // speechRecognitionMetadata is not nil
        } else {
            // speechRecognitionMetadata is nil
        }
    }
}

Further, and this isn't documented either, the SFTranscriptionSegment values don't have correct timestamp and duration values until isFinal. The values aren't all zero, but they don't align with the timing in the audio, and they change to accurate values when isFinal is true. The transcription otherwise "works", in that I get transcription text before isFinal, and if I wait for isFinal the segments are correct and speechRecognitionMetadata is filled with values. The context here is that I'm trying to generate a transcription I can then highlight the spoken sections of as the audio plays, and I'm thinking I must just be trying to use the Speech framework in a way it does not work. I got my concept working if I pre-process the audio (i.e. run it through until isFinal and save the results I need to JSON), but being able to do even a rougher version of it 'on the fly' - which requires segments to have the right timestamp/duration before isFinal - is perhaps impossible?
1
0
193
Jul ’25
Indexing of Music App
Recently, after the macOS 26.3 (Tahoe) update, the ordering in my Music app has been horrible at best - music disappearing, tracks not aligning with albums (even if the albums are from different years). It's created quite a problem, because the disappearing-tracks issue seems to be replicating to iOS devices as well (although track numbering and album association seem to be stable). Has anyone else heard of this issue?
0
0
242
Dec ’25
Electron app + Apple Music playback: queue works, playback does not start. Looking for guidance.
Hi everyone. I’m building a macOS-first desktop app where music drives the app's behavior loop. The app is currently an Electron prototype.

The blocker: we’re testing Apple Music inside an Electron app. MusicKit JS authorization works, catalog search works, and setting the queue works, but playback does not actually start in Electron.

What we tried:
- Created Apple Developer / MusicKit credentials.
- Generated Apple Music developer tokens successfully.
- Retrieved a Music User Token through MusicKit JS.
- Confirmed Apple Music API calls work.
- Confirmed /v1/test and /me/storefront return 200 OK.
- Built a local HTTP auth/playback window inside Electron instead of using file://.
- Tested music.setQueue() with both { song: songId } and { url: catalogUrl }.

In Electron, the queue loads correctly (queueEmpty=false, queueLength=1, volume=1, playbackRate=1), but after music.play(), playbackTime stays at 0 and no audio plays. Then we ran the same MusicKit playback test in normal Chrome using the same token, same local origin, same catalog track, and same queue descriptor. Chrome played successfully and playbackTime advanced. We also checked Electron directly and found navigator.requestMediaKeySystemAccess is missing, so our current theory is that stock Electron lacks the protected media / EME support Apple Music web playback needs.

Important: we are not trying to bypass DRM or extract audio. We just want a legitimate way for a user-authorized macOS app to control Apple Music playback or observe playback state.

What we’re considering next:
- Use the native macOS Music app as the playback engine and control it from our app.
- Test AppleScript / Automation permissions for play, pause, next, current track, player state, etc.
- Later, possibly build a native Swift helper using Apple Music / MediaPlayer APIs and communicate with Electron over IPC.
- Avoid relying on Electron MusicKit JS playback if this is a known dead end.

Questions:
- Has anyone successfully made Apple Music / MusicKit JS playback work inside Electron?
- Is the missing EME/protected-media layer the expected blocker here?
- Is controlling the native macOS Music app the more realistic path?
- Any gotchas with AppleScript, MusicKit native APIs, or Electron + native helper architecture for this use case?

Any pointers from people who have dealt with Electron + Apple Music / protected media would be appreciated.
0
0
41
3d
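Since a native helper that drives the Music app is one of the paths the poster is already considering, here is a minimal Swift sketch of that idea using NSAppleScript. The Foundation API calls are real, but the script text and the overall design (including bridging it to Electron over IPC) are only illustrative; the helper also needs an NSAppleEventsUsageDescription string, and a sandboxed or hardened-runtime helper needs the com.apple.security.automation.apple-events entitlement, before macOS will show the Automation prompt.

import Foundation

// Minimal wrapper around the Music app's scripting interface.
// Assumes the user has granted Automation permission for "Music".
enum MusicRemote {
    @discardableResult
    static func run(_ source: String) -> NSAppleEventDescriptor? {
        var error: NSDictionary?
        let script = NSAppleScript(source: source)
        let result = script?.executeAndReturnError(&error)
        if let error = error { print("AppleScript error: \(error)") }
        return result
    }

    static func play()  { run(#"tell application "Music" to play"#) }
    static func pause() { run(#"tell application "Music" to pause"#) }

    // Name of the current track, or nil if nothing is playing.
    static func currentTrackName() -> String? {
        run(#"tell application "Music" to get name of current track"#)?.stringValue
    }
}

ScriptingBridge offers a typed alternative over the same scripting interface, but plain AppleScript keeps the control surface (play, pause, next, player state) easy to extend.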
ioreg AVBControllerState - AVB/EAV Mode
Hello. To determine whether the "AVB/EAV Mode" of an AV-capable network interface is turned on or off, I query the IO registry and evaluate the property "AVBControllerState". I was wondering if this is the "correct" approach and if anything is known about the values for this property? Network interfaces without AV capability may also carry this property (e.g., for my WiFi adapter the value is 1), whereas the value for interfaces with AV capability can be 0 or 3 - at least as far as I could observe with the limited number of test devices at hand. Is it safe to assume that a value of 3 means this feature is turned on, 0 that it is turned off, and to ignore values of 1? Is there another approach to determine the status of the "AVB/EAV Mode"? Thanks for any insight. Best regards, Ingo
0
0
204
Feb ’26
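For what it's worth, the same lookup the poster does with ioreg can be done in code. This is only a sketch: IOBSDNameMatching and IORegistryEntrySearchCFProperty are standard IOKit calls, but "AVBControllerState" itself appears to be undocumented, so the interpretation of the 0 / 1 / 3 values remains the open question from the post.

import Foundation
import IOKit

// Reads the (apparently undocumented) "AVBControllerState" property for the
// network interface with the given BSD name (e.g. "en0"). Returns nil if the
// interface or the property is not found.
func avbControllerState(forBSDName bsdName: String) -> Int? {
    guard let matching = IOBSDNameMatching(kIOMainPortDefault, 0, bsdName) else { return nil }
    let entry = IOServiceGetMatchingService(kIOMainPortDefault, matching)
    guard entry != 0 else { return nil }
    defer { IOObjectRelease(entry) }

    // The property may live on the interface itself or on its parent
    // controller, so search up and down the service plane.
    let options = IOOptionBits(kIORegistryIterateRecursively | kIORegistryIterateParents)
    let value = IORegistryEntrySearchCFProperty(entry, kIOServicePlane,
                                                "AVBControllerState" as CFString,
                                                kCFAllocatorDefault, options)
    return (value as? NSNumber)?.intValue
}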
tvOS AVQueuePlayer Now Playing Info in Control Center?
I have a music app I'm developing and I'm having a weird issue where I can see now-playing info on every platform other than tvOS. As far as I can tell, I have correctly configured MPNowPlayingInfoCenter:

MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
MPNowPlayingInfoCenter.default().playbackState = .playing

Are there any extra requirements to get my app's now-playing info showing in Control Center on tvOS? Another strange issue that might be related: I can use the Apple TV remote to pause audio but not resume playback, so I feel like there's something I'm missing about registering audio playback on tvOS specifically.
0
0
109
Jun ’25
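One thing worth ruling out, offered as a guess rather than a confirmed answer: on tvOS the system generally surfaces Now Playing info (and makes the remote's play/pause work) for the app that has an active .playback audio session and registered remote-command handlers, which would also explain why resume does nothing. A minimal sketch of the handler side, assuming an AVQueuePlayer named player:

import AVFoundation
import MediaPlayer

// Registers remote-command handlers; on tvOS this is typically needed in
// addition to setting nowPlayingInfo for the system UI to pick the app up.
func configureRemoteCommands(for player: AVQueuePlayer) {
    let center = MPRemoteCommandCenter.shared()
    center.playCommand.addTarget { _ in
        player.play()
        return .success
    }
    center.pauseCommand.addTarget { _ in
        player.pause()
        return .success
    }
}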
dlsym cannot find symbol g_dwILResult when debugging an audio plugin
I am trying to debug the AAX version of my plugin (a MIDI effect) in Pro Tools, but I am getting the following error (in the Mac console) when attempting to load it: dlsym cannot find symbol g_dwILResult in CFBundle, etc. I used Xcode 16.4 to build the plugin. Has anybody come across the same or a similar message? Best, Achillefs, Axart Labs
2
0
608
Sep ’25
Convert CoreAudio AudioObjectID to IOUSB LocationID
Is there a recommended way on macOS 26 Tahoe to take a CoreAudio AudioObjectID and use it to lookup the underlying USB LocationID? I previously used AudioObjectID to query the corresponding DeviceUID with kAudioDevicePropertyDeviceUID. Then I queried for the IOService matching kIOAudioEngineClassName with property kIOAudioEngineGlobalUniqueIDKey matching DeviceUID, and I loaded kUSBDevicePropertyLocationID from the result. This fails on macOS 26, because the IO Registry for the device has an entry for usbaudiod rather than AppleUSBAudioEngine, and usbaudiod does not include a kIOAudioEngineGlobalUniqueIDKey property (or any other property to map it to a CoreAudio DeviceUID). My use-case here is a piece of audio recording software that allows configuring a set of supported audio devices via USB HID prior to recording. I present the user with a list of CoreAudio devices to use, but without a way to lookup the underlying USB LocationID, I cannot guarantee that the configured device matches the selected device (e.g. if the user plugged in two identical microphones).
2
0
692
Sep ’25
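For context, the first half of the poster's old pipeline (AudioObjectID → DeviceUID) still looks roughly like the sketch below; it is the second half, matching that UID to an IORegistry entry under macOS 26, that has no obvious replacement, so this is background rather than a fix.

import CoreAudio
import Foundation

// Returns kAudioDevicePropertyDeviceUID for a CoreAudio device, or nil.
func deviceUID(for deviceID: AudioObjectID) -> String? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyDeviceUID,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var uid: CFString? = nil
    var size = UInt32(MemoryLayout<CFString?>.size)
    let status = withUnsafeMutablePointer(to: &uid) { ptr in
        AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, ptr)
    }
    return status == noErr ? uid.map { $0 as String } : nil
}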
Is Call Translation API available for VOIP?
I might have misunderstood the docs, but is Call Translation going to be available for VoIP applications? E.g., in an already-connected VoIP call, would it be possible for Call Translation to be enabled on an iOS 26, Apple Intelligence-supported device? I have personally tried it and it doesn't look like it supports VoIP, but I would love to confirm this. Reference: https://developer.apple.com/documentation/callkit/cxsettranslatingcallaction/
1
0
84
Jun ’25
tvOS: Background audio + local caching works on Simulator but stops on real Apple TV device
Description: I’m developing a tvOS app using SwiftUI where we play background audio (music) in the Welcome screen, with support for offline playback via local caching.

🔹 Feature Overview
- App fetches audio metadata from API
- Starts streaming audio (HLS .m3u8) immediately
- In parallel, downloads the raw audio file (.mp3)
- Once download completes: switches playback from streaming → local file
- On next launch (offline mode), app plays audio from local storage

🔹 Issue
This flow works perfectly on the Simulator, but on a real Apple TV device:
- Audio plays for a few seconds (2–5 sec) and then stops, especially after switching from streaming → local file
- No explicit AVPlayer error is logged
- Playback sometimes stops after UI updates or periodic API refresh

🔹 Implementation Details
- Using AVPlayer with AVPlayerItem
- Background audio controlled via a shared manager (singleton)
- Files stored locally using FileManager (currently using .cachesDirectory)
- Switching playback using:
player.replaceCurrentItem(with: AVPlayerItem(url: localURL))
player.play()

🔹 Observations
- Works reliably on Simulator
- On device: playback stops silently
- Seems related to lifecycle, buffering, or file access
- No issues when continuously streaming (without switching to local)

🔹 Questions
- Is there any limitation or known issue with AVPlayer when switching from streaming (HLS) to local file playback on tvOS?
- Are there specific requirements for playing locally cached media files on tvOS (e.g., file location, permissions, or sandbox behavior)?
- What is the recommended storage location and size limit for cached media files on tvOS? We understand tvOS has limited persistent storage. Is .cachesDirectory the correct approach for this use case?
- Are there known differences in AVPlayer behavior between Simulator and real Apple TV devices (especially regarding buffering or lifecycle)?
- What is the recommended approach for implementing offline background audio on tvOS apps?

🔹 Goal
We want to implement a reliable system where:
- Audio streams initially
- Seamlessly switches to local file after download
- Continues playing without interruption
- Supports offline playback on subsequent launches

Any guidance or best practices would be greatly appreciated. Thank you!
1
0
240
Apr ’26
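Without claiming this is the diagnosis, two things that are cheap to rule out on the device are the audio session configuration and losing the playback position on the switch. A rough sketch, assuming a shared manager like the one the poster describes (names are illustrative):

import AVFoundation

final class AudioManager {
    static let shared = AudioManager()
    let player = AVPlayer()

    func activateSession() throws {
        // .playback keeps audio going across UI updates; worth confirming it is set on device.
        try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [])
        try AVAudioSession.sharedInstance().setActive(true)
    }

    // Switch from the streaming item to the downloaded file without
    // restarting from zero, and only if the file is really there.
    func switchToLocalFile(at localURL: URL) {
        guard FileManager.default.fileExists(atPath: localURL.path) else { return }
        let position = player.currentTime()
        player.replaceCurrentItem(with: AVPlayerItem(url: localURL))
        player.seek(to: position)
        player.play()
    }
}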
iOS AUv3 extension: no Icon shown in host
Hi, I'm working on an AUv3 project. The app itself displays my icon. However, the AUv3 extension does not display any icon in any host app (AUM, Drambo, etc.). I thought the extension would inherit the host app icon, but that does not appear to be the case. I tried to add the icon as a 1024x1024 file to the extension target and then update my extension plist file with a CFBundleIconFile key, but no luck either. It must surely be really easy. What am I missing? Thanks in advance for your help!
5
0
163
May ’25
Issues with monitoring and changing WebRTC audio output device in WKWebView
I am developing a VoIP app that uses WebRTC inside a WKWebView.

Question 1: How can I monitor which audio output device WebRTC is currently using? I want to display this information in the UI for the user.

Question 2: How can I change the current audio output device for WebRTC? I am using a JS bridge to Objective-C code, attempting to change the audio device with the following code:

void set_speaker(int n) {
    session = [AVAudioSession sharedInstance];
    NSError *err = nil;
    if (n == 1) {
        [session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&err];
    } else {
        [session overrideOutputAudioPort:AVAudioSessionPortOverrideNone error:&err];
    }
}

However, this approach does not work. I am testing on an iPhone with iOS 16.7. Is a higher iOS version required?
2
0
356
1w
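A detail that often matters here, offered as a suggestion rather than a confirmed fix: the speaker override of overrideOutputAudioPort is documented to apply only with the playAndRecord category, and the current route can be read from the session rather than guessed. A Swift sketch of both sides:

import AVFoundation

func routeToSpeaker(_ enabled: Bool) throws {
    let session = AVAudioSession.sharedInstance()
    // The speaker override is only honored for the playAndRecord category.
    try session.setCategory(.playAndRecord, mode: .voiceChat, options: [.allowBluetooth])
    try session.setActive(true)
    try session.overrideOutputAudioPort(enabled ? .speaker : .none)
}

// Name of the current output port, for display in the UI.
func currentOutputName() -> String? {
    AVAudioSession.sharedInstance().currentRoute.outputs.first?.portName
}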
ShazamKit supported for iOS apps that can run on Mac silicon?
I am having issues getting my iOS app, which uses ShazamKit, to work on a Mac with Apple silicon. When uploading the archive to App Store Connect I get ITMS-90863: "Macs with Apple silicon support issue - The app links with libraries that aren't present in macOS: /usr/lib/swift/libswiftShazamKit.dylib". Is ShazamKit not supported for iOS apps that can run on Macs with Apple silicon? Or is there something I should fix in my setup / deployment?
26
0
1.2k
Jun ’25
Mac (Designed for iPad) cannot access microphone
I have an application that is a VOIP application of sorts that needs access to the microphone. I am using the Mac (Designed for iPad) support to not have to do huge amounts of conditional building and support for all the many iOS specific things my app includes. I never get prompted to allow microphone permissions and I never see my app name appear in Privacy & Security -> Microphone permissions setup. So is it that Mac is just a dead end for any form of an application that needs a microphone and is running under Mac (Designed for iPad) compatibility mode? Why doesn't TCC have some mechanism to notice and grant access to mic use?
3
0
437
3d
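Not an answer to whether compatibility mode is a dead end, but before concluding that, it is worth confirming that the Info.plist carries an NSMicrophoneUsageDescription string and that the app actually triggers a permission request: the TCC prompt (and the entry under Privacy & Security ▸ Microphone) only appears after the first request. A minimal sketch of requesting access explicitly:

import AVFoundation

// Explicitly request microphone access; the system prompt only appears once
// this runs and NSMicrophoneUsageDescription is present in Info.plist.
func requestMicrophoneAccess(_ completion: @escaping (Bool) -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .audio) {
    case .authorized:
        completion(true)
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .audio, completionHandler: completion)
    default:
        completion(false)
    }
}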
AVB Support for the AVnu MILAN Conventions
The AVB AVnu MILAN convention has a growing population. Many big companies (Cisco, Meyer Sound, d&b Audio, l‘acoustics, Presonus, digico, etc.) implement the AVnu MILAN standards. Is there a plan on Apple's side to also implement AVnu MILAN on top of the AVB protocol? The advantage for Apple would be great integration into the professional audio market and a more stable integration on top of the AVB protocol. ATDECC works, but is not that stable.
1
0
185
Oct ’25
Apple Music iOS 26 features in Android
Since many users like me use Apple Music on Android, the app is almost as feature-rich as on iOS. It would be fantastic if the developers could add the new iOS 26 features to the Android app, along with a minor UI change. I know it's challenging to implement Liquid Glass on Android hardware or design, but features like auto-mix, pronunciation, and translation could be added. Kindly consider this request!
1
0
289
Jul ’25
Usage of Apple Music Feed leads to error 500
Hello, I'm trying to receive parquet files using the example provided in the documentation. I've done all the required steps but constantly receive error 500 with "Upstream Service Error". Looking at the issues list, it seems this error has existed for months. Is it possible to get it working?
2
0
183
May ’25
Audio Unit logo for website
Hi, is there an Audio Unit logo I can show on my website? I would love to show that my application is able to host Audio Unit plugins. Regards, Joël
0
0
548
Sep ’25
(iOS 18) SFSpeechRecognitionResult providing new text after a gap in speaking
Here is the demo from Apple's site. This issue is specific to iOS 18. When running this demo, we get new text after a gap in speaking: recognitionTask(with:resultHandler:) provides only the text spoken after the gap, not the concatenation of the old text and the newly spoken text.
6
0
1.3k
May ’25
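A workaround some developers use for this iOS 18 behaviour, offered purely as a sketch (the class and the heuristic are illustrative, not an official fix), is to accumulate the text on the app side and treat a hypothesis that suddenly shrinks as the recognizer having restarted after the gap:

import Speech

// Accumulates partial results across the "reset after a pause" behaviour
// described above. Purely illustrative; small revisions to earlier words
// are not handled perfectly.
final class TranscriptAccumulator {
    private var committed = ""       // text we have decided to keep
    private var currentPartial = ""  // latest volatile hypothesis

    var displayText: String {
        [committed, currentPartial].filter { !$0.isEmpty }.joined(separator: " ")
    }

    func ingest(_ result: SFSpeechRecognitionResult) {
        let text = result.bestTranscription.formattedString
        // Heuristic: a hypothesis less than half the length of the previous
        // one is treated as a restart, so the previous hypothesis is kept.
        if !currentPartial.isEmpty, text.count < currentPartial.count / 2 {
            committed = displayText
        }
        currentPartial = text
        if result.isFinal {
            committed = displayText
            currentPartial = ""
        }
    }
}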
Radio stations unable to play on Android with MusicKit SDK
Radio stations are currently not supported by the MusicKit SDK for Android. The SDK has not been updated for years now, and it lacks pretty big features of Apple Music.
1
0
390
3w