Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under the Audio subtopic


Memory Leak in AVAudioPlayer in Simulator only
I have a memory leak when using AVAudioPlayer. I managed to narrow the issue down to a very simple app, whose code I paste below. The memory leak starts immediately when I start playing sound, but only in the Simulator; on a real iPhone there is no memory leak.

import SwiftUI
import AVFoundation

struct ContentView_Audio: View {
    var sound: AVAudioPlayer?

    init() {
        guard let path = Bundle.main.path(forResource: "cd201", ofType: "mp3") else { return }
        let url = URL(fileURLWithPath: path)
        do {
            try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [.mixWithOthers])
        } catch { return }
        do {
            try AVAudioSession.sharedInstance().setActive(true)
        } catch { return }
        do {
            sound = try AVAudioPlayer(contentsOf: url)
        } catch { return }
    }

    var body: some View {
        HStack {
            Button {
                playSound()
            } label: {
                ZStack {
                    Circle()
                        .fill(.mint.opacity(0.3))
                        .frame(width: 44, height: 44)
                        .shadow(radius: 8)
                    Image(systemName: "play.fill")
                        .resizable()
                        .frame(width: 20, height: 20)
                }
            }
            .padding()

            Button {
                stopSound()
            } label: {
                ZStack {
                    Circle()
                        .fill(.mint.opacity(0.3))
                        .frame(width: 44, height: 44)
                        .shadow(radius: 8)
                    Image(systemName: "stop.fill")
                        .resizable()
                        .frame(width: 20, height: 20)
                }
            }
            .padding()
        }
    }

    private func playSound() {
        guard sound != nil else { return }
        sound?.volume = 1
        // sound?.numberOfLoops = -1
        sound?.play()
    }

    func stopSound() {
        sound?.stop()
    }
}
Replies: 2 · Boosts: 0 · Views: 124 · Activity: Apr ’25
USB microphone input: Mac "Designed for iPad"
My app - natively iOS but built with the "Designed for iPad" option to run on Mac - does not recognise an attached USB microphone when running on a Mac. This line

int32_t items = (int32_t)[[[AVAudioSession sharedInstance] availableInputs] count];

returns 1, which is the Mac internal mic. On iPad and iPhone it sees both the internal mic and the USB mic. Is this an inherent "Designed for iPad" restriction, and is there some trick I can pull to get the USB microphone to be recognised by the system?
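Not from the original post, but for anyone comparing behaviour: a small Swift diagnostic sketch that lists every input the audio session reports, so you can see whether the USB port ever shows up. The category used here is an assumption and may differ from the app's real configuration.

import AVFoundation

// Diagnostic sketch: print every input AVAudioSession currently reports.
// Assumes a record-capable category; adjust to match the real app.
func dumpAvailableInputs() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord)
        try session.setActive(true)
    } catch {
        print("Session setup failed: \(error)")
        return
    }
    for input in session.availableInputs ?? [] {
        // portType is .builtInMic, .usbAudio, etc.
        print("\(input.portName): \(input.portType.rawValue)")
    }
}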
Replies: 1 · Boosts: 0 · Views: 215 · Activity: 5d
The files generated using AVAudioRecorder have a constant size of only 4 KB
Hello. My app uses AVAudioRecorder to generate recording files, and for some users the generated files are consistently only 4 KB in size. Most users generate audio files normally; only a few users experience this, and only occasionally. After uninstalling and reinstalling the app it works normally, but the problem reappears after a period of time. I have confirmed that the problematic audio files are always the same size and cannot be played. I added the audioRecorderDidFinishRecording delegate method, and it reports that the recording finished normally. The user also reports that recording appears to work, but there is a problem with the generated file. How should I handle this issue? I look forward to your reply.

- (void)startRecordWithOrderID:(NSString *)orderID {
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryRecord error:nil];
    [audioSession setActive:YES error:nil];

    NSMutableDictionary *settings = [[NSMutableDictionary alloc] init];
    [settings setObject:[NSNumber numberWithFloat:8000.0] forKey:AVSampleRateKey];
    [settings setObject:[NSNumber numberWithInt:kAudioFormatLinearPCM] forKey:AVFormatIDKey];
    [settings setObject:[NSNumber numberWithInt:16] forKey:AVLinearPCMBitDepthKey];
    [settings setObject:[NSNumber numberWithInt:1] forKey:AVNumberOfChannelsKey];
    [settings setObject:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsBigEndianKey];
    [settings setObject:[NSNumber numberWithBool:NO] forKey:AVLinearPCMIsFloatKey];

    NSString *path = [WDUtility createDirInDocument:@"audios" withOrderID:orderID withPathExtension:@"wav"];
    NSURL *tmpFile = [NSURL fileURLWithPath:path];

    recorder = [[AVAudioRecorder alloc] initWithURL:tmpFile settings:settings error:nil];
    [recorder setDelegate:self];
    [recorder prepareToRecord];
    [recorder record];
}
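Not part of the original post: a Swift sketch of the same recorder setup with every error and return value surfaced, since the snippet above passes nil for each error parameter and ignores the Bool results. The output URL is a placeholder for whatever path WDUtility actually produces; the settings mirror the ones shown.

import AVFoundation

enum RecorderError: Error { case prepareFailed, recordFailed }

// Diagnostic sketch: same 8 kHz / 16-bit / mono PCM settings, but every
// failure point is reported instead of being silently ignored.
func startDiagnosticRecording(to outputURL: URL) throws -> AVAudioRecorder {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.record)
    try session.setActive(true)

    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 8000.0,
        AVNumberOfChannelsKey: 1,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsFloatKey: false
    ]

    let recorder = try AVAudioRecorder(url: outputURL, settings: settings)
    guard recorder.prepareToRecord() else { throw RecorderError.prepareFailed } // file could not be created
    guard recorder.record() else { throw RecorderError.recordFailed }           // recording did not start
    return recorder
}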
Replies: 0 · Boosts: 0 · Views: 167 · Activity: Jul ’25
How to match music with ShazamKit for Android?
Hi all, I can successfully match music using ShazamKit on Apple platforms with SwiftUI: a simple app that lets the user load an audio file and extracts the corresponding match. However, I am unable to match music using ShazamKit on Android. I am trying to build the same simple app, but I cannot match anything: I get MATCH_ATTEMPT_FAILED every time I try. I don't know what I am doing wrong, but the Shazam part of the Kotlin Android code is in this method:

suspend fun processAudioFileInBackground(
    filePath: String,
    developerTokenProvider: DeveloperTokenProvider
) = withContext(Dispatchers.IO) {
    val bufferSize = 1024 * 1024
    val audioFile = FileInputStream(filePath)
    val byteBuffer = ByteBuffer.allocate(bufferSize)
    byteBuffer.order(ByteOrder.LITTLE_ENDIAN)
    var bytesRead: Int
    while (audioFile.read(byteBuffer.array()).also { bytesRead = it } != -1) {
        val signatureGenerator = (ShazamKit.createSignatureGenerator(AudioSampleRateInHz.SAMPLE_RATE_44100) as ShazamKitResult.Success).data
        signatureGenerator.append(byteBuffer.array(), bytesRead, System.currentTimeMillis())
        val signature = signatureGenerator.generateSignature()
        println("Signature: ${signature.durationInMs}")

        val catalog = ShazamKit.createShazamCatalog(developerTokenProvider, Locale.ENGLISH)
        val session = (ShazamKit.createSession(catalog) as ShazamKitResult.Success).data
        val matchResult = session.match(signature)
        println("MatchResult : $matchResult")
        setMatchResult(matchResult)
        byteBuffer.clear()
    }
    audioFile.close()
}

I noticed that changing the Locale used when creating the catalog gives a different result: I get NoMatch with no exception. Can you please help me with this?
Replies: 0 · Boosts: 0 · Views: 88 · Activity: Apr ’25
AVPlayerItem.externalMetadata not available
According to the documentation (https://developer.apple.com/documentation/avfoundation/avplayeritem/externalmetadata), AVPlayerItem should have an externalMetadata property. However, it does not appear to be visible to my app. When I try to use it, I get: Value of type 'AVPlayerItem' has no member 'externalMetadata'. The documentation states iOS 12.2+; I am building with a minimum deployment target of iOS 18. Code snippet:

import Foundation
import AVFoundation

// ... in function ...

// create metadata as described in https://developer.apple.com/videos/play/wwdc2022/110338
var title = AVMutableMetadataItem()
title.identifier = .commonIdentifierAlbumName
title.value = "My Title" as NSString?
title.extendedLanguageTag = "und"

var playerItem = await AVPlayerItem(asset: composition)
playerItem.externalMetadata = [title]
Replies: 0 · Boosts: 0 · Views: 86 · Activity: Apr ’25
FaceTime Screen-Share Audio and Video Experience
FaceTime’s screen-share audio balance is badly broken right now. Whenever I share media, the system audio that gets sent through FaceTime is a tiny whisper even at full volume (or even when connected to my speaker or headphones). The moment anyone on the call makes any noise at all, the shared audio ducks so hard it disappears, while the voice (or rustling, or air-conditioning noise) spikes to painful levels. It's impossible to watch or listen to anything together.

Also, the feature where FaceTime would shrink to a small square during screen sharing has been completely removed. That was a good feature, and I'm really confused why it's gone. Now the FaceTime window stays as a long rectangle that covers part of the content I'm trying to share (unless I use the full-screen tile, but then I can't pull up any other windows during the call) and can't be made smaller than about a third of the screen. You can't resize the window or adjust its dimensions, so it ends up blocking the actual media you're trying to watch.

Here are some feature requests/fixes that would greatly improve the FaceTime screen-share experience:
- An option to adjust the shared media volume independently of the call audio.
- A way to disable or tone down the extreme automatic audio ducking while screen sharing.
- Reintroduce the minimized "floating square" mode, or allow full manual resizing and repositioning of the FaceTime window during screen-share sessions.

Overall, this setup makes FaceTime screen sharing basically unusable. The audio balance is so inconsistent that it's easier to switch to Zoom or Google Meet, which both handle shared sound correctly and let you move the call window out of the way. Until these issues are fixed, there's no practical reason to use FaceTime for shared viewing at all.
Replies: 1 · Boosts: 0 · Views: 271 · Activity: Nov ’25
scaleTimeRange causes noise in the sound
I'm using AVFoundation to build a multi-track editor app that can insert multiple tracks and clips, including scaling some clips to change their playback speed (I'm also not sure whether AVFoundation is the best choice for this). After scaling with the scaleTimeRange API, there are short bursts of noise during playback. Sometimes playback is fine when playing the AVMutableComposition with AVPlayer and an AVPlayerItem, but after exporting with AVAssetReader the resulting file contains short bursts of noise. I'm not sure why. Here is an example project, which can be built and run directly: https://github.com/luckysmg/daily_images/raw/refs/heads/main/TestDemo.zip
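For context, a minimal sketch (with illustrative values, not taken from the linked project) of the kind of scaleTimeRange call being discussed: stretching a two-second range to four seconds halves its playback speed, which forces the audio in that range to be resampled.

import AVFoundation

// Minimal sketch: retime part of a composition track with scaleTimeRange.
// The track and the time values are illustrative only.
func slowDownFirstTwoSeconds(of track: AVMutableCompositionTrack) {
    let original = CMTimeRange(start: .zero,
                               duration: CMTime(seconds: 2, preferredTimescale: 600))
    let stretched = CMTime(seconds: 4, preferredTimescale: 600)
    // This range now plays at half speed; its audio is resampled accordingly.
    track.scaleTimeRange(original, toDuration: stretched)
}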
Replies: 0 · Boosts: 0 · Views: 133 · Activity: Jul ’25
Where is the License Agreement for Android version of ShazamKit?
I have integrated the ShazamKit SDK into my iOS app and would like to implement the same functionality in my Android app. My question is: Can I use the Android version of the ShazamKit SDK for commercial purposes? After extensive research, I could not find any official information regarding the license of the Android version of the ShazamKit SDK. Could you please provide a formal license statement?
Replies: 1 · Boosts: 0 · Views: 129 · Activity: Apr ’25
Can a Location-Based Audio AR Experience Run in the Background on iOS?
Hi everyone! I’ve developed a location-based Audio AR app in Unity with FMOD & Resonance Audio and AirPods Pro head tracking to create a ubiquitous augmented-soundscape experience. Think of it as an audio version of Pokémon Go, but with a more precise location requirement to ensure spatial audio is placed correctly. I want this experience to run in the background on iOS, but from what I’ve gathered, Unity doesn’t support this well, so I’m considering developing a Swift version instead. Since this is primarily for research purposes, privacy concerns are not a major issue in my case. However, I’ve come across some potential challenges:
- Real-time precise location updates: can iOS provide fully instantaneous, high-accuracy location updates in the background?
- Continuous real-time data processing: can an app continuously process spatial audio, head tracking, and location data while running in the background?
I’m not sure if newer iOS versions have improved in these areas or if there are workarounds to achieve this. Would this kind of experience be feasible to run in the background on iOS? Any insights or pointers would be greatly appreciated! I’m very new to iOS development, so apologies if this is a basic question. Thanks in advance!
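As a starting point for the location half of the question, here is a minimal Swift sketch (an illustration, not a claim that the full experience is feasible) of requesting continuous high-accuracy location updates that keep flowing in the background. It assumes the app declares the "location" background mode and has Always authorization, and it says nothing about the audio or head-tracking side.

import CoreLocation

// Sketch: continuous high-accuracy location updates that continue in the background.
// Requires UIBackgroundModes to include "location" and Always authorization.
final class BackgroundLocator: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func start() {
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyBest
        manager.distanceFilter = kCLDistanceFilterNone
        manager.allowsBackgroundLocationUpdates = true
        manager.pausesLocationUpdatesAutomatically = false
        manager.requestAlwaysAuthorization()
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let latest = locations.last else { return }
        // Feed the fix into the spatial-audio positioning here.
        print("lat \(latest.coordinate.latitude), lon \(latest.coordinate.longitude), accuracy \(latest.horizontalAccuracy) m")
    }
}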
Replies: 0 · Boosts: 0 · Views: 100 · Activity: Apr ’25
TTS Audio Unit Extension: File Write Access in App Group Container Denied Despite Proper Entitlements
I'm developing a TTS Audio Unit Extension that needs to write trace/log files to a shared App Group container. While the main app can successfully create and write files to the container, the extension gets sandbox-denied errors despite having proper App Group entitlements configured.

Setup:
- The main app (Flutter) and the TTS Audio Unit Extension share the same App Group.
- The App Group is properly configured in the developer portal and in the entitlements.
- The main app successfully creates and uses files in the container.
- The container structure shows existing directories (config/, dictionary/) with populated files.
- Both targets have the App Group capability enabled and entitlements set.

Current behavior:
- The extension can access and read the App Group container.
- The extension can see existing directories and files.
- All write attempts are blocked with "sandbox deny(1) file-write-create" errors.

Code example:

const char* createSharedGroupPathWithComponent(const char* groupId, const char* component) {
    NSString* groupIdStr = [NSString stringWithUTF8String:groupId];
    NSString* componentStr = [NSString stringWithUTF8String:component];
    NSURL* url = [[NSFileManager defaultManager] containerURLForSecurityApplicationGroupIdentifier:groupIdStr];
    NSURL* fullPath = [url URLByAppendingPathComponent:componentStr];

    NSError *error = nil;
    if (![[NSFileManager defaultManager] createDirectoryAtPath:fullPath.path
                                   withIntermediateDirectories:YES
                                                    attributes:nil
                                                         error:&error]) {
        NSLog(@"Unable to create directory %@", error.localizedDescription);
    }
    return [[fullPath path] UTF8String];
}

Error output:

Sandbox: simaromur-extension(996) deny(1) file-write-create /private/var/mobile/Containers/Shared/AppGroup/36CAFE9C-BD82-43DD-A962-2B4424E60043/trace

Key questions:
- Are there additional entitlements required for TTS Audio Unit Extensions to write to App Group containers?
- Is this a known limitation of TTS Audio Unit Extensions?
- What is the recommended way to handle logging/tracing in TTS Audio Unit Extensions?
- If writing to App Group containers is not supported, what alternatives are available?

Current entitlements:

<dict>
    <key>com.apple.security.application-groups</key>
    <array>
        <string>group.com.<company>.<appname></string>
    </array>
</dict>
Replies: 0 · Boosts: 0 · Views: 110 · Activity: Apr ’25
When to set AVAudioSession's preferredInput?
I want the audio session to always use the built-in microphone. However, when using the setPreferredInput() method as in this example

private func enableBuiltInMic() {
    // Get the shared audio session.
    let session = AVAudioSession.sharedInstance()

    // Find the built-in microphone input.
    guard let availableInputs = session.availableInputs,
          let builtInMicInput = availableInputs.first(where: { $0.portType == .builtInMic }) else {
        print("The device must have a built-in microphone.")
        return
    }

    // Make the built-in microphone input the preferred input.
    do {
        try session.setPreferredInput(builtInMicInput)
    } catch {
        print("Unable to set the built-in mic as the preferred input.")
    }
}

and calling that function once in the initializer, the audio session still switches to the external microphone once one is plugged in. The session's preferredInput is nil again at that point, even though the built-in microphone is still listed in availableInputs. So why is the preferredInput suddenly reset, and when would be the appropriate time to set the preferredInput again? Observing the session's availableInputs did not work, and setting the preferredInput again in the routeChangeNotification handler seems like a bad choice, as it's already a bit too late by then.
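For completeness, a rough sketch of the route-change workaround the post mentions as a last resort: re-asserting the preference when a new input appears. This is an assumption about a possible mitigation, not a confirmed answer to why preferredInput gets reset.

import AVFoundation

// Sketch: re-apply the built-in mic preference whenever a new input device appears.
// This is the "route change" workaround the post already considers a last resort.
func watchForNewInputs() {
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.routeChangeNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main
    ) { notification in
        guard let rawReason = notification.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt,
              let reason = AVAudioSession.RouteChangeReason(rawValue: rawReason),
              reason == .newDeviceAvailable else { return }

        let session = AVAudioSession.sharedInstance()
        if let builtIn = session.availableInputs?.first(where: { $0.portType == .builtInMic }) {
            try? session.setPreferredInput(builtIn)   // re-assert the preference
        }
    }
}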
Replies: 1 · Boosts: 0 · Views: 846 · Activity: Oct ’25
AVAudioSession.outputVolume does not reflect system volume changes made while app is in background
I have a question regarding the behavior of AVAudioSession.sharedInstance().outputVolume.

Observed behavior:
- When the app is in the foreground, I read audioSession.outputVolume (for example, 0.1).
- The app is then moved to the background.
- While the app is in the background, the user changes the system volume using the hardware buttons (for example, to 0.5).
- When the app returns to the foreground, audioSession.outputVolume still reports the previous value (0.1).

From my testing, outputVolume only seems to update when the system volume is changed while the app is in the foreground. Volume changes made while the app is in the background are not reflected when the app returns to the foreground.

Apple's documentation for AVAudioSession.outputVolume describes it as "The systemwide output volume set by the user." (https://developer.apple.com/documentation/avfaudio/avaudiosession/outputvolume) However, based on our testing on iOS 18.6.2 and iOS 18.1, the observed behavior seems to differ from this description.

Questions:
- Is this the expected behavior of AVAudioSession.outputVolume, given that the value does not reflect volume changes made while the app is in the background and only updates while the app is in the foreground?
- Is there any other recommended way in Swift to retrieve the current system volume that reflects user changes made both while the app is in the foreground and while it is in the background?

Any clarification on the intended behavior or recommended handling would be greatly appreciated.
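A small sketch of a pattern worth trying while this is unanswered (an assumption, not a confirmed fix): observe outputVolume via KVO for foreground changes and re-read it when the app becomes active again; whether that re-read picks up background changes is exactly what the post is asking.

import AVFoundation
import UIKit

// Sketch: cache outputVolume, update it via KVO while in the foreground, and
// re-read it when the app becomes active again. Whether that re-read reflects
// changes made in the background is the open question in this thread.
final class VolumeWatcher {
    private(set) var lastKnownVolume = AVAudioSession.sharedInstance().outputVolume
    private var observation: NSKeyValueObservation?

    init() {
        observation = AVAudioSession.sharedInstance().observe(\.outputVolume, options: [.new]) { [weak self] _, change in
            if let volume = change.newValue { self?.lastKnownVolume = volume }
        }
        NotificationCenter.default.addObserver(
            forName: UIApplication.didBecomeActiveNotification,
            object: nil, queue: .main
        ) { [weak self] _ in
            self?.lastKnownVolume = AVAudioSession.sharedInstance().outputVolume
        }
    }
}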
Replies: 0 · Boosts: 0 · Views: 118 · Activity: 2w
Unstable Playlist.Entry.id causes crashes when removing duplicates
When multiple identical songs are added to a playlist, Playlist.Entry.id uses a suffix-based identifier (e.g. songID_0, songID_1, etc.). Removing one entry causes the others to shift, changing their .id values. This leads to diffing errors and collection-view crashes in SwiftUI or UIKit when entries are updated.

Steps to reproduce:
1. Add the same song to a playlist multiple times.
2. Observe the .id.rawValue of the entries (e.g. i.SONGID_0, i.SONGID_1).
3. Remove one entry.
4. Fetch the playlist again and note that the other IDs have shifted.

FB18879062
Replies: 0 · Boosts: 0 · Views: 542 · Activity: Jul ’25
AVAudioSession setActive(true) fails after phone call when app is in background
I’m seeing what appears to be an iOS audio-session issue that occurs only when a phone call happens while the app is in the background.

- API: AVAudioSession, AVAudioRecorder
- Background Modes: Audio enabled (UIBackgroundModes = audio)
- Category: .playAndRecord
- Microphone permission: granted

Expected behavior: if the app is recording audio in the background and a phone call interrupts it:
1. AVAudioSession.interruptionNotification (.began) fires.
2. The call ends.
3. AVAudioSession.interruptionNotification (.ended) fires.
4. The app should be able to re-activate its audio session and resume or restart recording.
Apple documentation suggests this should be supported for background audio apps.

Actual behavior: when the app is in the background and the phone call ends:
- AVAudioSession.interruptionNotification (.ended) does fire.
- Attempting to reactivate the audio session always fails with: Error Domain=NSOSStatusErrorDomain Code=560557684 ("!int") "Session activation failed"
- The session appears to remain permanently "interrupted".
- Retrying activation (with delays) does not help.
- Recreating the AVAudioRecorder does not help.
- Reactivation works only after the app is opened again.
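For reference, a minimal sketch of the interruption-handling flow described above. It is the standard pattern rather than a fix for the "!int" error; it only shows where the failing setActive(true) call sits.

import AVFoundation

// Sketch of the standard interruption-handling pattern described in this post.
// It does not resolve the "!int" activation failure.
func observeInterruptions() {
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.interruptionNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main
    ) { notification in
        guard let info = notification.userInfo,
              let rawType = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }

        switch type {
        case .began:
            // Recording was interrupted by the call; pause or stop here.
            break
        case .ended:
            let rawOptions = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
            let options = AVAudioSession.InterruptionOptions(rawValue: rawOptions)
            if options.contains(.shouldResume) {
                do {
                    try AVAudioSession.sharedInstance().setActive(true)
                    // Restart or resume the AVAudioRecorder here.
                } catch {
                    print("Reactivation failed: \(error)")   // the failure seen in this post
                }
            }
        @unknown default:
            break
        }
    }
}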
Replies: 0 · Boosts: 0 · Views: 132 · Activity: 2w
UIDocumentPickerViewController in Audiounit Extension unable to receive touches
Hello, I have an existing AUv3 instrument plug-in. In the plug-in, users can access files (audio files, song projects) via a UIDocumentPickerViewController. In Logic Pro (and some other hosts, but not all), the document picker is unable to receive touches while a keyboard case is attached to the iPad. Removing the case (an Apple-brand iPad case) allows the interactions to resume and lets me pick files in the usual way. One of my users reports this non-responsive behavior occurs even after disconnecting their keyboard. I have fiddled with entitlements all day and have determined that is not the issue, since disconnecting the keyboard appears to fix it every time for me. Here is my, very boilerplate, presentation code:

guard let type = UTType("com.my.type") else { return }
let fileBrowser = UIDocumentPickerViewController(forOpeningContentTypes: [type])
fileBrowser.overrideUserInterfaceStyle = .dark
fileBrowser.delegate = self
fileBrowser.directoryURL = myFileFolderURL()
self.present(fileBrowser, animated: true)
Replies: 2 · Boosts: 0 · Views: 528 · Activity: Jul ’25
Notification interruptions
My app Balletrax is a music player for people to use while they teach ballet. It used to be that you could silence notifications while the app was in use, but now the customer seems to have to know how to use Focus mode, remember to turn it on and off, and check which notifications they do and don't want to allow. Is there no way to silence all notifications while the app is in use?
Replies: 0 · Boosts: 0 · Views: 83 · Activity: Apr ’25
Different behaviors of USB-C to Headphone Jack Adapters
I bought two "Apple USB-C to Headphone Jack Adapters". Upon closer inspection, they seem to be of different generations. The one with product ID 0x110a is working fine. The one with product ID 0x110b has two issues:
1. There is a short but loud click on the headphones when I connect it to the iPad.
2. When I play audio using AVAudioPlayer, the first half second or so is cut off.

Here's how I'm playing the audio:

audioPlayer = try AVAudioPlayer(contentsOf: url)
audioPlayer?.delegate = self
audioPlayer?.prepareToPlay()
audioPlayer?.play()

Is this a known issue? Am I doing something wrong?
Replies: 0 · Boosts: 0 · Views: 320 · Activity: Jul ’25
Question about Apple Vision Pro audio input sampling rate for research
I am a graduate student conducting research in speech/audio signal processing and multimodal interaction. Apple Vision Pro is widely recognized as a multimodal interactive system supporting voice, eye, and gesture inputs. However, I could not find detailed specifications or documentation about the audio input sampling rate used by the device’s built-in microphone array when capturing user audio. Specifically, I would like to understand:
- What is the default audio input sampling rate (e.g., 16 kHz, 44.1 kHz, 48 kHz, etc.) for the Vision Pro’s microphones?
- When developing with visionOS / AVAudioSession / AVAudioEngine, is there a documented or recommended sampling rate for audio capture?
- Are there any best practices or settings for enabling high-quality voice capture on Vision Pro (especially for voice research tasks)?
For context, my work involves voice processing, analysis, and possibly on-device real-time speech recognition. Any pointers to relevant APIs, documentation, or examples (especially regarding audio capture buffer size or available formats on visionOS) would be very helpful. Thank you in advance! Best regards.
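One way to probe the first question empirically (a sketch only; I have not verified it on Vision Pro hardware, though the same calls work on iOS) is to ask the audio session and the AVAudioEngine input node what format they actually report at runtime.

import AVFoundation

// Sketch: print the sample rate the session and the capture hardware report.
// Untested on Vision Pro; intended as a starting point for measurement.
func reportCaptureFormat() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.record)
        try session.setActive(true)
    } catch {
        print("Session setup failed: \(error)")
        return
    }
    print("Session sample rate: \(session.sampleRate) Hz")
    print("Preferred sample rate: \(session.preferredSampleRate) Hz")

    let engine = AVAudioEngine()
    let format = engine.inputNode.inputFormat(forBus: 0)
    print("Input node format: \(format.sampleRate) Hz, \(format.channelCount) channel(s)")
}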
Replies: 0 · Boosts: 0 · Views: 143 · Activity: 2w