Hi all!
I have been experiencing some issues using AVAudioEngine to play audio and record input during a voice chat (through the PTT framework).
I noticed that if I connect any players to the audio graph, or call start, the audio session becomes active (this is on iOS).
I don't see anything about this in the docs or the AVFoundation header files, but is it possible that calling the stop method on an engine deactivates the audio session too?
In a normal app this behavior seems logical, but when using PTT all activation and deactivation of the audio session must go through the framework and its delegate methods.
The issue I am debugging: when the engine (with the input node tapped) is stopped, and there is a gap between the end of input and the server replying with inbound audio to be played, something seems to get the hardware/audio session into a jammed state.
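In case it helps clarify the question, here is a reduced sketch of what I am doing (names are placeholders, and the engine is assumed to already be configured with players attached and the input node tapped). My working assumption is that engine.stop() may deactivate the session behind PTT's back, so I am considering pause() instead:

    import AVFoundation

    // Reduced sketch of the lifecycle in question.
    final class VoiceChatEngine {
        private let engine = AVAudioEngine()

        func beginTransmitOrPlayback() throws {
            engine.prepare()
            try engine.start()      // observed: this activates the audio session
        }

        func endTransmitOrPlayback() {
            // Open question: does stop() also deactivate the session? If so,
            // pause() might be the safer call while PTT owns session activation.
            engine.pause()          // instead of engine.stop()
            engine.reset()          // drop any scheduled/queued audio (optional)
        }
    }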
Thanks for any feedback and/or confirmation on this behavior!
Hi,
I am trying to remove the audio controls for my app on the lock screen. Since I use WKWebView, there are 3 audio tags in my HTML, and I play and pause them via JS. If I have not played any sound since app launch, there are no audio controls on the lock screen. But as soon as I play one of those 3 files (they are short sound effects of less than 3 seconds, e.g. for buttons), the audio controls appear on the lock screen.
Note: even when the sounds are paused or not playing at all, they are still listed on the lock screen.
What I have tried so far, without success:
MPNowPlayingInfoCenter.default().nowPlayingInfo = [:]
and
try audioSession.setCategory(.playback, mode: .default, options: [])
try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
and
UIApplication.shared.endReceivingRemoteControlEvents()
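For completeness, here are the three attempts above combined into one snippet (assuming audioSession is AVAudioSession.sharedInstance()):

    import AVFoundation
    import MediaPlayer
    import UIKit

    // The three attempts described above, in one place.
    func clearLockScreenAudioControls() {
        // 1. Clear the now-playing info.
        MPNowPlayingInfoCenter.default().nowPlayingInfo = [:]

        // 2. Deactivate the shared audio session.
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(.playback, mode: .default, options: [])
            try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
        } catch {
            print("Audio session error: \(error)")
        }

        // 3. Stop receiving remote control events.
        UIApplication.shared.endReceivingRemoteControlEvents()
    }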
Another problem is that the app scales with the iOS system setting "Display Zoom". Is there a way to prevent that?
I am on the latest Xcode version (16.3) and iOS 18.
I have no background mode in my Capabilities.
Nothing has worked so far. Does anyone have an idea?
Greetings
Hi all, I have spent a lot of time reading the tech note and watching the WWDC video that introduce the PTT framework on iOS. I currently have a custom setup where I am using AVAudioEngine to schedule and play buffers that are being streamed through a call.
I am looking to use the PTTFramework to allow a user to trigger this push to talk behavior from the lock screen and the various places with the system UI it provides.
However, I am unsure what the correct behavior is regarding handling of the audio session. Right now I am using .playback when there is no active voice transmission, so that devices such as AirPods can stay in A2DP mode where applicable, and transitioning to the .playAndRecord category only when the mic input should become active. Following this change, my AVAudioEngine manager manually activates and deactivates the audio session when the engine is either playing/recording or idle.
In the documentation it states that you should not attempt to activate or deactivate your audio session directly, but allow the framework to handle it.
Does that mean I need to either call the request-to-transmit delegate function or set an active participant on the channel manager first, and then wait for the didBecomeActive delegate method to fire before I actually attempt to play or record any audio? (I am using full-duplex mode currently.) I noticed that this delegate method only fires if the audio session wasn't already active before doing one of the above (setting an active participant, requesting transmit).
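To make the ordering question concrete, this is roughly what I am considering (a sketch; I am assuming the PushToTalk delegate methods channelManager(_:didActivate:) and channelManager(_:didDeactivate:) are the right hooks, and the other delegate requirements are omitted):

    import AVFoundation
    import PushToTalk

    // Sketch of the ordering in question; channelManager/audioEngine are assumed properties.
    final class VoiceChatManager {
        var channelManager: PTChannelManager!
        let audioEngine = AVAudioEngine()

        func transmitButtonPressed(channelUUID: UUID) {
            // Ask the framework to transmit; do not activate the session or start the engine here.
            channelManager.requestBeginTransmitting(channelUUID: channelUUID)
        }

        // PTChannelManagerDelegate (other required methods omitted)
        func channelManager(_ channelManager: PTChannelManager,
                            didActivate audioSession: AVAudioSession) {
            // The framework has activated the session; is this the correct point
            // to start the engine for playback/recording?
            try? audioEngine.start()
        }

        func channelManager(_ channelManager: PTChannelManager,
                            didDeactivate audioSession: AVAudioSession) {
            audioEngine.stop()
        }
    }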
Lastly, the PTT framework also mentions that we get support for PTT devices, and I notice the didBeginTransmittingFrom delegate parameter has a handsfreeButton case. Is there any documentation or resource on what is actually supported out of the box? I am currently handling a lot of the push-to-talk interaction over Bluetooth LE, and want to make sure there isn't overlap with what the system provides.
Thank you!
In MusicKit Web the playback states are provided as numbers.
For example the playbackStateDidChange event listener will return:
{oldState: 2, state: 3, item:...}
When the state changes from playing (2) to paused (3).
Those are pretty easy to guess, but I'm having a hard time with some of the others: completed, ended, loading, none, paused, playing, seeking, stalled, stopped, waiting.
I cannot find a mapping of states to numbers documented anywhere. I got the above states from an enum in a d.ts file that is often incorrect/incomplete.
Can someone help out pointing to the docs or provide a mapping?
Thanks.
I am using an AVAudioPlayer to play a "tick" sound once per second in a SwiftUI app.
When running the app on an iPhone 16 (18.2.1) the tick sounds increase in volume after a few seconds. This does not happen in the simulator nor on an iPhone SE 2020 (18.1.1).
I created a virtual audio device to capture system audio with a sample rate of 44.1 kHz. After capturing the audio, I forward it to the hardware sound card using AVAudioEngine, also with a sample rate of 44.1 kHz. However, due to the clock sources being unsynchronized, problems occur after a period of playback. How can I retrieve the clock source of the hardware device and set it for the virtual device?
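For reference, this is how I am reading the clock domain of the hardware device (a sketch; whether propagating the same domain to the virtual device, presumably from the driver side, is the right fix is exactly what I am unsure about):

    import CoreAudio

    // Sketch: read kAudioDevicePropertyClockDomain for a device. Devices that
    // report the same non-zero domain are driven by the same clock source.
    func clockDomain(of deviceID: AudioObjectID) -> UInt32? {
        var address = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyClockDomain,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)
        var domain: UInt32 = 0
        var size = UInt32(MemoryLayout<UInt32>.size)
        let status = AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &domain)
        return status == noErr ? domain : nil
    }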
I am trying to use AVAudioEngine for recording and playback in a voice-chat style app, but when the speaker plays audio while recording, the recording picks up the speaker output as input. I want to filter that out. Are there any suggestions for the Swift code?
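One direction that might help (a sketch, not a confirmed fix): enabling Apple's voice processing on the engine's I/O applies echo cancellation, so the input tap should no longer hear what the device itself is playing. My understanding (an assumption) is that enabling it on the input node covers the shared I/O unit and should be done before the engine starts.

    import AVFoundation

    // Sketch: enable the voice-processing (echo-cancelling) I/O with AVAudioEngine.
    func configureVoiceChatEngine(_ engine: AVAudioEngine) throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker])
        try session.setActive(true)

        // Available since iOS 13; call before starting the engine.
        try engine.inputNode.setVoiceProcessingEnabled(true)
    }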
Hi,
I am creating an app whose data can include videos or images. While
@Attribute(.externalStorage)
helps with images, for AVAssets I would actually like access to the URL behind that data (it would be wasteful to load the data and then save it again just to get a URL).
One key component is to keep all of this clean enough so that I can use (private) CloudKit syncing with the resulting model.
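The pattern I am currently leaning towards (an assumption on my part, not something from the SwiftData docs) is to write the video file into the app's container myself and store only its file name in the model, so an AVAsset gets a real URL; the obvious downside is that the file itself is then not synced by CloudKit:

    import Foundation
    import AVFoundation

    // Sketch: keep media files in the app container, store only the file name in the model.
    struct VideoStore {
        static var directory: URL {
            let base = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
            let dir = base.appendingPathComponent("Videos", isDirectory: true)
            try? FileManager.default.createDirectory(at: dir, withIntermediateDirectories: true)
            return dir
        }

        static func save(_ data: Data) throws -> String {
            let fileName = UUID().uuidString + ".mov"
            try data.write(to: directory.appendingPathComponent(fileName))
            return fileName          // store this string in the SwiftData model
        }

        static func asset(forFileName fileName: String) -> AVURLAsset {
            AVURLAsset(url: directory.appendingPathComponent(fileName))
        }
    }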
All the best
Christoph
How does a third party developer go about supporting the new Enhanced Dialogue option for video apps in tvOS 18?
If an app is using the standard AVPlayerViewController, I had assumed it would be a simple-ish matter of building against the tvOS 18 SDK, but apparently not: the options don't appear, not even greyed out.
Hi I'm new to the forum,
I'm planning an app just for Apple Watch. I would like to use Bluetooth audio in the background; how can I do that?
The messages I send via Bluetooth stop as soon as the watch display turns off.
Thank you!
Nax
Hi,
On macOS I used to open MP3 and MP4 files with ExtAudioFile. For a few years now it hasn't worked anymore.
So I decided to try a different macOS API: AudioFileID from the AudioToolbox framework.
I decided to write a test:
https://gist.github.com/joelkraehemann/7f5b241b52ca38c3a765c138fb647588
It fails right here:
AudioFileOpenWithCallbacks()
returning OSStatus error 1954115647, which means kAudioFileUnsupportedFileTypeError.
The filename was set to an MP4 file:
~/Music/test.mp4
How do I fix this?
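One thing I have not tried yet (just an assumption on my part): passing an explicit file type hint instead of 0, for example via AudioFileOpenURL, in case the callbacks-based open cannot infer the container type. A Swift sketch of that idea:

    import AudioToolbox
    import Foundation

    // Sketch: open the file with an explicit type hint instead of relying on inference.
    func openMP4(at url: URL) -> AudioFileID? {
        var fileID: AudioFileID?
        let status = AudioFileOpenURL(url as CFURL,
                                      .readPermission,
                                      kAudioFileMPEG4Type,   // try kAudioFileM4AType as well
                                      &fileID)
        guard status == noErr else {
            print("AudioFileOpenURL failed: \(status)")
            return nil
        }
        return fileID
    }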
regards, Joël
There appears to be no way to go forward or backward in the Get Info window in the Music application.
private var audioEngine = AVAudioEngine()
private var inputNode: AVAudioInputNode!

func startAnalyzing() {
    inputNode = audioEngine.inputNode
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    let hardwareSampleRate = recordingSession.sampleRate   // recordingSession: AVAudioSession (defined elsewhere)
    inputNode.removeTap(onBus: 0)

    if recordingFormat.sampleRate != hardwareSampleRate {
        print("Sample rate mismatch; tapping with the hardware sample rate instead.")
        let newFormat = AVAudioFormat(commonFormat: recordingFormat.commonFormat,
                                      sampleRate: hardwareSampleRate,
                                      channels: recordingFormat.channelCount,
                                      interleaved: recordingFormat.isInterleaved)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: newFormat) { buffer, time in
            self.processAudioBuffer(buffer, time: time)
        }
    } else {
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, time in
            self.processAudioBuffer(buffer, time: time)
        }
    }

    do {
        audioEngine.prepare()
        try audioEngine.start()
    } catch {
        print("Failed to start the audio engine: \(error)")
    }
}
I send the app to the background and then call startAnalyzing(), which reports an error even though the background recording permissions are configured.
error:
[10429:570139] [aurioc] AURemoteIO.cpp:1668 AUIOClient_StartIO failed (561145187)
[10429:570139] [avae] AVAEInternal.h:109 [AVAudioEngineGraph.mm:1545:Start: (err = PerformCommand(*ioNode, kAUStartIO, NULL, 0)): error 561145187
Audio engine couldn't start.
Is starting the engine from the background not allowed?
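For reference, 561145187 is the FourCC '!rec' (recording not allowed). My current assumption, which I would like confirmed, is that the session has to be configured for recording and activated while the app is still in the foreground (with the Audio background mode enabled); a minimal sketch of that setup:

    import AVFoundation

    // Sketch: configure and activate the session for recording before the app is
    // backgrounded (requires the "Audio, AirPlay, and Picture in Picture"
    // background mode). Whether this alone is sufficient is my open question.
    func prepareRecordingSession() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .default, options: [.mixWithOthers])
        try session.setActive(true)
    }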
Feature Request: Long-Lived Access to Personal Apple Music Data
Use Case Summary
I'm developing a personal portfolio website (using Nuxt) and want to display information from my own Apple Music library - showcasing personal playlists, recently played tracks, or a read-only "now playing" widget. This is purely for personal use on my website and doesn't require other users to log in.
With Spotify's API, implementing this was straightforward thanks to automatic token refresh. I want a similarly seamless integration with Apple Music.
Challenge with MusicKit and Music User Tokens
Apple Music API requirements
Apple's Music API requires a valid Music User Token (MUT) for requests involving personal library data. Beyond the Apple Developer Token, you must obtain a user-specific token via MusicKit authentication to access your own library playlists, play history, or current playback status.
Token expiration and manual renewal
Music User Tokens expire after approximately 6 months without any mechanism to automatically refresh or renew them - unlike typical OAuth flows that provide refresh tokens. Apple's guidance suggests the device (e.g., iPhone) is responsible for obtaining new user tokens when old ones expire. This works for interactive apps on Apple devices but fails in server-side or long-lived web contexts like a personal website widget.
Impact on personal projects
Displaying Apple Music data on a public-facing site becomes difficult. I would need to periodically re-authenticate through the MusicKit JS flow every few months just to keep a widget alive. Embedding credentials in a public site is insecure, and manual token refreshing is cumbersome and easy to forget.
Comparison to Spotify's Token Model
Spotify's API offers a developer-friendly authentication model. Their OAuth flow provides a Refresh Token that applications can use to obtain new access tokens automatically without requiring user re-authorization. This means a personal app can maintain continuous access to a user's Spotify data for extended periods until access is revoked.
When building a similar feature with Spotify, this automatic token renewal was crucial. I could safely store the refresh token on my server and have my app periodically update the access token. Many developers have created public-facing widgets showing currently playing tracks on blogs or GitHub profiles using this model. Unfortunately, Apple Music's API lacks an equivalent capability, putting it at a disadvantage for personal projects.
Proposed Solutions
I request Apple's consideration for one of these enhancements:
Provide a mechanism to refresh or extend a Music User Token programmatically for server-side applications. This could be an OAuth-style refresh token issued alongside the MUT, or a dedicated endpoint to exchange an expired MUT for a new one. This would enable renewal without a full user re-auth/login each time.
Allow developers to access their own Apple Music library data with just the long-lived Developer Token. Apple could permit GET requests to personal library endpoints using the Developer Token alone, or a special token tied to the developer's Apple ID. This access would be read-only - no ability to modify the library, purely for retrieving data. It could be an opt-in feature in the Apple Developer account settings.
Either solution would significantly improve the developer experience for Apple Music API in personal projects.
Security and Privacy Considerations
This request is not about accessing others' data or creating privacy loopholes - it's about empowering an Apple Music subscriber to access their own information more conveniently. The proposed options respect privacy principles:
The data accessed is only what the user already has access to - their own playlists, library items, or playback status.
An automatic token refresh can be designed securely (revocable tokens bound to a single account with no increase in permissions).
Read-only developer token access could be restricted to non-sensitive data and require explicit opt-in.
Conclusion
I request an improvement to Apple Music's developer experience through either (1) an automatic Music User Token refresh mechanism, or (2) a provision for read-only personal library access using a Developer Token. This would bring Apple Music integration capabilities closer to parity with services like Spotify for personal projects.
I ask Apple's Developer Relations and the Apple Music API team to consider this feature request. If there are existing best practices or workarounds with current APIs, I would appreciate guidance.
I invite feedback from Apple or other developers. Are there known patterns for maintaining an Apple Music user token for server-side applications, or any plans to support non-interactive use cases? Any advice is welcome.
Thank you for your consideration. I look forward to integrating Apple Music into my personal site as smoothly as with other services, and believe many developers would benefit from this added flexibility.
Sources:
User Authentication for MusicKit - Requirements for Music User Tokens
StackOverflow: Do Apple Music User Tokens expire? - Confirmation of 6-month expiration
MetaBrainz GSoC Blog - Documentation of MusicKit authentication limitations
Apple Developer Forums - Information on token renewal behavior
Spotify for Developers - Documentation on refresh token mechanism
Songs can be unavailable (greyed out) in Apple Music. How can I check if a song is unavailable via the MusicKit framework? Obviously the playback will fail with MPMusicPlayerControllerErrorDomain Code=6 "Failed to prepare to play" but how can I know that in advance? I need to check the availability of hundreds of albums and therefore initiating a playback for each of them is not an option.
Things I have tried:
Checking if the release date property is set to a future date. This filters out all future releases but doesn't solve the problem for already released songs.
Checking if the duration is 0. This does not work since the duration of unavailable songs does not have to be 0.
Initiating a playback and checking for the "Failed to prepare to play" error. This is not practical for a huge number of albums.
I couldn't find a solution yet, but somehow other third-party apps are able to ignore / not show these albums. I believe the Apple Music app only displays albums where at least one song is available.
I am using this function to fetch all albums of an artist.
private func fetchAlbumsFor(_ artist: Artist) async throws -> [Album] {
    let artistWithAlbums = try await artist.with(.albums)
    var allAlbums = [Album]()
    guard var currentBatch = artistWithAlbums.albums else {
        return []
    }
    allAlbums.append(contentsOf: currentBatch)
    while currentBatch.hasNextBatch {
        if let nextBatch = try await currentBatch.nextBatch() {
            currentBatch = nextBatch
            allAlbums.append(contentsOf: nextBatch)
        } else {
            break
        }
    }
    return allAlbums
}
Here is an example album where I am unable to detect its unavailability (at least in Germany):
https://music.apple.com/de/album/die-haferhorde-immer-den-n%C3%BCstern-nach-h%C3%B6rspiel-zu-band-3/1755774804
Furthermore I was unable to navigate to this album via the Apple Music app directly.
Thanks for any help
Edit: Apparently this album is not included in an Apple Music subscription but can be bought separately. The question remains: how can I check that?
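One heuristic I am considering (an assumption on my part, not documented behavior): treat an album as unavailable for subscription playback when none of its tracks carry playParameters, since playParameters appear to be what MusicKit needs to start playback.

    import MusicKit

    // Heuristic sketch: an album whose tracks all lack playParameters is probably
    // not playable with an Apple Music subscription (e.g. purchase-only content).
    private func albumLooksPlayable(_ album: Album) async throws -> Bool {
        let detailed = try await album.with(.tracks)
        guard let tracks = detailed.tracks else { return false }
        return tracks.contains { $0.playParameters != nil }
    }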
Hi,
I am looking for a good way to play sounds at a high frequency.
At the moment I am using AVAudioEngine: I create a couple of AVAudioPlayerNodes, and for each sound I need to play I create an AVAudioPCMBuffer.
When the app needs to play a sound, I get the correct AVAudioPCMBuffer for it, take the first available AVAudioPlayerNode, and feed it the buffer.
The timing for a metronome app has to be very precise, because if it's off by around 16 ms the user can hear that it is not playing at the right interval. At low speeds this works without any problems, but at high speeds it gets worse.
Maybe someone has an idea of how I can improve my method.
It's a plugin for Flutter.
import AVFoundation

class FastSoundPlayer {
    private var audioPlayers: [SoundPlayer?] = []
    private var sounds: [String: Sound] = [:]
    private var engine = AVAudioEngine()
    let session = AVAudioSession.sharedInstance()

    init() {
        do {
            try session.setCategory(AVAudioSession.Category.playback, mode: AVAudioSession.Mode.default, options: [AVAudioSession.CategoryOptions.mixWithOthers])
            try session.setActive(true)
            createSoundPlayers(count: 20)
            try engine.start()
        } catch {
            print("Error starting audio engine: \(error.localizedDescription)")
        }
    }

    // Selector method to handle applicationDidBecomeActiveNotification
    func applicationDidBecomeActive() {
        // Reinitialize AVAudioEngine and reattach all nodes
        do {
            engine.reset()
            objc_sync_enter(audioPlayers)
            audioPlayers.removeAll()
            createSoundPlayers(count: 20)
            objc_sync_exit(audioPlayers)
            try engine.start()
        } catch {
            print("Error starting audio engine: \(error.localizedDescription)")
        }
    }

    func createSoundPlayers(count: Int) {
        for _ in 0..<count {
            let player = SoundPlayer()
            engine.attach(player.player)
            engine.connect(player.player, to: engine.mainMixerNode, format: nil)
            audioPlayers.append(player)
        }
    }

    func load(sound: Data, name: String) {
        let sound = Sound(soundData: sound)
        sounds[name] = sound
    }

    func play(name: String) {
        if !engine.isRunning {
            applicationDidBecomeActive()
        }
        guard let sound = sounds[name] else {
            print("Sound not found")
            return
        }
        if let player = getAvailablePlayer() {
            player.play(sound: sound)
        }
    }

    func getAvailablePlayer() -> SoundPlayer? {
        for player in audioPlayers {
            if !player!.isPlaying {
                return player
            }
        }
        return nil
    }
}

class SoundPlayer {
    let player = AVAudioPlayerNode()
    var isPlaying = false

    init() {
        player.volume = 1.0
    }

    func play(sound: Sound) {
        player.scheduleBuffer(sound.sound!, at: nil, options: .interrupts, completionCallbackType: .dataPlayedBack) { _ in
            self.complete()
        }
        if (player.engine != nil && player.engine!.isRunning) {
            player.play()
            isPlaying = true
        }
    }

    func complete() {
        isPlaying = false
    }
}

class Sound {
    var sound: AVAudioPCMBuffer?

    init(soundData: Data) {
        do {
            let temporaryURL = FileManager.default.temporaryDirectory.appendingPathComponent("tempSound.wav")
            try soundData.write(to: temporaryURL)
            // Create AVAudioFile from the temporary file URL
            let audioFile = try AVAudioFile(forReading: temporaryURL)
            // Define the format for the PCM buffer (44100 Hz, stereo)
            let format = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 44100, channels: 2, interleaved: false)
            // Create AVAudioPCMBuffer
            guard let pcmBuffer = AVAudioPCMBuffer(pcmFormat: format!, frameCapacity: AVAudioFrameCount(audioFile.length)) else {
                // Failed to create PCM buffer
                self.sound = nil
                return
            }
            // Read audio file into PCM buffer
            try audioFile.read(into: pcmBuffer)
            // Assign the created AVAudioPCMBuffer to the sound property
            self.sound = pcmBuffer
        } catch {
            print("Error loading sound file: \(error.localizedDescription)")
            self.sound = nil
        }
    }
}
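One idea for tightening the timing at high tempos (a sketch based on assumptions about the setup above, untested): schedule each tick ahead of time at an explicit sample time via AVAudioTime, instead of scheduling with at: nil the moment a tick is due, so the interval no longer depends on when the scheduling code happens to run.

    import AVFoundation

    // Sketch: schedule ticks at explicit sample times so playback timing does not
    // depend on when play()/scheduleBuffer() happens to execute on the CPU.
    // Assumes `player` is attached and connected, the engine is running, and `tick`
    // is an AVAudioPCMBuffer at the output sample rate.
    func scheduleTicks(on player: AVAudioPlayerNode, tick: AVAudioPCMBuffer, bpm: Double, count: Int) {
        let sampleRate = tick.format.sampleRate
        let framesPerBeat = AVAudioFramePosition(sampleRate * 60.0 / bpm)

        player.play()
        guard let nodeTime = player.lastRenderTime,
              let playerTime = player.playerTime(forNodeTime: nodeTime) else { return }
        // Start slightly in the future so the first tick is not late.
        let start = playerTime.sampleTime + AVAudioFramePosition(sampleRate * 0.1)

        for i in 0..<count {
            let when = AVAudioTime(sampleTime: start + AVAudioFramePosition(i) * framesPerBeat,
                                   atRate: sampleRate)
            player.scheduleBuffer(tick, at: when, options: [], completionHandler: nil)
        }
    }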
Thanks!
Hi there!
We have a suite of AudioUnit v2 plugins that have been shipped for some time as aufx plugins, and we are looking into MIDI-related platform upgrades, so we need a way to update these plugins to request MIDI from Logic (and other AU hosts) but avoid changing our AU type and subtype so we don't break existing sessions. Any ideas on how we can do this?
Hello All,
It seems that it's "very easy" (😬) to implement a little Swift code inside the prepared AU using Xcode 16.2 on Sequoia 15.1.1 and a Mac Studio M1 Ultra, but my issue is that I simply don't know... where.
The documentation says that I have to find the AudioUnitViewController.swift file and then modify the render block:
audioUnit.renderBlock = { (numFrames, ioData) in
// Process audio here
}
in the automatically generated Xcode project, but I can't find such a file...
If somebody can show me which file needs to be modified, I'll be very grateful!
Thank you very much.
J
I am trying to stream audio from local filesystem.
For that, I am trying to use an AVAssetResourceLoaderDelegate for an AVURLAsset. However, Content-Length is not known at the start. To overcome this, I tried several methods:
Set content length as nil, in the AVAssetResourceLoadingContentInformationRequest
Set content length to -1, in the ContentInformationRequest
Both of these cause the AVPlayerItem to fail with an error.
I also tried setting Content-Length as INT_MAX, and setting a renewalDate = Date(timeIntervalSinceNow: 5). However, that seems to be buggy. Even after updating the Content-Length to the correct value (e.g. X bytes) and finishing that loading request, the resource loader keeps getting requests with requestedOffset = X with dataRequest.requestsAllDataToEndOfResource = true. These requests keep coming indefinitely, and as a result it seems that the next item in the queue does not get played. Also, .AVPlayerItemDidPlayToEndTime notification does not get called.
I wanted to check whether this is expected behavior or whether there is a bug in this implementation. Also, what is the recommended way to stream audio of unknown initial length from the local file system?
Thanks!
Good day, ladies and gents.
I have an application that reads audio from the microphone. I'd like it to also be able to read from the Mac's audio output stream. (A bonus would be if it could detect when the Mac is playing music.)
I'd eventually be able to figure it out reading docs, but if someone can give a hint, I'd be very grateful, and would owe you the libation of your choice.
Here's the code used to set up the AudioUnit:
-(NSString*) configureAU
{
    AudioComponent component = NULL;
    AudioComponentDescription description;
    OSStatus err = noErr;
    UInt32 param;
    AURenderCallbackStruct callback;

    if( audioUnit ) { AudioComponentInstanceDispose( audioUnit ); audioUnit = NULL; }   // was CloseComponent

    // Open the AudioOutputUnit
    description.componentType = kAudioUnitType_Output;
    description.componentSubType = kAudioUnitSubType_HALOutput;
    description.componentManufacturer = kAudioUnitManufacturer_Apple;
    description.componentFlags = 0;
    description.componentFlagsMask = 0;
    if( component = AudioComponentFindNext( NULL, &description ) )
    {
        err = AudioComponentInstanceNew( component, &audioUnit );
        if( err != noErr ) { audioUnit = NULL; return [NSString stringWithFormat: @"Couldn't open AudioUnit component (ID=%d)", err]; }
    }

    // Configure the AudioOutputUnit:
    // You must enable the Audio Unit (AUHAL) for input and output for the same device.
    // When using AudioUnitSetProperty the 4th parameter in the method refers to an AudioUnitElement.
    // When using an AudioOutputUnit for input the element will be '1' and the output element will be '0'.
    param = 1;   // Enable input on the AUHAL
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &param, sizeof(UInt32) );
    chkerr("Couldn't set first EnableIO prop (enable input) (ID=%d)");
    param = 0;   // Disable output on the AUHAL
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 0, &param, sizeof(UInt32) );
    chkerr("Couldn't set second EnableIO property on the audio unit (disable output) (ID=%d)");

    param = sizeof(AudioDeviceID);   // Select the default input device
    AudioObjectPropertyAddress OutputAddr = { kAudioHardwarePropertyDefaultInputDevice, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster };
    err = AudioObjectGetPropertyData( kAudioObjectSystemObject, &OutputAddr, 0, NULL, &param, &inputDeviceID );
    chkerr("Couldn't get default input device (ID=%d)");

    // Set the current device to the default input unit
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_CurrentDevice, kAudioUnitScope_Global, 0, &inputDeviceID, sizeof(AudioDeviceID) );
    chkerr("Failed to hook up input device to our AudioUnit (ID=%d)");

    callback.inputProc = AudioInputProc;   // Setup render callback, to be called when the AUHAL has input data
    callback.inputProcRefCon = self;
    err = AudioUnitSetProperty( audioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Global, 0, &callback, sizeof(AURenderCallbackStruct) );
    chkerr("Could not install render callback on our AudioUnit (ID=%d)");

    param = sizeof(AudioStreamBasicDescription);   // Get the hardware device format
    err = AudioUnitGetProperty( audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 1, &deviceFormat, &param );
    chkerr("Could not get the hardware stream format (ID=%d)");

    audioChannels = MAX( deviceFormat.mChannelsPerFrame, 2 );   // Twiddle the format to our liking
    actualOutputFormat.mChannelsPerFrame = audioChannels;
    actualOutputFormat.mSampleRate = deviceFormat.mSampleRate;
    actualOutputFormat.mFormatID = kAudioFormatLinearPCM;
    actualOutputFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
    if( actualOutputFormat.mFormatID == kAudioFormatLinearPCM && audioChannels == 1 )
        actualOutputFormat.mFormatFlags &= ~kLinearPCMFormatFlagIsNonInterleaved;
#if __BIG_ENDIAN__
    actualOutputFormat.mFormatFlags |= kAudioFormatFlagIsBigEndian;
#endif
    actualOutputFormat.mBitsPerChannel = sizeof(Float32) * 8;
    actualOutputFormat.mBytesPerFrame = actualOutputFormat.mBitsPerChannel / 8;
    actualOutputFormat.mFramesPerPacket = 1;
    actualOutputFormat.mBytesPerPacket = actualOutputFormat.mBytesPerFrame;

    // Set the AudioOutputUnit output data format
    err = AudioUnitSetProperty( audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &actualOutputFormat, sizeof(AudioStreamBasicDescription) );
    chkerr("Could not change the stream format of the output device (ID=%d)");

    param = sizeof(UInt32);   // Get the number of frames in the IO buffer(s)
    err = AudioUnitGetProperty( audioUnit, kAudioDevicePropertyBufferFrameSize, kAudioUnitScope_Global, 0, &audioSamples, &param );
    chkerr("Could not determine audio sample size (ID=%d)");

    err = AudioUnitInitialize( audioUnit );   // Initialize the AU
    chkerr("Could not initialize the AudioUnit (ID=%d)");

    // Allocate our audio buffers
    audioBuffer = [self allocateAudioBufferListWithNumChannels: actualOutputFormat.mChannelsPerFrame size: audioSamples * actualOutputFormat.mBytesPerFrame];
    if( audioBuffer == NULL ) { [self cleanUp]; return [NSString stringWithFormat: @"Could not allocate buffers for recording (ID=%d)", err]; }

    return nil;
}
(...again, it would be nice to know if audio output is active and thereby choose the clean output stream over the noisy mic, but that would be a different chunk of code, and my main question may just be a quick edit to this chunk.)
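Regarding that bonus question, here is what I am considering trying to detect whether the Mac is currently playing audio (written in Swift for brevity; an assumption on my part, not verified): query kAudioDevicePropertyDeviceIsRunningSomewhere on the default output device.

    import CoreAudio

    // Sketch: returns true if any process is currently running IO on the default
    // output device (i.e. the Mac is playing audio through it).
    func defaultOutputIsRunningSomewhere() -> Bool {
        var defaultAddr = AudioObjectPropertyAddress(
            mSelector: kAudioHardwarePropertyDefaultOutputDevice,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)
        var deviceID = AudioObjectID(kAudioObjectUnknown)
        var size = UInt32(MemoryLayout<AudioObjectID>.size)
        guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                         &defaultAddr, 0, nil, &size, &deviceID) == noErr else { return false }

        var runningAddr = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyDeviceIsRunningSomewhere,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain)
        var isRunning: UInt32 = 0
        size = UInt32(MemoryLayout<UInt32>.size)
        guard AudioObjectGetPropertyData(deviceID, &runningAddr, 0, nil, &size, &isRunning) == noErr else { return false }
        return isRunning != 0
    }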
Thanks for your attention! ==Dave
[p.s. If I get more than one useful answer, can I "Accept" more than one, to spread the credit around?]
{pps: of course, the code lines up prettier in a monospaced font!}