I'm implementing the PushToTalk framework and have encountered an issue where channelManager(_:didActivate:) is not called under specific circumstances.
What works:
App is in foreground, receives PTT push → didActivate is called ✅
App receives audio in foreground, then is backgrounded → subsequent pushes trigger didActivate ✅
What doesn't work:
App is launched, user joins channel, then immediately backgrounds
PTT push arrives while app is backgrounded
incomingPushResult is called, I return .activeRemoteParticipant(participant)
The system UI shows the speaker name correctly
However, didActivate is never called
Audio data arrives via WebSocket but cannot be played (no audio session)
Setup:
Channel joined successfully before backgrounding
UIBackgroundModes includes push-to-talk
No manual audio session activation (setActive) anywhere in my code
AVAudioEngine setup only happens inside didActivate delegate method
Issue persists even after channel restoration via channelDescriptor(restoredChannelUUID:)
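For reference, a simplified sketch of the delegate flow described above (participant(for:) and startPlaybackEngine() are placeholders for my own code):

import PushToTalk

// Simplified sketch of my delegate flow; participant(for:) and
// startPlaybackEngine() are placeholders for my own code.
func incomingPushResult(channelManager: PTChannelManager,
                        channelUUID: UUID,
                        pushPayload: [String: Any]) -> PTPushResult {
    // Called while backgrounded; the system UI does show this speaker's name.
    let participant = participant(for: pushPayload)
    return .activeRemoteParticipant(participant)
}

func channelManager(_ channelManager: PTChannelManager,
                    didActivate audioSession: AVAudioSession) {
    // This is the only place I set up AVAudioEngine, and it never runs
    // in the broken scenario.
    startPlaybackEngine()
}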
Question:
Is this expected behavior or a bug? If expected, what's the correct approach to handle
incoming PTT audio when the app is backgrounded and hasn't received audio while in the
foreground yet?
AVAudioSession
Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.
Posts under AVAudioSession tag
I am facing a severe audio routing instability issue when using CallKit and the Zego Express SDK on iOS. The problem is that the audio route immediately reverts from the Speaker back to the Earpiece, effectively disabling the Speaker button functionality.

📝 Observed Behavior
When the user taps the native CallKit Speaker Button, the audio route is correctly changed to the Speaker, but then instantly flips back to the Receiver (Earpiece), as shown in the system log captured via AVAudioSession.routeChangeNotification monitoring.

🧾 Log Evidence (Flapping Occurs in 0.4 Seconds)
The following log snippet clearly illustrates the system overriding the user's action (reason: 4) with an unexpected CategoryChange (reason: 3) event:
Timestamp    | Component        | Reason Code | Description                      | Route    | IsSpeaker
16:31:18.009 | [CallKitManager] | 4           | Override (CallKit/ControlCenter) | Speaker  | true
16:31:18.411 | [CallKitManager] | 3           | CategoryChange                   | Receiver | false
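For context, the route monitoring behind that log is roughly the following (a simplified sketch; my real logging format differs slightly):

import AVFoundation

// Simplified sketch of the route-change monitoring that produced the log above.
NotificationCenter.default.addObserver(forName: AVAudioSession.routeChangeNotification,
                                       object: nil,
                                       queue: .main) { notification in
    guard let reasonValue = notification.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt,
          let reason = AVAudioSession.RouteChangeReason(rawValue: reasonValue) else { return }

    let outputs = AVAudioSession.sharedInstance().currentRoute.outputs
    let isSpeaker = outputs.contains { $0.portType == .builtInSpeaker }
    print("[CallKitManager] reason: \(reason.rawValue), isSpeaker: \(isSpeaker)")
}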
My app utilizes background audio to play music files. I have the audio background mode enabled and I initialize the AVAudioSession in playback mode with the mixWithOthers option. And it usually works great while the app is backgrounded. I listen for audio interruptions as well as route changes and I am able to handle them appropriately and I can usually resume my background audio no problem.
I discovered an issue while connected to CarPlay though. Roughly 50% of the time when I disconnect from a phone call while connected to CarPlay I get the following error after calling the play() method of my AVAudioPlayer instance:
"ATAudioSessionClientImpl.mm:281 activation failed. status = 561015905"
If I instead try to start a new audio session I get a similar error:
Error Domain=NSOSStatusErrorDomain Code=561015905 "Session activation failed" UserInfo={NSLocalizedDescription=Session activation failed}
Like I said, this isn't reproducible 100% of the time and is so far only seen while connected to CarPlay. I don't think I'm forgetting any additional capability or plist setting, but if anyone has any clues it would be greatly appreciated. Otherwise this is likely just a bug that I need to report to Apple.
One very important note, and reason I believe it's just a bug, is that while I was testing I found that other music apps like Spotify will also fail to resume their audio at the same time my app fails.
Another important detail is that when it works successfully I receive the audio session interruption-ended notification, and when it doesn't work I only receive a route configuration change or route override notification. From there I am still successfully granted background time to execute code, but my call to resume audio fails with the error codes mentioned above.
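For reference, my interruption handling is roughly the following (simplified; player stands in for my AVAudioPlayer instance):

import AVFoundation

// Simplified sketch of my interruption handling; `player` is my AVAudioPlayer.
NotificationCenter.default.addObserver(forName: AVAudioSession.interruptionNotification,
                                       object: AVAudioSession.sharedInstance(),
                                       queue: .main) { notification in
    guard let typeValue = notification.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue),
          type == .ended else { return }

    let optionsValue = notification.userInfo?[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
    if AVAudioSession.InterruptionOptions(rawValue: optionsValue).contains(.shouldResume) {
        // This is the play() call that fails with 561015905 after a CarPlay phone call.
        player.play()
    }
}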
I can reproduce a bug where CallKit doesn't activate the audio session after an outgoing call is put on hold because of an incoming call.
VoIP calling with CallKit
Steps to reproduce:
Download the Speakerbox example app from the link above and run it with Xcode
Start a new outgoing call
Call your phone from other phone
Hold and Accept the call
After a few secs finish the call from the other phone
The outgoing call will be still on hold
Click on the call and click Toggle Hold
The call won't become active again because the audio session isn't activated.
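For reference, Speakerbox only starts and stops audio from the CXProviderDelegate audio session callbacks, roughly like this (simplified sketch; startAudio()/stopAudio() stand in for the sample's audio helpers):

import CallKit
import AVFoundation

// Simplified sketch of the Speakerbox pattern: audio runs only between
// didActivate and didDeactivate.
func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
    print("Received provider(_:didActivate:)")
    startAudio()   // configure and start the audio unit / engine
}

func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
    print("Received provider(_:didDeactivate:)")
    stopAudio()
}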
Logs:
Received provider(_:didDeactivate:)
Received provider(_:didDeactivate:)
Received provider(_:didDeactivate:)
Received provider(_:didDeactivate:)
Received provider(_:didDeactivate:)
Requested transaction successfully
Starting audio
Type: stdio
AURemoteIO.cpp:1162 failed: 561017449 (enable 3, outf< 1 ch, 44100 Hz, Float32> inf< 1 ch, 44100 Hz, Float32>)
Type: Error | Timestamp: 2024-08-15 12:20:29.949437+02:00 | Process: Speakerbox | Library: libEmbeddedSystemAUs.dylib | Subsystem: com.apple.coreaudio | Category: aurioc | TID: 0x19540d
AVAEInternal.h:109 [AVAudioEngineGraph.mm:1344:Initialize: (err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)): error 561017449
Type: Error | Timestamp: 2024-08-15 12:20:29.949619+02:00 | Process: Speakerbox | Library: AVFAudio | Subsystem: com.apple.avfaudio | Category: avae | TID: 0x19540d
Couldn't start Apple Voice Processing IO: Error Domain=com.apple.coreaudio.avfaudio Code=561017449 "(null)" UserInfo={failed call=err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)}
Type: Notice | Timestamp: 2024-08-15 12:20:29.949730+02:00 | Process: Speakerbox | Library: Speakerbox | TID: 0x19540d
Route change:
Type: Notice | Timestamp: 2024-08-15 12:20:30.167498+02:00 | Process: Speakerbox | Library: Speakerbox | TID: 0x19540d
ReasonUnknown
Type: Notice | Timestamp: 2024-08-15 12:20:30.167549+02:00 | Process: Speakerbox | Library: Speakerbox | TID: 0x19540d
Previous route:
Type: Notice | Timestamp: 2024-08-15 12:20:30.167568+02:00 | Process: Speakerbox | Library: Speakerbox | TID: 0x19540d
<AVAudioSessionRouteDescription: 0x302c00bc0,
inputs = (
"<AVAudioSessionPortDescription: 0x302c01330, type = MicrophoneBuiltIn; name = iPhone Mikrofon; UID = Built-In Microphone; selectedDataSource = (null)>"
);
outputs = (
"<AVAudioSessionPortDescription: 0x302c004d0, type = Receiver; name = Vev\U0151; UID = Built-In Receiver; selectedDataSource = (null)>"
)>
Type: Notice | Timestamp: 2024-08-15 12:20:30.167626+02:00 | Process: Speakerbox | Library: Speakerbox | TID: 0x19540d
Hello,
I am wondering if it is possible to have the audio from my AirPods sent to my speech-to-text service while, at the same time, the built-in mic's audio input is used for recording a video.
I ask because I want my users to be able to say "CAPTURE" so that I start recording a video (with audio from the built-in mic), and then stop the recording when the user says "STOP".
Hi,
We've noticed that this issue occurs more frequently after upgrading to iOS 18.4.1 and can result in one-way audio.
Our app uses CallKit with WebRTC to establish VoIP connections.
However, on iOS 18.4.1, CallKit no longer triggers:
func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession)
We're currently comparing the occurrence rate across different iOS versions to better understand the impact.
Could you please help analyze the root cause of this issue?
I am having an issue with the code that I posted below. I capture voice in my CarPlay app, then allow the user to have it read back to them using AVSpeechUtterance.
This works fine on some cars, but many of my beta testers report no audio being played. I have also experienced this in a rental car where the audio was either too quiet or the audio didn't play.
Does anyone see any issue with the code that I posted? This is for CarPlay specifically.
class CarPlayTextToSpeechService: NSObject, ObservableObject, AVSpeechSynthesizerDelegate {
    private var speechSynthesizer = AVSpeechSynthesizer()
    static let shared = CarPlayTextToSpeechService()

    /// Completion callback
    private var completionCallback: (() -> Void)?

    override init() {
        super.init()
        speechSynthesizer.delegate = self
    }

    func configureAudioSession() {
        do {
            try AVAudioSession.sharedInstance().setCategory(.playback, mode: .voicePrompt, options: [.duckOthers, .interruptSpokenAudioAndMixWithOthers, .allowBluetoothHFP])
        } catch {
            print("Failed to set audio session category: \(error.localizedDescription)")
        }
    }

    public func speak(_ text: String, completion: (() -> Void)? = nil) {
        self.configureAudioSession()
        // Store the completion callback
        self.completionCallback = completion
        Task(priority: .high) {
            let speechUtterance = AVSpeechUtterance(string: text)
            let langCode = Locale.preferredLocalLanguageCountryCode
            if langCode == "en-US" {
                speechUtterance.voice = AVSpeechSynthesisVoice(identifier: AVSpeechSynthesisVoiceIdentifierAlex)
            } else {
                speechUtterance.voice = AVSpeechSynthesisVoice(language: langCode)
            }
            try AVAudioSession.sharedInstance().setActive(true, options: .notifyOthersOnDeactivation)
            speechSynthesizer.speak(speechUtterance)
        }
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        Task {
            stopSpeech()
            try AVAudioSession.sharedInstance().setActive(false)
        }
        // Call completion callback if available
        self.completionCallback?()
        self.completionCallback = nil
    }

    func stopSpeech() {
        speechSynthesizer.stopSpeaking(at: .immediate)
    }
}
My workout watch app supports audio playback during exercise sessions.
When users carry both Apple Watch, iPhone, and AirPods, with AirPods connected to the iPhone, I want to route audio from Apple Watch to AirPods for playback. I've implemented this functionality using the following code.
try? session.setCategory(.playback, mode: .default, policy: .longFormAudio, options: [])
try await session.activate()
When users are playing music on iPhone and trigger my code in the watch app, Apple Watch correctly guides users to select AirPods, pauses the iPhone's music, and plays my audio.
However, when playback finishes and I end the session using the code below:
try session.setActive(false, options:[.notifyOthersOnDeactivation])
the iPhone doesn't automatically resume the previously interrupted music playback; it requires manual intervention.
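For completeness, the full sequence on the watch side is roughly this (a condensed sketch of the snippets above):

import AVFoundation

// Condensed sketch of the watch-side session handling described above.
let session = AVAudioSession.sharedInstance()

func startPlayback() async throws {
    try session.setCategory(.playback, mode: .default, policy: .longFormAudio, options: [])
    // On watchOS this prompts the user to pick a Bluetooth route (e.g. AirPods)
    // and interrupts the iPhone's music.
    try await session.activate()
    // ... start my player here ...
}

func stopPlayback() throws {
    // I expected this to let the iPhone resume its interrupted music, but it doesn't.
    try session.setActive(false, options: [.notifyOthersOnDeactivation])
}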
Is this expected behavior, or am I missing other important steps in my code?
Hello! Thank you for bringing the new iPhone experience with the PushToTalk framework.
I have a working walkie talkie app based on the PushToTalk framework. Everything works fine except for an intermittent bug that I face from time to time on different devices with different iOS versions, from iOS 18 to iOS 26.2 Beta.
Sometimes the app goes into a state where the AVAudioInputNode input node tap returns buffers with a constant size that contain only silence. Leaving and rejoining a channel helps, but relaunching or reinstalling (from Xcode) the app does not. Rebooting the device or deleting and reinstalling the app also helps.
I do not activate the audio session in my app. I only configure it on launch using
setCategory(.playAndRecord, options: [.defaultToSpeaker, .allowBluetooth])
So the flow is:
channelManager?.requestBeginTransmitting(channelUUID: globalChannelUUID)

func channelManager(
    _ channelManager: PTChannelManager,
    channelUUID: UUID,
    didBeginTransmittingFrom source: PTChannelTransmitRequestSource
)

func channelManager(
    _ channelManager: PTChannelManager,
    didActivate audioSession: AVAudioSession
) {
    /// ...
    installTapAndStart()
}

private func installTapAndStart() {
    let inputNode = audioEngine.inputNode
    let hardwareFormat = inputNode.outputFormat(forBus: 0)

    guard let targetFormat = AVAudioFormat(
        commonFormat: .pcmFormatFloat32,
        sampleRate: configuration.audioSampleRate,
        channels: configuration.audioChannelsCount,
        interleaved: true
    ) else {
        handleError(RecorderError.invalidAudioFormat)
        return
    }

    let converter = AVAudioConverter(from: hardwareFormat, to: targetFormat)!

    print("[QUICOpusRecorder]: installTap")
    inputNode.installTap(onBus: 0, bufferSize: tapBufferSize, format: hardwareFormat) { [weak self] buffer, _ in
        guard let self else { return }
        // Here I handle audio data and sometimes get silence
    }

    //...

    do {
        audioEngine.prepare()
        try audioEngine.start()
    } catch {
        print(" ⚠️ Audio engine start error: \(error)")
        handleError(error)
    }
}
Moreover, if the app is in the foreground and PushToTalk gets stuck in this “silence bug”, I can avoid relying on the PushToTalk flow and simulate audio session activation manually in code. In this case I do not request a transmission and do not use any PushToTalk related code, and the app captures audio data perfectly.
Once I leave the channel and rejoin it again, the issue is fixed and I start to receive non silent buffers of varying size, as expected.
It works for a while. It can work fine for a day or more, communicating without launching the app, or with the app in the foreground. But it can also go into the “silence” state 30 minutes after working normally.
I have no clue why this happens.
The only thing I notice is that when the app is in this “stuck silence bug” state, iOS does not play its “chirp” system sound when audio recording starts.
P.S. Channel descriptor restoration code:
extension PushToTalkEngine: PTChannelRestorationDelegate {
    public func channelDescriptor(restoredChannelUUID channelUUID: UUID) -> PTChannelDescriptor {
        print("☀️ \(#function) channelUUID: \(channelUUID)")
        Task { @MainActor in
            // Here I fetch more detailed channel data asynchronously
            do {
                await initChannelManagerIfNeeded(channelUUID: channelUUID)
                let channelDescriptor = currentChannelDescriptor()
                lastChannelDescriptorName = channelDescriptor.name
                try await channelManager?.setChannelDescriptor(
                    channelDescriptor,
                    channelUUID: channelUUID
                )
            } catch {
                handleError(error)
            }
        }
        return PTChannelDescriptor(name: "Loading...", image: nil)
    }
}

private func initChannelManagerIfNeeded(channelUUID: UUID? = nil) async {
    guard let channelUUID = channelUUID ?? currentUser?.globalChannelUUID else {
        print("❌ No global channel uuid found")
        return
    }
    do {
        guard channelManager == nil else {
            try await channelManager?.setTransmissionMode(.halfDuplex, channelUUID: channelUUID)
            return
        }
        channelManager = try await PTChannelManager.channelManager(
            delegate: self,
            restorationDelegate: self
        )
        try await channelManager?.setTransmissionMode(.halfDuplex, channelUUID: channelUUID)
    } catch {
        handleError(error)
    }
}
I have an iPadOS M-processor application with two different running configurations.
In config1, the shared AVAudioSession is configured for .videoChat mode using the built-in microphone. The input/output nodes of the AVAudioEngine are configured with voice processing enabled. The built-in mic is formatted for 1 channel at 48KHz.
In config2, the shared AVAudioSession is configured for .measurement mode using an external USB microphone. The input/output nodes of the AVAudioEngine are configured with voice processing disabled. The external mic is formatted for 2 channels at 44.1KHz
I've written a configuration manager designed to safely switch between these two configurations. It works by stopping AVAudioEngine and detaching all but the input and output nodes, updating the shared audio session for the desired mic and sample-rates, and setting the appropriate state for voice processing to either true or false as required by the configuration. Finally the new audio graph is constructed by attaching appropriate nodes, connecting them, and re-starting AVAudioEngine
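In code, the switch looks roughly like this (a condensed sketch of the sequence described above; audioEngine, detachAllNodesExceptIO() and rebuildGraph() are my own properties and helpers):

import AVFoundation

// Condensed sketch of the configuration switch described above.
func switchConfiguration(voiceProcessing: Bool,
                         mode: AVAudioSession.Mode,
                         sampleRate: Double) throws {
    audioEngine.stop()
    detachAllNodesExceptIO()                 // my helper: keeps inputNode/outputNode

    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: mode)
    try session.setPreferredSampleRate(sampleRate)
    try session.setActive(true)

    // Switching between remote I/O and voice-processing I/O; this appears to
    // complete asynchronously, which is where the race seems to come from.
    try audioEngine.inputNode.setVoiceProcessingEnabled(voiceProcessing)

    rebuildGraph()                           // my helper: attach + connect nodes
    audioEngine.prepare()
    try audioEngine.start()                  // crashes here when the sample rate is wrong
}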
I'm experiencing what I believe is a race condition between switching voice processing on or off and then trying to rebuild and start the new audio graph. Even though notifications, which are dumped to the console, indicate that my requested input and sample-rate settings are in place, I crash when trying to start the audio engine because the sample rate is wrong. Investigating further, it looks like the switch from remote I/O to voice-processing I/O, or vice versa, has not yet actually completed. I introduced a 100 ms delay and that seems to help, but that is obviously not a reliable way to build software that must work consistently.
How can I make sure that what are apparently asynchronous configuration changes to the shared audio session and the input/output nodes have completed before I go on?
I tried using route change notifications from the shared AVAudioSession but these lie. They say my preferred mic input and sample-rate setting is in place but when I dump the AVAudioEngine graph to the debugger console, I still see the wrong sample rate assigned to the input/output nodes. Also these are the wrong AU nodes. That is, VPIO is still in place when RIO should be, or vice-versa.
How can I make the switch reliable without arbitrary time delays?
Is my configuration manager approach appropriate (question for Apple engineers)?
We require assistance in resolving a critical audio design conflict within our Push-to-Talk (PTT) application. Our current volume amplification strategy—which relies on applying a GAIN factor to PCM samples in conjunction with setting the AVAudioSession category to Playback—is working successfully when PTT is used independently. However, upon integrating and reporting the same PTT call through the CallKit framework, this amplification effect is lost. The CallKit integration appears to be forcing a different, non-amplifying audio session category or configuration, negatively impacting the user's perceived call volume. We need guidance on how to maintain the AVAudioSessionCategoryPlayback setting, or an equivalent high-volume configuration, while operating under the control of CallKit.
Hi all,
I have been quite stumped on this behavior for a little bit now, so thought it best to share here and see if someone more experience with AVAudioEngine / AVAudioSession can weigh in.
Right now I have an AVAudioEngine that I am using for voice chat, giving it buffers to play. This works perfectly until route changes start to occur, which cause the AVAudioEngine to reset itself and, in turn, stop all players attached to the engine.
Once an AVPlayerNode gets stopped because of this (but also at any other time), all samples that were scheduled to be played get purged. Where this becomes confusing for me is that the completion handler gets called every time, regardless of whether the sound was actually played.
Is there a reliable way to know if a sample needs to be rescheduled after a player has been reset?
I am not quite sure in my case what my observer of AVAudioEngineConfigurationChange needs to be doing, as this engine only handles output. All input is through a separate engine for simplicity.
Currently I am storing a queue of samples as they get sent to the AVPlayerNode for playback, and after the completion handler fires I check whether the player isPlaying or not. If it's playing, I assume the sound actually was played; if not, I leave it in the queue and assume that an observer of the route change or configuration change will notice there are samples in the queue and reschedule them.
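Roughly, the bookkeeping looks like this (a sketch of the approach described above, not a complete implementation):

import AVFoundation

// Sketch of the queue-and-check approach described above.
final class OutputQueue {
    let playerNode = AVAudioPlayerNode()
    private var pending: [AVAudioPCMBuffer] = []

    func schedule(_ buffer: AVAudioPCMBuffer) {
        pending.append(buffer)
        playerNode.scheduleBuffer(buffer) { [weak self] in
            guard let self else { return }
            // The completion handler fires even when an engine reset purged the buffer.
            if self.playerNode.isPlaying {
                self.pending.removeAll { $0 === buffer }   // assume it really played
            }
            // Otherwise leave it queued; the route-change / configuration-change
            // observer calls rescheduleAfterReset() to play whatever is still pending.
        }
    }

    func rescheduleAfterReset() {
        let buffers = pending
        pending.removeAll()
        buffers.forEach { schedule($0) }
    }
}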
Thanks for any feedback!
I'm building a React Native call application using the following combination of libraries:
https://github.com/react-native-webrtc/react-native-callkeep
https://github.com/react-native-webrtc/react-native-webrtc
https://github.com/react-native-webrtc/react-native-voip-push-notification
When I press the speaker button on the call screen displayed by CallKit and change it to ON, the speaker button display on the call screen reverts back to OFF after a few seconds.
However, when the speaker button display reverts to OFF, the actual audio output route does not return to the earpiece - the audio continues to output from the speaker without any change.
Could you please advise on what cases might cause the speaker button display to revert, and if there are any potential solutions?
I want the audio session to always use the built-in microphone. However, when using the setPreferredInput() method like in this example
private func enableBuiltInMic() {
    // Get the shared audio session.
    let session = AVAudioSession.sharedInstance()

    // Find the built-in microphone input.
    guard let availableInputs = session.availableInputs,
          let builtInMicInput = availableInputs.first(where: { $0.portType == .builtInMic }) else {
        print("The device must have a built-in microphone.")
        return
    }

    // Make the built-in microphone input the preferred input.
    do {
        try session.setPreferredInput(builtInMicInput)
    } catch {
        print("Unable to set the built-in mic as the preferred input.")
    }
}
and calling that function once in the initializer, the audio session still switches to the external microphone once one is plugged in.
The session's preferredInput is nil again at that point, even if the built-in microphone is still listed in availableInputs.
So,
why is the preferredInput suddenly reset?
when would be the appropriate time to set the preferredInput again?
Observing the session's availableInputs did not work, and setting the preferredInput again in the routeChangeNotification handler seems like a bad choice, as it's already a bit too late at that point.
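For reference, the route-change workaround I mentioned (and would rather avoid, since it only runs after the route has already switched) would look roughly like this:

import AVFoundation

// Workaround sketch: re-apply the built-in mic preference whenever a new device appears.
NotificationCenter.default.addObserver(forName: AVAudioSession.routeChangeNotification,
                                       object: nil,
                                       queue: .main) { notification in
    guard let reasonValue = notification.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt,
          let reason = AVAudioSession.RouteChangeReason(rawValue: reasonValue),
          reason == .newDeviceAvailable else { return }
    enableBuiltInMic()   // the function shown above
}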
I'm currently working on an iOS app + custom keyboard extension, and I'm hoping to get some insight into how to best architect a workflow where the keyboard acts as a remote trigger for dictation, but the main app handles the actual microphone recording and transcription.
I know that third-party keyboards are sandboxed and can’t access the microphone directly, so the pattern I’m following is similar to what Wispr Flow appears to be doing:
What I'm Trying to Build
The user taps a mic button in the custom keyboard (installed system-wide).
This triggers the main app to open, start a recording session, and send the audio to my transcription endpoint (not using Speech.framework).
Once the transcription result is ready, it's stored in an App Group shared container.
The keyboard extension polls for or receives the transcribed text and inserts it into the current input field via textDocumentProxy.insertText(...) (a rough sketch of this handoff is below).
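A rough sketch of the App Group handoff I have in mind (the suite name and key are placeholders):

import UIKit

// Rough sketch of the App Group handoff; suite name and key are placeholders.
let sharedDefaults = UserDefaults(suiteName: "group.com.example.dictation")

// Main app: store the transcription once the external API returns it.
func storeTranscription(_ text: String) {
    sharedDefaults?.set(text, forKey: "latestTranscription")
}

// Keyboard extension: pick up the result and insert it into the host app's text field.
func insertTranscriptionIfAvailable(into proxy: UITextDocumentProxy) {
    guard let text = sharedDefaults?.string(forKey: "latestTranscription") else { return }
    proxy.insertText(text)
    sharedDefaults?.removeObject(forKey: "latestTranscription")
}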
Key Questions
Triggering App Dictation from Keyboard
Is there a clean system-native way to transition from the keyboard to the main app and back?
Are there best practices around:
Preventing jarring transitions (keyboard disappearing)?
Letting the app record in the background once triggered (see below)?
Keeping the App Alive During Audio Recording
Once the main app is opened to handle the dictation:
I want it to record audio continuously (sometimes for up to a minute or two), send it to an external transcription API, and return the result.
However, from what I have read, iOS aggressively suspends or kills apps that are not in the foreground or haven't requested the correct background modes, especially if background tasks run for longer than 30 seconds.
Even with audio enabled in Background Modes and a live AVAudioEngine session, I find that the app is sometimes paused or killed after a few seconds, especially when the user switches back to the app where they want to type.
But apps like Wispr Flow seem to manage this well. Their flow allows the user to record voice and insert it seamlessly without the app being terminated mid-recording.
So:
✅ How do I prevent my app from being killed/suspended while it's recording, especially when the user switches back to the original app?
Do I need:
A background AVAudioSession hack?
Audio playback tricks (e.g. silent audio)?
A workaround using CallKit (some apps seem to use it for persistent audio sessions)?
Something else Apple allows but doesn't document clearly (or I'm just a bad searcher)?
Returning Text to the Keyboard Extension
I’m using UserDefaults(suiteName:) in the App Group to pass the transcription result. Is that still the recommended approach?
Would it be better to use a shared file (for larger data or richer metadata)?
Are there any timing issues I should be aware of, e.g. like race conditions, stale reads, etc.?
I’m using the shared instance of AVAudioSession. After activating it with .setActive(true), I observe the outputVolume, and it correctly reports the device’s volume.
However, after deactivating the session using .setActive(false), changing the volume, and then reactivating it again, the outputVolume returns the previous volume (before deactivation), not the current device volume. The correct volume is only reported after the user manually changes it again using physical buttons or Control Center, which triggers the observer.
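The observation itself is set up roughly like this (simplified):

import AVFoundation

// Simplified sketch of the volume observation described above.
let session = AVAudioSession.sharedInstance()
var volumeObservation: NSKeyValueObservation?

func activateAndObserve() throws {
    try session.setActive(true)
    // On the second and later activations this still reports the old volume.
    print("Volume at activation: \(session.outputVolume)")
    volumeObservation = session.observe(\.outputVolume, options: [.new]) { _, change in
        // Only fires once the user physically changes the volume again.
        print("Volume changed: \(change.newValue ?? 0)")
    }
}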
What I need is a way to retrieve the actual current device volume immediately after reactivating the audio session, even on the second and subsequent activations.
Disabling and re-enabling the audio session is essential to how my application functions.
I’ve tested this behavior with my colleagues, and the issue is consistently reproducible on iOS 18.0.1, iOS 18.1, iOS 18.3, iOS 18.5 and iOS 18.6.2. On devices running iOS 17.6.1 and iOS 16.0.3, outputVolume correctly reflects the current volume immediately after calling .setActive(true) multiple times.
Hello,
I'm trying to determine the best/recommended AVAudioSession configuration (i.e category, mode, and options) for the following use-case.
Essentially, I'd like to switch between periods of playing an audio file and then recognizing speech. The audio file is typically speech, and I don't intend for playback and speech recognition to occur simultaneously. I'd like for the user to still be able to interact with Siri, and I'd like for it to work with CarPlay, where navigation prompts can occur.
I would assume the category to use is 'playAndRecord', but I'm not sure if it's better to just set that once for the entire lifecycle, or to set 'playback' for audio file playback and then switch to 'playAndRecord' for speech recognition. I'm also not sure of the best 'mode' and 'options' to set. Any suggestions would be appreciated.
Thanks.
Hi everyone,
I’m testing audio recording on an iPhone 15 Plus using AVFoundation.
Here’s a simplified version of my setup:
let settings: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatLinearPCM),
    AVSampleRateKey: 8000,
    AVNumberOfChannelsKey: 1,
    AVLinearPCMBitDepthKey: 16,
    AVLinearPCMIsFloatKey: false
]

audioRecorder = try AVAudioRecorder(url: fileURL, settings: settings)
audioRecorder?.record()
When I check the recorded file’s sample rate, it logs:
Actual sample rate: 8000.0
However, when I inspect the hardware sample rate:
try session.setCategory(.playAndRecord, mode: .default)
try session.setActive(true)
print("Hardware sample rate:", session.sampleRate)
I consistently get:
Hardware sample rate: 48000.0
My questions are:
Is the iPhone mic actually capturing at 8 kHz, or is it recording at 48 kHz and then downsampling to 8 kHz internally?
Is there any way to force the hardware to record natively at 8 kHz?
If not, what’s the recommended approach for telephony-quality audio (true 8 kHz) on iOS devices?
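One experiment I could run (not sure whether it's the right approach) is to ask for a preferred hardware rate and check what the session actually grants:

import AVFoundation

// Experiment: request 8 kHz as the preferred hardware rate, then check what is granted.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord, mode: .default)
try session.setPreferredSampleRate(8000)
try session.setActive(true)
print("Preferred: \(session.preferredSampleRate), actual hardware: \(session.sampleRate)")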
Thanks in advance for your guidance!
Hello,
I’m new here. I'm developing an iOS app and I’d like to know whether it is possible to detect if a phone call is being recorded by another app running in the background.
I’ve already reviewed the documentation for CallKit and AVAudioSession, but I couldn’t find anything related. My expectation was that iOS might provide some callback or API to indicate if a call is being recorded (third-party apps), but so far I haven’t found a way.
My questions are:
Does iOS expose any API to detect if a call is being recorded?
If not, is there any indirect, Apple's policy compliant method (e.g., microphone usage events) that can be relied upon?
Or is this something that iOS explicitly prevents for privacy reasons?
Expecting solutions that align with Apple’s policies and would be accepted under the App Store Review Guidelines.
Thanks in advance for any guidance.
In iOS 26 beta 5, AVAudioSessionCategoryOptionAllowBluetooth is marked as deprecated in iOS 8, even though this option was not deprecated in iOS 18.6. I think this is a mistake and the deprecation actually happens in iOS 26. Am I right?
It seems that the substitute for this option is "AVAudioSessionCategoryOptionAllowBluetoothHFP". The documentation does not make clear if the behaviour is exactly the same or if any difference should be expected... Has anyone used this option in iOS 26? Should I expect any difference with the current behaviour of "AVAudioSessionCategoryOptionAllowBluetooth"?
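For context, the change I'd be making is essentially the following (assuming the iOS 26 SDK, where the HFP-named option is what the deprecation warning points to):

import AVFoundation

// Assuming the iOS 26 SDK: old option vs. the apparent replacement.
let session = AVAudioSession.sharedInstance()
// Previously:
// try session.setCategory(.playAndRecord, options: [.allowBluetooth])
// With the iOS 26 SDK the deprecation points to:
try session.setCategory(.playAndRecord, options: [.allowBluetoothHFP])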
Thank you.