Hi, I'm trying to plan out development of an app and am wondering if it is possible to have user-generated content automatically populate a custom ShazamKit catalogue, and to query this catalogue non-locally?
Storing all the submissions locally would obviously not scale.
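For reference, my understanding (hedged, based on the ShazamKit docs) is that SHCustomCatalog matching always runs on-device against a catalog you load yourself, so a "remote" query would mean hosting the catalog file or the raw signatures on your own backend and syncing them down before matching. A minimal sketch of the on-device side, assuming signatures arrive as Data blobs from a hypothetical server:

import ShazamKit

// Rebuild a custom catalog from signature blobs fetched from a server
// (the server-side generation and storage are assumptions, not shown).
func buildCatalog(from signatureBlobs: [Data]) throws -> SHCustomCatalog {
    let catalog = SHCustomCatalog()
    for (index, blob) in signatureBlobs.enumerated() {
        let signature = try SHSignature(dataRepresentation: blob)
        let item = SHMediaItem(properties: [.title: "User submission \(index)"])
        try catalog.addReferenceSignature(signature, representing: [item])
    }
    return catalog
}

// Matching then happens locally via a session bound to that catalog:
// let session = SHSession(catalog: catalog)

So the scaling question becomes how to shard or filter which signatures get synced to a given device, rather than querying one giant catalog remotely.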
Issue Description
I'm implementing a system audio capture feature using AudioHardwareCreateProcessTap and AudioHardwareCreateAggregateDevice. The app successfully creates the tap and aggregate device, but when starting the IO procedure with AudioDeviceStart, it sometimes fails with OSStatus error 1852797029. (The operation couldn’t be completed. (OSStatus error 1852797029.)) The error occurs inconsistently, which makes it particularly difficult to debug and reproduce.
Questions
Has anyone encountered this intermittent "nope" error code (0x6e6f7065) when working with system audio capture?
Are there specific conditions or system states that might trigger this error sporadically?
Are there any known workarounds for handling this intermittent failure case?
Any insights or guidance would be greatly appreciated.
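In case it helps others searching for this: the status is easier to recognize once decoded as a four-character code. A small helper (plain Swift, assuming nothing beyond OSStatus being a 32-bit value):

import Foundation

// Decode a 32-bit OSStatus into its four-character code,
// e.g. 1852797029 -> "nope" (0x6e6f7065).
func fourCharCode(from status: OSStatus) -> String {
    let value = UInt32(bitPattern: status)
    let bytes = [24, 16, 8, 0].map { UInt8((value >> $0) & 0xFF) }
    return String(bytes: bytes, encoding: .ascii) ?? String(status)
}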
Hi 👋! We have a SpriteKit-based app where we play AVAudio sounds in three different ways:
Effects (incl. UI sounds) with AVAudioPlayer.
Long looping tracks with AVAudioPlayer.
Short animation effects on the timeline of SpriteKit's SKScene files (effectively SKAudioNode nodes).
We've found that when you exit the app, or audio playback is otherwise interrupted, future playback often fails. For example, there's a WebKit-based video trailer inside the app; if you play it, our looping background music track (2.) stops and doesn't resume when you close the trailer (return from WebKit). This is probably because we don't manually restart the track (so it may well be easily fixed). Periodically played AVAudioPlayer audio (1.) is not affected.
However, the more concerning thing is that the audio tracks on SKScene file timelines (3.) will no longer play. My hypothesis is that AVAudioEngine gets interrupted and needs to be restarted for those SKAudioNode elements to regain functionality. The thing is, we currently don't touch AVAudioEngine at all in the app, so it is never explicitly initialized to begin with.
Things obviously return to normal when you remove the app from memory and restart it. However, it seems many of our users aren't doing this, and they often report audio failing, presumably due to some past interruption, without the app ever having been cleared from memory.
Any idea why timeline-run SKAudioNodes would fail like this? Should the app react to app backgrounding/foregrounding regarding audio?
Any help would be very much appreciated ✌️!
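In case it's useful context for anyone answering: the direction we're considering is observing audio session interruptions and resuming playback when they end. A rough sketch of that (whether SpriteKit's internal engine recovers on its own after this is exactly the open question; backgroundMusicPlayer is a placeholder for our looping AVAudioPlayer):

import AVFoundation

// Observe interruption notifications and resume when the interruption ends.
NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { note in
    guard let rawType = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }

    if type == .ended {
        // Reactivate the session, then restart our own players.
        try? AVAudioSession.sharedInstance().setActive(true)
        // backgroundMusicPlayer.play() // placeholder: looping track from case 2.
    }
}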
Context
We develop an iOS/Apple TV app that lets users play HLS+FairPlay live streams (with a custom playback UI), some of which use the same FairPlay content key ID. All FairPlay content keys are requested from the same content key server.
Implementation
Despite Apple's documentation warning against reusing AVContentKeySessions, we use a single AVContentKeySession for all channels, which allows the system to reuse a content key when a content key ID is encountered again. As seen in another thread, people seem to think this is OK.
Issue
When reusing the AVContentKeySession and the user tunes channels quickly several times (up to 2 or 3 times per second, using gestures), an inconsistency can occur where the content key request for a previous stream is delivered to the delegate after a new stream is already being prepared and its AVURLAsset has already been added as the content key session's AVContentKeyRecipient. Note that the previous content key recipient is removed before the new one is added.
We have also received crash reports (though I haven't experienced one myself) when tuning channels repeatedly, which makes us think that the AVContentKeySession should definitely not be reused.
Note: On the other hand, if a new AVContentKeySession is used for each stream, the system systematically requests a content key even if previous streams have used the same content key ID. In this case neither the crash nor the inconsistency is observed, but it dramatically increases the number of calls to the content key server.
Questions
Should AVContentKeySessions definitely not be reused? If they can be, how should we handle the inconsistency described above?
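For completeness, this is the shape of the one-session-per-stream variant described in the note above (names are placeholders; the content-key-server traffic trade-off still applies):

import AVFoundation

// Create a fresh AVContentKeySession per stream; the session must stay alive
// for the asset's lifetime, so it is returned alongside the asset.
func prepareStream(url: URL,
                   keyDelegate: AVContentKeySessionDelegate,
                   delegateQueue: DispatchQueue) -> (asset: AVURLAsset, session: AVContentKeySession) {
    let session = AVContentKeySession(keySystem: .fairPlayStreaming)
    session.setDelegate(keyDelegate, queue: delegateQueue)

    let asset = AVURLAsset(url: url)
    session.addContentKeyRecipient(asset)
    return (asset, session)
}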
On an Apple TV 4K (3rd generation) running tvOS 26 beta 2, when two HomePod 2 speakers are paired to the device, music and movie sources with Dolby Atmos can only be listened to in stereo; Dolby Atmos is reported as not supported.
I want to create a Live Photo. The project includes a .jpg image and a .mov video (2 seconds).
Two permissions have been added in Xcode:
Privacy - Photo Library Usage Description
Privacy - Photo Library Additions Usage Description
Simulator: iPhone 16, iOS 18.3
The code in ContentView.swift:
private func saveLivePhoto(imageURL: URL, videoURL: URL, completion: @escaping (Bool, Error?) -> Void) {
    PHPhotoLibrary.shared().performChanges {
        let creationRequest = PHAssetCreationRequest.forAsset()
        let options = PHAssetResourceCreationOptions()
        options.shouldMoveFile = false

        creationRequest.addResource(with: .photo, fileURL: imageURL, options: options)
        creationRequest.addResource(with: .pairedVideo, fileURL: videoURL, options: options)
    } completionHandler: { success, error in
        DispatchQueue.main.async {
            print(error as Any)
            completion(success, error)
        }
    }
}
guard let imageURL = Bundle.main.url(forResource: "livephoto", withExtension: "jpeg"),
      let videoURL = Bundle.main.url(forResource: "livephoto", withExtension: "mov") else {
    showAlertMessage(title: "error", message: "can't find Live Photo resources")
    return
}
print("imageURL: \(imageURL)")
print("videoURL: \(videoURL)")
saveLivePhoto(imageURL: imageURL, videoURL: videoURL) { success, error in
    if success {
        // xxxxx
    } else {
        // xxxxx
    }
}
I really need help, thanks!
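One thing worth ruling out (an assumption on my part, since the snippet doesn't show it): the two Info.plist strings alone aren't enough; the app must also request add-level authorization at runtime before performChanges can succeed. A minimal sketch:

import Photos

// Request add-only photo library access before attempting the save.
PHPhotoLibrary.requestAuthorization(for: .addOnly) { status in
    guard status == .authorized else {
        print("Photo library access not granted: \(status)")
        return
    }
    // saveLivePhoto(imageURL: imageURL, videoURL: videoURL) { ... }
}

If authorization is fine, a common remaining suspect for Live Photo saves is the paired image and video not sharing a matching content identifier in their metadata, so inspecting the error passed to the completion handler is the next step.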
Hi,
I'm sending an API request to:
https://api.music.apple.com/v1/me/library/playlists?limit=$limit&offset=$offset
This should list all of the user's library playlists; however, the resulting objects do not contain the playlist artwork in the JSON. I've tried adding the extend and include parameters as well, but to no avail.
A partial example of the response:
{"id": PLAYLIST_ID, "type": "library-playlists", "href": "/v1/me/library/playlists/PLAYLIST_ID", "attributes": {"lastModifiedDate": "2024-09-18T20:18:24Z", "canEdit": true, "name": "Afro Party Anthems", "isPublic": false, "description": {"standard": "Definitive African party starters"}, "hasCatalog": false, "dateAdded": "2022-03-10T18:30:56Z", "playParams": {"id": PLAYLIST_ID, "kind": "playlist", "isLibrary": true}}, "relationships": {"catalog": {"href": "/v1/me/library/playlists/PLAYLIST_ID/catalog", "data": []}}}
Is there a way to get the artwork URL without sending a request for each playlist? And if not can this be fixed?
Hi,
I need to read whether the transport is playing or stopped, but my current method, which works for VST, does not work for AU.
Is there a Logic Pro (LPX) resource available for developers anywhere?
if (auto* playHead = processor->getPlayHead())
{
    juce::AudioPlayHead::CurrentPositionInfo posInfo;

    if (playHead->getCurrentPosition(posInfo))
    {
        const bool isCurrentlyPlaying = posInfo.isPlaying;

        if (isCurrentlyPlaying != wasTransportPlaying)
        {
            wasTransportPlaying = isCurrentlyPlaying;

            if (isCurrentlyPlaying)
                startAllTimers();
            else
                stopAllTimers();
        }
    }
}
thanks :)
I am getting high error rates from the Apple Music API. This has been happening for months now, and it is quite frustrating. It is a mix of 404, 504, and random 500 errors. I hit these endpoints all of the time, so it is not like I am hitting a resource that doesn't exist. Why is this happening? Is this a known issue that is getting worked on?
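While waiting for a server-side fix, one possible client-side mitigation (a sketch, assuming the 5xx responses are transient) is retrying with exponential backoff:

import Foundation

// Retry transient 5xx responses with exponential backoff.
func fetchWithRetry(_ request: URLRequest, attempts: Int = 3) async throws -> Data {
    var delay: UInt64 = 500_000_000 // 0.5 s, in nanoseconds

    for _ in 0..<(attempts - 1) {
        let (data, response) = try await URLSession.shared.data(for: request)
        if let http = response as? HTTPURLResponse, (500...599).contains(http.statusCode) {
            try await Task.sleep(nanoseconds: delay)
            delay *= 2
            continue
        }
        return data
    }

    // Final attempt: return whatever we get.
    let (data, _) = try await URLSession.shared.data(for: request)
    return data
}

This obviously doesn't help with the 404s, which look like a different problem.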
It has been quite some time since I requested the Apple FPS package, yet I haven't received it. I haven't received any email either. Is there a developer support inquiry center where I can check the status of the request? Alternatively, could you share approximately how long it took for you to receive a response email?
Hello!
I'm experiencing an issue with iOS's audio routing system when trying to use Bluetooth headphones for audio output while also recording environmental audio from the built-in microphone.
Desired behavior:
Play audio through Bluetooth headset (AirPods)
Record unprocessed environmental audio from the iPhone's built-in microphone
Actual behavior:
When explicitly selecting the built-in microphone, iOS reports it's using it (in currentRoute.inputs)
However, the actual audio data received is clearly still coming from the AirPods microphone
The audio is heavily processed with voice isolation/noise cancellation, removing environmental sounds
Environment Details
Device: iPhone 12 Pro Max
iOS Version: 18.4.1
Hardware: AirPods
Audio Framework: AVAudioEngine (also tried AudioQueue)
Code Attempted
I've tried multiple approaches to force the correct routing:
func configureAudioSession() {
    let session = AVAudioSession.sharedInstance()

    // Configure to allow Bluetooth output but use the built-in mic
    try? session.setCategory(.playAndRecord,
                             options: [.allowBluetoothA2DP, .defaultToSpeaker])
    try? session.setActive(true)

    // Explicitly select the built-in microphone
    if let inputs = session.availableInputs,
       let builtInMic = inputs.first(where: { $0.portType == .builtInMic }) {
        try? session.setPreferredInput(builtInMic)
        print("Selected input: \(builtInMic.portName)")
    }

    // Log the current route
    let route = session.currentRoute
    print("Current input: \(route.inputs.first?.portName ?? "None")")

    // Configure the audio engine with the native format
    let inputNode = audioEngine.inputNode
    let nativeFormat = inputNode.inputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: nativeFormat) { buffer, time in
        // Process the audio buffer. Despite the route showing "Built-in Microphone",
        // the audio appears to come from the AirPods with voice isolation applied.
    }

    try? audioEngine.start()
}
I've also tried various combinations of:
Different audio session modes (.default, .measurement, .voiceChat)
Different option combinations (with/without .allowBluetooth, .allowBluetoothA2DP)
Calling session.setPreferredInput() both before and after activation
Diagnostic Observations
When AirPods are connected:
AVAudioSession.currentRoute.inputs correctly shows "Built-in Microphone" after setPreferredInput()
The actual audio data received shows clear signs of AirPods' voice isolation processing
Background/environmental sounds are actively filtered out...
When recording test audio played near the phone (not through the app), the recording is nearly silent; only voice picked up by the headset comes through.
Questions
Is there a workaround to force iOS to actually use the built-in microphone while maintaining Bluetooth output?
Are there any lower-level configurations that might resolve this issue?
Any insights, workarounds, or suggestions would be greatly appreciated. This is blocking a critical feature in my application that requires environmental audio recording while providing audio feedback through headphones 😅
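For anyone debugging something similar, the most useful diagnostic I've found is logging every route change, to see exactly when iOS flips the input back (a plain observer, assuming nothing beyond AVAudioSession):

import AVFoundation

// Log each route change with its reason, to catch the input being overridden.
NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil,
    queue: .main
) { note in
    let session = AVAudioSession.sharedInstance()
    let rawReason = note.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt
    let reason = rawReason.flatMap(AVAudioSession.RouteChangeReason.init(rawValue:))

    print("Route changed, reason: \(String(describing: reason))")
    print("  inputs:  \(session.currentRoute.inputs.map(\.portName))")
    print("  outputs: \(session.currentRoute.outputs.map(\.portName))")
}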
I'm developing a TTS Audio Unit Extension that needs to write trace/log files to a shared App Group container. While the main app can successfully create and write files to the container, the extension gets sandbox denied errors despite having proper App Group entitlements configured.
Setup:
Main App (Flutter) and TTS Audio Unit Extension share the same App Group
App Group is properly configured in developer portal and entitlements
Main app successfully creates and uses files in the container
Container structure shows existing directories (config/, dictionary/) with populated files
Both targets have App Group capability enabled and entitlements set
Current behavior:
Extension can access/read the App Group container
Extension can see existing directories and files
All write attempts are blocked with "sandbox deny(1) file-write-create" errors
Code example:
const char* createSharedGroupPathWithComponent(const char* groupId, const char* component) {
    NSString* groupIdStr = [NSString stringWithUTF8String:groupId];
    NSString* componentStr = [NSString stringWithUTF8String:component];

    NSURL* url = [[NSFileManager defaultManager]
        containerURLForSecurityApplicationGroupIdentifier:groupIdStr];
    NSURL* fullPath = [url URLByAppendingPathComponent:componentStr];

    NSError* error = nil;
    if (![[NSFileManager defaultManager] createDirectoryAtPath:fullPath.path
                                   withIntermediateDirectories:YES
                                                    attributes:nil
                                                         error:&error]) {
        NSLog(@"Unable to create directory %@", error.localizedDescription);
    }

    return [[fullPath path] UTF8String];
}
Error output:
Sandbox: simaromur-extension(996) deny(1) file-write-create /private/var/mobile/Containers/Shared/AppGroup/36CAFE9C-BD82-43DD-A962-2B4424E60043/trace
Key questions:
Are there additional entitlements required for TTS Audio Unit Extensions to write to App Group containers?
Is this a known limitation of TTS Audio Unit Extensions?
What is the recommended way to handle logging/tracing in TTS Audio Unit Extensions?
If writing to App Group containers is not supported, what alternatives are available?
Current entitlements:
<dict>
    <key>com.apple.security.application-groups</key>
    <array>
        <string>group.com.<company>.<appname></string>
    </array>
</dict>
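On question 3, a commonly suggested alternative (a sketch, not a confirmed fix for the sandbox denial itself) is the unified logging system, which works from app extensions without touching the file system and can be read with Console.app or "log collect":

import os

// Unified logging from the extension instead of file-based tracing.
// The subsystem string is a placeholder.
let traceLog = Logger(subsystem: "com.example.tts-extension", category: "synthesis")

func trace(_ message: String) {
    // Mark the payload public so it isn't redacted in Console.
    traceLog.debug("\(message, privacy: .public)")
}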
We’ve encountered a reproducible issue where the iPhone fails to reconnect to a Wi-Fi access point under the following conditions:
The device is connected to a 2.4GHz Wi-Fi network.
A Bluetooth audio accessory is connected (e.g. headset).
AVAudioSession is active (such as during a voice call or when using the Voice Memos app).
The user moves away from the access point, causing a disconnect.
Upon returning within range, the access point is no longer recognized or reconnected while AVAudioSession remains active.
However, if the Bluetooth device is disconnected or the AVAudioSession is deactivated, the Wi-Fi access point is immediately recognized again.
We confirmed this behavior not only in our app but also with Apple's built-in Voice Memos app, suggesting this is not specific to our implementation.
It appears that the Wi-Fi system deprioritizes reconnection while AVAudioSession is engaged. Could this be by design? Or is this a known issue or limitation with Wi-Fi and AVAudioSession interaction?
Test Environment:
Device: iPhone 13 mini
iOS: 17.5.1
Wi-Fi: 2.4GHz band
Accessories: Bluetooth headset
We’d appreciate clarification on whether this is expected behavior or a bug. Thank you!
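Based on the observation above that deactivating the session restores reconnection, a possible workaround (a sketch; it narrows the window rather than fixing the behavior) is to deactivate the session whenever the audio work actually stops:

import AVFoundation

// Release the audio session when recording/playback stops, so it doesn't
// hold whatever system priority is suppressing Wi-Fi reconnection.
func endAudioWork() {
    do {
        try AVAudioSession.sharedInstance().setActive(false,
                                                      options: .notifyOthersOnDeactivation)
    } catch {
        print("Failed to deactivate audio session: \(error)")
    }
}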
In our logging tools (Firebase) I see a lot of errors reported when users are playing content and the app transitions to the background. An AVPlayerItemFailedToPlayToEndTime notification is fired with an error containing codes like -1102 and 1852797029, which seem to correspond to NSURLErrorNoPermissionsToReadFile and kCMIOHardwareIllegalOperationError respectively. To me, it looks like these might have something to do with caching logic.
The items being played are HLS streams, and we use AVAssetDownloadTask to make streamed content available offline. Our setup is similar to the sample provided here: https://developer.apple.com/documentation/avfoundation/using-avfoundation-to-play-and-persist-http-live-streams. Whenever an item is selected for playback, the app checks whether a cached version is available and, if so, gets the URL to the stored file (like the "localAssetForStream()" method in the example); otherwise it gets the asset from a currently running AVAssetDownloadTask for the item, or else starts a new AVAssetDownloadTask and plays the AVAsset returned from that task.
This seems to work fine, and I can't reproduce the issues our users and our logging tools are reporting.
Is there some case I am missing where AVAssetDownloadTask and associated AVAssets might become unreadable when the app transitions to the background? Or do these errors indicate a different problem entirely?
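One check that might narrow this down (an assumption about the cause, not a confirmed fix): before handing a cached asset to the player, verify it is actually still playable offline, and fall back to streaming or a fresh AVAssetDownloadTask if it isn't.

import AVFoundation

// Verify a locally stored HLS download is still usable before playing it.
func playableCachedAsset(at localURL: URL) -> AVURLAsset? {
    let asset = AVURLAsset(url: localURL)
    guard let cache = asset.assetCache, cache.isPlayableOffline else {
        return nil // fall back to streaming or a new AVAssetDownloadTask
    }
    return asset
}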
When I use IOKit/usb/IOUSBLib to toggle the built-in camera, I get an error: IOReturn -536870210.
How can I resolve it? Can I use IOUSBLib to disable or hide the built-in camera?
My environment:
Model Name: MacBook Pro
ProductVersion: 15.5
Model Identifier: MacBookPro15,2
Processor Name: Quad-Core Intel Core i5
Processor Speed: 2.4 GHz
Number of Processors: 1
// Disable/enable a USB device matching the given vendor/product ID
bool toggleUSBDevice(uint16_t vendorID, uint16_t productID, bool enable) {
    std::cout << (enable ? "Enabling" : "Disabling") << " USB device with VID: 0x"
              << std::hex << vendorID << ", PID: 0x" << productID << std::endl;

    // Create a matching dictionary to find the USB device with the given VID/PID
    CFMutableDictionaryRef matchingDict = IOServiceMatching(kIOUSBDeviceClassName);
    if (!matchingDict) {
        std::cerr << "Failed to create USB device matching dictionary." << std::endl;
        return false;
    }

    // Set the VID/PID matching criteria
    CFNumberRef vendorIDRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt16Type, &vendorID);
    CFNumberRef productIDRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt16Type, &productID);
    CFDictionarySetValue(matchingDict, CFSTR(kUSBVendorID), vendorIDRef);
    CFDictionarySetValue(matchingDict, CFSTR(kUSBProductID), productIDRef);
    CFRelease(vendorIDRef);
    CFRelease(productIDRef);

    // Get an iterator over the matching devices.
    // Note: IOServiceGetMatchingServices consumes one reference to matchingDict
    // even on failure, so it must not be released again here.
    io_iterator_t deviceIterator;
    if (IOServiceGetMatchingServices(kIOMainPortDefault, matchingDict, &deviceIterator) != KERN_SUCCESS) {
        std::cerr << "Failed to get USB device iterator." << std::endl;
        return false;
    }

    io_service_t usbDevice;
    bool result = false;
    int deviceCount = 0;

    // Iterate over all matching devices
    while ((usbDevice = IOIteratorNext(deviceIterator)) != IO_OBJECT_NULL) {
        deviceCount++;

        // Get the device's registry path
        char path[1024];
        if (IORegistryEntryGetPath(usbDevice, kIOServicePlane, path) == KERN_SUCCESS) {
            std::cout << "Found device at path: " << path << std::endl;
        }

        // Open the device
        IOCFPlugInInterface** plugInInterface = NULL;
        IOUSBDeviceInterface** deviceInterface = NULL;
        SInt32 score;
        IOReturn ret = IOCreatePlugInInterfaceForService(usbDevice,
                                                         kIOUSBDeviceUserClientTypeID,
                                                         kIOCFPlugInInterfaceID,
                                                         &plugInInterface,
                                                         &score);
        if (ret == kIOReturnSuccess && plugInInterface) {
            ret = (*plugInInterface)->QueryInterface(plugInInterface,
                                                     CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID),
                                                     (LPVOID*)&deviceInterface);
            (*plugInInterface)->Release(plugInInterface);
        }
        if (ret != kIOReturnSuccess) {
            std::cerr << "Failed to open USB device interface. Error: " << ret << std::endl;
            IOObjectRelease(usbDevice);
            continue;
        }

        // Disable/enable the device
        if (enable) {
            // "Enable" the device by forcing it to re-enumerate
            ret = (*deviceInterface)->USBDeviceReEnumerate(deviceInterface, 0);
            if (ret == kIOReturnSuccess) {
                std::cout << "Device enabled successfully." << std::endl;
                result = true;
            } else {
                std::cerr << "Failed to enable device. Error: " << ret << std::endl;
            }
        } else {
            // "Disable" the device by closing our connection to it
            ret = (*deviceInterface)->USBDeviceClose(deviceInterface);
            if (ret == kIOReturnSuccess) {
                std::cout << "Device disabled successfully." << std::endl;
                result = true;
            } else {
                std::cerr << "Failed to disable device. Error: " << ret << std::endl;
            }
        }

        // Release the device interface
        (*deviceInterface)->Release(deviceInterface);
        IOObjectRelease(usbDevice);
    }
    IOObjectRelease(deviceIterator);

    if (deviceCount == 0) {
        std::cerr << "No device found with specified VID/PID." << std::endl;
        return false;
    }
    return result;
}
Is there any way for me to use an AutoMix API in my iOS apps? I would play tracks using the Apple Music API and use AutoMix to attempt to merge tracks.
Is this feature/API available to developers?
Following the documentation, I built a simple demo to verify.
My env:
ProductName: macOS
ProductVersion: 15.5
BuildVersion: 24F74
2.4 GHz Quad-Core Intel Core i5
Info.plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>IOKitPersonalities</key>
    <dict>
        <key>UVCamera</key>
        <dict>
            <key>CFBundleIdentifierKernel</key>
            <string>com.apple.kpi.iokit</string>
            <key>IOClass</key>
            <string>IOUserService</string>
            <key>IOMatchCategory</key>
            <string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
            <key>IOProviderClass</key>
            <string>IOUserResources</string>
            <key>IOResourceMatch</key>
            <string>IOKit</string>
            <key>IOUserClass</key>
            <string>UVCamera</string>
            <key>IOUserServerName</key>
            <string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
            <key>IOProbeScore</key>
            <integer>100000</integer>
            <key>idVendor</key>
            <integer>1452</integer>
            <key>idProduct</key>
            <integer>34068</integer>
        </dict>
    </dict>
    <key>OSBundleUsageDescription</key>
    <string></string>
</dict>
</plist>
UVCamera.cpp
//
//  UVCamera.cpp
//  UVCamera
//
//  Created by DTEN on 2025/6/12.
//

#include <os/log.h>
#include <DriverKit/IOUserServer.h>
#include <DriverKit/IOLib.h>
#include "UVCamera.h"

kern_return_t
IMPL(UVCamera, Start)
{
    kern_return_t ret;
    ret = Start(provider, SUPERDISPATCH);
    os_log(OS_LOG_DEFAULT, "Hello World");
    return ret;
}
UVCamera.iig
//
//  UVCamera.iig
//  UVCamera
//
//  Created by DTEN on 2025/6/12.
//

#ifndef UVCamera_h
#define UVCamera_h

#include <Availability.h>
#include <DriverKit/IOService.iig>

class UVCamera : public IOService
{
public:
    virtual kern_return_t
    Start(IOService* provider) override;
};

#endif /* UVCamera_h */
Then I build it with Xcode and move it to /Library/DriverExtensions:
sudo mv com.lqs.MyVirtualCam.UVCamera.dext /Library/DriverExtensions
sudo kmutil install -R / -r /Library/DriverExtensions
kmutil rebuild done
However, the dext can't be loaded:
kmutil showloaded --list-only | grep UVCamera
No variant specified, falling back to release
What's the problem? anyone can help me?
Hello, I'm trying to create a web browser, but currently, when signed into the Apple Music web player, I get the following message when I attempt to play in any version of my browser:
Not available on the web
You can listen to this in the Apple Music app.
Is there a way to set up DRM (assuming this is the issue) with Apple to allow my web browser to play this content?
I believe Apple TV is also affected.
Thank you ahead of time.
I'm currently using ReplayKit for background screen recording, but I can't determine whether the screen is in landscape mode from the CMSampleBuffer. All other APIs for detecting screen orientation are foreground-based. What should I do?
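One avenue that may work (hedged; I haven't verified it across iOS versions): ReplayKit attaches the video orientation to each sample buffer under RPVideoSampleOrientationKey, so it can be read in the processing callback without any foreground-only UIKit API:

import ReplayKit
import CoreMedia
import ImageIO

// Read the orientation attachment ReplayKit places on video sample buffers.
func orientation(of sampleBuffer: CMSampleBuffer) -> CGImagePropertyOrientation? {
    guard let raw = CMGetAttachment(sampleBuffer,
                                    key: RPVideoSampleOrientationKey as CFString,
                                    attachmentModeOut: nil) as? NSNumber
    else { return nil }
    return CGImagePropertyOrientation(rawValue: raw.uint32Value)
}

From the returned value, .left or .right would imply a landscape capture, assuming the attachment is present for background recording as well.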
I am playing FairPlay + Multi-Key content (fMP4) in Safari browser.
I want to distinguish between SD and HD video quality, and play HD if HDCP is supported and SD if it is not.
I have already confirmed that HDCP support is the default, and that a black screen is output in non-HDCP environments.
What I want is to improve the user experience by appropriately switching to SD/HD depending on HDCP support when playing DRM content.
Question: Is there an API or function that can detect HDCP support in Safari through JavaScript or other methods? Or is there a way to indirectly guess it?