I am new to using Swift for a Mac application. I am trying to control the focus and other capabilities of an external UVC-compliant camera, but I'm having trouble and don't know where to start. I have downloaded an application from the App Store that can control the focus and other capabilities.
I've tried IOKit, but it seems complicated, and it does not return any capabilities or control the camera.
I also tried AVFoundation and was able to open the camera, but the following code did not work for me: device.isFocusPointOfInterestSupported returns false, and without that check the app crashes.
@IBAction func focusChanged(_ sender: NSSlider) {
    do {
        guard let device = videoDevice else { return }
        try device.lockForConfiguration()
        // Check whether the focus mode and point of interest are supported
        if device.isFocusModeSupported(.locked) {
            device.focusMode = .locked
        }
        if device.isFocusPointOfInterestSupported {
            // Map the slider value (0.0 to 1.0) to the focus point's X coordinate
            let focusX = CGFloat(sender.doubleValue)
            let focusPoint = CGPoint(x: focusX, y: 0.5) // Y is typically 0.5 (centered vertically)
            device.focusPointOfInterest = focusPoint
        } else {
            print("Focus point of interest is not supported on this device.")
        }
        device.unlockForConfiguration()
        // Log focus settings
        print("Focus point: \(device.focusPointOfInterest)")
        print("Focus mode: \(device.focusMode.rawValue)")
    } catch {
        print("Error adjusting focus: \(error)")
    }
}
Any help or advice is much appreciated.
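For anyone comparing notes, here is a minimal AVFoundation discovery sketch that lists external cameras and the focus control they report (.externalUnknown was renamed .external in newer SDKs). If everything prints false, the App Store app is presumably issuing UVC control requests over USB directly rather than going through AVFoundation:

import AVFoundation

// Sketch: enumerate external cameras and log what focus control AVFoundation reports.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.externalUnknown],  // use .external on macOS 14+
    mediaType: .video,
    position: .unspecified)
for device in discovery.devices {
    print(device.localizedName,
          "| locked focus supported:", device.isFocusModeSupported(.locked),
          "| focus POI supported:", device.isFocusPointOfInterestSupported)
}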
Hi,
I'm sending an API request to:
https://api.music.apple.com/v1/me/library/playlists?limit=$limit&offset=$offset
to list all of the user's library playlists. However, the resulting objects do not contain the playlist artwork in the JSON. I've tried adding the extend and include attributes as well, but to no avail.
A partial example of the response:
{"id": PLAYLIST_ID, "type": "library-playlists", "href": "/v1/me/library/playlists/PLAYLIST_ID", "attributes": {"lastModifiedDate": "2024-09-18T20:18:24Z", "canEdit": true, "name": "Afro Party Anthems", "isPublic": false, "description": {"standard": "Definitive African party starters"}, "hasCatalog": false, "dateAdded": "2022-03-10T18:30:56Z", "playParams": {"id": PLAYLIST_ID, "kind": "playlist", "isLibrary": true}}, "relationships": {"catalog": {"href": "/v1/me/library/playlists/PLAYLIST_ID/catalog", "data": []}}}
Is there a way to get the artwork URL without sending a request for each playlist? And if not, can this be fixed?
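For reference, where an artwork object is present, its url attribute is a size template rather than a final URL; a small sketch of the substitution (resolvedArtworkURL is an illustrative name):

import Foundation

// Sketch: Apple Music artwork URLs carry {w}/{h} placeholders that must be filled in.
func resolvedArtworkURL(template: String, width: Int, height: Int) -> URL? {
    URL(string: template
        .replacingOccurrences(of: "{w}", with: String(width))
        .replacingOccurrences(of: "{h}", with: String(height)))
}

// e.g. resolvedArtworkURL(template: artworkTemplate, width: 300, height: 300)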
Hi guys,
I am having an issue live-streaming audio from a Bluetooth headset and playing it live through the iPhone speaker.
I am able to redirect audio back to the headset, but this is not what I want.
The issue happens when I try to override the output: the iPhone switches to the speaker, but it also switches the microphone away from the headset.
This is an example of the code:
import AVFoundation

class AudioRecorder {
    let player: AVAudioPlayerNode
    let engine: AVAudioEngine
    let audioSession: AVAudioSession
    let audioSessionOutput: AVAudioSession

    init() {
        self.player = AVAudioPlayerNode()
        self.engine = AVAudioEngine()
        self.audioSession = AVAudioSession.sharedInstance()
        self.audioSessionOutput = AVAudioSession()

        do {
            try self.audioSession.setCategory(.playAndRecord, options: [.defaultToSpeaker])
            try self.audioSessionOutput.setCategory(.playAndRecord, options: [.allowBluetooth]) // enables Bluetooth HFP profile
            try self.audioSession.setMode(.default)
            try self.audioSession.setActive(true)
            // try self.audioSession.overrideOutputAudioPort(.speaker) // doesn't work
        } catch {
            print(error)
        }

        let input = self.engine.inputNode
        self.engine.attach(self.player)

        let bus = 0
        let inputFormat = input.inputFormat(forBus: bus)
        self.engine.connect(self.player, to: engine.mainMixerNode, format: inputFormat)

        // Feed captured input buffers straight back into the player node
        input.installTap(onBus: bus, bufferSize: 512, format: inputFormat) { (buffer, time) -> Void in
            self.player.scheduleBuffer(buffer)
            print(buffer)
        }
    }

    public func start() {
        try! self.engine.start()
        self.player.play()
    }

    public func stop() {
        self.player.stop()
        self.engine.stop()
    }
}
I am not sure if this is a bug or not.
Can somebody point me in the right direction?
Is there a way to design custom audio routing?
I would also appreciate pointers to good documentation beyond the AVFoundation docs.
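For concreteness, a sketch of the routing I am aiming for, pinning the headset as the preferred input before overriding only the output. This uses real AVAudioSession API, but whether HFP allows this input/output split is exactly my question:

import AVFoundation

// Sketch: keep the Bluetooth HFP microphone while forcing playback to the speaker.
func routeBluetoothMicToSpeaker() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [.allowBluetooth])
    try session.setActive(true)
    // Pin the headset microphone as the preferred input, if present.
    if let hfpMic = session.availableInputs?.first(where: { $0.portType == .bluetoothHFP }) {
        try session.setPreferredInput(hfpMic)
    }
    try session.overrideOutputAudioPort(.speaker)  // then override only the output
}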
When I try to send a DRM-protected video via AirPlay to an Apple TV, the license request is made twice instead of once, as it normally is on iOS.
We only allow one request per session for security reasons, so the second request fails and the video won't play.
We've tested DRM-protected videos without token usage limits and they work, but that creates a security hole in our system.
Why does it request the license twice in func contentKeySession(_ session: AVContentKeySession, didProvide keyRequest: AVContentKeyRequest)?
Is there a way to prevent this?
Please consider adding the ability to programmatically download Premium and Enhanced voices. At the moment it is extremely inconvenient for our users, as they have to navigate to Settings themselves to download voices. Our app relies heavily on SpeechSynthesis integration, and it would greatly benefit from this feature.
FB16307193
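For context, detecting whether a high-quality voice is already installed is straightforward; it is only the download step that lacks an API. A sketch of such a check (hasHighQualityVoice is an illustrative name):

import AVFoundation

// Sketch: detect whether an enhanced or premium voice is installed for a language.
func hasHighQualityVoice(for language: String) -> Bool {
    AVSpeechSynthesisVoice.speechVoices()
        .filter { $0.language == language }
        .contains { $0.quality == .enhanced || $0.quality == .premium }
}

// e.g. if !hasHighQualityVoice(for: "en-US") { /* point the user to Settings */ }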
Is there any way for me to use an AutoMix API in my iOS apps? I would play tracks using the Apple Music API and use AutoMix to attempt to merge tracks.
Is this feature/API available to developers?
When using the Apple Devices app to sync Apple Music to an iPhone, where is the Apple Devices backup being written to?
(Apple Devices -> Music -> Sync.)
I am not trying to back up the iPhone via the Apple Devices app.
I have an app that displays artwork via MPMediaItem.artwork, requesting an image at a specific size. How do I get a media item's MPMediaItemAnimatedArtwork, and how do I get its preview image and video to display to the user?
I am working on a project for macOS where I take an AVCaptureSession's CVPixelBuffer and need to convert it into an MTLTexture for rendering. On macOS the pixel format is '2vuy', and there does not seem to be an obvious matching format when creating the Metal texture. I have been able to convert it to a texture, but the colors render distorted with a double image, so the color space handling seems to be off.
I believe '2vuy' is a single-plane format and I have tried to account for that, but I am unsure what is wrong.
I have attached the CVPixelBuffer and the distorted MTLTexture, along with a laundry list of errors.
On iOS my conversions are fine; it is only the macOS '2vuy' pixel format that has issues.
My code for the conversion is also attached.
If there are any suggestions or guidance on how to properly convert a 2vuy CVPixelBuffer to a MTLTexture I would greatly appreciate it.
Many Thanks
Conversion_Logs.txt
ConversionCode.swift
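If it helps anyone answering, here is the mapping I am currently testing, under the assumption that '2vuy' (kCVPixelFormatType_422YpCbCr8) corresponds to MTLPixelFormat.gbgr422, with the Y'CbCr-to-RGB conversion still done in the shader:

import CoreVideo
import Metal

// Sketch: wrap a '2vuy' CVPixelBuffer as a Metal texture through a texture cache.
// Using .bgra8Unorm here is what typically produces a doubled/distorted image,
// since '2vuy' packs 2 bytes per pixel, not 4.
func makeTexture(from pixelBuffer: CVPixelBuffer,
                 cache: CVMetalTextureCache) -> MTLTexture? {
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    var cvTexture: CVMetalTexture?
    let status = CVMetalTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, cache, pixelBuffer, nil,
        .gbgr422,              // assumed Metal counterpart of '2vuy'
        width, height, 0, &cvTexture)
    guard status == kCVReturnSuccess, let cvTexture else { return nil }
    return CVMetalTextureGetTexture(cvTexture)  // sampled texels are still Y'CbCr
}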
Hi,
I'm developing a SwiftUI app using RealityKit and ARKit for an AR measuring feature. I’ve noticed that after navigating away from my AR view and performing extensive cleanup (including removing all anchors/entities, pausing the ARSession, and nil-ing out all references), memory usage remains elevated and sometimes grows with repeated AR sessions.
Each time I enter and exit the AR view, memory usage increases.
The memory does not return to baseline after cleanup, even though all custom objects are deallocated.
Are there best practices beyond what I’ve described to ensure all ARKit/RealityKit resources are released after an AR session?
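For reference, a sketch of the kind of teardown I mean, assuming a UIKit-hosted ARView (names illustrative):

import ARKit
import RealityKit

// Sketch: teardown for a UIKit-hosted ARView after leaving the AR screen.
func tearDown(arView: ARView) {
    arView.session.pause()            // stop ARKit processing first
    arView.scene.anchors.removeAll()  // release RealityKit anchors and entities
    arView.removeFromSuperview()      // detach from the view hierarchy
    // Any long-lived strong reference to the ARView, its session, or its
    // delegate keeps the whole graph alive; those must be cleared as well.
}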
I am developing a VOD playback app. When I stream video to an external monitor connected via HDMI through a Lightning adapter on iOS 18 or later, the screen goes dark and I cannot confirm playback.
The app I am developing does not detect the HDMI display and present the player separately; it simply mirrors the video.
We have confirmed that the same phenomenon occurs with other services, but playback works with some services such as Apple TV.
Please let us know if there are any other settings required for video playback, such as video certificates.
We would also like to know whether the problem is specific to iOS 18 or later.
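For completeness, these are the AVPlayer flags we know of that affect what is sent to an external display; whether they interact with mirroring on iOS 18 is part of the question:

import AVFoundation

// Sketch: route the video itself, rather than a mirrored screen, to an external display.
func configureExternalPlayback(_ player: AVPlayer) {
    player.allowsExternalPlayback = true
    player.usesExternalPlaybackWhileExternalScreenIsActive = true
}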
When I use IOKit/usb/IOUSBLib to toggle the built-in camera, I get an error: IOReturn -536870210 (0xE00002BE, which appears to be kIOReturnNoResources).
How can I resolve it? Can I use IOUSBLib to disable or hide the built-in camera?
My environment:
Model Name: MacBook Pro
ProductVersion: 15.5
Model Identifier: MacBookPro15,2
Processor Name: Quad-Core Intel Core i5
Processor Speed: 2.4 GHz
Number of Processors: 1
#include <iostream>
#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/IOKitLib.h>
#include <IOKit/IOCFPlugIn.h>
#include <IOKit/usb/IOUSBLib.h>

// Disable/enable a USB device
bool toggleUSBDevice(uint16_t vendorID, uint16_t productID, bool enable) {
    std::cout << (enable ? "Enabling" : "Disabling") << " USB device with VID: 0x"
              << std::hex << vendorID << ", PID: 0x" << productID << std::endl;

    // Create a matching dictionary for the USB device with the given VID/PID
    CFMutableDictionaryRef matchingDict = IOServiceMatching(kIOUSBDeviceClassName);
    if (!matchingDict) {
        std::cerr << "Failed to create USB device matching dictionary." << std::endl;
        return false;
    }

    // Add the VID/PID matching criteria
    CFNumberRef vendorIDRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt16Type, &vendorID);
    CFNumberRef productIDRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt16Type, &productID);
    CFDictionarySetValue(matchingDict, CFSTR(kUSBVendorID), vendorIDRef);
    CFDictionarySetValue(matchingDict, CFSTR(kUSBProductID), productIDRef);
    CFRelease(vendorIDRef);
    CFRelease(productIDRef);

    // Get an iterator over the matching devices
    // (IOServiceGetMatchingServices consumes the dictionary reference, even on failure)
    io_iterator_t deviceIterator;
    if (IOServiceGetMatchingServices(kIOMainPortDefault, matchingDict, &deviceIterator) != KERN_SUCCESS) {
        std::cerr << "Failed to get USB device iterator." << std::endl;
        return false;
    }

    io_service_t usbDevice;
    bool result = false;
    int deviceCount = 0;

    // Iterate over all matching devices
    while ((usbDevice = IOIteratorNext(deviceIterator)) != IO_OBJECT_NULL) {
        deviceCount++;

        // Log the device's registry path
        char path[1024];
        if (IORegistryEntryGetPath(usbDevice, kIOServicePlane, path) == KERN_SUCCESS) {
            std::cout << "Found device at path: " << path << std::endl;
        }

        // Open the device interface
        IOCFPlugInInterface **plugInInterface = NULL;
        IOUSBDeviceInterface **deviceInterface = NULL;
        SInt32 score;
        IOReturn ret = IOCreatePlugInInterfaceForService(
            usbDevice,
            kIOUSBDeviceUserClientTypeID,
            kIOCFPlugInInterfaceID,
            &plugInInterface,
            &score);
        if (ret == kIOReturnSuccess && plugInInterface) {
            ret = (*plugInInterface)->QueryInterface(plugInInterface,
                                                     CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID),
                                                     (LPVOID *)&deviceInterface);
            (*plugInInterface)->Release(plugInInterface);
        }
        if (ret != kIOReturnSuccess || !deviceInterface) {
            std::cerr << "Failed to open USB device interface. Error: " << ret << std::endl;
            IOObjectRelease(usbDevice);
            continue;
        }

        // Disable/enable the device
        if (enable) {
            // Enable: force the device to re-enumerate
            // (note: this normally requires a successful USBDeviceOpen first)
            ret = (*deviceInterface)->USBDeviceReEnumerate(deviceInterface, 0);
            if (ret == kIOReturnSuccess) {
                std::cout << "Device enabled successfully." << std::endl;
                result = true;
            } else {
                std::cerr << "Failed to enable device. Error: " << ret << std::endl;
            }
        } else {
            // Disable: close the connection to the device
            ret = (*deviceInterface)->USBDeviceClose(deviceInterface);
            if (ret == kIOReturnSuccess) {
                std::cout << "Device disabled successfully." << std::endl;
                result = true;
            } else {
                std::cerr << "Failed to disable device. Error: " << ret << std::endl;
            }
        }

        // Release the device interface
        (*deviceInterface)->Release(deviceInterface);
        IOObjectRelease(usbDevice);
    }

    IOObjectRelease(deviceIterator);

    if (deviceCount == 0) {
        std::cerr << "No device found with the specified VID/PID." << std::endl;
        return false;
    }
    return result;
}
A bit of a novice to app development here, but I have a paid developer account. I have registered the identifier for MusicKit on the developer website (using the bundle identifier I've selected in Xcode), but the option to add MusicKit as a capability is not available in Xcode.
I've manually updated the certificates, closed Xcode and reopened it, started a new project, and tried a different demo project.
Apologies if I am missing something obvious, but could someone help me get this capability added?
Hello,
My company has an in-store app with FPS SDK 4.x (1024) keys. We've handed those keys over to a trusted third party and do not have them ourselves. We've been in-store for several years.
The person who created the keys in our organization mistakenly stored them encrypted to our third party's PGP keys, so we cannot decrypt them, and the third party has no mechanism to provide us with the keys even though the keys are in its runtime environment. They only have secure mechanisms for us to upload keys onto their servers.
We are trying to migrate to a different third-party DRM provider, and would like to obtain new keys. Unfortunately, the developer portal won't let me create new keys, saying that we have exceeded the number of keys allowed, which I assume is one.
Additionally, the new DRM provider can only support SDK 4.x keys, and it appears that we can only request SDK 5.x keys on the Apple Developer portal, as the SDK 4.0 option is grayed out. Regardless, it seems that we are not able to request any keys.
We've submitted a request to the support e-mail address and received an automated e-mail that the response should take a few days, but may take longer on occasion. It's now been a month. The e-mail says that the reply address is not monitored. Is there any way we can accelerate this?
Thank you,
Carlos
Among the millions of users of our online product, our metrics show that the rate of silent audio captures on iPadOS 18.4.1 and 18.5 has increased abnormally. However, we are unable to reproduce the issue. Has anyone encountered a similar issue? The parameters we use are as follows:
AudioSession:
category: AVAudioSessionCategoryPlayAndRecord
mode: AVAudioSessionModeDefault
options: 77 (mixWithOthers | allowBluetooth | defaultToSpeaker | allowAirPlay)
preferredSampleRate: 48000.000000
preferredIOBufferDuration: 0.010000
AudioUnit:
format.mFormatID = kAudioFormatLinearPCM;
format.mSampleRate = 48000.0;
format.mChannelsPerFrame = 2;
format.mBitsPerChannel = 16;
format.mFramesPerPacket = 1;
format.mBytesPerFrame = format.mChannelsPerFrame * 16 / 8;
format.mBytesPerPacket = format.mBytesPerFrame * format.mFramesPerPacket;
format.mFormatFlags = kAudioFormatFlagsNativeEndian | kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsSignedInteger;
component.componentType = kAudioUnitType_Output;
component.componentSubType = kAudioUnitSubType_RemoteIO;
component.componentManufacturer = kAudioUnitManufacturer_Apple;
component.componentFlags = 0;
component.componentFlagsMask = 0;
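For clarity, by "silent" we mean captures whose samples stay near zero; a sketch of such a check in Swift (the threshold is an arbitrary illustrative value):

// Sketch: classify a capture as silent when no 16-bit sample exceeds a small peak.
func isSilent(_ samples: UnsafeBufferPointer<Int16>, threshold: Int16 = 8) -> Bool {
    !samples.contains { $0 > threshold || $0 < -threshold }
}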
Issue Description
I'm implementing a system audio capture feature using AudioHardwareCreateProcessTap and AudioHardwareCreateAggregateDevice. The app successfully creates the tap and aggregate device, but when starting the IO procedure with AudioDeviceStart, it sometimes fails with OSStatus error 1852797029. (The operation couldn’t be completed. (OSStatus error 1852797029.)) The error occurs inconsistently, which makes it particularly difficult to debug and reproduce.
Questions
Has anyone encountered this intermittent "nope" error code (0x6e6f7065) when working with system audio capture?
Are there specific conditions or system states that might trigger this error sporadically?
Are there any known workarounds for handling this intermittent failure case?
Any insights or guidance would be greatly appreciated.
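One workaround I am considering is a simple bounded retry around AudioDeviceStart, assuming the tap and aggregate device are already created (the back-off interval is arbitrary):

import CoreAudio
import Foundation

// Sketch: retry AudioDeviceStart a few times when it intermittently returns
// 'nope' (0x6E6F7065); device and IOProc IDs are assumed to already exist.
func startWithRetry(device: AudioObjectID,
                    procID: AudioDeviceIOProcID?,
                    attempts: Int = 3) -> OSStatus {
    var status: OSStatus = noErr
    for attempt in 1...attempts {
        status = AudioDeviceStart(device, procID)
        if status == noErr { return noErr }
        print("AudioDeviceStart failed (attempt \(attempt)): \(status)")
        Thread.sleep(forTimeInterval: 0.1)  // back off 100 ms before retrying
    }
    return status
}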
Hi there,
Can anyone tell me how to possibly get approved as an Apple News Publisher in 2025?
We attempted in 2024, but received this message from Apple support:
"Thank you for your interest in Apple News. At this time, we're not accepting new applications."
When I inquired further, I got this second response:
"Apple News is no longer accepting unsolicited applications. To learn more about Apple News requirements, visit the Apple News support page. If you have any feedback, please use this form to send us your comments. Keep in mind that while we read all feedback, we are unable to respond to each submission individually."
My questions are:
Is this still the case? (Especially when you are a legit local news outlet)
Is there a link to apply as a news publisher? I don't seem to have that option at all.
Thanks for any feedback.
Hello,
I'm observing an intermittent memory leak being reported in the iOS Simulator when initializing and starting an AVAudioEngine. Even with minimal setup—just attaching a single AVAudioPlayerNode and connecting it to the mainMixerNode—Xcode's memory diagnostics and Instruments sometimes flag a leak.
Here is a simplified version of the code I'm using:
// This function is called when the user taps a button in the view controller:

#import "ViewController.h"

// Declared in media.m below
void soundCreate(void);

@interface ViewController ()
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
}

- (IBAction)myButtonAction:(id)sender {
    NSLog(@"Test");
    soundCreate();
}

@end

// media.m
#import <AVFoundation/AVFoundation.h>

static AVAudioEngine *audioEngine = nil;

void soundCreate(void)
{
    if (audioEngine != nil)
        return;

    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
    [[AVAudioSession sharedInstance] setActive:YES error:nil];

    audioEngine = [[AVAudioEngine alloc] init];
    AVAudioPlayerNode *playerNode = [[AVAudioPlayerNode alloc] init];
    [audioEngine attachNode:playerNode];
    [audioEngine connect:playerNode to:[audioEngine mainMixerNode] format:nil];
    [audioEngine startAndReturnError:nil];
}
In the memory leak report, the following call stack is repeated, seemingly in a loop:
ListenerMap::InsertEvent(XAudioUnitEvent const&, ListenerBinding*) AudioToolboxCore
ListenerMap::AddParameter(AUListener*, void*, XAudioUnitEvent const&) AudioToolboxCore
AUListenerAddParameter AudioToolboxCore
addOrRemoveParameterListeners(OpaqueAudioComponentInstance*, AUListenerBase*, AUParameterTree*, bool) AudioToolboxCore
0x180178ddf
I am playing FairPlay + Multi-Key content (fMP4) in Safari browser.
I want to distinguish between SD and HD video quality and play HD if HDCP is supported, or SD if it is not.
I have already confirmed that requiring HDCP is the default, and that a black screen is output in non-HDCP environments.
What I want is to improve the user experience by appropriately switching to SD/HD depending on HDCP support when playing DRM content.
Question: Is there an API or function that can detect HDCP support in Safari through JavaScript or other methods? Or is there a way to indirectly guess it?
Hi, I am a newbie here.
We have been given a task to build a robotic vision system to capture immersive video in a hazed environment, which will later be played on Apple Vision Pro. I am thinking of starting with 2 or 4 basic CMOS camera sensors, such as the IMX378, AR0144, or VD66GY, and designing an FPGA-based circuit to capture and store raw frame-by-frame data synchronously. Some initial frame processing, such as demosaicing and filtering, can also be done by the FPGA. Then I would use software post-processing to convert the data into a video format compatible with Apple Vision Pro.
Will this idea work? I can handle the raw data capture, but I’m unsure if this approach is feasible and what post-processing software I should use.
Thanks a lot for your suggestions!
Charlie