Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

Inquiry about Low-Latency Frame Interpolation & Super Resolution using VTFrameProcessor
Hello, I have implemented Low-Latency Frame Interpolation using the VTFrameProcessor framework, based on the sample code from https://developer.apple.com/kr/videos/play/wwdc2025/300. It is currently working well for both LIVE and VOD streams. However, I have a few questions regarding the lifecycle management and synchronization of this feature:

1. Common Questions (Applicable to both Frame Interpolation & Super Resolution)

1.1 Dynamic Toggling: Do you recommend enabling/disabling these features dynamically during playback, or is it better practice to configure them only during the initial setup/preparation phase? If dynamic toggling is supported, are there any recommended patterns for managing the VTFrameProcessor session lifecycle (e.g., startSession / endSession timing)?

1.2 Synchronization Method: I am currently using CADisplayLink to fetch frames from AVPlayerItemVideoOutput and perform processing. Is CADisplayLink the recommended approach for real-time frame acquisition with VTFrameProcessor? If the feature needs to be toggled on/off during active playback, are there any concerns or alternative approaches you would recommend?

1.3 Supported Resolution/Quality Range: What are the minimum and maximum video resolutions supported for each feature? Are there any aspect ratio restrictions (e.g., does it support 1:1 square videos)? Is there a recommended resolution range for optimal performance and quality?

2. Frame Interpolation Specific Questions

2.1 LIVE Stream Support: Is Low-Latency Frame Interpolation suitable for LIVE streaming scenarios where latency is critical? Are there any special considerations for LIVE vs. VOD?

3. Super Resolution Specific Questions

3.1 Adaptive Bitrate (ABR) Stream Support: In ABR (HLS/DASH) streams, the video resolution can change dynamically during playback. Is VTLowLatencySuperResolutionScaler compatible with ABR streams where the resolution changes mid-playback? If a resolution change occurs, should I recreate the VTLowLatencySuperResolutionScalerConfiguration and restart the session, or does the API handle this automatically?

3.2 Small/Square Resolution Issue: I observed that 144x144 (1:1 square) videos fail with the error "VTFrameProcessorErrorDomain Code=-19730: processWithSourceFrame within VCPFrameSuperResolutionProcessor failed", whereas 480x270 (16:9) videos work correctly. minimumDimensions reports 96x96, but 144x144 still fails. Is there an undocumented restriction on aspect ratio or a practical minimum resolution?

3.3 Scale Factor Selection: supportedScaleFactors returns [2.0, 4.0] for most resolutions. Is there a recommended scale factor for balancing quality and performance? Are there scenarios where 4.0x should be avoided?

The documentation on this specific topic seems limited, so I would appreciate any insights or advice. Thank you.
0 replies · 0 boosts · 38 views · 2h
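A minimal sketch of the CADisplayLink-driven pull loop described in the question above, for reference; it assumes an already-prepared AVPlayerItem, and the FrameTap type and its process(_:at:) hook are hypothetical stand-ins for handing frames to a VTFrameProcessor session:

import AVFoundation
import QuartzCore

final class FrameTap: NSObject {
    private let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])
    private var displayLink: CADisplayLink?

    func attach(to item: AVPlayerItem) {
        item.add(videoOutput)
        let link = CADisplayLink(target: self, selector: #selector(displayLinkFired(_:)))
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    func detach() {
        displayLink?.invalidate()
        displayLink = nil
    }

    @objc private func displayLinkFired(_ link: CADisplayLink) {
        // Ask the output for the frame that should be on screen at the next vsync.
        let itemTime = videoOutput.itemTime(forHostTime: link.targetTimestamp)
        guard videoOutput.hasNewPixelBuffer(forItemTime: itemTime),
              let pixelBuffer = videoOutput.copyPixelBuffer(forItemTime: itemTime,
                                                            itemTimeForDisplay: nil) else { return }
        process(pixelBuffer, at: itemTime)
    }

    private func process(_ buffer: CVPixelBuffer, at time: CMTime) {
        // Hypothetical hook: submit the buffer to the VTFrameProcessor session here.
    }
}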
Repeated NUIdentifier crash
Some of our app's users are repeatedly running into a crash in NeutrinoCore at -[NUIdentifier initWithNamespace:name:version:] + 2352. The stack trace suggests that multiple threads are performing PHFetchRequests, but that shouldn't cause a crash. It's isolated to a small number of users, which makes me think it's something related to their specific Photos databases (e.g., data corruption). Do you have any suggestions for how I might resolve this?

2 libsystem_c.dylib abort + 124
3 NeutrinoCore -[NUAssertionPolicyCrashReport notifyAssertion:] + 66
4 NeutrinoCore -[NUAssertionPolicyComposite notifyAssertion:] + 160
5 NeutrinoCore -[NUAssertionPolicyUnique notifyAssertion:] + 176
6 NeutrinoCore -[NUAssertionHandler handleFailureInFunction:file:lineNumber:currentlyExecutingJobName:description:arguments:] + 156
7 NeutrinoCore _NUAssertFailHandler + 176
8 NeutrinoCore -[NUIdentifier initWithNamespace:name:version:] + 2352
9 NeutrinoCore -[NUIdentifier initWithName:version:] + 84
10 NeutrinoCore -[NUIdentifier initWithName:] + 68
11 PhotoImaging +[PISchema identifier] + 36
12 PhotoImaging +[PISchema registeredPhotosSchemaIdentifier] + 32
13 PhotoImaging +[PIPhotoEditHelper newComposition] + 28
14 PhotoImaging +[PICompositionSerializer deserializeCompositionFromAdjustments:metadata:formatIdentifier:formatVersion:sidecarData:error:] + 160
15 PhotoImaging +[PICompositionSerializer deserializeCompositionFromData:formatIdentifier:formatVersion:sidecarData:error:] + 224
16 PhotoLibraryServices -[PLPhotoEditPersistenceManager loadCompositionFrom:formatIdentifier:formatVersion:sidecarData:error:] + 1848
17 PhotoLibraryServices +[PLPhotoEditPersistenceManager validateAdjustmentData:formatIdentifier:formatVersion:error:] + 108
18 Photos __167+[PHContentEditingInputRequestContext contentEditingInputRequestContextForAsset:requestID:managerID:networkAccessAllowed:downloadIntent:progressHandler:resultHandler:]_block_invoke + 260
19 Photos -[PHAdjustmentData(ContentEditingInput) _contentEditing_readableByClientWithVerificationBlock:] + 136
20 Photos -[PHAdjustmentData(ContentEditingInput) _contentEditing_requiredBaseVersionReadableByClient:verificationBlock:] + 88
21 Photos -[PHContentEditingInputRequestContext _adjustmentBaseVersionFromResult:request:canHandleAdjustmentData:] + 356
22 Photos -[PHContentEditingInputRequestContext produceChildRequestsForRequest:reportingIsLocallyAvailable:isDegraded:result:] + 624
23 Photos -[PHMediaRequestContext _produceChildRequestsForRequest:withResult:] + 88
24 Photos -[PHMediaRequestContext mediaRequest:didFinishWithResult:] + 92
25 Photos -[PHAdjustmentDataRequest _finishFromAsynchronousCallback] + 124
26 Photos __39-[PHAdjustmentDataRequest startRequest]_block_invoke + 584
27 PhotoLibraryServicesCore __106-[PLAssetsdResourceClient adjustmentDataForAsset:networkAccessAllowed:trackCPLDownload:completionHandler:]_block_invoke.84 + 880
28 CoreFoundation __invoking___ + 148
29 CoreFoundation -[NSInvocation invoke] + 424
30 Foundation <deduplicated_symbol> + 16
31 Foundation -[NSXPCConnection _decodeAndInvokeReplyBlockWithEvent:sequence:replyInfo:] + 528
32 Foundation __88-[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:]_block_invoke_5 + 188
33 libxpc.dylib _xpc_connection_reply_callout + 124
42 libsystem_pthread.dylib start_wqthread + 8
0 replies · 0 boosts · 44 views · 12h
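Since the crash fires while Photos deserializes stored adjustment data on behalf of the request, one hedged diagnostic sketch is to narrow the canHandleAdjustmentData check so foreign or unexpected adjustments are never deserialized for your app. The "com.example.myapp.filter" identifier and surrounding names are placeholders, and whether this sidesteps the assertion on corrupt databases is an assumption to verify:

import Photos

func requestEditingInput(for asset: PHAsset) {
    let options = PHContentEditingInputRequestOptions()
    options.isNetworkAccessAllowed = true
    options.canHandleAdjustmentData = { adjustmentData in
        // Returning false asks Photos for a rendered baseline image rather than
        // handing back the stored adjustments for this asset.
        adjustmentData.formatIdentifier == "com.example.myapp.filter"
    }
    asset.requestContentEditingInput(with: options) { input, info in
        // Inspect input / info[PHContentEditingInputErrorKey] here.
    }
}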
FairPlay: SDK4 vs SDK26 credentials/certificate for iOS/tvOS client apps
Hi,

We’re updating our KSM to support SPC v2/v3 and currently operate with both legacy SDK4 credentials (ASk + 1024-bit certificate) and SDK26 credentials (certificate bundle + provisioning data + 1024/2048-bit keys). Our client apps run across a wide range of iOS/tvOS versions, so we want to follow Apple’s recommended client strategy for certificate selection. The docs describe SHA‑1 vs. SHA‑256 in the SPC header, but do not specify which OS versions should use SDK4 vs. SDK26 credentials. Could you clarify:

Is there an official minimum iOS/tvOS version where you recommend SDK26 credentials for client apps?
For older OS versions (e.g., iOS 15), is SDK4 still the recommended choice for client apps?
Are there any official migration guidelines for client apps moving from SDK4 to SDK26 credentials?

Thanks in advance.
1 reply · 0 boosts · 120 views · 17h
USDZ model files turn black when opened in Preview
On macOS 26.2 (Tahoe), the Preview app fails to render many USDZ models correctly. The issue appears inconsistent:
• Some USDZ files open normally in Preview
• Some USDZ files open but the viewport turns completely black (no geometry, no material, no lighting)
• All the same files open correctly when opened using “Open With → Xcode”
This was not the behavior on macOS 15.7.2, where the exact same USDZ files rendered consistently in Preview without failures.
1 reply · 0 boosts · 94 views · 1d
Audio DSP Processing Issue / Metallic Ringing Artifacts when recording acoustic instruments on iPhone 17 Pro Max
Description: I have identified a specific issue when recording acoustic guitar and other instruments on the iPhone 17 Pro Max using native applications (Voice Memos, Camera). The recordings contain an unnatural metallic resonance (ringing artifacts) that should not be present.

Testing and Methodology:
Hardware Verification: Initially, I suspected a hardware defect in the audio chip or microphone. However, extensive testing with third-party software suggests this is likely a software-level issue.
AudioShare Test: I conducted a test using the AudioShare app in "Measurement Mode" (which bypasses standard iOS system-wide audio processing). In this mode, the audio remains perfectly clean, and the metallic ringing disappears entirely.

Conclusion: The issue is rooted in the DSP (Digital Signal Processing) algorithms that iOS applies for noise suppression or voice enhancement. These algorithms appear to misinterpret the high-frequency overtones of acoustic instruments as background noise and attempt to "filter" them, resulting in audible digital artifacts.

Comparison Results: This issue has not been observed on devices from other brands or on older iPhone models (preliminary tests suggest older models handle this better). Notably, the problem persists even in GarageBand, as the app still utilizes certain system-level processing layers.

Proposed Solution: I suggest adding a "Raw Audio" or "Instrument Mode" toggle within the Microphone/Audio settings for native apps. This mode should disable aggressive DSP processing, similar to how AVAudioSession.Mode.measurement works in specialized apps.

Attachments: I am attaching 4 archives, including a final "Measurement Mode" folder with comparative samples (Measurement Mode vs. Standard Mode). The artifacts are most prominent when monitored through headphones.
0 replies · 0 boosts · 56 views · 4d
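For context, the "Measurement Mode" behavior the post refers to corresponds to AVAudioSession's measurement mode, which requests minimal system input processing. A minimal configuration sketch for a recording session an app controls itself (it does not change what Voice Memos or Camera do):

import AVFoundation

func configureRawRecordingSession() throws {
    let session = AVAudioSession.sharedInstance()
    // .measurement asks the system to minimize input DSP such as noise suppression.
    try session.setCategory(.playAndRecord, mode: .measurement, options: [])
    try session.setActive(true)
}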
AVAudioEngine fails to start during FaceTime call (error 2003329396)
Is it possible to perform speech-to-text using AVAudioEngine to capture microphone input while on a FaceTime call at the same time? I tried implementing this, but whenever I attempt to start the AVAudioEngine while a FaceTime call is active, I get the following error: “The operation couldn’t be completed. (OSStatus error 2003329396)” I assume this might be due to microphone resource restrictions during FaceTime, but I’d like to confirm whether this limitation is at the system level, or whether there is any possible workaround or entitlement that allows concurrent microphone access. Has anyone encountered this issue or found a solution?
1 reply · 1 boost · 430 views · 4d
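For reference, a minimal sketch of the setup the question describes, assuming microphone permission is already granted; the speech-recognition hand-off is left as a comment:

import AVFoundation

func startCapture() throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)
    input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        // Feed buffer into an SFSpeechAudioBufferRecognitionRequest here.
    }
    // Reported to throw OSStatus error 2003329396 while a FaceTime call is active.
    try engine.start()
    return engine
}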
AVCam Sample Code - Undesired "Jump" in Video Recording Image
On iPhone 16 Pro Max (I haven't tested other devices), there's a noticeable jump in the framing of the preview video when you record in the iOS AVCam sample app. The same jump in camera framing can be observed by switching to the front-facing camera and then back to the rear one. It looks roughly consistent with switching between the 0.5x and 1x cameras (but not quite a match for the same viewable area in the Camera app), and it only happens when the app is initially loaded; once recording is started, it retains the 'closer' image no matter how many times recording is stopped/started thereafter. I'm relatively new to Swift and haven't done anything with the camera before, so odd 'buggy' behaviour in the sample code isn't helping me understand it! :-) Is there any way to fix this?
0 replies · 0 boosts · 144 views · 5d
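One hedged experiment (an assumption, not a confirmed fix): pin the session to the physical 1x wide-angle camera rather than a virtual multi-camera device, which removes the automatic lens switching that the described 0.5x/1x framing jump resembles:

import AVFoundation

func makeWideOnlySession() -> AVCaptureSession? {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInWideAngleCamera],  // physical 1x lens only
        mediaType: .video,
        position: .back
    )
    guard let device = discovery.devices.first,
          let input = try? AVCaptureDeviceInput(device: device) else { return nil }
    let session = AVCaptureSession()
    if session.canAddInput(input) { session.addInput(input) }
    return session
}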
AudioOutputUnitStart takes ~500 ms when using Push-to-Talk framework after beginTransmission
I’m working with the Push-to-Talk (PTT) framework and observing a consistent delay when starting audio capture.

Scenario:
A PTT call is already active.
The AVAudioSession is fully configured.
I request beginTransmission on the PTT channel.
I start my Audio Unit for recording (AudioOutputUnitStart).

Observed behavior: AudioOutputUnitStart takes ~500 ms. This happens whether I start the Audio Unit after didBeginTransmission or after AVAudioSession didActivate.

Comparison: Using the same Audio Unit, same format, and same configuration, but without the PTT framework, AudioOutputUnitStart takes ~200 ms.

Additional notes: I am not modifying or reconfiguring AVAudioSession when requesting beginTransmission. The audio session is already set up when the PTT call starts. There are no interruptions or route changes at the time of starting the Audio Unit.

Impact: This extra latency is significant for Push-to-Talk use cases where fast transmit start is critical.
1 reply · 0 boosts · 197 views · 6d
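A small measurement sketch matching the comparison above, assuming ioUnit is a fully initialized remote-I/O AudioUnit with input enabled; it only times the call, it does not explain the PTT-related overhead:

import AudioToolbox
import QuartzCore

func startAndMeasure(_ ioUnit: AudioUnit) {
    let begin = CACurrentMediaTime()
    let status = AudioOutputUnitStart(ioUnit)   // The call whose latency differs under PTT.
    let elapsedMs = (CACurrentMediaTime() - begin) * 1000
    print("AudioOutputUnitStart returned \(status) after \(elapsedMs) ms")
}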
How to mark Audio Unit as dirty (needing to be saved)
I'm working on a v2 Audio Unit that has some complicated internal state (audio, MIDI, other settings). When the internal state changes, I want to inform the host (e.g., Logic Pro) that my plugin state has changed and that the main window should show the 'project changed' status through the window close button. This was easy to achieve for the VST version of the plugin, but I can't figure out a way to do it for the Audio Unit. I've tried:

Notifying a change of the kAudioUnitProperty_ClassInfo property that stores the plugin state: unit->PropertyChanged(kAudioUnitProperty_ClassInfo, kAudioUnitScope_Global, 0);
Setting the kAudioUnitProperty_ClassInfo property value each time the plugin state changes.
Adding a new parameter called 'dirtystate', then toggling it and notifying the change each time the plugin state changes.

But nothing really makes Logic take notice. This should be an easy task, but I can't put my finger on it. How do I flag my AUv2 as needing its state saved (i.e., so the host project needs saving)?
0 replies · 0 boosts · 120 views · 6d
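One more avenue worth sketching (an assumption, not a confirmed Logic Pro behavior): broadcast the ClassInfo change through the AUEventListener mechanism from AudioUnitUtilities, which some hosts observe instead of raw property listeners:

import AudioToolbox

func notifyHostStateChanged(_ unit: AudioUnit) {
    var event = AudioUnitEvent()
    event.mEventType = kAudioUnitEvent_PropertyChange
    event.mArgument.mProperty = AudioUnitProperty(
        mAudioUnit: unit,
        mPropertyID: kAudioUnitProperty_ClassInfo,
        mScope: kAudioUnitScope_Global,
        mElement: 0
    )
    // nil sender: deliver the event to every registered listener.
    AUEventListenerNotify(nil, nil, &event)
}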
USB microphone input: Mac "Designed for iPad"
My app, natively iOS but built with the "Designed for iPad" option to run on Mac, does not recognise an attached USB microphone when running on a Mac. This line:

int32_t items = (int32_t) [[[AVAudioSession sharedInstance] availableInputs] count];

returns 1, which is the Mac's internal mic. On iPad and iPhone it sees both the internal mic and the USB mic. Is this an inherent "Designed for iPad" restriction, and is there some trick I can pull to get the USB microphone recognised by the system?
1 reply · 0 boosts · 209 views · 1w
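For diagnosis, a Swift sketch that lists every input port the session exposes, so you can see whether the USB device appears at all under "Designed for iPad":

import AVFoundation

func dumpAvailableInputs() {
    let session = AVAudioSession.sharedInstance()
    for port in session.availableInputs ?? [] {
        // e.g. "MicrophoneBuiltIn - MacBook Pro Microphone", "USBAudio - ..."
        print(port.portType.rawValue, "-", port.portName)
    }
}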
Cannot generate 2048-bit FairPlay Streaming certificate
Hello, I have a problem generating a 2048-bit FairPlay Streaming certificate. I tried generating an SDK v26.x certificate in two ways: (1) using an existing certificate, and (2) creating a new certificate. However, in both cases, Apple gives me a certificate bundle containing a 1024-bit certificate (fps_certificate.bin). I uploaded a 2048-bit CSR when creating the certificate. Just to note, I created an SDK v4.x certificate a few years ago. Has anyone bumped into the same issue, or am I missing something?
2 replies · 0 boosts · 233 views · 1w
Dell monitor volume control issue on iMac via USB-C
I have a new Dell 2725QC monitor that connects to my iMac (2019, 27-inch) via USB-C through the back port. The problem is that volume can currently only be controlled from the monitor's hardware buttons, not via software control using the Apple keyboard. What should I do in terms of writing code to handle this (Swift or Objective-C)? Is there a third-party solution for Intel iMacs and ARM Macs?
2 replies · 0 boosts · 141 views · 1w
What does CoreMediaErrorDomain code -15418 indicate during LL-HLS live playback?
Hello, I am currently developing a video player using a custom AVPlayer-based SDK and testing LL-HLS live streaming. I encountered a specific error, CoreMediaErrorDomain -15418, during playback. I have searched the official documentation and the forums, but could not find any information regarding this error code. I would like to inquire about the following:

Description & cause: What does error code -15418 specifically represent in the context of CoreMedia and LL-HLS?
Severity: Is this a critical error that halts playback, or is it merely a warning?

Environment details:
iOS version: iOS 26.2
Device: iPhone 15 Pro Max
Stream type: LL-HLS (Low-Latency HLS)
Impact: quality drops

Any insights or references to documentation would be greatly appreciated. Thank you.
1 reply · 0 boosts · 228 views · 1w
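A small observation sketch for capturing entries like -15418 as they land in the item's error log; playerItem is assumed to be the AVPlayerItem under test, and this only surfaces the log entries, it does not decode the error:

import AVFoundation

func observeErrorLog(for playerItem: AVPlayerItem) {
    NotificationCenter.default.addObserver(
        forName: .AVPlayerItemNewErrorLogEntry,
        object: playerItem,
        queue: .main
    ) { note in
        guard let item = note.object as? AVPlayerItem,
              let event = item.errorLog()?.events.last else { return }
        // Logs the domain, status code, and any server/CDN comment for each entry.
        print("domain: \(event.errorDomain), status: \(event.errorStatusCode), comment: \(event.errorComment ?? "none")")
    }
}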
AVAssetWriterInput.PixelBufferReceiver.append hangs indefinitely (suspends and never resumes)
I’ve been struggling with a very frustrating issue using the new iOS 26 Swift Concurrency APIs for video processing. My pipeline reads frames using AVAssetReader, processes them via CIContext (Lanczos upscale), and then appends the result to an AVAssetWriter using the new PixelBufferReceiver.

The problem: Execution randomly stops at the `await append(...)` call. The task suspends and never resumes. It is completely unpredictable: it might hang on the very first run, or it might work fine for 4-5 runs and then hang on the next one. It is independent of video duration: it happens with 5-second clips just as often as with long videos. There is no feedback from the system: no crash, no error thrown, and CPU usage drops to zero. The thread just stays in the suspended state indefinitely. If I manually cancel the operation and restart the VideoEngine, it usually starts working again for a few more attempts, which makes me suspect some internal resource exhaustion or a deadlock between the GPU context and the writer's input.

The code: Here is a simplified version of my processing loop:

private func proccessVideoPipeline(
    readerOutputProvider: AVAssetReaderOutput.Provider<CMReadySampleBuffer<CMSampleBuffer.DynamicContent>>,
    pixelBufferReceiver: AVAssetWriterInput.PixelBufferReceiver,
    nominalFrameRate: Float,
    targetSize: CGSize
) async throws {
    while !Task.isCancelled, let payload = try await readerOutputProvider.next() {
        let sampleBufferInfo: (imageBuffer: CVPixelBuffer?, presentationTimeStamp: CMTime) = payload.withUnsafeSampleBuffer { sampleBuffer in
            return (sampleBuffer.imageBuffer, sampleBuffer.presentationTimeStamp)
        }
        guard let currentPixelBuffer = sampleBufferInfo.imageBuffer else {
            throw AsyncFrameProcessorError.missingImageBuffer
        }
        guard let pixelBufferPool = pixelBufferReceiver.pixelBufferPool else {
            throw NSError(domain: "PixelBufferPool", code: -1, userInfo: [NSLocalizedDescriptionKey: "No pixel buffer pool available"])
        }
        let newPixelBuffer = try pixelBufferPool.makeMutablePixelBuffer()
        let newCVPixelBuffer = newPixelBuffer.withUnsafeBuffer({ $0 })
        try upscale(currentPixelBuffer,
                    outputPixelBuffer: newCVPixelBuffer,
                    targetSize: targetSize)
        let presentationTime = sampleBufferInfo.presentationTimeStamp
        try await pixelBufferReceiver.append(.init(unsafeBuffer: newCVPixelBuffer), with: presentationTime)
    }
}

Does anyone know how to fix it?
0 replies · 0 boosts · 85 views · 1w
Apple Music API no longer returns standalone singles as “single” albums
I’ve been using the Apple Music API for quite a while now, and a recent change must have happened that is quite disruptive. On many occasions, artists release singles from an album as part of promoting that album. For recent examples, Harry Styles released “Aperture” (a single) to promote his upcoming album “Kiss All The Time. Disco, Occasionally“, and Bruno Mars released “I Just Might”, a single from the upcoming album “The Romantic”. Previously, those would be returned by the ”artists/{artist_id}/albums” endpoint with a “- Single” suffix. But it seems a recent change happened where they only appear as playable tracks inside the album. This behavior is also evident in the Apple Music app itself: those singles no longer appear under “Singles & EPs”. Instead, they are only visible if the single becomes popular enough to be shown under “Top Songs“; otherwise one would have to know to tap on the (future) album to discover whether there are released singles. Meanwhile, Spotify’s API returns those as singles properly, just like the Apple Music API used to. This change must be recent, but the question is whether it’s intentional, and if so, how the API can be used from here on out to “extract” those singles and represent them.
0 replies · 1 boost · 152 views · 1w
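A hedged sketch of the kind of lookup the post describes: fetch the artist's albums and filter on the album attributes' isSingle flag. The developerToken and artistID values are placeholders, and whether promotional singles still carry isSingle after this change is exactly the open question:

import Foundation

func fetchSingles(artistID: String, developerToken: String) async throws -> [[String: Any]] {
    let url = URL(string: "https://api.music.apple.com/v1/catalog/us/artists/\(artistID)/albums?limit=100")!
    var request = URLRequest(url: url)
    request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")

    let (data, _) = try await URLSession.shared.data(for: request)
    let json = try JSONSerialization.jsonObject(with: data) as? [String: Any]
    let albums = (json?["data"] as? [[String: Any]]) ?? []
    // Keep only albums whose catalog attributes mark them as singles.
    return albums.filter {
        (($0["attributes"] as? [String: Any])?["isSingle"] as? Bool) == true
    }
}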
Camera Shutter Sound Control on iOS (Programmatic Query)
Hi Apple Developer Support Team, We are developing an iOS application using a camera package within a hybrid (cross-platform) framework, and we would like to confirm whether it is possible to disable the camera shutter sound programmatically. As per our understanding, the shutter sound on iOS is system-controlled and depends on the device’s silent/ring mode, and there is no App Store–approved API available to force-disable this sound. Kindly confirm whether this understanding is correct or if any supported alternative approach exists for hybrid or native implementations. Thank you for your clarification. Best regards, ParkhyaSolutions
1 reply · 0 boosts · 449 views · 1w
FairPlay Client Question
The ASk is used by the KSM to derive the dASk, which is then used to decrypt the SK...R1. If the only thing we give the client is the certificate, how does it encrypt the SK...R1 so the server is able to process it? It would be nice to know how this works generally, because I've been getting questions about it and can't provide a helpful answer. Thanks in advance.
1 reply · 0 boosts · 102 views · 1w
AVCaptureDevice.RotationCoordinator.videoRotationAngleForHorizonLevelCapture: behavior is different with iPhone 17
The front-facing camera on iPhone 16 (and every previous model) gives the following values for AVCaptureDevice.RotationCoordinator.videoRotationAngleForHorizonLevelCapture:

90 degrees for portrait
180 degrees for landscape left
270 degrees for upside-down
0 degrees for landscape right

Using these values, a transform is calculated:

var transform: CGAffineTransform {
    let degrees = rotationCoordinator.videoRotationAngleForHorizonLevelCapture
    let radians = degrees * .pi / 180.0
    return CGAffineTransform(rotationAngle: radians)
}

And then applied to the AVAssetWriterInput:

videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings, sourceFormatHint: videoFormatDescription)
videoInput.transform = transform

This ensures the correct transform is added to the metadata so that the recorded video plays in the correct orientation. However, with the iPhone 17 Pro and iPhone 17 Pro Max front-facing cameras, videoRotationAngleForHorizonLevelCapture returns different values:

0 degrees for portrait
90 degrees for landscape left
180 degrees for upside-down
270 degrees for landscape right

So this approach breaks down, and the video orientation is incorrect. How is this intended to be handled?
2 replies · 0 boosts · 459 views · 1w