Our team conducted security testing and found one vulnerability in FairPlay license acquisition.
Our QA engineer manually changed the device's system date and time (setting it 4 days into the future) and was able to successfully obtain a license response and initiate playback on an iOS device. However, on an Android device, the license acquisition failed.
Can you please tell us whether time-manipulation detection is available in the FairPlay Streaming SDK?
I am playing FairPlay + Multi-Key content (fMP4) in Safari browser.
I want to implement logic that distinguishes between SD and HD video quality, playing HD if HDCP is supported and SD if it is not.
I have already confirmed that HDCP support is the default, and that a black screen is output in non-HDCP environments.
What I want is to improve the user experience by appropriately switching to SD/HD depending on HDCP support when playing DRM content.
Question: Is there an API or other method that can detect HDCP support in Safari through JavaScript? Or is there a way to infer it indirectly?
Topic:
Media Technologies
SubTopic:
Streaming
Tags:
FairPlay Streaming
WebKit
Safari
HTTP Live Streaming
Hi everyone,
I wanted to bring up a question about Core Audio and its potential for future updates or improvements, specifically regarding latency optimization. As someone who relies on Core Audio for real-time audio processing, any enhancements in this area would be incredibly beneficial for professionals in the industry.
Does anyone know if Apple has shared any plans or updates regarding Core Audio’s performance, particularly for low-latency applications? I’d appreciate any insights or advice from the community!
Thanks so much!
Best,
Michael
Hi, in my project I am using AVFoundation to record audio. We use the following AVAudioMixerNode method to record the audio buffers:
func installTap(
    onBus bus: AVAudioNodeBus,
    bufferSize: AVAudioFrameCount,
    format: AVAudioFormat?,
    block tapBlock: @escaping AVAudioNodeTapBlock
)
It works perfectly fine.
But in production, for a small percentage of users, we see an issue where, after recording a few buffers, recording stops automatically without the audio engine being stopped. Can anyone help explain why this happens? I have also observed mediaServicesWereResetNotification and added a log on receiving this notification, but when this issue happens I don't see that log. Also, is there any callback when the engine stops?
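For context, a minimal sketch of the kind of setup we use (the bus, buffer size, and the restart-on-configuration-change handling are illustrative, not our production code):

import AVFoundation

// Minimal sketch: install a tap on the main mixer and watch for engine
// configuration changes, which is one documented point where the engine can stop.
let engine = AVAudioEngine()
let mixer = engine.mainMixerNode

mixer.installTap(onBus: 0,
                 bufferSize: 4096,
                 format: mixer.outputFormat(forBus: 0)) { buffer, time in
    // Handle each captured AVAudioPCMBuffer here (e.g. append it to a file).
}

// AVAudioEngine posts this when its configuration changes (route change,
// media services reset, etc.); checking isRunning here is one way to detect
// that the engine stopped without us calling stop().
NotificationCenter.default.addObserver(forName: .AVAudioEngineConfigurationChange,
                                       object: engine,
                                       queue: .main) { _ in
    if !engine.isRunning {
        try? engine.start()
    }
}

try? engine.start()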
Hello, my company is developing a product that will send data to/from the phone over cable and Wi-Fi. I have three questions:
Do we need an MFi authentication chip in our product if we plan to send video and commands to the iPhone/iPad over USB or Lightning cable?
Likewise, do we need an MFi authentication chip for communication over Wi-Fi? (Informal research suggests that the answer is no to this one.)
And do we even still need MFi certification at all for Wi-Fi communications? (We are not using HomeKit.)
Thank you!
Topic:
Media Technologies
SubTopic:
Photos & Camera
iPadOS 18.3 beta 3 (22D5055b) fixed the issue for me and my 7th generation iPad.
Topic:
Media Technologies
SubTopic:
Audio
If I call AudioDeviceStart on an AudioDevice in my application, then "Hey Siri" will not wake Siri up. Our users have complained that Siri does not get activated while my application is running. We found that calling AudioDeviceStart is causing the issue.
How should we handle this?
As the image access policy has changed with Android targeting SDK 34, I’m planning to update the way our app accesses photos.
We are using the react-native-image-picker library to access images.
On Android, the system no longer prompts the user for image access permissions, but on iOS, permission requests still appear.
Since Android no longer requires explicit permissions, I’ve removed the permission request logic for Android.
In this case, is it also safe to remove the permission request for iOS?
In our app, photo access is only used for changing the user profile picture and attaching images when writing a post on the bulletin board.
Are there any limitations or considerations for this kind of usage?
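For what it's worth, on the iOS side the system photo picker can be used without any photo-library permission prompt at all. A minimal native sketch (whether react-native-image-picker uses this picker under the hood is an assumption you would need to verify):

import PhotosUI
import UIKit

// Minimal sketch: present the system photo picker, which runs out of process
// and does not require photo-library authorization. The class name and wiring
// are illustrative.
final class ProfilePhotoPicker: NSObject, PHPickerViewControllerDelegate {
    func present(from presenter: UIViewController) {
        var config = PHPickerConfiguration()
        config.filter = .images          // still images only
        config.selectionLimit = 1        // one image, e.g. a profile picture
        let picker = PHPickerViewController(configuration: config)
        picker.delegate = self
        presenter.present(picker, animated: true)
    }

    func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
        picker.dismiss(animated: true)
        guard let provider = results.first?.itemProvider,
              provider.canLoadObject(ofClass: UIImage.self) else { return }
        provider.loadObject(ofClass: UIImage.self) { image, _ in
            // Use the selected UIImage (e.g. upload it as the new profile picture).
            _ = image as? UIImage
        }
    }
}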
I want to modify a photo's EXIF information, which means passing the original image through CGImageDestinationAddImageFromSource and adding the modified EXIF information via the properties parameter.
Here is the complete process:
static func setImageMetadata(asset: PHAsset, exif: [String: Any]) {
    let options = PHContentEditingInputRequestOptions()
    options.canHandleAdjustmentData = { (adjustmentData) -> Bool in
        return true
    }
    asset.requestContentEditingInput(with: options, completionHandler: { input, map in
        guard let inputNN = input else {
            return
        }
        guard let url = inputNN.fullSizeImageURL else {
            return
        }
        let output = PHContentEditingOutput(contentEditingInput: inputNN)
        let adjustmentData = PHAdjustmentData(formatIdentifier: AppInfo.appBundleId(), formatVersion: AppInfo.appVersion(), data: Data())
        output.adjustmentData = adjustmentData
        let outputURL = output.renderedContentURL
        guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else {
            return
        }
        guard let dest = CGImageDestinationCreateWithURL(outputURL as CFURL, UTType.jpeg.identifier as CFString, 1, nil) else {
            return
        }
        CGImageDestinationAddImageFromSource(dest, source, 0, exif as CFDictionary)
        let d = CGImageDestinationFinalize(dest)
        // d is true, and I checked the content of outputURL: the image has been
        // written correctly, can be converted to a UIImage, and looks fine.
        PHPhotoLibrary.shared().performChanges {
            let changeReq = PHAssetChangeRequest(for: asset)
            changeReq.contentEditingOutput = output
        } completionHandler: { succ, err in
            if !succ {
                print(err) // error 3303 here, always!
            }
        }
    })
}
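One note on the shape of the exif dictionary (this is an assumption about what the parameter should contain, not something shown above): the properties passed to CGImageDestinationAddImageFromSource are a top-level image-properties dictionary, with EXIF tags nested under kCGImagePropertyExifDictionary, roughly like this:

import ImageIO

// Hypothetical illustration: EXIF tags live inside a nested dictionary keyed by
// kCGImagePropertyExifDictionary; the values shown here are placeholders.
let exifTags: [CFString: Any] = [
    kCGImagePropertyExifUserComment: "edited by my app",
    kCGImagePropertyExifDateTimeOriginal: "2024:01:01 12:00:00"
]
let imageProperties: [CFString: Any] = [
    kCGImagePropertyExifDictionary: exifTags
]
// ...then pass imageProperties (as CFDictionary) as the last argument to
// CGImageDestinationAddImageFromSource.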
Hey - I am developing an app that uses the camera for recording video. I added the ability to choose a frame rate and resolution, and all combinations work perfectly fine except for 4K 120 fps on the new iPhone 16 Pro. This just shows black in the preview. I tried to record even though the preview was black, but the recording is also just a black screen. Is there anything special that needs to be done in the camera setup for 4K 120 fps to work? I have my camera setup code attached. Is it possible this is a bug in Apple's code, since this works with every other combination (1080p up to 240 fps and 4K up to 60 fps)?
Thanks so much for the help.
class CameraManager: NSObject {
    enum Errors: Error {
        case noCaptureDevice
        case couldNotAddInput
        case unsupportedConfiguration
    }

    enum Resolution {
        case hd1080p
        case uhd4K

        var preset: AVCaptureSession.Preset {
            switch self {
            case .hd1080p:
                return .hd1920x1080
            case .uhd4K:
                return .hd4K3840x2160
            }
        }

        var dimensions: CMVideoDimensions {
            switch self {
            case .hd1080p:
                return CMVideoDimensions(width: 1920, height: 1080)
            case .uhd4K:
                return CMVideoDimensions(width: 3840, height: 2160)
            }
        }
    }

    enum CameraType {
        case wide
        case ultraWide

        var captureDeviceType: AVCaptureDevice.DeviceType {
            switch self {
            case .wide:
                return .builtInWideAngleCamera
            case .ultraWide:
                return .builtInUltraWideCamera
            }
        }
    }

    enum FrameRate: Int {
        case fps60 = 60
        case fps120 = 120
        case fps240 = 240
    }

    let orientationManager = OrientationManager()
    let captureSession: AVCaptureSession
    let previewLayer: AVCaptureVideoPreviewLayer
    let movieFileOutput = AVCaptureMovieFileOutput()
    let videoDataOutput = AVCaptureVideoDataOutput()
    private var videoCaptureDevice: AVCaptureDevice?

    override init() {
        self.captureSession = AVCaptureSession()
        self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
        super.init()
        self.previewLayer.videoGravity = .resizeAspect
    }

    func configureSession(resolution: Resolution, frameRate: FrameRate, stabilizationEnabled: Bool, cameraType: CameraType, sampleBufferDelegate: AVCaptureVideoDataOutputSampleBufferDelegate?) throws {
        assert(Thread.isMainThread)

        captureSession.beginConfiguration()
        defer { captureSession.commitConfiguration() }

        captureSession.sessionPreset = resolution.preset

        if captureSession.canAddOutput(movieFileOutput) {
            captureSession.addOutput(movieFileOutput)
        } else {
            throw Errors.couldNotAddInput
        }

        videoDataOutput.setSampleBufferDelegate(sampleBufferDelegate, queue: DispatchQueue(label: "VideoDataOutputQueue"))
        if captureSession.canAddOutput(videoDataOutput) {
            captureSession.addOutput(videoDataOutput)
            // Set the video orientation if needed
            if let connection = videoDataOutput.connection(with: .video) {
                //connection.videoOrientation = .portrait
            }
        } else {
            throw Errors.couldNotAddInput
        }

        guard let videoCaptureDevice = AVCaptureDevice.default(cameraType.captureDeviceType, for: .video, position: .back) else {
            throw Errors.noCaptureDevice
        }

        let useDimensions = resolution.dimensions
        guard let format = videoCaptureDevice.formats.first(where: { format in
            let dimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
            let isRes = dimensions.width == useDimensions.width && dimensions.height == useDimensions.height
            let frameRates = format.videoSupportedFrameRateRanges
            return isRes && frameRates.contains(where: { $0.maxFrameRate >= Float64(frameRate.rawValue) })
        }) else {
            throw Errors.unsupportedConfiguration
        }

        self.videoCaptureDevice = videoCaptureDevice

        do {
            let videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
            if captureSession.canAddInput(videoInput) {
                captureSession.addInput(videoInput)
            } else {
                throw Errors.couldNotAddInput
            }

            try videoCaptureDevice.lockForConfiguration()
            videoCaptureDevice.activeFormat = format
            videoCaptureDevice.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue))
            videoCaptureDevice.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue))
            videoCaptureDevice.activeMaxExposureDuration = CMTime(seconds: 1.0 / 960, preferredTimescale: 1000000)
            videoCaptureDevice.exposureMode = .locked
            videoCaptureDevice.unlockForConfiguration()
        } catch {
            throw error
        }

        configureStabilization(enabled: stabilizationEnabled)
    }
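As a debugging aid (not part of my setup code above, and the function name is made up), enumerating the device's formats shows which ones actually advertise 3840x2160 at 120 fps, so the format picked by configureSession can be compared against them:

import AVFoundation

// Diagnostic sketch: print every back wide-angle format that is 4K and
// reports a maximum frame rate of at least 120 fps.
func dump4K120Formats() {
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else { return }
    for format in device.formats {
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        guard dims.width == 3840, dims.height == 2160 else { continue }
        let maxRate = format.videoSupportedFrameRateRanges.map(\.maxFrameRate).max() ?? 0
        guard maxRate >= 120 else { continue }
        print("4K candidate:", format, "maxFrameRate:", maxRate, "binned:", format.isVideoBinned)
    }
}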
Feature Request: Long-Lived Access to Personal Apple Music Data
Use Case Summary
I'm developing a personal portfolio website (using Nuxt) and want to display information from my own Apple Music library - showcasing personal playlists, recently played tracks, or a read-only "now playing" widget. This is purely for personal use on my website and doesn't require other users to log in.
With Spotify's API, implementing this was straightforward thanks to automatic token refresh. I want a similarly seamless integration with Apple Music.
Challenge with MusicKit and Music User Tokens
Apple Music API requirements
Apple's Music API requires a valid Music User Token (MUT) for requests involving personal library data. Beyond the Apple Developer Token, you must obtain a user-specific token via MusicKit authentication to access your own library playlists, play history, or current playback status.
Token expiration and manual renewal
Music User Tokens expire after approximately 6 months without any mechanism to automatically refresh or renew them - unlike typical OAuth flows that provide refresh tokens. Apple's guidance suggests the device (e.g., iPhone) is responsible for obtaining new user tokens when old ones expire. This works for interactive apps on Apple devices but fails in server-side or long-lived web contexts like a personal website widget.
Impact on personal projects
Displaying Apple Music data on a public-facing site becomes difficult. I would need to periodically re-authenticate through the MusicKit JS flow every few months just to keep a widget alive. Embedding credentials in a public site is insecure, and manual token refreshing is cumbersome and easy to forget.
Comparison to Spotify's Token Model
Spotify's API offers a developer-friendly authentication model. Their OAuth flow provides a Refresh Token that applications can use to obtain new access tokens automatically without requiring user re-authorization. This means a personal app can maintain continuous access to a user's Spotify data for extended periods until access is revoked.
When building a similar feature with Spotify, this automatic token renewal was crucial. I could safely store the refresh token on my server and have my app periodically update the access token. Many developers have created public-facing widgets showing currently playing tracks on blogs or GitHub profiles using this model. Unfortunately, Apple Music's API lacks an equivalent capability, putting it at a disadvantage for personal projects.
Proposed Solutions
I request Apple's consideration for one of these enhancements:
Provide a mechanism to refresh or extend a Music User Token programmatically for server-side applications. This could be an OAuth-style refresh token issued alongside the MUT, or a dedicated endpoint to exchange an expired MUT for a new one. This would enable renewal without a full user re-auth/login each time.
Allow developers to access their own Apple Music library data with just the long-lived Developer Token. Apple could permit GET requests to personal library endpoints using the Developer Token alone, or a special token tied to the developer's Apple ID. This access would be read-only - no ability to modify the library, purely for retrieving data. It could be an opt-in feature in the Apple Developer account settings.
Either solution would significantly improve the developer experience for Apple Music API in personal projects.
Security and Privacy Considerations
This request is not about accessing others' data or creating privacy loopholes - it's about empowering an Apple Music subscriber to access their own information more conveniently. The proposed options respect privacy principles:
The data accessed is only what the user already has access to - their own playlists, library items, or playback status.
An automatic token refresh can be designed securely (revocable tokens bound to a single account with no increase in permissions).
Read-only developer token access could be restricted to non-sensitive data and require explicit opt-in.
Conclusion
I request an improvement to Apple Music's developer experience through either (1) an automatic Music User Token refresh mechanism, or (2) a provision for read-only personal library access using a Developer Token. This would bring Apple Music integration capabilities closer to parity with services like Spotify for personal projects.
I ask Apple's Developer Relations and the Apple Music API team to consider this feature request. If there are existing best practices or workarounds with current APIs, I would appreciate guidance.
I invite feedback from Apple or other developers. Are there known patterns for maintaining an Apple Music user token for server-side applications, or any plans to support non-interactive use cases? Any advice is welcome.
Thank you for your consideration. I look forward to integrating Apple Music into my personal site as smoothly as with other services, and believe many developers would benefit from this added flexibility.
Sources:
User Authentication for MusicKit - Requirements for Music User Tokens
StackOverflow: Do Apple Music User Tokens expire? - Confirmation of 6-month expiration
MetaBrainz GSoC Blog - Documentation of MusicKit authentication limitations
Apple Developer Forums - Information on token renewal behavior
Spotify for Developers - Documentation on refresh token mechanism
Topic:
Media Technologies
SubTopic:
Audio
Tags:
Apple Music API
MusicKit
MusicKit JS
Apple Music Feed
Issue Description
I'm implementing a system audio capture feature using AudioHardwareCreateProcessTap and AudioHardwareCreateAggregateDevice. The app successfully creates the tap and aggregate device, but when starting the IO procedure with AudioDeviceStart, it sometimes fails with OSStatus error 1852797029. (The operation couldn’t be completed. (OSStatus error 1852797029.)) The error occurs inconsistently, which makes it particularly difficult to debug and reproduce.
Questions
Has anyone encountered this intermittent "nope" error code (0x6e6f7065) when working with system audio capture?
Are there specific conditions or system states that might trigger this error sporadically?
Are there any known workarounds for handling this intermittent failure case?
Any insights or guidance would be greatly appreciated.
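For reference, the numeric status maps to the four-character code like this (a small helper written for illustration; the function name is made up):

import Foundation

// Illustrative helper: 1852797029 == 0x6e6f7065, whose ASCII bytes spell "nope".
func fourCharCode(from status: OSStatus) -> String {
    let value = UInt32(bitPattern: status)
    let bytes = [24, 16, 8, 0].map { UInt8((value >> $0) & 0xFF) }
    return String(bytes: bytes, encoding: .ascii) ?? "\(status)"
}
// fourCharCode(from: 1852797029) // "nope"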
Hello,
I'm observing an intermittent memory leak being reported in the iOS Simulator when initializing and starting an AVAudioEngine. Even with minimal setup—just attaching a single AVAudioPlayerNode and connecting it to the mainMixerNode—Xcode's memory diagnostics and Instruments sometimes flag a leak.
Here is a simplified version of the code I'm using:
// This function is called when the user taps a button in the view controller:
#import "ViewController.h"
@interface ViewController ()
@end
@implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
}
- (IBAction)myButtonAction:(id)sender {
NSLog(@"Test");
soundCreate();
}
@end
// media.m
static AVAudioEngine *audioEngine = nil;
void soundCreate(void)
{
if (audioEngine != nil)
return;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryAmbient error:nil];
[[AVAudioSession sharedInstance] setActive:YES error:nil];
audioEngine = [[AVAudioEngine alloc] init];
AVAudioPlayerNode* playerNode = [[AVAudioPlayerNode alloc] init];
[audioEngine attachNode:playerNode];
[audioEngine connect:playerNode to:(AVAudioNode *)[audioEngine mainMixerNode] format:nil];
[audioEngine startAndReturnError:nil];
}
In the memory leak report, the following call stack is repeated, seemingly in a loop:
ListenerMap::InsertEvent(XAudioUnitEvent const&, ListenerBinding*) AudioToolboxCore
ListenerMap::AddParameter(AUListener*, void*, XAudioUnitEvent const&) AudioToolboxCore
AUListenerAddParameter AudioToolboxCore
addOrRemoveParameterListeners(OpaqueAudioComponentInstance*, AUListenerBase*, AUParameterTree*, bool) AudioToolboxCore
0x180178ddf
As of iOS 18, as far as I can tell, there still appear to be no AVPlayer options that allow users to toggle the caption/subtitle track on and off. Does anyone know of a way to do this with AVPlayer or with SwiftUI's VideoPlayer?
The following code reproduces this issue. It can be pasted into an app playground. This is a random video and a random vtt file I found on the internet.
import SwiftUI
import AVKit
import UIKit

struct ContentView: View {
    private let video = URL(string: "https://server15700.contentdm.oclc.org/dmwebservices/index.php?q=dmGetStreamingFile/p15700coll2/15.mp4/byte/json")!
    private let captions = URL(string: "https://gist.githubusercontent.com/samdutton/ca37f3adaf4e23679957b8083e061177/raw/e19399fbccbc069a2af4266e5120ae6bad62699a/sample.vtt")!

    @State private var player: AVPlayer?

    var body: some View {
        VStack {
            VideoPlayerView(player: player)
                .frame(maxWidth: .infinity, maxHeight: 200)
        }
        .task {
            // Captions won't work for some reason
            player = try? await loadPlayer(video: video, captions: captions)
        }
    }
}

private struct VideoPlayerView: UIViewControllerRepresentable {
    let player: AVPlayer?

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        let controller = AVPlayerViewController()
        controller.player = player
        controller.modalPresentationStyle = .overFullScreen
        return controller
    }

    func updateUIViewController(_ uiViewController: AVPlayerViewController, context: Context) {
        uiViewController.player = player
    }
}

private func loadPlayer(video: URL, captions: URL?) async throws -> AVPlayer {
    let videoAsset = AVURLAsset(url: video)
    let videoPlusSubtitles = AVMutableComposition()
    try await videoPlusSubtitles.add(videoAsset, withMediaType: .video)
    try await videoPlusSubtitles.add(videoAsset, withMediaType: .audio)

    if let captions {
        let captionAsset = AVURLAsset(url: captions)
        // Must add as .text. .closedCaption and .subtitle don't work?
        try await videoPlusSubtitles.add(captionAsset, withMediaType: .text)
    }

    return await AVPlayer(playerItem: AVPlayerItem(asset: videoPlusSubtitles))
}

private extension AVMutableComposition {
    func add(_ asset: AVAsset, withMediaType mediaType: AVMediaType) async throws {
        let duration = try await asset.load(.duration)
        try await asset.loadTracks(withMediaType: mediaType).first.map { track in
            let newTrack = self.addMutableTrack(withMediaType: mediaType, preferredTrackID: kCMPersistentTrackID_Invalid)
            let range = CMTimeRangeMake(start: .zero, duration: duration)
            try newTrack?.insertTimeRange(range, of: track, at: .zero)
        }
    }
}
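For reference, the approach I would otherwise expect to use (not what the snippet above does, and it assumes the asset actually publishes a legible media-selection group, e.g. an HLS stream with subtitle renditions) would be something like:

import AVFoundation

// Hedged sketch: select the first legible (caption/subtitle) option if the
// asset exposes one; a hand-built AVMutableComposition may not expose any.
func enableFirstLegibleOption(on playerItem: AVPlayerItem) async throws {
    guard let group = try await playerItem.asset.loadMediaSelectionGroup(for: .legible),
          let option = group.options.first else { return }
    playerItem.select(option, in: group)
}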
Short summary
When setting exposureMode to .locked or .custom the brightness of a video stream still changes depending on the composition and contrast of the visible scene. These changes seem to come from contrast enhancements or dynamic range optimizations and totally break any analysis of the image that requires to assess absolute luminance. While exposure lock seems to indeed lock the physical exposure parameters of the camera (shutter speed and ISO), I cannot find any way to control these "soft" modifiers.
Details
Background
I am the developer of the app "phyphox", an educational app that makes the phone's sensors accessible to students as measurement tools in science experiments. Currently I am working on implementing photometric measurements through the camera and one very important aspect of it is luminance measurements.
This is particularly relevant since the phone's light sensor has no publicly accessible API, and the camera could to some extent make experiments available to Apple users that are otherwise only possible on Android devices.
Implementation
The app uses AVFoundation and explicitly picks individual cameras since camera groups do not support custom exposure settings. This means that it handles camera switching during zoom by itself and even implements its own auto exposure routines to optimize for the use in experiments. Therefore it always stays in custom exposure mode. The app uses YUV420 color space and the individual frames are analyzed in Metal using compute shaders.
However, the effects discussed here still occur if I remove all code to control the camera and replace it with a simple sequence of setting the exposure mode to custom, setting custom exposure values, setting a fixed white balance and then setting the exposure mode to locked as suggested on stackoverflow. This neither helps on an iPhone 14 Pro nor on an iPhone 8 despite a report on the developer forums that it would resolve the issue for older devices.
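For reference, the sequence I tried looks roughly like this (my paraphrase of the stackoverflow suggestion; the duration, ISO, and gain values are illustrative placeholders and must lie within the device's supported ranges):

import AVFoundation

// Rough sketch of the lock sequence described above. Values are placeholders
// and must be clamped to activeFormat.minISO/maxISO and maxWhiteBalanceGain.
func lockCamera(_ device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    // 1. Custom, fixed shutter speed and ISO.
    device.setExposureModeCustom(duration: CMTime(value: 1, timescale: 100),
                                 iso: 100,
                                 completionHandler: nil)
    // 2. Fixed white balance gains.
    let gains = AVCaptureDevice.WhiteBalanceGains(redGain: 1.5, greenGain: 1.0, blueGain: 1.8)
    device.setWhiteBalanceModeLocked(with: gains, completionHandler: nil)
    // 3. Finally switch to locked exposure.
    device.exposureMode = .locked
}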
The app is open source, so the code can be seen in our current development branch (without the changes for the tests here, though) on github.
The videos below use the implementation with the suggestion from stackoverflow, but they can be reproduced in the same way with "professional" camera apps that promise manual control over the camera (like the Blackmagic cam to quote a reputable company) as well as the stock camera app after pressing and holding on the preview to enable AE/AF lock.
Demonstration
These examples were captured on an iPhone 14 Pro. The central part of the image (highlighted by the app using Metal shaders after capture) should not change with fixed exposure settings, but significant changes are noticeable if something changes at the edge of the frame when I move a black piece of cardboard in from above:
https://share.icloud.com/photos/0b1f_3IB6yAQG-qSH27pm6oDQ
The graph above the camera preview is the average luminance (gamma corrected and weighted based on sRGB) across the highlighted central area and as mentioned before it should not change because of something happening at the side of the frame (worst case it should get a bit darker because of the cardboard's shadow).
In my opinion, the iPhone changes its mind on the ideal contrast as soon as it has a different exposure histogram because of the dark image part from the cardboard, but that's just me guessing.
For completeness here is the same effect in the stock camera app with AE/AF lock enabled:
https://share.icloud.com/photos/0cd7QM8ucBZKwPwE9mybnEowg
Here you can also see that the iPhone "ramps" the changes. The brightness of the gray area does not change immediately but transitions smoothly, so this is clearly deliberate postprocessing.
So...
Any suggestion on how to prevent this behavior would be highly appreciated.
I'm developing a TTS Audio Unit Extension that needs to write trace/log files to a shared App Group container. While the main app can successfully create and write files to the container, the extension gets sandbox denied errors despite having proper App Group entitlements configured.
Setup:
Main App (Flutter) and TTS Audio Unit Extension share the same App Group
App Group is properly configured in developer portal and entitlements
Main app successfully creates and uses files in the container
Container structure shows existing directories (config/, dictionary/) with populated files
Both targets have App Group capability enabled and entitlements set
Current behavior:
Extension can access/read the App Group container
Extension can see existing directories and files
All write attempts are blocked with "sandbox deny(1) file-write-create" errors
Code example:
const char* createSharedGroupPathWithComponent(const char* groupId, const char* component) {
    NSString *groupIdStr = [NSString stringWithUTF8String:groupId];
    NSString *componentStr = [NSString stringWithUTF8String:component];

    NSURL *url = [[NSFileManager defaultManager]
        containerURLForSecurityApplicationGroupIdentifier:groupIdStr];
    NSURL *fullPath = [url URLByAppendingPathComponent:componentStr];

    NSError *error = nil;
    if (![[NSFileManager defaultManager] createDirectoryAtPath:fullPath.path
                                   withIntermediateDirectories:YES
                                                    attributes:nil
                                                         error:&error]) {
        NSLog(@"Unable to create directory %@", error.localizedDescription);
    }
    return [[fullPath path] UTF8String];
}
Error output:
Sandbox: simaromur-extension(996) deny(1) file-write-create /private/var/mobile/Containers/Shared/AppGroup/36CAFE9C-BD82-43DD-A962-2B4424E60043/trace
Key questions:
Are there additional entitlements required for TTS Audio Unit Extensions to write to App Group containers?
Is this a known limitation of TTS Audio Unit Extensions?
What is the recommended way to handle logging/tracing in TTS Audio Unit Extensions?
If writing to App Group containers is not supported, what alternatives are available?
Current entitlements:
<dict>
<key>com.apple.security.application-groups</key>
<array>
<string>group.com.<company>.<appname></string>
</array>
</dict>
I can't play video content with HEVC and DRM. Tested HEVC only: OK. Tested DRM+AVC: OK.
Tested two players (Clappr/Stevie and BitMovin).
Master, variants, and EXT-X-MAPs are downloaded OK, DRM keys OK, and then, for instance with the BitMovin player:
[BMP] [Player] [Error] Event: SourceError, Data: {"code":2001,"data":{"message":"The operation couldn’t be completed. (CoreMediaErrorDomain error -12927.)","code":-12927},"message":"Source Error. The operation couldn’t be completed. (CoreMediaErrorDomain error -12927.)","timestamp":1740320663.4505711,"type":"onSourceError"} code: 2001 [Data code: -12927, message: The operation couldn’t be completed. (CoreMediaErrorDomain error -12927.), underlying error: Error Domain=CoreMediaErrorDomain Code=-12927 "(null)"]
4k-master.m3u8.txt
4k.m3u8.txt
4k-audio.m3u8.txt
Overlay changes color in HDR video
When I'm trying to add an overlay to an image with AVMutableVideoComposition and the video is in HDR, the overlay colors change and white becomes grey.
[Screenshots: the original HDR video; the result from the code with the wrong overlay color (distorted colors); and the result when reducing to SDR, which shows the correct overlay color, the way it should look.]
I'm creating the overlay with a CGContext.
class CustomHdrCompositor: NSObject, AVVideoCompositing {
    private let coreImageContext = CIContext(options: [CIContextOption.cacheIntermediates: false])
    let combinedFilter = CIFilter(name: "CISourceOverCompositing")!

    var sourcePixelBufferAttributes: [String: Any]? = [String(kCVPixelBufferPixelFormatTypeKey): [kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange]]
    var requiredPixelBufferAttributesForRenderContext: [String: Any] =
        [String(kCVPixelBufferPixelFormatTypeKey): [kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange]]
    var supportsWideColorSourceFrames = true
    var supportsHDRSourceFrames = true

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
        return
    }

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let outputPixelBuffer = request.renderContext.newPixelBuffer() else {
            print("No valid pixel buffer found. Returning.")
            request.finish(with: CustomCompositorError.ciFilterFailedToProduceOutputImage)
            return
        }
        guard let requiredTrackIDs = request.videoCompositionInstruction.requiredSourceTrackIDs, !requiredTrackIDs.isEmpty else {
            print("No valid track IDs found in composition instruction.")
            return
        }

        let sourceCount = requiredTrackIDs.count
        if sourceCount > 1 {
            request.finish(with: CustomCompositorError.notSupportingMoreThanOneSources)
            return
        }

        if sourceCount == 1 {
            let sourceID = requiredTrackIDs[0]
            let sourceBuffer = request.sourceFrame(byTrackID: sourceID.value(of: Int32.self)!)!
            let sourceCIImage = CIImage(cvPixelBuffer: sourceBuffer)
            var textImage = TextLayerPlayer.instance.getTextLayerAtTimesStamp(ts: request.compositionTime.seconds)
            combinedFilter.setValue(textImage, forKey: "inputImage")
            if let outputImage = combinedFilter.outputImage {
                let renderDestination = CIRenderDestination(pixelBuffer: outputPixelBuffer)
                do {
                    try coreImageContext.startTask(toRender: outputImage, to: renderDestination)
                } catch {
                }
            }
        }
        request.finish(withComposedVideoFrame: outputPixelBuffer)
    }
}
func regularCompositionHdr(asset: AVAsset) -> AVVideoComposition {
    self.isHdr = checkHdr(asset: asset)
    let avComposition = AVMutableComposition()
    let composition = AVMutableVideoComposition()
    composition.colorPrimaries = AVVideoColorPrimaries_ITU_R_2020
    composition.colorTransferFunction = AVVideoTransferFunction_ITU_R_2100_HLG
    composition.colorYCbCrMatrix = AVVideoYCbCrMatrix_ITU_R_2020
    composition.renderSize = assetSize
    composition.frameDuration = CMTime(value: 1, timescale: 30)
    composition.customVideoCompositorClass = CustomHdrCompositor.self
    composition.perFrameHDRDisplayMetadataPolicy = .propagate
    return composition
}
I'm using this function to convert the transparent CGImage to a CIImage that supports HDR:
func convertToHDRCIImage(from cgImage: CGImage,
                         maxBrightness: CGFloat = 3.0) -> CIImage? {
    // Create a CIImage from the input CGImage
    let baseImage = CIImage(cgImage: cgImage)

    // Create HDR color adjustment filter
    let colorAdjust = CIFilter(name: "CIColorMatrix")!
    colorAdjust.setValue(baseImage, forKey: kCIInputImageKey)

    // Calculate HDR multipliers based on maxBrightness
    // This will maintain color ratios while increasing brightness
    colorAdjust.setValue(CIVector(x: maxBrightness, y: 0, z: 0, w: 0), forKey: "inputRVector")
    colorAdjust.setValue(CIVector(x: 0, y: maxBrightness, z: 0, w: 0), forKey: "inputGVector")
    colorAdjust.setValue(CIVector(x: 0, y: 0, z: maxBrightness, w: 0), forKey: "inputBVector")
    // Maintain alpha channel
    colorAdjust.setValue(CIVector(x: 0, y: 0, z: 0, w: 1), forKey: "inputAVector")

    guard let adjustedImage = colorAdjust.outputImage else {
        return nil
    }

    // Apply color space transformation using CIImage's colorSpace property
    let transformedImage = adjustedImage.matchedFromWorkingSpace(to: hdrWorkingSpace)!

    // Create context with HDR color space
    let context = CIContext(options: [
        .workingColorSpace: hdrColorSpace,
        .outputColorSpace: hdrColorSpace
    ])

    // Get the image bounds
    let bounds = transformedImage.extent

    // Create a new pixel buffer with HDR format
    var pixelBuffer: CVPixelBuffer?
    let pixelBufferAttributes = [
        kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_64RGBAHalf,
        kCVPixelBufferMetalCompatibilityKey: true
    ] as CFDictionary

    CVPixelBufferCreate(kCFAllocatorDefault,
                        Int(bounds.width),
                        Int(bounds.height),
                        kCVPixelFormatType_64RGBAHalf,
                        pixelBufferAttributes,
                        &pixelBuffer)

    guard let destinationBuffer = pixelBuffer else {
        return nil
    }

    context.render(transformedImage,
                   to: destinationBuffer,
                   bounds: bounds,
                   colorSpace: hdrColorSpace)

    // Create final CIImage from the HDR pixel buffer
    let finalImage = CIImage(cvPixelBuffer: destinationBuffer,
                             options: [.colorSpace: hdrColorSpace])
    return finalImage
}
When reducing the HDR to SDR, the overlay keeps the right color, but then the HDR effect, which I want to keep, is reduced.
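For completeness, this is how I understand the composition gets attached (a sketch assumed to live in the same class as regularCompositionHdr(asset:), not code copied from the project):

import AVFoundation

// Hypothetical usage: attach the custom-compositor composition to a player item
// (the same videoComposition property exists on AVAssetExportSession for export).
func makePlayerItem(for asset: AVAsset) -> AVPlayerItem {
    let item = AVPlayerItem(asset: asset)
    item.videoComposition = regularCompositionHdr(asset: asset)
    return item
}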
Topic:
Media Technologies
SubTopic:
Video
Hi,
Currently I am developing a 3D reconstruction project, which requires images to be distortion-free (rectilinear) and to have known intrinsics.
The session I am developing on uses a builtInDualWideCamera, with isGeometricDistortionCorrectionEnabled set to false so I can get the intrinsic matrix of the images, isVirtualDeviceConstituentPhotoDeliveryEnabled set to true and isAutoVirtualDeviceFusionEnabled set to false to get both images, and isCameraCalibrationDataDeliveryEnabled set to true to actually get the calibration data.
The distortion correction parameters such as lensDistortionLookupTable are used.
The 42 coefficients mapping array is used as described in the AVCameraCalibrationData header file. A simple piecewise linear interpolation.
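For reference, the interpolation I use looks roughly like this (a sketch; it assumes the lookup table arrives as the Float32 data in AVCameraCalibrationData.lensDistortionLookupTable and that r is the radial distance from the distortion center, normalized so the corner distance is 1):

import AVFoundation

// Sketch: piecewise linear interpolation over the lookup table, as described
// in the AVCameraCalibrationData header. `r` is assumed to be in 0...1.
func lookupValue(forRadialFraction r: Float, in lookupTable: Data) -> Float {
    let table: [Float] = lookupTable.withUnsafeBytes { Array($0.bindMemory(to: Float32.self)) }
    guard table.count > 1 else { return table.first ?? 0 }

    let clamped = min(max(r, 0), 1)
    let position = clamped * Float(table.count - 1)
    let lower = Int(position)
    let upper = min(lower + 1, table.count - 1)
    let fraction = position - Float(lower)

    // Linear blend between the two neighboring table entries.
    return table[lower] * (1 - fraction) + table[upper] * fraction
}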
There are two questions I would like to get support on:
A way to set the calibration parameters in each image.
I have an approach that writes the parameters into the kCGImagePropertyExifDictionary -> "UserComment" field. Is there a better way to write calibration parameter data into the images? This feels a bit dirty, and there may be a neater approach.
For the ultra-wide angle camera's images, the lensDistortionLookupTable contains several zeros at the end of the array.
For example (last 10 elements are zero):
"LensDistortionLookupTable":"0.000000000000000,0.000349554029526,0.001385628827848,0.003071037586778,... ,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000"
The problem is that when the complete array is used to correct the image (including the zeros), the result is an image that wraps around like a circle near its edges, which is completely wrong.
In contrast, if the lensDistortionLookupTable is used without the trailing zeros and the new size is accounted for, the image looks better (although not as rectilinear as an image from the iPhone's camera app) and definitely less distorted.
Including zeros (full array):
Excluding zeros (array size changed):
Am I missing an important point in the usage of the lensDistortionLookupTable where this case is addressed (zeros at the end)?
What is the criteria to shrink/exclude elements of the array?
Any advice is very much welcome.
Hello everyone,
I am working on an app that allows you to review your own music using Apple Music. Currently I am running into an issue with skipping forward and backward outside of the app.
How it should work: When skipping forward or backwards on the lock or home screen of an iPhone, the next or previous song on an album should play and the information should change to reflect that in the app.
If you play a song in Apple Music, you can see a Now Playing view in the lock screen.
When you skip forward or backwards, it will do either action, and you can see that reflected by a little frequency icon on the artwork image of the song.
What it's doing: When skipping forward or backwards on the lock or home screen of an iPhone, the next or previous song is reflected outside of the app, but not in the app.
When skipping a song outside of the app, it works correctly to head to the next song.
But when I return to the app, it is not reflected.
NOTE: I am not using MusicKit types such as Track or Album to display the songs. Since I want to grab the songs and review them, I need a rating, so I created my own model that stores the MusicItemID, name, artist(s), etc.
NOTE: I am using ApplicationMusicPlayer.shared
Is there a way to get the song to reflect in my app?
(If it's easier, a simple example would be nice. No need to create an entire Xcode project.)
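A rough sketch of one way an app can follow the system player's queue so the UI updates after skips made from the Lock Screen (this assumes MusicPlayer.Queue's ObservableObject conformance and is not code from my project):

import SwiftUI
import MusicKit

// Hedged sketch: observe ApplicationMusicPlayer's queue so the view re-renders
// when the current entry changes, including skips made outside the app.
struct NowPlayingLabel: View {
    @ObservedObject private var queue = ApplicationMusicPlayer.shared.queue

    var body: some View {
        Text(queue.currentEntry?.title ?? "Nothing playing")
    }
}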