Dive into the world of video on Apple platforms, exploring ways to integrate video functionality into your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.

Video Documentation

Posts under Video subtopic

Post

Replies

Boosts

Views

Activity

Bluetooth headphones
Has anyone experienced this bug? When playing a video, the Bluetooth headphones lose audio. My workaround is to select one of the other audio outputs, then go back and select the affected headphones again.
1
0
110
Aug ’25
High shutter speed with low frame rate and auto exposure
Hi there, I want to set the iPhone camera to "S mode", or "Shutter Priority" in camera terminology, which is a semi-automatic exposure mode where the shutter speed is fixed or set manually. However, setting the shutter speed manually disables auto exposure. Is there a way to keep auto exposure on while restricting the shutter speed? Also, I would like to keep a low frame rate, e.g. 30 fps. Would I be able to set the shutter speed independently of the frame rate? Here's the code for setting up the camera. Best,
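The poster's camera-setup code isn't included in this preview. As a hedged illustration of the APIs involved (not the poster's code), a minimal sketch that pins the frame rate and shutter speed might look like this; the 30 fps cap, the 1/500 s shutter, and the lockShutter(on:) helper name are illustrative assumptions, and true shutter priority (fixed shutter with automatic ISO) isn't directly supported, so ISO would have to be adjusted manually, e.g. from exposureTargetOffset observations.

import AVFoundation

// Hypothetical helper (illustrative only): cap the frame rate at 30 fps and fix the
// shutter at 1/500 s. Frame duration and exposure duration are set independently, as
// long as the exposure duration fits within one frame interval.
func lockShutter(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    // Keep a low frame rate (30 fps) regardless of the shutter speed.
    device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 30)
    device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 30)

    // Fix the exposure duration; AVCaptureDevice.currentISO keeps the ISO where it is.
    // There is no built-in "shutter priority", so ISO has to be adjusted afterwards
    // (e.g. by observing exposureTargetOffset) to approximate auto exposure.
    device.setExposureModeCustom(duration: CMTime(value: 1, timescale: 500),
                                 iso: AVCaptureDevice.currentISO,
                                 completionHandler: nil)
}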
1
0
515
Sep ’25
AVPlayer loading performance problem in iOS 26
Hi, I have an app that displays tens of short (<1 MB) mp4 videos stored on a remote server in a vertical UICollectionView that has horizontally scrollable sections. I'm caching all mp4 files on disk after downloading, and I also have an in-memory cache that holds a limited number (around 30) of players. The players I'm using are simple views that wrap an AVPlayerLayer and its AVPlayerItem, along with a few additional UI components. The scrolling performance was good before iOS 26, but with the release of iOS 26 I noticed that there is significant stuttering during scrolling while creating players with a fileUrl. It happens even if I use the same video file cached on disk for each cell for testing. I also started getting log messages like these after the players are deinitialized: <<<< PlayerRemoteXPC >>>> signalled err=-12785 at <>:1107 <<<< PlayerRemoteXPC >>>> signalled err=-12785 at <>:1095 <<<< PlayerRemoteXPC >>>> signalled err=-12785 at <>:1095 There's also another log message that I see occasionally, but I don't know what triggers it: << FigXPC >> signalled err=-16152 at <>:1683 Has anyone else experienced this kind of problem with the latest release? Also, I'm wondering what the best way to resolve the issue is. I could increase the size of the memory cache to something large like 100, but I'm not sure that's an acceptable solution because: 1- There will be 100 player instances in memory at all times. 2- There will still be stuttering during the initial loading of the videos from the web. Any help is appreciated!
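Not a confirmed fix, but one mitigation worth trying for the stutter described above is to prepare the AVPlayerItem off the main thread before the cell needs it, so no synchronous asset loading happens during scrolling. A minimal sketch; makePreparedItem(for:) is a hypothetical helper name.

import AVFoundation

// Preload the asset's properties asynchronously before wrapping it in a player item,
// so cell configuration only attaches an item that is already prepared.
func makePreparedItem(for fileURL: URL) async throws -> AVPlayerItem {
    let asset = AVURLAsset(url: fileURL)
    _ = try await asset.load(.isPlayable, .duration, .tracks)
    return AVPlayerItem(asset: asset)
}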
1
0
333
2w
CoreMediaErrorDomain error -12848
Good day. I created a video via iOS AVAssetWriter with the following settings: let videoWriterInput = AVAssetWriterInput( mediaType: .video, outputSettings: [ AVVideoCodecKey: AVVideoCodecType.hevc, AVVideoWidthKey: 1080, AVVideoHeightKey: 1920, AVVideoCompressionPropertiesKey: [ AVVideoAverageBitRateKey: 2_000_000, AVVideoMaxKeyFrameIntervalKey: 30 ], ] ) let audioWriterInput = AVAssetWriterInput( mediaType: .audio, outputSettings: [ AVFormatIDKey: kAudioFormatMPEG4AAC, AVNumberOfChannelsKey: 2, AVSampleRateKey: 44100, AVEncoderBitRateKey: 128000 ] ) When it is split into fMP4 HLS format using ffmpeg, the video cannot be played on iOS and fails with the following error: CoreMediaErrorDomain error -12848. However, the video plays normally on Android, in browser HLS players, and in VLC Media Player. Please assist. Thank you.
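Not a diagnosis of the -12848 error itself, but one alternative worth testing is to let AVAssetWriter emit the fMP4 segments directly instead of re-splitting the finished file with ffmpeg, since that keeps the initialization segment and sample flags in the form AVFoundation expects. A minimal sketch of the segmenting API; inputs, sample appending, and playlist generation are omitted, and the 6-second segment interval is an assumption.

import AVFoundation
import UniformTypeIdentifiers

// The writer delivers an initialization segment plus fMP4 media segments through its
// delegate; each Data blob is written to disk and referenced from the HLS playlist.
final class HLSSegmentWriter: NSObject, AVAssetWriterDelegate {
    let writer: AVAssetWriter

    override init() {
        writer = AVAssetWriter(contentType: .mpeg4Movie)
        writer.outputFileTypeProfile = .mpeg4AppleHLS
        writer.preferredOutputSegmentInterval = CMTime(seconds: 6, preferredTimescale: 1)
        writer.initialSegmentStartTime = .zero
        super.init()
        writer.delegate = self
    }

    func assetWriter(_ writer: AVAssetWriter,
                     didOutputSegmentData segmentData: Data,
                     segmentType: AVAssetSegmentType,
                     segmentReport: AVAssetSegmentReport?) {
        // .initialization carries the init segment; .separable carries media segments.
        // Persist each blob and list it via #EXT-X-MAP / segment entries in the playlist.
    }
}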
1
0
374
Sep ’25
Background GPU access in iOS 26 for iPhones
We build mobile apps for creators to edit their videos. After editing the video, the creator has to export it so that it can be uploaded to YouTube. The export is a time-consuming and GPU-intensive process. The creator can exit the app for various reasons, like receiving a call or putting the app in the background, and this causes the export to fail :( With this limitation in mind, Apple announced that iOS 26 would start to support background GPU access. Here is the official documentation: https://developer.apple.com/documentation/BundleResources/Entitlements/com.apple.developer.background-tasks.continued-processing.gpu When we tried using this feature, we were not able to get it to work on iOS 26. We stumbled upon this ticket (https://developer.apple.com/forums/thread/797538?answerId=854825022#854825022) in the Apple Developer Forums, in which an Apple engineer appears to claim it is supported ONLY on iPadOS 26. This is a very big bummer for us: 96% of our users are on iPhone (compared to iPad), and the official documentation above states that this feature should work on iOS 26. This feature is extremely important for providing the best user experience and reducing user frustration, and it would be useful for other video editing apps as well. Looking forward to a resolution.
1
0
223
Oct ’25
Retrieving the DRM expiration time for FairPlay offline assets on iOS
I’m implementing FairPlay offline streaming on iOS and ran into a question about DRM expiration handling. As far as I understand, when issuing a FairPlay offline license, there are typically two time windows: 1. The period during which the user can start offline playback (the longer “rental window”). 2. Once playback starts, the duration allowed to complete playback (the shorter “playback window”). I’d like to display this information (the remaining validity or expiration time) in the app’s UI next to each downloaded asset. My question is: 👉 Is there a way to programmatically check or retrieve the expiration time for a FairPlay offline asset on the client side (via AVFoundation or AVContentKeySession)? Any guidance or best practices for surfacing DRM expiration info in the UI would be greatly appreciated.
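Regarding surfacing the expiration in the UI: AVFoundation doesn't appear to expose the rental or playback windows of a persistent FairPlay key on the client, so a common pattern is to have the key server return those dates alongside the CKC and store them next to the download. A minimal bookkeeping sketch; the type and field names are assumptions about what your server provides, not API.

import Foundation

// Stored per downloaded asset when the offline license is issued or renewed.
struct OfflineLicenseInfo: Codable {
    let assetID: String
    let rentalExpiration: Date     // last moment offline playback may be started
    var playbackExpiration: Date?  // set once playback has started, if applicable

    var remainingRentalTime: TimeInterval {
        max(0, rentalExpiration.timeIntervalSinceNow)
    }
}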
1
0
485
Oct ’25
AVFoundation Custom Video Compositor Skipping Frames During AVPlayer Playback Despite 60 FPS Frame Duration
I'm building a Swift video editor with AVFoundation and a custom compositor. Despite setting AVVideoComposition.frameDuration to 60 FPS, I'm seeing significant frame skipping during playback. Console Output Shows Frame Skipping Frame #0 at 0.0 ms (fps: 60.0) Frame #2 at 33.333333333333336 ms (fps: 60.0) Frame #6 at 100.0 ms (fps: 60.0) Frame #10 at 166.66666666666666 ms (fps: 60.0) Frame #32 at 533.3333333333334 ms (fps: 60.0) Frame #62 at 1033.3333333333335 ms (fps: 60.0) Frame #96 at 1600.0 ms (fps: 60.0) Instead of frames every ~16.67ms (60 FPS), I'm getting irregular intervals, sometimes 33ms, 67ms, or hundreds of milliseconds apart. Renderer.swift (Key Parts) @MainActor class Renderer: ObservableObject { @Published var playerItem: AVPlayerItem? private let assetManager: ProjectAssetManager? private let compositorId: String func buildComposition() async { // ... load mouse moves/clicks data ... let composition = AVMutableComposition() let videoTrack = composition.addMutableTrack( withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid ) var currentTime = CMTime.zero var layerInstructions: [AVMutableVideoCompositionLayerInstruction] = [] // Insert video segments for videoURL in videoURLs { let asset = AVAsset(url: videoURL) let tracks = try await asset.loadTracks(withMediaType: .video) let assetVideoTrack = tracks.first let duration = try await asset.load(.duration) try videoTrack.insertTimeRange( CMTimeRange(start: .zero, duration: duration), of: assetVideoTrack, at: currentTime ) let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack) let transform = try await assetVideoTrack.load(.preferredTransform) layerInstruction.setTransform(transform, at: currentTime) layerInstructions.append(layerInstruction) currentTime = CMTimeAdd(currentTime, duration) } let videoComposition = AVMutableVideoComposition() videoComposition.frameDuration = CMTime(value: 1, timescale: 60) // 60 FPS // Set render size from first video if let firstURL = videoURLs.first { let firstAsset = AVAsset(url: firstURL) let firstTrack = try await firstAsset.loadTracks(withMediaType: .video).first let naturalSize = try await firstTrack.load(.naturalSize) let transform = try await firstTrack.load(.preferredTransform) videoComposition.renderSize = CGSize( width: abs(naturalSize.applying(transform).width), height: abs(naturalSize.applying(transform).height) ) } let instruction = CompositorInstruction() instruction.timeRange = CMTimeRange(start: .zero, duration: currentTime) instruction.layerInstructions = layerInstructions instruction.compositorId = compositorId videoComposition.instructions = [instruction] videoComposition.customVideoCompositorClass = CustomVideoCompositor.self let playerItem = AVPlayerItem(asset: composition) playerItem.videoComposition = videoComposition self.playerItem = playerItem } } class CompositorInstruction: NSObject, AVVideoCompositionInstructionProtocol { var timeRange: CMTimeRange = .zero var enablePostProcessing: Bool = false var containsTweening: Bool = false var requiredSourceTrackIDs: [NSValue]? var passthroughTrackID: CMPersistentTrackID = kCMPersistentTrackID_Invalid var layerInstructions: [AVVideoCompositionLayerInstruction] = [] var compositorId: String = "" } class CustomVideoCompositor: NSObject, AVVideoCompositing { var sourcePixelBufferAttributes: [String : Any]? 
= [ kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA) ] var requiredPixelBufferAttributesForRenderContext: [String : Any] = [ kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA) ] func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {} func startRequest(_ asyncVideoCompositionRequest: AVAsynchronousVideoCompositionRequest) { guard let sourceTrackID = asyncVideoCompositionRequest.sourceTrackIDs.first?.int32Value, let sourcePixelBuffer = asyncVideoCompositionRequest.sourceFrame(byTrackID: sourceTrackID), let outputBuffer = asyncVideoCompositionRequest.renderContext.newPixelBuffer() else { asyncVideoCompositionRequest.finish(with: NSError(domain: "VideoCompositor", code: -1)) return } let videoComposition = asyncVideoCompositionRequest.renderContext.videoComposition let frameDuration = videoComposition.frameDuration let fps = Double(frameDuration.timescale) / Double(frameDuration.value) let compositionTime = asyncVideoCompositionRequest.compositionTime let seconds = CMTimeGetSeconds(compositionTime) let frameInMilliseconds = seconds * 1000 let frameNumber = Int(round(seconds * fps)) print("Frame #\(frameNumber) at \(frameInMilliseconds) ms (fps: \(fps))") asyncVideoCompositionRequest.finish(withComposedVideoFrame: outputBuffer) } func cancelAllPendingVideoCompositionRequests() {} } VideoPlayerViewModel @MainActor class VideoPlayerViewModel: ObservableObject { let player = AVPlayer() private let renderer: Renderer func loadVideo() async { await renderer.buildComposition() if let playerItem = renderer.playerItem { player.replaceCurrentItem(with: playerItem) } } } What I've Tried Frame skipping is consistent—exact same timestamps on every playback Issue persists even with minimal processing (just passing through buffers) Occurs regardless of compositor complexity Please note that I need every frame at exact millisecond intervals for my application. Frame loss or inconsistent frameInMillisecond values are not acceptable.
1
0
288
Oct ’25
AVAssetExportSession ignores frameDuration 60fps and exports at 30fps, but AVPlayer playback is correct
Hey everyone, I'm stuck on a really frustrating AVFoundation problem. I'm building a video editor that uses a custom AVVideoCompositor to add effects, and I need the final output to be 60 FPS. So basically, I create an AVMutableComposition to sequence my video clips. I create an AVMutableVideoComposition and set the frame rate to 60 FPS: videoComposition.frameDuration = CMTime(value: 1, timescale: 60) I assign my CustomVideoCompositor class to the videoComposition. I create an AVPlayerItem with the composition and video composition. The Problem: Playback Works: When I play the AVPlayerItem in an AVPlayer, it's perfect. It plays at a smooth 60 FPS, and my custom compositor's startRequest method is called 60 times per second. Export Fails: When I try to export the exact same composition and video composition using AVAssetExportSession, the final .mp4 file is always 30 FPS (or 29.97). I've logged inside my custom compositor during the export, and it's definitely being called 30 times per second, so it's generating the 30 frames. It seems like AVAssetExportSession is just dropping every other frame when it encodes the video. My source videos are screen recordings which I recorded using ScreenCaptureKit itself with the minimum frame interval to be 60. Here is my export function. I'm using the AVAssetExportPresetHighestQuality preset :- func exportVideo(to outputURL: URL) async throws { guard let composition = composition, let videoComposition = videoComposition else { throw VideoCompositionError.noValidVideos } try? FileManager.default.removeItem(at: outputURL) guard let exportSession = AVAssetExportSession( asset: composition, presetName: AVAssetExportPresetHighestQuality // Is this the problem? ) else { throw VideoCompositionError.trackCreationFailed } exportSession.outputFileType = .mp4 exportSession.videoComposition = videoComposition // This has the 60fps setting try await exportSession.export(to: outputURL, as: .mp4) } I've created a bare bones sample project that shows this exact bug in action. The resulting video is 60fps during playback, but only 30fps during the export. https://github.com/zaidbren/SimpleEditor My Question: Why is AVAssetExportSession ignoring my 60 FPS frameDuration and defaulting to 30 FPS, even though AVPlayer respects it?
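Not an answer to why AVAssetExportSession caps the output at 30 fps, but a commonly used workaround is to export with AVAssetReader + AVAssetWriter, where AVAssetReaderVideoCompositionOutput runs the custom compositor at the composition's frameDuration. A rough sketch under those assumptions; audio, cancellation handling, and the HEVC output settings are simplifications.

import AVFoundation

// Reader/writer export path: composed frames are pulled from the reader (which applies
// the video composition) and appended to the writer one by one.
func exportWithWriter(composition: AVAsset,
                      videoComposition: AVVideoComposition,
                      to outputURL: URL) async throws {
    let reader = try AVAssetReader(asset: composition)
    let videoTracks = try await composition.loadTracks(withMediaType: .video)
    let readerOutput = AVAssetReaderVideoCompositionOutput(videoTracks: videoTracks,
                                                           videoSettings: nil)
    readerOutput.videoComposition = videoComposition // drives the 60 fps frame timing
    reader.add(readerOutput)

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.hevc,
        AVVideoWidthKey: Int(videoComposition.renderSize.width),
        AVVideoHeightKey: Int(videoComposition.renderSize.height)
    ])
    writerInput.expectsMediaDataInRealTime = false
    writer.add(writerInput)

    guard reader.startReading() else { throw reader.error ?? CocoaError(.fileReadUnknown) }
    guard writer.startWriting() else { throw writer.error ?? CocoaError(.fileWriteUnknown) }
    writer.startSession(atSourceTime: .zero)

    let queue = DispatchQueue(label: "export.writer")
    await withCheckedContinuation { (continuation: CheckedContinuation<Void, Never>) in
        writerInput.requestMediaDataWhenReady(on: queue) {
            while writerInput.isReadyForMoreMediaData {
                if let sample = readerOutput.copyNextSampleBuffer() {
                    _ = writerInput.append(sample)
                } else {
                    writerInput.markAsFinished()
                    continuation.resume()
                    return
                }
            }
        }
    }
    await writer.finishWriting()
}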
1
0
363
Oct ’25
PhotosPickerItem.itemIdentifier always nil
I'm using the SwiftUI Photos Picker to select videos from the user's Photos library and then opening the video using the PhotosPickerItem. I'm looking for a way to allow the user to open the same video on their other devices, as the app uses SwiftData and CloudKit to provide access to a recently watched list of videos. The URL from the PhotosPickerItem appears to be device specific, so I was looking to see if I could use the itemIdentifier, and then the init that takes an itemIdentifier, to create the PhotosPickerItem on the other devices. The itemIdentifier, however, is always nil and so can't be used in this way. Is there an alternative approach whereby the user can open a video using a PhotosPickerItem and that item would be viewable on their other devices via an item identifier or a URL that is device agnostic? This approach should also not involve copying the video into other storage, as that would simply expand the use of the user's iCloud storage, providing a less than ideal user experience. If the user has opened the video from their Photos library, there should be a way to allow the same user (e.g. the same Apple ID) to use the same app on another device to open the video again.
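One note on the nil itemIdentifier specifically: it is only populated when the picker is backed by an explicit photo library, so passing photoLibrary: .shared() is required. A minimal sketch (the view name is illustrative); note that the identifier is still a library-local identifier, so it doesn't by itself solve the cross-device case.

import SwiftUI
import PhotosUI

struct VideoPickerView: View {
    @State private var selection: PhotosPickerItem?

    var body: some View {
        PhotosPicker("Choose a video",
                     selection: $selection,
                     matching: .videos,
                     photoLibrary: .shared()) // without this, itemIdentifier stays nil
            // Two-parameter onChange requires iOS 17 or later.
            .onChange(of: selection) { _, item in
                if let id = item?.itemIdentifier {
                    print("Local identifier: \(id)") // still device/library specific
                }
            }
    }
}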
1
0
274
Nov ’25
AVCaptureSession setting preset has no effect if HDR configured with AVCaptureVideoDataOutput
I want to confirm if this is a bug or a programming error. Very easy to reproduce it by modifying AVCam sample code. Steps to reproduce: Add AVCaptureVideoDataOutput to AVCaptureSession, no need to set delegate in AVCam sample code (CaptureService actor) private let videoDataOutput = AVCaptureVideoDataOutput() and then in configureSession method, add the following line try addOutput(videoDataOutput) if videoDataOutput.availableVideoPixelFormatTypes.contains(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) { videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable as! String : kCVPixelFormatType_420YpCbCr8BiPlanarFullRange] } And next modify set HDR method: /// Sets whether the app captures HDR video. func setHDRVideoEnabled(_ isEnabled: Bool) { // Bracket the following configuration in a begin/commit configuration pair. captureSession.beginConfiguration() defer { captureSession.commitConfiguration() } do { // If the current device provides a 10-bit HDR format, enable it for use. if isEnabled, let format = currentDevice.activeFormat10BitVariant { try currentDevice.lockForConfiguration() currentDevice.activeFormat = format currentDevice.unlockForConfiguration() isHDRVideoEnabled = true if videoDataOutput.availableVideoPixelFormatTypes.contains(kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange) { videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable as! String : kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange] } } else { captureSession.sessionPreset = .high isHDRVideoEnabled = false if videoDataOutput.availableVideoPixelFormatTypes.contains(kCVPixelFormatType_32BGRA) { print("Setting sdr pixel format \(kCVPixelFormatType_32BGRA)") videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable as! String : kCVPixelFormatType_32BGRA] } try currentDevice.lockForConfiguration() currentDevice.activeColorSpace = .sRGB currentDevice.unlockForConfiguration() } } catch { logger.error("Unable to obtain lock on device and can't enable HDR video capture.") } The problem now is toggling HDR on and off no longer works in video mode. If after setting HDR on, you set HDR to off, active format of device does not change (setting sessionPreset has no effect). This does not happen if video data output is not added to session. Is there any workaround available?
1
0
172
3w
VideoPlayer crashes on Archive build
I have found that the following code runs without issue from Xcode, either in Debug or Release mode, yet crashes when running from the binary produced by archiving, i.e. what will be sent to the App Store. import SwiftUI import AVKit @main struct tcApp: App { var body: some Scene { WindowGroup { VideoPlayer(player: nil) } } } This is the most stripped-down code that shows the issue. One can try to point the VideoPlayer at a file and the same issue will occur. I've attached the crash log. Please note that this was seen with Xcode 26.2 and macOS 26.2.
1
0
127
18h
Custom Share Destination stopped working in FCP X 11
We integrate with FCP X using a custom share destination and the AppleScript interface. This has been working fine until the recent version 11 update of FCP X. With this update we are no longer receiving the open event when the export has completed. We get the Apple event to create the asset, and the file is exported to the location we set in the response. There is just no open event after that. I suspect something is wrong with our scripting support, but I have no idea what or how to troubleshoot. This works fine in 10.8.1 and below.
0
0
378
Dec ’24
Capacitor app on iOS inline video stops
Hello, I’m experiencing an issue with video playback in my JavaScript (SvelteKit) application using Capacitor. The video plays and loops correctly on Android and in web browsers (including Safari), but stops unexpectedly after a few iterations in the native iOS app. <video src={videoPath} autoplay muted loop playsinline class="h-auto w-full max-w-full object-cover"></video> Has anyone encountered a similar issue, or do you have insights into what might be causing this behavior on iOS? Any suggestions or workarounds would be greatly appreciated. Maybe it has something to do with the iOS power saving policy? Thank you in advance for your help!
0
0
498
Jan ’25
4k 120fps Showing Black Screen on iPhone 16
Hey - I am developing an app that uses the camera for recording video. I put the ability to choose a framerate and resolution and all combinations work perfectly fine, except for 4k 120fps for the new iPhone 16 pro. This just shows black on the preview. I tried to record even though the preview was black, but the recording is also just a black screen. Is there anything special that needs to be done in the camera setup for 4k 120fps to work? I have my camera setup code attached. Is it possible this is a bug in Apple's code, since this works with every other combination (1080p up to 240fps and 4k up to 60fps)? Thanks so much for the help. class CameraManager: NSObject { enum Errors: Error { case noCaptureDevice case couldNotAddInput case unsupportedConfiguration } enum Resolution { case hd1080p case uhd4K var preset: AVCaptureSession.Preset { switch self { case .hd1080p: return .hd1920x1080 case .uhd4K: return .hd4K3840x2160 } } var dimensions: CMVideoDimensions { switch self { case .hd1080p: return CMVideoDimensions(width: 1920, height: 1080) case .uhd4K: return CMVideoDimensions(width: 3840, height: 2160) } } } enum CameraType { case wide case ultraWide var captureDeviceType: AVCaptureDevice.DeviceType { switch self { case .wide: return .builtInWideAngleCamera case .ultraWide: return .builtInUltraWideCamera } } } enum FrameRate: Int { case fps60 = 60 case fps120 = 120 case fps240 = 240 } let orientationManager = OrientationManager() let captureSession: AVCaptureSession let previewLayer: AVCaptureVideoPreviewLayer let movieFileOutput = AVCaptureMovieFileOutput() let videoDataOutput = AVCaptureVideoDataOutput() private var videoCaptureDevice: AVCaptureDevice? override init() { self.captureSession = AVCaptureSession() self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession) super.init() self.previewLayer.videoGravity = .resizeAspect } func configureSession(resolution: Resolution, frameRate: FrameRate, stabilizationEnabled: Bool, cameraType: CameraType, sampleBufferDelegate: AVCaptureVideoDataOutputSampleBufferDelegate?) 
throws { assert(Thread.isMainThread) captureSession.beginConfiguration() defer { captureSession.commitConfiguration() } captureSession.sessionPreset = resolution.preset if captureSession.canAddOutput(movieFileOutput) { captureSession.addOutput(movieFileOutput) } else { throw Errors.couldNotAddInput } videoDataOutput.setSampleBufferDelegate(sampleBufferDelegate, queue: DispatchQueue(label: "VideoDataOutputQueue")) if captureSession.canAddOutput(videoDataOutput) { captureSession.addOutput(videoDataOutput) // Set the video orientation if needed if let connection = videoDataOutput.connection(with: .video) { //connection.videoOrientation = .portrait } } else { throw Errors.couldNotAddInput } guard let videoCaptureDevice = AVCaptureDevice.default(cameraType.captureDeviceType, for: .video, position: .back) else { throw Errors.noCaptureDevice } let useDimensions = resolution.dimensions guard let format = videoCaptureDevice.formats.first(where: { format in let dimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription) let isRes = dimensions.width == useDimensions.width && dimensions.height == useDimensions.height let frameRates = format.videoSupportedFrameRateRanges return isRes && frameRates.contains(where: { $0.maxFrameRate >= Float64(frameRate.rawValue) }) }) else { throw Errors.unsupportedConfiguration } self.videoCaptureDevice = videoCaptureDevice do { let videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice) if captureSession.canAddInput(videoInput) { captureSession.addInput(videoInput) } else { throw Errors.couldNotAddInput } try videoCaptureDevice.lockForConfiguration() videoCaptureDevice.activeFormat = format videoCaptureDevice.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue)) videoCaptureDevice.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue)) videoCaptureDevice.activeMaxExposureDuration = CMTime(seconds: 1.0 / 960, preferredTimescale: 1000000) videoCaptureDevice.exposureMode = .locked videoCaptureDevice.unlockForConfiguration() } catch { throw error } configureStabilization(enabled: stabilizationEnabled) }`
0
0
466
Jan ’25
Transparent overlay changes color in HDR video
Overlay changes color in HDR video When I’m using trying to add an overlay to an image with AVMutableVideoComposition, When the video is in HDR the overlay colors are changing and white becomes grey screen shot from original HDR video result from the code with the wrong overlay colorthe result when reducing to SDR (the right overlay color) the distorted colorsthe way it should look(sdr) Im creating the overlay with a CGContext class CustomHdrCompositor: NSObject, AVVideoCompositing { private let coreImageContext = CIContext(options: [CIContextOption.cacheIntermediates: false]) let combinedFilter = CIFilter(name: "CISourceOverCompositing")! var sourcePixelBufferAttributes: [String: Any]? = [String(kCVPixelBufferPixelFormatTypeKey): [kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange]] var requiredPixelBufferAttributesForRenderContext: [String: Any] = [String(kCVPixelBufferPixelFormatTypeKey): [kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange]] var supportsWideColorSourceFrames = true var supportsHDRSourceFrames = true func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) { return } func startRequest(_ request: AVAsynchronousVideoCompositionRequest) { guard let outputPixelBuffer = request.renderContext.newPixelBuffer() else { print("No valid pixel buffer found. Returning.") request.finish(with: CustomCompositorError.ciFilterFailedToProduceOutputImage) return } guard let requiredTrackIDs = request.videoCompositionInstruction.requiredSourceTrackIDs, !requiredTrackIDs.isEmpty else { print("No valid track IDs found in composition instruction.") return } let sourceCount = requiredTrackIDs.count if sourceCount > 1 { request.finish(with: CustomCompositorError.notSupportingMoreThanOneSources) return } if sourceCount == 1 { let sourceID = requiredTrackIDs[0] let sourceBuffer = request.sourceFrame(byTrackID: sourceID.value(of: Int32.self)!)! let sourceCIImage = CIImage(cvPixelBuffer: sourceBuffer) var textImage = TextLayerPlayer.instance.getTextLayerAtTimesStamp(ts:request.compositionTime.seconds) combinedFilter.setValue(textImage, forKey: "inputImage") if let outputImage = combinedFilter.outputImage { let renderDestination = CIRenderDestination(pixelBuffer: outputPixelBuffer) do { try coreImageContext.startTask(toRender: outputImage, to: renderDestination) } catch { } } } request.finish(withComposedVideoFrame: outputPixelBuffer) } } func regularCompositionHdr(asset: AVAsset) -> AVVideoComposition { self.isHdr = checkHdr(asset: asset) let avComposition = AVMutableComposition() let composition = AVMutableVideoComposition() composition.colorPrimaries = AVVideoColorPrimaries_ITU_R_2020 composition.colorTransferFunction = AVVideoTransferFunction_ITU_R_2100_HLG composition.colorYCbCrMatrix = AVVideoYCbCrMatrix_ITU_R_2020 composition.renderSize = assetSize composition.frameDuration = CMTime(value: 1, timescale: 30) composition.customVideoCompositorClass = CustomHdrCompositor.self composition.perFrameHDRDisplayMetadataPolicy = .propagate return composition } I’m using this function to transfer the transparent CGImage to CIImage that supports HDR func convertToHDRCIImage(from cgImage: CGImage, maxBrightness: CGFloat = 3.0) -> CIImage? { // Create a CIImage from the input CGImage let baseImage = CIImage(cgImage: cgImage) // Create HDR color adjustment filter let colorAdjust = CIFilter(name: "CIColorMatrix")! 
colorAdjust.setValue(baseImage, forKey: kCIInputImageKey) // Calculate HDR multipliers based on maxBrightness // This will maintain color ratios while increasing brightness colorAdjust.setValue(CIVector(x: maxBrightness, y: 0, z: 0, w: 0), forKey: "inputRVector") colorAdjust.setValue(CIVector(x: 0, y: maxBrightness, z: 0, w: 0), forKey: "inputGVector") colorAdjust.setValue(CIVector(x: 0, y: 0, z: maxBrightness, w: 0), forKey: "inputBVector") // Maintain alpha channel colorAdjust.setValue(CIVector(x: 0, y: 0, z: 0, w: 1), forKey: "inputAVector") guard let adjustedImage = colorAdjust.outputImage else { return nil } // Apply color space transformation using CIImage's colorSpace property let transformedImage = adjustedImage.matchedFromWorkingSpace(to: hdrWorkingSpace)! // Create context with HDR color space let context = CIContext(options: [ .workingColorSpace: hdrColorSpace, .outputColorSpace: hdrColorSpace ]) // Get the image bounds let bounds = transformedImage.extent // Create a new pixel buffer with HDR format var pixelBuffer: CVPixelBuffer? let pixelBufferAttributes = [ kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_64RGBAHalf, kCVPixelBufferMetalCompatibilityKey: true ] as CFDictionary CVPixelBufferCreate(kCFAllocatorDefault, Int(bounds.width), Int(bounds.height), kCVPixelFormatType_64RGBAHalf, pixelBufferAttributes, &pixelBuffer) guard let destinationBuffer = pixelBuffer else { return nil } context.render(transformedImage, to: destinationBuffer, bounds: bounds, colorSpace: hdrColorSpace) // Create final CIImage from the HDR pixel buffer let finalImage = CIImage(cvPixelBuffer: destinationBuffer, options: [.colorSpace: hdrColorSpace]) return finalImage } When reducing the HDR to SDR it keeps the right color of the overlay with, but than it reduces the HDR effect which I want to keep
0
0
399
Jan ’25