I’m encountering a consistent crash in WebKit when using WKWebView to play a YouTube playlist in my iOS app. Playback starts successfully, but the web process terminates during the second video in the playlist. This only occurs on physical devices, not in the simulator.
Here’s a simplified Swift example of my setup:
import SwiftUI
import WebKit

struct ContentView: View {
    private let playlistID = "PLig2mjpwQBZnghraUKGhCqc9eAy0UbpDN"

    var body: some View {
        YouTubeWebView(playlistID: playlistID)
            .edgesIgnoringSafeArea(.all)
    }
}

struct YouTubeWebView: UIViewRepresentable {
    let playlistID: String

    func makeUIView(context: Context) -> WKWebView {
        let config = WKWebViewConfiguration()
        config.allowsInlineMediaPlayback = true
        let webView = WKWebView(frame: .zero, configuration: config)
        webView.scrollView.isScrollEnabled = true
        let html = """
        <!doctype html>
        <html>
        <head>
        <meta name="viewport" content="initial-scale=1.0, maximum-scale=1.0">
        <style>body,html{height:100%;margin:0;background:#000}iframe{width:100%;height:100%;border:0}</style>
        </head>
        <body>
        <iframe
        src="https://www.youtube-nocookie.com/embed/videoseries?list=\(playlistID)&controls=1&rel=0&playsinline=1&iv_load_policy=3"
        frameborder="0"
        allow="encrypted-media; picture-in-picture; fullscreen"
        webkit-playsinline
        allowfullscreen
        ></iframe>
        </body>
        </html>
        """
        webView.loadHTMLString(html, baseURL: nil)
        return webView
    }

    func updateUIView(_ uiView: WKWebView, context: Context) {}
}

#Preview {
    ContentView()
}
Observed behavior:
First video plays without issue.
Web process crashes when the second video in the playlist starts.
Console logs show WebProcessProxy::didClose and repeated memory status messages.
Using ProcessAssertion or background activity does not prevent the crash.
Only occurs on physical devices; simulators do not reproduce the issue.
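One mitigation I've sketched (it only recovers after the termination; it does not prevent it) is to make the representable's coordinator the navigation delegate and reload the embed when the content process dies:

// Inside YouTubeWebView; in makeUIView, also set webView.navigationDelegate = context.coordinator
func makeCoordinator() -> Coordinator { Coordinator() }

final class Coordinator: NSObject, WKNavigationDelegate {
    func webViewWebContentProcessDidTerminate(_ webView: WKWebView) {
        print("WKWebView content process terminated; reloading embed")
        // Re-issue loadHTMLString(_:baseURL:) with the same embed markup here.
    }
}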
Questions:
Is there something I should change or add in my WKWebView setup or HTML/iframe to prevent the crash when playing the second video in a playlist on physical iOS devices?
Is there an officially supported way to limit memory or prevent WebKit from terminating the web process during multi-video playback?
Are there recommended patterns for playing YouTube playlists in a WKWebView on iOS without risking crashes?
Any tips for debugging or configuring WKWebView to make it more stable for continuous playlist playback?
Thanks in advance for any guidance!
Video: Dive into the world of video on Apple platforms, exploring ways to integrate video functionality within your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.
SBS ViewPacking adds half a frame to the opposite eye.
Meaning that if you look all the way to the right, you can see an extra half frame with the left eye, and vice versa.
OU doesn't work at all: the preview doesn't show a thumbnail and the video doesn't play.
Any hints on how to fix this? I submitted a bug report but haven't heard anything.
Is there sample code on how to use VTFrameProcessorOpticalFlow?
How can I use the iPhone 16 Pro to capture HDR Dolby Vision and use VideoToolbox to encode with Dolby Vision metadata in the bitstream?
I know the iOS system can achieve this, but I would like to see source code showing how to do it.
Hi everyone,
I’m exploring using the iPhone 17 Pro with the Blackmagic ProDock in a custom capture app. The genlock functionality seems accessible via AVExternalSyncDevice and related APIs, which is great.
I’m specifically curious about external timecode coming in from the ProDock:
• Is there a public way to access the timecode feed in a custom app via AVFoundation or another Apple API?
• If so, what is the recommended approach to read or apply that timecode during capture?
• Are there any current limitations or entitlements required to access timecode from ProDock in a third-party app?
I’m excited to start integrating synchronized capture in my app, and any guidance or sample patterns would be greatly appreciated.
Thanks in advance!
— [Artem]
Context: "Explore low-latency video encoding with VideoToolbox" https://developer.apple.com/videos/play/wwdc2021/10158/
The session above says the HEVC VideoToolbox encoder supports SVC temporal layering of streams in low-latency mode with all P-frames.
My question: does HEVC VideoToolbox support temporal layering of streams with B-frames? Thanks.
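For reference, this is roughly how I set up the P-frame temporal layering today, following the session (a sketch; the 1920x1080 size and the 0.5 base-layer fraction are placeholder values):

import VideoToolbox

var session: VTCompressionSession?
let encoderSpec = [kVTVideoEncoderSpecification_EnableLowLatencyRateControl as String: true] as CFDictionary
let status = VTCompressionSessionCreate(allocator: nil,
                                        width: 1920, height: 1080,
                                        codecType: kCMVideoCodecType_HEVC,
                                        encoderSpecification: encoderSpec,
                                        imageBufferAttributes: nil,
                                        compressedDataAllocator: nil,
                                        outputCallback: nil,
                                        refcon: nil,
                                        compressionSessionOut: &session)
if status == noErr, let session {
    // Half of the frames form the base layer; the rest form the enhancement layer.
    VTSessionSetProperty(session,
                         key: kVTCompressionPropertyKey_BaseLayerFrameRateFraction,
                         value: NSNumber(value: 0.5))
}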
Does the VideoToolbox HEVC encoder on the iPhone 16 Pro write 10-bit HDR VUI info into the bitstream?
I want to encode a 4K 10-bit HDR bitstream with VUI info using the HEVC VideoToolbox encoder, but I don't know how to get the VUI info into the bitstream. Could someone share sample code?
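For context, here is the property setup I've pieced together so far (a sketch assuming an HDR10/PQ source; my understanding is that these compression-session properties are what produce the colour VUI fields, but I'm not certain):

import VideoToolbox

// Sketch: 10-bit profile plus HDR10/PQ colour tags on an existing HEVC VTCompressionSession.
func applyHDR10ColorInfo(to session: VTCompressionSession) {
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_ProfileLevel,
                         value: kVTProfileLevel_HEVC_Main10_AutoLevel)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_ColorPrimaries,
                         value: kCVImageBufferColorPrimaries_ITU_R_2020)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_TransferFunction,
                         value: kCVImageBufferTransferFunction_SMPTE_ST_2084_PQ)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_YCbCrMatrix,
                         value: kCVImageBufferYCbCrMatrix_ITU_R_2020)
}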
I’m implementing FairPlay offline streaming on iOS and ran into a question about DRM expiration handling.
As far as I understand, when issuing a FairPlay offline license, there are typically two time windows:
1. The period during which the user can start offline playback (the longer “rental window”).
2. Once playback starts, the duration allowed to complete playback (the shorter “playback window”).
I’d like to display this information (the remaining validity or expiration time) in the app’s UI next to each downloaded asset.
My question is:
👉 Is there a way to programmatically check or retrieve the expiration time for a FairPlay offline asset on the client side (via AVFoundation or AVContentKeySession)?
Any guidance or best practices for surfacing DRM expiration info in the UI would be greatly appreciated.
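In case there is no direct API, this is the client-side bookkeeping I'm considering as a fallback (a sketch; the expiry values would have to come from whatever my key server reports when issuing the license):

import Foundation

// Persist the windows reported by the key server alongside each download,
// then compute the remaining validity for display in the UI.
struct OfflineLicenseInfo: Codable {
    let assetID: String
    let rentalExpiration: Date       // when offline playback can no longer be started
    let playbackWindow: TimeInterval // seconds allowed once playback has begun
    var firstPlayedAt: Date?

    var effectiveExpiration: Date {
        guard let started = firstPlayedAt else { return rentalExpiration }
        return min(rentalExpiration, started.addingTimeInterval(playbackWindow))
    }
}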
It seems that VideoToolbox in iOS 26 is having trouble decoding H.264 streams. I have filed a bug report: https://feedbackassistant.apple.com/feedback/20741484
Are there limits on the supported dimensions for VTLowLatencyFrameInterpolationConfiguration? Querying VTLowLatencyFrameInterpolationConfiguration.maximumDimensions and VTLowLatencyFrameInterpolationConfiguration.minimumDimensions returns nil. When I try the WWDC sample project EnhancingYourAppWithMachineLearningBasedVideoEffects with a 4K video, the statement try frameProcessor.startSession(configuration: configuration) executes, but try await frameProcessor.process(parameters: parameters) throws Error Domain=VTFrameProcessorErrorDomain Code=-19730 "Processor is not initialized" UserInfo={NSLocalizedDescription=Processor is not initialized}.
Also, why is VTLowLatencyFrameInterpolationConfiguration able to run while the app is backgrounded, but VTFrameRateConversionParameters can't (due to GPU usage)?
A delegate method is not being called: when I tap the back button to exit full screen in AVPlayerViewController, func playerViewController(_ playerViewController: AVPlayerViewController,
willEndFullScreenPresentationWithAnimationCoordinator coordinator: UIViewControllerTransitionCoordinator) is never executed.
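For reference, a simplified version of my setup looks roughly like this (the delegate assignment is the part I'm double-checking):

import UIKit
import AVKit

final class PlayerHost: UIViewController, AVPlayerViewControllerDelegate {
    func presentPlayer(with player: AVPlayer) {
        let controller = AVPlayerViewController()
        controller.player = player
        controller.delegate = self   // the callback is only delivered to this delegate
        present(controller, animated: true)
    }

    func playerViewController(_ playerViewController: AVPlayerViewController,
                              willEndFullScreenPresentationWithAnimationCoordinator coordinator: UIViewControllerTransitionCoordinator) {
        print("will end full-screen presentation")
    }
}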
I'm building a Swift video editor with AVFoundation and a custom compositor. Despite setting AVVideoComposition.frameDuration to 60 FPS, I'm seeing significant frame skipping during playback.
Console Output Shows Frame Skipping
Frame #0 at 0.0 ms (fps: 60.0)
Frame #2 at 33.333333333333336 ms (fps: 60.0)
Frame #6 at 100.0 ms (fps: 60.0)
Frame #10 at 166.66666666666666 ms (fps: 60.0)
Frame #32 at 533.3333333333334 ms (fps: 60.0)
Frame #62 at 1033.3333333333335 ms (fps: 60.0)
Frame #96 at 1600.0 ms (fps: 60.0)
Instead of frames every ~16.67ms (60 FPS), I'm getting irregular intervals, sometimes 33ms, 67ms, or hundreds of milliseconds apart.
Renderer.swift (Key Parts)
import AVFoundation
import Combine

@MainActor
class Renderer: ObservableObject {
    @Published var playerItem: AVPlayerItem?
    private let assetManager: ProjectAssetManager?
    private let compositorId: String

    func buildComposition() async throws {
        // ... load mouse moves/clicks data ...
        let composition = AVMutableComposition()
        guard let videoTrack = composition.addMutableTrack(
            withMediaType: .video,
            preferredTrackID: kCMPersistentTrackID_Invalid
        ) else { return }

        var currentTime = CMTime.zero
        var layerInstructions: [AVMutableVideoCompositionLayerInstruction] = []

        // Insert video segments
        for videoURL in videoURLs {
            let asset = AVAsset(url: videoURL)
            let tracks = try await asset.loadTracks(withMediaType: .video)
            guard let assetVideoTrack = tracks.first else { continue }
            let duration = try await asset.load(.duration)
            try videoTrack.insertTimeRange(
                CMTimeRange(start: .zero, duration: duration),
                of: assetVideoTrack,
                at: currentTime
            )
            let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
            let transform = try await assetVideoTrack.load(.preferredTransform)
            layerInstruction.setTransform(transform, at: currentTime)
            layerInstructions.append(layerInstruction)
            currentTime = CMTimeAdd(currentTime, duration)
        }

        let videoComposition = AVMutableVideoComposition()
        videoComposition.frameDuration = CMTime(value: 1, timescale: 60) // 60 FPS

        // Set render size from first video
        if let firstURL = videoURLs.first {
            let firstAsset = AVAsset(url: firstURL)
            if let firstTrack = try await firstAsset.loadTracks(withMediaType: .video).first {
                let naturalSize = try await firstTrack.load(.naturalSize)
                let transform = try await firstTrack.load(.preferredTransform)
                videoComposition.renderSize = CGSize(
                    width: abs(naturalSize.applying(transform).width),
                    height: abs(naturalSize.applying(transform).height)
                )
            }
        }

        let instruction = CompositorInstruction()
        instruction.timeRange = CMTimeRange(start: .zero, duration: currentTime)
        instruction.layerInstructions = layerInstructions
        instruction.compositorId = compositorId
        videoComposition.instructions = [instruction]
        videoComposition.customVideoCompositorClass = CustomVideoCompositor.self

        let playerItem = AVPlayerItem(asset: composition)
        playerItem.videoComposition = videoComposition
        self.playerItem = playerItem
    }
}
class CompositorInstruction: NSObject, AVVideoCompositionInstructionProtocol {
    var timeRange: CMTimeRange = .zero
    var enablePostProcessing: Bool = false
    var containsTweening: Bool = false
    var requiredSourceTrackIDs: [NSValue]?
    var passthroughTrackID: CMPersistentTrackID = kCMPersistentTrackID_Invalid
    var layerInstructions: [AVVideoCompositionLayerInstruction] = []
    var compositorId: String = ""
}

class CustomVideoCompositor: NSObject, AVVideoCompositing {
    var sourcePixelBufferAttributes: [String : Any]? = [
        kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)
    ]
    var requiredPixelBufferAttributesForRenderContext: [String : Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)
    ]

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {}

    func startRequest(_ asyncVideoCompositionRequest: AVAsynchronousVideoCompositionRequest) {
        guard let sourceTrackID = asyncVideoCompositionRequest.sourceTrackIDs.first?.int32Value,
              let sourcePixelBuffer = asyncVideoCompositionRequest.sourceFrame(byTrackID: sourceTrackID),
              let outputBuffer = asyncVideoCompositionRequest.renderContext.newPixelBuffer() else {
            asyncVideoCompositionRequest.finish(with: NSError(domain: "VideoCompositor", code: -1))
            return
        }

        let videoComposition = asyncVideoCompositionRequest.renderContext.videoComposition
        let frameDuration = videoComposition.frameDuration
        let fps = Double(frameDuration.timescale) / Double(frameDuration.value)

        let compositionTime = asyncVideoCompositionRequest.compositionTime
        let seconds = CMTimeGetSeconds(compositionTime)
        let frameInMilliseconds = seconds * 1000
        let frameNumber = Int(round(seconds * fps))

        print("Frame #\(frameNumber) at \(frameInMilliseconds) ms (fps: \(fps))")

        asyncVideoCompositionRequest.finish(withComposedVideoFrame: outputBuffer)
    }

    func cancelAllPendingVideoCompositionRequests() {}
}
VideoPlayerViewModel
@MainActor
class VideoPlayerViewModel: ObservableObject {
    let player = AVPlayer()
    private let renderer: Renderer

    func loadVideo() async {
        try? await renderer.buildComposition()
        if let playerItem = renderer.playerItem {
            player.replaceCurrentItem(with: playerItem)
        }
    }
}
What I've Tried
Frame skipping is consistent—exact same timestamps on every playback
Issue persists even with minimal processing (just passing through buffers)
Occurs regardless of compositor complexity
Please note that I need every frame at exact millisecond intervals for my application. Frame loss or inconsistent frameInMillisecond values are not acceptable.
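To rule out real-time display scheduling, I'm also planning to pull composed frames offline with AVAssetReaderVideoCompositionOutput (rough sketch below; my understanding is that in this path every frame should hit startRequest):

import AVFoundation

func readAllComposedFrames(composition: AVComposition,
                           videoComposition: AVVideoComposition) throws {
    let reader = try AVAssetReader(asset: composition)
    let output = AVAssetReaderVideoCompositionOutput(
        videoTracks: composition.tracks(withMediaType: .video),
        videoSettings: [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
    )
    output.videoComposition = videoComposition   // same 60 FPS frameDuration and custom compositor
    reader.add(output)
    guard reader.startReading() else { return }
    while let sampleBuffer = output.copyNextSampleBuffer() {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        print("Composed frame at \(CMTimeGetSeconds(pts) * 1000) ms")
    }
}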
We're distributing a virtual camera with our app that does not profit in the slightest from automatically applied system video effects both to the video going in (physical camera device) or out (virtual camera device). I'm aware of setting NSCameraReactionEffectGesturesEnabledDefault in Info.plist and determining active video effects via AVCaptureDevice API. Those are obviously crutches, because having to tell users to go look for and click around in menu bar apps is the opposite of a great UX.
To make our product's video output more deterministic, I'm looking for a way to tell the CMIO subsystem that our virtual camera does not support any of the system video effects. I'm seeing properties like
AVCaptureDevice.Format.isPortraitEffectSupported and AVCaptureDevice.Format.isStudioLightSupported whose documentation refers to the format's ability to support these effects. Since we're setting a CMFormatDescription via CMIOExtensionStreamSource.formats I was hoping to find something in the extensions, but wasn't successful so far.
Can this be done?
Hey everyone,
I'm stuck on a really frustrating AVFoundation problem. I'm building a video editor that uses a custom AVVideoCompositor to add effects, and I need the final output to be 60 FPS.
So basically, I create an AVMutableComposition to sequence my video clips. I create an AVMutableVideoComposition and set the frame rate to 60 FPS: videoComposition.frameDuration = CMTime(value: 1, timescale: 60)
I assign my CustomVideoCompositor class to the videoComposition.
I create an AVPlayerItem with the composition and video composition.
The Problem:
Playback Works: When I play the AVPlayerItem in an AVPlayer, it's perfect. It plays at a smooth 60 FPS, and my custom compositor's startRequest method is called 60 times per second.
Export Fails: When I try to export the exact same composition and video composition using AVAssetExportSession, the final .mp4 file is always 30 FPS (or 29.97).
I've logged inside my custom compositor during the export, and it's definitely being called 30 times per second, so it's generating the 30 frames. It seems like AVAssetExportSession is just dropping every other frame when it encodes the video.
My source videos are screen recordings which I recorded using ScreenCaptureKit itself with the minimum frame interval to be 60.
Here is my export function; I'm using the AVAssetExportPresetHighestQuality preset:
func exportVideo(to outputURL: URL) async throws {
    guard let composition = composition,
          let videoComposition = videoComposition else {
        throw VideoCompositionError.noValidVideos
    }
    try? FileManager.default.removeItem(at: outputURL)

    guard let exportSession = AVAssetExportSession(
        asset: composition,
        presetName: AVAssetExportPresetHighestQuality // Is this the problem?
    ) else {
        throw VideoCompositionError.trackCreationFailed
    }

    exportSession.outputFileType = .mp4
    exportSession.videoComposition = videoComposition // This has the 60fps setting
    try await exportSession.export(to: outputURL, as: .mp4)
}
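For reference, this is a quick way to check the exported file's frame rate (diagnostic sketch; the helper name is just for illustration):

import AVFoundation

func printExportedFrameRate(of outputURL: URL) async throws {
    let exported = AVURLAsset(url: outputURL)
    if let track = try await exported.loadTracks(withMediaType: .video).first {
        let fps = try await track.load(.nominalFrameRate)
        print("Exported nominal frame rate: \(fps)")
    }
}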
I've created a bare bones sample project that shows this exact bug in action. The resulting video is 60fps during playback, but only 30fps during the export. https://github.com/zaidbren/SimpleEditor
My Question:
Why is AVAssetExportSession ignoring my 60 FPS frameDuration and defaulting to 30 FPS, even though AVPlayer respects it?
When trying to record ProRes RAW (btp2) with AVAssetWriter, I get several types of errors: -12780 or -11875.
I wonder whether recording ProRes RAW can only be done through AVCaptureMovieFileOutput, or if there is a way to correctly configure AVAssetWriter to do it.
I'm using the SwiftUI Photos Picker to select videos from the user's Photos library and then opening the video using the PhotosPickerItem.
I'm looking for a way to allow the user to open the same video on their other devices as the app uses SwiftData and CloudKit to provide access to a recently watched list of videos.
The URL from the PhotosPickerItem appears to be device specific and so I was looking to see if I can use the itemIdentifier and then the init that takes the itemIdentifier to create the PhotosPickerItem on the other devices. The itemIdentifier however is always nil and so wouldn't be able to be used in this way.
Is there an alternative approach whereby the user can open a video using a PhotosPickerItem and that item would be viewable on their other devices via an item identifier or a URL that is device agnostic? This approach should also not involve copying the video into other storage, as that would simply expand the user's iCloud storage usage, providing a less than ideal experience.
If the user has opened the video from their Photos library, there should be a way to allow the same user (e.g. same Apple ID), to use the same app on another device to open the video again.
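One direction I've been looking at (not yet verified, and I'm unsure it fits the PhotosPicker flow) is converting the asset's local identifier to a PHCloudIdentifier, syncing that string via CloudKit, and mapping it back on the other device. I also gather that itemIdentifier stays nil unless the picker is created with photoLibrary: .shared(). A sketch:

import Photos

// localID is assumed to come from PHPickerResult.assetIdentifier or PhotosPickerItem.itemIdentifier.
func cloudIdentifierString(forLocalID localID: String) throws -> String? {
    let mappings = PHPhotoLibrary.shared()
        .cloudIdentifierMappings(forLocalIdentifiers: [localID])
    return try mappings[localID]?.get().stringValue
}

func localIdentifier(forCloudIDString cloudIDString: String) throws -> String? {
    let cloudID = PHCloudIdentifier(stringValue: cloudIDString)
    let mappings = PHPhotoLibrary.shared()
        .localIdentifierMappings(for: [cloudID])
    return try mappings[cloudID]?.get()
}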
Adding both AVCaptureMovieFileOutput and AVCaptureVideoDataOutput to an AVCaptureSession is supported, as described in the documentation (snippet copied below), but when the AVCaptureDevice is configured with the ProRes422 codec, it fails unless one of the two outputs is removed from the capture session. It is readily reproducible on an iPhone 14 Pro running iOS 26.0.
Prior to iOS 16, you can add an AVCaptureVideoDataOutput and an AVCaptureMovieFileOutput to the same session, but only one may have its connection active. If you attempt to enable both connections, the system chooses the movie file output as the active connection and disables the video data output’s connection. For apps that link against iOS 16 or later, this restriction no longer exists.
Playback of HD H.264 MP4 files (720p, 50 fps) can randomly result in a stalled image. Playback does not stop in that case: the image is frozen while the audio and the timeline keep going. Tapping play/pause or scrubbing the timeline recovers playback.
It can also happen while scrubbing the timeline, especially into areas that have not loaded yet (progressive MP4 download). The behaviour is always the same: the image is stalled/frozen while the audio continues.
To reproduce: use example project https://developer.apple.com/documentation/AVKit/playing-video-content-in-a-standard-user-interface
Example file: https://www.keepinmind.info/test.mp4
I’m building a macOS video editor that uses AVComposition and AVVideoComposition.
Initially, my renderer creates a composition with some default video/audio tracks:
@Published var composition: AVComposition?
@Published var videoComposition: AVVideoComposition?
@Published var playerItem: AVPlayerItem?
Then I call a buildComposition() function that inserts all the default video segments.
Later in the editing workflow, the user may choose to add their own custom video clip. For this I have a function like:
private func handlePickedVideo(_ url: URL) {
    guard url.startAccessingSecurityScopedResource() else {
        print("Failed to access security-scoped resource")
        return
    }
    let asset = AVURLAsset(url: url)
    let videoTracks = asset.tracks(withMediaType: .video)
    guard let firstVideoTrack = videoTracks.first else {
        print("No video track found")
        url.stopAccessingSecurityScopedResource()
        return
    }
    renderer.insertUserVideoTrack(from: asset, track: firstVideoTrack)
    url.stopAccessingSecurityScopedResource()
}
What I want to achieve is the same behavior professional video editors provide:
after the composition has already been initialized and built, the user should be able to add a new video track and the composition should update live, meaning the preview player immediately reflects the change without manually rebuilding everything from scratch.
How can I structure my AVComposition / AVMutableComposition and my rendering pipeline so that adding a new clip later updates the existing composition in real time (similar to Final Cut/Adobe Premiere), instead of needing to rebuild everything from zero?
You can find a playable version of this entire setup at :- https://github.com/zaidbren/SimpleEditor
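Here is a rough sketch of the kind of incremental update I mean (the function name and the player-item swap are illustrative, not my actual implementation):

import AVFoundation

func appendUserClip(_ asset: AVAsset,
                    sourceTrack: AVAssetTrack,
                    to composition: AVMutableComposition,
                    videoComposition: AVVideoComposition?,
                    player: AVPlayer) async throws {
    // Reuse a compatible track if one exists, otherwise create a new one.
    guard let targetTrack = composition.mutableTrack(compatibleWith: sourceTrack)
        ?? composition.addMutableTrack(withMediaType: .video,
                                       preferredTrackID: kCMPersistentTrackID_Invalid) else { return }
    let duration = try await asset.load(.duration)
    try targetTrack.insertTimeRange(CMTimeRange(start: .zero, duration: duration),
                                    of: sourceTrack,
                                    at: composition.duration)

    // Hand the player a fresh item instead of rebuilding every segment from scratch.
    let item = AVPlayerItem(asset: composition)
    item.videoComposition = videoComposition
    player.replaceCurrentItem(with: item)
}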
I believe this should work:
CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFDictionaryAddValue(attrs, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_709_2);
CFDictionaryAddValue(attrs, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_709_2);
CFDictionaryAddValue(attrs, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_709_2);
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB, attrs, &pixelBuffer);
assert(CFDictionaryGetCount(CVBufferGetAttachments(pixelBuffer, kCVAttachmentMode_ShouldPropagate)) > 0);
But that last assert fails, so it appears the color info does not get attached.
kCVImageBufferColorPrimariesKey and the others are not one of the keys listed under BufferAttributeKeys, but I think they're supposed to be allowed because they're listed by CMVideoFormatDescriptionGetExtensionKeysCommonWithImageBuffers().
I'm hoping that putting the color matrix info in there will control how AVAssetWriter converts the RGB to YCbCr.
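As a fallback I've been trying to set the attachments explicitly after creating the buffer, which does show up in the propagated attachments (Swift sketch for brevity; the C equivalent is the same CVBufferSetAttachment call):

import CoreVideo

func tagAsBT709(_ pixelBuffer: CVPixelBuffer) {
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferColorPrimariesKey,
                          kCVImageBufferColorPrimaries_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferTransferFunctionKey,
                          kCVImageBufferTransferFunction_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(pixelBuffer, kCVImageBufferYCbCrMatrixKey,
                          kCVImageBufferYCbCrMatrix_ITU_R_709_2, .shouldPropagate)
}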