Discuss using the camera on Apple devices.

Posts under Camera tag

95 Posts

Post

Replies

Boosts

Views

Activity

Camera launched via Camera Control is terminated with “AVCaptureEventInteraction not installed” when viewing/editing photos
I’m seeing a reproducible system-level Camera crash/termination on iPhone Air running iOS 26.4.2. Steps to reproduce: Press Camera Control to launch the Camera app. Tap the lower-left thumbnail to enter the recent photo view. Browse photos, or tap Edit and start cropping a photo. The Camera/Photos flow unexpectedly exits and returns to the Home Screen or widget view. Additional detail: The issue can happen whether or not a new photo is taken after launching Camera with Camera Control. In other words, using Camera Control as a shortcut into Camera, then tapping the lower-left thumbnail to browse photos, can trigger the issue. Sometimes it happens while only browsing photos, without entering Edit. Expected result: The photo viewer/editor should stay open and allow normal browsing or cropping. Actual result: The flow exits unexpectedly. Mac Console evidence: Around 2026-05-12 21:53:59-21:54:00, Console showed SpringBoard/RunningBoard terminating com.apple.camera. Relevant log excerpt:

Capture Application Requirements Unmet: "AVCaptureEventInteraction not installed"
reportType: CrashLog
ReportCrash Parsing corpse data for pid 94087
com.apple.camera: Foreground: false

Storage is sufficient. Restart/reset-style support steps have already been tried and did not resolve the issue. This appears specific to the Camera Control launch path, not normal Photos app browsing. Has anyone else seen this on iOS 26.x, or is this a known Camera Control / AVCaptureEventInteraction regression? Already filed as FB22766094.
0
0
58
9h
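For context on the log message in the post above: AVCaptureEventInteraction is the AVKit interaction a capture UI installs to receive Camera Control presses (iOS 17.2+). This is not a fix for the system Camera bug described, just a minimal sketch of how a third-party capture view would install it; the handler body is illustrative only.

import AVKit
import UIKit

// Install a Camera Control (hardware button) handler on a capture view.
func installCameraControlHandler(on view: UIView) {
    let interaction = AVCaptureEventInteraction { event in
        if event.phase == .ended {
            // Trigger capture in response to the hardware button press.
            print("Camera Control pressed")
        }
    }
    view.addInteraction(interaction)
}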
RealityView Camera Target Error when set while Orbiting
When interacting with RealityView’s realityViewCameraControls .orbit and setting a new RealityViewCameraContent .cameraTarget, the resulting camera target and camera orbit are incorrect. This can be demonstrated where one finger is orbiting the RealityView and another pushes a button which changes the camera target. Instead of the camera facing the new target, some point in the scene becomes the new effective camera target and orbit point. This only occurs while an orbit interaction is taking place. If you stop interacting with the orbit, change the target, then start orbit interacting again, everything works as expected. Though this example uses two touches, any change of the camera target has this conflict with orbit interaction. This means interacting with orbit can show the wrong camera view, which is unexpected for users and difficult for developers to reconcile or detect. Expected: interacting (orbiting) with the scene while setting a new camera target with the on-screen buttons (at the same time), the camera’s new target is centred in view; the orbit revolves around the new target and continues to match my gestures. Reality: interacting (orbiting) with the scene while setting a new camera target with the on-screen buttons (at the same time), the camera’s new target is not centred in view, and the camera is now orbiting an unexpected point in the scene that is not my expected target. One imperfect workaround is to force a rebuild of the view after setting a new cameraTarget. This sets all targets correctly but results in a flicker and a loss of orbit controls until re-touch, and is ultimately a poor user experience, though it is better than the wrong target being shown unexpectedly. Code Sample:

import SwiftUI
import RealityKit

struct RKOrbitTarget: View {
    @State private var target: Int = 0
    @State private var rcContent: RealityViewCameraContent?
    @State private var rkID: UUID = UUID()

    let root = Entity()
    let center = ModelEntity(mesh: .generateSphere(radius: 0.05),
                             materials: [UnlitMaterial(color: UIColor(.gray.opacity(0.5)))])
    let red = ModelEntity(mesh: .generateBox(size: 0.1),
                          materials: [SimpleMaterial(color: .red, isMetallic: false)])
    let blue = ModelEntity(mesh: .generateBox(size: 0.1),
                           materials: [SimpleMaterial(color: .blue, isMetallic: false)])
    let green = ModelEntity(mesh: .generateBox(size: 0.1),
                            materials: [SimpleMaterial(color: .green, isMetallic: false)])

    var body: some View {
        VStack {
            RealityView { content in
                red.position.x = 0.5
                blue.position.z = 0.5
                green.position.y = 0.5
                center.position = .init(repeating: 0.25)
                content.cameraTarget = target == 0 ? root : blue
                root.addChild(red)
                root.addChild(blue)
                root.addChild(green)
                root.addChild(center)
                content.add(root)
            } update: { content in
                switch target {
                case 0: content.cameraTarget = root
                case 1: content.cameraTarget = blue
                case 2: content.cameraTarget = red
                case 3: content.cameraTarget = green
                default: content.cameraTarget = root
                }
            }
            .id(rkID)
            .realityViewCameraControls(.orbit)

            VStack {
                Text("Target")
                Button("Default") {
                    target = 0
                    // Force-rebuilding the view resets orbit target and rotation,
                    // but shows a flicker and interaction requires a touch reset.
                    // Not an ideal workaround:
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                Button("Blue") {
                    target = 1
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                .tint(.blue)
                Button("Red") {
                    target = 2
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                .tint(.red)
                Button("Green") {
                    target = 3
                    // rkID = UUID()
                }
                .buttonStyle(.bordered)
                .tint(.green)
            }
        }
    }
}

Xcode Version: Version 26.0 (17A324)
iOS Version: iOS 26.5 (23F75)
Tested on devices: iPhone 12 Pro, iPhone 15 Pro
2
0
454
16h
Radiometric interpretation of Apple ProRAW and Bayer RAW access via AVFoundation
I am working on a computational photography research project involving multi-exposure HDR reconstruction using Bayer RAW and Apple ProRAW captures. I would like to clarify the radiometric interpretation of Apple ProRAW and the availability of Bayer RAW capture through AVFoundation. My questions are: 1. On current iPhone Pro devices, is it possible for third-party apps to capture and export true Bayer-pattern RAW DNG files through AVFoundation, rather than Apple ProRAW linear DNG files? If so, which availableRawPhotoPixelFormatTypes correspond to Bayer RAW, and what device or format restrictions apply? 2. Apple ProRAW appears to be demosaiced and computationally processed, and may include multi-frame fusion. Is the decoded ProRAW image intended to be radiometrically linear and scene-referred? 3. For a bracketed ProRAW sequence captured with fixed ISO, white balance, lens, and focus, but different exposure times, can one assume that the decoded linear pixel values Y_i(p) satisfy an exposure-proportional model in non-saturated regions, such as Y_i(p) ≈ t_i R(p), across brackets? This question is about radiometric consistency for algorithmic use, not about visual editing or tone mapping. Thank you for your help.
0
0
139
3d
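A sketch relevant to question 1 above: AVCapturePhotoOutput exposes the available RAW pixel formats, and since iOS 14.3 it provides class methods to distinguish Bayer RAW from Apple ProRAW. ProRAW formats are listed only once isAppleProRAWEnabled is set on a supported configuration; this is a sketch of how to inspect what a given device offers, not a statement about which devices expose Bayer formats.

import AVFoundation

// List the RAW pixel formats a photo output offers and separate
// Bayer RAW from Apple ProRAW (class methods available since iOS 14.3).
func listRAWFormats(of photoOutput: AVCapturePhotoOutput) {
    for fmt in photoOutput.availableRawPhotoPixelFormatTypes {
        if AVCapturePhotoOutput.isBayerRAWPixelFormat(fmt) {
            print("Bayer RAW format: \(fmt)")
        } else if AVCapturePhotoOutput.isAppleProRAWPixelFormat(fmt) {
            print("Apple ProRAW (linear DNG) format: \(fmt)")
        }
    }
}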
Radiometric interpretation of Apple ProRAW and Bayer RAW access via AVFoundation
I am working on a computational photography research project involving multi-exposure HDR reconstruction using Bayer RAW and Apple ProRAW captures. I would like to clarify the radiometric interpretation of Apple ProRAW and the availability of Bayer RAW capture through AVFoundation. My questions are: On current iPhone Pro devices, is it possible for third-party apps to capture and export true Bayer-pattern RAW DNG files through AVFoundation, rather than Apple ProRAW linear DNG files? If so, which availableRawPhotoPixelFormatTypes correspond to Bayer RAW, and what device or format restrictions apply? Apple ProRAW appears to be demosaiced and computationally processed, and may include multi-frame fusion. Is the decoded ProRAW image intended to be radiometrically linear and scene-referred? For a bracketed ProRAW sequence captured with fixed ISO, white balance, lens, and focus, but different exposure times, can one assume that the decoded linear pixel values Y_i(p) satisfy an exposure-proportional model in non-saturated regions, such as Y_i(p) ≈ t_i R(p), across brackets? This question is about radiometric consistency for algorithmic use, not about visual editing or tone mapping. Thank you for your help.
0
0
144
3d
Working with kCVPixelFormatType_96VersatileBayerPacked12
Whilst AVCaptureSession is set up to capture ProRes RAW video, is it possible to get video pixel data which can be read and processed, such as by using CIImage(cvPixelBuffer:)? AVCaptureVideoDataOutput outputs ProRes RAW in the kCVPixelFormatType_96VersatileBayerPacked12 pixel format. Is there a provided way to debayer this pixel format into something more usable?
0
0
100
1w
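As a first diagnostic step for the question above, one can at least confirm what the data output is delivering before choosing a processing path, since CIImage(cvPixelBuffer:) is unlikely to accept packed Bayer data. A sketch assuming the usual sample-buffer delegate wiring, using the constant named in the post:

import AVFoundation
import CoreVideo

// Check the delivered pixel format before attempting any RGB-based processing.
final class BayerFormatChecker: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let buffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        if CVPixelBufferGetPixelFormatType(buffer) == kCVPixelFormatType_96VersatileBayerPacked12 {
            // Packed 12-bit Bayer: demosaic before handing to CIImage/Metal RGB paths.
            print("Got packed 12-bit Bayer frame")
        }
    }
}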
Setting up video and image capture pipeline creates internal errors in AVFoundation.
I have created code for iOS that allows me to start and stop video acquisition from a proprietary USB camera using AVFoundation's AVCaptureSession and AVCaptureDevice APIs. There is a start and a stop method. The start method takes an argument to specify one of two formats that I use for my custom camera application. I can start the session and switch between formats all day without any errors. However, if I start and then stop the camera three times in a row, on the third invocation of start, I get errors in the console output and the CMSampleBuffers stop flowing to my callback. Additionally, once I get AVFoundation into this state, stopping the camera doesn't help. I have to kill the app and start over. Here are the errors, and below these, the code. I'm hoping someone who has experience with these errors, or an engineer from Apple who knows the AVFoundation image capture pipeline code, can respond and tell me what I'm doing wrong. Thanks.

<<<< FigCaptureSourceRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSourceRemote.m:235) - (err=0)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:558) - (err=-16453)
<<<< FigCaptureSourceRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSourceRemote.m:235) - (err=0)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:253) - (err=-16453)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:269) - (err=-16453)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:511) - (err=-16453)
Capture session error: The operation could not be completed
Capture session error: The operation could not be completed

func start(for deviceFormat: String) async throws -> AnyPublisher<CMSampleBuffer, Swift.Error> {
    func configureCaptureDevice(with deviceFormat: String) throws {
        guard let format = formatDict[deviceFormat] else { throw Error.captureFormatNotFound }
        captureSession.beginConfiguration()
        defer { captureSession.commitConfiguration() }
        try captureDevice.lockForConfiguration()
        captureDeviceFormat = deviceFormat
        captureDevice.activeFormat = format
        captureDevice.unlockForConfiguration()
    }
    return try await withCheckedThrowingContinuation { continuation in
        sessionQueue.async { [unowned self] in
            logger.debug("Start capture session for \(deviceFormat): \(String(describing: captureSession))")
            // If we were already streaming camera images from a different mode, terminate that stream.
            bufferPublisher?.send(completion: .finished)
            bufferPublisher = nil
            captureDeviceFormat = ""
            do {
                // Re-configure with the new format; should be harmless if called with the currently configured format.
                try configureCaptureDevice(with: deviceFormat)
                // Return a new stream publisher for this invocation.
                bufferPublisher = PassthroughSubject<CMSampleBuffer, Swift.Error>()
                // If we are not currently running, start the image capture pipeline.
                if captureSession.isRunning == false {
                    captureSession.startRunning()
                }
                continuation.resume(returning: bufferPublisher!.eraseToAnyPublisher())
            } catch {
                logger.fault("Failed to start camera: \(error.localizedDescription)")
                continuation.resume(throwing: error)
            }
        }
    }
}

func stop() async throws {
    try await withCheckedThrowingContinuation { continuation in
        sessionQueue.async { [unowned self] in
            logger.debug("Stop capture session: \(String(describing: captureSession))")
            // The following invocation is synchronous and takes time to execute;
            // looks like a stall but you can ignore it as the MainActor is not blocked.
            captureSession.stopRunning()
            // Terminate the stream and reset our state.
            bufferPublisher?.send(completion: .finished)
            bufferPublisher = nil
            captureDeviceFormat = ""
            // Signal the caller that we are done here.
            continuation.resume()
        }
    }
}
0
0
232
4w
AVCaptureVideoDataOutput stops zooming while AVCaptureVideoPreviewLayer continues — physical wide / ultra-wide / telephoto only
We use a single AVCaptureSession with AVCaptureVideoPreviewLayer and AVCaptureVideoDataOutput (preview-sized buffers, BGRA). When we increase videoZoomFactor, beyond a certain zoom level the image from AVCaptureVideoDataOutput no longer zooms further, while AVCaptureVideoPreviewLayer continues to zoom with the same zoom control. The preview and the video-data output therefore diverge. This behavior appears when the active camera is a physical lens device — wide, ultra-wide, or telephoto (e.g. builtInWideAngleCamera, builtInUltraWideCamera, builtInTelephotoCamera, or similar). It does not appear when the active input is a virtual / multi-camera (e.g. triple camera, dual-wide, or other system multi-camera). Are there known conditions under which this mismatch between preview and video-data output is expected? Thank you.
1
0
190
Apr ’26
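A diagnostic sketch for the divergence described above, under the assumption that it relates to the active format's zoom limits: AVCaptureDevice.Format reports both the zoom ceiling and the threshold beyond which output buffers are digitally upscaled rather than optically zoomed, so logging these may show where the data output stops following the preview.

import AVFoundation

// Log the active format's zoom ceiling and upscale threshold.
func logZoomLimits(for device: AVCaptureDevice) {
    let format = device.activeFormat
    print("videoMaxZoomFactor:", format.videoMaxZoomFactor)
    print("videoZoomFactorUpscaleThreshold:", format.videoZoomFactorUpscaleThreshold)
    print("current videoZoomFactor:", device.videoZoomFactor)
}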
Is 18MP Front Camera Capture Available to Third-Party Apps via AVFoundation?
Hi, I'm investigating whether 18MP photo capture from the front camera on iPhone 17 Pro is available to third-party apps using AVFoundation. I first inspected all available AVCaptureDevice formats, but I could not find any format corresponding to ~18MP resolution (e.g., around 4896×3672).

for format in device.formats {
    let desc = format.formatDescription
    let dims = CMVideoFormatDescriptionGetDimensions(desc)
    print("Format: \(dims.width) x \(dims.height)")
}

All reported formats appear to be limited to resolutions such as 4032×3024 (12MP) or below. Question: Is 18MP front camera capture actually available to third-party apps via AVFoundation on iPhone 17?
1
0
475
Apr ’26
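One avenue worth checking for the question above (a sketch, not a confirmed answer): on recent devices, higher-megapixel stills are commonly exposed per format through supportedMaxPhotoDimensions (iOS 16+) rather than as separate entries in device.formats, and are requested via AVCapturePhotoOutput.maxPhotoDimensions.

import AVFoundation

// Enumerate each format's supported maximum *photo* dimensions,
// which can exceed the format's video dimensions.
func listMaxPhotoDimensions(of device: AVCaptureDevice) {
    for format in device.formats {
        for dims in format.supportedMaxPhotoDimensions {
            print("supported max photo dimensions: \(dims.width) x \(dims.height)")
        }
    }
}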
iPadOS problem with camera focus
After updating to iPadOS 26.4, or the latest 26.3, we’ve been experiencing issues with focusing. We have an app that scans 1×1 cm QR or DataMatrix codes from a distance of 10–20 cm, and users across different devices (iPad 9 and 10) are reporting problems. I didn’t find anything related to the camera in the version changelog, but users in various places are reporting problems with the camera.
0
0
307
Mar ’26
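A possible mitigation sketch for the close-range scanning described above; this is an assumption, not a confirmed fix for the reported OS behavior: restricting autofocus to the near range, where the hardware supports it, can help the lens settle at 10–20 cm.

import AVFoundation

// Restrict autofocus to near range for close-up code scanning.
func restrictFocusForCloseScanning(_ device: AVCaptureDevice) throws {
    guard device.isAutoFocusRangeRestrictionSupported else { return }
    try device.lockForConfiguration()
    device.autoFocusRangeRestriction = .near
    device.unlockForConfiguration()
}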
Management of Camera File Formats
It seems like every time an iOS update is installed, the Camera app file formats get reset to defaults. This setting is not available to manage at the MDM level. Many people need the most compatible settings for the purpose of file sharing. So now we have nearly 1,000 devices with a complete mix of photo and video formats, and IT has wasted many hours converting files for people. Feature request: please either stop resetting the Camera app file formats or allow us to manage those settings at the MDM level. Respectfully, Robert
1
0
1.1k
Mar ’26
Corrupted image data when using QualityPrioritization.Quality on iPhone 17 Pro
Hey, I've noticed that in some scenarios photo data can be corrupted from the cameras on iPhone 17 Pro. The requirements are: the zoom level is greater than 2 times the base zoom, so 2x for the wide lens and 8x for the telephoto; QualityPrioritization is set to .quality (if set to .balanced the images look as expected); the scene is well lit (I haven't managed to work out if there's an ISO cut-off, but in darker scenes the images look as expected); and the scene does not contain any objects or texture, e.g. a blank white screen, a blue sky, up close against a bright wall. This is really weird behavior. I have opened a ticket here: https://feedbackassistant.apple.com/feedback/22092908 There's also a repo here if anyone would like to try it: https://github.com/alexfoxy/CameraQualityTest. Thanks, Alex
0
0
162
Mar ’26
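A sketch of the interim workaround the post above implies: request .balanced instead of .quality at high zoom until the corruption is resolved. The output's maxPhotoQualityPrioritization must permit the level used.

import AVFoundation

// Build capture settings that avoid the reported .quality corruption.
func makeBalancedSettings() -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings()
    settings.photoQualityPrioritization = .balanced
    return settings
}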
Locked Camera Capture Extension: provisioning profile for ExtensionKit appex missing com.apple.developer.locked-camera-capture entitlement (paid team)
I’m attempting to use a Locked Camera Capture Extension (created from Xcode’s template / following Apple’s “Creating a camera experience for the Lock Screen” guidance). The extension builds, embeds, and installs on a physical device, but I cannot get it provisioned with the required entitlement com.apple.developer.locked-camera-capture.

Environment: Xcode 26.0.1 (17A400); iOS 26.2.1 (device); Apple Developer Program: paid Individual (Team ID: FT55UW9363).

Key issue: the provisioning profile for the ExtensionKit appex lacks the locked-camera entitlement. The locked camera capture target is embedded as an ExtensionKit extension: .../DirectionalCamera.app/Extensions/LockedCapture.appex

I decoded the embedded provisioning profile inside that .appex and printed its Entitlements dictionary:

security cms -D -i ".../DirectionalCamera.app/Extensions/LockedCapture.appex/embedded.mobileprovision" > /tmp/locked_profile.plist
/usr/libexec/PlistBuddy -c "Print:Entitlements" /tmp/locked_profile.plist

Entitlements present in the embedded profile:

Dict {
    com.apple.developer.avfoundation.multitasking-camera-access = true
    application-identifier = FT55UW9363.arp.geocam.LockedCapture
    keychain-access-groups = Array {
        FT55UW9363.*
        com.apple.token
    }
    get-task-allow = true
    com.apple.security.application-groups = Array {
        group.arp.geocam
    }
    com.apple.developer.team-identifier = FT55UW9363
}

Critically, the required entitlement is absent:

/usr/libexec/PlistBuddy -c "Print:Entitlements:com.apple.developer.locked-camera-capture" /tmp/locked_profile.plist
Print: Entry, ":Entitlements:com.apple.developer.locked-camera-capture", Does Not Exist

Build behavior: if I manually add com.apple.developer.locked-camera-capture to the extension’s .entitlements, Xcode refuses to sign with: “Provisioning profile failed qualification: Profile doesn't include the com.apple.developer.locked-camera-capture entitlement.”

Notes: the only other embedded extension is a widget/control extension under .../DirectionalCamera.app/PlugIns/... with a separate profile (expected).

Questions: Has anyone successfully provisioned a Locked Camera Capture Extension on a standard paid developer account? Is com.apple.developer.locked-camera-capture gated/restricted (requiring Apple to enable it for a specific Team ID / App ID), or is there a specific capability in the Developer portal that maps to it? If it’s restricted, what is the official process to request enablement for a team/app-id? Any pointers appreciated.
2
0
282
Feb ’26
Lock Screen Quick Action Fails to Present CameraCaptureIntent View After Main App Transition
I am encountering an issue where the Lock Screen Quick Action fails to visibly open my app. My app is a camera application that utilizes a CameraCaptureIntent to launch a standalone, lightweight camera view (accessible while the device is locked), distinct from the main application. Steps to Reproduce: Open the lightweight camera view using the Lock Screen Quick Action. From this view, launch the Main App. Lock the iPhone (put it to sleep). Attempt to launch the lightweight camera view via the Quick Action again. A slight animation occurs, but the camera view does not appear on screen. After multiple tests, it seems the view is actually launching but remains in an "invisible state." I suspect that the system hides the lightweight camera view when transitioning to the Main App, but fails to reset this hidden state when the Quick Action is triggered subsequently. I would appreciate any guidance on a potential workaround or confirmation if this is a known issue awaiting a system update.
0
1
232
Feb ’26
AVCam Sample Code - Undesired "Jump" in Video Recording Image
On iPhone 16 Pro Max (not tested on other devices) there's a noticeable jump in the framing of the preview video when you record in the iOS AVCam sample app. The same jump in camera framing can be observed by switching to the front-facing camera and then back to the rear one. It looks roughly consistent with switching between the 0.5x and 1x camera (but not quite a match for the same viewable area in the Camera app), and it happens only when the app is initially loaded; once recording is started, it retains the 'closer' image no matter how many times recording is stopped/started thereafter. I'm relatively new to Swift and haven't done anything with the camera before, so odd 'buggy' behaviour in the sample code isn't helping me understand it! :-) Is there any way to fix this?
0
0
413
Jan ’26
Are White Balance gains applied before or after ADC?
At which point in the image processing pipeline does iOS apply the white balance gains which can be set via AVCaptureDevice.setWhiteBalanceModeLocked(with:completionHandler:)? Are those gains applied in the analog part of the camera pipeline, before the pixel voltage gets converted via the ADC to digital values? Or does the camera first convert the pixel voltages to digital values and then the gains are applied to the digital values? Is this consistent across devices or can the behavior vary from device to device?
1
0
404
Jan ’26
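For reference, a usage sketch of the API the question above concerns; the gain values here are illustrative and must lie within 1.0...device.maxWhiteBalanceGain.

import AVFoundation

// Lock white balance with explicit per-channel gains.
func lockWhiteBalance(on device: AVCaptureDevice) throws {
    let gains = AVCaptureDevice.WhiteBalanceGains(redGain: 1.8, greenGain: 1.0, blueGain: 1.6)
    try device.lockForConfiguration()
    device.setWhiteBalanceModeLocked(with: gains) { syncTime in
        // The gains are fully applied to frames timestamped at or after syncTime.
        print("white balance locked as of", syncTime)
    }
    device.unlockForConfiguration()
}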
Mac Studio: Continuity Camera unavailable after reboot unless USB camera is connected
Summary: On Mac Studio systems (no built-in camera), macOS does not initialize camera services after a normal reboot if no physical camera is present. As a result, Continuity Camera does not appear anywhere in the system.

Observed behavior: System Information → Camera reports “No video capture devices were found.” Continuity Camera (iPhone) is completely absent from camera lists. Plugging in any USB UVC webcam immediately initializes camera services and causes both the USB camera and the iPhone (Continuity Camera) to appear. The USB camera can then be unplugged and Continuity Camera continues working until the next reboot.

Reproduction steps: Use a Mac Studio (no built-in camera) on recent macOS. Ensure no USB webcam or external camera is connected. Reboot the Mac normally. After login, open System Information → Camera.

Expected: Camera services should initialize even when no physical camera is present, allowing Continuity Camera to be available as the primary camera.

Actual: No camera devices are present unless a physical USB camera is connected at least once after boot.

This reproduces 100% of the time on Mac Studio and appears to be a camera-service bootstrap issue where Continuity Camera cannot be the first camera device. The issue has been filed via Feedback Assistant.
1
0
244
Jan ’26
Any way to trigger cameraLensSmudgeDetectionStatus to change?
Looking to implement UI to tell the user to clean their lens in our app. I implemented KVO for cameraLensSmudgeDetectionStatus, but I'm having issues reliably triggering it, both in our app and in the main Camera app. I tried to get inventive by putting Tupperware over the lens, but I think the model driving this, or the LiDAR sensor, might be smart enough to detect that there is something close to the lens. Is there any way to trigger this change, similar to the way we can trigger thermal changes in debug? Thanks.
2
0
465
Jan ’26
Access UltraWideCamera when ARSession is running
ARSession provides a video stream from the wide-angle camera. If ARSession uses the ultra-wide camera at the same time, ARSession may provide a video stream from that camera; otherwise, an AVCaptureSession with the ultra-wide camera should be allowed to launch. It would be very useful if we could access different cameras while an ARSession is running. We'd like to cooperate with you if possible. Steps to reproduce: run an AVCaptureSession and then run an ARSession. The AVCaptureSession stops.
1
0
604
Jan ’26
AVCaptureSession preview briefly goes blank after interruption (lock/unlock or camera switch) while isRunning == true
Environment: Device: iPhone 15 Pro; iOS: iOS 18.0; Framework: AVFoundation; App type: custom camera app using AVCaptureSession + AVCaptureVideoPreviewLayer.

I’m seeing an intermittent but frequent issue where the camera preview layer briefly flashes empty after certain interruptions, even though the capture session reports itself as running and no errors are emitted. This happens most often after locking and unlocking the device, or switching cameras (back ↔ front). The issue is not 100% reproducible, but occurs often enough to be noticeable in normal usage.

What happens: the preview layer briefly flashes as empty (sometimes just a “micro-frame”); duration is typically ~0.5–2 seconds before frames resume; session.isRunning == true throughout; no crash, no runtime error, no interruption-end failure; focus/exposure restore correctly once frames resume. Visually it looks like the preview layer loses frames temporarily, even though the session appears healthy.

Repro: intermittent but frequent after lock → unlock or switching camera (front/back); timing-dependent and non-deterministic; happens multiple times per session, but not every time.

Key observation: AVCaptureSession.isRunning == true does not guarantee that frames are actually flowing. To verify this, I added an AVCaptureVideoDataOutput temporarily: during the blank period, no sample buffers are delivered; frames resume after ~1–2 s without any explicit restart; session state remains “running” the entire time.

What I’ve tried (did NOT fix it): adding delays before/after startRunning() (0.1–0.5 s); calling startRunning() on different queues; restarting the session in AVCaptureSessionInterruptionEnded; verifying session.connections (all show isActive == true); rebuilding inputs/outputs during interruption recovery; ensuring startRunning() is never called between beginConfiguration() / commitConfiguration() (hit the expected runtime warning when attempted). None of the above removed the brief blank preview.

Workaround (works visually but expensive): keeping an AVCaptureVideoDataOutput attached visually fixes the issue, but energy impact jumps from Low → High in the Xcode Energy Gauge; AVCaptureVideoDataOutput processes 30–60 FPS continuously; the gap only lasts ~1–2 s, yet toggling the delegate on/off cleanly is difficult; the overall CPU and energy cost is not acceptable for production.

Additional notes: CPU usage is already relatively high even without the workaround (this app is camera-heavy by nature); with the workaround enabled, energy impact becomes noticeably worse. The issue feels like a timing/state desync between session state and actual frame delivery, not a UI issue.

Questions: Is this a known behavior where AVCaptureSession.isRunning == true but frames are temporarily unavailable after interruptions? Is there a recommended way to detect actual frame-flow resumption (not just session state)? Should the AVCaptureVideoPreviewLayer.connection (isActive / isEnabled) be explicitly checked or reset after interruptions? Is there a lightweight, energy-efficient way to bridge this short “no frames” gap without using AVCaptureVideoDataOutput? Is rebuilding the entire session the only reliable solution here, or is there a better pattern Apple recommends?
1
0
976
Jan ’26
Can I use the Camera API to shoot pictures with the wide camera, while AR is running on the main camera
I want to run ARKit on the main rear camera and, while it's running, shoot high-resolution pictures on the wide camera, without disturbing the AR tracking. Is this possible?
0
0
504
Feb ’26
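Not a direct answer to the post above, but the closest supported path I'm aware of is ARKit's high-resolution frame capture (iOS 16+), which requests a still from the camera the running session already owns rather than opening a second physical lens concurrently; a sketch:

import ARKit

// Request a high-resolution still from a running ARSession without
// disturbing tracking (does not engage a second camera).
func captureStill(from session: ARSession) {
    session.captureHighResolutionFrame { frame, error in
        if let pixelBuffer = frame?.capturedImage {
            print("high-res frame: \(CVPixelBufferGetWidth(pixelBuffer)) x \(CVPixelBufferGetHeight(pixelBuffer))")
        }
    }
}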