We are planning to develop an application using the Apple Music API.
We would like to design our system based on the details of the rate limits mentioned below and have a few questions:
https://developer.apple.com/documentation/applemusicapi/generating-developer-tokens#Request-Rate-Limiting
Regarding the Catalog API (/v1/catalog/*), we understand that server-side caching is enabled, making it less likely to reach the rate limit. Is this understanding correct? (Excluding the search API)
For APIs like the Library API (/v1/me/library/*), where responses vary by user, we assume they are more likely to reach the rate limit. Is this correct?
We plan to implement optimizations to minimize unnecessary API calls. Given this, would the current Music API be able to handle a significant increase in users? (Assuming a DAU of around 100,000 to 1,000,000)
If the API cannot support this scale, would it be allowed under Apple’s policy to cache responses from the Catalog API (/v1/catalog/*) via our proxy server to avoid hitting the rate limit?
The third question is the one we most want to confirm.
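To illustrate the kind of client-side optimization we plan regardless of the answers, here is a minimal sketch of how we intend to treat rate-limit responses (it assumes the API signals rate limiting with HTTP 429; honoring a Retry-After header is our assumption, not documented behavior):

import Foundation

// Minimal sketch: retry catalog requests with backoff when rate limited.
// Assumes HTTP 429 on rate limiting; the Retry-After header is an assumption.
func fetchCatalogResource(_ url: URL, developerToken: String) async throws -> Data {
    var request = URLRequest(url: url)
    request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
    var delay: TimeInterval = 1
    for _ in 0..<5 {
        let (data, response) = try await URLSession.shared.data(for: request)
        guard let http = response as? HTTPURLResponse else { throw URLError(.badServerResponse) }
        if http.statusCode == 429 {
            // Honor Retry-After if present, otherwise back off exponentially.
            let retryAfter = http.value(forHTTPHeaderField: "Retry-After").flatMap(TimeInterval.init)
            try await Task.sleep(nanoseconds: UInt64((retryAfter ?? delay) * 1_000_000_000))
            delay *= 2
            continue
        }
        return data
    }
    throw URLError(.cannotLoadFromNetwork)
}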
Hello,
Is there a way to handle a 403 error returned by the server, e.g. an expired token?
I cannot find any information about this, and everything I tried wasn't working (addObserver, NotificationCenter with .AVPlayerItemNewErrorLogEntry, AVPlayerItemPlaybackStalled, ...).
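For reference, this is roughly the kind of observation I tried (a minimal sketch; playerItem is the AVPlayerItem being played, and checking errorStatusCode for 403 is my assumption about how the status surfaces in the error log):

import AVFoundation

// Sketch: watch for new error log entries and inspect the HTTP status of the latest event.
let observer = NotificationCenter.default.addObserver(
    forName: .AVPlayerItemNewErrorLogEntry,
    object: playerItem,
    queue: .main
) { notification in
    guard let item = notification.object as? AVPlayerItem,
          let event = item.errorLog()?.events.last else { return }
    if event.errorStatusCode == 403 {
        // Token likely expired: refresh the token and reload the item here.
        print("Server returned 403: \(event.errorComment ?? "no comment")")
    }
}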
Thank you very much.
Topic:
Media Technologies
SubTopic:
Video
Our Final Cut Pro workflow extension, built with the ProExtensionHost framework, uses an NSPasteboardItemDataProvider system with multi-version FCPXML support (1.9, 1.10, 1.13) and proper relative path UIDs for Motion templates. We've implemented a clip wrapper approach with placeholder assets and elements containing effects to enable direct timeline drag functionality. However, drag and drop from our workflow extension directly to the timeline is still not working despite a proper element structure in our FCPXML. Our implementation creates valid clip elements with effects applied, but the Final Cut Pro timeline doesn't accept them during drag operations from our ProExtensionHost-based workflow extension.
Steps to Reproduce:
Create Final Cut Pro workflow extension using ProExtensionHost framework with NSPasteboardItemDataProvider implementation
Generate FCPXML with proper element structure:
Expected Result: Clip should be accepted by timeline and effect applied from workflow extension
Actual Result: Timeline rejects drag operation from ProExtensionHost-based workflow extension
Question: Are there additional requirements or ProExtensionHost API calls needed beyond standard NSPasteboardItemDataProvider for Final Cut Pro workflow extension timeline drag functionality?
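For reference, our pasteboard setup is along these lines (a condensed sketch of the implementation described above; names are illustrative):

import AppKit

// Condensed sketch: vend FCPXML lazily for the drag via NSPasteboardItemDataProvider.
final class FCPXMLDragSource: NSObject, NSPasteboardItemDataProvider {
    let fcpxmlData: Data  // generated FCPXML (v1.10 here)

    init(fcpxmlData: Data) { self.fcpxmlData = fcpxmlData }

    func makePasteboardItem() -> NSPasteboardItem {
        let item = NSPasteboardItem()
        item.setDataProvider(self, forTypes: [
            NSPasteboard.PasteboardType("com.apple.finalcutpro.xml.v1-10"),
            NSPasteboard.PasteboardType("com.apple.finalcutpro.xml"),
        ])
        return item
    }

    func pasteboard(_ pasteboard: NSPasteboard?, item: NSPasteboardItem,
                    provideDataForType type: NSPasteboard.PasteboardType) {
        // Called when Final Cut Pro actually requests the data during the drop.
        item.setData(fcpxmlData, forType: type)
    }
}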
When multiple identical songs are added to a playlist, Playlist.Entry.id uses a suffix-based identifier (e.g. songID_0, songID_1, etc.). Removing one entry causes others to shift, changing their .id values. This leads to diffing errors and collection view crashes in SwiftUI or UIKit when entries are updated.
Steps to Reproduce:
Add the same song to a playlist multiple times.
Observe .id.rawValue of entries (e.g. i.SONGID_0, i.SONGID_1).
Remove one entry.
Fetch playlist again — note the other IDs have shifted.
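A condensed repro sketch using MusicKit (the playlist lookup is illustrative):

import MusicKit

// Condensed repro: print the entry IDs of the first library playlist.
func dumpEntryIDs() async throws {
    let request = MusicLibraryRequest<Playlist>()
    let response = try await request.response()
    guard let playlist = response.items.first else { return }
    let detailed = try await playlist.with([.entries])
    guard let entries = detailed.entries else { return }
    for entry in entries {
        print(entry.id.rawValue) // e.g. "i.SONGID_0", "i.SONGID_1" for duplicate songs
    }
}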
FB18879062
I am developing an iOS app that needs to play spoken audio on demand from a server, while ducking the audio of background music from another app (e.g., SoundtrackYourBrand or Apple Music). This must work even when the app is in the background, and the server dictates when and what audio is played. Ideally, the message should be played within a minute of the server requesting it.
Current Attempt & Observations
I initially tried using Firebase Cloud Messaging (FCM) silent notifications to send a URL to an audio file, which the app would then play using AVPlayer.
This works consistently when the app is active, but in the background, it only works about 60% of the time.
In cases where it fails, iOS ducks the background music (e.g., from SoundtrackYourBrand) but never plays the spoken audio.
Interestingly, when I play the audio without enabling audio ducking, it seems to work 100% of the time from my limited testing, even in the background.
The app has background modes enabled for Audio, Background Fetch, and Remote Notifications.
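For reference, the playback path is roughly the following (a condensed sketch; audioURL is the file URL delivered in the push payload):

import AVFoundation

// Condensed sketch of the ducking playback path.
var player: AVPlayer?  // keep a strong reference for the duration of playback

func playSpokenAudio(from audioURL: URL) throws {
    let session = AVAudioSession.sharedInstance()
    // .duckOthers lowers other apps' music while the spoken audio plays.
    try session.setCategory(.playback, mode: .spokenAudio, options: [.duckOthers])
    try session.setActive(true)

    player = AVPlayer(url: audioURL)
    player?.play()

    // When playback finishes, deactivate so other audio returns to full volume.
    NotificationCenter.default.addObserver(
        forName: .AVPlayerItemDidPlayToEndTime,
        object: player?.currentItem,
        queue: .main
    ) { _ in
        try? session.setActive(false, options: [.notifyOthersOnDeactivation])
    }
}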
Best Approach to Achieve This?
I’d like guidance on the best Apple-compliant approach to reliably play audio on command from the server, even when the app is in the background. Some possible paths:
Ensuring the app remains active in the background – Are there recommended ways to prevent the app from getting suspended, such as background tasks, a special background mode, or a persistent connection to the server?
Alternative triggering mechanisms – Would something like VoIP, Push-to-Talk, or another background service be better suited for this use case?
Built-in iOS speech synthesis (AVSpeechSynthesizer) – If playing external audio is unreliable, would generating speech dynamically from text be a more robust approach?
Streaming audio instead of sending a URL – Could continuous streaming from the server keep the app active and allow playback at the right moment?
I want to ensure the solution is reliable and works 100% of the time when needed. Any recommendations on the best approach for this would be greatly appreciated.
Thank you for your time and guidance.
Overlay changes color in HDR video
When I'm trying to add an image overlay to a video with AVMutableVideoComposition, and the video is HDR, the overlay colors change and white becomes grey.
[Screenshots: a frame from the original HDR video; the result from the code with the wrong overlay colors; the result when reducing to SDR, with the right overlay colors.]
I'm creating the overlay with a CGContext:
import AVFoundation
import CoreImage

// Error cases thrown by the custom compositor.
enum CustomCompositorError: Error {
    case ciFilterFailedToProduceOutputImage
    case notSupportingMoreThanOneSources
}

class CustomHdrCompositor: NSObject, AVVideoCompositing {
    private let coreImageContext = CIContext(options: [CIContextOption.cacheIntermediates: false])
    let combinedFilter = CIFilter(name: "CISourceOverCompositing")!

    // Request 10-bit HDR-capable pixel buffers for sources and render output.
    var sourcePixelBufferAttributes: [String: Any]? =
        [String(kCVPixelBufferPixelFormatTypeKey): [kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange]]
    var requiredPixelBufferAttributesForRenderContext: [String: Any] =
        [String(kCVPixelBufferPixelFormatTypeKey): [kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange]]
    var supportsWideColorSourceFrames = true
    var supportsHDRSourceFrames = true

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {}

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let outputPixelBuffer = request.renderContext.newPixelBuffer() else {
            print("No valid pixel buffer found. Returning.")
            request.finish(with: CustomCompositorError.ciFilterFailedToProduceOutputImage)
            return
        }
        guard let requiredTrackIDs = request.videoCompositionInstruction.requiredSourceTrackIDs,
              !requiredTrackIDs.isEmpty else {
            print("No valid track IDs found in composition instruction.")
            // Finish the request even on this path so the composition doesn't stall.
            request.finish(with: CustomCompositorError.ciFilterFailedToProduceOutputImage)
            return
        }
        let sourceCount = requiredTrackIDs.count
        if sourceCount > 1 {
            request.finish(with: CustomCompositorError.notSupportingMoreThanOneSources)
            return
        }
        if sourceCount == 1 {
            let sourceID = requiredTrackIDs[0]
            let sourceBuffer = request.sourceFrame(byTrackID: sourceID.value(of: Int32.self)!)!
            let sourceCIImage = CIImage(cvPixelBuffer: sourceBuffer)
            // Composite the text overlay for this timestamp over the HDR source frame.
            let textImage = TextLayerPlayer.instance.getTextLayerAtTimesStamp(ts: request.compositionTime.seconds)
            combinedFilter.setValue(textImage, forKey: "inputImage")
            combinedFilter.setValue(sourceCIImage, forKey: "inputBackgroundImage")
            if let outputImage = combinedFilter.outputImage {
                let renderDestination = CIRenderDestination(pixelBuffer: outputPixelBuffer)
                do {
                    try coreImageContext.startTask(toRender: outputImage, to: renderDestination)
                } catch {
                    print("Failed to start render task: \(error)")
                }
            }
        }
        request.finish(withComposedVideoFrame: outputPixelBuffer)
    }
}
func regularCompositionHdr(asset: AVAsset) -> AVVideoComposition {
    self.isHdr = checkHdr(asset: asset)
    let composition = AVMutableVideoComposition()
    // BT.2020/HLG color properties so HDR frames pass through the compositor unclamped.
    composition.colorPrimaries = AVVideoColorPrimaries_ITU_R_2020
    composition.colorTransferFunction = AVVideoTransferFunction_ITU_R_2100_HLG
    composition.colorYCbCrMatrix = AVVideoYCbCrMatrix_ITU_R_2020
    composition.renderSize = assetSize
    composition.frameDuration = CMTime(value: 1, timescale: 30)
    composition.customVideoCompositorClass = CustomHdrCompositor.self
    composition.perFrameHDRDisplayMetadataPolicy = .propagate
    return composition
}
I'm using this function to convert the transparent CGImage to a CIImage that supports HDR:
func convertToHDRCIImage(from cgImage: CGImage,
                         maxBrightness: CGFloat = 3.0) -> CIImage? {
    // Create a CIImage from the input CGImage
    let baseImage = CIImage(cgImage: cgImage)
    // Create HDR color adjustment filter
    let colorAdjust = CIFilter(name: "CIColorMatrix")!
    colorAdjust.setValue(baseImage, forKey: kCIInputImageKey)
    // Calculate HDR multipliers based on maxBrightness
    // This will maintain color ratios while increasing brightness
    colorAdjust.setValue(CIVector(x: maxBrightness, y: 0, z: 0, w: 0), forKey: "inputRVector")
    colorAdjust.setValue(CIVector(x: 0, y: maxBrightness, z: 0, w: 0), forKey: "inputGVector")
    colorAdjust.setValue(CIVector(x: 0, y: 0, z: maxBrightness, w: 0), forKey: "inputBVector")
    // Maintain alpha channel
    colorAdjust.setValue(CIVector(x: 0, y: 0, z: 0, w: 1), forKey: "inputAVector")
    guard let adjustedImage = colorAdjust.outputImage else {
        return nil
    }
    // Apply color space transformation using CIImage's colorSpace property
    let transformedImage = adjustedImage.matchedFromWorkingSpace(to: hdrWorkingSpace)!
    // Create context with HDR color space
    let context = CIContext(options: [
        .workingColorSpace: hdrColorSpace,
        .outputColorSpace: hdrColorSpace
    ])
    // Get the image bounds
    let bounds = transformedImage.extent
    // Create a new pixel buffer with HDR format
    var pixelBuffer: CVPixelBuffer?
    let pixelBufferAttributes = [
        kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_64RGBAHalf,
        kCVPixelBufferMetalCompatibilityKey: true
    ] as CFDictionary
    CVPixelBufferCreate(kCFAllocatorDefault,
                        Int(bounds.width),
                        Int(bounds.height),
                        kCVPixelFormatType_64RGBAHalf,
                        pixelBufferAttributes,
                        &pixelBuffer)
    guard let destinationBuffer = pixelBuffer else {
        return nil
    }
    context.render(transformedImage,
                   to: destinationBuffer,
                   bounds: bounds,
                   colorSpace: hdrColorSpace)
    // Create final CIImage from the HDR pixel buffer
    let finalImage = CIImage(cvPixelBuffer: destinationBuffer,
                             options: [.colorSpace: hdrColorSpace])
    return finalImage
}
When reducing the HDR to SDR, the overlay keeps the right colors, but then the HDR effect, which I want to keep, is reduced.
Topic:
Media Technologies
SubTopic:
Video
Hi,
I have been working on a project that enables users to listen to their favorite music using a streaming service, which so far was Spotify. The app had a programmable 3D/2D interface with the ability to connect to devices in your home and have them react to music. As of September 2024, Spotify decommissioned their Audio Analysis API. I have seen other posts mention playing Apple Music through AVFoundation, which would break DRM and so isn't supported. However, the Spotify Audio Analysis API did not allow for a full frequency reconstruction: it is entirely temporal data on beats, kicks, loudness, and timbre changes, which themselves are operators on the spectral data from the FFT. It would be very useful for the developer community if we got the ability to do this, and it would probably make Apple Music more popular among developers and those who use their apps.
Would love to hear your thoughts about this and Happy New Year!
I'm developing a Final Cut Pro X workflow extension that transcribes audio and creates a text output. I need to allow users to drag this text directly from my extension into FCPX's timeline as titles.
Current Implementation:
Using NSFilePromiseProvider as per Apple's guidelines for drag and drop
Generating valid FCPXML (v1.10) with proper structure:
Complete resources section with format and asset references
Event and project hierarchy
Asset clip with connected title elements
Proper timing and duration calculations
Supporting multiple pasteboard types:
com.apple.finalcutpro.xml.v1-10
com.apple.finalcutpro.xml.v1-9
com.apple.finalcutpro.xml
What's Working:
Drag operation initiates correctly
File promise provider is set up properly
FCPXML generation is successful (verified content)
All required pasteboard types are registered
Proper logging confirms data is being requested and provided
Current Pasteboard Types Offered:
com.apple.NSFilePromiseItemMetaData
com.apple.pasteboard.promised-file-name
com.apple.pasteboard.promised-suggested-file-name
com.apple.pasteboard.promised-file-content-type
Apple files promise pasteboard type
com.apple.pasteboard.NSFilePromiseID
com.apple.pasteboard.promised-file-url
com.apple.finalcutpro.xml.v1-10
com.apple.finalcutpro.xml.v1-9
com.apple.finalcutpro.xml
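For reference, the file-promise side is set up roughly like this (a condensed sketch; the delegate callbacks are the standard NSFilePromiseProviderDelegate ones):

import AppKit
import UniformTypeIdentifiers

// Condensed sketch of the file-promise side of the drag.
final class FCPXMLPromiseSource: NSObject, NSFilePromiseProviderDelegate {
    let fcpxml: String  // generated FCPXML document

    init(fcpxml: String) { self.fcpxml = fcpxml }

    func makeProvider() -> NSFilePromiseProvider {
        NSFilePromiseProvider(fileType: UTType.xml.identifier, delegate: self)
    }

    func filePromiseProvider(_ provider: NSFilePromiseProvider,
                             fileNameForType fileType: String) -> String {
        "Transcription.fcpxml"
    }

    func filePromiseProvider(_ provider: NSFilePromiseProvider,
                             writePromiseTo url: URL,
                             completionHandler: @escaping (Error?) -> Void) {
        do {
            try fcpxml.write(to: url, atomically: true, encoding: .utf8)
            completionHandler(nil)
        } catch {
            completionHandler(error)
        }
    }
}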
What additional requirements or considerations are needed to make FCPX accept the dragged FCPXML content? Are there specific requirements for workflow extensions regarding drag and drop operations with titles that aren't documented?
Any insights, especially from those who have implemented similar functionality in FCPX workflow extensions, would be greatly appreciated.
Technical Details:
macOS Version: 15.5 (24F74)
FCPX Version: 11.1.1
Extension built with SwiftUI and AppKit integration
Using NSFilePromiseProvider and NSPasteboardItemDataProvider
Full pasteboard type support for FCPXML versions
Hello, we have an HLS streaming app on Apple TV. Our streams are DRM protected. We have a problem with streams when the source device is turned off. For example, a user starts watching our HLS DRM-protected content. After some time, the user turns off the device (it can be a monitor or a TV connected via HDMI). Our app does not detect that the HDMI source device has been turned off. Is there any way in Swift to detect that the connected HDMI device has been turned off?
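As far as we know there is no HDMI hot-plug API exposed to Swift on tvOS; one possible proxy signal, sketched below, is watching audio route changes, since the HDMI sink powering off can change the route. This is an assumption, not confirmed behavior:

import AVFoundation

// Sketch: treat the previous audio output becoming unavailable as a hint
// that the HDMI-connected display may have been turned off.
NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil,
    queue: .main
) { notification in
    let reasonValue = notification.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt
    let reason = reasonValue.flatMap(AVAudioSession.RouteChangeReason.init(rawValue:))
    if reason == .oldDeviceUnavailable {
        // Pause playback and surface UI here.
    }
}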
Topic:
Media Technologies
SubTopic:
Streaming
Tags:
FairPlay Streaming
Swift
Apple TV
HTTP Live Streaming
Before you post "Camera doesn't work on the Simulator": that's no longer true. I've made a solution that makes the Simulator believe there's an actual hardware device connected, allowing users to stream the macOS camera to the iOS Simulator (for more info, see RocketSim's documentation: https://docs.rocketsim.app/features/hzQMSrSga7BGWvxdNVdwYs/simulator-camera-support/58tQ5jvevLNSnyUEA7VgAv).
Now, it works for VNDocumentCameraViewController, but when I try opening DataScannerViewController, I directly run into:
Failed to start scanning: The operation couldn’t be completed. (VisionKit.DataScannerViewController.ScanningUnavailable error 0.)
My question:
How does this view controller determine whether scanning is available?
Is there a certain capability the available AVCaptureDevices need to support, maybe?
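For reference, these are the two public gates I know of before presenting the scanner (a sketch; whether they reflect the internal logic behind ScanningUnavailable is exactly my question):

import VisionKit

// Sketch: check both public availability gates before presenting.
if DataScannerViewController.isSupported && DataScannerViewController.isAvailable {
    let scanner = DataScannerViewController(recognizedDataTypes: [.text()])
    // present(scanner, animated: true)
} else {
    // isSupported reflects hardware/OS capability; isAvailable reflects runtime
    // state such as camera availability and user approval.
    print("Scanning unavailable on this device or Simulator")
}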
Any direction would be helpful for me to make this work for developers, making them build apps faster!
Hi!
I am writing a browser extension that allows you to control the playback of media content on a music service website. Unfortunately, Safari does not support tracking changes to the audible property in a tabs.onUpdated event. Is there an alternative to this event? I'm looking for a way to track when the automatic inference engine interrupts playback on a music service website.
Thank you.
I am working to update a live blog in Apple News. As far as I know, there is an update endpoint to update content in Apple News. Is there any feature in Apple News to trigger an event when the original content is updated and to pull the updated content?
Hi,
Currently I am developing a 3D reconstruction project, which requires images to be distortion-free (rectilinear) and with known intrinsics.
The session I am developing on uses a builtInDualWideCamera, with isGeometricDistortionCorrectionEnabled set to false to be able to get the intrinsic matrix of the images, isVirtualDeviceConstituentPhotoDeliveryEnabled set to true and isAutoVirtualDeviceFusionEnabled set to false to get both images, and isCameraCalibrationDataDeliveryEnabled set to true to actually get the calibration data.
The distortion correction parameters, such as lensDistortionLookupTable, are used.
The 42-coefficient mapping array is used as described in the AVCameraCalibrationData header file, with simple piecewise linear interpolation.
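Concretely, the interpolation is along these lines (a condensed sketch of the approach described in the header; table is the lensDistortionLookupTable decoded as [Float], and radiusRatio is the normalized distance from the distortion center):

// Condensed sketch of the piecewise linear lookup described in AVCameraCalibrationData.h.
func magnification(for radiusRatio: Float, table: [Float]) -> Float {
    guard table.count > 1 else { return table.first ?? 0 }
    let position = min(max(radiusRatio, 0), 1) * Float(table.count - 1)
    let lower = Int(position)
    let upper = min(lower + 1, table.count - 1)
    let fraction = position - Float(lower)
    return table[lower] + fraction * (table[upper] - table[lower])
}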
There are two questions I would like to get support on:
A way to set the calibration parameters in each image.
I have an approach that sets the parameters in the kCGImagePropertyExifDictionary -> "UserComment". Is there a better approach to write calibration parameter data into the images? I feel like this is a bit dirty and there might be a better and neat approach.
For the ultra-wide angle camera's images, the lensDistortionLookupTable contains several zeros at the end of the array.
For example (last 10 elements are zero):
"LensDistortionLookupTable":"0.000000000000000,0.000349554029526,0.001385628827848,0.003071037586778,... ,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000,0.000000000000000"
The problem comes when the complete array, including the zeros, is used to correct the image: the result is a wrapped-like-circle distortion close to the edges, which is completely wrong.
In contrast, if the lensDistortionLookupTable is used without the trailing zeros and the new size is accommodated, the image looks better (although not as rectilinear as an image taken with the iPhone's Camera app), but definitely less distorted.
[Screenshots: correction including the zeros (full array) vs. excluding the zeros (array size changed)]
Am I missing an important point in the usage of the lensDistortionLookupTable where this case (trailing zeros) is addressed?
What is the criterion for shrinking/excluding elements of the array?
Any advice is very much welcome.
I have an iPad app that I want to run on Apple Silicon macs.
Everything works fine except for VNDocumentCameraViewController. According to the docs this class is available on:
iOS 13.0+, iPadOS 13.0+, Mac Catalyst 13.1+, visionOS 1.0+
yet when I try using it I get "Document camera is not available" on my Mac Studio running macOS 15.2.
Is this expected behaviour?
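For what it's worth, the runtime check I'd expect to gate this (a sketch; isSupported is the documented class property):

import VisionKit

// Sketch: check support at runtime rather than relying on availability annotations alone.
if VNDocumentCameraViewController.isSupported {
    let documentCamera = VNDocumentCameraViewController()
    // present(documentCamera, animated: true)
} else {
    print("Document camera is not supported on this device") // what I see on the Mac Studio
}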
Thanks
Hi Apple Team,
We have integrated the FairPlay Streaming Server SDK v3 into our multi-DRM platform since 2017; the system works stably and has stayed untouched. As you know, both Widevine and PlayReady require the Server SDK to be upgraded regularly. We want to know whether Apple imposes similar requirements for upgrading the FPS SDK, or whether we may continue using the old one without any updates.
Thanks for your support!
I'm trying to write 16-bit interleaved 2-channel data captured from a LiveSwitch audio source to an AVAudioFile. The buffer and file formats match, but I get a bad parameter error (-50) from the API. Does this API not support the specified format, or is there some other issue?
Here is the debugger output.
(lldb) po audioFile.url
▿ file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
- _url : file:///private/var/mobile/Containers/Data/Application/1EB14379-0CF2-41B6-B742-4C9A80728DB3/tmp/Heart%20Sounds%201
- _parseInfo : nil
- _baseParseInfo : nil
(lldb) po error
Error Domain=com.apple.coreaudio.avfaudio Code=-50 "(null)" UserInfo={failed call=ExtAudioFileWrite(_impl->_extAudioFile, buffer.frameLength, buffer.audioBufferList)}
(lldb) po buffer.format
<AVAudioFormat 0x302a12b20: 2 ch, 44100 Hz, Int16, interleaved>
(lldb) po audioFile.fileFormat
<AVAudioFormat 0x302a515e0: 2 ch, 44100 Hz, Int16, interleaved>
(lldb) po buffer.frameLength
882
(lldb) po buffer.audioBufferList
▿ 0x0000000300941e60
- pointerValue : 12894608992
This code handles the details of converting the Live Switch frame into an AVAudioPCMBuffer.
extension FMLiveSwitchAudioFrame {
    func convertedToPCMBuffer() -> AVAudioPCMBuffer {
        Self.convertToAVAudioPCMBuffer(from: self)!
    }

    static func convertToAVAudioPCMBuffer(from frame: FMLiveSwitchAudioFrame) -> AVAudioPCMBuffer? {
        // Retrieve the audio buffer and format details from the FMLiveSwitchAudioFrame
        guard
            let buffer = frame.buffer(),
            let format = buffer.format() as? FMLiveSwitchAudioFormat else { return nil }
        // Extract PCM format details from FMLiveSwitchAudioFormat
        let sampleRate = Double(format.clockRate())
        let channelCount = AVAudioChannelCount(format.channelCount())
        // Determine bytes per sample based on bit depth
        let bitsPerSample = 16
        let bytesPerSample = bitsPerSample / 8
        let bytesPerFrame = bytesPerSample * Int(channelCount)
        let frameLength = AVAudioFrameCount(Int(buffer.dataBuffer().length()) / bytesPerFrame)
        // Create an AVAudioFormat from the FMLiveSwitchAudioFormat
        guard let avAudioFormat = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: sampleRate, channels: channelCount, interleaved: true) else {
            return nil
        }
        // Create an AudioBufferList to wrap the existing buffer
        let audioBufferList = UnsafeMutablePointer<AudioBufferList>.allocate(capacity: 1)
        audioBufferList.pointee.mNumberBuffers = 1
        audioBufferList.pointee.mBuffers.mNumberChannels = channelCount
        audioBufferList.pointee.mBuffers.mDataByteSize = UInt32(buffer.dataBuffer().length())
        audioBufferList.pointee.mBuffers.mData = buffer.dataBuffer().data().mutableBytes // Directly use LiveSwitch buffer
        // Transfer ownership of the buffer to AVAudioPCMBuffer
        let pcmBuffer = AVAudioPCMBuffer(pcmFormat: avAudioFormat, bufferListNoCopy: audioBufferList) /* { buffer in
            // Ensure the buffer is freed when AVAudioPCMBuffer is deallocated
            buffer.deallocate() // Only call this if LiveSwitch allows manual deallocation
        } */
        pcmBuffer?.frameLength = frameLength
        return pcmBuffer
    }
}
This is the handler that is invoked with every frame in order to convert it for use with AVAudioFile and optionally update a scrolling signal display on the screen.
private func onRaisedFrame(obj: Any!) -> Void {
    // Bail out early if no one is interested in the data.
    guard isMonitoring else { return }
    // Convert LS frame to AVAudioPCMBuffer (no-copy)
    let frame = obj as! FMLiveSwitchAudioFrame
    let buffer = frame.convertedToPCMBuffer()
    // Hand subscribers a reference to the buffer for rendering to display.
    bufferPublisher?.send(buffer)
    // If we have an output file, store the data there, as well.
    guard let audioFile = self.audioFile else { return }
    do {
        try audioFile.write(from: buffer) // FIXME: This call is throwing error -50
    } catch {
        FMLiveSwitchLog.error(withMessage: "Failed to write buffer to audio file at \(audioFile.url): \(error)")
        self.audioFile = nil
    }
}
This is how the audio file is being set up.
static var recordingFormat: AVAudioFormat = {
    AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 44_100, channels: 2, interleaved: true)!
}()

let audioFile = try AVAudioFile(forWriting: outputURL, settings: Self.recordingFormat.settings)
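One possibility I'm considering (an assumption on my part, not a confirmed fix): write(from:) may require the buffer to match the file's processingFormat, which defaults to deinterleaved Float32 when only settings are supplied. Opening the file with an explicit common format that matches the incoming buffers would look like this:

// Sketch: make the file's processing format match the Int16 interleaved buffers.
let audioFile = try AVAudioFile(forWriting: outputURL,
                                settings: Self.recordingFormat.settings,
                                commonFormat: .pcmFormatInt16,
                                interleaved: true)
// audioFile.processingFormat should then be <2 ch, 44100 Hz, Int16, interleaved>,
// matching the buffers passed to write(from:).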
Hi everyone!
I’ve developed a location-based Audio AR app in Unity with FMOD & Resonance Audio and AirPods Pro Head-Tracking to create a ubiquitous augmented soundscape experience. Think of it as an audio version of Pokémon Go, but with a more precise location requirement to ensure spatial audio is placed correctly.
I want this experience to run in the background on iOS, but from what I’ve gathered, it seems Unity doesn’t support this well. So, I’m considering developing a Swift version instead.
Since this is primarily for research purposes, privacy concerns are not a major issue in my case. However, I’ve come across some potential challenges:
Real-time precise location updates – Can iOS provide fully instantaneous, high-accuracy location updates in the background?
Continuous real-time data processing – Can an app continuously process spatial audio, head-tracking, and location data while running in the background?
I’m not sure if newer iOS versions have improved in these areas or if there are workarounds to achieve this.
Would this kind of experience be feasible to run in the background on iOS? Any insights or pointers would be greatly appreciated!
I’m very new to iOS development, so apologies if this is a basic question. Thanks in advance!
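On the first challenge, this is the background-location setup I'd start from in a Swift version (a minimal sketch, assuming the "Location updates" background mode is enabled and Always authorization is granted):

import CoreLocation

// Minimal sketch: continuous high-accuracy location updates in the background.
final class BackgroundLocator: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyBest
        manager.allowsBackgroundLocationUpdates = true
        manager.pausesLocationUpdatesAutomatically = false
        manager.requestAlwaysAuthorization()
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        // Feed the newest fix into the spatial audio engine here.
    }
}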
(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3)
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos.
WWDC25 Camera & Photos group lab ran for one hour at 6 PM PST on Tuesday June 10th, 2025
Question 10
Can we directly integrate auto-capture triggers (e.g., when image is steady or text is detected) using Vision and AVFoundation?
Yes, apps can use an AVCaptureSession's video data output (VDO) + AVCapturePhotoOutput: run Vision on the VDO buffers and capture a photo when a certain scene or text is detected.
Just be careful to run Vision on the VDO buffers asynchronously so it doesn't cause frame drops.
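A condensed sketch of that pipeline (names illustrative; session/output wiring and error handling omitted):

import AVFoundation
import Vision

// Condensed sketch: run text detection on video data output buffers off the
// delegate queue and trigger a photo capture when text is detected.
final class AutoCaptureController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate,
                                   AVCapturePhotoCaptureDelegate {
    let photoOutput: AVCapturePhotoOutput
    private var hasTriggered = false

    init(photoOutput: AVCapturePhotoOutput) {
        self.photoOutput = photoOutput
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard !hasTriggered,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let request = VNDetectTextRectanglesRequest { [weak self] request, _ in
            guard let self, !self.hasTriggered,
                  let results = request.results, !results.isEmpty else { return }
            self.hasTriggered = true
            self.photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
        }
        // Run Vision asynchronously so the VDO queue isn't blocked (avoids frame drops).
        DispatchQueue.global(qos: .userInitiated).async {
            try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer).perform([request])
        }
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        // Handle the captured photo (e.g. save it or run further analysis).
    }
}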
Question 11
What Camera or Photos framework features support working with images from external media, like connected cameras or SD cards? Any best practices?
The ImageCaptureCore framework supports camera devices, memory cards, scanners
read and write, where supported
check out the docs to see how to browse connected devices, folders, files, etc.
Question 12
Hi Brad, to follow up on your SwiftUI cautionary note: using AVCaptureVideoPreview inside a UIViewRepresentable, is okay, right? Thanks all for the great info!
Yes, this is totally fine.
AppKit or UIKit views inside the appropriate SwiftUI representables should give equivalent performance
Question 13
What’s the “right” way to transition media in my photos app between HDR modes? When I’m in a one-up view, we use HDR, but in other contexts (like thumbnail) we don’t want HDR. Is there a nice way to tone map?
There's a suite of new System Tone Mapper APIs in this year's OSes
Core Image, ImageKit, Core Animation, Core Graphics
For example:
CoreImage: new CISystemToneMap filter.
CoreAnimation: layer.preferredDynamicRange = CADynamicRangeConstrainedHigh
Image views (NSImageView/UIImageView/SwiftUI Image/CALayer) support animations on preferredDynamicRange
Can go from high to constrained to standard
Tone mapping is provided by the system (CISystemToneMap for controllable example)
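For example, the Core Animation control mentioned above looks like this in Swift (a one-line sketch; imageLayer is any CALayer displaying the image):

import QuartzCore

// Constrain the rendered dynamic range for thumbnail-style contexts;
// use .high for the one-up view and .standard for plain SDR.
imageLayer.preferredDynamicRange = .constrainedHigh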
Question 14
What is your recommendation to preprocess and upscale your depth map in order to render a realistic portrait mode image?
One way to do this: the CIEdgePreserveUpsample CIFilter can be used to upsample a lower resolution depth map by using a higher resolution RGB image as a guide.
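Sketched in Core Image (the registered filter name is CIEdgePreserveUpsampleFilter; depthImage and rgbImage are the assumed inputs):

import CoreImage

// Upsample a low-resolution depth map using the full-resolution RGB image as a guide.
let filter = CIFilter(name: "CIEdgePreserveUpsampleFilter")!
filter.setValue(rgbImage, forKey: kCIInputImageKey)    // high-res RGB guide
filter.setValue(depthImage, forKey: "inputSmallImage") // low-res depth map
let upsampledDepth = filter.outputImage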
Question 15
For buffering frames for later processing from real-time camera output should we prefer a AVSampleBufferDisplayLayer centered approach or AVCaptureVideoDataOutputSampleBufferDelegate centered approach? When would we use each?
AVSampleBufferDisplayLayer and AVCaptureVideoDataOutputSampleBufferDelegate are used hand in hand for custom camera preview.
For buffering for later processing, ensure you make copies of VDO buffers to not drop frames from the output
Question 16
Hello, my question is on Deferred Photo Processing? Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of Deferred Photo Processing? Since I don’t know how to detect when the deferred captured photo is ready
A CIFilter can be applied to the final image at that point
The photo will have to be re-inserted into the Photos library as an adjustment
Question 17
Is digital zoom (e.g., 1.5x) before taking a photo the same as cropping the photo afterward?
Digital zoom upscales the image to the output dimensions, while cropping yields a smaller output image
While digital zoom does crop, it also upscales
Question 18
How do you design camera interfaces that work for both casual users and photography enthusiasts?
Progressive disclosure: Put the most common controls up front, and make it easy for pros to drill down.
Sensible Defaults: Choose defaults that work well for casual users, but allow those defaults to be modified for photography enthusiasts
A good philosophy is: Keep the simple things easy, make the hard things possible
Question 19
Recent iPhone models introduced macro mode, which automatically switches between lenses to account for the focal distance difference. Is there an official API to implement this, or should I implement it myself using LiDAR values?
Using builtInTripleCamera and builtInDualWideCamera will automatically switch to macro when available
Question 20
a couple of years ago at WWDC, the option of replacing a camera with a virtual camera was mentioned. How does one do that - make the “physical” camera effectively disappear, so only the virtual camera is accessible to the user?
You can't prevent the built-in camera from being available to the user
Question 21
Can developers now integrate custom Core ML models with Vision for on-device photo analysis more seamlessly?
Yes they can: use CoreMLRequest and provide the model container
Been supported for a while (iOS 18/macOS 15)
For more details go to Machine Learning & AI group lab Thursday
use smaller images for better performance
Question 22
What would you recommend for capture of the new immersive and spatial formats?
To capture Spatial Video use AVCaptureMovieFileOutput’s spatialVideoCaptureEnabled property
Not all device formats support spatial capture, check AVCaptureDevice.activeFormat.spatialVideoCaptureSupported
See WWDC 2024 talk “Build compelling spatial photo and video experiences” for more details
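Sketched in Swift (property names as they appear in the iOS 18 SDK, to the best of my understanding):

import AVFoundation

// Opt into spatial video capture when the active format supports it.
let movieOutput = AVCaptureMovieFileOutput()
if let device = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back),
   device.activeFormat.isSpatialVideoCaptureSupported {
    movieOutput.isSpatialVideoCaptureEnabled = true
}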
Question 23
You mentioned JPEG-XL. What is the current status of support on iOS and macOS for encoding and decoding?
For decoding, we support JPEG-XL files in all our OSes, regular SDR files, as well as ISO HDR files.
For encoding, we only support JPEG-XL for ProRAW DNG capture in the Camera app or via third-party AVFoundation APIs.
If you have any requests for improvement or new features related to JPEG-XL, please file a Feedback request using the Feedback Assistant.
(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3)
Topic:
Media Technologies
SubTopic:
Photos & Camera
Tags:
Image I/O
Photos and Imaging
PhotoKit
Core Image
Hi there, I have an app which accesses the media library to save and load files. Since iOS 18.2, access to the media library stopped working.
Now, I've noticed that our app doesn't show in the list of apps with access to Files (Privacy & Security -> Files & Folders).
The weird behavior is that one iPhone with iOS 18.3.1 can access the files but others can't, on the same iOS version 18.3.1. Tests on Simulators (Mac) work fine as well.
My Info.plist file has had the keys to access the media library for a long time and they haven't changed (at least in the last 4 years), including the key "Privacy - Media Library Usage Description".
Also, I've noticed that the message (popup) that requests access to the media library when using the app for the first time doesn't show up anymore. We request access to the network (Wi-Fi) and that message still shows up, but not the media library one.
I'm using Visual Studio with Xamarin on a Mac.
I really appreciate any help you can give, because this is very odd behavior and it started with iOS 18.2.
On Apple TV 4K (3rd generation) with tvOS 26 beta 2, when two HomePod 2 speakers are paired to the device, music and movie sources with Dolby Atmos can only be listened to in stereo; Dolby Atmos is reported as not supported.
Topic:
Media Technologies
SubTopic:
Audio