Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic
VideoPlayer crashes on Archive build
I have found that the following code runs without issue from Xcode, in either Debug or Release mode, yet crashes when run from the binary produced by archiving - i.e. what would be sent to the App Store.

    import SwiftUI
    import AVKit

    @main
    struct tcApp: App {
        var body: some Scene {
            WindowGroup {
                VideoPlayer(player: nil)
            }
        }
    }

This is the most stripped-down code that shows the issue. You can also point the VideoPlayer at a file and the same issue occurs. I've attached the crash log: Crash log

Please note that this was seen with Xcode 26.2 and macOS 26.2.
Replies: 0 · Boosts: 0 · Views: 8 · Activity: 1h
CIRAWFilter.outputImage first-time cost is huge (~3s), subsequent calls are ~3ms. Any official way to pre-initialize RAW pipeline (without taking a real photo)?
Hi Apple Developer Forums,

I'm developing an iOS camera app that processes RAW captures using Core Image. I'm seeing a large "first use" performance penalty specifically when creating the CIImage from CIRAWFilter.outputImage.

What's slow (important detail)
I'm measuring the time for:

    let rawFilter = CIRAWFilter(imageData: rawData, identifierHint: hint)
    let ciImage = rawFilter.outputImage

This is not CIContext.render(...) / createCGImage(...). It's just the time to access outputImage (i.e., building the Core Image graph / RAW pipeline setup).

Observed behavior
- First time accessing CIRAWFilter.outputImage: ~3 seconds
- Second time (same app session, similar RAW): ~3 milliseconds

So something heavy is happening only on first use (decoder initialization, pipeline setup, shader/library compilation, caching, etc.). Using Metal System Trace, I also noticed that during the slow first call there are many "Create MTLLibrary" events, while the second call doesn't show this pattern.

Warm-up attempts using a bundled DNG
I tried to "warm up" early (e.g., on camera screen entry) by loading a bundled DNG and then accessing CIRAWFilter.outputImage before taking a photo:
- Warm-up with a ~247 KB DNG → first real RAW outputImage cost drops to ~1.42 s
- Warm-up with a ~25 MB DNG → first real RAW outputImage cost drops to ~843 ms

This helps, but it's still far from the steady-state ~3 ms.

Warm-up by capturing a real RAW (works, but concerns)
The only method that fully eliminates the delay is to trigger a real RAW capture programmatically before the user's first photo, then use that captured rawData to warm up the CIRAWFilter.outputImage path. This brings the first user-facing capture close to the steady-state timing. However:
- In some regions, the camera shutter sound cannot be suppressed, so a "hidden warm-up capture" is unacceptable UX.
- I'm also unsure whether triggering a real capture without an explicit user action could raise compliance/privacy concerns, even if the image is immediately discarded and never saved/uploaded.

Questions
1. Is the large first-time cost of CIRAWFilter.outputImage expected (RAW pipeline initialization / shader compilation)?
2. Is there an Apple-recommended way to pre-initialize the Core Image RAW pipeline / Metal resources so the first outputImage is fast, without taking a real photo?
3. Are there any best practices (e.g. CIContext creation timing, prepareRender(...), specific options) that reliably reduce this first-use overhead for CIRAWFilter?

Attachments
- Figure 1: First RAW capture with no warm-up (~3 s outputImage time)
- Figure 2: First RAW capture after warm-up with a bundled DNG (improved but still hundreds of ms)

Thanks for any guidance or experience sharing!
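A minimal warm-up sketch along the lines described above, with one extra step: forcing a tiny render through the same CIContext that will later handle real captures, so the MTLLibrary compilation seen in the trace happens during warm-up. The bundled asset name ("warmup.dng") and the shared-context assumption are mine, not an Apple-recommended recipe:

    import CoreImage

    // Minimal warm-up sketch, assuming a small DNG ("warmup.dng") is bundled with the app.
    // The idea is to build the RAW graph *and* force a tiny render through the same
    // CIContext used for real captures, so shader/library compilation happens up front.
    final class RAWWarmUp {
        // Reuse this context for real captures too; a fresh context may recompile shaders.
        static let sharedContext = CIContext()

        static func run() {
            guard let url = Bundle.main.url(forResource: "warmup", withExtension: "dng"),
                  let data = try? Data(contentsOf: url),
                  let rawFilter = CIRAWFilter(imageData: data, identifierHint: nil),
                  let image = rawFilter.outputImage else {
                return
            }
            // Render a 1x1 pixel region; enough to exercise the pipeline without
            // paying for a full-resolution output.
            let tinyRect = CGRect(x: 0, y: 0, width: 1, height: 1)
            _ = sharedContext.createCGImage(image, from: tinyRect)
        }
    }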
Replies: 2 · Boosts: 0 · Views: 161 · Activity: 14h
Disconnect from AirPlay device programmatically
Hello there, I'm trying to implement a feature which uses AirPlay with an Apple TV. I want to disconnect from the device programmatically when something happens - by "something" I mean a situation where the user wants to stop broadcasting (for example, closing the PiP window on their phone). I use this snippet:

    try audioSession.setCategory(.playAndRecord, options: .defaultToSpeaker)
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

It works fine sometimes but not always (it works on iOS 18 but it doesn't on iOS 17 or ...). So I thought it was a bug and created a ticket in Feedback Assistant (FB21220013). Support told me to write a post on the forum.
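As far as I know there is no public call that force-disconnects an AirPlay route; a hedged sketch of the closest workaround is to steer the app's own audio back to the built-in speaker, adding an explicit overrideOutputAudioPort call on top of the category change shown above:

    import AVFoundation

    // A sketch of re-routing the app's audio to the built-in speaker when the user
    // stops broadcasting. This does not tear down the AirPlay connection itself;
    // it only changes where this app's audio goes.
    func routeAudioBackToSpeaker() {
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playAndRecord,
                                    mode: .default,
                                    options: [.defaultToSpeaker])
            try session.setActive(true, options: .notifyOthersOnDeactivation)
            // Explicitly override the output port; .defaultToSpeaker alone only
            // affects the default route, not an already-established AirPlay route.
            try session.overrideOutputAudioPort(.speaker)
        } catch {
            print("Failed to re-route audio: \(error)")
        }
    }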
Replies: 1 · Boosts: 0 · Views: 126 · Activity: 16h
Sound not working on TestFlight / App Store
I have a Flutter iOS app that has some simple sound FX for button clicks, swipes, etc. In the simulator and on a real device the sound works fine, but when I upload the app to TestFlight (and the App Store) the sound FX don't play. When I install the app on my phone via Xcode I am using the release profile, so I don't see what the difference could be. I have also gone through the archive that I uploaded and verified that the sound files are indeed there. I have other Flutter apps that use sound, but none since the iOS 26 update. I've tried 3 different Flutter sound libraries and all face the same issue. Wondering if anyone else is seeing this issue, or if I'm missing a simple permission or something that has changed recently? Thanks in advance.
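One speculative thing to rule out (an assumption, not a confirmed cause): if the sound plugin leaves the audio session in its default category, effects can be silenced by factors outside the app. Explicitly configuring a playback session in the Flutter runner's AppDelegate is a cheap check:

    import UIKit
    import AVFoundation
    import Flutter

    // Speculative check: configure a .playback audio session explicitly so effects
    // don't depend on whatever category the plugin happens to set.
    @main
    @objc class AppDelegate: FlutterAppDelegate {
        override func application(
            _ application: UIApplication,
            didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
        ) -> Bool {
            do {
                try AVAudioSession.sharedInstance().setCategory(.playback, options: [.mixWithOthers])
                try AVAudioSession.sharedInstance().setActive(true)
            } catch {
                print("Audio session setup failed: \(error)")
            }
            GeneratedPluginRegistrant.register(with: self)
            return super.application(application, didFinishLaunchingWithOptions: launchOptions)
        }
    }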
Replies: 2 · Boosts: 0 · Views: 167 · Activity: 17h
CoreMediaErrorDomain -15628 playback failure in iOS 26 (React Native / AVPlayer, HLS stream)
Hi, after updating to iOS 26, our app is facing playback failures with AVPlayer. The same code and streams work fine on iOS 18 and earlier.

Error:

    Domain[CoreMediaErrorDomain]:Code[-15628]:Desc[The operation couldn’t be completed.]:Underlying Error Domain[(null)]:Code[0]:Desc[(null)]

Environment:
- iOS version: iOS 26
- React Native: 0.69
- Video library: react-native-video (AVPlayer under the hood)
- Stream type: HLS (m3u8) with segment (.ts) files

Observed behaviour:
- Playback works initially on iOS 26.
- On iOS 26, the stream fails at runtime after a few seconds/minutes (not on first load).
- Network logs show 307 redirects on some segment requests. After this, AVPlayer throws the above error.
- Playback fails intermittently on slow/unstable networks.
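A diagnostic sketch that may help narrow this down, assuming you can reach the underlying AVPlayerItem from native code (react-native-video creates one per source): the item's error log usually records the failing segment URI and HTTP status behind a bare -15628, which would confirm whether the 307 redirects are the trigger:

    import AVFoundation

    // Attach to the AVPlayerItem and dump its error log when new entries arrive.
    final class HLSErrorLogger {
        private var observers: [NSObjectProtocol] = []

        func attach(to item: AVPlayerItem) {
            let center = NotificationCenter.default
            observers.append(center.addObserver(
                forName: AVPlayerItem.newErrorLogEntryNotification,
                object: item, queue: .main) { _ in
                    guard let events = item.errorLog()?.events else { return }
                    for event in events {
                        print("HLS error:",
                              event.errorStatusCode,
                              event.errorDomain,
                              event.errorComment ?? "-",
                              event.uri ?? "-")
                    }
                })
            observers.append(center.addObserver(
                forName: AVPlayerItem.failedToPlayToEndTimeNotification,
                object: item, queue: .main) { note in
                    let error = note.userInfo?[AVPlayerItemFailedToPlayToEndTimeErrorKey] as? Error
                    print("Failed to play to end:", error?.localizedDescription ?? "unknown")
                })
        }

        deinit {
            observers.forEach { NotificationCenter.default.removeObserver($0) }
        }
    }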
Replies: 5 · Boosts: 18 · Views: 1k · Activity: 20h
Error resuming background audio while connected to CarPlay
My app utilizes background audio to play music files. I have the audio background mode enabled and I initialize the AVAudioSession in playback mode with the mixWithOthers option. It usually works great while the app is backgrounded: I listen for audio interruptions as well as route changes, I am able to handle them appropriately, and I can usually resume my background audio with no problem.

I discovered an issue while connected to CarPlay, though. Roughly 50% of the time when I disconnect from a phone call while connected to CarPlay, I get the following error after calling the play() method of my AVAudioPlayer instance:

    ATAudioSessionClientImpl.mm:281 activation failed. status = 561015905

If I instead try to start a new audio session I get a similar error:

    Error Domain=NSOSStatusErrorDomain Code=561015905 "Session activation failed" UserInfo={NSLocalizedDescription=Session activation failed}

Like I said, this isn't reproducible 100% of the time and has so far only been seen while connected to CarPlay. I don't think I'm forgetting some additional capability or plist setting, but if anyone has any clues it would be greatly appreciated. Otherwise this is likely just a bug that I need to report to Apple.

One very important note, and a reason I believe it's just a bug, is that while I was testing I found that other music apps like Spotify will also fail to resume their audio at the same time my app fails. Another important detail is that when it works successfully I receive the audio-session interruption-ended notification, and when it doesn't work I only receive a route configuration change or route override notification. From there I am still successfully granted background time to execute code, but my call to resume audio fails with the above-mentioned error codes.
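For reference, 561015905 is the FourCC '!pla' (AVAudioSession.ErrorCode.cannotStartPlaying), i.e. the system refused activation. A mitigation sketch, not an Apple-documented fix: when only a route-change notification arrives, retry activation a few times with a short delay, since the route is often still settling after the call ends:

    import AVFoundation

    // Retry session activation with a short back-off; 561015905 ('!pla') means the
    // system refused to start playback at that moment.
    func resumePlaybackWithRetry(player: AVAudioPlayer, attempt: Int = 0) {
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setActive(true)
            _ = player.play()
        } catch {
            guard attempt < 5 else {
                print("Giving up on session activation: \(error)")
                return
            }
            // Back off briefly and try again; CarPlay may still be re-routing audio.
            DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
                resumePlaybackWithRetry(player: player, attempt: attempt + 1)
            }
        }
    }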
Replies: 0 · Boosts: 0 · Views: 152 · Activity: 1d
DockKit custom tracking does not work on iOS 18.3
Hi all, I'm using the Apple sample code below to create an application using DockKit: "Controlling a DockKit accessory using your camera app" https://developer.apple.com/documentation/dockkit/controlling-a-dockkit-accessory-using-your-camera-app?changes=_8

I used Vision hand recognition and passed the observation data to dockAccessory.track, but Belkin and Insta360 devices never move on an iPhone 16 Pro Max with iOS 18.3. If I use other functions in the app, like face search (system tracking), those work OK. I used Belkin and Insta360 Flow 2 Pro devices to reproduce the problem.

My friend also says that the custom tracking feature was working fine on the iOS 18 beta, but on the recent iOS 18.3 that feature does not work. If I could get the iOS 18.0 beta then we could test that feature, but I cannot revert my iPhone from iOS 18.3 to the iOS 18.0 beta.

Regards,
TO
Replies: 1 · Boosts: 1 · Views: 272 · Activity: 2d
AVAudioEngine Voice Processing Fails with Mismatched Input/Output Devices: AggregateDevice Channel Count Mismatch
I'm encountering errors while using AVAudioEngine with voice processing enabled (setVoiceProcessingEnabled(true)) in scenarios where the input and output audio devices are not the same. This issue arises specifically with mismatched devices, preventing the application from functioning as expected.

Works: paired devices (e.g., MacBook Pro mic → MacBook Pro speakers). The setup works as expected.
Fails: mismatched devices (e.g., AirPods mic → MacBook Pro speakers). AVAudioEngine setup fails during aggregate device construction, and the error logs indicate a channel count mismatch.

Here are the partial logs. Due to the content limit, I cannot post the entire logs.

    AUVPAggregate.cpp:1000 client-side input and output formats do not match (err=-10875)
    AUVPAggregate.cpp:1036 err=-10875
    AVAEInternal.h:109 [AVAudioEngineGraph.mm:1344:Initialize: (err = PerformCommand(*outputNode, kAUInitialize, NULL, 0)): error -10875
    AggregateDevice.mm:329 Failed expectation of constructed aggregate (312): mInput.streamChannelCounts == inputStreamChannelCounts
    AggregateDevice.mm:331 Failed expectation of constructed aggregate (312): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
    AggregateDevice.mm:182 error fetching default pair
    AggregateDevice.mm:329 Failed expectation of constructed aggregate (336): mInput.streamChannelCounts == inputStreamChannelCounts
    AggregateDevice.mm:331 Failed expectation of constructed aggregate (336): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)
    AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702]
    AudioHardware-mac-imp.cpp:3484 AudioDeviceSetProperty: no device with given ID
    AUHAL.cpp:1782 ca_verify_noerr: [AudioDeviceSetProperty(mDeviceID, NULL, 0, isInput, kAudioDevicePropertyIOProcStreamUsage, theSize, theStreamUsage), 560227702]
    AggregateDevice.mm:182 error fetching default pair
    AggregateDevice.mm:329 Failed expectation of constructed aggregate (348): mInput.streamChannelCounts == inputStreamChannelCounts
    AggregateDevice.mm:331 Failed expectation of constructed aggregate (348): mInput.totalChannelCount == std::accumulate(inputStreamChannelCounts.begin(), inputStreamChannelCounts.end(), 0U)

Questions:
1. Is it possible to use voice processing with different input/output devices? If yes, are there any specific configurations required to handle mismatched devices?
2. How can we resolve channel count mismatch errors during aggregate device construction? Are there settings or API adjustments to enforce compatibility between input/output devices?
3. Are there any workarounds or alternative approaches to achieve voice processing functionality with mismatched devices? For instance, can we force an intermediate channel configuration or downmix input/output formats?
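For anyone trying to reproduce this, here is a minimal sketch of the setup being described (voice processing enabled on the input node, a tap on the processed input), under the assumption that the system input and output devices are set to mismatched hardware before it runs:

    import AVFoundation

    // Minimal macOS repro sketch: voice processing on the input node, default
    // output device. With mismatched system input/output devices selected, this is
    // roughly where the aggregate-device construction errors above are reported.
    func startVoiceProcessingEngine() throws -> AVAudioEngine {
        let engine = AVAudioEngine()
        let input = engine.inputNode

        // Enabling voice processing on the input node also enables it on the output node.
        try input.setVoiceProcessingEnabled(true)

        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            // Forward the processed microphone buffers to your own pipeline here.
            _ = buffer.frameLength
        }

        engine.prepare()
        try engine.start()
        return engine
    }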
Replies: 0 · Boosts: 0 · Views: 157 · Activity: 2d
Logged error/warning in FigCaptureSourceRemote when capturing a photo
I'm using this library to capture photos: https://github.com/Yummypets/YPImagePicker. I've modified it slightly, and I'm using an older version. When testing on my iPhone 16e (iOS 26), whenever I take a photo I get the following two error messages:

    <<<< FigXPCUtilities >>>> signalled err=-17281 at <>:302
    <<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:569) - (err=-17281)

These error messages appear, but as far as I can tell the photo comes through OK and I can save the data no problem. I've even removed all my handling code to see if it was something I was doing. I don't really want to ship with these errors showing, but I also have no idea what could be causing this error to appear; ChatGPT was not helpful in diagnosing it. Does anyone know what can cause this error? Is there a way I can see the source code to figure out if there's something I'm doing wrong here? It really seems like this is an internal Apple error, or else I would have expected more details relating the error to the code I've written. Any clues would be appreciated!
Replies: 2 · Boosts: 2 · Views: 585 · Activity: 3d
Photos are captured with incorrect exposure bias in specific scenarios on iPhone 17 Pro
Hey, there seems to be an inconsistency when capturing a photo using QualityPrioritization.quality on the iPhone 17 Pro main wide lens. If you zoom above 2x, the output image always has a -2.0 ev bias in the metadata and looks underexposed. This does not happen at zoom levels of 2x and below, or if you set the QualityPrioritization to .balanced. See the attached comparison images: with .quality vs. with .balanced.

This does not happen on the other lenses. I'm using a simple setup and it is consistent across JPEG and ProRAW capture. I have a demo project if that is useful.

Thanks,
Alex
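A minimal comparison sketch, assuming an already-configured session with an AVCapturePhotoOutput whose maxPhotoQualityPrioritization allows .quality; capturing the same scene at more than 2x zoom with .quality vs .balanced and reading the exposure bias from the Exif metadata should reproduce the difference described above:

    import AVFoundation

    // Capture with a chosen prioritization. Assumes the device's videoZoomFactor is
    // already set above 2 and output.maxPhotoQualityPrioritization >= .quality.
    func capture(with output: AVCapturePhotoOutput,
                 prioritization: AVCapturePhotoOutput.QualityPrioritization,
                 delegate: AVCapturePhotoCaptureDelegate) {
        let settings = AVCapturePhotoSettings()
        settings.photoQualityPrioritization = prioritization
        output.capturePhoto(with: settings, delegate: delegate)
    }

    // In the delegate, the bias shows up in the photo metadata under the Exif
    // dictionary (kCGImagePropertyExifDictionary / kCGImagePropertyExifExposureBiasValue).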
Replies: 3 · Boosts: 0 · Views: 393 · Activity: 3d
Can iOS capture video at 4032×3024 while running a Vision/ML model?
I am new to Swift and iOS development, and I have a question about video capture performance. Is it possible to capture video at a resolution of 4032×3024 while simultaneously running a vision/ML model on the video stream (e.g., using Vision or CoreML)? I want to know: whether iOS devices support capturing video at that resolution, whether the frame rate drops significantly at that scale, and whether it is practical to run a Vision/ML model in real-time while recording at such a high resolution. If anyone has experience with high-resolution AVCaptureSession setups or combining them with real-time ML processing, I would really appreciate guidance or sample code.
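A sketch of the usual pattern for this kind of question: enumerate the device's formats to see whether a 4032x3024 video format exists at all (it is device-dependent), then feed AVCaptureVideoDataOutput frames into a Vision request on a background queue. The face-rectangles request is just a stand-in for whatever model you plan to run:

    import AVFoundation
    import CoreMedia
    import Vision

    // Sketch only: high-resolution video formats vary by device, so log what the
    // hardware offers instead of assuming 4032x3024 is available. Expect dropped
    // frames if per-frame analysis cannot keep up.
    final class HighResCapture: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        let session = AVCaptureSession()
        private let analysisQueue = DispatchQueue(label: "video.analysis")

        func configure() throws {
            guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                       for: .video, position: .back) else { return }

            // Log available formats; to use one, lock the device and set activeFormat.
            for format in device.formats {
                let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
                let maxFPS = format.videoSupportedFrameRateRanges.map(\.maxFrameRate).max() ?? 0
                print("Format \(dims.width)x\(dims.height), max \(maxFPS) fps")
            }

            let input = try AVCaptureDeviceInput(device: device)
            if session.canAddInput(input) { session.addInput(input) }

            let output = AVCaptureVideoDataOutput()
            output.alwaysDiscardsLateVideoFrames = true   // keep latency bounded when analysis is slow
            output.setSampleBufferDelegate(self, queue: analysisQueue)
            if session.canAddOutput(output) { session.addOutput(output) }
        }

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
            // Stand-in request; substitute your own Core ML / Vision model here.
            let request = VNDetectFaceRectanglesRequest()
            try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
        }
    }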
Replies: 1 · Boosts: 0 · Views: 94 · Activity: 3d
Background Upload Extension
Hello, we are trying to use the new Background Upload Extension to improve uploads of assets (photos, Live Photos, videos) in the background in our application.

1. We implemented the code to upload still images. We would like to do the same for Live Photos, but we are not sure how to proceed, since a Live Photo is composed of 2 resources that we would like to upload in the same job so that we keep the association between the 2 resources. Steps: a Live Photo is captured on the device; our application is notified of new content in the photo library and the asset is queued for upload by our application; the system calls the background upload extension and the Live Photo is prepared for upload, but we can schedule a job for only one resource (still photo or paired video), so we cannot upload both resources in a single job.

2. Is there a way to synchronise the application and the extension so that the application does not process the same data or execute the same requests as the extension? For example: our application works with tokens, and we would like to prevent those tokens from being consumed by the application and the extension at the same time.

3. Is there a command in Xcode or Terminal to start or stop the extension, similar to what exists for processing tasks (https://developer.apple.com/documentation/backgroundtasks/starting-and-terminating-tasks-during-development)?
Replies: 1 · Boosts: 1 · Views: 135 · Activity: 3d
VTFrameRateConversionConfiguration doesn't support 640x480
Hello, I'm using the VideoToolbox VTFrameRateConversionConfiguration to perform frame interpolation: https://developer.apple.com/documentation/videotoolbox/vtframerateconversionconfiguration?language=objc. When using 640x480 video input, I get this error:

    Error ! Invalid configuration
    [VEEspressoModel] build failure : flow_adaptation_feature_extractor_rev2.espresso.net. Configuration: landscape640x480
    [EpsressoModel] Cannot load Net file flow_adaptation_feature_extractor_rev2.espresso.net. Configuration: landscape640x480
    Error: failed to create FRCFlowAdaptationFeatureExtractor for usage 8
    Failed to switch (0x12c40e140) [usage:8, 1/4 flow:0, adaptation layer:1, twoStage:0, revision:2, flow size (320x240)]. Could not init FlowAdaptation
    initFlowAdaptationWithError fail

I tried 2048x1080 and it works fine.
Replies: 3 · Boosts: 0 · Views: 297 · Activity: 4d
Video Audio + Speech To Text
Hello, I am wondering if it is possible to have audio from my AirPods sent to my speech-to-text service while, at the same time, the built-in mic's audio input is recorded into a video. I ask because I want my users to be able to say "CAPTURE" to start recording a video (with audio from the built-in mic), and then say "STOP" to stop the recording.
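A sketch of the speech-trigger half only, using SFSpeechRecognizer fed from an AVAudioEngine tap. Note that the open question is exactly the routing part: an audio session normally exposes one active input route, so taking speech from AirPods while AVCaptureSession records the built-in mic may not be possible without changing the route:

    import Speech
    import AVFoundation

    // Requires SFSpeechRecognizer.requestAuthorization plus the speech-recognition
    // and microphone usage descriptions in Info.plist.
    final class VoiceTrigger {
        private let recognizer = SFSpeechRecognizer()
        private let engine = AVAudioEngine()
        private var request: SFSpeechAudioBufferRecognitionRequest?
        private var task: SFSpeechRecognitionTask?

        var onCommand: ((String) -> Void)?   // called with "CAPTURE" or "STOP"

        func start() throws {
            let request = SFSpeechAudioBufferRecognitionRequest()
            request.shouldReportPartialResults = true
            self.request = request

            let input = engine.inputNode
            let format = input.outputFormat(forBus: 0)
            input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
                request.append(buffer)
            }

            task = recognizer?.recognitionTask(with: request) { [weak self] result, _ in
                guard let text = result?.bestTranscription.formattedString.uppercased() else { return }
                if text.contains("CAPTURE") { self?.onCommand?("CAPTURE") }
                if text.contains("STOP") { self?.onCommand?("STOP") }
            }

            engine.prepare()
            try engine.start()
        }
    }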
Replies: 1 · Boosts: 0 · Views: 288 · Activity: 5d
Repeat song listens not queryable
Hi all, I've been working on some personal programming projects and have gotten into using the Apple Music API. I'm currently looking to get a list of recent songs using the /v1/me/recent/played/tracks endpoint and it's working well. However, I know there are some songs I've listened to multiple times in a row, and those are not showing up as unique tracks when querying this endpoint. I'm only seeing a list of the different songs I've listened to lately, not a true list of the most recent plays on my account. Is this intended behavior or am I going about something incorrectly here? My query is using that endpoint & specifying the types to be only [songs]. Thanks in advance for any ideas or insight.
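For reference, a sketch of the request being described; "developerToken" and "musicUserToken" are placeholders. The documented parameters for this endpoint appear to be limit, offset, and types, and none of them seems to control whether repeated plays of the same song are collapsed, which matches the behaviour above:

    import Foundation

    // Fetch recently played tracks from the Apple Music API.
    func fetchRecentlyPlayedTracks(developerToken: String, musicUserToken: String) async throws -> Data {
        var components = URLComponents(string: "https://api.music.apple.com/v1/me/recent/played/tracks")!
        components.queryItems = [
            URLQueryItem(name: "types", value: "songs"),
            URLQueryItem(name: "limit", value: "10")
        ]
        var request = URLRequest(url: components.url!)
        request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
        request.setValue(musicUserToken, forHTTPHeaderField: "Music-User-Token")

        let (data, _) = try await URLSession.shared.data(for: request)
        return data
    }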
Replies: 0 · Boosts: 0 · Views: 198 · Activity: 5d
Unexpected artist names in song table
Hi team, in the Apple Music Feed datasets we've noticed some unexpected values in the song and album tables. The primaryartists column from either song or album may contain a "non-default" artist name, such as the katakana name shown in the example below:

    select id, name, namedefault, primaryartists from amf_song where id = '1698723329'

    id         | name                 | namedefault | primaryartists
    1698723329 | {default=California} | California  | [{id=1264818718, name=チャペル・ローン}]

    select * from amf_artist where id = '1264818718'

    id         | name                                        | namedefault   | namepronunciation
    1264818718 | {default=Chappell Roan, ja=チャペル・ローン} | Chappell Roan | {ja=チャペルローン}

Shouldn't the primaryartists column be showing the namedefault instead of the Japanese-language version? When can we expect this bug to be resolved?

Thanks,
Replies: 0 · Boosts: 2 · Views: 205 · Activity: 5d
Changing Frame Rate of External Display on iPad
Hello, as far as I know, and in all of my testing, there is no way for a user or a developer to change the frame rate of the video output on iPadOS. If you connect an iPad via a USB hub or a USB-to-HDMI adapter and then connect it to an external monitor, it will output at 59.94 fps. I have a video app where a user monitors live video at 25 fps and 30 fps; they often output to an external display, and there are times when the external display will stutter due to the mismatch in frame rate, i.e. using 25 fps and outputting at 59.94 fps.

I thought it was impossible to change the video output frame rate, but then in v3.1 of the Blackmagic Camera app I saw an interesting change in their release notes: "Support for HDMI Monitoring at Sensor Rate and Resolution". This means there is some way to modify it; I'm not sure if this is done via a private API that Apple has allowed Blackmagic to use. If so, how can we access this, or is there a way to enable this that is undocumented? Thanks!
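For completeness, the only public handle I'm aware of is read-only: you can inspect what the connected external display reports via its window scene, but not change its refresh rate, which matches the situation described above:

    import UIKit

    // Inspection only: log the size, scale and maximum frame rate that each
    // connected external display reports.
    func logExternalDisplayInfo() {
        for scene in UIApplication.shared.connectedScenes {
            guard let windowScene = scene as? UIWindowScene,
                  windowScene.screen !== UIScreen.main else { continue }
            let screen = windowScene.screen
            print("External display:",
                  "\(screen.bounds.size)",
                  "scale \(screen.scale)",
                  "maxFPS \(screen.maximumFramesPerSecond)")
        }
    }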
Replies: 5 · Boosts: 0 · Views: 675 · Activity: 1w
_shouldExposeItemIdentifier is false. Unable to get itemIdentifier
PHPhotoLibrary.authorizationStatus(for: .readWrite) == .authorized, and the Info.plist Privacy - Photo Library Usage Description key is set. I check authorization before attempting to get photoPickerItem.itemIdentifier, but every time the return value from itemIdentifier is nil. It seems I'm missing some permission, but I'm unsure why the system is still keeping _shouldExposeItemIdentifier set to false.
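One likely cause worth checking: itemIdentifier is only populated when the picker is constructed with an explicit photo library; with the default initializer the picker runs out of process and returns nil identifiers regardless of authorization status. A minimal sketch:

    import SwiftUI
    import PhotosUI

    // Passing photoLibrary: .shared() is what makes itemIdentifier non-nil.
    struct PickerView: View {
        @State private var selection: PhotosPickerItem?

        var body: some View {
            PhotosPicker(selection: $selection,
                         matching: .images,
                         photoLibrary: .shared()) {     // required for itemIdentifier
                Text("Pick a photo")
            }
            .onChange(of: selection) { _, item in
                print("identifier:", item?.itemIdentifier ?? "nil")
            }
        }
    }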
Replies: 1 · Boosts: 0 · Views: 196 · Activity: 1w
How can third-party iOS apps obtain real-time waveform / spectrogram data for Apple Music tracks (similar to djay & other DJ apps)?
Hi everyone, I'm working on an iOS MusicKit app that overlays a metronome on top of Apple Music playback. To line the clicks up perfectly I'd like access to low-level audio analysis data - ideally a waveform / spectrogram or beat grid - while the track is playing.

I've noticed that several approved DJ apps (e.g. djay, Serato, rekordbox) can already:
• Display detailed scrolling waveforms of Apple Music songs
• Scratch, loop or time-stretch those tracks in real time

That implies they receive decoded PCM frames, or at least high-resolution analysis data, from Apple Music under a special entitlement.

My questions:
1. Does MusicKit (or any public framework) expose real-time audio buffers, FFT bins, or beat markers for streaming Apple Music content?
2. If not, is there an Apple program or entitlement that developers can apply for - similar to the "DJ with Apple Music" initiative - to gain that deeper access?
3. Where can I find official documentation or a point of contact for this kind of request?

I've searched the docs and forums but only see standard MusicKit playback APIs, which don't appear to expose raw audio for DRM-protected songs. Any guidance, links or insider tips on the proper application process would be hugely appreciated! Thanks in advance.
Replies: 1 · Boosts: 1 · Views: 594 · Activity: 1w