Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

Incorrect 5.1 / Atmos channel mapping on Apple TV 4K (2022)
I ran 5.1 audio tests in both YouTube and Apple Music, and I noticed that when sound is supposed to play from the rear or front surround speakers, it’s also duplicated in the front left and right channels. I’m absolutely sure the issue is with the Apple TV, because I played the same video directly through my TV’s native system, and the channel separation was correct. Everything used to work perfectly before, so this must be a software issue. I’m currently on tvOS 26 Developer Beta 5, but I’m certain the problem also existed on the stable tvOS 18.5. I’ve already reset and updated my Apple TV, and I also tried switching the audio format to forced Dolby Atmos 5.1. On the forums, I mostly see complaints about Dolby Atmos not working at all — in my case, everything technically works, but not the way it’s supposed to.
Replies: 1 · Boosts: 0 · Views: 99 · Activity: Aug ’25
AVSpeechSynthesisVoices available on device
Hello there! Is there any list of voices that are always available on iOS/iPadOS devices? It seems that AVSpeechSynthesisVoice(identifier: "com.apple.voice.compact.en-US.Samantha") is always available on all devices. I thought that AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Nicky_en-US_compact") and AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.siri_Aaron_en-US_compact") were available by default on certain newer devices. Is this true? I also noticed that on the same iPad where I was using those 2 voices (Nicky and Aaron) - when I updated to the iPadOS 26 beta, those voices were no longer available. Any information you can share about which voices should be reliably available on which devices would be extremely helpful for our development. Thanks so much!
Replies: 0 · Boosts: 0 · Views: 188 · Activity: Jun ’25
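Regarding the voice-availability question above: there is no published list of guaranteed voice identifiers, so the most reliable approach is to enumerate what is actually installed at runtime and fall back gracefully. A minimal Swift sketch (the preferred-identifier list is only an example):

import AVFoundation

// Log every voice installed on this device, then resolve a preferred
// identifier with a fallback. Availability varies by device, region,
// OS version, and (as noted above) OS betas.
func resolveVoice(preferredIdentifiers: [String]) -> AVSpeechSynthesisVoice? {
    for voice in AVSpeechSynthesisVoice.speechVoices() {
        print("\(voice.identifier) - \(voice.name) (\(voice.language))")
    }
    // Take the first preferred identifier that is actually present.
    for identifier in preferredIdentifiers {
        if let voice = AVSpeechSynthesisVoice(identifier: identifier) {
            return voice
        }
    }
    // Fall back to the default voice for the language.
    return AVSpeechSynthesisVoice(language: "en-US")
}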
Issue with Audio Sample Rate Conversion in Video Calls
Hey everyone, I'm encountering an issue with audio sample rate conversion that I'm hoping someone can help with. Here's the breakdown:

Issue Description: I've installed a tap on an input device to convert audio to an optimal sample rate. There's a converter node added on top of this setup. The problem arises when joining Zoom or FaceTime calls: the converter gets deallocated from memory, causing the program to crash.

Symptoms: The converter node is being deallocated during video calls. The program crashes entirely when this happens. Traditional methods of monitoring sample rate changes (tracking nominal or actual sample rates) aren't working as expected.

The Big Challenge: I can't figure out how to properly monitor sample rate changes. Listeners set up to track these changes don't trigger when the device joins a Zoom or FaceTime call.

Please, if anyone has experience with this or knows a solution, I'd really appreciate your help. Thanks in advance!
Replies: 0 · Boosts: 0 · Views: 121 · Activity: Apr ’25
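One pattern worth trying for the question above (a sketch, assuming a macOS Core Audio setup): conference apps often switch the system default input device rather than changing the rate of the device you tapped, so a rate listener on the old device never fires. Listen for both the device's nominal sample rate and the default-input change, and rebuild the tap/converter when either triggers.

import CoreAudio
import Foundation

func watchInputConfiguration(deviceID: AudioObjectID) {
    // Fires if the tapped device's nominal sample rate changes in place.
    var rateAddress = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyNominalSampleRate,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    _ = AudioObjectAddPropertyListenerBlock(deviceID, &rateAddress, DispatchQueue.main) { _, _ in
        print("Nominal sample rate changed on device \(deviceID); rebuild the converter")
    }

    // Fires if Zoom/FaceTime switches the system default input device.
    var defaultAddress = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultInputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    _ = AudioObjectAddPropertyListenerBlock(AudioObjectID(kAudioObjectSystemObject),
                                            &defaultAddress, DispatchQueue.main) { _, _ in
        print("Default input device changed; re-install the tap on the new device")
    }
}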
Play MPEG-TS (h264 or HEVC) video stream in iOS App
I'm capturing a video stream from a GoPro camera (sent as UDP MPEG-TS packets) but I'm unable to play it in an iOS app. Can someone provide sample code for decoding MPEG-TS data into CMSampleBuffers?
Replies: 0 · Boosts: 0 · Views: 166 · Activity: Jul ’25
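For orientation, a sketch of the first stage of such a pipeline (not a full demuxer): MPEG-TS arrives as 188-byte packets that start with sync byte 0x47 and carry a 13-bit PID. Collecting the payload bytes of the video PID yields PES packets, whose H.264/HEVC NAL units can then be handed to VideoToolbox to build CMSampleBuffers.

import Foundation

// Walk 188-byte TS packets and collect payloads for one PID.
func collectPayloads(ts: Data, videoPID: UInt16) -> [Data] {
    var payloads: [Data] = []
    var offset = ts.startIndex
    while offset + 188 <= ts.endIndex {
        let packet = ts[offset..<offset + 188]
        defer { offset += 188 }
        guard packet.first == 0x47 else { continue }          // sync byte
        let pid = UInt16(packet[packet.startIndex + 1] & 0x1F) << 8
                | UInt16(packet[packet.startIndex + 2])       // 13-bit PID
        guard pid == videoPID else { continue }
        let adaptation = (packet[packet.startIndex + 3] & 0x30) >> 4
        guard adaptation != 2 else { continue }               // adaptation field only, no payload
        var payloadStart = packet.startIndex + 4
        if adaptation == 3 {                                  // skip adaptation field
            payloadStart += 1 + Int(packet[packet.startIndex + 4])
        }
        guard payloadStart < packet.endIndex else { continue }
        payloads.append(Data(packet[payloadStart..<packet.endIndex]))
    }
    return payloads
}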
Add icon to DEXT based on AudioDriverKit
Dear Sirs, I'd like to add an icon to my audio driver based on AudioDriverKit. This icon should show up left of my audio device in the audio devices dialog. For an Audio Server Plugin I managed to do this using the property kAudioDevicePropertyIcon and CFBundleCopyResourceURL(...) but how would you do this with AudioDriverKit? Should I use IOUserAudioCustomProperty or IOUserAudioControl and how would I refer to the Bundle? Is there an example available somewhere? Thanks and best regards, Johannes
Replies: 7 · Boosts: 0 · Views: 1.2k · Activity: Jul ’25
Some questions about musickit
We are developing an Apple Music web app for phones. The web app works fine in Chrome, but when I load it in a WebView on my phone, I can't play the first song. We suspect that the DRM init, key exchange, and session creation happen inside the music.play() function: when we trigger play, the DRM session isn't yet ready to play a real song, so we get an error. So we would like to know: what is the relative order of the DRM, key, and session steps inside the play() function? And is there a state-detection function that shows whether the DRM is ready?
Replies: 1 · Boosts: 0 · Views: 161 · Activity: Mar ’25
Live HLS Stream Not Playing After Window Resize in Vision Pro
Hi everyone, I'm developing a visionOS app for Apple Vision Pro, and I've encountered an issue related to window resizing at runtime when using AVPlayer to play a live HLS stream.

✅ What I'm Trying to Do: Play a live HLS stream (from Wowza) inside my app using AVPlayer. Support resizing the immersive window using Vision Pro’s built-in runtime scaling gesture. The stream works fine at the default window size when the app launches.

❌ Problem: If I resize the app’s window at runtime (using the Vision Pro pinch-drag gesture), then try to start the stream, it does not play. Instead, it just shows the "Loading live stream..." state and never proceeds to playback. This issue only occurs after resizing the window; if I don’t resize, the stream works perfectly every time.

🧪 What I’ve Tried: Verified the HLS URL (it works and plays fine in Safari and in the app before resizing). Set .automaticallyWaitsToMinimizeStalling = false on AVPlayer. Observed that .status on AVPlayerItem never reaches .readyToPlay after resizing. Tried to force the window size back using UIWindowScene.requestGeometryUpdate(...), but the behavior persists.
Replies: 2 · Boosts: 0 · Views: 289 · Activity: Jul ’25
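There is no confirmed fix in the thread, but a workaround sketch worth trying: instead of reusing the stalled item after a resize, rebuild the AVPlayerItem from the same URL and log its status transitions to confirm whether .readyToPlay ever arrives.

import AVFoundation
import Combine

final class LiveStreamController {
    let player = AVPlayer()
    private var cancellable: AnyCancellable?

    // Recreate the item from scratch after a window resize.
    func restart(url: URL) {
        let item = AVPlayerItem(url: url)
        cancellable = item.publisher(for: \.status)
            .sink { status in
                // 0 = unknown, 1 = readyToPlay, 2 = failed
                print("AVPlayerItem status: \(status.rawValue)")
            }
        player.replaceCurrentItem(with: item)
        player.play()
    }
}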
audio video live broadcast
I need to make an audio/video live-broadcast app for iOS. Where can I get sample code for Xcode?
Replies: 1 · Boosts: 0 · Views: 82 · Activity: May ’25
How to display custom UI in Picture-in-Picture after iOS 15
After Picture-in-Picture is started from the camera interface, custom UI cannot be displayed. Is there any way to make custom UI visible? If it cannot be, how do the teleprompter-type apps in the App Store manage to show custom teleprompter text within the camera view?
Replies: 0 · Boosts: 0 · Views: 211 · Activity: Aug ’25
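On the teleprompter question: PiP can only display video content, so the usual assumption is that such apps render their text into CMSampleBuffers and drive PiP from an AVSampleBufferDisplayLayer (iOS 15+). A skeleton sketch with minimal delegate stubs (the actual frame rendering is omitted):

import AVKit
import AVFoundation

final class TeleprompterPiP: NSObject, AVPictureInPictureSampleBufferPlaybackDelegate {
    let displayLayer = AVSampleBufferDisplayLayer() // enqueue rendered text frames here
    private var pipController: AVPictureInPictureController?

    func setUp() {
        guard AVPictureInPictureController.isPictureInPictureSupported() else { return }
        let source = AVPictureInPictureController.ContentSource(
            sampleBufferDisplayLayer: displayLayer,
            playbackDelegate: self)
        pipController = AVPictureInPictureController(contentSource: source)
    }

    // Minimal delegate stubs for endless, always-playing content.
    func pictureInPictureController(_ controller: AVPictureInPictureController,
                                    setPlaying playing: Bool) {}
    func pictureInPictureControllerTimeRangeForPlayback(
        _ controller: AVPictureInPictureController) -> CMTimeRange {
        CMTimeRange(start: .zero, duration: .positiveInfinity)
    }
    func pictureInPictureControllerIsPlaybackPaused(
        _ controller: AVPictureInPictureController) -> Bool { false }
    func pictureInPictureController(_ controller: AVPictureInPictureController,
                                    didTransitionToRenderSize newRenderSize: CMVideoDimensions) {}
    func pictureInPictureController(_ controller: AVPictureInPictureController,
                                    skipByInterval skipInterval: CMTime) async {}
}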
ffmpeg xcframework not working on Mac, but working correctly on iOS
I have an app (currently in the development stage) which needs to use ffmpeg, so I searched for how to embed ffmpeg in Apple apps and found this article: https://doc.qt.io/qt-6/qtmultimedia-building-ffmpeg-ios.html It works correctly for iOS but not for macOS (I have made macOS-specific changes using ChatGPT and traditional web searching). Drive link for the files and instructions I'm following: https://drive.google.com/drive/folders/11wqlvb8SU2thMSfII4_Xm3Kc2fPSCZed?usp=share_link Can someone from Apple, or anyone in general, please help me figure out what I'm doing wrong?
Replies: 1 · Boosts: 0 · Views: 205 · Activity: Jun ’25
Audio of AirPods won’t work
Since the last update to iOS 26.0 (23A5276f), my AirPods connect to my iPhone but the audio still plays through the phone. The Bluetooth icon shows them as paired.
Replies: 1 · Boosts: 0 · Views: 90 · Activity: Jun ’25
[CoreImage] OS 26 breaks Metal kernels for CIFilters
I maintain a couple of CoreImage libraries that provide custom Metal kernel backed CIFilters. In iOS/iPadOS 26, the CIColorKernel.apply() method invoked in the CIFilter subclass fails to add the coreimage::destination parameter to the Metal function call: -[CIColorKernel applyWithExtent:arguments:options:] argument count mismatch for kernel 'FractalNoise3D', expected 13 but saw 12. I've compiled the code with Xcode 26 and deployed to iOS 18 devices without any breakage, so this is definitely an iOS problem, not an Xcode problem. Library here: https://github.com/JoshuaSullivan/SimplexNoiseFilter Feedback ID: FB17874311
Replies: 1 · Boosts: 0 · Views: 192 · Activity: Jun ’25
currentPlaybackPitch or preservesPitch parameter
Hi Apple Music API / MusicKit / MediaPlayer Team, Similar to how currentPlaybackRate preserves the pitch, it would be great to have a currentPlaybackPitch parameter as well. Alternatively, adding a preservesPitch parameter would also work. I see that the iOS 26 AutoMix feature on Apple Music currently does pitch shifting during music transitions, so maybe this is something that could be exposed in later betas of iOS 26? The main feature request we get is to apply simple pitch changes to the Apple Music audio we play through our app. Is this being considered?
Replies: 0 · Boosts: 0 · Views: 440 · Activity: Jul ’25
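For audio an app owns (as opposed to Apple Music streams), AVAudioEngine already offers exactly this through AVAudioUnitTimePitch, which is presumably why the request targets the Music player specifically. A sketch of the existing engine-based route:

import AVFoundation

// Pitch-shift a local audio file; pitch is measured in cents (100 = 1 semitone).
func makePitchShiftedPlayer(fileURL: URL, semitones: Float) throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let timePitch = AVAudioUnitTimePitch()
    timePitch.pitch = semitones * 100

    engine.attach(player)
    engine.attach(timePitch)
    engine.connect(player, to: timePitch, format: nil)
    engine.connect(timePitch, to: engine.mainMixerNode, format: nil)

    let file = try AVAudioFile(forReading: fileURL)
    player.scheduleFile(file, at: nil)
    try engine.start()
    player.play()
    return engine // keep a reference alive while playing
}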
Unstable Playlist.Entry.id causes crashes when removing duplicates
When multiple identical songs are added to a playlist, Playlist.Entry.id uses a suffix-based identifier (e.g. songID_0, songID_1, etc.). Removing one entry causes others to shift, changing their .id values. This leads to diffing errors and collection view crashes in SwiftUI or UIKit when entries are updated.

Steps to Reproduce:
1. Add the same song to a playlist multiple times.
2. Observe .id.rawValue of the entries (e.g. i.SONGID_0, i.SONGID_1).
3. Remove one entry.
4. Fetch the playlist again and note that the other IDs have shifted.

FB18879062
Replies: 0 · Boosts: 0 · Views: 552 · Activity: Jul ’25
PHPickerViewController Not Offering public.hevc UTI for a Known HEVC Video
I'm working on an app where a user needs to select a video from their Photos library, and I need to get the original, unmodified HEVC (H.265) data stream to preserve its encoding.

The Problem: I have confirmed that my source videos are HEVC. I can record a new video with my iPhone 15 Pro Max camera set to "High Efficiency," export the "Unmodified Original" from Photos on my Mac, and verify that the codec is MPEG-H Part2/HEVC (H.265). However, when I select that exact same video in my app using PHPickerViewController, the itemProvider does not list public.hevc as an available type identifier. This forces me to fall back to a generic movie type, which results in the system providing me with a transcoded H.264 version of the video. Here is the debug output from my app after selecting a known HEVC video: ⚠️ 'public.hevc' not found. Falling back to generic movie type (likely H.264).

What I've Tried: My code explicitly checks for the public.hevc identifier in the registeredTypeIdentifiers array. Since it's not found, my HEVC-specific logic is never triggered. Here is a minimal version of my PHPickerViewControllerDelegate implementation:

import UniformTypeIdentifiers

// ... inside the Coordinator class ...
func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
    picker.dismiss(animated: true)
    guard let result = results.first else { return }
    let itemProvider = result.itemProvider
    let hevcIdentifier = "public.hevc"
    let identifiers = itemProvider.registeredTypeIdentifiers
    print("Available formats from itemProvider: \(identifiers)")
    if identifiers.contains(hevcIdentifier) {
        print("✅ HEVC format found, requesting raw data...")
        itemProvider.loadDataRepresentation(forTypeIdentifier: hevcIdentifier) { (data, error) in
            // ... process H.265 data ...
        }
    } else {
        print("⚠️ 'public.hevc' not found. Falling back to generic movie type (likely H.264).")
        itemProvider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { url, error in
            // ... process H.264 fallback ...
        }
    }
}

My Environment: Device: iPhone 15 Pro Max; iOS version: iOS 18.5; Xcode version: 16.2.

My Questions:
1. Are there specific conditions (e.g., the video being HDR/Dolby Vision, Cinematic, or stored in iCloud) under which PHPickerViewController's itemProvider would intentionally not offer the public.hevc type identifier, even for an HEVC video?
2. What is the definitive, recommended API sequence to guarantee that I receive the original, unmodified data stream for a video asset, ensuring that no transcoding to H.264 occurs during the process?

Any insight into why public.hevc might be missing from the registeredTypeIdentifiers for a known HEVC asset would be greatly appreciated. Thank you.
Replies: 3 · Boosts: 0 · Views: 191 · Activity: Jul ’25
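A commonly suggested alternative for getting the untranscoded original (a sketch; note it requires photo-library read authorization, unlike the bare picker): configure the picker with the shared photo library so results carry an assetIdentifier, then copy the original bytes via PHAssetResource instead of the item provider.

import Photos
import PhotosUI

// Picker setup: the photoLibrary initializer is what populates assetIdentifier.
// var config = PHPickerConfiguration(photoLibrary: .shared())
// config.filter = .videos

func fetchOriginalVideo(from result: PHPickerResult, to destination: URL,
                        completion: @escaping (Error?) -> Void) {
    guard let identifier = result.assetIdentifier,
          let asset = PHAsset.fetchAssets(withLocalIdentifiers: [identifier],
                                          options: nil).firstObject,
          let resource = PHAssetResource.assetResources(for: asset)
              .first(where: { $0.type == .video }) // the original video resource
    else { return completion(nil) }

    let options = PHAssetResourceRequestOptions()
    options.isNetworkAccessAllowed = true // the original may still be in iCloud
    PHAssetResourceManager.default().writeData(for: resource,
                                               toFile: destination,
                                               options: options,
                                               completionHandler: completion)
}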
PhotosPicker obtains the result information of the selected video
In an Apple Vision Pro project, a picker pops up that should filter only spatial videos. When selecting one of the spatial videos, the selection result returns empty. How can we obtain the video selected by the user and get the file path and URL? The code is as follows:

PhotosPicker(selection: $selectedItem, matching: .videos) {
    Text("Choose a spatial photo or video")
}

func loadTransferable(from imageSelection: PhotosPickerItem) -> Progress {
    return imageSelection.loadTransferable(type: URL.self) { result in
        DispatchQueue.main.async {
            // guard imageSelection == self.imageSelection else { return }
            print("Successfully loaded selection: \(result)")
            switch result {
            case .success(let url?):
                self.selectSpatialVideoURL = url
                print("Obtained video URL: \(url)")
            case .success(nil):
                break // Handle the success case with an empty value.
            case .failure(let error):
                print("Spatial video error: \(error)")
                // Handle the failure case with the provided error.
            }
        }
    }
}
Replies: 4 · Boosts: 0 · Views: 188 · Activity: May ’25
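A widely shared workaround sketch for this: loadTransferable(type: URL.self) often returns an empty result for videos, so define a Transferable wrapper that receives the video as a file and copies it to a location the app owns (PickedMovie is a hypothetical name):

import SwiftUI
import CoreTransferable
import UniformTypeIdentifiers

struct PickedMovie: Transferable {
    let url: URL

    static var transferRepresentation: some TransferRepresentation {
        FileRepresentation(contentType: .movie) { movie in
            SentTransferredFile(movie.url)
        } importing: { received in
            // Copy out of the temporary location before it disappears.
            let copy = URL.temporaryDirectory.appendingPathComponent(
                received.file.lastPathComponent)
            try? FileManager.default.removeItem(at: copy)
            try FileManager.default.copyItem(at: received.file, to: copy)
            return PickedMovie(url: copy)
        }
    }
}

// Usage: selectedItem?.loadTransferable(type: PickedMovie.self) { result in ... }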
Apple Music Artist - genres
The documentation for the Apple Music API indicates that the genreNames field for a given artist (see https://developer.apple.com/documentation/applemusicapi/artists/attributes-data.dictionary) is an array of strings. However, it appears that only ONE single genre is ever returned per artist, regardless of how many genres might be attached to that artist's albums. Am I missing something? Is there any artist for which multiple genres are returned, or is this a bug in the documentation?
Replies: 2 · Boosts: 0 · Views: 550 · Activity: Jul ’25
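An empirical check is straightforward against the REST endpoint the post links (a sketch; DEVELOPER_TOKEN is a placeholder for a real Apple Music developer token): decode genreNames as [String] and log how many entries actually come back.

import Foundation

struct ArtistsResponse: Decodable {
    struct Resource: Decodable {
        struct Attributes: Decodable {
            let name: String
            let genreNames: [String]
        }
        let attributes: Attributes
    }
    let data: [Resource]
}

func printArtistGenres(storefront: String, artistID: String) async throws {
    let url = URL(string:
        "https://api.music.apple.com/v1/catalog/\(storefront)/artists/\(artistID)")!
    var request = URLRequest(url: url)
    request.setValue("Bearer DEVELOPER_TOKEN", forHTTPHeaderField: "Authorization")
    let (data, _) = try await URLSession.shared.data(for: request)
    let response = try JSONDecoder().decode(ArtistsResponse.self, from: data)
    for artist in response.data {
        print("\(artist.attributes.name): \(artist.attributes.genreNames)")
    }
}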
iOS Audio Routing - Bluetooth Output + Built-in Microphone Input
Hello! I'm experiencing an issue with iOS's audio routing system when trying to use Bluetooth headphones for audio output while also recording environmental audio from the built-in microphone.

Desired behavior: Play audio through the Bluetooth headset (AirPods). Record unprocessed environmental audio from the iPhone's built-in microphone.

Actual behavior: When explicitly selecting the built-in microphone, iOS reports it's using it (in currentRoute.inputs). However, the actual audio data received is clearly still coming from the AirPods microphone. The audio is heavily processed with voice isolation/noise cancellation, removing environmental sounds.

Environment Details: Device: iPhone 12 Pro Max; iOS version: 18.4.1; Hardware: AirPods; Audio framework: AVAudioEngine (also tried AudioQueue).

Code Attempted. I've tried multiple approaches to force the correct routing:

func configureAudioSession() {
    let session = AVAudioSession.sharedInstance()

    // Configure to allow Bluetooth output but use built-in mic
    try? session.setCategory(.playAndRecord, options: [.allowBluetoothA2DP, .defaultToSpeaker])
    try? session.setActive(true)

    // Explicitly select built-in microphone
    if let inputs = session.availableInputs,
       let builtInMic = inputs.first(where: { $0.portType == .builtInMic }) {
        try? session.setPreferredInput(builtInMic)
        print("Selected input: \(builtInMic.portName)")
    }

    // Log the current route
    let route = session.currentRoute
    print("Current input: \(route.inputs.first?.portName ?? "None")")

    // Configure audio engine with native format
    let inputNode = audioEngine.inputNode
    let nativeFormat = inputNode.inputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: nativeFormat) { buffer, time in
        // Process audio buffer
        // Despite showing "Built-in Microphone" in route, audio appears to be
        // coming from AirPods with voice isolation applied - welp!
    }
    try? audioEngine.start()
}

I've also tried various combinations of: different audio session modes (.default, .measurement, .voiceChat); different option combinations (with/without .allowBluetooth, .allowBluetoothA2DP); setting session.setPreferredInput() both before and after activation.

Diagnostic Observations. When AirPods are connected: AVAudioSession.currentRoute.inputs correctly shows "Built-in Microphone" after setPreferredInput(). The actual audio data received shows clear signs of AirPods' voice isolation processing. Background/environmental sounds are actively filtered out. When recording a test audio played near the phone (not through the app), the recording is nearly silent; only headset voice goes through.

Questions: Is there a workaround to force iOS to actually use the built-in microphone while maintaining Bluetooth output? Are there any lower-level configurations that might resolve this issue? Any insights, workarounds, or suggestions would be greatly appreciated. This is blocking a critical feature in my application that requires environmental audio recording while providing audio feedback through headphones 😅
Replies: 0 · Boosts: 0 · Views: 222 · Activity: May ’25
Visual isTranslatable: NO; reason: observation failure: noObservations, when trying to play custom compositor video with AVPlayer
I am trying to achieve an animated gradient effect that changes values over time based on the current seconds. I am also using AVPlayer and AVMutableVideoComposition along with a custom instruction and class to generate the effect. I didn't want to load any video file, but rather generate a custom video with my own set of instructions. I used Metal compute shaders to generate the effects and make the video 20 seconds long. However, when I run the code, I get a frozen player with the gradient applied, and when I try to play the video, I get this warning in the console: Visual isTranslatable: NO; reason: observation failure: noObservations (screenshot omitted). My entire code:

import AVFoundation
import Metal

class GradientVideoCompositorTest: NSObject, AVVideoCompositing {
    var sourcePixelBufferAttributes: [String: Any]? = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ]
    var requiredPixelBufferAttributesForRenderContext: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ]

    private var renderContext: AVVideoCompositionRenderContext?
    private var metalDevice: MTLDevice!
    private var metalCommandQueue: MTLCommandQueue!
    private var metalLibrary: MTLLibrary!
    private var metalPipeline: MTLComputePipelineState!

    override init() {
        super.init()
        setupMetal()
    }

    func setupMetal() {
        guard let device = MTLCreateSystemDefaultDevice(),
              let queue = device.makeCommandQueue(),
              let library = try? device.makeDefaultLibrary(),
              let function = library.makeFunction(name: "gradientShader") else {
            fatalError("Metal setup failed")
        }
        self.metalDevice = device
        self.metalCommandQueue = queue
        self.metalLibrary = library
        self.metalPipeline = try? device.makeComputePipelineState(function: function)
    }

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
        renderContext = newRenderContext
    }

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let outputPixelBuffer = renderContext?.newPixelBuffer(),
              let metalTexture = createMetalTexture(from: outputPixelBuffer) else {
            request.finish(with: NSError(domain: "com.example.gradient", code: -1, userInfo: nil))
            return
        }
        var time = Float(request.compositionTime.seconds)
        renderGradient(to: metalTexture, time: time)
        request.finish(withComposedVideoFrame: outputPixelBuffer)
    }

    private func createMetalTexture(from pixelBuffer: CVPixelBuffer) -> MTLTexture? {
        var texture: MTLTexture?
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(
            pixelFormat: .bgra8Unorm,
            width: width,
            height: height,
            mipmapped: false
        )
        textureDescriptor.usage = [.shaderWrite, .shaderRead]
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        if let textureCache = createTextureCache(),
           let cvTexture = createCVMetalTexture(from: pixelBuffer, cache: textureCache) {
            texture = CVMetalTextureGetTexture(cvTexture)
        }
        CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
        return texture
    }

    private func renderGradient(to texture: MTLTexture, time: Float) {
        guard let commandBuffer = metalCommandQueue.makeCommandBuffer(),
              let commandEncoder = commandBuffer.makeComputeCommandEncoder() else { return }
        commandEncoder.setComputePipelineState(metalPipeline)
        commandEncoder.setTexture(texture, index: 0)
        var mutableTime = time
        commandEncoder.setBytes(&mutableTime, length: MemoryLayout<Float>.size, index: 0)
        let threadsPerGroup = MTLSize(width: 16, height: 16, depth: 1)
        let threadGroups = MTLSize(
            width: (texture.width + 15) / 16,
            height: (texture.height + 15) / 16,
            depth: 1
        )
        commandEncoder.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadsPerGroup)
        commandEncoder.endEncoding()
        commandBuffer.commit()
    }

    private func createTextureCache() -> CVMetalTextureCache? {
        var cache: CVMetalTextureCache?
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, metalDevice, nil, &cache)
        return cache
    }

    private func createCVMetalTexture(from pixelBuffer: CVPixelBuffer, cache: CVMetalTextureCache) -> CVMetalTexture? {
        var cvTexture: CVMetalTexture?
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        CVMetalTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault,
            cache,
            pixelBuffer,
            nil,
            .bgra8Unorm,
            width,
            height,
            0,
            &cvTexture
        )
        return cvTexture
    }
}

class GradientCompositionInstructionTest: NSObject, AVVideoCompositionInstructionProtocol {
    var timeRange: CMTimeRange
    var enablePostProcessing: Bool = true
    var containsTweening: Bool = true
    var requiredSourceTrackIDs: [NSValue]? = nil
    var passthroughTrackID: CMPersistentTrackID = kCMPersistentTrackID_Invalid

    init(timeRange: CMTimeRange) {
        self.timeRange = timeRange
    }
}

func createGradientVideoComposition(duration: CMTime, size: CGSize) -> AVMutableVideoComposition {
    let composition = AVMutableComposition()
    let instruction = GradientCompositionInstructionTest(timeRange: CMTimeRange(start: .zero, duration: duration))
    let videoComposition = AVMutableVideoComposition()
    videoComposition.customVideoCompositorClass = GradientVideoCompositorTest.self
    videoComposition.renderSize = size
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30) // 30 FPS
    videoComposition.instructions = [instruction]
    return videoComposition
}

#include <metal_stdlib>
using namespace metal;

kernel void gradientShader(texture2d<float, access::write> output [[texture(0)]],
                           constant float &time [[buffer(0)]],
                           uint2 id [[thread_position_in_grid]]) {
    float2 uv = float2(id) / float2(output.get_width(), output.get_height());
    // Animated colors based on time
    float3 color1 = float3(sin(time) * 0.8 + 0.1, 0.6, 1.0);
    float3 color2 = float3(0.12, 0.99, cos(time) * 0.9 + 0.3);
    // Linear interpolation for gradient
    float3 gradientColor = mix(color1, color2, uv.y);
    output.write(float4(gradientColor, 1.0), id);
}
Replies: 1 · Boosts: 0 · Views: 383 · Activity: Apr ’25
How to inform Logic Pro that AU view does not have a fixed aspect ratio?
I have an AUv3 that passes all validation and can be loaded into Logic Pro without issue. The UI for the plug-in can be any aspect ratio, but Logic insists on presenting it in a view with a fixed aspect ratio; that is, when resizing, both the height and width are resized together. I have never managed to work out what I need to specify to Logic to allow the user to resize the width or height independently. Can anyone tell me what I need to specify in the AU code to inform Logic that the view can be resized from any side of the window/panel?
Replies: 0 · Boosts: 0 · Views: 230 · Activity: Apr ’25
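Not a confirmed answer, but the documented hook for negotiating plug-in view sizes with a host is AUAudioUnit.supportedViewConfigurations. A sketch that accepts every configuration the host offers, which is the standard way to declare that the UI has no fixed layout requirement (whether Logic then unlocks the aspect ratio is ultimately host behavior):

import AudioToolbox
import CoreAudioKit

class MyAudioUnit: AUAudioUnit {
    override func supportedViewConfigurations(
        _ availableViewConfigurations: [AUAudioUnitViewConfiguration]) -> IndexSet {
        // Accept all offered configurations: any width/height combination.
        IndexSet(integersIn: 0..<availableViewConfigurations.count)
    }
}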