I've been wondering if there is a way to modify or even disable tones for indicating channel states. The behaviour regarding tones seems like a black box with little documentation.
During migration to Apple's PT Framework we've noticed that there are a few scenarios where a tone is played that conflicts with certain certification requirements. For example, moving from one channel to another produces a tone that would fail a test case. I understand the reasoning fully, as it marks that the channel is ready to transmit or receive, but this doesn't mirror the behaviour of TETRA, which is what we need in this case.
I'm also wondering if there would be any way to directly communicate feedback regarding PT Framework?
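For context, this is roughly the channel hand-off that produces the tone (a sketch; the manager, UUIDs, and descriptor are placeholders for our integration):

import Foundation
import PushToTalk

// Leaving the current channel and joining the next one; the framework plays
// its own join/"ready" tone during this sequence, and there doesn't appear to
// be a public API to silence or replace it.
func switchChannel(using manager: PTChannelManager,
                   from oldChannelUUID: UUID,
                   to newChannelUUID: UUID,
                   descriptor: PTChannelDescriptor) {
    manager.leaveChannel(channelUUID: oldChannelUUID)
    manager.requestJoinChannel(channelUUID: newChannelUUID, descriptor: descriptor)
}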
Hi all,
I'm trying to diagnose and resolve an issue with stuttering video playback using the standard AVPlayer. The video in question is a 4K, 39-second file in *.mov format, being played on an iOS device. It's served via a local HTTP server that proxies requests to a backend to fetch and process the content. The project uses end-to-end encrypted storage, which necessitates the proxy for handling data processing. Offline playback is smooth, but streaming playback is not. The same video streams smoothly on other platforms using the same connection, so network limitations are not a factor.
On iOS, playback is consistently choppy, with pauses every 1-3 seconds. The video does not appear to buffer adequately for smooth playback.
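For reference, the buffering-related settings I've been experimenting with look roughly like this (a sketch; the URL stands in for the local proxy endpoint):

import AVFoundation

let url = URL(string: "http://127.0.0.1:8080/video.mov")!   // placeholder proxy URL
let item = AVPlayerItem(url: url)

// Ask the player to keep a larger forward buffer before and during playback.
item.preferredForwardBufferDuration = 10

let player = AVPlayer(playerItem: item)
// Leaving automatic stall management on (the default); setting this to false
// forces immediate playback at the cost of more stutter.
player.automaticallyWaitsToMinimizeStalling = true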
One particularly curious aspect is the seemingly random pattern of Content-Range requests made by the AVPlayer when streaming the video. Below is an example of the range requests:
Topic: Media Technologies
SubTopic: Video
I use the iTunes Library framework in one of my apps. Starting with macOS Sequoia 15.1, I can't create the ITLibrary object anymore; it fails with the following error:
Connection to amplibraryd was interrupted. clientName:iTunesLibrary(ITLibraryLoader)
Error connecting to the server via proxy object Error Domain=NSCocoaErrorDomain Code=4097 "connection to service named com.apple.amp.library.framework" UserInfo={NSDebugDescription=connection to service named com.apple.amp.library.framework}
configure failed: Error Domain=NSCocoaErrorDomain Code=4097 "connection to service named com.apple.amp.library.framework" UserInfo={NSDebugDescription=connection to service named com.apple.amp.library.framework}
I created a new sandboxed macOS app, added the Music folder read permission, and it reproduces the error:
import SwiftUI
import iTunesLibrary

@main
struct ITLibraryLoaderApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

struct ContentView: View {
    var body: some View {
        Button("Load ITLibrary") {
            loadLibrary()
        }
    }

    func loadLibrary() {
        do {
            let _ = try ITLibrary(apiVersion: "1.0", options: .none)
        } catch {
            print(error)
        }
    }
}
I restarted my developer machine and the Music app, with no luck.
I take it that MusicKit JS is built with TypeScript, based on the attributions in the script: https://js-cdn.music.apple.com/musickit/v3/musickit.js
In the script it points to https://js-cdn.music.apple.com/musickit/v1/acknowledgements.txt – I assume this should be the v3 URL for the v3 version? It returns the same content nonetheless.
This contains attributions for TypeScript.
Currently there's a third-party effort with DefinitelyTyped, which publishes the NPM package @types/musickit-js. The latest supported version available is v1.
However, there is no version compatible with v3.
This makes it hard to use MusicKit JS v3 in a TypeScript project.
Please publish the types, ideally on the CDN along with the musickit.js file. Also consider publishing an officially Apple-supported DefinitelyTyped package, or helping to maintain the existing @types/musickit-js, to make consuming this even easier.
I’ve tried both AVCaptureVideoDataOutputSampleBufferDelegate (captureOutput) and AVCaptureDataOutputSynchronizerDelegate (dataOutputSynchronizer), but the number of depth frames and saved timestamps is significantly lower than the number of frames in the .mp4 file written by AVAssetWriter.
In my code, I save:
Timestamps for each frame to a metadata file
Depth frames to a binary file
Video to an .mp4 file
If I record a 4-second video at 30fps, the .mp4 file correctly plays for 4 seconds, but the number of stored timestamps and depth frames is much lower—around 70 frames instead of the expected 120.
Does anyone know why this mismatch happens?
func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
    // Read all outputs
    guard let syncedDepthData: AVCaptureSynchronizedDepthData =
            synchronizedDataCollection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
          let syncedVideoData: AVCaptureSynchronizedSampleBufferData =
            synchronizedDataCollection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData else {
        // only work on synced pairs
        return
    }
    if syncedDepthData.depthDataWasDropped || syncedVideoData.sampleBufferWasDropped {
        return
    }
    let depthData = syncedDepthData.depthData
    let depthPixelBuffer = depthData.depthDataMap
    let sampleBuffer = syncedVideoData.sampleBuffer
    guard let videoPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
          let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) else {
        return
    }
    addToPreviewStream?(CIImage(cvPixelBuffer: videoPixelBuffer))
    if !canWrite() {
        return
    }
    // Extract the presentation timestamp (PTS) from the sample buffer
    let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    // sessionAtSourceTime is the first buffer we will write to the file
    if self.sessionAtSourceTime == nil {
        // Make sure we don't start recording until the buffer reaches the correct time
        // (the buffer is always behind; this fixes the difference in time)
        guard sampleBuffer.presentationTimeStamp >= self.recordFromTime! else { return }
        self.sessionAtSourceTime = sampleBuffer.presentationTimeStamp
        self.videoWriter!.startSession(atSourceTime: sampleBuffer.presentationTimeStamp)
    }
    if self.videoWriterInput!.isReadyForMoreMediaData {
        self.videoWriterInput!.append(sampleBuffer)
        self.videoTimestamps.append(
            Timestamp(
                frame: videoTimestamps.count,
                value: timestamp.value,
                timescale: timestamp.timescale
            )
        )
        let ddm = depthData.depthDataMap
        depthCapture.addDepthData(pixelBuffer: ddm, timestamp: timestamp)
    }
}
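One thing I still plan to rule out is the output configuration below (a sketch of the settings I'd double-check; whether the hardware can actually sustain 30 fps depth is a separate question):

// videoDataOutput and depthDataOutput are the same outputs used by the synchronizer above.
videoDataOutput.alwaysDiscardsLateVideoFrames = false
depthDataOutput.alwaysDiscardsLateDepthData = false
depthDataOutput.isFilteringEnabled = false   // filtering adds per-frame processing cost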
Hello,
Starting in iOS 17, our application began having issues publishing to our video session. More specifically, video capture appears broken in some, but not all, sessions. What's troubling is that it fails consistently every fourth session.
It also fails silently, without reporting any problems to the app. We only notice that there are no frames being rendered or sent to the remote devices.
Here's what shows up in the console:
<<<< FigCaptureSourceRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSourceRemote.m:235) - (err=0)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:253) - (err=-16453)
Anyone seeing this? Any idea what could be the cause? Our sessions work perfectly on iOS16 and below.
Thanks
Is it possible to stream video from a UVC (USB Video Class) camera on an iPhone 15? If so, are there any specific hardware or software requirements to enable this functionality?
I am working on a project for macOS where I am taking an AVCaptureSession's CVPixelBuffer and I need to convert it into a MTLTexture for rendering. On macOS the pixel format is 2vuy, and there does not seem to be a clear-cut conversion when creating a Metal texture from it. I have been able to convert it to a texture, but the color space seems to be off, as it renders distorted colors with a double image.
I believe 2vuy is a single-plane format and I have tried to account for that, but I am unaware of what is off.
I have attached the CVPixelBuffer and the distorted MTLTexture along with a laundry list of errors.
On iOS my conversions are fine, it is only the macOS 2vuy pixel format that seems to have issues.
My code for the conversion is also attached.
If there are any suggestions or guidance on how to properly convert a 2vuy CVPixelBuffer to a MTLTexture I would greatly appreciate it.
Many Thanks
Conversion_Logs.txt
ConversionCode.swift
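One fallback I'm considering (a sketch, assuming device and pixelBuffer are already set up) is to let Core Image perform the 2vuy-to-BGRA conversion instead of wrapping the buffer directly as a Metal texture:

import CoreImage
import CoreVideo
import Metal

let ciContext = CIContext(mtlDevice: device)
let image = CIImage(cvPixelBuffer: pixelBuffer)

// Destination texture in a plain RGB format.
let descriptor = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .bgra8Unorm,
    width: CVPixelBufferGetWidth(pixelBuffer),
    height: CVPixelBufferGetHeight(pixelBuffer),
    mipmapped: false)
descriptor.usage = [.shaderRead, .shaderWrite, .renderTarget]
let texture = device.makeTexture(descriptor: descriptor)!

// Core Image handles the interleaved 4:2:2 chroma layout and color matrix here.
ciContext.render(image,
                 to: texture,
                 commandBuffer: nil,
                 bounds: image.extent,
                 colorSpace: CGColorSpaceCreateDeviceRGB())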
Hi everyone,
I’m running into an issue with PHPickerFilter when using PHPickerViewController.
When I configure the picker with a .videos and .livePhotos filter, it seems to work correctly in the Photos tab. However, when I switch to the Collections tab, the filter doesn’t always apply — users can still see and select static image assets in certain collections (e.g. from one of the People & Pets sections).
Here’s a simplified snippet of my setup:
var configuration = PHPickerConfiguration(photoLibrary: .shared())
configuration.selectionLimit = 1
var filters = [PHPickerFilter]()
filters.append(.videos)
filters.append(.livePhotos)
configuration.filter = PHPickerFilter.any(of: filters)
configuration.preferredAssetRepresentationMode = .current
let picker = PHPickerViewController(configuration: configuration)
picker.delegate = self
present(picker, animated: true)
Expected behavior:
The picker should consistently respect the filter across both Photos and Collections tabs, only showing assets that match the filter.
Actual behavior:
The filter seems to apply correctly in the Photos tab, but in the Collections tab, other asset types are still visible/selectable.
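As a stopgap, I'm validating the picked item on the delegate side (a sketch; it obviously doesn't fix what the picker UI shows):

import PhotosUI
import UniformTypeIdentifiers

func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
    picker.dismiss(animated: true)
    guard let provider = results.first?.itemProvider else { return }

    if provider.hasItemConformingToTypeIdentifier(UTType.movie.identifier)
        || provider.canLoadObject(ofClass: PHLivePhoto.self) {
        // Proceed with loading the video or Live Photo.
    } else {
        // A static image slipped past the filter; ignore it or show an alert.
    }
}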
Has anyone else encountered this behavior? Is this expected or a known issue, or am I missing something in the configuration?
Thanks in advance!
Topic: Media Technologies
SubTopic: Photos & Camera
Tags: Files and Storage, Media Library, Photos and Imaging, PhotoKit
Hi, I’m working on a photo backup app.
I track the PHAsset localIdentifier to determine which photos have been backed up and which haven’t.
Recently, I’ve noticed that two users seem to have experienced localIdentifier changes after transferring data to a new iPhone using Quick Start.
Additionally, others on StackOverflow have mentioned that the localIdentifier sometimes changes after updating the iOS version.
https://stackoverflow.com/questions/40094728/phobject-localidentifier-reliability
I’d like to confirm the reliability of the localIdentifier after an iOS version upgrade or device transfer.
Can I continue using these locally stored localIdentifiers?
Or is there another recommended approach, such as using PHCloudIdentifier?
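For reference, this is the PHCloudIdentifier round-trip I'm considering (a sketch using the iOS 16+ mapping APIs; error handling is reduced to compactMap):

import Photos

// Store the cloud identifier string instead of (or alongside) the localIdentifier.
func cloudIdentifierStrings(for localIdentifiers: [String]) -> [String] {
    let mappings = PHPhotoLibrary.shared().cloudIdentifierMappings(forLocalIdentifiers: localIdentifiers)
    return mappings.values.compactMap { try? $0.get().stringValue }
}

// After a migration, resolve the stored cloud identifiers back to whatever the
// current local identifiers are on this device.
func localIdentifiers(forStored cloudStrings: [String]) -> [String] {
    let cloudIDs = cloudStrings.map { PHCloudIdentifier(stringValue: $0) }
    let mappings = PHPhotoLibrary.shared().localIdentifierMappings(for: cloudIDs)
    return mappings.values.compactMap { try? $0.get() }
}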
Hello!
Just curious if AVAssetWriter append() randomly fails for anyone but me? Seems to happen when live streaming (encoding with VTCompressionSession) and recording (with AVAssetWriter) at the same time.
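For anyone debugging the same thing, these are the checks I've added around append() (a sketch; writer and writerInput stand in for my existing AVAssetWriter objects):

import AVFoundation

func appendChecked(_ sampleBuffer: CMSampleBuffer,
                   to writerInput: AVAssetWriterInput,
                   of writer: AVAssetWriter) {
    guard writerInput.isReadyForMoreMediaData else {
        // Appending while this is false is a common cause of "random" failures
        // when another pipeline (e.g. a VTCompressionSession) is loading the CPU.
        return
    }
    if !writerInput.append(sampleBuffer) {
        // append() returning false usually means the writer itself has failed.
        print("append failed, status: \(writer.status.rawValue), error: \(String(describing: writer.error))")
    }
}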
Topic: Media Technologies
SubTopic: Streaming
Hi All,
I am working on a DJ playout app (macOS). The app has a few AVAudioPlayerNodes combined with the ApplicationMusicPlayer from MusicKit. I can route the output of the AVAudioPlayerNodes to a hardware device so that the audio files are directed to their own dedicated output on my Mac. The ApplicationMusicPlayer, however, follows the default output, and this is pretty annoying.
Has anyone found a solution to chain the ApplicationMusicPlayer and set it to an output device?
Thanks
Pancras
Hi, when using ApplicationMusicPlayer from MusicKit my app automatically gets the media controls on the lock screen: Play/ Pause, Skip Buttons, Playback Position etc.
I would like to customize these. I've tried a bunch of things, e.g. using MPRemoteCommandCenter, but so far I haven't had any success.
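For reference, the standard customization path I tried looks roughly like this (a sketch); with ApplicationMusicPlayer these handlers don't appear to take effect, presumably because the system owns the Now Playing session:

import MediaPlayer

let center = MPRemoteCommandCenter.shared()
center.skipForwardCommand.isEnabled = true
center.skipForwardCommand.preferredIntervals = [15]
_ = center.skipForwardCommand.addTarget { _ in
    // Custom skip logic would go here.
    return .success
}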
Does anyone know how I can customize the media controls of ApplicationMusicPlayer?
Thank you.
Hello everyone,
I'm implementing the new AVInputPickerInteraction API on iOS 26 to allow users to select their microphone from a custom settings menu before recording.
The implementation seems correct, but I'm encountering a strange issue where the input selection immediately reverts to the previous device.
The Situation:
The picker is presented correctly via a manual call to .present(). I can see all available inputs (e.g., "iPhone Microphone" and "AirPods").
The current input is "iPhone Microphone".
I tap on "AirPods".
The UI updates to show "AirPods" as selected for a fraction of a second, then immediately jumps back to "iPhone Microphone".
The same thing happens in reverse.
It seems like the system is automatically reverting the audio route change requested by the picker.
My Implementation:
My setup follows the standard pattern discussed in the WWDC sessions.
Setup Code:
This setup is performed once before the user can trigger the picker.
@available(iOS 26.0, *)
var inputPickerInteraction: AVInputPickerInteraction?

// Note: The AVAudioSession is configured to .playAndRecord
// and set to active elsewhere in the code before this setup is called.
if #available(iOS 26.0, *) {
    // Setup the picker
    let picker = AVInputPickerInteraction()
    self.inputPickerInteraction = picker
    self.view.addInteraction(picker) // Added to establish context
}
Presentation Code:
When a user selects "Change Input" from my custom settings menu, I call .present() on the main thread.
// In a delegate method from a custom menu
if #available(iOS 26.0, *) {
    DispatchQueue.main.async {
        self.inputPickerInteraction?.present(animated: true)
    }
}
What I've already checked:
The AVAudioSession is active and its category is .playAndRecord.
The inputPickerInteraction object is not nil.
The .present() method is being called on the main thread.
The picker is added to a view using view.addInteraction() in the setup phase.
I've reviewed my code to ensure there is no other logic that could be manually resetting the AVAudioSession's preferred input.
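To narrow it down further, I'm logging every route change while reproducing the issue, to see what (the system or my own code) switches the input back (a diagnostic sketch):

import AVFoundation

// Keep a reference to the observer token for the lifetime of the test.
let routeChangeObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { notification in
    let reasonValue = notification.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt
    let reason = reasonValue.flatMap { AVAudioSession.RouteChangeReason(rawValue: $0) }
    let input = AVAudioSession.sharedInstance().currentRoute.inputs.first?.portName ?? "none"
    print("Route changed, reason: \(String(describing: reason)), input now: \(input)")
}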
Has anyone else experienced this behavior? I suspect this might be a bug in the new API, but I want to make sure I'm not missing a crucial step in managing the AVAudioSession state.
Any insights or potential workarounds would be greatly appreciated.
Thank you.
Topic: Media Technologies
SubTopic: Audio
Hi,
I just generated an HDR10 MVHEVC file; the mediainfo output is below:
Color range : Limited
Color primaries : BT.2020
Transfer characteristics : PQ
Matrix coefficients : BT.2020 non-constant
Codec configuration box : hvcC+lhvC
Then I generated the segment files with the command below:
mediafilesegmenter --iso-fragmented -t 4 -f av_1 av_new_1.mov
Then I uploaded the segment files and prog_index.m3u8 to a web server.
I found that the HLS stream cannot be played in Safari.
The URL is http://ip/vod/prog_index.m3u8
I also checked that if I remove the "Transfer characteristics: PQ" tag when generating the MVHEVC file, run the same mediafilesegmenter command, and upload the files to the web server, the new version of the HLS stream plays in Safari.
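One thing I still want to try (a sketch; the bandwidth, resolution, and codec values are placeholders that must match the actual encode) is pointing Safari at a multivariant playlist that declares the dynamic range explicitly, instead of at prog_index.m3u8 directly:

#EXTM3U
#EXT-X-VERSION:7
# VIDEO-RANGE=PQ tells the client this variant is HDR10/PQ content.
#EXT-X-STREAM-INF:BANDWIDTH=20000000,RESOLUTION=3840x2160,CODECS="hvc1.2.4.L153.B0",VIDEO-RANGE=PQ
prog_index.m3u8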
Is there any way to play HLS PQ video in Safari? Thanks.
Hello there,
I need to move through video loaded in an AVPlayer one frame at a time back or forth. For that I tried to use AVPlayerItem's method step(byCount:) and it works just fine.
However, I need to know when the stepping has happened, and as far as I've observed it is not immediate. If I check currentTime() just after calling the method it's unchanged, and if I check slightly later (depending on the video itself) it shows the correct "jumped" time.
To achieve my goal I tried subclassing AVPlayerItem and implement my own async method utilizing NotificationCenter and the timeJumpedNotification assuming it would deliver it as the time actually jumps but it's not the case.
Here is my "stripped" and simplified version of the custom Player Item:
import AVFoundation

final class PlayerItem: AVPlayerItem {
    private var jumpCompletion: ((CMTime) -> ())?

    override init(asset: AVAsset, automaticallyLoadedAssetKeys: [String]?) {
        super.init(asset: asset, automaticallyLoadedAssetKeys: automaticallyLoadedAssetKeys)
        NotificationCenter.default.addObserver(self, selector: #selector(timeDidChange(_:)), name: AVPlayerItem.timeJumpedNotification, object: self)
    }

    deinit {
        NotificationCenter.default.removeObserver(self, name: AVPlayerItem.timeJumpedNotification, object: self)
        jumpCompletion = nil
    }

    @discardableResult func step(by count: Int) async -> CMTime {
        await withCheckedContinuation { continuation in
            step(by: count) { time in
                continuation.resume(returning: time)
            }
        }
    }

    func step(by count: Int, completion: @escaping ((CMTime) -> ())) {
        guard jumpCompletion == nil else {
            completion(currentTime())
            return
        }
        jumpCompletion = completion
        step(byCount: count)
    }

    @objc private func timeDidChange(_ notification: Notification) {
        switch notification.name {
        case AVPlayerItem.timeJumpedNotification where notification.object as? AVPlayerItem == self:
            jumpCompletion?(currentTime())
            jumpCompletion = nil
        default: return
        }
    }
}
In short the notification never gets called thus the above is not working.
I guess the key is what the docs say about timeJumpedNotification:
"A notification the system posts when a player item’s time changes discontinuously."
So step(byCount:) is not considered a discontinuous operation and doesn't trigger it.
I'd be really grateful if somebody could help, as I don't want to use seek(to:toleranceBefore:toleranceAfter:), mainly because it's not frame-accurate: the video might have VFR, which sometimes causes repeated frames or skips the next/previous one.
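One workaround I'm considering (a sketch, not verified) is to step and then poll currentTime() until it actually changes, instead of relying on the notification:

import AVFoundation

extension AVPlayerItem {
    @discardableResult
    func stepAndWait(by count: Int, pollInterval: UInt64 = 5_000_000) async -> CMTime {
        let before = currentTime()
        step(byCount: count)
        // Poll every few milliseconds, bounded so we can't spin forever
        // (e.g. when stepping past the last frame).
        var attempts = 0
        while currentTime() == before && attempts < 200 {
            try? await Task.sleep(nanoseconds: pollInterval)
            attempts += 1
        }
        return currentTime()
    }
}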
Thanks a lot
I would like to look at AVMetricEvent data during video playback, so I have added this code to a test video player app:
let playerItem: AVPlayerItem = ...
let allMetrics = playerItem.allMetrics()

Task.init {
    print("metrics task started")
    do {
        for try await metricEvent in allMetrics {
            print("metric event: \(metricEvent.description)")
        }
    } catch {
        print("unexpected metric iterator error \(error)")
    }
}
Running this in Simulator on iPhone 16 Pro (18.0) does not result in any "metric event" diagnostic messages being printed when the video associated with this AVPlayerItem is playing. Only the "metric task started" diagnostic message is seen.
What am I doing wrong that prevents metric data being received?
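I also tried narrowing the subscription to a single event type (a sketch below; I'm assuming metrics(forType:) and AVMetricPlayerItemPlaybackSummaryEvent here, and I'm not sure whether these events are even published for non-HLS, file-based assets, which might be the explanation):

let item: AVPlayerItem = playerItem   // the same item as above
Task {
    do {
        for try await event in item.metrics(forType: AVMetricPlayerItemPlaybackSummaryEvent.self) {
            print("summary event: \(event)")
        }
    } catch {
        print("metric iterator error \(error)")
    }
}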
I would appreciate coding help, or an explanation of what to use in Swift, for an app that can capture LiDAR scan and RGB data from photos, generate a 3D mesh, and create an .OBJ, .MTL, and .JPEG file set for further manipulation of the 3D model.
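To make the question concrete, here is roughly the capture side I have in mind (a sketch; converting the ARMeshGeometry to an MDLAsset for .obj/.mtl export is a separate step not shown):

import ARKit

// Enable LiDAR scene reconstruction with RGB environment texturing.
func makeScanningConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) else {
        fatalError("This device has no LiDAR scene reconstruction support")
    }
    configuration.sceneReconstruction = .mesh
    configuration.environmentTexturing = .automatic
    return configuration
}

// In the ARSessionDelegate, the reconstructed geometry arrives as ARMeshAnchor objects.
func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    let meshAnchors = anchors.compactMap { $0 as? ARMeshAnchor }
    // meshAnchors[i].geometry holds the vertices, normals, and faces of the scan.
    _ = meshAnchors
}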
Topic: Media Technologies
SubTopic: Photos & Camera
Tags: 3D Graphics, Swift Playground, Object Capture
Since iOS/iPadOS/tvOS 18, we have run into a new problem with streaming of FairPlay-encrypted video. On the affected streams the audio plays perfectly, but the video freezes for periods of a few seconds: it will freeze for 5 s or so, then be OK for a few seconds, then freeze again.
It is entirely reproducible when all the following are true:
the video streams were produced by a particular encoder (or particular settings, not sure on that)
the video must be encrypted
device is running some variety of iOS 18 (or iPadOS or tvOS)
the device is an affected device
Known devices are
AppleTV 4K 2nd Gen
iPad Pro 11" 1st and 2nd gen
Devices known not to show the problem are
all other AppleTV models
iPhone 13 Pro and 16 Pro
If we stream the same content, but unencrypted, then it plays perfectly, or if you play the encrypted stream on, say, tvOS 17.
When the freezing occurs, we can see repeating blocks of lines like the following in the console logs:
default 18:08:46.578582+0000 videocodecd AppleAVD: AppleAVDDecodeFrameResponse(): Frame# 5771 DecodeFrame failed with error 0x0000013c
default 18:08:46.578756+0000 videocodecd AppleAVD: AppleAVDDecodeFrameInternal(): failed - error: 316
default 18:08:46.579018+0000 videocodecd AppleAVD: AppleAVDDecodeFrameInternal(): avdDec - Frame# 5771, DecodeFrame failed with error: 0x13c
default 18:08:46.579169+0000 videocodecd AppleAVD: AppleAVDDisplayCallback(): Asking fig to drop frame # 5771 with err -12909 - internalStatus: 315
There are also more relevant-looking lines:
default 18:17:39.122019+0000 kernel AppleAVD: avdOutbox0ISR(): FRM DONE (cid: 2.0, fno: 10970, codecT: 1) FAILED!!
default 18:17:39.122155+0000 videocodecd AppleAVD: AppleAVDDisplayCallback(): Asking fig to drop frame # 10970 with err -12909 - internalStatus: 315
default 18:17:39.122221+0000 kernel AppleAVD: ## client[ 2.0] @ frm 10970, errStatus: 0x10
default 18:17:39.122338+0000 kernel AppleAVD: decodeFailIdentify(): VP error bit 4 has EP3B0 error
default 18:17:39.122401+0000 kernel AppleAVD: processHWResponse(): clientID 2.0 frameNumber 10970 error 315, offsetIndex 10, isHwErr 1
So it would seem to me that one of the following must be happening:
When these particular HLS files are encrypted, the data is corrupted in some way that played back on iOS 17 and earlier but won't on 18+, or
There's a regression in iOS 18 that means that this particular format of video data is corrupted on decryption
If anyone has seen similar behaviour, or has any ideas how to identify which of the two scenarios it is, please say.
Unfortunately we don't have control of the servers so can't make changes there unless we can identify they are definitely the cause of the problem.
Thanks, Simon.
Topic: Media Technologies
SubTopic: Video
Is there a recommended way on macOS 26 Tahoe to take a CoreAudio AudioObjectID and use it to lookup the underlying USB LocationID?
I previously used AudioObjectID to query the corresponding DeviceUID with kAudioDevicePropertyDeviceUID. Then I queried for the IOService matching kIOAudioEngineClassName with property kIOAudioEngineGlobalUniqueIDKey matching DeviceUID, and I loaded kUSBDevicePropertyLocationID from the result.
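For reference, the first half of that lookup (AudioObjectID to DeviceUID) still works; a sketch of what I'm doing:

import CoreAudio

func deviceUID(for deviceID: AudioObjectID) -> String? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyDeviceUID,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var uid: CFString?
    var size = UInt32(MemoryLayout<CFString?>.size)
    let status = withUnsafeMutablePointer(to: &uid) { ptr in
        AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, ptr)
    }
    return status == noErr ? uid as String? : nil
}

It's the second half, DeviceUID to the IORegistry entry to kUSBDevicePropertyLocationID, that no longer resolves on macOS 26.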
This fails on macOS 26, because the IO Registry for the device has an entry for usbaudiod rather than AppleUSBAudioEngine, and usbaudiod does not include a kIOAudioEngineGlobalUniqueIDKey property (or any other property to map it to a CoreAudio DeviceUID).
My use-case here is a piece of audio recording software that allows configuring a set of supported audio devices via USB HID prior to recording. I present the user with a list of CoreAudio devices to use, but without a way to lookup the underlying USB LocationID, I cannot guarantee that the configured device matches the selected device (e.g. if the user plugged in two identical microphones).