When using the [AVAudioSession setCategory:withOptions:error:] API, the call hangs for a long time and eventually returns an error. This issue occurs on iOS 16 and did not appear in earlier versions.
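For reference, the call we make is essentially the Swift equivalent of the Objective-C selector shown in the trace below; the specific category and options here are illustrative, not necessarily the ones in our code path:

    import AVFoundation

    func configureAudioSession() {
        do {
            // This is the call that hangs for a long time and eventually returns an error on iOS 16.
            try AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: [.defaultToSpeaker])
        } catch {
            print("setCategory failed: \(error)")
        }
    }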
Thread 135:
0 libsystem_kernel.dylib 0x00000002478e3cd4 _mach_msg2_trap :8 (in libsystem_kernel.dylib)
1 libsystem_kernel.dylib 0x00000002478e7214 _mach_msg_overwrite :428 (in libsystem_kernel.dylib)
2 libsystem_kernel.dylib 0x00000002478e705c _mach_msg :24 (in libsystem_kernel.dylib)
3 libdispatch.dylib 0x00000001d63ffe84 __dispatch_mach_send_and_wait_for_reply :548 (in libdispatch.dylib)
4 libdispatch.dylib 0x00000001d6400224 _dispatch_mach_send_with_result_and_wait_for_reply :60 (in libdispatch.dylib)
5 libxpc.dylib 0x00000001b2114e04 _xpc_connection_send_message_with_reply_sync :256 (in libxpc.dylib)
6 Foundation 0x000000019b6249f0 ___NSXPCCONNECTION_IS_WAITING_FOR_A_SYNCHRONOUS_REPLY__ :16 (in Foundation)
7 Foundation 0x000000019c06d1b4 -[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:] :2100 (in Foundation)
8 CoreFoundation 0x000000019dfcb1cc ____forwarding___ :1072 (in CoreFoundation)
9 CoreFoundation 0x000000019dfd3200 ___forwarding_prep_0___ :96 (in CoreFoundation)
10 AudioSession 0x00000001c77498b0 __ZN4avas6client11SessionCore10HandlePingEv :192 (in AudioSession)
11 AudioSession 0x00000001c77497b0 ____ZN4avas6client11SessionCore12DispatchPingEv_block_invoke :52 (in AudioSession)
12 libdispatch.dylib 0x00000001d63e4adc __dispatch_call_block_and_release :32 (in libdispatch.dylib)
13 libdispatch.dylib 0x00000001d63fe7ec __dispatch_client_callout :16 (in libdispatch.dylib)
14 libdispatch.dylib 0x00000001d63ed468 __dispatch_lane_serial_drain :740 (in libdispatch.dylib)
15 libdispatch.dylib 0x00000001d63edf78 __dispatch_lane_invoke :440 (in libdispatch.dylib)
16 libdispatch.dylib 0x00000001d63f6f48 __dispatch_root_queue_drain :364 (in libdispatch.dylib)
17 libdispatch.dylib 0x00000001d63f6d08 __dispatch_worker_thread :268 (in libdispatch.dylib)
18 libsystem_pthread.dylib 0x00000001f9ff144c __pthread_start :136 (in libsystem_pthread.dylib)
19 libsystem_pthread.dylib 0x00000001f9fed8cc _thread_start :8 (in libsystem_pthread.dylib)
Thread 132:
0 libsystem_kernel.dylib 0x00000002478e3cd4 _mach_msg2_trap :8 (in libsystem_kernel.dylib)
1 libsystem_kernel.dylib 0x00000002478e7214 _mach_msg_overwrite :428 (in libsystem_kernel.dylib)
2 libsystem_kernel.dylib 0x00000002478e705c _mach_msg :24 (in libsystem_kernel.dylib)
3 libdispatch.dylib 0x00000001d63ffe84 __dispatch_mach_send_and_wait_for_reply :548 (in libdispatch.dylib)
4 libdispatch.dylib 0x00000001d6400224 _dispatch_mach_send_with_result_and_wait_for_reply :60 (in libdispatch.dylib)
5 libxpc.dylib 0x00000001b2114e04 _xpc_connection_send_message_with_reply_sync :256 (in libxpc.dylib)
6 Foundation 0x000000019b6249f0 ___NSXPCCONNECTION_IS_WAITING_FOR_A_SYNCHRONOUS_REPLY__ :16 (in Foundation)
7 Foundation 0x000000019c06d1b4 -[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:] :2100 (in Foundation)
8 CoreFoundation 0x000000019dfcb1cc ____forwarding___ :1072 (in CoreFoundation)
9 CoreFoundation 0x000000019dfd3200 ___forwarding_prep_0___ :96 (in CoreFoundation)
10 AudioSession 0x00000001c7754198 __ZNK4avas6client11SessionCore18SetBatchPropertiesEP12NSDictionaryIP8NSStringPU25objcproto14NSSecureCoding11objc_objectEPU15__autoreleasingP7NSArrayIPS2_IS4_P8NSNumberEENS_30AVAudioSessionBatchSetStrategyEbb :548 (in AudioSession)
11 AudioSession 0x00000001c7753e58 __ZNK4avas6client11SessionCore20SetBatchPropertiesMXEP12NSDictionaryIP8NSStringPU25objcproto14NSSecureCoding11objc_objectE :92 (in AudioSession)
12 AudioSession 0x00000001c775179c __ZN4avas6client11SessionCore11setCategoryEP8NSStringS3_32AVAudioSessionRouteSharingPolicym :472 (in AudioSession)
13 AudioSession 0x00000001c7768f88 -[AVAudioSession setCategory:withOptions:error:] :68 (in AudioSession)
14 AlipayWallet 0x000000010140580c -[AVAudioSession(APMHook) apmhook_setCategory:withOptions:error:] APMHookAudioSession.m:35 (in AlipayWallet)
15 AlipayWallet 0x00000001014001a4 -[APMAudioSessionManager resume] APMAudioSessionManager.m:718 (in AlipayWallet)
16 libdispatch.dylib 0x00000001d63e4adc __dispatch_call_block_and_release :32 (in libdispatch.dylib)
17 libdispatch.dylib 0x00000001d63fe7ec __dispatch_client_callout :16 (in libdispatch.dylib)
18 libdispatch.dylib 0x00000001d63ed468 __dispatch_lane_serial_drain :740 (in libdispatch.dylib)
19 libdispatch.dylib 0x00000001d63edf44 __dispatch_lane_invoke :388 (in libdispatch.dylib)
20 libdispatch.dylib 0x00000001d63f83ec __dispatch_root_queue_drain_deferred_wlh :292 (in libdispatch.dylib)
21 libdispatch.dylib 0x00000001d63f7ce4 __dispatch_workloop_worker_thread :692 (in libdispatch.dylib)
22 libsystem_pthread.dylib 0x00000001f9fee3b8 __pthread_wqthread :292 (in libsystem_pthread.dylib)
23 libsystem_pthread.dylib 0x00000001f9fed8c0 _start_wqthread :8 (in libsystem_pthread.dylib)
Hello,
I have implemented Low-Latency Frame Interpolation using the VTFrameProcessor framework, based on the sample code from https://developer.apple.com/kr/videos/play/wwdc2025/300. It is currently working well for both LIVE and VOD streams.
However, I have a few questions regarding the lifecycle management and synchronization of this feature:
1. Common Questions (Applicable to both Frame Interpolation & Super Resolution)
1.1 Dynamic Toggling
Do you recommend enabling/disabling these features dynamically during playback?
Or is it better practice to configure them only during the initial setup/preparation phase?
If dynamic toggling is supported, are there any recommended patterns for managing VTFrameProcessor session lifecycle (e.g., startSession / endSession timing)?
1.2 Synchronization Method
I am currently using CADisplayLink to fetch frames from AVPlayerItemVideoOutput and perform processing (a rough sketch of this setup is included below).
Is CADisplayLink the recommended approach for real-time frame acquisition with VTFrameProcessor?
If the feature needs to be toggled on/off during active playback, are there any concerns or alternative approaches you would recommend?
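For context, the acquisition loop looks roughly like this (a minimal sketch; the player item output setup and the actual VTFrameProcessor call are simplified or omitted):

    import AVFoundation
    import QuartzCore

    final class FrameFetcher {
        private let videoOutput: AVPlayerItemVideoOutput
        private var displayLink: CADisplayLink?

        init(videoOutput: AVPlayerItemVideoOutput) {
            self.videoOutput = videoOutput
        }

        func start() {
            let link = CADisplayLink(target: self, selector: #selector(step))
            link.add(to: .main, forMode: .common)
            displayLink = link
        }

        @objc private func step(_ link: CADisplayLink) {
            let itemTime = videoOutput.itemTime(forHostTime: CACurrentMediaTime())
            guard videoOutput.hasNewPixelBuffer(forItemTime: itemTime),
                  let pixelBuffer = videoOutput.copyPixelBuffer(forItemTime: itemTime,
                                                                itemTimeForDisplay: nil) else { return }
            // Hand the pixel buffer to the VTFrameProcessor session here (omitted).
            _ = pixelBuffer
        }

        func stop() {
            displayLink?.invalidate()
            displayLink = nil
        }
    }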
1.3 Supported Resolution/Quality Range
What are the minimum and maximum video resolutions supported for each feature?
Are there any aspect ratio restrictions (e.g., does it support 1:1 square videos)?
Is there a recommended resolution range for optimal performance and quality?
2. Frame Interpolation Specific Questions
2.1 LIVE Stream Support
Is Low-Latency Frame Interpolation suitable for LIVE streaming scenarios where latency is critical?
Are there any special considerations for LIVE vs VOD?
3. Super Resolution Specific Questions
3.1 Adaptive Bitrate (ABR) Stream Support
In ABR (HLS/DASH) streams, the video resolution can change dynamically during playback.
Is VTLowLatencySuperResolutionScaler compatible with ABR streams where resolution changes mid-playback?
If resolution changes occur, should I recreate the VTLowLatencySuperResolutionScalerConfiguration and restart the session, or does the API handle this automatically?
3.2 Small/Square Resolution Issue
I observed that 144x144 (1:1 square) videos fail with error:
"VTFrameProcessorErrorDomain Code=-19730: processWithSourceFrame within VCPFrameSuperResolutionProcessor failed"
However, 480x270 (16:9) videos work correctly.
minimumDimensions reports 96x96, but 144x144 still fails. Is there an undocumented restriction on aspect ratio or a practical minimum resolution?
3.3 Scale Factor Selection
supportedScaleFactors returns [2.0, 4.0] for most resolutions.
Is there a recommended scale factor for balancing quality and performance?
Are there scenarios where 4.0x should be avoided?
The documentation on this specific topic seems limited, so I would appreciate any insights or advice.
Thank you.
Some of our app's users are repeatedly running into a crash in NeutrinoCore at -[NUIdentifier initWithNamespace:name:version:] + 2352. It looks from the stack trace like multiple threads are performing PHFetchRequests, but that shouldn't be causing a crash. It's isolated to a small number of users, which makes me think it's something related to their specific Photos databases (e.g., data corruption).
Do you have any suggestions for how I might be able to resolve this?
2  libsystem_c.dylib  abort + 124
3  NeutrinoCore  -[NUAssertionPolicyCrashReport notifyAssertion:] + 66
4  NeutrinoCore  -[NUAssertionPolicyComposite notifyAssertion:] + 160
5  NeutrinoCore  -[NUAssertionPolicyUnique notifyAssertion:] + 176
6  NeutrinoCore  -[NUAssertionHandler handleFailureInFunction:file:lineNumber:currentlyExecutingJobName:description:arguments:] + 156
7  NeutrinoCore  _NUAssertFailHandler + 176
8  NeutrinoCore  -[NUIdentifier initWithNamespace:name:version:] + 2352
9  NeutrinoCore  -[NUIdentifier initWithName:version:] + 84
10  NeutrinoCore  -[NUIdentifier initWithName:] + 68
11  PhotoImaging  +[PISchema identifier] + 36
12  PhotoImaging  +[PISchema registeredPhotosSchemaIdentifier] + 32
13  PhotoImaging  +[PIPhotoEditHelper newComposition] + 28
14  PhotoImaging  +[PICompositionSerializer deserializeCompositionFromAdjustments:metadata:formatIdentifier:formatVersion:sidecarData:error:] + 160
15  PhotoImaging  +[PICompositionSerializer deserializeCompositionFromData:formatIdentifier:formatVersion:sidecarData:error:] + 224
16  PhotoLibraryServices  -[PLPhotoEditPersistenceManager loadCompositionFrom:formatIdentifier:formatVersion:sidecarData:error:] + 1848
17  PhotoLibraryServices  +[PLPhotoEditPersistenceManager validateAdjustmentData:formatIdentifier:formatVersion:error:] + 108
18  Photos  __167+[PHContentEditingInputRequestContext contentEditingInputRequestContextForAsset:requestID:managerID:networkAccessAllowed:downloadIntent:progressHandler:resultHandler:]_block_invoke + 260
19  Photos  -[PHAdjustmentData(ContentEditingInput) _contentEditing_readableByClientWithVerificationBlock:] + 136
20  Photos  -[PHAdjustmentData(ContentEditingInput) _contentEditing_requiredBaseVersionReadableByClient:verificationBlock:] + 88
21  Photos  -[PHContentEditingInputRequestContext _adjustmentBaseVersionFromResult:request:canHandleAdjustmentData:] + 356
22  Photos  -[PHContentEditingInputRequestContext produceChildRequestsForRequest:reportingIsLocallyAvailable:isDegraded:result:] + 624
23  Photos  -[PHMediaRequestContext _produceChildRequestsForRequest:withResult:] + 88
24  Photos  -[PHMediaRequestContext mediaRequest:didFinishWithResult:] + 92
25  Photos  -[PHAdjustmentDataRequest _finishFromAsynchronousCallback] + 124
26  Photos  __39-[PHAdjustmentDataRequest startRequest]_block_invoke + 584
27  PhotoLibraryServicesCore  __106-[PLAssetsdResourceClient adjustmentDataForAsset:networkAccessAllowed:trackCPLDownload:completionHandler:]_block_invoke.84 + 880
28  CoreFoundation  __invoking___ + 148
29  CoreFoundation  -[NSInvocation invoke] + 424
30  Foundation  <deduplicated_symbol> + 16
31  Foundation  -[NSXPCConnection _decodeAndInvokeReplyBlockWithEvent:sequence:replyInfo:] + 528
32  Foundation  __88-[NSXPCConnection _sendInvocation:orArguments:count:methodSignature:selector:withProxy:]_block_invoke_5 + 188
33  libxpc.dylib  _xpc_connection_reply_callout + 124
42  libsystem_pthread.dylib  start_wqthread + 8
Hi
We’re updating our KSM to support SPC v2/v3 and currently operate with both legacy SDK4 credentials (ASK + 1024 cert) and SDK26 credentials (certificate bundle + provisioning data + 1024/2048 keys).
Our client apps run across a wide range of iOS/tvOS versions, so we want to follow Apple’s recommended client strategy for certificate selection. The docs describe SHA‑1 vs SHA‑256 in the SPC header, but do not specify which OS versions should use SDK4 vs SDK26 credentials.
Could you clarify:
Is there an official minimum iOS/tvOS version where you recommend SDK26 credentials for client apps?
For older OS versions (e.g. iOS 15), is SDK4 still the recommended choice for client apps?
Are there any official migration guidelines for client apps moving from SDK4 to SDK26 credentials?
Thanks in advance.
On macOS 26.2 (Tahoe), the Preview app fails to render many USDZ models correctly.
The issue appears inconsistent:
• Some USDZ files open normally in Preview
• Some USDZ files open but the viewport turns completely black (no geometry, no material, no lighting)
• All the same files open correctly when opened using “Open With → Xcode”
This was not the behavior on macOS 15.7.2, where the exact same USDZ files rendered consistently in Preview without failures.
I am unable to find any clear-cut documentation on configuring an AVCaptureSession pipeline to capture video with the ProRes RAW codec type, which is a 16-bit format. Is it supported only with AVCaptureMovieFileOutput, or can AVCaptureVideoDataOutput emit 16-bit sample buffers that can be vended to AVAssetWriter?
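For what it's worth, this is the kind of check I've been running to see what a given device advertises for the movie file output (a sketch; it assumes camera permission has already been granted and simply prints whatever codecs the output reports on that hardware):

    import AVFoundation

    func logAvailableMovieCodecs() {
        let session = AVCaptureSession()
        guard let device = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else { return }
        session.addInput(input)

        let movieOutput = AVCaptureMovieFileOutput()
        guard session.canAddOutput(movieOutput) else { return }
        session.addOutput(movieOutput)

        // availableVideoCodecTypes is only populated once the output is attached
        // to a session with a video connection.
        print("Movie file output codecs: \(movieOutput.availableVideoCodecTypes)")
    }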
Description: I have identified a specific issue when recording acoustic guitar and other instruments on the iPhone 17 Pro Max using native applications (Voice Memos, Camera). The recordings contain an unnatural metallic resonance (ringing artifacts) that should not be present.
Testing and Methodology:
Hardware Verification: Initially, I suspected a hardware defect in the audio chip or microphone. However, extensive testing with third-party software suggests this is likely a software-level issue.
AudioShare Test: I conducted a test using the AudioShare app in "Measurement Mode" (which bypasses standard iOS system-wide audio processing). In this mode, the audio remains perfectly clean, and the metallic ringing disappears entirely.
Conclusion: The issue is rooted in the DSP (Digital Signal Processing) algorithms that iOS applies for noise suppression or voice enhancement. These algorithms appear to misinterpret the high-frequency overtones of acoustic instruments as background noise and attempt to "filter" them, resulting in audible digital artifacts.
Comparison Results: This issue has not been observed on devices from other brands or on older iPhone models (preliminary tests suggest older versions handle this better). Notably, the problem persists even in GarageBand, as the app still utilizes certain system-level processing layers.
Proposed Solution: I suggest adding a "Raw Audio" or "Instrument Mode" toggle within the Microphone/Audio settings for native apps. This mode should disable aggressive DSP processing, similar to how the AVAudioSession.Mode.measurement works in specialized apps.
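For comparison, the way specialized third-party apps opt out of that processing today is the measurement mode on the audio session, along the lines of this sketch (the category and options shown are illustrative):

    import AVFoundation

    func enableRawCapture() throws {
        let session = AVAudioSession.sharedInstance()
        // .measurement minimizes system-supplied signal processing on the input.
        try session.setCategory(.playAndRecord, mode: .measurement, options: [])
        try session.setActive(true)
    }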
Attachments: I am attaching 4 archives, including a final "Measurement Mode" folder with comparative samples (Measurement Mode vs. Standard Mode). The artifacts are most prominent when monitored through headphones.
Is it possible to perform speech-to-text using AVAudioEngine to capture microphone input while being on a FaceTime call at the same time?
I tried implementing this, but whenever I attempt to start the AVAudioEngine while a FaceTime call is active, I get the following error:
“The operation couldn’t be completed. (OSStatus error 2003329396)”
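For context, the setup I'm attempting is roughly the following (simplified; the speech-recognition side is omitted and the format handling is naive):

    import AVFoundation

    let engine = AVAudioEngine()

    func startCapture() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            // Buffers would normally be fed to a speech recognition request here.
            _ = buffer
        }
        engine.prepare()
        try engine.start()   // This is the call that fails with OSStatus 2003329396 during a FaceTime call.
    }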
I assume this might be due to microphone resource restrictions during FaceTime, but I’d like to confirm whether this limitation is at the system level or if there’s any possible workaround or entitlement that allows concurrent microphone access.
Has anyone encountered this issue or found a solution?
On iPhone 16 Pro Max (not tested other devices) there's a noticeable jump in the framing of the preview video when you record in the iOS AVCam Sample App. The same jump in camera framing can be observed by switching to the front facing camera and then back to the rear one.
It looks roughly consistent with switching between the 0.5x and 1x cameras (though not quite a match for the viewable area in the Camera app), and it only happens when the view is initially loaded; once recording is started it retains the 'closer' framing no matter how many times recording is stopped and started thereafter.
I'm relatively new to Swift and haven't done anything with the camera before, so odd 'buggy' behaviour in the sample code isn't helping me understand it! :-)
Is there any way to fix this?
I’m working with the Push-to-Talk (PTT) framework and observing a consistent delay when starting audio capture.
Scenario:
A PTT call is already active
The AVAudioSession is fully configured
I request beginTransmission on the PTT channel
I start my Audio Unit for recording (AudioOutputUnitStart)
Observed behavior:
AudioOutputUnitStart takes ~500 ms
This happens whether I start the Audio Unit:
after didBeginTransmission, or
after AVAudioSession didActivate
Comparison:
Using the same Audio Unit, same format, and same configuration
Without the PTT framework, AudioOutputUnitStart takes ~200 ms
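For reference, the timing in both cases is measured around the start call itself, along these lines (a sketch; audioUnit is the already-configured output unit):

    import Foundation
    import AudioToolbox

    func startAndMeasure(_ audioUnit: AudioUnit) {
        let start = CFAbsoluteTimeGetCurrent()
        let status = AudioOutputUnitStart(audioUnit)
        let elapsedMs = (CFAbsoluteTimeGetCurrent() - start) * 1000
        // ~500 ms with the PTT framework active, ~200 ms without it.
        print("AudioOutputUnitStart -> \(status), took \(Int(elapsedMs)) ms")
    }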
Additional notes:
I am not modifying or reconfiguring AVAudioSession when requesting beginTransmission
The audio session is already set up when the PTT call starts
There are no interruptions or route changes at the time of starting the Audio Unit
Impact:
This extra latency is significant for Push-to-Talk use cases where fast transmit start is critical.
I'm working on a v2 Audio Unit that has some complicated internal state (audio, midi, other settings).
When the internal state changes, I want to inform the host (e.g., Logic Pro) that my plugin state has changed, and that the main window should show the 'project changed' status through the window close button.
This was easy to achieve for the VST version of the plugin, but I can't figure out a way to do it for the Audio Unit.
I've tried:
Notifying change of the kAudioUnitProperty_ClassInfo property that stores the plugin state:
unit->PropertyChanged(kAudioUnitProperty_ClassInfo, kAudioUnitScope_Global, 0);
Setting the kAudioUnitProperty_ClassInfo property value each time the plugin state changes.
Adding a new parameter called 'dirtystate' and toggling it and notifying the change each time the plugin state changes.
But nothing really makes Logic take notice. This should be an easy task, but I can't figure it out.
How do I flag my AUv2 as needing its state saved (i.e. the host project needs saving)?
My app - natively iOS but built with the "Designed for iPad" option to run on Mac - does not recognise an attached USB microphone when running on a Mac. This line
int32_t items = (int32_t) [[[AVAudioSession sharedInstance] availableInputs] count ];
returns 1, which is the Mac internal mic. On iPad and iPhone it sees both the internal mic and the USB mic. Is this an inherent "Designed for iPad" restriction, and is there some trick I can pull to get the USB microphone to be recognised by the system?
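For completeness, the Swift equivalent of what I'm checking, which also prints the port names (a sketch):

    import AVFoundation

    func logAvailableInputs() {
        let inputs = AVAudioSession.sharedInstance().availableInputs ?? []
        print("availableInputs count: \(inputs.count)")
        for port in inputs {
            // On iPhone/iPad this lists both the built-in mic and the USB mic;
            // on the Mac ("Designed for iPad") only the built-in mic shows up.
            print("\(port.portType.rawValue): \(port.portName)")
        }
    }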
Hello,
I have a problem generating a 2048-bit FairPlay Streaming certificate.
I tried generating SDK v26.x certificate in two ways.
(1) Use existing certificate
(2) Create new certificate
However, in both cases, Apple gives me a certificate bundle containing a 1024-bit certificate.
(fps_certificate.bin)
I uploaded a 2048-bit CSR when creating the certificate.
Just to note, I created an SDK v4.x certificate a few years ago.
Has anyone bumped into the same issue?
Or am I missing something?
I have a new Dell 2725QC monitor connected over USB-C to the back port of an iMac (2019, 27-inch). The problem is that volume can currently only be adjusted with the monitor's hardware controls, not in software via the Apple keyboard. What would I need to write to control this in code (Swift or Obj-C)? Is there a third-party solution for both Intel iMacs and Apple silicon Macs?
Hello,
I am currently developing a video player using Custom AVPlayer SDK and testing LL-HLS live streaming.
I encountered a specific error, CoreMediaErrorDomain -15418, during playback. I have searched through the official documentation and the forums, but I could not find any information regarding this error code.
I would like to inquire about the following:
Description & Cause: What does the error code -15418 specifically represent in the context of CoreMedia and LL-HLS?
Severity: Is this a critical error that halts playback, or is it merely a warning?
Environment Details:
iOS Version: iOS 26.2
Device: iPhone 15 Pro Max
Stream Type: LL-HLS (Low-Latency HLS)
Impact: Quality drops
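In case it's relevant, this is roughly the check I run to observe the code from the player item's error log (a sketch):

    import AVFoundation

    func logErrorEvents(for item: AVPlayerItem) {
        guard let errorLog = item.errorLog() else { return }
        for event in errorLog.events {
            // -15418 appears here as errorStatusCode with domain CoreMediaErrorDomain.
            print("domain: \(event.errorDomain), code: \(event.errorStatusCode), " +
                  "comment: \(event.errorComment ?? "none")")
        }
    }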
Any insights or references to documentation would be greatly appreciated.
Thank you.
I’ve been struggling with a very frustrating issue using the new iOS 26 Swift Concurrency APIs for video processing. My pipeline reads frames using AVAssetReader, processes them via CIContext (Lanczos upscale), and then appends the result to an AVAssetWriter using the new PixelBufferReceiver.
The Problem: The execution randomly stops at the await append(...) call. The task suspends and never resumes.
It is completely unpredictable: It might hang on the very first run, or it might work fine for 4-5 runs and then hang on the next one.
It is independent of video duration: It happens with 5-second clips just as often as with long videos.
No feedback from the system: There is no crash, no error thrown, and CPU usage drops to zero. The thread just stays in the suspended state indefinitely.
If I manually cancel the operation and restart the VideoEngine, it usually starts working again for a few more attempts, which makes me suspect some internal resource exhaustion or a deadlock between the GPU context and the writer's input.
The Code: Here is a simplified version of my processing loop:
private func proccessVideoPipeline(
    readerOutputProvider: AVAssetReaderOutput.Provider<CMReadySampleBuffer<CMSampleBuffer.DynamicContent>>,
    pixelBufferReceiver: AVAssetWriterInput.PixelBufferReceiver,
    nominalFrameRate: Float,
    targetSize: CGSize
) async throws {
    while !Task.isCancelled, let payload = try await readerOutputProvider.next() {
        // Pull the source pixel buffer and timestamp out of the sample buffer.
        let sampleBufferInfo: (imageBuffer: CVPixelBuffer?, presentationTimeStamp: CMTime) = payload.withUnsafeSampleBuffer { sampleBuffer in
            return (sampleBuffer.imageBuffer, sampleBuffer.presentationTimeStamp)
        }
        guard let currentPixelBuffer = sampleBufferInfo.imageBuffer else {
            throw AsyncFrameProcessorError.missingImageBuffer
        }
        guard let pixelBufferPool = pixelBufferReceiver.pixelBufferPool else {
            throw NSError(domain: "PixelBufferPool", code: -1, userInfo: [NSLocalizedDescriptionKey: "No pixel buffer pool available"])
        }
        // Allocate a destination buffer from the writer's pool and upscale into it.
        let newPixelBuffer = try pixelBufferPool.makeMutablePixelBuffer()
        let newCVPixelBuffer = newPixelBuffer.withUnsafeBuffer({ $0 })
        try upscale(currentPixelBuffer, outputPixelBuffer: newCVPixelBuffer, targetSize: targetSize)
        let presentationTime = sampleBufferInfo.presentationTimeStamp
        // This is the await that sometimes suspends and never resumes.
        try await pixelBufferReceiver.append(.init(unsafeBuffer: newCVPixelBuffer), with: presentationTime)
    }
}
Does anyone know how to fix it?
I’ve been using Apple Music API for quite a while now and a recent change must have happened which is quite disruptive.
On many occasions, artists release singles from an album as part of promoting this album. For recent examples, Harry Styles released “Aperture” (a single) to promote his upcoming album “Kiss All The Time. Disco, Occasionally“. Similarly, Bruno Mars released “I Just Might”, a single from the upcoming album “The Romantic”.
Previously, those would be returned by the ”artists/{artist_id}/albums” endpoint with a “- Single” suffix. But it seems a recent change happened where they now only appear as playable tracks inside the album.
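For reference, the request in question is simply the artist's albums relationship, along these lines (the developer token, storefront, and artist ID are placeholders):

    import Foundation

    func fetchArtistAlbums(artistID: String, storefront: String, developerToken: String) async throws -> Data {
        // Same endpoint that previously returned the promotional "- Single" releases.
        let url = URL(string: "https://api.music.apple.com/v1/catalog/\(storefront)/artists/\(artistID)/albums")!
        var request = URLRequest(url: url)
        request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")
        let (data, _) = try await URLSession.shared.data(for: request)
        return data
    }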
This behavior is also evident in the Apple Music app itself. Those singles no longer appear under “Singles & EPs”. Instead, they would only be visible if the single becomes popular enough to be shown on “Top Songs“. Otherwise one would have to know to tap on the (future) album to discover if there are released singles.
Meanwhile Spotify’s API returns those as singles properly, just like Apple Music API used to.
This change must be recent, but the question is whether it's intentional, and if so, how the API should be used from here on out to “extract” those singles and represent them.
I need to get PCM data from AAC data, but this API has thoroughly confused me.
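To illustrate what I'm after, something along these lines, which decodes an AAC file to PCM via AVAudioFile (a sketch; the file URL is a placeholder, and my real source may be raw AAC packets rather than a file):

    import AVFoundation

    func decodeAACFileToPCM(url: URL) throws -> AVAudioPCMBuffer? {
        // AVAudioFile decodes the compressed file and vends uncompressed PCM
        // in its processingFormat when you read from it.
        let file = try AVAudioFile(forReading: url)
        let format = file.processingFormat
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                            frameCapacity: AVAudioFrameCount(file.length)) else { return nil }
        try file.read(into: buffer)
        return buffer
    }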
Hi Apple Developer Support Team,
We are developing an iOS application using a camera package within a hybrid (cross-platform) framework, and we would like to confirm whether it is possible to disable the camera shutter sound programmatically.
As per our understanding, the shutter sound on iOS is system-controlled and depends on the device’s silent/ring mode, and there is no App Store–approved API available to force-disable this sound. Kindly confirm whether this understanding is correct or if any supported alternative approach exists for hybrid or native implementations.
Thank you for your clarification.
Best regards,
ParkhyaSolutions
The ASk is used by the KSM to derive the dASk, which is then used to decrypt the SK...R1.
If the only thing we give the client is the certificate, how does it encrypt the SK...R1 so that the server is able to process it?
It would be nice to know how this works generally, because I've been getting questions about it and can't provide a helpful answer.
Thanks in advance.