Hi everyone,
We’re integrating Apple Calendar (iCalendar) into our Codapet app but haven’t found any official Apple APIs for event management and synchronisation.
Currently, we use CalDAV with Apple ID authentication and an app-specific password (ASP), storing the ASP encrypted in our database and decrypting it for each API call. We’re looking for a more secure and recommended approach to this integration.
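For reference, here is a simplified sketch (Swift, just for illustration) of what each call effectively does today; decryptASP and the endpoint details are placeholders, not our production code:

import Foundation

// Simplified sketch of how the app-specific password (ASP) is attached today.
// decryptASP() and the CalDAV endpoint are placeholders, not production code.
func fetchCalendarData(appleID: String, encryptedASP: Data) async throws -> Data {
    let asp = try decryptASP(encryptedASP)                        // decrypted for every request
    let credentials = Data("\(appleID):\(asp)".utf8).base64EncodedString()

    var request = URLRequest(url: URL(string: "https://caldav.icloud.com/")!)
    request.httpMethod = "PROPFIND"                               // CalDAV discovery request
    request.setValue("Basic \(credentials)", forHTTPHeaderField: "Authorization")
    request.setValue("0", forHTTPHeaderField: "Depth")

    let (data, _) = try await URLSession.shared.data(for: request)
    return data
}

// Placeholder for our decryption helper.
func decryptASP(_ data: Data) throws -> String { fatalError("not shown") }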
Does Apple provide dedicated APIs for calendar sync, or is there a better alternative to avoid sending the ASP with every request? Any guidance or best practices would be greatly appreciated!
Thanks!
I have an auto-renewing monthly subscription in my app and recently added upgrade options for yearly and 6-month subscriptions. Those new subscriptions were reviewed, approved, and published to the App Store.
I'm showing a modal in the app from which users can upgrade their subscription. Upgrading was tested on real devices in Sandbox and TestFlight. There have been successful purchases through the in-app modal in the production app, as well as upgrades made directly from the App Store.
However, for some users the upgrade produces a failed transaction with the paymentCancelled error code. The IAP still succeeds: their subscription is upgraded, and they did not voluntarily cancel the purchase. The localized description of the error is "Toimintoa ei voitu suorittaa. (Virhe 2 kohteessa SKErrorDomain.)", which translates to "The operation could not be completed. (Error 2 in SKErrorDomain.)"
These users have various iPhones (iPhone 12 Pro, iPhone 14 Pro, iPhone 15 Pro, iPhone 16 Pro) with up-to-date iOS versions (>= 18.3.1).
I'm receiving DID_CHANGE_RENEWAL_PREF (UPGRADE) server notification of these purchases on my server.
I haven't been able to reproduce this error myself.
Any ideas why StoreKit might fail the transaction with paymentCancelled error but still successfully upgrade the subscription?
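One mitigation I'm considering, sketched below, is to re-check the current entitlements before treating paymentCancelled as a real failure; this assumes the app can use StoreKit 2's Transaction API alongside the existing purchase flow, and the product identifier is a placeholder:

import StoreKit

// Sketch: when the upgrade flow reports SKError.paymentCancelled, re-check the
// current entitlements before showing a failure, since the upgrade sometimes
// goes through anyway. The product identifier is a placeholder.
func isEntitled(to productID: String) async -> Bool {
    for await entitlement in Transaction.currentEntitlements {
        if case .verified(let transaction) = entitlement,
           transaction.productID == productID {
            return true        // upgraded subscription is active despite the error
        }
    }
    return false
}

That would only hide the symptom, though; I'd still like to understand why the error is reported in the first place.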
Hi there, I'm facing an issue where, after disconnecting CarPlay, the navigation session seems to be left in a weird state and never properly finishes. When I reconnect CarPlay, the "Metadata in instrument cluster or HUD" no longer updates until I start another navigation session and stop that one.
You can see in this screen recording that the instruction on the left stops updating after a reconnect.
https://www.youtube.com/watch?v=sncxyJULjQk
I have modified the CoastalRoads sample app to add support for the HUD cluster and to auto-start a navigation simulation when CarPlay connects.
https://github.com/g4rb4g3/CoastalRoads
Can anyone tell me what I have to do when CarPlay disconnects so that I can start a new navigation session on reconnect with a working HUD cluster?
Fun fact: Apple Maps handles this quite nicely (https://www.youtube.com/watch?v=OpJEIyGcwdo); it somehow manages to finish the navigation session and brings up the HUD cluster just fine on reconnect.
I wonder how I can achieve the same. Does anyone have an idea?
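In case it helps frame the question, the only thing I can think of trying is to finish the session explicitly on disconnect, roughly like the sketch below; navigationSession is whatever session my map template started earlier, and I'm only assuming this is the right place to tear it down:

import CarPlay
import UIKit

final class CarPlaySceneDelegate: UIResponder, CPTemplateApplicationSceneDelegate {
    var interfaceController: CPInterfaceController?
    var navigationSession: CPNavigationSession?      // stored when navigation starts elsewhere

    func templateApplicationScene(_ templateApplicationScene: CPTemplateApplicationScene,
                                  didConnect interfaceController: CPInterfaceController) {
        self.interfaceController = interfaceController
        // Map template setup and the auto-started navigation simulation happen here.
    }

    func templateApplicationScene(_ templateApplicationScene: CPTemplateApplicationScene,
                                  didDisconnectInterfaceController interfaceController: CPInterfaceController) {
        // Tear the session down explicitly so a fresh one can be started on reconnect.
        navigationSession?.finishTrip()
        navigationSession = nil
        self.interfaceController = nil
    }
}

I'm not sure whether finishTrip(), cancelTrip(), or something else entirely is expected here.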
We're in the process of migrating our app to the Swift 6 language mode. I have hit a road block that I cannot wrap my head around, and it concerns Core Data and how we work with NSManagedObject instances.
Greatly simplified, our Core Data stack looks like this:
import CoreData

class CoreDataStack {
    private let persistentContainer: NSPersistentContainer
    var viewContext: NSManagedObjectContext { persistentContainer.viewContext }
}
For accessing the database, we provide controller classes such as:
class PersonController {
    private let coreDataStack: CoreDataStack

    func fetchPerson(byName name: String) async throws -> Person? {
        try await coreDataStack.viewContext.perform {
            let fetchRequest = NSFetchRequest<Person>(entityName: "Person")
            fetchRequest.predicate = NSPredicate(format: "name == %@", name)
            return try fetchRequest.execute().first
        }
    }
}
Our view controllers use such controllers to fetch objects and populate their UI with them:
class MyViewController: UIViewController {
    private let personController: PersonController
    private let ageLabel: UILabel

    func populateAgeLabel(name: String) {
        Task {
            let person = try? await personController.fetchPerson(byName: name)
            ageLabel.text = "\(person?.age ?? 0)"
        }
    }
}
This works very well, and there are no concurrency problems, since the managed objects are fetched from the view context and accessed only on the main thread.
When turning on Swift 6 language mode, however, the compiler complains about the line calling the controller method:
Non-sendable result type 'Person?' cannot be sent from nonisolated context in call to instance method 'fetchPerson(byName:)'
Ok, fair enough, NSManagedObject is not Sendable. No biggie, just add @MainActor to the controller method, so it can be called from view controllers which are also main actor. However, now the compiler shows the same error at the controller method calling viewContext.perform:
Non-sendable result type 'Person?' cannot be sent from nonisolated context in call to instance method 'perform(schedule:_:)'
And now I'm stumped. Does this mean NSManagedObject instances cannot even be returned from calls to NSManagedObjectContext.perform? Ever? Even though in this case @MainActor matches the context's actor isolation (since it's the view context)?
Of course, in this simple example the controller method could just return the age directly, and more complex scenarios could return Sendable data structures instantiated inside the perform closure. But is that really the only legal solution? That would mean a huge refactoring challenge for our app, since we use NSManagedObject instances fetched from the view context everywhere. That's what the view context is for, right?
tl;dr: is it possible to return NSManagedObject instances fetched from the view context with Swift 6 strict concurrency enabled, and if so how?
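For completeness, the only workaround I've found so far is to drop perform and fetch directly from a @MainActor-isolated method, relying on the view context being bound to the main queue; CoreDataStack and Person are the same types as above, and I don't know whether this is the intended pattern:

import CoreData

@MainActor
class PersonControllerWorkaround {
    private let coreDataStack: CoreDataStack

    init(coreDataStack: CoreDataStack) {
        self.coreDataStack = coreDataStack
    }

    // Runs on the main actor, which matches the view context's queue,
    // so the non-Sendable Person never crosses an isolation boundary.
    func fetchPerson(byName name: String) throws -> Person? {
        let fetchRequest = NSFetchRequest<Person>(entityName: "Person")
        fetchRequest.predicate = NSPredicate(format: "name == %@", name)
        return try coreDataStack.viewContext.fetch(fetchRequest).first
    }
}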
Is there a way to distinguish physical mouse/keyboard input from remote control mouse/keyboard input on Mac? Or even better, is there a way to detect if my Mac is being remotely controlled?
I'm curious why DynamicOptionsProvider is available on watchOS. Is there any way to present the options to the user? For example, in the Emoji Rangers project:
struct EmojiRangerSelection: AppIntent, WidgetConfigurationIntent {
    static let intentClassName = "EmojiRangerSelectionIntent"
    static var title: LocalizedStringResource = "Emoji Ranger Selection"
    static var description = IntentDescription("Select Hero")

    @Parameter(title: "Selected Hero", default: EmojiRanger.cake, optionsProvider: EmojiRangerOptionsProvider())
    var hero: EmojiRanger?

    struct EmojiRangerOptionsProvider: DynamicOptionsProvider {
        func results() async throws -> [EmojiRanger] {
            EmojiRanger.allHeros
        }
    }

    func perform() async throws -> some IntentResult {
        return .result()
    }
}
On watchOS we usually use recommendations() to give the user a predefined choice of configured widgets. Meanwhile, in AppIntentProvider the recommendations are empty:
struct AppIntentProvider: AppIntentTimelineProvider {
    ...
    func recommendations() -> [AppIntentRecommendation<EmojiRangerSelection>] {
        []
    }
}
Does it imply that there's a way to use DynamicOptionsProvider on watchOS somehow? BTW, WidgetConfiguration.promptsForUserConfiguration() is one of the methods that are not available on watchOS.
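For comparison, the only way I know to surface a choice on watchOS is to bake the options into recommendations(), along these lines (a sketch only; the other AppIntentTimelineProvider requirements are omitted, and I'm assuming EmojiRanger exposes a name property):

import WidgetKit
import AppIntents

struct RecommendingProvider: AppIntentTimelineProvider {
    // placeholder(in:), snapshot(for:in:) and timeline(for:in:) omitted for brevity.

    // One preconfigured widget per hero, instead of relying on DynamicOptionsProvider,
    // since watchOS offers no widget configuration UI.
    func recommendations() -> [AppIntentRecommendation<EmojiRangerSelection>] {
        EmojiRanger.allHeros.map { hero in
            let intent = EmojiRangerSelection()
            intent.hero = hero
            let label: LocalizedStringResource = "\(hero.name)"
            return AppIntentRecommendation(intent: intent, description: label)
        }
    }
}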
And also, the Emoji Ranger project doesn't show widgets (complications) on watchOS out of the box.
Hey everyone,
I wanted to check if anyone else has faced extreme delays when requesting access to Apple Pay Wallet APIs. We first applied to enable in-app provisioning for virtual cards in our app on Oct 11, 2024, almost a year ago, and we have made about 1% progress since.
For context, we already got access from Google for Google Wallet—it was smooth, professional, and timely. But with Apple… it’s been nothing but an endless cycle of waiting.
We followed every step, submitted everything correctly, and even called Apple Developer Support multiple times. Their response? "We've escalated it." Again and again. But there’s no real progress. We’re rerouted, ignored, and left in limbo.
At this point, I don’t even know if anyone is actually reviewing these requests. If a business like ours, fully compliant and ready to integrate, can’t even get a response in 150 days, how is this process supposed to work?
I’m posting this here because I can’t be the only one. Has anyone else faced this? If you finally got access, how did you do it? Because right now, it feels like Apple Pay in-app provisioning is an impossible goal.
Hoping someone from Apple sees this and realizes how broken this process is. We’re just trying to innovate and offer Apple users a great experience—why is it so difficult?
Looking forward to hearing from anyone in the community who can help. Thanks! 🙏
self.pushRegistry = [[PKPushRegistry alloc] initWithQueue:dispatch_get_main_queue()];
self.pushRegistry.delegate = self;
self.pushRegistry.desiredPushTypes = [NSSet setWithObject:PKPushTypeVoIP];

// Handle an incoming VoIP push
- (void)pushRegistry:(PKPushRegistry *)registry didReceiveIncomingPushWithPayload:(PKPushPayload *)payload forType:(PKPushType)type withCompletionHandler:(void (^)(void))completion
Then we send a push from our server, or from Apple's console at https://icloud.developer.apple.com/dashboard/notifications.
When the app is in the foreground, the delegate method and its completion handler are called correctly, but when the app is in the background or has been killed, they are not called.
Background Fetch and Voice over IP are both checked in the Signing & Capabilities tab.
Why?
In my case, when two functions that each start a Live Activity (not connected to each other) are called from a LiveActivityIntent's perform(), it seems that only one of them starts.
(The same happens if I start them independently in two separate Task blocks.)
Setting one of them as the opensIntent and moving the other into a second LiveActivityIntent gives the same result.
Also, every time I tap the intent directly in the Shortcuts app, one activity ends within a few seconds, even if both exist for a while.
However, if openAppWhenRun is set to true, it seems to work without any problems.
I would appreciate it if you could give me a tip to fix this problem.
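For reference, a stripped-down sketch of the setup; TimerAttributes and the content values are placeholders, and the real intents do more work:

import ActivityKit
import AppIntents

// Placeholder attributes; the real activities use different types.
struct TimerAttributes: ActivityAttributes {
    struct ContentState: Codable, Hashable { var title: String }
}

struct StartTwoActivitiesIntent: LiveActivityIntent {
    static var title: LocalizedStringResource = "Start Two Activities"
    static var openAppWhenRun: Bool = false   // setting this to true avoids the issue

    func perform() async throws -> some IntentResult {
        // Both requests run inside perform(), but only one activity seems to survive.
        _ = try Activity<TimerAttributes>.request(
            attributes: TimerAttributes(),
            content: .init(state: .init(title: "First"), staleDate: nil))
        _ = try Activity<TimerAttributes>.request(
            attributes: TimerAttributes(),
            content: .init(state: .init(title: "Second"), staleDate: nil))
        return .result()
    }
}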
In the main app, is there a way to distinguish whether the application:didFinishLaunchingWithOptions: method was triggered by the user manually tapping the app icon, or automatically by the system after a Live Activity received a remote push notification?
We have three problems when using push notifications with Live Activities.
1. What is the specific callback strategy for the activityUpdates property in ActivityKit?
We found that in real user scenarios there is a chance we do not receive callbacks. Community experience suggests the system sometimes skips callbacks because of resource-optimization strategies, but that explanation is vague. Is there any clear documentation on why callbacks are or are not delivered?
2. What is the specific wake-up strategy when a backgrounded app receives a Live Activity push-to-start notification?
Community experience suggests the system may wake the app for 0-30 seconds, or not wake it at all, because of resource-optimization strategies. Is there an official description of the wake-up strategy, or do we simply have to follow this guidance:
Wake-ups of apps using content-available pushes are heavily throttled. You can expect 1-2 wake-ups per hour as a best-case scenario in the hands of your users, so this cannot be assumed to be a reliable on-demand wake-up mechanism for an app.
3. How can we determine whether the user has selected Allow or Always Allow for the Live Activity permission?
When we use Live Activity push-to-start, iOS shows two system prompts:
the first prompt: allow or disallow Live Activities
the second prompt: always allow or disallow
Is there an interface that can directly determine which permission the user has chosen (Allow vs. Always Allow)? (By the way, we can already detect the disallowed state.)
At present, we haven't seen anything in the official documentation or APIs that can distinguish Allow from Always Allow. The difference matters because it affects whether an update token is generated, and without an update token we cannot update our activity instance.
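For reference, this is roughly how we observe activities and the authorization state today (RideAttributes is a placeholder type); as far as we can tell, ActivityAuthorizationInfo only exposes a boolean and does not distinguish Allow from Always Allow:

import ActivityKit

struct RideAttributes: ActivityAttributes {                 // placeholder attributes
    struct ContentState: Codable, Hashable { var eta: Int }
}

func observeLiveActivities() {
    // Question 1: we listen for newly started activities (including push-to-start) here,
    // but in the field this sequence sometimes never delivers anything.
    Task {
        for await activity in Activity<RideAttributes>.activityUpdates {
            print("Activity appeared:", activity.id)
            Task {
                for await token in activity.pushTokenUpdates {        // per-activity update token
                    print("Update token:", token.map { String(format: "%02x", $0) }.joined())
                }
            }
        }
    }

    // Question 3: this only reports enabled vs. not enabled; it does not tell us
    // whether the user picked Allow or Always Allow.
    let info = ActivityAuthorizationInfo()
    print("Live Activities enabled:", info.areActivitiesEnabled)
}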
Hi, I want to offer an auto-renewable subscription (e.g., $1/month) that grants users 10 document analyses per month, with the count resetting at the start of each billing cycle.
Unused analyses will not roll over to the next month.
Additionally, any analyses generated while the subscription is active will remain accessible to the user permanently, even if they cancel the subscription.
The paywall, app description, and metadata will clearly state that the subscription grants 10 document analyses per month with no rollover.
We want this to be implemented as an auto-renewable subscription, not as a consumable or a token/credit system (which we want to avoid).
Is this model acceptable under Apple's guidelines, or would it be considered a token/credit system? Any insights or alternative suggestions would be appreciated. Thanks!
I'm trying to run this example project: https://developer.apple.com/documentation/HealthKit/building-a-multidevice-workout-app
When I run it on my devices (iPhone 16 Pro and Apple Watch Ultra 2), I get this error:
-[SPRemoteInterface _appRecoverAnyExtendedRuntimeSession:]_block_invoke:4350: Got no sessions back from -[CSLSSessionService existingRunningSessions:] or -[CSLSSessionService existingScheduledSessions:] after receiving a PUICInitializeSessionServiceAction
I start the workout from my phone, which successfully starts the workout on the watch. But this callback is never triggered on the phone:
healthStore.workoutSessionMirroringStartHandler = { mirroredSession in
    // not happening
}
This makes it difficult to learn the mirroring workout technique.
I'm using Xcode 16.3 and macOS 15.4.1.
Any help appreciated!
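For reference, this is the minimal phone-side setup I would expect to work; I'm assuming the handler has to be assigned as early as possible (here in the App initializer), before the watch starts mirroring, though I haven't found that requirement documented:

import HealthKit
import SwiftUI

@main
struct MirroringTestApp: App {
    private let healthStore = HKHealthStore()

    init() {
        // Assign the mirroring handler as early as possible, before the watch app
        // calls startMirroringToCompanionDevice(). (My assumption, not documented.)
        healthStore.workoutSessionMirroringStartHandler = { mirroredSession in
            print("Mirrored session received:", mirroredSession)
        }
    }

    var body: some Scene {
        WindowGroup { Text("Waiting for mirrored workout…") }
    }
}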
I have a document-based SwiftData app in which I would like to implement a persistent cache. For obvious reasons, I would not like to store the contents of the cache in the documents themselves, but in my app's own data directory.
Is this use case supported: a document-based SwiftData app that uses not only the ModelContainers of the currently open files, but also a ModelContainer that writes a database file in the app's own data directory (for cache, settings, etc.)?
If yes, how can you inject two different ModelContexts, one tied to the currently open file and one tied to the local database, into a SwiftUI view?
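What I have in mind, sketched below, is keeping a second, app-wide ModelContainer for the cache and handing it to views through a custom environment value, while the document's own container keeps flowing in via DocumentGroup; CacheEntry and the cacheContainer key are placeholders of mine:

import SwiftUI
import SwiftData

@Model
final class CacheEntry {                                    // placeholder cache model
    var key: String
    var value: Data
    init(key: String, value: Data) { self.key = key; self.value = value }
}

// Custom environment slot for the app-wide cache container.
private struct CacheContainerKey: EnvironmentKey {
    static let defaultValue: ModelContainer? = nil
}

extension EnvironmentValues {
    var cacheContainer: ModelContainer? {
        get { self[CacheContainerKey.self] }
        set { self[CacheContainerKey.self] = newValue }
    }
}

struct DocumentView: View {
    @Environment(\.modelContext) private var documentContext    // context of the open document
    @Environment(\.cacheContainer) private var cacheContainer   // app-wide cache container

    var body: some View {
        Text("Document UI goes here")
            .onAppear {
                guard let cacheContainer else { return }
                let cacheContext = ModelContext(cacheContainer)
                // Read and write cache entries separately from the document store.
                cacheContext.insert(CacheEntry(key: "lastOpened", value: Data()))
                try? cacheContext.save()
            }
    }
}

The cache container itself would be created once at app level (e.g. with try ModelContainer(for: CacheEntry.self)) and attached to the DocumentGroup scene via .environment(\.cacheContainer, container), but I don't know whether mixing a standalone container with document containers like this is actually supported.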
Hello,
I am developing a calling service using CallKit and VoIP push.
I have occasionally encountered a strange issue.
The issue is that the device permanently stops receiving VoIP pushes for incoming calls.
I was previously informed that even if the device is blocked, it can receive calls again after 24 hours.
Also, when I checked our device-side logic, it complies with Apple's policy requirements, including correctly calling CallKit's reportNewIncomingCall method.
Once the issue occurs, no matter how many times I try, VoIP calls are not received; a device reboot doesn't help, and the device console shows no logs related to CallKit or VoIP.
I suspect this might be an issue with the VoIP token, and I believe the only way to get a new one is to reinstall the app. Is that correct?
Of course, after reinstalling, it works fine again, but this is very inconvenient. I don't think this is the right solution.
Is there anyone who can share their insights on this issue?
Thank you.
Here is an AppleScript script by Normen Hansen that makes it possible to double-click files in Finder and have them opened in Vim: https://github.com/normen/vim-macos-scripts/blob/master/open-file-in-vim.applescript (it must be used from Automator).
I'm trying to simplify it and also make it usable without Automator. To use my version, you only need to save it as an application instead, like this:
osacompile -o open-with-vim.app open-with-vim.applescript
and then copy it to /Applications and set your .txt files to be opened using this app.
Here it is; it almost works:
-- https://github.com/normen/vim-macos-scripts/blob/master/open-file-in-vim.applescript
-- opens Vim with file or file list
-- sets current folder to parent folder of first file
on open theFiles
    set command to {}
    if input is null or input is {} or ((item 1 in input) as string) is "" then
        -- no files, open vim without parameters
        set end of command to "vim;exit"
    else
        set firstFile to (item 1 of theFiles) as string
        tell application "Finder" to set pathFolderParent to quoted form of (POSIX path of ((folder of item firstFile) as string))
        set end of command to "cd" & space & (pathFolderParent as string)
        set end of command to ";hx" -- e.g. vi or any other command-line text editor
        set fileList to {}
        repeat with i from 1 to (theFiles count)
            set end of fileList to space & quoted form of (POSIX path of (item i of theFiles as string))
        end repeat
        set end of command to (fileList as string)
        set end of command to ";exit" -- if Terminal > Settings > Profiles > Shell > When the shell exits != Don't close the window
    end if
    set command to command as string
    set myTab to null
    tell application "Terminal"
        if it is not running then
            set myTab to (do script command in window 1)
        else
            set myTab to (do script command)
        end if
        activate
    end tell
    return
end open
The only problem is the if block at the very beginning:
if input is null or input is {} or ((item 1 in input) as string) is "" then
What is wrong here? How can I make it work? (Assume it already works fine when the AppleScript is used via Automator.)
We have a PWA developed by our company. In order to distribute it to users' iPhones, we wrapped it in a native app built with Xcode; that is, the app shows a WebView that loads the PWA URL. Everything works perfectly, except for location access.
The PWA accesses the device location. The first time it does, the user is asked for consent twice: once by the PWA and once by the wrapper app. This is fine. When the user taps Allow, the wrapper app remembers the choice and never asks again. However, the PWA keeps asking for permission every day. If we close the app and open it again, it asks one more time, so it asks twice a day; but if we close and open the app a third time, it does not ask. It remembers the user's choice for only 24 hours.
If we install the PWA directly on the iPhone (that is, if we add the URL as a bookmark on the home screen), it asks for location permission only once. However, when we put the same app inside the Xcode wrapper app, it asks every day.
This hurts the user experience, and since our users are not tech-savvy, it is causing many issues. Is there a way to force the PWA inside the wrapper app to remember the user's choice?
Any help is very much appreciated.
Thanks,
I cannot access my corporate invoice, and I don't know why. How and where can I find it?
The example database/server provided by Apple for Live Caller ID contains a hardcoded database with a tiny number of pre-defined numbers.
However, it is not meant to be representative of a live, real-world production server.
The question is how a real-world server can be built, given the requirement that the data be KPIR-encrypted.
In real-world scenarios, the factors that affect whether a number should be blocked are continually changing and evolving, minute by minute, as new information becomes available or existing information changes.
If the database holds tens or hundreds of millions of constantly changing phone numbers, then to meet the Live Caller ID requirement that the data be KPIR-encrypted, the server would seemingly have to re-encrypt its database of millions of entries endlessly, forever.
That seems infeasible and impractical to implement.
So how do the Apple designers of this feature envisage a real-world server with millions of changing entries meeting the requirement to be KPIR-encrypted?
I am a macOS software developer using Xcode and Objective-C. I have a wallpaper app on the App Store, and recently a few users have asked whether the app can be excluded from the Screen Time statistics. Is there a way to allow users to choose whether or not the app appears in Screen Time tracking? I have already set LSUIElement to YES in the Info.plist, but the app is still being tracked in Screen Time.