Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

Posts under Graphics & Games topic

OS choosing performance state poorly for GPU use case
I am building a macOS desktop app (https://anukari.com) that uses Metal compute for real-time audio/DSP processing, because the problem is highly parallelizable and too computationally expensive for the CPU. However, the way I am using the GPU seems to prevent the OS from ever increasing the power/performance state, even when my app is fully compute-limited. Because this is a real-time audio synthesis application, not being able to take advantage of the full clock speeds the GPU is capable of is a huge problem: the app can't keep up with real time.

I discovered this issue while profiling the app using Instruments' Metal tracing (and Game tracing) modes. In the profiling configuration under "Metal Application" there is a drop-down to select the "Performance State." If I run the application under Instruments with Performance State set to Maximum, it runs amazingly well and all my problems go away. For comparison, when I run the app on its own, outside of Instruments, the expensive GPU computation takes around 2x as long to complete, meaning the app performs half as well.

I've done a ton of work to micro-optimize my Metal compute code, based on every scrap of information from the WWDC videos, etc. A problem I'm running into is that the more efficient I make my code, the less it seems to signal to the OS that I want high GPU clock speeds. Part of why the OS is confused may be that in most use cases my computation can be done using only a small number of Metal threadgroups. I'm guessing that the OS heuristics see that only a small fraction of the GPU is saturated and fail to scale up the power/clock state.

I'm not sure what to do here; I'm in a bit of a bind. One possibility is to intentionally schedule busy work -- spin threadgroups just to waste energy and signal to the OS that I need higher clock speeds (see the sketch below). This is obviously a really bad idea, but it might work. Is there any other (better) way for my app to signal to the OS that it is doing real-time, latency-sensitive computation on the GPU and needs the clock speeds to be scaled up?

Note that Game Mode is not really an option, as my app also runs as an AU plugin inside hosts like GarageBand, so it can't be made fullscreen, etc.
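For reference, here is a minimal sketch of the "busy work" idea mentioned above -- not a recommendation, just an illustration. It compiles a trivial spin kernel from source and dispatches it alongside the real workload so that more of the GPU appears saturated; the kernel name, loop count, and grid size are made-up placeholders.

```swift
import Metal

// Placeholder spin kernel: cheap ALU work that the compiler cannot remove,
// purely to make the GPU look busier to the frequency-scaling heuristics.
let spinSource = """
#include <metal_stdlib>
using namespace metal;

kernel void spin_kernel(device atomic_uint *counter [[buffer(0)]],
                        uint tid [[thread_position_in_grid]])
{
    uint acc = tid;
    for (uint i = 0; i < 4096; ++i) {
        acc = acc * 1664525u + 1013904223u;   // cheap LCG to keep ALUs busy
    }
    // Write something so the loop cannot be optimized away.
    if (acc == 0xFFFFFFFFu) {
        atomic_fetch_add_explicit(counter, 1, memory_order_relaxed);
    }
}
"""

func makeSpinPipeline(device: MTLDevice) throws -> MTLComputePipelineState {
    let library = try device.makeLibrary(source: spinSource, options: nil)
    let function = library.makeFunction(name: "spin_kernel")!
    return try device.makeComputePipelineState(function: function)
}

func encodeBusyWork(into commandBuffer: MTLCommandBuffer,
                    pipeline: MTLComputePipelineState,
                    counter: MTLBuffer) {
    guard let encoder = commandBuffer.makeComputeCommandEncoder() else { return }
    encoder.setComputePipelineState(pipeline)
    encoder.setBuffer(counter, offset: 0, index: 0)
    // Dispatch a grid larger than the real workload so more of the GPU is occupied.
    let threadsPerThreadgroup = MTLSize(width: pipeline.threadExecutionWidth, height: 1, depth: 1)
    let grid = MTLSize(width: 64 * pipeline.threadExecutionWidth, height: 1, depth: 1)
    encoder.dispatchThreads(grid, threadsPerThreadgroup: threadsPerThreadgroup)
    encoder.endEncoding()
}
```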
6 replies · 0 boosts · 904 views · May ’25
Background GPU Access availability
I would love to use Background GPU Access to do some video processing in the background. However, the documentation of BGContinuedProcessingTaskRequest.Resources.gpu clearly states: "Not all devices support background GPU use. For more information, see Performing long-running tasks on iOS and iPadOS."

Is there a list available of currently released devices that do (or don't) support background GPU usage? That would help us understand what part of our user base can use this feature (and what hardware we need to test on as developers). For example, it seems that it isn't supported on an iPad Pro M1 with the current iOS 26 beta, and the simulators also appear not to support the background GPU resource. So it would be great to understand which hardware is capable of using this feature!
4 replies · 0 boosts · 896 views · Jul ’25
OpenGL ES support on Apple Silicon Simulators
Hey folks, I have a legacy game that runs on OpenGL ES, and it no longer works in the simulators running on Apple Silicon, i.e. iPhone 15 Pro or the 13" iPads. And yes, I'm also running on Apple Silicon (M1 Max). The apps work fine on the actual devices, but the simulator crashes on any glDrawElements call with a stack that looks like the following:

I have not yet seen an announcement about this no longer working, but I've seen mention in other apps of dropping GL support (https://github.com/maplibre/maplibre-native/issues/2351). Can anyone shed some light? I'm obviously going to try to fix it, or find a recent sample app from which to start to see what might be up. Or move to Metal, but I hadn't bargained for that level of effort at the moment ;) Any suggestions appreciated!
10 replies · 0 boosts · 2.5k views · Mar ’25
Reality Composer Pro 2.0 shader graphs can't be loaded on visionOS 1
Using Reality Composer Pro 2.0, I created a simple shader graph that displays a texture on an unlit surface.

On the visionOS 2 beta, I can successfully use ShaderGraphMaterial(named:from:in:) to load that shader graph material and assign it to a model entity. However, on visionOS 1.2 and earlier, either in the Simulator or on device, ShaderGraphMaterial(named:from:in:) fails and I see the following logged to the console:

If, using Reality Composer Pro 1.0, I experimentally open the same project and delete and recreate exactly the same nodes, then ShaderGraphMaterial(named:from:in:) works as expected on visionOS 1.2.

Is it a known issue that Reality Composer Pro 2 can't be used with visionOS 1? Is this intentional behavior? I've submitted feedback as FB14828873, including a sample project and repro steps. If possible, I would appreciate guidance from an Apple engineer, like "This is a known issue for [list of node types]," "Reality Composer Pro 2 is not supported for visionOS 1 development, please refer to [documentation]," or "We recommend [workaround]." Thank you.
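For context, a minimal sketch of the load-and-assign path described above, with a plain-material fallback for when the load fails on visionOS 1.x. The material path, scene file name, and realityKitContentBundle are placeholders from the standard Reality Composer Pro project template.

```swift
import RealityKit
import RealityKitContent  // assumed name of the Reality Composer Pro content package

// Loads the shader graph material and falls back to UnlitMaterial if it fails
// (which is what happens on visionOS 1.2 in this report).
func applyMaterial(to model: ModelEntity) async {
    do {
        let material = try await ShaderGraphMaterial(
            named: "/Root/UnlitTextureMaterial",   // placeholder material path
            from: "Scene.usda",                    // placeholder scene file
            in: realityKitContentBundle)
        model.model?.materials = [material]
    } catch {
        print("ShaderGraphMaterial load failed: \(error)")
        model.model?.materials = [UnlitMaterial(color: .white)]
    }
}
```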
7 replies · 0 boosts · 1.5k views · May ’25
3D Skeletal animation in metal-cpp?
Hey all! I've got my hands on a refurbished Mac mini M1 and am already diving into Metal. At the moment I'm studying graphics programming with OpenGL and have gotten to the point where I can almost create a 3D cube. However, I noticed there aren't many tutorials for metal-cpp, just demos. One thing I love about graphics programming is skinning/skeletal animation, and I can't find any sources or tutorials on how to load skeletal animations with metal-cpp. So, if I create my character in Blender, with all of its animations exported to an .FBX or maybe a .DAE file, and want to load that into the Metal API with metal-cpp, how do I go about doing that?
1 reply · 0 boosts · 385 views · Mar ’25
Utilizing Point Cloud data from `ObjectCaptureSession` in WWDC23
I am currently developing a mobile and server-side application using the new ObjectCaptureSession on iOS and PhotogrammetrySession on macOS, and I have two questions regarding the newly updated APIs.

From the WWDC23 session "Meet Object Capture for iOS," I know that the Object Capture API uses point cloud data captured from the iPhone LiDAR sensor. I want to know how to take the point cloud data captured on iPhone during ObjectCaptureSession and use it to create 3D models with PhotogrammetrySession on macOS. From the WWDC21 example code, I know that PhotogrammetrySession utilizes the depth map from the captured photos by embedding it into the HEIC images, and uses that data to create a 3D asset on macOS. I would like to know whether point cloud data is also embedded into the images to be used during 3D reconstruction, and if not, how else the point cloud data is passed in and used during reconstruction.

My other question: I know that point cloud data is returned as a result of a PhotogrammetrySession.Request. I would like to know whether this point cloud data is the same set of data captured during ObjectCaptureSession (WWDC23) that is used to create ObjectCapturePointCloudView.

Thank you to everyone for the help in advance. It's a real pleasure to be developing with all the updates to RealityKit and the Object Capture API.
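For reference, a macOS-side sketch (assuming macOS 14 or later, where the .pointCloud request is available) of asking PhotogrammetrySession for a point cloud alongside the model. The input folder and output path are placeholders for whatever the iOS capture produced.

```swift
import RealityKit

// Runs reconstruction over a folder of captured images and requests both a
// model file and a point cloud result.
func reconstruct(from imagesFolder: URL, to modelURL: URL) throws {
    let session = try PhotogrammetrySession(input: imagesFolder,
                                            configuration: PhotogrammetrySession.Configuration())

    Task {
        do {
            for try await output in session.outputs {
                switch output {
                case .requestComplete(let request, let result):
                    if case .pointCloud(let cloud) = result {
                        // This is the PhotogrammetrySession point cloud; whether it is the
                        // same data behind ObjectCapturePointCloudView is the open question above.
                        print("Received point cloud result: \(cloud)")
                    }
                    print("Completed request: \(request)")
                case .requestError(let request, let error):
                    print("Request \(request) failed: \(error)")
                default:
                    break
                }
            }
        } catch {
            print("Output stream failed: \(error)")
        }
    }

    try session.process(requests: [
        .modelFile(url: modelURL),
        .pointCloud
    ])
}
```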
6 replies · 0 boosts · 2.2k views · Jul ’25
Xcode Vulkan is opening two windows instead of one.
I'm a newbie at Vulkan and Xcode. I have my project on GitHub: https://github.com/flocela/OrangeSpider/. Whenever I run, two windows open instead of only one.

I added testing, which means I have an OrangeSpider.xctestplan in the OrangeSpider/TestsOrangeSpider/ folder. This is my first time adding testing to an Xcode project, so I think this may be where the problem is.

I also get this error message:

ViewBridge to RemoteViewService Terminated: Error Domain=com.apple.ViewBridge Code=18 "(null)" UserInfo={com.apple.ViewBridge.error.hint=this process disconnected remote view controller -- benign unless unexpected, com.apple.ViewBridge.error.description=NSViewBridgeErrorCanceled}
4 replies · 0 boosts · 157 views · Jul ’25
Updated Object Capture -- needs LiDAR?
I have two apps released -- ReefScan and ReefBuild -- that are based on the WWDC21 sample photogrammetry apps for iOS and macOS. Those run fine without LiDAR and are used mostly for underwater models, where LiDAR does not work at all. It now appears that the updated photogrammetry session requires LiDAR data, and building my app with the current Xcode results in a non-working app. Has the "old" version of the photogrammetry session been broken by this update? It worked very well previously, so I would hate to see this regression to needing LiDAR. Most of my users do not have it.
3 replies · 0 boosts · 552 views · Mar ’25
Embedded links not clickable in PDFs for iOS devices
I have an SPFx React application where I am printing the HTML page content using the default JavaScript window.print() functionality. Once I save the page as a PDF from the print preview window and open it using Adobe Acrobat, the links (e.g. Google) within the content are not clickable and appear as plain text.

I have tried printing random pages after searching for keywords in Google and saving the files as PDFs, but unfortunately the links are still not clickable there either. To check whether it is an Adobe Acrobat issue, I performed the same print functionality on Android devices and shared the PDF files to iOS devices; in that case, when opened using Adobe Acrobat, the links are clickable.

I am wondering whether this is related to how the default print functionality works on iPadOS and iOS devices. Any insights would be really helpful. Thanks!!! Note: The links are clickable on macOS as well as on Windows.

#ios #ipados #javascript #spfx #react
2 replies · 0 boosts · 129 views · May ’25
In Metal compute kernels, when do thread variables get spilled into the device memory?
How many 32-bit variables can I use concurrently in a single thread of a Metal compute kernel without worrying about the variables getting spilled into device memory? Alternatively: how many 32-bit registers does a single thread have available for itself?

Let's say that each thread of my compute kernel needs to store and work with its own array of N float variables, where N can be 128, 256, 512 or more. To achieve the maximum possible performance, I do not want the local thread variables to get spilled into slow device memory. I want all N variables to be stored "on-chip," in the thread memory space.

To make the question more concrete, let's say there is an array thread float localArray[N]. Assuming an unrealistic hypothetical scenario where localArray is the only variable in the whole kernel, what is the maximum value of N for which no portion of localArray would get spilled into device memory? I searched the Metal feature set tables, but I could not find any details.
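One indirect way to probe this: there is no public API that reports per-thread register counts, but a compute pipeline's maxTotalThreadsPerThreadgroup shrinks as per-thread register usage grows, so compiling the kernel for several values of N and comparing the numbers hints at where occupancy (and eventually spilling) starts to suffer. The sketch below assumes the kernel declares N as function constant 0 and is named "localArrayKernel" -- both placeholders.

```swift
import Metal

// Compile a variant of the kernel for a given N and print occupancy-related
// numbers reported by the compiled pipeline.
func reportOccupancy(device: MTLDevice, library: MTLLibrary, n: UInt32) throws {
    let constants = MTLFunctionConstantValues()
    var value = n
    constants.setConstantValue(&value, type: .uint, index: 0)

    let function = try library.makeFunction(name: "localArrayKernel", constantValues: constants)
    let pipeline = try device.makeComputePipelineState(function: function)

    print("N = \(n): maxTotalThreadsPerThreadgroup = \(pipeline.maxTotalThreadsPerThreadgroup), "
          + "threadExecutionWidth = \(pipeline.threadExecutionWidth)")
}

// Usage sketch:
// for n: UInt32 in [128, 256, 512, 1024] {
//     try reportOccupancy(device: device, library: library, n: n)
// }
```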
1 reply · 0 boosts · 600 views · Mar ’25
[tvOS] Reacting to button taps
I've just started working on my first SpriteKit game that will eventually run on both tvOS and iOS, and I'm looking at how to build a "button". So far, I've got a custom node that looks like:

class MyButton: SKSpriteNode {
    ...
    #if os(tvOS)
    override var canBecomeFocused: Bool { true }
    override func didUpdateFocus(...) { ... }
    #endif
}

The above lets me nicely handle focus changes in tvOS, and now I'm looking at reacting to selecting the button. Searching around, all the articles/questions/posts are from 2015-2016, which is a LOOOONG time ago. Most of the guidance appears to be to add a tap gesture recognizer in the owning scene and have the scene hand it off to the button. That seems pretty brittle, and I'd much prefer the button itself to be responsible for its own tap management.

So, I guess my question is whether I should just add a gesture recognizer to my custom button class. Is this inefficient if I end up having 7-8 buttons on the screen and each one has its own gesture recognizer? Somewhat related, all of the 10-year-old advice is that if we add recognizers to scenes, then they need to be removed from the view controller... however, in the modern world with SwiftUI, my project doesn't even have a view controller (yet, anyway). What gesture recognizer lifecycle management do I need in a SpriteKit scene that is presented within a SpriteView?

Or is there a better way? I was kind of hoping that overriding pressesBegan() (or something similar) in my custom button might be triggered on tvOS, the way touchesBegan() lets me manage touches in the iOS variant of my app. Any pointers or suggestions would be gladly received. Thanks.
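For reference, a sketch of the gesture-recognizer route on tvOS, assuming the scene is shown in an SKView (or SwiftUI SpriteView) and that the buttons are focusable nodes like the MyButton class above. One recognizer on the view is enough: it routes the select press to whichever node currently has focus, so per-button recognizers aren't needed. The activate() call is a hypothetical handler on MyButton.

```swift
import SpriteKit
import UIKit

class GameScene: SKScene {
    #if os(tvOS)
    private var selectRecognizer: UITapGestureRecognizer?

    override func didMove(to view: SKView) {
        // One recognizer for the whole scene, limited to the Siri Remote's select press.
        let recognizer = UITapGestureRecognizer(target: self, action: #selector(handleSelect(_:)))
        recognizer.allowedPressTypes = [NSNumber(value: UIPress.PressType.select.rawValue)]
        view.addGestureRecognizer(recognizer)
        selectRecognizer = recognizer
    }

    override func willMove(from view: SKView) {
        // Lifecycle management: remove the recognizer when the scene leaves the view.
        if let recognizer = selectRecognizer {
            view.removeGestureRecognizer(recognizer)
        }
    }

    @objc private func handleSelect(_ recognizer: UITapGestureRecognizer) {
        guard let view = recognizer.view else { return }
        // Route the press to the currently focused button, if any.
        if let button = UIFocusSystem.focusSystem(for: view)?.focusedItem as? MyButton {
            button.activate()   // hypothetical method on MyButton
        }
    }
    #endif
}
```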
2 replies · 0 boosts · 706 views · Jan ’25
Trouble with MDLMesh.newBox()
I'm trying to build an MDLMesh and then add normals:

let mdlMesh = MDLMesh.newBox(withDimensions: SIMD3<Float>(1, 1, 1),
                             segments: SIMD3<UInt32>(2, 2, 2),
                             geometryType: MDLGeometryType.triangles,
                             inwardNormals: false,
                             allocator: allocator)
mdlMesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0)

When I render the mesh, some normals are (0,0,0). I don't know if the problem is in the mesh or in the conversion to MTKMesh. Is there a way to examine an MDLMesh with the geometry viewer? When I look at the variable values for my mdlMesh, I get this:

Not too useful. I don't know how to track down the normals. What's the best way to find out where the normals are getting broken?
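One way to inspect the normals directly on the MDLMesh, before any MTKMesh conversion, is vertexAttributeData(forAttributeNamed:as:). A minimal sketch that prints any vertex whose normal is (near) zero:

```swift
import ModelIO
import simd

// Walks the mesh's normal attribute and reports degenerate normals.
func findZeroNormals(in mesh: MDLMesh) {
    guard let attribute = mesh.vertexAttributeData(forAttributeNamed: MDLVertexAttributeNormal,
                                                   as: .float3) else {
        print("Mesh has no normal attribute")
        return
    }
    var pointer = attribute.dataStart
    for index in 0..<mesh.vertexCount {
        // .float3 data is three Floats per vertex, spaced by `stride` bytes.
        let floats = pointer.assumingMemoryBound(to: Float.self)
        let normal = SIMD3<Float>(floats[0], floats[1], floats[2])
        if simd_length(normal) < 1e-4 {
            print("Vertex \(index) has a near-zero normal: \(normal)")
        }
        pointer = pointer.advanced(by: attribute.stride)
    }
}
```

Running this on the mesh right after addNormals(withAttributeNamed:creaseThreshold:), and again on the data backing the MTKMesh, should show whether the zeros originate in ModelIO or in the conversion.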
1 reply · 0 boosts · 125 views · May ’25
How do I test a new leaderboard added to a Published app with prior leaderboard?
Hi, I have attempted to find a fix for my issue via documentation online and one phone support call (not code-level support), to no end. I could continue to try various things, but would like to see if someone else has encountered this issue and found a fix for it.

Background: My game app is live on the App Store and has one classic leaderboard. I am now getting ready to submit an update to the app, which also entails adding a new recurring leaderboard. I added the leaderboard in App Store Connect; however, I have NOT uploaded my new build yet. I have also not added my leaderboards (currently live and not live) to any set.

When I try to submit scores to the new non-live leaderboard using GKLeaderboard.submitScore(_:context:player:leaderboardIDs:completionHandler:), it works (gives me no error). When I try to load scores from the new non-live leaderboard with GKLeaderboard.loadLeaderboards(IDs:completionHandler:) and loadEntries(for:timeScope:range:completionHandler:), it fails with the error "leaderboardID not found."

I could try (and will) uploading the new build to App Store Connect and associating the new leaderboard with it before testing again, and try associating each leaderboard with a set. Is there anything else that I should be aware of? Thanks in advance.
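For reference, a minimal sketch of the failing load path, with a placeholder leaderboard ID, to confirm whether the new ID is visible to the signed-in (sandbox) player at all:

```swift
import GameKit

// Loads a single leaderboard by ID and then its top entries.
func checkLeaderboard(id: String) {
    GKLeaderboard.loadLeaderboards(IDs: [id]) { leaderboards, error in
        if let error = error {
            print("loadLeaderboards failed: \(error)")
            return
        }
        guard let leaderboard = leaderboards?.first else {
            print("No leaderboard returned for \(id)")
            return
        }
        leaderboard.loadEntries(for: .global,
                                timeScope: .allTime,
                                range: NSRange(location: 1, length: 10)) { localEntry, entries, total, error in
            if let error = error {
                print("loadEntries failed: \(error)")
            } else {
                print("Loaded \(entries?.count ?? 0) of \(total) entries; local entry: \(String(describing: localEntry))")
            }
        }
    }
}

// Usage sketch with a placeholder ID:
// checkLeaderboard(id: "new.recurring.leaderboard")
```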
2 replies · 0 boosts · 109 views · Jul ’25
Is Metal usable from Swift 6?
Hello ladies and gentlemen, I'm writing a simple renderer on the main actor using Metal and Swift 6. I am at the stage now where I want to create a render pipeline state using the asynchronous API:

@MainActor
class Renderer {
    let opaqueMeshRPS: MTLRenderPipelineState
    init(/*...*/) async throws {
        let descriptor = MTLRenderPipelineDescriptor()
        // ...
        opaqueMeshRPS = try await device.makeRenderPipelineState(descriptor: descriptor)
    }
}

I get a compilation error if I try to use the asynchronous version of the makeRenderPipelineState method:

Non-sendable type 'any MTLRenderPipelineState' returned by implicitly asynchronous call to nonisolated function cannot cross actor boundary

Which is understandable, since MTLRenderPipelineState is not Sendable. But it looks like no matter where or how I try to access this method, I just can't do it -- the API exists, but you can't use it; you can only use the synchronous versions. Am I missing something, or is Metal just not usable with Swift 6 right now?
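For completeness, a minimal sketch of the synchronous fallback mentioned above: calling the throwing, synchronous makeRenderPipelineState(descriptor:) from the main actor, so the non-Sendable pipeline state never crosses an actor boundary. Function names and the pixel format are placeholders.

```swift
import Metal

@MainActor
final class Renderer {
    let opaqueMeshRPS: MTLRenderPipelineState

    init(device: MTLDevice, library: MTLLibrary) throws {
        let descriptor = MTLRenderPipelineDescriptor()
        descriptor.vertexFunction = library.makeFunction(name: "vertex_main")      // placeholder names
        descriptor.fragmentFunction = library.makeFunction(name: "fragment_main")
        descriptor.colorAttachments[0].pixelFormat = .bgra8Unorm

        // Synchronous + throwing variant; it blocks the calling thread while the
        // pipeline compiles, but sidesteps the Sendable diagnostic.
        opaqueMeshRPS = try device.makeRenderPipelineState(descriptor: descriptor)
    }
}
```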
1 reply · 0 boosts · 609 views · Mar ’25
MetalFX upscaler/denoiser and instant changes
Hi, what's the best way to handle drastic changes in scene characteristics with the new MTLFXTemporalDenoisedScaler?

Let's say a visible object in the scene radically changes its material properties. I can modify the albedo and roughness textures accordingly, but I suspect the history will be corrupted: blending visual information between the new frame and the previous ones might be nonsense. I guess the problem is the same when objects appear or disappear instantly.

Does the upscaler manage these events for us (by lowering blending), or should we use the reactive mask or the denoise strength mask or something like that to handle them?
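For whole-scene discontinuities (cuts, teleports), the plain MTLFXTemporalScaler exposes a per-frame reset flag that drops the accumulated history. A minimal sketch follows, assuming the denoised scaler offers an equivalent control (worth confirming against the MetalFX headers); per-object changes are presumably what the masks are meant for.

```swift
import Metal
import MetalFX

// Encodes the temporal upscale, dropping history on frames where the scene
// changed too much for blending with previous frames to make sense.
func encodeUpscale(scaler: MTLFXTemporalScaler,
                   commandBuffer: MTLCommandBuffer,
                   sceneCutThisFrame: Bool) {
    // When true, the scaler restarts accumulation instead of blending with stale history.
    scaler.reset = sceneCutThisFrame

    // Color / depth / motion / output textures are assumed to be assigned elsewhere.
    scaler.encode(commandBuffer: commandBuffer)
}
```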
2 replies · 0 boosts · 130 views · Jul ’25
Apple API "CGDisplayCopyAllDisplayModes provides resolution list which does not match with system resolution for the external monitors.
Our application is trying to read all resolutions of an external monitor. We have observed that, for external monitors, there is a mismatch between the resolution list in our application and the resolution list in System Settings. We are using the Apple API CGDisplayCopyAllDisplayModes to read the resolutions.
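One thing worth checking: called with a nil options dictionary, CGDisplayCopyAllDisplayModes omits some modes. Passing the kCGDisplayShowDuplicateLowResolutionModes option typically returns additional (duplicate/HiDPI) modes, which may be closer to what System Settings shows; filtering with isUsableForDesktopGUI() then narrows the list back to user-selectable modes. A minimal sketch:

```swift
import CoreGraphics

// Returns the display modes for a display, including duplicate/low-resolution
// variants, filtered to modes usable for the desktop GUI.
func allDisplayModes(for displayID: CGDirectDisplayID) -> [CGDisplayMode] {
    let options = [kCGDisplayShowDuplicateLowResolutionModes: kCFBooleanTrue!] as CFDictionary
    guard let modes = CGDisplayCopyAllDisplayModes(displayID, options) as? [CGDisplayMode] else {
        return []
    }
    return modes.filter { $0.isUsableForDesktopGUI() }
}
```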
1 reply · 0 boosts · 564 views · Mar ’25
vImageConverter_CreateWithCGImageFormat Fails with kvImageInvalidImageFormat When Trying to Convert CMYK to RGB
So I get JPEG data in my app. Previously I was using the higher-level NSBitmapImageRep API and just feeding the JPEG data to it. But now I've noticed on Sonoma that if I get a JPEG in the CMYK color space, the NSBitmapImageRep renders mostly black and is corrupted. So I'm trying to drop down to the lower-level APIs. Specifically, I grab a CGImageRef and try to use the Accelerate API to convert it to another format (to hopefully work around the issue):

CGImageRef sourceCGImage = CGImageCreateWithJPEGDataProvider(jpegDataProvider,
                                                             NULL,
                                                             shouldInterpolate,
                                                             kCGRenderingIntentDefault);

Now I use vImageConverter_CreateWithCGImageFormat with the following values for the source and destination formats:

Source format (derived from sourceCGImage):
bitsPerComponent = 8
bitsPerPixel = 32
colorSpace = (kCGColorSpaceICCBased; kCGColorSpaceModelCMYK; Generic CMYK Profile)
bitmapInfo = kCGBitmapByteOrderDefault
version = 0
decode = 0x000060000147f780
renderingIntent = kCGRenderingIntentDefault

Destination format:
bitsPerComponent = 8
bitsPerPixel = 24
colorSpace = (DeviceRGB)
bitmapInfo = 8197
version = 0
decode = 0x0000000000000000
renderingIntent = kCGRenderingIntentDefault

But vImageConverter_CreateWithCGImageFormat fails with kvImageInvalidImageFormat. Now, if I change the destination format to use 32 bitsPerPixel and use alpha in the bitmapInfo, vImageConverter_CreateWithCGImageFormat does not return an error, but I get a black image, just like with NSBitmapImageRep.
14 replies · 0 boosts · 1.4k views · Aug ’25