Imagine a native macOS app that acts as a "launcher" for a Java game.** For example, the launcher might use the Swift Process API (or a similar mechanism) to run the java command-line tool (let's assume the user has installed Java themselves) to start the game.
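For concreteness, a minimal sketch of that launch step (the java path and the game jar are hypothetical placeholders):

import Foundation

// Minimal sketch of the launcher spawning a user-installed Java runtime.
func launchGame() throws {
    let java = Process()
    java.executableURL = URL(fileURLWithPath: "/usr/bin/java")  // assumes a user-installed JDK
    java.arguments = ["-jar", "/path/to/game.jar"]              // hypothetical game jar
    try java.run()
}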
I have seen "How to Enable Game Mode". If the native launcher app's Info.plist has the following keys set:
LSApplicationCategoryType set to public.app-category.games
LSSupportsGameMode set to true (for macOS 26+)
GCSupportsGameMode set to true
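(For reference, a sketch of the Info.plist fragment those three keys describe:)

<key>LSApplicationCategoryType</key>
<string>public.app-category.games</string>
<key>LSSupportsGameMode</key>
<true/>
<key>GCSupportsGameMode</key>
<true/>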
The launcher itself can cause Game Mode to activate when it is fullscreened. However, if the launcher spawns a Java process that opens its own window, fullscreening that Java window does not seem to activate Game Mode. Activating Game Mode for the launcher itself is unnecessary in this case; you would expect Game Mode to activate when the actual game, in the Java window, is fullscreened.
Is there a way to get Game Mode to activate in the latter case?
** The concrete case I'm thinking of is a third-party Minecraft: Java Edition launcher, but the issue can also be demonstrated in a sample project (FB13786152). The official Minecraft launcher seems able to do this, though it's not clear how. (Is its bundle identifier hardcoded in the OS to allow this? Changing a sample app's bundle identifier to match the official Minecraft launcher's produces the behavior I want, but that is obviously not a practical solution.)
I want to create a game on the App Store where the first page shows the terms and conditions and a button to start the game, with 200 categories: Saudi Arabia, TV series, games, girls only, Qatar, the Emirates, anime, Turkish series, tourism, countries, global companies, and electronics companies.
I've been working on a Swift game which is not yet launched or available for preview.
The game is designed so that the CPU sits idle while the user is thinking, then runs at sustained maximum CPU and GPU load on as many cores as possible when they make a move.
Rarely, due to OS activity or something else outside my control (for example, dropping the OS curtain even briefly and then removing it), the game or some of its threads get moved to efficiency cores, which causes major stuttering. The stuttering persists precisely until the game is idle again, at which point the work moves back to performance cores; but if the player keeps making moves, the stuttering never goes away, so I assume the computation is stuck on the efficiency cores.
The issue does not reproduce with Mac Catalyst on Intel.
How do I tell Swift to avoid efficiency cores?
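The only scheduler hint I'm aware of (an assumption on my part, not a documented fix for this exact problem) is thread quality of service: work at .userInteractive QoS is biased toward performance cores. A hedged sketch:

import Foundation

// Hedged sketch: dispatch the move computation at .userInteractive QoS,
// which biases the scheduler toward performance cores.
let moveQueue = DispatchQueue(label: "game.move-search",   // hypothetical label
                              qos: .userInteractive,
                              attributes: .concurrent)

func computeMove() {
    moveQueue.async {
        // heavy search / evaluation work goes here
    }
}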
BTW, Swift and SceneKit have AMAZING performance, especially when compared to the alternatives.
The "Explore spatial accessory input on visionOS" presentation from WWDC25 interests me. I bought both the Logitech Muse stylus and the PlayStation VR2 Sense controllers to try out with the sculpting app presented by the author, engineer Amanda Han. Unfortunately, the app itself was not included. Could the app be made available for download, along with the Xcode project? I appreciate any assistance the author and your team could provide. Thank you.
Topic: Graphics & Games / SubTopic: RealityKit
Hello,
Could someone post code that shows how to implement GCVirtualController to move a box around the screen?
I've been poking around with GCVirtualController and gotten as far as having the D-pad and A B buttons appear on the display. But how do I make it do anything?
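A hedged sketch of one way to wire this up, assuming a SpriteKit scene (the box node and movement scale are placeholders, not the only way to do it):

import GameController
import SpriteKit

// Sketch: show the virtual controller, then drive a box from the D-pad.
final class GameScene: SKScene {
    private let box = SKSpriteNode(color: .red, size: CGSize(width: 40, height: 40))
    private var virtualController: GCVirtualController?

    override func didMove(to view: SKView) {
        box.position = CGPoint(x: frame.midX, y: frame.midY)
        addChild(box)

        let config = GCVirtualController.Configuration()
        config.elements = [GCInputDirectionPad, GCInputButtonA, GCInputButtonB]
        let controller = GCVirtualController(configuration: config)
        virtualController = controller

        // Attach the handler once the controller has connected.
        controller.connect { [weak self] _ in
            controller.controller?.extendedGamepad?.dpad.valueChangedHandler = { _, x, y in
                // x and y are in -1...1; nudge the box on each change.
                self?.box.position.x += CGFloat(x) * 8
                self?.box.position.y += CGFloat(y) * 8
            }
        }
    }
}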
Topic: Graphics & Games / SubTopic: SpriteKit
Hi. I'm a 3D designer, using Blender for most of my work. The most recent Blender Conference discussed support for Open Shading Language (OSL) in the latest versions, which lets designers write custom shaders for their workflows.
At the moment, only Nvidia GPUs (via OptiX) can use this language for rendering, from what I understand, but the Blender developers have said they are waiting on other GPU manufacturers to implement the feature as well. I'm not sure if there are licensing issues here, but would this be something Apple could implement in Metal to make their hardware more attractive to the 3D design community?
Any help or knowledge on this topic would be greatly appreciated.
Hey everyone, I am currently developing an app for visionOS and using Reality Composer Pro to create scenes to put in my app.
I have a humanoid model with hair strands, and each strand has an opacity map. However, some reflections are still visible even where the opacity is zero. There is also some odd culling among the hair strands (in the left circle) and there are odd reflections on the hair cards (in the right circle).
Here are my settings for the materials.
Since all the hair strands are interconnected, it is hard to set a draw order in Xcode, so I am wondering if there is an easier way to handle transparent objects.
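One idea I've been considering (an assumption on my part, not something confirmed for this case): alpha-tested "cutout" transparency sidesteps draw-order problems entirely, because texels below the opacity threshold are discarded rather than blended. A hedged sketch:

import RealityKit

// Hedged sketch: a hair material that alpha-tests its opacity map,
// so draw order between overlapping strands no longer matters.
func makeHairMaterial(opacityMap: TextureResource) -> PhysicallyBasedMaterial {
    var material = PhysicallyBasedMaterial()
    material.blending = .transparent(
        opacity: .init(scale: 1.0, texture: .init(opacityMap)))
    // Texels with opacity below this cutoff are discarded instead of blended.
    material.opacityThreshold = 0.5
    return material
}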
Please let me know if you know anything helpful, much appreciated!
Hello,
In our game we enforce an age gate before showing Game Center sign‑in. Only after the user passes the age gate do we call GKLocalPlayer.localPlayer.authenticateHandler.
The reason I’m asking is that we want to reliably detect if the game was launched from a Game Center activity in the Games app (iOS 26+). If the user prefers to enter via activities, we don’t want to miss that event during cold start.
Our current proposal is:
Register a GKLocalPlayerListener early in didFinishLaunchingWithOptions: so the app is ready to catch events.
Queue any incoming events in our dispatcher.
Only process those events after the user passes the age gate and authentication succeeds (see the sketch below).
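A minimal sketch of that dispatcher (hedged; the exact Swift shape of the iOS 26 activity callback should be checked against the SDK):

import GameKit

// Hedged sketch of "register early, gate processing": events received before
// the age gate is passed are queued, then replayed once the gate opens.
final class ActivityDispatcher: NSObject, GKLocalPlayerListener {
    private var pending: [() -> Void] = []
    private var gateOpen = false

    /// Call after the age gate is passed and authentication has succeeded.
    func openGate() {
        gateOpen = true
        pending.forEach { $0() }
        pending.removeAll()
    }

    private func handleOrQueue(_ work: @escaping () -> Void) {
        if gateOpen { work() } else { pending.append(work) }
    }

    // The activity callback would funnel through handleOrQueue(_:), e.g.:
    // func player(_ player: GKPlayer, wantsToPlay activity: GKGameActivity) {
    //     handleOrQueue { /* route the activity into the game */ }
    // }
}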
My questions are:
Does player:wantsToPlayGameActivity:completionHandler: ever fire before authentication, or only after the local player is authenticated?
If it only fires after authentication, is our “register early but gate processing” approach the correct way to ensure we don’t miss activity launches?
Is there any recommended pattern to distinguish “activity launch” vs. “normal launch” in this age‑gate scenario?
We want to respect Apple’s age gate requirements, but also ensure activity launches are not lost if the user prefers that entry point.
Sorry if this is a stupid question — I just want to be sure we’re following the right pattern.
Thanks for any clarification or best‑practice guidance!
Hi everyone,
I’ve been developing a custom, end-to-end 3D rendering engine called Crescent from scratch using C++20 and Metal-cpp (targeting macOS and visionOS). My primary goal is to build a zero-bottleneck, GPU-driven pipeline that maximizes the potential of Apple Silicon’s Unified Memory and TBDR architecture.
While the fundamental systems are stable, I am looking for architectural feedback from Metal framework engineers regarding specific synchronization and latency challenges.
Current Core Implementations:
GPU-Driven Instance Culling: High-performance occlusion culling using a Hierarchical Z-Buffer (HZB) approach via Compute Shaders.
Clustered Forward Shading: Support for high-count dynamic lights through view-space clustering.
Temporal Stability: Custom TAA with history rejection and Motion Blur resolve.
Asset Infrastructure: Robust GUID-based scene serialization and a JSON-driven ECS hierarchy.
The Architectural Challenge:
I am currently seeing slight synchronization overhead when generating the HZB mip-chain. On Apple Silicon, I am evaluating the cost of encoder transitions versus cache-friendly barriers.
const bool canBuildHzb = m_depthTexture
    && m_hzbInitPipeline && m_hzbDownsamplePipeline && !m_hzbMipViews.empty();
if (canBuildHzb) {
    // Pass 1: seed HZB mip 0 from the depth buffer.
    MTL::ComputeCommandEncoder* hzbInit = commandBuffer->computeCommandEncoder();
    hzbInit->setComputePipelineState(m_hzbInitPipeline);
    hzbInit->setTexture(m_depthTexture, 0);
    hzbInit->setTexture(m_hzbMipViews[0], 1);
    if (m_pointClampSampler) {
        hzbInit->setSamplerState(m_pointClampSampler, 0);
    } else if (m_linearClampSampler) {
        hzbInit->setSamplerState(m_linearClampSampler, 0);
    }

    const uint32_t hzbWidth  = m_hzbMipViews[0]->width();
    const uint32_t hzbHeight = m_hzbMipViews[0]->height();
    const uint32_t threads   = 8;
    MTL::Size tgSize   = MTL::Size(threads, threads, 1);
    MTL::Size gridSize = MTL::Size((hzbWidth  + threads - 1) / threads * threads,
                                   (hzbHeight + threads - 1) / threads * threads,
                                   1);
    hzbInit->dispatchThreads(gridSize, tgSize);
    hzbInit->endEncoding();

    // Passes 2..N: downsample each mip from the previous one.
    // Note: currently one encoder per mip; see Question 1 below.
    for (size_t mip = 1; mip < m_hzbMipViews.size(); ++mip) {
        MTL::Texture* src = m_hzbMipViews[mip - 1];
        MTL::Texture* dst = m_hzbMipViews[mip];
        if (!src || !dst) {
            continue;
        }
        MTL::ComputeCommandEncoder* downEncoder = commandBuffer->computeCommandEncoder();
        downEncoder->setComputePipelineState(m_hzbDownsamplePipeline);
        downEncoder->setTexture(src, 0);
        downEncoder->setTexture(dst, 1);
        const uint32_t mipWidth  = dst->width();
        const uint32_t mipHeight = dst->height();
        MTL::Size downGrid = MTL::Size((mipWidth  + threads - 1) / threads * threads,
                                       (mipHeight + threads - 1) / threads * threads,
                                       1);
        downEncoder->dispatchThreads(downGrid, tgSize);
        downEncoder->endEncoding();
    }

    // Finally, run instance culling against the completed HZB.
    if (m_instanceCullHzbPipeline) {
        dispatchInstanceCulling(m_instanceCullHzbPipeline, true);
    }
}
My Questions:
Encoder Synchronization: Would you recommend moving this loop into a single ComputeCommandEncoder with a memory barrier between dispatches to maintain L2 cache residency (sketched below), or is the overhead of separate encoders negligible for depth downsampling on TBDR?
visionOS Bindless Latency: For stereo rendering on visionOS, what are the best practices for managing MTL4ArgumentTable updates at 90Hz+? I want to ensure that updating bindless resources for each eye doesn't introduce unnecessary CPU-to-GPU latency.
Memory Management: Are there specific hints for Memoryless textures that could be applied to intermediate HZB levels to save bandwidth during this process?
I’ve attached a screenshot of a scene rendered with the engine (PBR, SSR, and IBL).
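To make Question 1 concrete, here is the single-encoder variant I'm considering (hedged sketch, in Swift for brevity), using an encoder-scoped texture memory barrier between dispatches instead of encoder boundaries:

import Metal

// Hedged sketch: one compute encoder for the whole HZB mip chain, with a
// texture-scope barrier making each mip's writes visible to the next dispatch.
func buildHZB(commandBuffer: MTLCommandBuffer,
              downsamplePipeline: MTLComputePipelineState,
              mipViews: [MTLTexture]) {
    guard let encoder = commandBuffer.makeComputeCommandEncoder() else { return }
    encoder.setComputePipelineState(downsamplePipeline)
    for mip in 1..<mipViews.count {
        encoder.setTexture(mipViews[mip - 1], index: 0)
        encoder.setTexture(mipViews[mip], index: 1)
        encoder.dispatchThreads(
            MTLSize(width: mipViews[mip].width, height: mipViews[mip].height, depth: 1),
            threadsPerThreadgroup: MTLSize(width: 8, height: 8, depth: 1))
        // Barrier between dispatches in the same encoder; replaces the
        // endEncoding()/new-encoder boundary in the code above.
        encoder.memoryBarrier(scope: .textures)
    }
    encoder.endEncoding()
}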
As GKGameCenterViewController has been deprecated, it seems that GKAccessPoint is now the correct way to present the Game Center leaderboard.
But the placement options for GKAccessPoint are very limited and lead to misaligned UI that looks clunky, since the access point does not align with the system navigation toolbar.
Am I missing something here or am I just stuck with a lopsided UI now?
I much preferred how this worked previously, where I could present GKGameCenterViewController in a sheet from my own button.
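For reference, this is the entire placement surface I can find (the four corner values on GKAccessPoint.Location):

import GameKit

// The access point can only be pinned to one of four corners.
GKAccessPoint.shared.location = .topTrailing   // or .topLeading, .bottomLeading, .bottomTrailing
GKAccessPoint.shared.showHighlights = true
GKAccessPoint.shared.isActive = true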
I have really enjoyed looking through the code and videos related to Metal 4. Currently, my interest is to update a ReSTIR project and take advantage of more robust ways to refit acceleration structures and more powerful ways to access resources.
I am working in Swift and have encountered a couple of puzzles:
What is the 'accepted' way to create a MTL4BufferRange to store indices and vertices?
How do I properly rewrite Swift code to build and compact an Acceleration Structure?
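For reference, here is my understanding of the classic (pre-Metal 4) build-and-compact flow that I'm trying to translate; treat it as a hedged sketch for comparison, not as the Metal 4 answer:

import Metal

// Hedged sketch: build an acceleration structure, query its compacted size,
// then copy-and-compact into a smaller allocation.
func buildCompacted(device: MTLDevice, queue: MTLCommandQueue,
                    descriptor: MTLPrimitiveAccelerationStructureDescriptor) -> MTLAccelerationStructure? {
    let sizes = device.accelerationStructureSizes(descriptor: descriptor)
    guard let scratch = device.makeBuffer(length: sizes.buildScratchBufferSize,
                                          options: .storageModePrivate),
          let full = device.makeAccelerationStructure(size: sizes.accelerationStructureSize),
          let sizeBuf = device.makeBuffer(length: MemoryLayout<UInt32>.size,
                                          options: .storageModeShared),
          let cmd = queue.makeCommandBuffer(),
          let enc = cmd.makeAccelerationStructureCommandEncoder() else { return nil }

    enc.build(accelerationStructure: full, descriptor: descriptor,
              scratchBuffer: scratch, scratchBufferOffset: 0)
    enc.writeCompactedSize(accelerationStructure: full, buffer: sizeBuf, offset: 0)
    enc.endEncoding()
    cmd.commit()
    cmd.waitUntilCompleted()   // simple but stalls; fine for load-time builds

    let compactedSize = Int(sizeBuf.contents().load(as: UInt32.self))
    guard let compact = device.makeAccelerationStructure(size: compactedSize),
          let cmd2 = queue.makeCommandBuffer(),
          let enc2 = cmd2.makeAccelerationStructureCommandEncoder() else { return nil }
    enc2.copyAndCompact(sourceAccelerationStructure: full,
                        destinationAccelerationStructure: compact)
    enc2.endEncoding()
    cmd2.commit()
    return compact
}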
I do realize that this is all in Beta and will happily look through Code Samples this Fall. If other guidance is available earlier, that would be fabulous!
Thank you
I am trying to install the Game Porting Toolkit 2.1 according to the Readme file provided with the toolkit.
When I run the following command:
WINEPREFIX=~/my-game-prefix $(brew --prefix game-porting-toolkit)/bin/wine64 winecfg
I get an error message:
zsh: no such file or directory: /usr/local/opt/game-porting-toolkit/bin/wine64
I don't know how to resolve this.
When I type in the command which brew, I get the path
/usr/local/bin/brew
What am I doing wrong?
Hi there,
I'm wondering if it's possible under the iOS 26 developer beta to enable the Metal Performance HUD's MetalFX scaling info with {"MTL_HUD_ENABLED": "1"} for my app.
This information has been added on Mac, but looks to be absent on iPhone and iPad.
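For reference, a hedged sketch of enabling the HUD programmatically (the variable must be set before the first Metal device is created; whether the MetalFX section then appears on iOS is exactly the open question):

import Foundation

// Hedged sketch: set MTL_HUD_ENABLED very early in app startup,
// before any Metal device is created.
setenv("MTL_HUD_ENABLED", "1", 1)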
I have an odd bug. If I use initWithFrame as the init routine for an NSView subclass that uses layers, I don't see this bug.
But if I embed the view in a storyboard with a .nib file and use initWithCoder, I need to return true from
(BOOL) contentsAreFlipped
in the NSView subclass.
If I don't, the CALayer actually renders from (0,0) in the view upwards and off the window.
The frame sizes for the NSView and the CALayer are good when I see them in updateLayer.
Obviously I have a fix, but I would like to understand why.
Topic: Graphics & Games / SubTopic: General
I am integrating MetalFX FrameInterpolator into a custom Unity RenderGraph–based render pipeline (C++ native plugin + C# render passes), and I am hitting the following assertion at runtime:
/MetalFXDebugError.h:29: failed assertion `Color texture width mismatch from descriptor'
What makes this confusing is that all input/output textures have the correct width and height, and they exactly match the values specified in the MTLFXFrameInterpolatorDescriptor.
Setup
Input resolution: 1024 x 512
Output resolution: 2048 x 1024
MTLFXTemporalScaler is created first and then passed into MTLFXFrameInterpolator
The TemporalScaler and FrameInterpolator descriptors use the same input/output sizes and formats
All Metal textures:
Have no parentTexture
Are 2D textures
Match the descriptor sizes exactly (verified via logging)
Texture bindings at encode time
frameInterpolator.colorTexture = mtlTexColor; // 1024 x 512
frameInterpolator.prevColorTexture = mtlTexPrevColor; // 1024 x 512
frameInterpolator.motionTexture = mtlTexMotion; // 1024 x 512
frameInterpolator.depthTexture = mtlTexDepth; // 1024 x 512
frameInterpolator.uiTexture = mtlTexUI; // 2048 x 1024
frameInterpolator.outputTexture = mtlTexOutput; // 2048 x 1024
All widths/heights are logged and match:
Color : 1024 x 512 (input)
PrevColor : 1024 x 512 (input)
Motion : 1024 x 512 (input)
Depth : 1024 x 512 (input)
UI : 2048 x 1024 (output)
Output : 2048 x 1024 (output)
The TemporalScaler works correctly on its own.
The assertion only occurs when using FrameInterpolator.
Important detail about colorTexture
Originally, colorTexture was copied from BuiltinRenderTextureType.CurrentActive.
After reading that this might violate MetalFX semantics, I changed the pipeline so that:
colorTexture now comes from a dedicated private RenderGraph texture
It is not the backbuffer
It is not a drawable
It is not used as a final output
It is created before UI rendering
Despite this, the assertion still occurs.
Question
Can uiTexture for MTLFXFrameInterpolator legally come from a texture copied from BuiltinRenderTextureType.CurrentActive?
More generally:
Are there additional hidden constraints on colorTexture / prevColorTexture (such as Metal usage, storageMode, aliasing, or hazard tracking) that could cause this assertion, even when sizes match?
Does FrameInterpolator require colorTexture and prevColorTexture to be created in a very specific way (e.g. non-aliased, ShaderRead usage, identical Metal resource properties)?
Any clarification on the exact semantic requirements for colorTexture, prevColorTexture, or uiTexture in MetalFX FrameInterpolator would be greatly appreciated.
I am trying to create a simple portal, like the one in RealityKit, but using Metal instead of RealityKit. Has anyone been able to create a window or portal-like thing to show a skybox outside, in mixed reality?
Topic: Graphics & Games / SubTopic: Metal
Hi Apple,
In visionOS, for real-time streaming of large 3D scenes, I plan to create Metal buffers and textures on multiple threads and then use a compute shader on the main thread to copy the Metal resources into RealityKit, minimizing main-thread usage. Given that most of RealityKit's default APIs must run on the main actor (main thread), they are not ideal for streaming data. Is this approach the best way to handle streaming data and real-time rendering?
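To make that concrete, here is a hedged sketch of the hand-off I have in mind, assuming visionOS 2's LowLevelTexture interop (replace(using:) hands back a Metal texture that can be written within a command buffer):

import Metal
import RealityKit

// Hedged sketch: blit a background-built Metal texture into a
// RealityKit-owned LowLevelTexture without a main-actor copy.
func upload(staged: MTLTexture, into lowLevel: LowLevelTexture, queue: MTLCommandQueue) {
    guard let cmd = queue.makeCommandBuffer() else { return }
    // Returns a writable MTLTexture valid for the lifetime of this command buffer.
    let dst = lowLevel.replace(using: cmd)
    guard let blit = cmd.makeBlitCommandEncoder() else { return }
    blit.copy(from: staged, to: dst)
    blit.endEncoding()
    cmd.commit()
}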
Thank you very much.
I'm developing a game that supports GameKit turn based matches. What I don't understand is this:
Is tapping on the Game Center push notification the only way for the GKTurnBasedEventListener to trigger? What if someone misses the push message (swipes it away by accident, or something like that) but still wants to join? Is there an inbox somewhere where the pending matches can be seen or fetched?
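The closest thing to a programmatic inbox I'm aware of is GKTurnBasedMatch.loadMatches, which returns every match the local player participates in; a hedged sketch:

import GameKit

// Hedged sketch: fetch all of the local player's turn-based matches and
// filter down to the ones waiting on them.
GKTurnBasedMatch.loadMatches { matches, error in
    guard let matches else { return }
    let waitingOnMe = matches.filter { $0.currentParticipant?.player == GKLocalPlayer.local }
    print("Matches waiting on the local player: \(waitingOnMe.count)")
}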
Also, a very old WWDC video (from 2013, I think; the latest with information about turn-based matches) mentioned that the notification also includes a badge for the icon. However, I do not understand how to implement that. Is there any documentation for it?
Hi everyone,
I'm currently learning about ParticleEmitterComponent and exploring the sample app provided in the "Simulating particles in your visionOS app" documentation.
In the sample app, when I set the EmitterPreset to fireworks from the settings panel on the left side of the window and choose SystemImage, I noticed two issues:
The image applied to mainEmitter appears clipped or cropped.
The image on spawnedEmitter does not update to the selected SystemImage.
What I want to achieve:
Apply the same SystemImage to both mainEmitter and spawnedEmitter so that it displays correctly without clipping.
Remove the animation that changes the size of spawnedEmitter over time and keep it at a constant size.
Could someone explain which properties should be adjusted to achieve this behavior? Any guidance or examples would be greatly appreciated.
Thanks in advance!
When trying to play with friends, Krazy Krownz doesn't let me select multiplayer, even though my Apple Game Center account is connected and my friends' Game Center accounts are connected as well. I even tried sending an invite from Game Center to friends, and Krazy Krownz doesn't show up in the list of available multiplayer games.
I've signed out and back in, and the same issue remains.
I've tried to contact the game developer, but the website doesn't work.
Topic: Graphics & Games / SubTopic: GameKit