Since macOS 15.3.2, we have observed that when another window is moved near the App Store's install button, the button disappears.
We have attached a related video in the Feedback submission here https://feedbackassistant.apple.com/feedback/20444423
Our application overlays a transparent watermark window on top of system windows, which causes the install button in the App Store to be hidden when a user attempts to install an application. Could you advise on how to avoid this issue?
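For reference, the overlay is roughly a borderless, transparent, click-through window along these lines (a simplified Swift sketch with assumed property values, not our exact code):
import AppKit

// Sketch of a transparent, click-through watermark overlay window (assumed setup).
let overlay = NSWindow(contentRect: NSScreen.main!.frame,
                       styleMask: [.borderless],
                       backing: .buffered,
                       defer: false)
overlay.isOpaque = false
overlay.backgroundColor = .clear
overlay.ignoresMouseEvents = true   // clicks pass through to the windows below
overlay.level = .floating           // stays above normal windows such as the App Store
overlay.collectionBehavior = [.canJoinAllSpaces, .fullScreenAuxiliary]
overlay.orderFrontRegardless()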
Hello. In the iOS app I'm working on we are very tight on memory budget, and I was looking at ways to reduce our texture memory usage. However, I noticed that comparing ASTC 8x8 to ASTC 12x12, there is no actual difference in allocated memory for most of our textures, despite ASTC 12x12 having less than half the bpp of 8x8. The difference between the two only becomes apparent for textures 1024x1024 and larger, and even in that case the actual texture data is sometimes only 60% of the allocation size. I understand there must be some alignment and padding going on, but this seems extreme. For an example scene in my app with ASTC 12x12 for most textures, there is over a 100 MB difference between the ASTC size on disk and the size when loaded, so I would love to be able to recover even a portion of that memory.
Here is some test code with some measurements I've taken using an iPhone 11:
for (int i = 0; i < 11; i++) {
    MTLTextureDescriptor *texDesc = [[MTLTextureDescriptor alloc] init];
    texDesc.pixelFormat = MTLPixelFormatASTC_12x12_LDR;
    int dim = 12;
    int n = 2 << i;
    int mips = i + 1;
    texDesc.width = n;
    texDesc.height = n;
    texDesc.mipmapLevelCount = mips;
    texDesc.resourceOptions = MTLResourceStorageModeShared;
    texDesc.usage = MTLTextureUsageShaderRead;

    // Calculate the equivalent astc texture size
    int blocks = 0;
    if (mips == 1) {
        blocks = n/dim + (n%dim > 0 ? 1 : 0);
        blocks *= blocks;
    } else {
        for (int j = 0; j < mips; j++) {
            int a = 2 << j;
            int cur = a/dim + (a%dim > 0 ? 1 : 0);
            blocks += cur*cur;
        }
    }

    auto tex = [objCObj newTextureWithDescriptor:texDesc];
    printf("%dx%d, mips %d, Astc: %d, Metal: %d\n", n, n, mips, blocks*16, (int)tex.allocatedSize);
}
MTLPixelFormatASTC_12x12_LDR
128x128, mips 7, Astc: 2768, Metal: 6016
256x256, mips 8, Astc: 10512, Metal: 32768
512x512, mips 9, Astc: 40096, Metal: 98304
1024x1024, mips 10, Astc: 158432, Metal: 262144
128x128, mips 1, Astc: 1936, Metal: 4096
256x256, mips 1, Astc: 7744, Metal: 16384
512x512, mips 1, Astc: 29584, Metal: 65536
1024x1024, mips 1, Astc: 118336, Metal: 147456
MTLPixelFormatASTC_8x8_LDR
128x128, mips 7, Astc: 5488, Metal: 6016
256x256, mips 8, Astc: 21872, Metal: 32768
512x512, mips 9, Astc: 87408, Metal: 98304
1024x1024, mips 10, Astc: 349552, Metal: 360448
128x128, mips 1, Astc: 4096, Metal: 4096
256x256, mips 1, Astc: 16384, Metal: 16384
512x512, mips 1, Astc: 65536, Metal: 65536
1024x1024, mips 1, Astc: 262144, Metal: 262144
I also tried using MTLHeaps (placement and automatic) hoping they might be better, but saw nearly the same numbers.
Is there any way to have Metal allocate these textures in a more compact way to save on memory?
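For comparison, the size and alignment Metal would reserve for the same descriptor inside a heap can be queried like this (a Swift sketch assuming the default MTLDevice, not my actual Objective-C++ test harness):
import Metal

// Sketch: ask Metal what size/alignment it would use for a heap allocation of this texture.
let device = MTLCreateSystemDefaultDevice()!
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .astc_12x12_ldr,
                                                    width: 1024, height: 1024,
                                                    mipmapped: true)
let sizeAndAlign = device.heapTextureSizeAndAlign(descriptor: desc)
print("size: \(sizeAndAlign.size), align: \(sizeAndAlign.align)")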
I'm trying to position an Entity with inverse kinematics while dragging the handle only, but use forward kinematics (pose jointTransforms) otherwise.
The System, Components, Gestures and Rig all seem to work individually.
My approach is to add the IKComponent when dragging starts on the handle and remove it when the handle is released.
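Roughly, the switch looks like this (a sketch; character, handle, and makeIKComponent() are placeholders for my own setup, not actual API):
import SwiftUI
import RealityKit

// Sketch of the IK/FK switch described above; placeholder names, not actual API.
struct IKDragSwitch {
    let character: Entity
    let handle: Entity
    let makeIKComponent: () -> IKComponent   // however the IKComponent is built from the rig

    var dragGesture: some Gesture {
        DragGesture()
            .targetedToEntity(handle)
            .onChanged { _ in
                // Switch to IK the moment dragging starts on the handle.
                if !character.components.has(IKComponent.self) {
                    character.components.set(makeIKComponent())
                }
                // ... update the IK goal from the drag location ...
            }
            .onEnded { _ in
                // Switch back to FK (posing jointTransforms directly);
                // removing the component here is where the crash happens.
                character.components.remove(IKComponent.self)
            }
    }
}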
The switch into IK works, but when removing the IKComponent the app crashes:
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x8)
* frame #0: 0x00000001aa5bb188 CoreRE`(anonymous namespace)::IKComponentSolverWrapper::getSolver() + 60
frame #1: 0x00000001aa5bafb0 CoreRE`re::internal::ikParametersNodeCallback(re::Slice<re::StringID>, re::Slice<re::RigDataValue>, re::Slice<re::StringID>, re::MutableSlice<re::RigDataValue>, void*) + 48
frame #2: 0x00000001aa52d090 CoreRE`re::(anonymous namespace)::resolveEvaluationContextCallback(re::EvaluationContext&, void*) + 152
frame #3: 0x00000001aa68c024 CoreRE`re::(anonymous namespace)::$_76::__invoke(re::Slice<unsigned long>, re::(anonymous namespace)::RegisterTable&) + 1080
frame #4: 0x00000001aa678c94 CoreRE`re::EvaluationModelSingleThread::evaluate(re::EvaluationContextSlices&) + 1188
frame #5: 0x00000001aa866984 CoreRE`re::SkeletalPoseRuntimeData::executeEvaluationTree() + 136
frame #6: 0x00000001aadf37ec CoreRE`re::ecs2::SkeletalPoseComponent::calculateSkeletalPoseBufferWithRig(re::ecs2::MeshComponent*, re::ecs2::RigComponent*, re::ecs2::SkeletalPoseBufferComponent*) + 492
frame #7: 0x00000001aadf4a84 CoreRE`re::ecs2::SkeletalPoseComponentStateImpl::processPreparingComponents(re::ecs2::System::UpdateContext const&, re::ecs2::BasicComponentStateSceneData<re::ecs2::SkeletalPoseComponent>*, re::ecs2::ComponentBuckets<re::ecs2::SkeletalPoseComponent>::BucketIteration, void*) + 268
frame #8: 0x00000001aadf54b0 CoreRE`re::ecs2::SkeletalPoseSystem::update(re::ecs2::System::UpdateContext) const + 732
frame #9: 0x00000001aaed3e54 CoreRE`re::internal::Callable<re::ecs2::ECSManager::configurePhaseECSSystems(re::Scheduler::ScheduleDescriptor&, re::ecs2::ECSSystemGroup, unsigned long)::$_1, void (float)>::operator()(float&&) const + 168
frame #10: 0x00000001ab40eda4 CoreRE`re::Scheduler::executePhase(unsigned long) + 440
frame #11: 0x00000001aa6a3b74 CoreRE`re::Engine::executePhase(re::FramePhase) + 144
frame #12: 0x000000023173de9c RealitySystemSupport`RCPSharedSimulationExecuteUpdate + 64
frame #13: 0x00000002276c9820 MRUIKit`__65-[MRUISharedSimulation _doJoinWithConnectionConfiguration:error:]_block_invoke.35 + 168
frame #14: 0x00000002276c8530 MRUIKit`__addCAPreFenceHandler_block_invoke + 32
frame #15: 0x000000018af22058 QuartzCore`CA::Transaction::run_commit_handlers(CATransactionPhase) + 112
frame #16: 0x000000018aef2ad4 QuartzCore`CA::Context::commit_transaction(CA::Transaction*, double, double*) + 592
frame #17: 0x000000018af21898 QuartzCore`CA::Transaction::commit() + 652
frame #18: 0x000000018af22dac QuartzCore`CA::Transaction::flush_as_runloop_observer(bool) + 68
frame #19: 0x0000000185a26820 UIKitCore`_UIApplicationFlushCATransaction + 48
frame #20: 0x0000000184f97af0 UIKitCore`_UIUpdateSequenceRun + 76
frame #21: 0x0000000185954290 UIKitCore`schedulerStepScheduledMainSection + 168
frame #22: 0x00000001859536d8 UIKitCore`runloopSourceCallback + 80
frame #23: 0x00000001804157fc CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 24
frame #24: 0x0000000180415744 CoreFoundation`__CFRunLoopDoSource0 + 172
frame #25: 0x0000000180414eb0 CoreFoundation`__CFRunLoopDoSources0 + 232
frame #26: 0x000000018040f454 CoreFoundation`__CFRunLoopRun + 788
frame #27: 0x000000018040ecd4 CoreFoundation`CFRunLoopRunSpecific + 552
frame #28: 0x0000000190104b70 GraphicsServices`GSEventRunModal + 160
frame #29: 0x0000000185a27e30 UIKitCore`-[UIApplication _run] + 796
frame #30: 0x0000000185a2c058 UIKitCore`UIApplicationMain + 124
frame #31: 0x00000001d29558b4 SwiftUI`closure #1 (Swift.UnsafeMutablePointer<Swift.Optional<Swift.UnsafeMutablePointer<Swift.Int8>>>) -> Swift.Never in SwiftUI.KitRendererCommon(Swift.AnyObject.Type) -> Swift.Never + 164
frame #32: 0x00000001d29555dc SwiftUI`SwiftUI.runApp<τ_0_0 where τ_0_0: SwiftUI.App>(τ_0_0) -> Swift.Never + 84
frame #33: 0x00000001d265ecdc SwiftUI`static SwiftUI.App.main() -> () + 164
frame #34: 0x000000010303f1c4 Playground.debug.dylib`static PlaygroundApp.$main() at <compiler-generated>:0
frame #35: 0x000000010303f290 Playground.debug.dylib`main at PlaygroundApp.swift:7:8
frame #36: 0x0000000102f6d410 dyld_sim`start_sim + 20
frame #37: 0x000000010312e274 dyld`start + 2840
Is there a workaround or another way to switch between IK and FK?
Topic:
Graphics & Games
SubTopic:
RealityKit
Hello,
Could someone post code that shows how to implement GCVirtualController to move a box around the screen?
I've been poking around with GCVirtualController and gotten as far as having the D-pad and A B buttons appear on the display. But how do I make it do anything?
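For reference, this is roughly the setup that gets the controls on screen (a short sketch using the standard GameController API):
import GameController

// Show a virtual D-pad plus A and B buttons.
let configuration = GCVirtualController.Configuration()
configuration.elements = [GCInputDirectionPad, GCInputButtonA, GCInputButtonB]
let virtualController = GCVirtualController(configuration: configuration)
virtualController.connect()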
Topic:
Graphics & Games
SubTopic:
SpriteKit
Hi,
I’m testing Unity’s Spaceship HDRP demo on iPhone 17 Pro Max and iPad Pro M4 (iOS 26.1).
Everything renders correctly, and my custom MetalFX Spatial plugin initializes successfully — it briefly reports active scaling (e.g. 1434×660 → 2868×1320 at 50% scaling), then reverts to native rendering a few frames later.
Setup:
Xcode 16.1 (targeting iOS 18)
Unity 2022.3.62f3 (HDRP)
Metal backend
Dynamic Resolution enabled in HDRP assets and cameras
Relevant Xcode console excerpt:
[MetalFXPlugin] MetalFX_Enable(True) called.
[SpaceshipOptions] MetalFX enabled with HDRP dynamic resolution integration.
[SpaceshipOptions] Disabled TAA for MetalFX Spatial.
[SpaceshipOptions] Created runtime RenderTexture: 1434x660
[MetalFX] Spatial scaler created (1434x660 → 2868x1320).
[MetalFX] Processed frame with scaler.
[MetalFXPlugin] Sent RenderTexture (1434x660) to MetalFX. Output target 2868x1320.
[SpaceshipOptions] MetalFX target set: 1434x660
[SpaceshipOptions] Camera targetTexture cleared after MetalFX handoff.
It looks like HDRP clears the camera’s target texture right after MetalFX submits the frame, which causes it to revert to native rendering.
Is there a recommended way to persist or rebind the MetalFX output texture when using HDRP on iOS?
Unity doesn’t appear to support MetalFX in the Editor either.
Thanks!
I am trying to simulate a paste command and it seems to not want to paste. It worked at one point with the same code and now is causing issues.
My code looks like this:
func simulatePaste() {
    guard let source = CGEventSource(stateID: .hidSystemState) else {
        print("Failed to create event source")
        return
    }
    let keyDown = CGEvent(keyboardEventSource: source, virtualKey: CGKeyCode(9), keyDown: true)
    let keyUp = CGEvent(keyboardEventSource: source, virtualKey: CGKeyCode(9), keyDown: false)
    keyDown?.flags = .maskCommand
    keyUp?.flags = .maskCommand
    keyDown?.post(tap: .cgAnnotatedSessionEventTap)
    keyUp?.post(tap: .cgAnnotatedSessionEventTap)
    print("Simulated Cmd + V")
}
I know that there are some issues around permissions, so in my Info.plist I have this:
<string>NSApplication</string>
<key>NSAppleEventsUsageDescription</key>
<string>This app requires permission to send keyboard input for pasting from the clipboard.</string>
I have also disabled sandbox. It does ask me if I want to give the app permissions but after approving it, it still doesn't paste.
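In case it's relevant, here is a small sketch of checking the Accessibility permission, which, as far as I understand, is what posting CGEvents actually relies on (that's an assumption on my part):
import ApplicationServices

// Sketch: check whether the app is trusted for Accessibility, prompting the user if it isn't.
let options = [kAXTrustedCheckOptionPrompt.takeUnretainedValue() as String: true] as CFDictionary
let trusted = AXIsProcessTrustedWithOptions(options)
print("Accessibility trusted: \(trusted)")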
Hi
Looking at the documentation for screenSpaceAmbientOcclusionIntensity, I noticed that it says this is supported on visionOS 1.0+: https://developer.apple.com/documentation/scenekit/scncamera/screenspaceambientocclusionintensity
Could someone enlighten me as to how that would work? As far as I know, we don't use an SCNCamera on visionOS. So, what's the idea here? Can we activate SSAO on visionOS?
I am trying to use the SVGF denoiser to denoise my ray traced shadows (and also other textures later). I do get a smoothed image, but with wonky denoising.
I need the depth-normal textures and motion textures for the SVGF and assume that these are badly filled in my case. However, neither the above linked documentation nor the WWDC19 video explains how they should be defined. I am looking for answers to:
Is depth in red or alpha channel for the depth-normal texture?
Are the normals in screen space?
Is depth linear?
Is it distance or z coordinate in view space? Or even logarithmically scaled or something else?
Are the motion vectors supposed to be in pixels per frame?
What is the orientation of the axis? Is y up or down?
Are there are other restrictions on the formats?
Also the linked code did not help me (I have not found any SVGF so far; also all the code is in Objective-C++, not Swift, but that's a different topic).
So how should I fill these textures?
Can someone point me to the documentation where these kinds of questions are answered?
I am currently using RealityKit (perspective camera) to render a character in my swiftUI app.
The character has customization such as clothing items and hair and all objects are properly weighted to the rig.
The way the model is setup in Blender is like so: Groups of objects that will be swapped (ex: Shoes -> Shoes objects) and an armature. I then export it to usdc with all objects active. This is the resulting hierarchy:
Before I exported for animation, I simply stored the ModelEntities and swapped them in. Now that I export with the Armature modifier applied, so that animations get exported, the ModelComponent gets flattened into the armature, and swapping entities and applying new materials to them is no longer as simple.
Here's a demo blend file and usdc export with a setup like mine, having an animated bone to swing a cube and sphere, to be swapped so that only one is visible https://www.dropbox.com/scl/fo/be2q6qcztc83z7c4gj1w0/AMapxWc_ip2KZ8oTOYDUMv8?rlkey=rcdaggcxq06dyen09mw5mqmem&st=bnc0d7j0&dl=0
This is how I'm loading the entity and removing a part, with the demo files
import SwiftUI
import RealityKit

struct SwapDemoView: View {
    var body: some View {
        RealityView { content in
            let camera = PerspectiveCamera()
            camera.transform.translation = SIMD3(x: 0, y: 0.1, z: 3)

            guard let root = try? await Entity(named: "simpleSwapDemo") else { fatalError("simpleSwapDemo.usdc is not present") }
            print(root) // Get initial hierarchy

            guard let cube = root.findEntity(named: "Cube") else { fatalError("Entity cube doesn't exist") }
            cube.removeFromParent() // <-- Cube is still visible after removal
            print(root) // Get hierarchy to confirm removal of cube

            let resource = root.availableAnimations[0]
            root.playAnimation(resource.repeat())

            content.add(root)
            content.add(camera)
        }
        .background(.white)
    }
}
And this is what the entity hierarchy looks like in RealityKit before cube removal
▿ 'root' : Entity, children: 1
⟐ SynchronizationComponent
⟐ AnimationLibraryComponent
⟐ Transform
▿ 'Armature' : ModelEntity, children: 2
⟐ SynchronizationComponent
⟐ ModelComponent
⟐ SkeletalPosesComponent
⟐ AnimationLibraryComponent
⟐ Transform
▿ 'Armature' : Entity
⟐ SynchronizationComponent
⟐ Transform
▿ 'Primitives' : Entity, children: 2
⟐ SynchronizationComponent
⟐ Transform
▿ 'Sphere' : Entity, children: 1
⟐ SynchronizationComponent
⟐ Transform
▿ 'Sphere' : Entity
⟐ SynchronizationComponent
⟐ Transform
▿ 'Cube' : Entity, children: 1
⟐ SynchronizationComponent
⟐ Transform
▿ 'Cube' : Entity
⟐ SynchronizationComponent
⟐ Transform
And here's the hierarchy after removal
▿ 'root' : Entity, children: 1
⟐ SynchronizationComponent
⟐ AnimationLibraryComponent
⟐ Transform
▿ 'Armature' : ModelEntity, children: 2
⟐ SynchronizationComponent
⟐ ModelComponent
⟐ SkeletalPosesComponent
⟐ AnimationLibraryComponent
⟐ Transform
▿ 'Armature' : Entity
⟐ SynchronizationComponent
⟐ Transform
▿ 'Primitives' : Entity, children: 1
⟐ SynchronizationComponent
⟐ Transform
▿ 'Sphere' : Entity, children: 1
⟐ SynchronizationComponent
⟐ Transform
▿ 'Sphere' : Entity
⟐ SynchronizationComponent
⟐ Transform
And this is the result:
What's the best practice here? Should animation be exported separately and then applied to the skeleton? If so, how is that achieved? I'm not really sure how to proceed here.
The code is pretty simple:
#include <metal_stdlib>
using namespace metal;

// Assumed params struct; the original snippet only shows param->K being used.
struct RunParams {
    uint K;
};

kernel void naive(
    constant RunParams *param [[ buffer(0) ]],
    const device float *A     [[ buffer(1) ]], // [N, K]
    device float *output      [[ buffer(2) ]],
    uint2 gid [[ thread_position_in_grid ]])
{
    float val = 0.0f;                 // accumulator (undeclared in the original snippet)
    uint a_ptr = gid.x * param->K;
    for (uint i = 0; i < param->K; i++, a_ptr++) {
        val += A[a_ptr];
    }
    output[gid.x] = val;              // per-row result (the original wrote to an undeclared `ptr`)
}
when uint a_ptr = gid.x * param->K, the code gets 150 GFLOPS
when uint a_ptr = gid.y * param->K, the code gets 860 GFLOPS
param->K = 256;
thread per group: [16, 16]
I'd like to understand why the performance is so different, and how can I profile/diagnose this to help with further optimization.
Topic:
Graphics & Games
SubTopic:
Metal
Hi all,
I've developed some code that enables an arcball camera interaction with my scene. I've done this using components and systems. The implementation feels a bit messy as I've got gesture code on my realityView, and then a bunch of other code that uses those gesture inputs in my component and system.
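Right now it is structured roughly like this (a sketch with my own type names, nothing from Apple):
import RealityKit
import simd

// Component written to by the gesture code; read by the system below.
struct ArcballCameraComponent: Component {
    var target: SIMD3<Float> = .zero
    var radius: Float = 2
    var yaw: Float = 0
    var pitch: Float = 0
}

// System that turns the arcball parameters into the camera transform each frame.
struct ArcballCameraSystem: System {
    static let query = EntityQuery(where: .has(ArcballCameraComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for camera in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let arcball = camera.components[ArcballCameraComponent.self] else { continue }
            let rotation = simd_quatf(angle: arcball.yaw, axis: [0, 1, 0])
                         * simd_quatf(angle: arcball.pitch, axis: [1, 0, 0])
            let position = arcball.target + rotation.act([0, 0, arcball.radius])
            camera.look(at: arcball.target, from: position, relativeTo: nil)
        }
    }
}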
Is there a demo app, or some example code, that shows a nice way to encapsulate these things into one item for custom cameras, something like Apple's .realityViewCameraControls(.orbit)?
If not can anyone recommend an approach to take?
Hi everyone,
I'm using the Vision framework’s ImageAestheticsScoresObservation class (https://developer.apple.com/documentation/vision/imageaestheticsscoresobservation).
I noticed that the overallScore returned sometimes gives negative values. Could someone confirm whether the expected range of the score is from -1.0 to 1.0?
The documentation doesn’t explicitly state the possible score range, so I’d appreciate any clarification or insights.
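For context, this is roughly how the score is obtained (a sketch using the Swift Vision API; assumes a local image URL):
import Vision

// Sketch: compute the aesthetics score for an image at a URL.
func aestheticScore(for url: URL) async throws -> Float {
    let request = CalculateImageAestheticsScoresRequest()
    let observation = try await request.perform(on: url)
    return observation.overallScore   // this is the value that sometimes comes back negative
}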
Thanks in advance!
Hi,
I'm working with a very simple app that tries to read a coordinates card and paste the data into different fields. The card's layout is columns 1-10, rows A-J, and a two-digit number in each cell. In my app, I have a field for each of those cells (A1, A2...). I want the OCR to read the card and paste the info, but I just can't. I have two problems. First, the camera won't close; it remains open until I press the SAVE button (this is not good, because a user could take 3, 4, 5... pictures of the same card with, maybe, different results, and then which is the good one?). Second, after I press save, I can see the OCR kind of works (the console prints all the data read), but the info is not pasted at all.
Any idea? I know it's hard to tell what's wrong, but I've tried ChatGPT and whatever it suggests just doesn't work.
This is the code from the scanview
import SwiftUI
import Vision
import VisionKit

struct ScanCardView: UIViewControllerRepresentable {
    @Binding var scannedCoordinates: [String: String]
    var useLettersForColumns: Bool
    var numberOfColumns: Int
    var numberOfRows: Int

    @Environment(\.presentationMode) var presentationMode

    func makeUIViewController(context: Context) -> VNDocumentCameraViewController {
        let scannerVC = VNDocumentCameraViewController()
        scannerVC.delegate = context.coordinator
        return scannerVC
    }

    func updateUIViewController(_ uiViewController: VNDocumentCameraViewController, context: Context) {}

    func makeCoordinator() -> Coordinator {
        return Coordinator(self)
    }

    class Coordinator: NSObject, VNDocumentCameraViewControllerDelegate {
        let parent: ScanCardView

        init(_ parent: ScanCardView) {
            self.parent = parent
        }

        func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFinishWith scan: VNDocumentCameraScan) {
            print("Scan finished, processing image...")
            guard scan.pageCount > 0, let image = scan.imageOfPage(at: 0).cgImage else {
                print("Could not get the scanned image.")
                controller.dismiss(animated: true, completion: nil)
                return
            }
            recognizeText(from: image)
            DispatchQueue.main.async {
                print("Finishing OCR and closing the camera.")
                controller.dismiss(animated: true, completion: nil)
            }
        }

        func documentCameraViewControllerDidCancel(_ controller: VNDocumentCameraViewController) {
            print("Scan cancelled by the user.")
            controller.dismiss(animated: true, completion: nil)
        }

        func documentCameraViewController(_ controller: VNDocumentCameraViewController, didFailWithError error: Error) {
            print("Scan error: \(error.localizedDescription)")
            controller.dismiss(animated: true, completion: nil)
        }

        private func recognizeText(from image: CGImage) {
            let request = VNRecognizeTextRequest { (request, error) in
                guard let observations = request.results as? [VNRecognizedTextObservation], error == nil else {
                    print("Text recognition error: \(String(describing: error?.localizedDescription))")
                    DispatchQueue.main.async {
                        self.parent.presentationMode.wrappedValue.dismiss()
                    }
                    return
                }
                let recognizedStrings = observations.compactMap { observation in
                    observation.topCandidates(1).first?.string
                }
                print("Recognized text: \(recognizedStrings)")
                let filteredCoordinates = self.filterValidCoordinates(from: recognizedStrings)
                DispatchQueue.main.async {
                    print("Coordinates detected after filtering: \(filteredCoordinates)")
                    self.parent.scannedCoordinates = filteredCoordinates
                }
            }
            request.recognitionLevel = .accurate

            let handler = VNImageRequestHandler(cgImage: image, options: [:])
            DispatchQueue.global(qos: .userInitiated).async {
                do {
                    try handler.perform([request])
                    print("OCR finished and data processed.")
                } catch {
                    print("OCR request failed: \(error.localizedDescription)")
                }
            }
        }

        private func filterValidCoordinates(from strings: [String]) -> [String: String] {
            var result: [String: String] = [:]
            print("Text before filtering: \(strings)")
            for string in strings {
                let trimmedString = string.replacingOccurrences(of: " ", with: "")
                if parent.useLettersForColumns {
                    let pattern = "^[A-J]\\d{1,2}$" // Letters A-J followed by 1 or 2 digits
                    if trimmedString.range(of: pattern, options: .regularExpression) != nil {
                        print("Valid coordinate detected (letters): \(trimmedString)")
                        result[trimmedString] = "Value" // test assignment
                    }
                } else {
                    let pattern = "^[1-9]\\d{0,1}$" // Numbers only, 1 to 99
                    if trimmedString.range(of: pattern, options: .regularExpression) != nil {
                        print("Valid coordinate detected (numbers): \(trimmedString)")
                        result[trimmedString] = "Value"
                    }
                }
            }
            print("Final coordinates after filtering: \(result)")
            return result
        }
    }
}
Hi all,
I'm encountering an issue with Metal raytracing on my M5 MacBook Pro regarding Instance Acceleration Structure (IAS).
Intersection tests suddenly stop working after a certain point in the sampling loop.
Situation
I implemented an offline GPU path tracer that runs the same kernel multiple times per pixel (sampleCount) using metal::raytracing.
Intersection tests are performed using an IAS.
Since this is an offline path tracer, the geometry inside the IAS never changes across samples (no transforms or updates).
As sampleCount increases, there comes a point where the number of intersections drops to zero, and remains zero for all subsequent samples.
Here's a code sketch:
let sampleCount: UInt16 = 1024
for sampleIndex: UInt16 in 0..<sampleCount {
    // ...
    do {
        let commandBuffer = commandQueue.makeCommandBuffer()
        // Dispatch the intersection kernel.
        await commandBuffer.completed()
    }
    do {
        let commandBuffer = commandQueue.makeCommandBuffer()
        // Use the intersection test results from the previous command buffer.
        await commandBuffer.completed()
    }
    // ...
}
kernel void intersectAlongRay(
    const metal::uint32_t threadIndex [[thread_position_in_grid]],
    // ...
    const metal::raytracing::instance_acceleration_structure accelerationStructure [[buffer(2)]],
    // ...
)
{
    // ...
    const auto result = intersector.intersect(ray, accelerationStructure);
    switch (result.type) {
        case metal::raytracing::intersection_type::triangle: {
            // Write intersection result to device buffers.
            break;
        }
        default:
            break;
    }
}
Observations
Encoding both the intersection kernel and the subsequent result usage in the same command buffer does not resolve the problem.
Switching from IAS to Primitive Acceleration Structure (PAS) fixes the problem.
Rebuilding the IAS for each sample also resolves the issue.
Intersections produce inconsistent results even though the IAS and rays are identical — Image 1 shows a hit, while Image 2 shows a miss.
Questions
Am I misusing IAS in some way?
Could this be a Metal bug?
Any guidance or confirmation would be greatly appreciated.
I've recently run into an issue in Xcode where the sks editor's preview canvas just vanishes for every project on my computer. I don't think it is an issue with my sks files because this works as expected on another computer with the same files, and when it happens it happens for ALL sks files in all projects. There used to be menu items to toggle the canvas and its settings, but those are now gone for me in sks files (they show up for swift files that have previews, however).
Any idea what is going on here? How do I get the canvas back? I literally cannot get any work done on my primary computer because of this...
The game physics work as expected using GPTK 2.0 with Crossover 24 or Whisky. However, using GPTK 2.1 with Crossover 25, the player and camera physics misbehave. See https://www.reddit.com/r/WWEGames/comments/1jx9mph/the_siamese_elbow/ and https://www.reddit.com/r/WWEGames/comments/1jx9ow4/camera_glitch/
Full video also linked in the Reddit post.
I have also submitted this bug via the feedback assistant.
I have something like this drawing in an MTKView (see at bottom).
I am finding it difficult to figure out when can the Swift-land resources used in making the MTLBuffer(s) be released? Below, for example, is it ok if args goes out of scope (or is otherwise deallocated) at point 1, 2, or 3? Or perhaps even earlier, as soon as argsBuffer has been created?
I have been reading through various articles such as
Setting resource storage modes
Choosing a resource storage mode for Apple GPUs
Copying data to a private resource
but it's a lot to absorb and I haven't been really able to find an authoritative description of the required lifetime of the resources in CPU land.
I should mention that this is Metal 4 code. In previous versions of Metal, MTLCommandBuffer had the ability to add a completion handler to be called after the GPU has finished running the commands in the buffer, but in Metal 4 there is no such thing (if it were even needed for the purpose I am interested in).
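For clarity, this is the pre-Metal 4 pattern I'm referring to (a sketch against the classic MTLCommandBuffer API; classicCommandQueue stands in for a plain MTLCommandQueue, not the Metal 4 queue below):
// Classic Metal: a completion handler signals when the GPU is done with the buffer,
// so CPU-side resources backing the encoded work could safely be released there.
let classicCommandBuffer = classicCommandQueue.makeCommandBuffer()!
// ... encode work that reads argsBuffer ...
classicCommandBuffer.addCompletedHandler { _ in
    // Safe point to release or reuse the Swift-side data.
}
classicCommandBuffer.commit()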
Any advice and/or pointers to the definitive literature will be appreciated.
guard let argsBuffer = device.makeBuffer(bytes: &args,...
argumentTable.setAddress(argsBuffer.gpuAddress, ...
encoder.setArgumentTable(argumentTable, stages: .vertex)
// encode drawing
renderEncoder.draw...
...
encoder.endEncoding() // 1
commandBuffer.endCommandBuffer() // 2
commandQueue.waitForDrawable(drawable)
commandQueue.commit([commandBuffer]) // 3
commandQueue.signalDrawable(drawable)
drawable.present()
Topic:
Graphics & Games
SubTopic:
Metal
Question:
I'm encountering an issue with in-app purchases (IAP) in Unity, specifically for a non-consumable product in the iOS sandbox environment. Below are the details:
Environment:
Unity Version: 2022.3.55f1
Unity In-App Purchasing Version: v4.12.2
Device: iPhone 15 (iOS 18.1.1)
Connection: Wi-Fi
iOS Settings: In-App Purchases set to “Allowed” initially
Problem Behavior:
I attempted to purchase a non-consumable item for the first time. The payment is successfully completed by entering the password.
I then background the game app and navigate to the iOS Settings to set In-App Purchases to "Don't Allow."
After returning to the game and either closing or killing the app, I try to purchase the same non-consumable item again.
I checked canMakePayments() through the Apple configuration, and the app correctly detected that I could not make purchases due to the restriction.
I then navigate back to Settings and set In-App Purchases to "Allow."
Upon returning to the game, I try purchasing the non-consumable item again. A pop-up appears, saying, "You’ve already purchased this. Would you like to get it again for free?"
The issue is: Will it deduct money for the second time, and why is the system allowing the user to purchase the same non-consumable item multiple times after purchasing it once?
Is this the expected behavior for Unity In-App Purchasing, or is there something I might be missing in handling non-consumable purchases in this scenario?
Additional Information:
I’ve confirmed that the "In-App Purchases" are set to “Allowed” before attempting the purchase again.
I understand that non-consumable products should not be purchased more than once, so I’m unsure why the system is offering to let the user purchase it again.
I appreciate any insights into whether this is expected behavior or if I need to adjust how I handle the purchase flow.
I think I really have tried everything, and I did everything according to the official documentation to support Game Mode on iOS and iPadOS, but no matter what I do it just doesn't get triggered. Funny enough, it works during development when I install the app via Xcode, but as soon as it is live on the store and I install it from there, Game Mode doesn't get triggered anymore. What I have at the moment:
I have added (even though it is deprecated)
<key>GCSupportsGameMode</key>
<true/>
I have set this (but it seems to be supported only on macOS):
<key>LSApplicationCategoryType</key>
<string>public.app-category.games</string>
I have added
<key>LSSupportsGameMode</key>
<true/>
It just doesn't work. Is there anything else that needs to be done? Shouldn't the LSSupportsGameMode flag normally be enough?
The reason this is so annoying is that my app is a real-time streaming app, and I want to benefit from minimized background activity for smoother gameplay and more consistent frame rates, as mentioned in the documentation.
Hi,
I wanted to test whether it's possible to use Mesa3D OGLon12 + D3DMetal 2b3 to get GL > 4.1 support for Windows apps via D3DMetal.
Using the simple wglgears.c app (similar to glxgears) and running it like:
GALLIUM_DRIVER=d3d12 wine64 wglgears64 -info
with overridden opengl32.dll using contents from:
https://github.com/pal1000/mesa-dist-win/releases/download/24.3.0-rc1/mesa3d-24.3.0-rc1-release-msvc.7z
I get:
[D3DMetal:LOG:5E53] Unsupported API: CreateCommandQueue1
caused by:
https://gitlab.freedesktop.org/mesa/mesa/-/commit/c022c9603d500b59ff5e6f93c8a214d1785ab20a
API:
https://learn.microsoft.com/en-us/windows/win32/api/d3d12/nf-d3d12-id3d12device9-createcommandqueue1
Note that the setup is otherwise correct, because using:
GALLIUM_DRIVER=llvmpipe wine64 wglgears64 -info
I get:
GL_RENDERER = llvmpipe (LLVM 19.1.3, 128 bits)
GL_VERSION = 4.5 (Compatibility Profile) Mesa 24.3.0-rc1 (git-85ba713d76)
GL_VENDOR = Mesa
GL_EXTENSIONS = GL_ARB_multisample GL_EXT_abgr GL_EXT_bgra GL_EXT_blend_color GL_EXT_blend_minmax GL_EXT_blend_subtract GL_EXT_texture... etc.