
Move ordered usages to hals#8924

Merged
cwfitzgerald merged 13 commits into gfx-rs:trunk from NiklasEi:move-ordered-usages-to-hals
Mar 15, 2026

Conversation

@NiklasEi
Contributor

@NiklasEi NiklasEi commented Jan 25, 2026

Connections
Resolves #8853 (and thus bevyengine/bevy#14710)

Description
Currently, ordered usage is defined globally for all hals. According to #8853, this is problematic, and the current ordered usages are incorrect for Vulkan.
This PR moves the ordered usages into the individual hals. It also removes the two incorrect ordered usages for Vulkan.
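The core idea can be sketched as follows. This is a hypothetical illustration only: the flag names, set contents, and `needs_barrier` helper below are invented for this sketch and do not match wgpu's real types. The point is that a usage transition can skip a barrier only when both states lie entirely inside the backend's "ordered" set, and that this set should be declared per HAL rather than globally.

```rust
// Hypothetical sketch of per-HAL "ordered usages"; none of these names or
// flag sets are wgpu's actual API. Transitions that stay entirely within a
// backend's ordered set are assumed not to need a barrier.

const COPY_SRC: u32 = 1 << 0;
const SAMPLED: u32 = 1 << 1;
const DEPTH_READ: u32 = 1 << 2;

// Before this PR: one global ordered set shared by every backend.
// After: each HAL declares its own; Vulkan's drops the incorrect entries.
const GL_ORDERED: u32 = COPY_SRC | SAMPLED | DEPTH_READ;
const VULKAN_ORDERED: u32 = SAMPLED | DEPTH_READ;

/// A barrier is needed unless the state is unchanged or the whole
/// transition stays inside this backend's ordered set.
fn needs_barrier(ordered: u32, from: u32, to: u32) -> bool {
    from != to && ((from | to) & !ordered) != 0
}

fn main() {
    // With the (made-up) permissive set, GL can skip this barrier...
    assert!(!needs_barrier(GL_ORDERED, COPY_SRC, SAMPLED));
    // ...but Vulkan, with its stricter set, must emit one.
    assert!(needs_barrier(VULKAN_ORDERED, COPY_SRC, SAMPLED));
    println!("barrier decisions differ per backend");
}
```

A single global set forces every backend to be as permissive as the most permissive one, which is exactly how the skipped Vulkan barriers in #8853 could arise.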

Testing
I ran the standalone examples on my Linux machine (NVIDIA RTX 2000 Ada).

Squash or Rebase?

Squash

Checklist

  • Run cargo fmt.
  • Run taplo format.
  • Run cargo clippy --tests. If applicable, add:
    • --target wasm32-unknown-unknown
  • Run cargo xtask test to run tests.
    • I have 15 failing tests, but the same tests are also failing on the latest trunk:
      FAIL [   0.004s] naga::naga snapshots::convert_snapshots_spv
      FAIL [   1.680s] wgpu-examples [Executed Failure: BACKEND] [Vulkan/Intel(R) Graphics (RPL-S)/0] water
      FAIL [   1.790s] wgpu-examples [Executed Failure: BACKEND] [Vulkan/llvmpipe (LLVM 19.1.1, 256 bits)/2] water
      FAIL [   0.180s] wgpu-examples [Executed] [Vulkan/Intel(R) Graphics (RPL-S)/0] wgpu_examples::cooperative_matrix::tests::cooperative_matrix
      FAIL [   3.807s] wgpu-examples [Executed] [Gl/NVIDIA RTX 2000 Ada Generation Laptop GPU/PCIe/SSE2/3] multiple_render_targets
      FAIL [   3.638s] wgpu-examples [Executed] [Gl/NVIDIA RTX 2000 Ada Generation Laptop GPU/PCIe/SSE2/3] skybox-bc7
      FAIL [   3.726s] wgpu-examples [Executed] [Gl/NVIDIA RTX 2000 Ada Generation Laptop GPU/PCIe/SSE2/3] skybox-etc2
      FAIL [   5.871s] wgpu-test::wgpu-gpu [Executed] [Gl/NVIDIA RTX 2000 Ada Generation Laptop GPU/PCIe/SSE2/3] wgpu_gpu::clear_texture::clear_texture_uncompressed_gles
      FAIL [   5.571s] wgpu-examples [Executed Failure: BACKEND] [Vulkan/NVIDIA RTX 2000 Ada Generation Laptop GPU/1] water
      FAIL [   2.563s] wgpu-examples [Executed] [Vulkan/NVIDIA RTX 2000 Ada Generation Laptop GPU/1] wgpu_examples::cooperative_matrix::tests::cooperative_matrix
      FAIL [   2.107s] wgpu-test::wgpu-gpu [Executed] [Gl/NVIDIA RTX 2000 Ada Generation Laptop GPU/PCIe/SSE2/3] wgpu_gpu::render_target::draw_to_2d_view
      FAIL [   2.057s] wgpu-test::wgpu-gpu [Executed] [Gl/NVIDIA RTX 2000 Ada Generation Laptop GPU/PCIe/SSE2/3] wgpu_gpu::render_target::draw_to_3d_view
      FAIL [   2.259s] wgpu-test::wgpu-gpu [Executed] [Gl/NVIDIA RTX 2000 Ada Generation Laptop GPU/PCIe/SSE2/3] wgpu_gpu::render_target::draw_to_2d_array_view
      FAIL [   2.161s] wgpu-test::wgpu-gpu [Executed] [Gl/NVIDIA RTX 2000 Ada Generation Laptop GPU/PCIe/SSE2/3] wgpu_gpu::render_target::resolve_to_2d_array_view
      FAIL [   2.224s] wgpu-test::wgpu-gpu [Executed] [Gl/NVIDIA RTX 2000 Ada Generation Laptop GPU/PCIe/SSE2/3] wgpu_gpu::render_target::resolve_to_2d_view
      
  • If this contains user-facing changes, add a CHANGELOG.md entry.

@inner-daemons inner-daemons self-requested a review January 26, 2026 01:57
@inner-daemons
Collaborator

I'm interested in this so I'll take a look at some point, but I can't merge it myself.

@inner-daemons
Collaborator

I'm just gonna let any future reviewer know that, in this comment, this patch is confirmed to work. That makes multiple people for whom the bug goes away completely, so this is almost certainly the right fix.

Thanks for this @NiklasEi, we'll try to review & merge it soon-ish, plus I asked the maintainers about backporting this so it appears in a patch release like 28.0.1 rather than being pushed off until the next release in 2 months.

Valuable stuff!

Collaborator

@inner-daemons inner-daemons left a comment


A few comments/questions. Broadly LGTM! Also, make sure to add a changelog entry.

@NiklasEi
Contributor Author

Also make sure to add a changelog entry

There are two entries now: one for the general move of ordered usages to the hals, and one for the Vulkan fix. I hope that is OK.

@NiklasEi NiklasEi force-pushed the move-ordered-usages-to-hals branch from db9c4e1 to 996c810 Compare January 28, 2026 19:49
Member

@cwfitzgerald cwfitzgerald left a comment


Thank you for tackling this! Mostly comments about naming and docs. I have commented on a few names, though there are more places where the same pattern should be applied - I want to make sure that new readers of the code understand what exactly the members represent.

Collaborator

@inner-daemons inner-daemons left a comment


Reading through the changes here gave me a headache; it's 99% just passing around the same stuff. You couldn't pay me to spend another minute looking at this. Nothing controversial though, let's get this merged.

Member

@cwfitzgerald cwfitzgerald left a comment


I have an idea for a different design for this, but let's land this and I'll file an issue about a potential alternative design.

@cwfitzgerald
Member

Thank you!

@cwfitzgerald cwfitzgerald enabled auto-merge (squash) March 15, 2026 02:36
@cwfitzgerald cwfitzgerald merged commit 62c2f5d into gfx-rs:trunk Mar 15, 2026
57 checks passed

Development

Successfully merging this pull request may close these issues.

Vulkan backend incorrectly skips barriers

4 participants