ZstdGpuMemsetMemcpy: Increase threadgroup size and binary-search scalarization optimizations.#100

Open
Jonathan-Weinstein-AMD wants to merge 7 commits into microsoft:development from Jonathan-Weinstein-AMD:zstd-memset-memcpy-tgsize-and-scalarize-bsearch

Conversation


Jonathan-Weinstein-AMD commented Apr 11, 2026

Increasing kzstdgpu_TgSizeX_MemsetMemcpy from 32 to 64 yields nice performance improvements on both an AMD 7900 XTX and an NVIDIA RTX 4080. On AMD, the reasons may include going from wave32 to wave64, which can help in ways similar to the s_clause instruction, and the fact that all waves of a threadgroup (still just one wave in this case, but with more lanes) are launched on the same compute unit; each compute unit has its own L0/K$, and adjacent threadgroups likely touch adjacent memory.
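For reference, the change itself is just a compile-time constant bump; a minimal sketch, where only the constant name is from this PR and the entry point and body are placeholders:

```hlsl
// kzstdgpu_TgSizeX_MemsetMemcpy is the constant named in this PR; the rest
// of this snippet is an illustrative placeholder, not the actual shader.
#define kzstdgpu_TgSizeX_MemsetMemcpy 64 // was 32; 64 enables wave64 on RDNA3/RDNA4

[numthreads(kzstdgpu_TgSizeX_MemsetMemcpy, 1, 1)]
void MemsetMemcpyMain(uint3 dtid : SV_DispatchThreadID)
{
    // ... per-thread memcpy of RAW blocks / memset of RLE blocks ...
}
```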

In the insects archive, the threadgroup size change alone nets a 30-40% duration reduction on both GPUs; clocks were fixed at some non-specific frequency in both cases, but timings were fairly consistent.

Increasing the threadgroup size to 128 or 256 could further improve performance, but I decided to stop at 64 for now, which is what Xbox seems to do.


The insects archive has 233 raw blocks that sum to 18,860,773 bytes. The second optimization (f570ccb) in this PR tries to detect the (likely) case that all lanes in a wave store within the same block (via WaveActiveAllEqual(globalBlockIdx)) and, if so, does the frame lookup in a uniform context. This seems pretty neutral on the RTX 4080, but on the 7900 XTX it further reduces duration by about 38% on top of the threadgroup size change.
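The pattern looks roughly like the sketch below. WaveActiveAllEqual, globalBlockIdx, and the zstdgpu_BinarySearch name are from this PR; the signature and wrapper shown here are assumptions:

```hlsl
// Sketch only: zstdgpu_BinarySearch's real parameters aren't shown in this
// PR; assume here that it maps a block index to a frame index.
uint LookupFrameIdx(uint globalBlockIdx)
{
    if (WaveActiveAllEqual(globalBlockIdx))
    {
        // All lanes store within the same block: run the lookup once on a
        // wave-uniform key so the compiler can keep the search loop in
        // scalar registers (SGPRs) and use scalar loads through K$.
        return zstdgpu_BinarySearch(WaveReadLaneFirst(globalBlockIdx));
    }
    // Divergent fallback: lanes straddle a block boundary.
    return zstdgpu_BinarySearch(globalBlockIdx);
}
```

WaveReadLaneFirst returns the first active lane's value, which the compiler knows is uniform across the wave; that is what enables the scalar codegen on AMD.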

The last optimization also scalarizes the first binary search, which absorbs the previous optimization. This too seems pretty neutral on the RTX 4080, but on the 7900 XTX it yields about a 23% duration reduction on top of the previous change.
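Per the last commit message further down this page, the scalarized search shares its result through a single word of LDS per group, so it doesn't depend on wave size. A hedged sketch of that broadcast; apart from the one-word-of-LDS idea, every name here is an assumption:

```hlsl
// One word of LDS per group, as described in the last commit message;
// the function and helper names are illustrative.
groupshared uint gs_GroupLeaderBlockIdx;

uint LoadGroupLeaderBlockIdx(uint groupThreadIdx, uint groupLeaderKey)
{
    if (groupThreadIdx == 0)
    {
        // Only the group leader runs the first binary search.
        gs_GroupLeaderBlockIdx = FindBlockIdx(groupLeaderKey); // hypothetical helper
    }
    GroupMemoryBarrierWithGroupSync();
    return gs_GroupLeaderBlockIdx; // uniform across the whole group
}
```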

In total, all three changes net about a 70% duration reduction for the insects archive on 7900 XTX (at some low-ish fixed-clock frequency).

There's no rush for changes like these.

…o 64. For one thing, this enables wave64 on RDNA3/RDNA4. Also, all waves in a threadgroup are launched on the same compute unit, which can have L0/K$ benefits, so a larger threadgroup size can help in general; performance for the insects archive does seem to improve further when trying 128 threads.

Performance of [Memcpy RAW blocks, Memset RLE blocks], new_duration/old_duration for the insects archive (clocks fixed at some non-specific frequency; timings were fairly consistent):
7900 XTX: ~= 0.60
RTX 4080: ~= 0.67
The insects archive has 233 raw blocks that sum to 18,860,773 bytes. It is quite likely then that all threads in a wave write to the same block. Detect this and scalarize the second zstdgpu_BinarySearch to look up the frame index.

Performance of [Memcpy RAW blocks, Memset RLE blocks], new_duration/old_duration for the insects archive (clocks fixed at some non-specific frequency; timings were fairly consistent):
7900 XTX:
- Against last commit (tgsize already 64): ~= 0.62
- Against upstream: ~= 0.38
… too.

This is a bit fiddly, and I don't love the added `if (blockSize == 0) continue;` in `ParseFrame`.
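For context, a heavily hedged sketch of where such a guard might sit; `ParseFrame`'s real structure isn't shown in this PR, and the loop and helper below are invented for illustration, with only the `continue` statement quoted from it:

```hlsl
// Invented loop for illustration; only `if (blockSize == 0) continue;` is
// quoted from this PR.
for (uint blockIdx = 0; blockIdx < numBlocks; ++blockIdx)
{
    uint blockSize = GetBlockDecompressedSize(blockIdx); // hypothetical helper
    if (blockSize == 0) continue; // skip zero-decompressed-size Raw/RLE blocks
    // ... record the block's output range for the memset/memcpy pass ...
}
```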

Performance of [Memcpy RAW blocks, Memset RLE blocks], new_duration/old_duration for the insects archive (clocks fixed at some non-specific frequency; timings were fairly consistent):
7900 XTX:
- Against last commit (tgsize already 64 and WaveActiveAllEqual for the frame lookup): ~= 0.77
- Against upstream: ~= 0.29
…zero-decompressed-size Raw/RLE blocks. No per-thread LDS is used (still some LDS: a word per group, so wave size and things of that nature don't matter). K$ vs. LDS latency is perhaps comparable. I think I decided on LDS at first since I thought to do a binary search within a tail of `min(Constants.blockCount - groupLeaderBlockIdx, numActiveThreads)` when assuming there are no empty Raw/RLE blocks at that point.
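That tail search would look roughly like the following; the `min(...)` expression is quoted from the commit message above, while the key and helper names are assumptions:

```hlsl
// Assumed names throughout, except the min(...) bound quoted above.
uint FindBlockIdxInTail(uint myKey, uint groupLeaderBlockIdx, uint numActiveThreads)
{
    // Valid only if there are no empty Raw/RLE blocks at this point: each
    // active thread can then advance past at most one block boundary.
    uint searchCount = min(Constants.blockCount - groupLeaderBlockIdx, numActiveThreads);
    // Search just the tail starting at the group leader's block instead of
    // the whole block table.
    return zstdgpu_BinarySearchRange(myKey, groupLeaderBlockIdx, searchCount); // assumed helper
}
```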
