Pull requests: rasbid/llama.cpp

Pull requests list

Prefer VRAM allocations on AMD GCN Vulkan GPUs [codex]
#18 opened Oct 13, 2025 by rasbid

Tune AMD GCN Vulkan warptiles [codex]
#17 opened Oct 13, 2025 by rasbid

Tune AMD GCN Vulkan subgroup sizing [codex]
#15 opened Oct 13, 2025 by rasbid

Adjust Vulkan submit batching for AMD GCN GPUs [codex]
#14 opened Oct 13, 2025 by rasbid

Clamp Vulkan DMMV workgroup sizes on AMD GCN [codex]
#13 opened Oct 13, 2025 by rasbid

Enable subgroup float reductions on AMD GCN [codex]
#12 opened Oct 13, 2025 by rasbid

Enable AMD GCN subgroup DMMV pipelines [codex]
#7 opened Oct 12, 2025 by rasbid

Tune Vulkan matmul tiles for AMD GCN wave64 [codex]
#6 opened Oct 12, 2025 by rasbid

Improve AMD GCN detection fallback [codex]
#4 opened Oct 12, 2025 by rasbid

Add Vulkan matmul profiling instrumentation [codex]
#3 opened Oct 7, 2025 by rasbid

Add detailed startup logging for llama-cli [codex]
#1 opened Sep 23, 2025 by rasbid
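
Most of the listed changes (#4, #6, #7, #12, #13, #14, #15, #17, #18) tune the Vulkan backend specifically for AMD GCN hardware, which presupposes recognizing a GCN device at startup. The sketch below is a minimal, standalone illustration of enumerating Vulkan physical devices and flagging AMD hardware by its PCI vendor ID; the `looks_like_gcn` name heuristic is a hypothetical placeholder and does not reflect the detection logic actually implemented in these pull requests.

```cpp
// Illustrative sketch only: enumerate Vulkan devices and flag AMD hardware
// as candidates for GCN-specific tuning. Not the detection code from the PRs.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
    VkApplicationInfo app = { VK_STRUCTURE_TYPE_APPLICATION_INFO };
    app.apiVersion = VK_API_VERSION_1_1;

    VkInstanceCreateInfo ici = { VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    ici.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) {
        std::fprintf(stderr, "failed to create Vulkan instance\n");
        return 1;
    }

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(instance, &count, devices.data());

    for (VkPhysicalDevice dev : devices) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(dev, &props);

        const bool is_amd = props.vendorID == 0x1002;  // AMD PCI vendor ID
        // Hypothetical name-based heuristic; real GCN detection would need
        // a more precise check (e.g. architecture or driver queries).
        const bool looks_like_gcn = std::strstr(props.deviceName, "Radeon") != nullptr;

        std::printf("%s vendor=0x%04x %s\n",
                    props.deviceName, props.vendorID,
                    (is_amd && looks_like_gcn) ? "(candidate for GCN tuning)" : "");
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```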