forked from ggml-org/llama.cpp
Pull requests: rasbid/llama.cpp
#16 Allow subgroup reductions on stable AMD GCN Vulkan drivers (label: codex), opened Oct 13, 2025 by rasbid
#10 Prefer device-local allocations on AMD GCN Vulkan GPUs (label: codex), opened Oct 13, 2025 by rasbid
#5 Add fallback for AMD shader core count without shader core properties2 (label: codex), opened Oct 12, 2025 by rasbid