Pull requests: uxlfoundation/oneDNN
graph: backend: dnnl: exec: gated_mlp uses unique_ptr instead of raw ptr (#5060)
Labels: component:graph-api (Codeowner: @oneapi-src/onednn-graph)
Opened Apr 21, 2026 by TaoLv (Contributor)
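The #5060 title describes replacing a raw owning pointer with `std::unique_ptr` in the gated_mlp executable. The diff itself is not shown here, so the following is only a generic sketch of that refactor; `kernel_t`, `gated_mlp_exec_t`, and `make_and_run` are invented names, not taken from the oneDNN sources.

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Hypothetical resource that stands in for whatever the executable
// previously managed through a raw owning pointer.
struct kernel_t {
    int id;
    explicit kernel_t(int i) : id(i) {}
};

struct gated_mlp_exec_t {
    // Owning smart pointer: the kernel is released automatically when
    // the executor is destroyed, so no manual `delete` is needed and an
    // early-return error path cannot leak it.
    std::unique_ptr<kernel_t> kernel_;

    explicit gated_mlp_exec_t(std::unique_ptr<kernel_t> k)
        : kernel_(std::move(k)) {}

    int run() const { return kernel_ ? kernel_->id : -1; }
};

int make_and_run() {
    auto exec = gated_mlp_exec_t(std::make_unique<kernel_t>(42));
    return exec.run();
    // exec.kernel_ is freed here by ~unique_ptr, with no explicit delete.
}
```

The practical win of this kind of change is that ownership becomes visible in the type signature, and every exit path (including exceptions) releases the kernel exactly once.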
[WIP] xe: gated_mlp: improve performance of ukernel-based gmlp (#5059)
Labels: component:common, component:tests (Codeowner: @oneapi-src/onednn-arch), platform:gpu-intel (Codeowner: @oneapi-src/onednn-gpu-intel)
Opened Apr 21, 2026 by hidefromkgb (Contributor) · Draft

WIP: [GPU] Improve int8 ResNet-34 convolution performance on BMG (#5058)
Labels: platform:gpu-intel (Codeowner: @oneapi-src/onednn-gpu-intel)
Opened Apr 20, 2026 by echeresh (Contributor)

[GPU][NVL-P] Add acc mode restriction on atomic acc (#5057)
Labels: platform:gpu-intel (Codeowner: @oneapi-src/onednn-gpu-intel)
Opened Apr 20, 2026 by kealan-barbieri (Contributor) · 4 tasks

[GPU] xe: tensor: fix view_t::normalized_tlayout() (#5055)
Labels: backport, component:tests (Codeowner: @oneapi-src/onednn-arch), platform:gpu-intel (Codeowner: @oneapi-src/onednn-gpu-intel)
Opened Apr 20, 2026 by echeresh (Contributor)
benchdnn: support SDPA primitive in verbose output (#5054)
Labels: component:tests (Codeowner: @oneapi-src/onednn-arch), documentation (a request to change/fix/improve the documentation; Codeowner: @oneapi-src/onednn-doc)
Opened Apr 20, 2026 by kwieloch-intel (Contributor) · Draft

graph: fix a few moderate/minor coverity issues (#5051)
Labels: component:graph-api (Codeowner: @oneapi-src/onednn-graph), component:tests (Codeowner: @oneapi-src/onednn-arch)
Opened Apr 20, 2026 by TaoLv (Contributor)

benchdnn: inputs: graph: add torchbench sdpa training tests (#5050)
Labels: component:tests (Codeowner: @oneapi-src/onednn-arch)
Opened Apr 20, 2026 by TaoLv (Contributor) · 1 of 3 tasks

[GPU] ngen: emul: fix s0Q*s1W multiply on XE3P. Backport (#5048)
Labels: backport, third_party
Opened Apr 17, 2026 by skazakov1 (Contributor)

[GPU] ngen: emul: fix s0Q*s1W multiply on XE3P (#5046)
Labels: third_party
Opened Apr 17, 2026 by skazakov1 (Contributor)
WIP: xe: conv: consolidate accumulator type setup (#5045)
Labels: platform:gpu-intel (Codeowner: @oneapi-src/onednn-gpu-intel)
Opened Apr 17, 2026 by echeresh (Contributor)

WIP: [GPU] Use f32 accumulator for f16 input in convolution (#5041)
Labels: component:tests (Codeowner: @oneapi-src/onednn-arch), platform:gpu-intel (Codeowner: @oneapi-src/onednn-gpu-intel)
Opened Apr 16, 2026 by echeresh (Contributor)

cpu: x64: matmul: enable treat_as_plain for weights format (#5038)
Labels: bug (a confirmed library bug), platform:cpu-x64 (Intel64/AMD64 processors; Codeowner: @oneapi-src/onednn-cpu-x64)
Opened Apr 16, 2026 by xuxinzen (Contributor)

cpu: aarch64: make jit_uni_pool SVE instantiation vector length agnostic (#5036)
Labels: component:common, platform:cpu-aarch64 (Codeowner: @oneapi-src/onednn-cpu-aarch64)
Opened Apr 16, 2026 by Sqvid (Contributor) · 3 tasks done

Test PR. Upconvert fp8 weights to xf16 in Matmul in case of xf16 activations for 3.10
Labels: backport, component:common, component:tests (Codeowner: @oneapi-src/onednn-arch), platform:cpu-x64 (Intel64/AMD64 processors; Codeowner: @oneapi-src/onednn-cpu-x64)
xe: gemm: use dispatch table for simple strategy parameter parsing (#5025)
Labels: platform:gpu-intel (Codeowner: @oneapi-src/onednn-gpu-intel)
Opened Apr 15, 2026 by Simonsays095 (Contributor)
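The #5025 title names a common refactor: replacing an if/else chain of recognized parameter keywords with a lookup table that maps each keyword to a small handler. The actual GEMM strategy grammar is not shown here, so this is only a minimal sketch of the pattern; `strategy_t`, the tokens, and `parse_token` are all illustrative assumptions.

```cpp
#include <functional>
#include <string>
#include <unordered_map>

// Hypothetical strategy settings filled in by the parser.
struct strategy_t {
    int unroll = 1;
    bool fused = false;
};

using handler_t = std::function<void(strategy_t &)>;

// Dispatch table: each recognized token maps to a handler that mutates
// the strategy, replacing a long if/else-if chain over string compares.
static const std::unordered_map<std::string, handler_t> &dispatch() {
    static const std::unordered_map<std::string, handler_t> table = {
            {"u2", [](strategy_t &s) { s.unroll = 2; }},
            {"u4", [](strategy_t &s) { s.unroll = 4; }},
            {"fus", [](strategy_t &s) { s.fused = true; }},
    };
    return table;
}

// Returns false for unknown tokens instead of silently ignoring them,
// so malformed strategy strings are caught at parse time.
bool parse_token(const std::string &tok, strategy_t &s) {
    auto it = dispatch().find(tok);
    if (it == dispatch().end()) return false;
    it->second(s);
    return true;
}
```

Beyond brevity, the table form keeps the set of accepted tokens in one place, which makes it easy to enumerate them for diagnostics or documentation.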
[GPU] GEMM Acc fixup (#5017)
Labels: platform:gpu-intel (Codeowner: @oneapi-src/onednn-gpu-intel)
Opened Apr 14, 2026 by kealan-barbieri (Contributor) · 2 of 4 tasks

cpu: x64: matmul: Enable int8 grouped quantization (#5014)
Labels: platform:cpu-x64 (Intel64/AMD64 processors; Codeowner: @oneapi-src/onednn-cpu-x64)
Opened Apr 14, 2026 by inteldimitrius (Contributor) · Draft

cpu: aarch64: enable binary op and binary post-ops on ASIMD (#5008)
Labels: platform:cpu-aarch64 (Codeowner: @oneapi-src/onednn-cpu-aarch64)
Opened Apr 13, 2026 by renato-arantes (Contributor) · 3 tasks done

oneDNN v3.12 release notes (#5005)
Labels: backport, documentation (Codeowner: @oneapi-src/onednn-doc)
Opened Apr 11, 2026 by vpirogov (Contributor)

aarch64: support for per_dim_0 scales and bf16 dst_dt in jit int8 matmul (#4987)
Labels: component:common, component:tests (Codeowner: @oneapi-src/onednn-arch), platform:cpu-aarch64 (Codeowner: @oneapi-src/onednn-cpu-aarch64)
Opened Apr 9, 2026 by michalowski-arm (Contributor) · 2 tasks done

MFDNN-14690: Replace XE3P_35_10/11/UNKNOWN Core enum values with Xe3p (#4981)
Labels: platform:gpu-intel (Codeowner: @oneapi-src/onednn-gpu-intel), third_party
Opened Apr 8, 2026 by dyoussif (Contributor)

cpu, benchdnn: add reorder to/from grouped with different dts and use grouped in matmul ref
Labels: component:tests (Codeowner: @oneapi-src/onednn-arch)

cpu: aarch64: implement forward lnorm in SVE
Labels: component:common, platform:cpu-aarch64 (Codeowner: @oneapi-src/onednn-cpu-aarch64)