This repository contains the algorithms developed by the PATRIC team and submitted to the 2025/2026 PETRIC2 reconstruction challenge, building on the MaGeZ algorithm that won the first PETRIC challenge.
- Patrick Fahy, University of Bath, United Kingdom
- Matthias Ehrhardt, University of Bath, United Kingdom
- Mohammad Golbabaee, University of Bristol, United Kingdom
- Zeljko Kereta, University College London, United Kingdom
We start from the MaGeZ preconditioned SVRG algorithm and replace the scalar step size at each iteration with a learned 3D convolution kernel applied to the preconditioned gradient. The key idea is that this generalises the scalar step size to a richer spatial operator while keeping the parameter count small (5×5×5 kernels).
The base update rule is

$$x_{t+1} = \left[\, x_t - \alpha_t \, P_t \, \tilde{g}_t \,\right]_+ ,$$

where $\tilde{g}_t$ is the SVRG gradient estimate, $\alpha_t$ is the scalar step size, $P_t$ is a diagonal preconditioner based on the harmonic mean of $x / (A^\top \mathbf{1})$ and the inverse diagonal Hessian of the Relative Difference Prior, and $[\cdot]_+$ enforces non-negativity.
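As a rough illustration, here is a minimal NumPy sketch of such a preconditioner, assuming `sens_image` holds the sensitivity image $A^\top \mathbf{1}$ and `rdp_hess_diag` the diagonal of the RDP Hessian (both names are illustrative); the exact safeguards in our implementation may differ.

```python
import numpy as np

def magez_preconditioner(x, sens_image, rdp_hess_diag, eps=1e-9):
    """Diagonal preconditioner P_t as the harmonic mean of two diagonals.

    Combines the EM-style preconditioner x / (A^T 1) with the inverse
    diagonal Hessian of the Relative Difference Prior.
    """
    p_em = x / np.maximum(sens_image, eps)        # x / (A^T 1)
    p_rdp = 1.0 / np.maximum(rdp_hess_diag, eps)  # inverse diagonal RDP Hessian
    # harmonic mean: 2 / (1/a + 1/b)
    return 2.0 / (1.0 / np.maximum(p_em, eps) + 1.0 / np.maximum(p_rdp, eps))
```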
For PETRIC2, we additionally apply a Gaussian pre-filter (FWHM = 6 mm) to the OSEM warm-start image before the iterations begin.
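A minimal sketch of this pre-filtering step, assuming a NumPy volume and per-axis voxel sizes in mm (the actual submission operates on SIRF image objects):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def prefilter_warm_start(osem_image, voxel_sizes_mm, fwhm_mm=6.0):
    """Smooth the OSEM warm start with a Gaussian of the given FWHM."""
    # convert FWHM to the Gaussian sigma, then to voxel units per axis
    sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    sigma_vox = [sigma_mm / v for v in voxel_sizes_mm]
    return gaussian_filter(osem_image, sigma=sigma_vox)
```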
We replace the scalar step size $\alpha_t$ with a learned $5 \times 5 \times 5$ convolution kernel $K_t$ applied to the preconditioned gradient:

$$x_{t+1} = \left[\, x_t - K_t * \left( P_t \, \tilde{g}_t \right) \,\right]_+ .$$
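A minimal sketch of one such update, assuming NumPy arrays and `scipy.ndimage` convolution (the boundary mode is an assumption):

```python
import numpy as np
from scipy.ndimage import convolve

def kernel_update(x, precond_grad, kernel):
    """One learned-kernel update: x_{t+1} = [x_t - K_t * (P_t g_t)]_+ .

    `kernel` is a 5x5x5 array; the convolution generalises the scalar
    step size to a small spatial operator.
    """
    step = convolve(precond_grad, kernel, mode="nearest")  # K_t * (P_t g_t)
    return np.maximum(x - step, 0.0)                       # [.]_+ projection
```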
Since convolution is linear in the kernel, the training objective is a linear least-squares problem that can be solved efficiently using Conjugate Gradients (CG) — no backpropagation, unrolling, or automatic differentiation is required. Each kernel is learned in under 100 CG iterations (seconds, not minutes).
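The sketch below illustrates the per-voxel MSE part of this fit under assumed names (`target_step` would be $x_{t,n} - x_n^\star$; the VOI mean terms described later add a few more weighted rows to the same linear system). It forms the small $125 \times 125$ normal equations explicitly and solves them with SciPy's CG; a real implementation would accumulate the normal equations in chunks to save memory.

```python
import numpy as np
from scipy.sparse.linalg import cg

def fit_kernel(precond_grad, target_step, mask, ksize=5):
    """Fit a 3D kernel by linear least squares, solved with CG.

    Convolution is linear in the kernel, so every masked voxel yields one
    linear equation: patch(v) . k = target_step(v).
    """
    r = ksize // 2
    pad = np.pad(precond_grad, r, mode="edge")
    idx = np.argwhere(mask)
    # design matrix: one row per masked voxel, the flattened neighbourhood
    rows = np.stack([pad[i:i + ksize, j:j + ksize, l:l + ksize].ravel()
                     for i, j, l in idx])
    y = target_step[mask]
    AtA = rows.T @ rows          # normal equations, ksize^3 x ksize^3
    Atb = rows.T @ y
    k, _ = cg(AtA, Atb, maxiter=100)
    # patch(v) . k is a cross-correlation; flip the kernel so it can be
    # applied with scipy.ndimage.convolve as in the update sketch above
    return k.reshape(ksize, ksize, ksize)[::-1, ::-1, ::-1]
```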
This approach is based on: Fahy, Golbabaee, Ehrhardt. Greedy learning to optimize with convergence guarantees. arXiv:2406.00260, 2024.
Kernels were trained on a small subset of the available datasets (5 out of 13). Due to the heterogeneity of PET scanner geometries across datasets, the kernels are trained to be iteration-dependent but not data-adaptive: the same kernel $K_t$ is applied to every dataset at iteration $t$.
Learned kernels are used for the first epoch only (when the full gradient is computed). Beyond the first epoch, stochastic subset gradients made the learning signal too noisy — learned kernels converged to approximately zero. After the first epoch, the algorithm switches back to standard MaGeZ with a hand-tuned decreasing step size schedule.
The kernel regression loss uses a weighted combination that directly targets the PETRIC2 evaluation metrics: per-voxel MSE over the whole-object and background masks (a proxy for NRMSE), and the squared error of region means for each VOI (a proxy for AEM):

$$\mathcal{L}(K) = \sum_{n} \left( w_{\mathrm{obj}} \, \mathrm{MSE}_{\Omega_n^{\mathrm{obj}}}\big(\hat{x}_n, x_n^\star\big) + w_{\mathrm{bg}} \, \mathrm{MSE}_{\Omega_n^{\mathrm{bg}}}\big(\hat{x}_n, x_n^\star\big) + \sum_{r} w_r \Big( \mathrm{mean}_{\Omega_n^r}\,\hat{x}_n - \mathrm{mean}_{\Omega_n^r}\, x_n^\star \Big)^2 \right) ,$$

where $\hat{x}_n = x_{t,n} - K * (P_{t,n} \tilde{g}_{t,n})$ is the one-step prediction for training dataset $n$, and $x_n^\star$ is its reference reconstruction.
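A minimal sketch of this proxy loss, with illustrative mask and weight names (the weights we actually used are not reproduced here):

```python
import numpy as np

def petric_proxy_loss(x_hat, x_ref, obj_mask, bg_mask, voi_masks,
                      w_obj=1.0, w_bg=1.0, w_voi=1.0):
    """Weighted proxy for the PETRIC2 metrics (weights are illustrative).

    Per-voxel MSE over the whole-object / background masks stands in for
    NRMSE; the squared error of VOI means stands in for AEM.
    """
    loss = w_obj * np.mean((x_hat[obj_mask] - x_ref[obj_mask]) ** 2)
    loss += w_bg * np.mean((x_hat[bg_mask] - x_ref[bg_mask]) ** 2)
    for voi_mask in voi_masks:
        loss += w_voi * (x_hat[voi_mask].mean() - x_ref[voi_mask].mean()) ** 2
    return loss
```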
- Epoch 1 ($t = 0, \ldots, S-1$): use learned kernels
- Epochs ≥ 2: standard MaGeZ with $\alpha_t = 1.5$ for $t \le 60$, and $\alpha_t = 1$ otherwise
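Put together, the schedule amounts to a simple dispatch; a sketch reusing `kernel_update` from above (iteration counting and names are assumptions):

```python
import numpy as np

def magez_step(x, precond_grad, t, epoch, kernels):
    """One outer-loop update: learned kernels in epoch 1, MaGeZ afterwards."""
    if epoch == 1:                       # t = 0, ..., S-1: learned kernels
        return kernel_update(x, precond_grad, kernels[t])
    alpha = 1.5 if t <= 60 else 1.0      # hand-tuned decreasing schedule
    return np.maximum(x - alpha * precond_grad, 0.0)
```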
We thank the PETRIC2 organisers for a very interesting challenge. This work builds on the MaGeZ algorithm by Ehrhardt, Schramm, and Kereta.