Releases: PolyU-IOR/HPR-LP
HPR-LP v0.1.5
Optimized GPU kernels and workflows for roughly a 10% per-iteration speedup.
HPR-LP v0.1.4
- Improved GPU efficiency via CUDA Graph update execution.
- Stabilized and optimized GPU power iteration eigenvalue estimation.
- Added MOI/JuMP optimizer integration. See demo/demo_JuMP.jl for usage.
- Added warm-start support, allowing users to provide initial points (initial_x, initial_y) for the solver.
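A hedged sketch of the JuMP integration described above; see demo/demo_JuMP.jl for the authoritative example. The module name `HPRLP` and the solved toy problem are assumptions, not taken from the repository.

```julia
# Hypothetical usage sketch of the MOI/JuMP integration.
# The module name `HPRLP` is an assumption; check the package for the exact name.
using JuMP
using HPRLP

model = Model(HPRLP.Optimizer)
@variable(model, x[1:2] >= 0)
@constraint(model, x[1] + 2x[2] <= 4)
@objective(model, Min, -3x[1] - 2x[2])
optimize!(model)
value.(x)   # primal solution of the toy LP
```

Warm starts supply initial primal/dual points through the initial_x and initial_y solver parameters; the exact mechanism for passing them (e.g. as optimizer attributes) is not shown here and should be taken from the demo.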
HPR-LP v0.1.3
Highlights of HPR-LP v0.1.3 (October 17, 2025):
- Enhanced the parameter-adjustment strategy, significantly improving stability and achieving relative KKT and duality-gap accuracy down to 1e-9.
- Improved the LP modeling pipeline, with seamless JuMP integration for a smoother modeling experience.
Benchmark Results
Platform: NVIDIA A100-SXM4-80GB
Dataset: Mittelmann’s LP benchmark without presolve
Performance: 47 of 49 instances solved (Tolerance: 1e-4, Time limit: 3600s)
Performance: 41 of 49 instances solved (Tolerance: 1e-9, Time limit: 3600s)
HPR-LP v0.1.2
Highlights of HPR-LP v0.1.2 (September 27, 2025):
- SpMV rewrites. Added a preprocessing step and buffer preallocation to avoid redundant work between iterations.
- Kernel rewrites. Several CUDA kernels were refactored to reduce memory traffic and improve occupancy.
- In terms of SGM10 (at 1e-8 accuracy): 11% faster on Mittelmann's LP benchmark set and 7% faster on MIP2017 large-scale LP relaxations, compared to v0.1.1.
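The buffer-preallocation idea can be illustrated with a minimal CPU-side Julia sketch (the solver itself operates on CUDA arrays; this is an illustration, not the repository's code):

```julia
using SparseArrays, LinearAlgebra

A = sprand(1_000, 1_000, 0.01)   # sparse matrix, fixed across iterations
x = rand(1_000)
y = similar(x)                   # output buffer allocated once, up front

for _ in 1:100
    mul!(y, A, x)                # in-place SpMV: reuses y, no per-iteration allocation
    x .= y ./ max(norm(y), eps())  # e.g. a power-iteration-style update
end
```

Allocating `y` once and calling the in-place `mul!` keeps the hot loop allocation-free, which is the same principle the v0.1.2 SpMV rewrites apply on the GPU.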
HPR-LP v0.1.1
Highlights of HPR-LP v0.1.1 (September 9, 2025):
- Model reformulation. Updated the problem formulation for better stability and consistency across instances.
- Adaptive restarts & penalty auto-tuning. Redesigned the penalty-parameter update rule to improve convergence speed and robustness.
- Kernel rewrites. Several CUDA kernels were refactored/fused to reduce memory traffic and improve occupancy.
- Simplified parameters. Removed sigma and sigma_fixed from the parameters.
- In terms of SGM10 (at 1e-8 accuracy): 14% faster on Mittelmann's LP benchmark set and 95% faster on MIP2017 large-scale LP relaxations, compared to v0.1.0.
HPR-LP v0.1.0
A preliminary release (July 4, 2025).
A GPU-accelerated LP solver in Julia implementing the Halpern Peaceman–Rachford (HPR) method.
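For background (a standard statement of the scheme, not taken from this repository's documentation): the Halpern iteration anchors a nonexpansive operator T, here the Peaceman–Rachford operator, to the starting point z^0:

    z^{k+1} = (1/(k+2)) z^0 + ((k+1)/(k+2)) T(z^k),   k = 0, 1, 2, ...

The anchoring weight 1/(k+2) vanishes as k grows, which is what yields the accelerated convergence rates associated with Halpern-type methods.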
Model: