Releases: PolyU-IOR/HPR-LP

HPR-LP v0.1.5

05 Apr 08:36

Optimized the GPU implementation of kernels and workflows for roughly a 10% per-iteration speedup.

HPR-LP v0.1.4

20 Mar 10:41

  • Improved GPU efficiency via CUDA Graph update execution.
  • Stabilized and optimized GPU power iteration eigenvalue estimation.
  • Added MOI/JuMP optimizer integration. See demo/demo_JuMP.jl for usage.
  • Added warm-start support, allowing users to provide initial points (initial_x, initial_y) for the solver.
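The "power iteration eigenvalue estimation" item refers to estimating the largest eigenvalue of AᵀA (the squared spectral norm of A), which HPR-type methods typically use to set step sizes. A minimal CPU sketch of the technique follows; this is illustrative only and is not the solver's GPU implementation:

```julia
# Power iteration to estimate lambda_max(A'A) = ||A||_2^2.
using LinearAlgebra, SparseArrays, Random

function estimate_lambda_max(A; iters=200, tol=1e-6, rng=Random.default_rng())
    x = randn(rng, size(A, 2))
    x ./= norm(x)
    lambda = 0.0
    for _ in 1:iters
        y = A' * (A * x)               # apply A'A without forming it explicitly
        lambda_new = dot(x, y)         # Rayleigh quotient estimate
        norm(y) == 0 && return 0.0
        x = y / norm(y)
        abs(lambda_new - lambda) <= tol * max(1.0, abs(lambda_new)) && return lambda_new
        lambda = lambda_new
    end
    return lambda
end

A = sprandn(50, 30, 0.2)
estimate_lambda_max(A)   # approximates opnorm(Matrix(A))^2 up to the tolerance
```

Applying AᵀA as two matrix-vector products per step keeps the cost at two SpMVs per iteration, which is why this estimate is cheap enough to stabilize and rerun on the GPU.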

HPR-LP v0.1.3

17 Oct 13:50

Highlights of HPR-LP v0.1.3 (October 17, 2025):

  1. Enhanced the parameter adjustment strategy, significantly improving stability and achieving relative KKT residual and duality-gap accuracies down to 1e-9.
  2. Improved the LP modeling pipeline, with seamless JuMP integration for a smoother modeling experience.
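As a sketch of the JuMP workflow, assuming the package exposes an MOI optimizer as `HPRLP.Optimizer` (the module and optimizer names are assumptions here; see demo/demo_JuMP.jl in the repository for the authoritative usage):

```julia
# Hypothetical JuMP usage sketch; consult demo/demo_JuMP.jl for the real API.
using JuMP
import HPRLP                       # assumed module name for HPR-LP

model = Model(HPRLP.Optimizer)     # assumed MOI optimizer entry point
@variable(model, 0 <= x[1:2] <= 10)
@constraint(model, x[1] + x[2] == 1)
@constraint(model, 2x[1] - x[2] >= 0)
@objective(model, Min, 3x[1] + x[2])
optimize!(model)
value.(x)                          # query the solution after solving
```

With an MOI wrapper in place, any LP built in JuMP can be handed to the solver without converting to the standard form by hand.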

Benchmark Results
Platform: NVIDIA A100-SXM4-80GB
Dataset: Mittelmann’s LP benchmark without presolve
Performance (time limit: 3600 s):
  • 47 of 49 instances solved at tolerance 1e-4
  • 41 of 49 instances solved at tolerance 1e-9

HPR-LP v0.1.2

28 Sep 11:00

Highlights of HPR-LP v0.1.2 (September 27, 2025):

  1. SpMV rewrites. Added a preprocessing step and buffer preallocation to avoid redundant work between iterations.
  2. Kernel rewrites. Several CUDA kernels were refactored to reduce memory traffic and improve occupancy.
  3. Measured by SGM10 at 1e-8 accuracy: 11% faster on Mittelmann's LP benchmark set and 7% faster on MIP2017 large-scale LP relaxations (compared to v0.1.1).
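Buffer preallocation for SpMV is a standard Julia pattern: allocate the output vector once, outside the iteration loop, and reuse it with an in-place `mul!` instead of allocating a fresh vector for every product. A generic sketch of the idea (not the solver's actual code):

```julia
using LinearAlgebra, SparseArrays

A = sprandn(1000, 800, 0.01)
x = randn(800)
y = zeros(1000)          # preallocated once, outside the loop

for _ in 1:100
    mul!(y, A, x)        # in-place SpMV: overwrites y with A*x, no per-iteration allocation
    # ... use y, update x ...
end
```

On a GPU the same principle applies: reusing device buffers across iterations avoids repeated allocation and helps keep kernel launch sequences stable (which v0.1.4's CUDA Graph execution also relies on).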

HPR-LP v0.1.1

09 Sep 14:24

Highlights of HPR-LP v0.1.1 (September 9, 2025):

  1. Model reformulation. Updated the problem formulation for better stability and consistency across instances.
  2. Adaptive restarts & penalty auto-tuning. Redesigned the penalty parameter update rule to improve convergence speed and robustness.
  3. Kernel rewrites. Several CUDA kernels were refactored/fused to reduce memory traffic and improve occupancy.
  4. Simplified parameters. Removed sigma and sigma_fixed from the parameters.
  5. Measured by SGM10 at 1e-8 accuracy: 14% faster on Mittelmann's LP benchmark set and 95% faster on MIP2017 large-scale LP relaxations (compared to v0.1.0).

HPR-LP v0.1.0

09 Sep 13:07

A preliminary release (July 4, 2025).
A GPU-accelerated LP solver in Julia implementing the Halpern Peaceman–Rachford (HPR) method.
Model:

$$ \begin{array}{ll} \underset{x \in \mathbb{R}^n}{\min} \quad & \langle c, x \rangle \\ \text{s.t.} \quad & A_1 x = b_1, \\ & A_2 x \geq b_2, \\ & l \leq x \leq u . \end{array} $$
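For concreteness, here is a toy instance written in exactly this form, with illustrative data only (none of it comes from the release):

```julia
using SparseArrays, LinearAlgebra

# min <c, x>  s.t.  A1*x = b1,  A2*x >= b2,  l <= x <= u
c  = [1.0, 2.0, 0.0]
A1 = sparse([1.0 1.0 1.0]);  b1 = [1.0]    # equality row: x1 + x2 + x3 = 1
A2 = sparse([0.0 1.0 -1.0]); b2 = [0.0]    # inequality row: x2 - x3 >= 0
l  = zeros(3);  u = fill(10.0, 3)          # box constraints

x = [0.5, 0.5, 0.0]                        # a feasible point for this instance
@assert isapprox(A1 * x, b1)
@assert all(A2 * x .>= b2) && all(l .<= x .<= u)
dot(c, x)                                  # objective value at x: 1.5
```

Equalities and inequalities are kept as separate blocks (A1, b1) and (A2, b2), matching the two constraint types in the formulation above.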