This repository contains implementations and analyses of optimization algorithms for convex optimization and variational inequality (VI) problems. The experiments compare classical and modern methods: Gradient Descent (GD), Stochastic Gradient Descent (SGD), Stochastic Variance Reduced Gradient (SVRG), Halpern iteration (including stochastic and inexact variants), and the adaptive GRAAL (aGRAAL) method.
- notebooks/: Jupyter notebooks for each experiment
- reports/: Final PDF reports summarizing results
- figs/: Figures and plots from experiments
- data/: Datasets used in experiments
- beyond_golden_ratio/: Reference code for VI algorithms from the BGR paper (see Acknowledgements)
- Problems: L2-regularized least squares and logistic regression
- Methods: GD, SGD (with decaying step sizes), SVRG (with a fixed step size)
- Metrics: Loss curves, convergence speed, and gradient variance
- Data: 100×100 synthetic logistic regression
- Focus: Stability of SVRG versus the instability of SGD
- Insight: SVRG provides smoother and more stable convergence
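To illustrate the variance-reduction mechanism behind the stability observed above, here is a minimal SVRG sketch for L2-regularized least squares. This is not the notebook code itself; the function name, step size, and epoch counts are illustrative.

```python
import numpy as np

def svrg(A, b, lam=0.1, step=0.02, epochs=50, seed=0):
    """Minimal SVRG for min_x (1/2n)||Ax - b||^2 + (lam/2)||x||^2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i] + lam * x  # per-sample gradient
    full_grad = lambda x: A.T @ (A @ x - b) / n + lam * x
    x = np.zeros(d)
    for _ in range(epochs):
        x_ref, g_ref = x.copy(), full_grad(x)  # snapshot point and its full gradient
        for _ in range(n):
            i = rng.integers(n)
            # variance-reduced estimate: unbiased, and its variance vanishes
            # as x and x_ref approach the optimum (unlike plain SGD)
            v = grad_i(x, i) - grad_i(x_ref, i) + g_ref
            x -= step * v
    return x
```

The correction term `- grad_i(x_ref, i) + g_ref` is what removes the noise floor that forces SGD to decay its step size.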
- Methods: Extragradient (EG), Halpern, OGDA, GRAAL (ϕ = 2, ϕ = φ)
- Problem: Bilinear saddle-point problem (Polar Game)
- Visualization: Julia streamplots using CairoMakie
- Result: Demonstrates convergence and divergence regimes, verifying theoretical predictions
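As a small companion to the streamplots, here is an extragradient sketch on the simplest bilinear saddle point, min_x max_y xy, whose VI operator F(x, y) = (y, -x) rotates iterates (plain gradient descent-ascent spirals outward on this problem). This is a generic toy example, not the Polar Game instance from the notebooks.

```python
import numpy as np

def extragradient(F, z0, step=0.5, iters=200):
    """Extragradient (EG): a look-ahead step, then an update using the look-ahead gradient."""
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        z_half = z - step * F(z)   # extrapolation (look-ahead)
        z = z - step * F(z_half)   # update with the gradient at the look-ahead point
    return z

# VI operator of min_x max_y x*y: F(x, y) = (y, -x)
F = lambda z: np.array([z[1], -z[0]])
```

On this bilinear problem EG contracts toward the saddle point at the origin for step sizes below 1/L (here L = 1).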
- Goal: Reproduce Figures 2–6 from BGR paper
- Tools: Custom Julia implementation with adaptive step size
- Examples: Polar Game, Forsaken benchmark
- Result: Shows sensitivity to parameter ϕ and validates convergence behavior
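For reference, the core GRAAL update can be sketched as below. This is the fixed-step variant with λ = ϕ/(2L) on a toy bilinear operator; the aGRAAL used in the experiments additionally adapts λ online, and the exact parameter choices follow the BGR paper, not this sketch.

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2  # golden ratio

def graal(F, z0, L, phi=PHI, iters=300):
    """Fixed-step GRAAL: an averaged anchor plus a gradient step taken from the anchor."""
    step = phi / (2 * L)          # step-size bound from the GRAAL analysis
    z = np.asarray(z0, dtype=float)
    z_bar = z.copy()
    for _ in range(iters):
        z_bar = ((phi - 1) * z + z_bar) / phi  # convex combination (anchor update)
        z = z_bar - step * F(z)                # gradient step from the anchor
    return z
```

The parameter ϕ controls how much weight the anchor keeps, which is exactly the sensitivity probed in this experiment (ϕ = φ versus ϕ = 2).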
- Goal: Implement deterministic, stochastic, and inexact Halpern methods
- Benchmarks: Forsaken, Polar Game, and Lower Bound problems
- Result: Verified the divergence of EG (Theorem 4.3, Gorbunov et al., 2022) and contrasted it with Halpern's improved performance
- Insight: Highlighted parameter sensitivity, stability issues, and convergence differences
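The deterministic Halpern scheme underlying these experiments can be sketched as follows: anchor each iterate back to the starting point with a vanishing weight 1/(k+2). The choice of the nonexpansive map T below (one extragradient step on a toy bilinear operator) is illustrative; the benchmark problems use their own operators and parameters.

```python
import numpy as np

def halpern(T, z0, iters=2000):
    """Halpern iteration: z_{k+1} = beta_k * z0 + (1 - beta_k) * T(z_k), beta_k = 1/(k+2)."""
    z0 = np.asarray(z0, dtype=float)
    z = z0.copy()
    for k in range(iters):
        beta = 1.0 / (k + 2)            # classical vanishing anchoring schedule
        z = beta * z0 + (1 - beta) * T(z)
    return z

# Illustrative nonexpansive map: one extragradient step for F(x, y) = (y, -x).
# With step size 0.5 this map is a strict contraction, so Halpern's theory applies.
F = lambda z: np.array([z[1], -z[0]])
T = lambda z: z - 0.5 * F(z - 0.5 * F(z))
```

The anchoring term is what restores convergence on problems where unanchored iterations like EG can cycle or diverge.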
- Inexact Halpern Experiments: Extended analysis to Forsaken, Polar Game, and Lower Bound VI benchmarks with parameter tuning (ρ, η, c).
- Counterexample Verification: Reproduced Theorem 4.3 (Gorbunov et al., 2022), demonstrating divergence of EG and contrasting with Halpern iteration.
- Research Integration: Began preparing experimental framework for inexact fixed-point iterations in min-max problems (inspired by Alacaoglu et al., 2024–2025).
- Reports: Drafted technical notes and LaTeX reports summarizing results for Directed Studies (MATH 448) under Prof. Ahmet Alacaoglu.
This repository incorporates reference code from Axel Böhm's beyond_golden_ratio repository,
licensed under the MIT License © 2023 Axel Böhm.
Related paper:
Alacaoglu, Ahmet, Axel Böhm, and Yura Malitsky.
*Beyond the Golden Ratio for Variational Inequality Algorithms.*
arXiv preprint arXiv:2212.13955 (2022).