Variational Monte Carlo with gauge-equivariant neural-network quantum states for $\mathbb{Z}_N$ lattice gauge theory
Research output: A. Apte, C. Córdova, T.-C. Huang, and A. Ashmore, *Deep learning lattice gauge theories*, Phys. Rev. B 110, 165133 (2024).
- Builds gauge-invariant neural-network ansätze for $\mathbb{Z}_N$ pure gauge theory on a 2D spatial lattice in the Hamiltonian formulation.
- Optimizes the ansätze via variational Monte Carlo (NetKet-based) to find ground states as a function of the gauge coupling.
- Uses transfer learning across coupling values to track the ground state through the confinement/deconfinement transition, rather than retraining from scratch at each point (see the sketch after this list).
- Extracts critical behavior via data collapse and finite-size scaling to recover critical exponents (continuous transition for $\mathbb{Z}_2$ in the Ising universality class; weakly first order for $\mathbb{Z}_3$).
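A minimal sketch of this pipeline for the $\mathbb{Z}_2$ case is below, assuming the standard Wegner Hamiltonian $H = -\sum_p \prod_{\ell \in p} Z_\ell - g \sum_\ell X_\ell$ with spin-1/2 degrees of freedom on the links of a periodic $L \times L$ lattice. A generic RBM and local sampler stand in for the paper's gauge-equivariant architecture, and every name and value here (`L`, the coupling grid, output paths) is illustrative rather than taken from the repository:

```python
# Minimal VMC sketch for Z_2 lattice gauge theory in NetKet.
# Stand-ins and assumptions are flagged inline; this is not the repo's code.
import netket as nk
from netket.operator.spin import sigmax, sigmaz

L = 4                      # linear lattice size (illustrative)
n_links = 2 * L * L        # two links per site on a periodic square lattice
hi = nk.hilbert.Spin(s=1 / 2, N=n_links)

def link(x, y, d):
    """Index of the link leaving site (x, y) in direction d (0 = +x, 1 = +y)."""
    return 2 * ((x % L) * L + (y % L)) + d

def hamiltonian(g):
    """H = -sum_p prod_{l in p} Z_l - g sum_l X_l (Wegner's Z_2 gauge theory)."""
    H = nk.operator.LocalOperator(hi)
    for x in range(L):
        for y in range(L):
            # Product of Z on the four links bounding the plaquette at (x, y)
            H -= (sigmaz(hi, link(x, y, 0)) * sigmaz(hi, link(x + 1, y, 1))
                  * sigmaz(hi, link(x, y + 1, 0)) * sigmaz(hi, link(x, y, 1)))
    for l in range(n_links):
        H -= g * sigmax(hi, l)  # electric-field (transverse) term on each link
    return H

model = nk.models.RBM(alpha=2)        # stand-in for the gauge-equivariant ansatz
sampler = nk.sampler.MetropolisLocal(hi)
vstate = nk.vqs.MCState(sampler, model, n_samples=1024)
optimizer = nk.optimizer.Sgd(learning_rate=0.01)
sr = nk.optimizer.SR(diag_shift=0.01)

# Transfer learning across couplings: sweep g and warm-start each run from
# the parameters optimized at the previous coupling instead of reinitializing.
for g in [0.2, 0.3, 0.4, 0.5]:
    driver = nk.driver.VMC(hamiltonian(g), optimizer,
                           variational_state=vstate, preconditioner=sr)
    driver.run(n_iter=300, out=f"z2_L{L}_g{g}")
```

Because the same `vstate` is handed to each successive driver, the optimization at every coupling starts from the parameters converged at the previous one; that warm-start is what replaces retraining from scratch.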
| Layer | Tool |
|---|---|
| VMC & NNQS machinery | NetKet |
| Autodiff / accelerator | JAX (via NetKet) |
| Experiments | Jupyter notebooks |
| Cluster execution | SLURM (gpu.sbatch, cluster.py) |
Most experiments live in Jupyter notebooks and were launched on a GPU SLURM cluster. The typical flow:
- Prototype the architecture and optimization in a notebook against a small lattice (where exact diagonalization is available for sanity-checking; see the first sketch below).
- Scale up via `cluster.py` + `gpu.sbatch`: the SLURM template launches a parameter sweep (lattice size × coupling × seed).
- Post-process results (energy convergence, observables, scaling fits) back in notebooks; see the data-collapse sketch below.
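For the sanity check in the first step, the VMC estimate on a small lattice can be compared directly against exact diagonalization. A minimal sketch, assuming the definitions from the block above with `L` reduced to 2 so the $2^8$-dimensional Hilbert space fits in memory:

```python
# Sanity check on a small lattice: compare the VMC energy to Lanczos ED.
# Assumes `hamiltonian` and an optimized `vstate` from the sketch above,
# with L = 2 (Hilbert-space dimension 2**8), and an illustrative coupling.
import netket as nk

H = hamiltonian(g=0.5)

# Exact ground-state energy via Lanczos (feasible only for small lattices)
e_exact = nk.exact.lanczos_ed(H, compute_eigenvectors=False)[0]

# Monte Carlo estimate from the optimized variational state
e_vmc = vstate.expect(H)
print(f"ED: {e_exact:.6f}   VMC: {e_vmc.mean.real:.6f} +/- {e_vmc.error_of_mean:.6f}")
```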
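The scaling fits in the last step amount to a finite-size data collapse: near a continuous transition, curves of an observable $O(g, L)$ measured at different lattice sizes should fall onto a single function of $(g - g_c)\,L^{1/\nu}$ after rescaling. A generic sketch of that post-processing; the arrays, coupling grid, and exponent values are all placeholders to be replaced by measured data and fitted values:

```python
# Finite-size-scaling collapse: plot L**(beta/nu) * O(g, L) against
# (g - g_c) * L**(1/nu); at the right (g_c, nu, beta) the curves collapse.
# All inputs below (couplings, obs, exponents) are placeholders.
import numpy as np
import matplotlib.pyplot as plt

couplings = np.linspace(0.2, 0.5, 16)
obs = {4: np.zeros(16), 8: np.zeros(16), 12: np.zeros(16)}  # measured O(g, L)

g_c, nu, beta = 0.33, 0.63, 0.33  # trial values to be fitted, not asserted

for L, O in obs.items():
    x = (couplings - g_c) * L ** (1 / nu)
    y = O * L ** (beta / nu)
    plt.plot(x, y, "o-", label=f"L = {L}")

plt.xlabel(r"$(g - g_c)\,L^{1/\nu}$")
plt.ylabel(r"$L^{\beta/\nu}\,O(g, L)$")
plt.legend()
plt.show()
```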
No pinned environment is checked in. The core stack is:
- Python ≥ 3.9
- NetKet ≥ 3.x
- JAX with CUDA support (for GPU runs)
- NumPy, SciPy, Matplotlib, Jupyter
```bibtex
@article{ApteCordovaHuangAshmore2024,
  title   = {Deep learning lattice gauge theories},
  author  = {Apte, Anuj and C\'ordova, Clay and Huang, Tzu-Chen and Ashmore, Anthony},
  journal = {Phys. Rev. B},
  volume  = {110},
  number  = {16},
  pages   = {165133},
  year    = {2024},
  doi     = {10.1103/PhysRevB.110.165133},
}
```