Gym environment representing the importance sampling stochastic optimal control (IS SOC) Markov Decision Process (MDP) defined in [J. Quer & E. Ribera Borrell, JMP, 2024].
This environment represents the stochastic optimization problem equivalent to finding a zero-variance importance sampling estimator for path quantities of a diffusion process.
In particular, we aim to estimate path functionals, up to a random stopping time, of a metastable stochastic process following an overdamped Langevin equation. We consider only first hitting time problems, which lead to an optimal control that is time-homogeneous.
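In standard notation (the exact setting and parametrization in the referenced paper may differ), the underlying dynamics and stopping time can be written as:

```latex
% overdamped Langevin dynamics driven by a potential V (standard notation,
% not necessarily the exact parametrization used in this repository)
\mathrm{d}X_t = -\nabla V(X_t)\,\mathrm{d}t + \sqrt{2\varepsilon}\,\mathrm{d}W_t,
\qquad X_0 = x \in \mathbb{R}^d,

% first hitting time of the target set C
\tau_{\mathcal{C}} = \inf\{\, t \ge 0 : X_t \in \mathcal{C} \,\}.
```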
The IS SOC MDP is implemented for the following quantities of interest:
- Moment generating function (MGF) of the first hitting time to a target set C.
- Committor probabilities between the target sets A and B.
- Transition probabilities to a target set within a given finite time horizon.
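As an illustration of the first quantity, the following is a naive (uncontrolled) Monte Carlo sketch that estimates the MGF E[exp(-λτ)] for a 1-dimensional Brownian motion hitting the target set C = [1, ∞). This is a plain Euler–Maruyama simulation with arbitrary parameter values; it is not code from this repository, and the importance sampling control that the environment learns is precisely what reduces the variance of this kind of estimator.

```python
import math
import random

def mgf_hitting_time_mc(x0=0.0, target=1.0, lam=1.0, dt=1e-3,
                        t_max=10.0, n_samples=200, seed=0):
    """Naive Monte Carlo estimate of E[exp(-lam * tau)] for 1-d Brownian
    motion started at x0, where tau is the first hitting time of
    [target, inf). Trajectories that do not hit within t_max contribute 0,
    which is a good approximation since exp(-lam * tau) is then negligible."""
    rng = random.Random(seed)
    sqrt_dt = math.sqrt(dt)
    total = 0.0
    for _ in range(n_samples):
        x, t = x0, 0.0
        while x < target and t < t_max:
            x += sqrt_dt * rng.gauss(0.0, 1.0)  # dX = dW (constant potential)
            t += dt
        if x >= target:
            total += math.exp(-lam * t)  # hit: weight exp(-lam * tau)
    return total / n_samples

estimate = mgf_hitting_time_mc()
# closed form for Brownian motion: E[exp(-lam * tau)] = exp(-(target - x0) * sqrt(2 * lam))
exact = math.exp(-math.sqrt(2.0))
```

The naive estimator above suffers from high relative variance in metastable settings; the environment's objective is to learn a control whose reweighted estimator has (ideally) zero variance.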
We consider overdamped Langevin dynamics with the following potentials:
- 1-dimensional Brownian motion (constant potential).
- Multidimensional double well potential.
- 1- and 2-dimensional triple well potentials.
- Butane interacting potential (in dev branch).
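For concreteness, here is a minimal sketch of the multidimensional double well and one Euler–Maruyama step of the associated overdamped Langevin dynamics. It assumes the common parametrization V(x) = Σ_i α(x_i² − 1)², with minima at x_i = ±1; the repository's actual parametrization and integrator may differ.

```python
import math
import random

def double_well(x, alpha=1.0):
    """d-dimensional double well V(x) = sum_i alpha * (x_i^2 - 1)^2,
    a common parametrization with minima at x_i = +/- 1."""
    return sum(alpha * (xi**2 - 1.0)**2 for xi in x)

def grad_double_well(x, alpha=1.0):
    """Gradient of the double well: dV/dx_i = 4 * alpha * x_i * (x_i^2 - 1)."""
    return [4.0 * alpha * xi * (xi**2 - 1.0) for xi in x]

def euler_maruyama_step(x, dt, beta=1.0, rng=random):
    """One Euler-Maruyama step of dX = -grad V(X) dt + sqrt(2 / beta) dW."""
    g = grad_double_well(x)
    noise_scale = math.sqrt(2.0 * dt / beta)
    return [xi - gi * dt + noise_scale * rng.gauss(0.0, 1.0)
            for xi, gi in zip(x, g)]
```

At low temperature (large beta) the process is metastable: trajectories remain near one well for long stretches before crossing the barrier at the origin, which is what makes naive sampling of hitting events expensive.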
- clone the repo
```
git clone git@github.com:riberaborrell/gym-sde-is.git
```
- move inside the directory, create the virtual environment and install the required packages
```
cd gym-sde-is
make venv
```
- activate the virtual environment
```
source venv/bin/activate
```
- create the config.py file and edit it as needed
```
cp gym_sde_is/utils/config_template.py gym_sde_is/utils/config.py
```
- clone the sde-hjb-solver repo
```
cd ../
git clone git@github.com:riberaborrell/sde-hjb-solver.git
```
- pip install the sde-hjb-solver repo locally
```
pip install -e sde-hjb-solver
```