
Commit e5ba6d5

Authored by Virginia Fernandez (virginiafdez), pre-commit-ci[bot], YunLiu (KumoLiu), and Eric Kerfoot (ericspod)
Porting of 2d_autoencoderkl and 3d_autoencoderkl tutorials from MONAI generative (#1823)
Fixes # .

### Description

Incorporation of tutorials for AutoencoderKL (2D and 3D) as part of the porting of MONAI Generative Models.

### Checks

- [x] Avoid including large-size files in the PR.
- [x] Clean up long text outputs from code cells in the notebook.
- [x] For security purposes, please check the contents and remove any sensitive info such as user names and private key.
- [x] Ensure (1) hyperlinks and markdown anchors are working (2) use relative paths for tutorial repo files (3) put figure and graphs in the `./figure` folder
- [ ] Notebook runs automatically `./runner.sh -t <path to .ipynb file>`

Signed-off-by: Virginia Fernandez <virginia.fernandez@kcl.ac.uk>
Signed-off-by: Eric Kerfoot <eric.kerfoot@kcl.ac.uk>
Co-authored-by: Virginia Fernandez <virginia.fernandez@kcl.ac.uk>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: YunLiu <55491388+KumoLiu@users.noreply.github.com>
Co-authored-by: Eric Kerfoot <eric.kerfoot@kcl.ac.uk>
1 parent 29fba69 commit e5ba6d5

File tree

3 files changed

+1587
-0
lines changed


generation/2d_autoencoderkl/2d_autoencoderkl_tutorial.ipynb

Lines changed: 785 additions & 0 deletions
Large diffs are not rendered by default.

generation/3d_autoencoderkl/3d_autoencoderkl_tutorial.ipynb

Lines changed: 793 additions & 0 deletions
Large diffs are not rendered by default.

generation/README.md

Lines changed: 9 additions & 0 deletions
@@ -63,3 +63,12 @@ Example shows how to use a DDPM to inpaint of 2D images from the MedNIST dataset
 
 ## [Guiding the 2D diffusion synthesis using ControlNet](./controlnet/2d_controlnet.ipynb)
 Example shows how to use ControlNet to condition a diffusion model trained on 2D brain MRI images on binary brain masks.
+
+## [Spatial variational autoencoder for 2D modelling and synthesis](./2d_autoencoderkl)
+Example shows the use cases of applying a spatial VAE to a 2D synthesis example. To obtain realistic results, the model is trained on the original VAE losses, as well as perceptual and adversarial ones.
+
+## [Spatial variational autoencoder for 3D modelling and synthesis](./3d_autoencoderkl)
+Example shows the use cases of applying a spatial VAE to a 3D synthesis example. To obtain realistic results, the model is trained on the original VAE losses, as well as perceptual and adversarial ones.
+
+## Performing anomaly detection with diffusion models: [implicit guidance](./anomaly_detection/2d_classifierfree_guidance_anomalydetection_tutorial.ipynb), [using transformers](./anomaly_detection/anomaly_detection_with_transformers.ipynb) and [classifier free guidance](./anomaly_detection/anomalydetection_tutorial_classifier_guidance.ipynb)
+Examples show how to perform anomaly detection in 2D using [implicit guidance](./anomaly_detection/2d_classifierfree_guidance_anomalydetection_tutorial.ipynb), [transformers](./anomaly_detection/anomaly_detection_with_transformers.ipynb) and [classifier free guidance](./anomaly_detection/anomalydetection_tutorial_classifier_guidance.ipynb).
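The spatial VAE sections in this diff describe training on the original VAE losses plus perceptual and adversarial terms. As a minimal, framework-free sketch of the loss arithmetic only (NumPy, not the tutorials' actual code; the helper names and the weighting values `kl_weight`, `perceptual_weight`, and `adv_weight` are illustrative assumptions), the KL regularisation term of an AutoencoderKL-style diagonal Gaussian posterior and its combination with the other losses look like:

```python
import numpy as np

def kl_divergence(z_mu, z_log_var):
    """KL(N(mu, sigma^2) || N(0, 1)) for a diagonal Gaussian posterior.

    Summed over latent dimensions (axis=1), averaged over the batch.
    """
    kl_per_sample = -0.5 * np.sum(1.0 + z_log_var - z_mu**2 - np.exp(z_log_var), axis=1)
    return kl_per_sample.mean()

def total_generator_loss(recon, kl, perceptual, adversarial,
                         kl_weight=1e-6, perceptual_weight=1e-3, adv_weight=1e-2):
    # Weighted sum of the loss terms; the weights here are placeholders,
    # not the values used in the 2D/3D tutorials.
    return recon + kl_weight * kl + perceptual_weight * perceptual + adv_weight * adversarial

# A posterior that exactly matches the standard normal prior incurs zero KL cost.
mu = np.zeros((4, 8))       # batch of 4, latent size 8
log_var = np.zeros((4, 8))  # log(1) = 0, i.e. unit variance
print(kl_divergence(mu, log_var))  # 0.0
```

The KL weight is typically kept very small so that reconstruction quality dominates while the latent space stays close to the prior, which is what makes the latents usable for downstream synthesis.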

0 commit comments
