Digital twins have emerged as a key technology for optimizing the performance of engineering products and systems. High-fidelity numerical simulations constitute the backbone of engineering design, providing insight into the performance of complex systems. However, large-scale, dynamic, nonlinear models require significant computational resources, making them prohibitive for real-time digital twin applications.
To this end, reduced order models (ROMs) are employed to approximate the high-fidelity solutions while accurately capturing the dominant aspects of the physical behavior. This repository provides a machine learning (ML) platform for building ROMs that handle large-scale numerical problems governed by transient nonlinear partial differential equations.
FastSVD-ML-ROM is a comprehensive framework that combines multiple machine learning techniques for efficient reduced order modeling:
- SVD Update Methodology: Computes a linear subspace of multi-fidelity solutions during the simulation process
- Convolutional Autoencoders: Enable nonlinear dimensionality reduction
- Feed-Forward Neural Networks: Map input parameters to latent spaces
- Long Short-Term Memory (LSTM) Networks: Forecast the dynamics of parametric solutions
The FastSVD-ML-ROM framework utilizes a multi-stage approach:
- SVD-based Linear Subspace: Computes and updates a linear subspace representation of multi-fidelity solutions during simulation
- Nonlinear Dimensionality Reduction: Convolutional autoencoders compress high-dimensional solution spaces into compact latent representations
- Parameter Mapping: Feed-forward neural networks learn the relationship between input parameters and latent space coordinates
- Temporal Dynamics: LSTM networks capture and predict the time evolution of parametric solutions
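The SVD update stage can be illustrated with a generic Brand-style block update in NumPy: new snapshot columns are split into their in-subspace and residual parts, and a small core matrix is re-diagonalized. This is a simplified sketch under assumed shapes, not the repository's exact implementation.

```python
import numpy as np

def svd_update(U, S, C, rank):
    """Fold new snapshot columns C into a truncated SVD basis (U, S).

    A simplified Brand-style update: project C onto the current basis,
    orthogonalize the residual, and re-diagonalize a small core matrix.
    """
    L = U.T @ C                      # component of C inside the subspace
    J = C - U @ L                    # residual outside the subspace
    Q, R = np.linalg.qr(J)           # orthonormal basis for the residual
    k = S.shape[0]
    # Small core matrix combining old singular values and the new data
    K = np.block([[np.diag(S), L],
                  [np.zeros((R.shape[0], k)), R]])
    Up, Sp, _ = np.linalg.svd(K, full_matrices=False)
    # Rotate the enlarged basis and truncate back to the target rank
    U_new = np.hstack([U, Q]) @ Up
    return U_new[:, :rank], Sp[:rank]

# Usage: build a basis from initial snapshots, then fold in new ones online
rng = np.random.default_rng(0)
X0 = rng.standard_normal((100, 8))   # initial snapshot matrix
U, S, _ = np.linalg.svd(X0, full_matrices=False)
U, S = U[:, :4], S[:4]               # rank-4 truncation
X1 = rng.standard_normal((100, 3))   # new snapshots arriving during simulation
U, S = svd_update(U, S, X1, rank=4)
```

The appeal of this kind of update is that the basis is refreshed from the small core matrix `K` rather than by recomputing the SVD of the full snapshot history.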
Note: The training and test data for this project are not included in the repository. To obtain the data, please contact the repository maintainer. Once you receive the data, place it in the data/ directory following the structure described below.
The framework has been demonstrated on 3D blood flow simulations inside an arterial segment. The data/ directory should contain high-fidelity model (HFM) data organized as follows:
- 10 parameterized training datasets: Each folder (e.g., `1_0.07_train`, `2_0.15_train`, etc.) contains 160 `.dat` files representing solution snapshots at different time steps. Training datasets cover various parameter configurations for learning the parameter-to-latent-space mapping.
- 2 test datasets: `10_0.5_test` and `11_0.42_test`, each containing 200 `.dat` files. Test datasets are used for validation and performance evaluation on unseen parameter configurations.
- All data files are in `.dat` format containing high-fidelity simulation snapshots.
- The folder naming convention indicates different parameter sets (e.g., `1_0.07` represents parameter set 1 with value 0.07).
- These HFM snapshots are used to train the convolutional autoencoders, feed-forward neural networks, and LSTM networks.
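A small loader for this layout might look like the sketch below. It assumes each `.dat` file holds whitespace-delimited numeric values for one time step; the actual format of the HFM files may differ.

```python
from pathlib import Path
import tempfile
import numpy as np

def load_snapshots(folder):
    """Stack every .dat file in a folder into one snapshot matrix.

    Assumes each .dat file stores whitespace-delimited numeric values for a
    single time step (the real file layout may differ).
    """
    files = sorted(Path(folder).glob("*.dat"))
    cols = [np.loadtxt(f).ravel() for f in files]   # one flat column per snapshot
    return np.column_stack(cols)                    # shape: (n_dof, n_snapshots)

# Self-contained demo with synthetic files standing in for a folder such as
# data/1_0.07_train (the real training folders hold 160 snapshots each):
tmp = Path(tempfile.mkdtemp())
for i in range(3):
    np.savetxt(tmp / f"snap_{i:03d}.dat", np.full((5, 2), float(i)))
X = load_snapshots(tmp)
print(X.shape)   # 10 degrees of freedom, 3 snapshots
```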
First, verify your CUDA version to ensure compatibility:
```shell
nvidia-smi
```

This will display your CUDA version and GPU information. The framework has been tested with CUDA 11.8.
Install PyTorch and torchvision with CUDA 11.8 support:
```shell
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118 --no-cache-dir
```

The `--no-cache-dir` flag is recommended to avoid memory issues during installation.
Verify PyTorch CUDA installation:
```python
import torch

print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"CUDA version: {torch.version.cuda}")
print(f"GPU device: {torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'N/A'}")
```

Install the remaining required dependencies:
```shell
pip install -r requirements.txt
```

To set up the project:

- Clone the repository
- Check your CUDA version using `nvidia-smi`
- Install PyTorch with CUDA support (see above)
- Install dependencies from `requirements.txt`
- Request the training and test data from the repository maintainer and place it in the `data/` directory
- Configure the model parameters as needed in `src/config.py`
First, perform SVD-based linear projection on your high-fidelity simulation data:
```shell
python src/linear_projection/process_all_components.py
```

This generates projected data in `data/linear_projected/`, organized by component (`ux`, `uy`, `uz`) and data type (train/test).
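Conceptually, the linear projection amounts to expressing each snapshot in the basis of leading left singular vectors. The sketch below illustrates that idea on synthetic data; the script's actual per-component handling and rank selection are not shown here and may differ.

```python
import numpy as np

def linear_project(X, rank):
    """Project a snapshot matrix onto its leading left singular vectors."""
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    Ur = U[:, :rank]            # reduced basis
    return Ur, Ur.T @ X         # reduced coordinates, shape (rank, n_snapshots)

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 40))   # synthetic snapshot matrix
Ur, q = linear_project(X, rank=20)
X_rec = Ur @ q                       # lift back to the full space
err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
```

Downstream networks then work with the small coordinate matrix `q` instead of the full-dimensional snapshots.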
Train the 2D Convolutional Autoencoder for nonlinear dimensionality reduction:
```shell
python src/nonlinear_reduction/CAE_main.py
```

Configuration:

- Edit `src/config.py` to adjust training parameters:
  - `lr_CAE_2D`: Learning rate (default: 0.0005)
  - `batch_CAE_2D`: Batch size (default: 20)
  - `epochs_CAE_2D`: Number of epochs (default: 2000)
  - `latent_CAE_2D`: Latent space dimension (default: 4)
  - `val_split_CAE_2D`: Validation split ratio (default: 0.1)
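For orientation, these parameters might appear in `src/config.py` as simple module-level constants; the actual layout of the file is an assumption.

```python
# src/config.py (illustrative shape only; the real file may be organized differently)
lr_CAE_2D = 0.0005        # learning rate for the 2D convolutional autoencoder
batch_CAE_2D = 20         # batch size
epochs_CAE_2D = 2000      # number of training epochs
latent_CAE_2D = 4         # latent space dimension
val_split_CAE_2D = 0.1    # fraction of data held out for validation
```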
Outputs:
The training script saves the following in src/nonlinear_reduction/output/:
- `DL_weights/weights_CAE2D.pth`: Full model weights
- `DL_weights/enc_CAE2D.pth`: Encoder weights
- `DL_weights/dec_CAE2D.pth`: Decoder weights
- `DL_data/CAE2D_enc.npy`: Encoded latent representations
- `DL_data/CAE2D_dec.npy`: Decoded reconstructions
- `scaling_data/stdmean_CAE2D.npy`: Data standardization parameters
- `results_csv/CAE_2D.json`: Training history (losses per epoch)
Features:
- Automatic CUDA/CPU device detection
- Early stopping (patience: 50 epochs)
- Model checkpointing (saves best model based on validation loss)
- Random train/validation split with reproducible shuffling
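The early-stopping and checkpointing behavior above can be sketched as a toy loop. All names here are illustrative, not taken from the repository, and the patience is shortened for the demo (the framework itself uses 50 epochs).

```python
# Patience-based early stopping with best-model checkpointing (toy version).
val_losses = [1.0, 0.8, 0.85, 0.9, 0.95, 0.7]   # simulated validation curve
patience = 3                                     # shortened for this demo
best_val, wait, best_state = float("inf"), 0, None

for epoch, val_loss in enumerate(val_losses):
    if val_loss < best_val:
        best_val, wait = val_loss, 0
        best_state = {"epoch": epoch}  # real code: torch.save(model.state_dict(), path)
    else:
        wait += 1
        if wait >= patience:           # no improvement for `patience` epochs
            break

print(best_state, best_val)
```

Note that the loop stops before reaching the late improvement at epoch 5, which is exactly the trade-off patience controls: a larger patience tolerates longer plateaus at the cost of extra training time.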
The efficiency of the FastSVD-ML-ROM framework has been demonstrated for 3D blood flow inside an arterial segment. The accuracy of the reconstructed results indicates the robustness of the proposed approach.
If you use this work in your research, please cite the following paper:
Drakoulas, G.I., Gortsas, T.V., Bourantas, G.C., Burganos, V.N., Polyzos, D. (2023).
FastSVD-ML–ROM: A reduced-order modeling framework based on machine learning for real-time applications.
Computer Methods in Applied Mechanics and Engineering.
Paper available at: https://www.sciencedirect.com/science/article/abs/pii/S0045782523002797
This project is licensed under the MIT License - see the LICENSE file for details.