NVIDIA Modulus Symbolic (Modulus Sym)
NVIDIA Modulus
NVIDIA Modulus blends physics, as expressed by governing partial differential equations (PDEs) and boundary conditions, with training data to build high-fidelity, parameterized surrogate deep learning models. The platform abstracts the complexity of setting up a scalable training pipeline, so you can apply your domain expertise to map problems to an AI model's training and develop better neural network architectures. The available reference applications serve as a great starting point for applying the same principles to new use cases.
Whether you’re a researcher looking to develop novel AI-based approaches for reimagining engineering and scientific simulations or you’re an engineer looking to accelerate design optimization and digital twin applications, the Modulus platform can support your model development. Modulus offers a variety of approaches for training physics-based neural network models, from purely physics-driven models with physics-informed neural networks (PINNs) to physics-based, data-driven architectures such as neural operators.
Modulus has been rearchitected into modules:
Modulus Core is the base module that consists of the core components of the framework for developing Physics-ML models
Modulus Sym provides an abstraction layer for using PDE-based symbolic loss functions
Modulus Launch provides optimized training recipes for data-driven Physics-ML models
Setting up NVIDIA Modulus Sym in OAsis
In this guide, we'll use Modulus Sym as an illustrative example.
Log in to OAsis and select "TERMINAL".
Execute the following commands:
# create job with gpu resource
srun -p gpu --cpus-per-task=4 --mem=16G --gres=gpu:3g.40gb:1 --pty bash
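# (optional sanity check, added as a suggestion) confirm the GPU allocation
# is visible inside the interactive job before installing anything
nvidia-smi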
# load required modules
module load Anaconda3/2022.05
module load CUDA/12.1.0
module load GCCcore/11.3.0 git/2.36.0-nodocs
module load git-lfs/3.2.0
# create conda environment with python version 3.8
conda create -n modulus-symbolic python=3.8 anaconda
source activate modulus-symbolic
# install required package in "modulus-symbolic" conda environment
conda install pip ipykernel
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
pip install blosc2==2.0.0 cython protobuf pyqtwebengine pyqt5 lit cmake nvidia-modulus.sym tensorboard
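Before moving on, a quick optional sanity check confirms that PyTorch can see the allocated GPU and that Modulus Sym imports cleanly in the new environment:
# optional sanity check: GPU visibility and Modulus Sym import
python -c "import torch; print('CUDA available:', torch.cuda.is_available())"
python -c "import modulus.sym; print('Modulus Sym imported OK')"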
After the conda environment has been set up, you can load the required modules and activate the environment with the following commands:
module load Anaconda3/2022.05
module load CUDA/12.1.0
source activate modulus-symbolic
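Optionally, since ipykernel was installed above, you can also register the environment as a Jupyter kernel (useful if you work through a notebook interface; the kernel and display names below are arbitrary):
# optional: expose the environment as a Jupyter kernel
python -m ipykernel install --user --name modulus-symbolic --display-name "Python (modulus-symbolic)"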
Clone the source code and run an example.
cd $HOME
git clone https://github.com/NVIDIA/modulus-sym.git
cd $HOME/modulus-sym/examples/chip_2d/
# training may take a while; you can lower max_steps in conf/config.yaml (e.g. to 5000) for a quicker run
python chip_2d.py
Edit the conf/config.yaml file to adjust the training max_steps.
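The step count lives in the Hydra-style config that ships with the example. A minimal way to lower it is sketched below; the exact surrounding keys in conf/config.yaml may differ between Modulus Sym versions:
# locate the training step count in the example's config
grep -n "max_steps" conf/config.yaml
# lower it (e.g. to 5000) with your editor of choice, or non-interactively with sed
sed -i 's/^\(\s*max_steps:\).*/\1 5000/' conf/config.yaml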
After training finishes, examine the run using the TensorBoard web UI.
# bind a hostname and port for TensorBoard
host=$(hostname)
port=$(hc acquire-port -j $SLURM_JOB_ID -u web --host $host -l TensorBoard)
# you can point --logdir at another directory
tensorboard --logdir . --host $host --port $port
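If you prefer to scope TensorBoard to the training run only, you can point --logdir at the example's output directory instead of the current directory; the path below assumes the default output layout written by the Modulus Sym examples, so adjust it if your run wrote elsewhere:
# point TensorBoard at the example's output directory (path assumed)
tensorboard --logdir $HOME/modulus-sym/examples/chip_2d/outputs --host $host --port $port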
Click the "TensorBoard" button to access the web-based user interface.
Navigate to the "SCALARS" tab to delve into comprehensive training insights.
This tutorial has shown how to set up and use NVIDIA Modulus Sym on the OAsis platform, illustrating how physics and AI can be combined to build high-fidelity surrogate models.