Gromacs
Several versions of Gromacs are available; see the tables below.
Compilation details:
Gromacs | Plumed | Compiler | MPI | CUDA SDK | CUDA RTL | CUDA Architectures |
---|---|---|---|---|---|---|
4.5.7 | | GNU 5.4.0 | OpenMPI 1.10.7 | |||
5.1.4 | 2.3.8 | GNU 5.4.0 | OpenMPI 1.10.7 | |||
5.1.5 | 2.3.8 | GNU 5.4.0 | OpenMPI 1.10.7 | |||
2016.4 | 2.3.8 | GNU 5.4.0 | OpenMPI 1.10.7 | 10.0.130 | 10.0.130 | 6.0, 7.0 |
2016.6 | 2.3.8 | GNU 5.4.0 | OpenMPI 1.10.7 | 10.0.130 | 10.0.130 | 6.0, 7.0 |
2018.6 | 2.4.8 | GNU 5.4.0 | OpenMPI 1.10.7 | 10.0.130 | 10.0.130 | 6.0, 7.0 |
2018.6 | 2.4.8 | GNU 7.3.0 | OpenMPI 3.1.4 | 10.0.130 | 10.0.130 | 6.0, 7.0 |
2018.8 | 2.4.8 | GNU 5.4.0 | OpenMPI 1.10.7 | 10.0.130 | 10.0.130 | 6.0, 7.0 |
2018.8 | 2.4.8 | GNU 7.3.0 | OpenMPI 3.1.4 | 10.0.130 | 10.0.130 | 6.0, 7.0 |
2019.4 | 2.8.3 | GNU 5.4.0 | OpenMPI 1.10.7 | 10.0.130 | 10.0.130 | 6.0, 7.0 |
2019.4 | 2.8.3 | GNU 7.3.0 | OpenMPI 3.1.4 | 10.0.130 | 10.0.130 | 6.0, 7.0 |
2019.6 | 2.8.3 | GNU 5.4.0 | OpenMPI 1.10.7 | 10.0.130 | 10.0.130 | 6.0, 7.0 |
2019.6 | 2.8.3 | GNU 7.3.0 | OpenMPI 3.1.4 | 10.0.130 | 10.0.130 | 6.0, 7.0 |
2020.7 | 2.8.3 | GNU 8.3.0 | OpenMPI 3.1.6 | 12.0.0 | 12.0.0 | 6.0, 7.0, 8.0 |
2021.4 | 2.8.3 | GNU 8.3.0 | OpenMPI 3.1.6 | 12.0.0 | 12.0.0 | 6.0, 7.0, 8.0 |
2021.7 | 2.8.3 | GNU 8.3.0 | OpenMPI 3.1.6 | 12.0.0 | 12.0.0 | 6.0, 7.0, 8.0 |
2022.6 | 2.8.3 | GNU 8.3.0 | OpenMPI 3.1.6 | 12.0.0 | 12.0.0 | 6.0, 7.0, 8.0 |
For example, Gromacs 2022.6 was patched with Plumed 2.8.3 and compiled with the GNU 8.3.0 toolchain and OpenMPI 3.1.6. The GPU-enabled version was compiled with CUDA SDK (software development kit) 12.0.0 and is usable with CUDA RTL (run-time library) 12.0.0. Code was generated for the CUDA architectures 6.0, 7.0 and 8.0.
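For reference, a comparable build can be configured roughly as follows. This is a minimal sketch, assuming the GROMACS 2022.6 source tarball, a PLUMED 2.8.3 installation in the PATH, and an illustrative install prefix; it is not the exact recipe used for the cluster modules.

```bash
# Illustrative outline only; the exact options used for the cluster builds may differ.
tar xf gromacs-2022.6.tar.gz
cd gromacs-2022.6

# Apply the PLUMED patch from the source root; -p lists the supported engines
# and asks which one to use.
plumed patch -p

mkdir build && cd build
cmake .. \
  -DCMAKE_C_COMPILER=mpicc \
  -DCMAKE_CXX_COMPILER=mpicxx \
  -DGMX_MPI=ON \
  -DGMX_GPU=CUDA \
  -DGMX_CUDA_TARGET_SM="60;70;80" \
  -DCMAKE_INSTALL_PREFIX=$HOME/opt/gromacs-2022.6
make -j 8
make install
```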
Supported CUDA architectures:
GPU | CUDA Architecture |
---|---|
NVIDIA Tesla P100 | 6.0 |
NVIDIA Tesla V100 | 7.0 |
NVIDIA Tesla A100 | 8.0 |
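To see which of these architectures the GPUs in a given allocation actually provide, the compute capability can be queried on the node; the compute_cap query field below assumes a reasonably recent NVIDIA driver.

```bash
# List the GPUs visible to the job together with their compute capability.
nvidia-smi --query-gpu=name,compute_cap --format=csv
# Fallback for older drivers: list the GPU models only.
nvidia-smi -L
```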
Environment modules are available in different flavors:
Gromacs | Prerequisites | Flavor | Module |
---|---|---|---|
4.5.7 | gnu/5.4.0 openmpi/1.10.7 | bdw | 4.5.7-bdw |
4.5.7 | gnu/5.4.0 openmpi/1.10.7 | bdw | 4.5.7-cpu |
5.1.4 | gnu/5.4.0 openmpi/1.10.7 | bdw | 5.1.4-bdw |
5.1.4 | gnu/5.4.0 openmpi/1.10.7 | bdw | 5.1.4-cpu |
5.1.5 | gnu/5.4.0 openmpi/1.10.7 | bdw | 5.1.5-bdw |
5.1.5 | gnu/5.4.0 openmpi/1.10.7 | bdw | 5.1.5-cpu |
2016.4 | gnu/5.4.0 openmpi/1.10.7 | bdw | 2016.4-bdw |
2016.4 | gnu/5.4.0 openmpi/1.10.7 | bdw | 2016.4-cpu |
2016.4 | gnu/5.4.0 openmpi/1.10.7 | gpu | 2016.4-gpu |
2016.4 | gnu/5.4.0 openmpi/1.10.7 | knl | 2016.4-knl |
2016.4 | gnu/5.4.0 openmpi/1.10.7 | skl | 2016.4-skl |
2016.6 | gnu/5.4.0 openmpi/1.10.7 | bdw | 2016.6-bdw |
2016.6 | gnu/5.4.0 openmpi/1.10.7 | bdw | 2016.6-cpu |
2016.6 | gnu/5.4.0 openmpi/1.10.7 | gpu | 2016.6-gpu |
2016.6 | gnu/5.4.0 openmpi/1.10.7 | knl | 2016.6-knl |
2016.6 | gnu/5.4.0 openmpi/1.10.7 | skl | 2016.6-skl |
2018.6 | gnu/5.4.0 openmpi/1.10.7 | bdw | 2018.6-bdw |
2018.6 | gnu/5.4.0 openmpi/1.10.7 | bdw | 2018.6-cpu |
2018.6 | gnu/5.4.0 openmpi/1.10.7 | gpu | 2018.6-gpu |
2018.6 | gnu/5.4.0 openmpi/1.10.7 | knl | 2018.6-knl |
2018.6 | gnu/5.4.0 openmpi/1.10.7 | skl | 2018.6-skl |
2018.6 | gnu7/7.3.0 openmpi3/3.1.4 | bdw | 2018.6-bdw |
2018.6 | gnu7/7.3.0 openmpi3/3.1.4 | bdw | 2018.6-cpu |
2018.6 | gnu7/7.3.0 openmpi3/3.1.4 | gpu | 2018.6-gpu |
2018.6 | gnu7/7.3.0 openmpi3/3.1.4 | knl | 2018.6-knl |
2018.6 | gnu7/7.3.0 openmpi3/3.1.4 | skl | 2018.6-skl |
2018.8 | gnu/5.4.0 openmpi/1.10.7 | bdw | 2018.8-bdw |
2018.8 | gnu/5.4.0 openmpi/1.10.7 | bdw | 2018.8-cpu |
2018.8 | gnu/5.4.0 openmpi/1.10.7 | gpu | 2018.8-gpu |
2018.8 | gnu/5.4.0 openmpi/1.10.7 | knl | 2018.8-knl |
2018.8 | gnu/5.4.0 openmpi/1.10.7 | skl | 2018.8-skl |
2018.8 | gnu7/7.3.0 openmpi3/3.1.4 | bdw | 2018.8-bdw |
2018.8 | gnu7/7.3.0 openmpi3/3.1.4 | bdw | 2018.8-cpu |
2018.8 | gnu7/7.3.0 openmpi3/3.1.4 | gpu | 2018.8-gpu |
2018.8 | gnu7/7.3.0 openmpi3/3.1.4 | knl | 2018.8-knl |
2018.8 | gnu7/7.3.0 openmpi3/3.1.4 | skl | 2018.8-skl |
2019.4 | gnu/5.4.0 openmpi/1.10.7 | bdw | 2019.4-bdw |
2019.4 | gnu/5.4.0 openmpi/1.10.7 | bdw | 2019.4-cpu |
2019.4 | gnu/5.4.0 openmpi/1.10.7 | gpu | 2019.4-gpu |
2019.4 | gnu/5.4.0 openmpi/1.10.7 | knl | 2019.4-knl |
2019.4 | gnu/5.4.0 openmpi/1.10.7 | skl | 2019.4-skl |
2019.4 | gnu7/7.3.0 openmpi3/3.1.4 | bdw | 2019.4-bdw |
2019.4 | gnu7/7.3.0 openmpi3/3.1.4 | bdw | 2019.4-cpu |
2019.4 | gnu7/7.3.0 openmpi3/3.1.4 | gpu | 2019.4-gpu |
2019.4 | gnu7/7.3.0 openmpi3/3.1.4 | knl | 2019.4-knl |
2019.4 | gnu7/7.3.0 openmpi3/3.1.4 | skl | 2019.4-skl |
2019.6 | gnu/5.4.0 openmpi/1.10.7 | bdw | 2019.6-bdw |
2019.6 | gnu/5.4.0 openmpi/1.10.7 | bdw | 2019.6-cpu |
2019.6 | gnu/5.4.0 openmpi/1.10.7 | gpu | 2019.6-gpu |
2019.6 | gnu/5.4.0 openmpi/1.10.7 | knl | 2019.6-knl |
2019.6 | gnu/5.4.0 openmpi/1.10.7 | skl | 2019.6-skl |
2019.6 | gnu7/7.3.0 openmpi3/3.1.4 | bdw | 2019.6-bdw |
2019.6 | gnu7/7.3.0 openmpi3/3.1.4 | bdw | 2019.6-cpu |
2019.6 | gnu7/7.3.0 openmpi3/3.1.4 | gpu | 2019.6-gpu |
2019.6 | gnu7/7.3.0 openmpi3/3.1.4 | knl | 2019.6-knl |
2019.6 | gnu7/7.3.0 openmpi3/3.1.4 | skl | 2019.6-skl |
2020.7 | gnu8/8.3.0 openmpi3/3.1.6 | bdw | 2020.7-bdw |
2020.7 | gnu8/8.3.0 openmpi3/3.1.6 | bdw | 2020.7-cpu |
2020.7 | gnu8/8.3.0 openmpi3/3.1.6 | gpu | 2020.7-gpu |
2020.7 | gnu8/8.3.0 openmpi3/3.1.6 | knl | 2020.7-knl |
2020.7 | gnu8/8.3.0 openmpi3/3.1.6 | skl | 2020.7-skl |
2021.4 | gnu8/8.3.0 openmpi3/3.1.6 | bdw | 2021.4-bdw |
2021.4 | gnu8/8.3.0 openmpi3/3.1.6 | bdw | 2021.4-cpu |
2021.4 | gnu8/8.3.0 openmpi3/3.1.6 | gpu | 2021.4-gpu |
2021.4 | gnu8/8.3.0 openmpi3/3.1.6 | knl | 2021.4-knl |
2021.4 | gnu8/8.3.0 openmpi3/3.1.6 | skl | 2021.4-skl |
2021.7 | gnu8/8.3.0 openmpi3/3.1.6 | bdw | 2021.7-bdw |
2021.7 | gnu8/8.3.0 openmpi3/3.1.6 | bdw | 2021.7-cpu |
2021.7 | gnu8/8.3.0 openmpi3/3.1.6 | gpu | 2021.7-gpu |
2021.7 | gnu8/8.3.0 openmpi3/3.1.6 | knl | 2021.7-knl |
2021.7 | gnu8/8.3.0 openmpi3/3.1.6 | skl | 2021.7-skl |
2022.6 | gnu8/8.3.0 openmpi3/3.1.6 | bdw | 2022.6-bdw |
2022.6 | gnu8/8.3.0 openmpi3/3.1.6 | bdw | 2022.6-cpu |
2022.6 | gnu8/8.3.0 openmpi3/3.1.6 | gpu | 2022.6-gpu |
2022.6 | gnu8/8.3.0 openmpi3/3.1.6 | knl | 2022.6-knl |
2022.6 | gnu8/8.3.0 openmpi3/3.1.6 | skl | 2022.6-skl |
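For example, to work with the 2022.6 CPU build, the prerequisite modules from the table are loaded first and then the Gromacs module itself; a quick check with gmx --version confirms the environment:

```bash
# Show the available Gromacs modules.
module avail gromacs

# Load the prerequisites and the desired flavor (here: 2022.6, CPU build).
module load gnu8
module load openmpi3
module load gromacs/2022.6-cpu

# Verify that the gmx binary is now in the PATH.
gmx --version
```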
GMXLIB environment variable
To define the GMXLIB environment variable, add the following lines to the file $HOME/.bash_profile:

```bash
GMXLIB=$HOME/gromacs/top
export GMXLIB
```

The path $HOME/gromacs/top is purely indicative; modify it according to your preferences.
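After editing $HOME/.bash_profile the variable is set at the next login; to check it in the current shell (assuming the indicative path above):

```bash
# Re-read the profile in the current shell; new login shells pick it up automatically.
source $HOME/.bash_profile

# Confirm that the variable is set and that the directory with the custom
# topologies exists.
echo "$GMXLIB"
ls "$GMXLIB"
```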
Job Gromacs OpenMP
Gromacs 4.5.7
Gromacs 5.1.4
Gromacs 5.1.5
Script mdrun-omp.sh to exclusively request a node and start multiple OpenMP threads (tested with --cpus-per-task=14 and --cpus-per-task=16):
- mdrun-omp.sh
```bash
#!/bin/bash --login
#SBATCH --job-name=mdrun_omp
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=14
#SBATCH --exclusive
#SBATCH --time=0-24:00:00
#SBATCH --mem=120G
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1

module load gnu
module load openmpi
module load gromacs/5.1.5-cpu

# Set OMP_NUM_THREADS to the same value as --cpus-per-task with a fallback in
# case it isn't set. SLURM_CPUS_PER_TASK is set to the value of --cpus-per-task,
# but only if --cpus-per-task is explicitly set.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}

gmx mdrun -deffnm topology -pin on
```
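Submission and monitoring follow the usual SLURM workflow, for example:

```bash
# Submit the job script (fill in <account> first).
sbatch mdrun-omp.sh

# Check the job state in the queue.
squeue -u $USER

# Output and error files are named after --job-name and the job ID,
# e.g. mdrun_omp.o12345 and mdrun_omp.e12345.
```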
Gromacs 2016.6
Script mdrun-omp.sh to exclusively request a node and start multiple OpenMP threads (tested with --cpus-per-task=14 and --cpus-per-task=16):
- mdrun-omp.sh
```bash
#!/bin/bash --login
#SBATCH --job-name=mdrun_omp
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=14
#SBATCH --exclusive
#SBATCH --time=0-24:00:00
#SBATCH --mem=120G
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1

module load gnu
module load openmpi
module load gromacs/2016.6-cpu

# Set OMP_NUM_THREADS to the same value as --cpus-per-task with a fallback in
# case it isn't set. SLURM_CPUS_PER_TASK is set to the value of --cpus-per-task,
# but only if --cpus-per-task is explicitly set.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}

gmx mdrun -deffnm topology -pin on -plumed plumed.dat
```
Job Gromacs MPI OpenMP
Gromacs 5.1.5
Script mdrun-mpi-omp.sh to exclusively request one or more nodes (tested with --nodes=1 and --nodes=2) and start multiple MPI processes (tested with --ntasks-per-node=6 and --ntasks-per-node=8; the number of OpenMP threads will be calculated automatically if --cpus-per-task is not explicitly set):
- mdrun-mpi-omp.sh
```bash
#!/bin/bash --login
#SBATCH --job-name=mdrun_mpi_omp
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=6
#SBATCH --exclusive
#SBATCH --time=0-24:00:00
#SBATCH --mem=120G
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1

module load gnu
module load openmpi
module load gromacs/5.1.5-cpu

# Set OMP_NUM_THREADS to the same value as --cpus-per-task with a fallback in
# case it isn't set. SLURM_CPUS_PER_TASK is set to the value of --cpus-per-task,
# but only if --cpus-per-task is explicitly set.
if [ -n "$SLURM_CPUS_PER_TASK" ]; then
    OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
else
    if [ -n "$SLURM_NTASKS_PER_NODE" ]; then
        OMP_NUM_THREADS=$((SLURM_CPUS_ON_NODE/SLURM_NTASKS_PER_NODE))
    else
        if [ -n "$SLURM_NTASKS" ] && [ -n "$SLURM_NNODES" ]; then
            OMP_NUM_THREADS=$((SLURM_CPUS_ON_NODE/(SLURM_NTASKS/SLURM_NNODES)))
        else
            OMP_NUM_THREADS=1
        fi
    fi
fi
export OMP_NUM_THREADS

mpirun gmx mdrun -deffnm topology -pin on
```
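The resource request can also be changed at submission time without editing the script, since sbatch options given on the command line override the corresponding #SBATCH directives; this is how the tested combinations mentioned above can be reproduced:

```bash
# Two nodes with 8 MPI ranks per node; the script then derives
# OMP_NUM_THREADS from SLURM_CPUS_ON_NODE / SLURM_NTASKS_PER_NODE.
sbatch --nodes=2 --ntasks-per-node=8 mdrun-mpi-omp.sh
```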
Gromacs 2016.6
Script mdrun-mpi-omp.sh to exclusively request one or more nodes (tested with --nodes=1 and --nodes=2) and start multiple MPI processes (tested with --ntasks-per-node=6 and --ntasks-per-node=8; the number of OpenMP threads will be calculated automatically if --cpus-per-task is not explicitly set):
- mdrun-mpi-omp.sh
```bash
#!/bin/bash --login
#SBATCH --job-name=mdrun_mpi_omp
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=6
#SBATCH --exclusive
#SBATCH --time=0-24:00:00
#SBATCH --mem=120G
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1

module load gnu
module load openmpi
module load gromacs/2016.6-cpu

# Set OMP_NUM_THREADS to the same value as --cpus-per-task with a fallback in
# case it isn't set. SLURM_CPUS_PER_TASK is set to the value of --cpus-per-task,
# but only if --cpus-per-task is explicitly set.
if [ -n "$SLURM_CPUS_PER_TASK" ]; then
    OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
else
    if [ -n "$SLURM_NTASKS_PER_NODE" ]; then
        OMP_NUM_THREADS=$((SLURM_CPUS_ON_NODE/SLURM_NTASKS_PER_NODE))
    else
        if [ -n "$SLURM_NTASKS" ] && [ -n "$SLURM_NNODES" ]; then
            OMP_NUM_THREADS=$((SLURM_CPUS_ON_NODE/(SLURM_NTASKS/SLURM_NNODES)))
        else
            OMP_NUM_THREADS=1
        fi
    fi
fi
export OMP_NUM_THREADS

mpirun gmx mdrun -deffnm topology -pin on -plumed plumed.dat
```
Gromacs 2019.6
Script mdrun-mpi-omp-9x3.sh to request one node and start 9 MPI processes and 3 OpenMP threads for each MPI process (27 threads overall):
- mdrun-mpi-omp-9x3.sh
```bash
#!/bin/bash --login
#SBATCH --job-name=mdrun_mpi_omp_9x3
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
#SBATCH --time=0-24:00:00
#SBATCH --mem=80G
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1

module load gnu7
module load openmpi3
module load gromacs/2019.6-cpu
module list

OMP_NUM_THREADS=3
export OMP_NUM_THREADS

mpirun -np 9 gmx mdrun -deffnm meta -pin on -plumed plumed.dat
```
Gromacs 2021.7
Script mdrun-mpi-omp-9x3.sh to request one node and start 9 MPI processes and 3 OpenMP threads for each MPI process (27 threads overall):
- mdrun-mpi-omp-9x3.sh
```bash
#!/bin/bash --login
#SBATCH --job-name=mdrun_mpi_omp_9x3
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
#SBATCH --time=0-24:00:00
#SBATCH --mem=80G
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1

module load gnu8
module load openmpi3
module load gromacs/2021.7-cpu
module list

OMP_NUM_THREADS=3
export OMP_NUM_THREADS

mpirun -np 9 gmx mdrun -deffnm meta -pin on -plumed plumed.dat
```
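The --ntasks-per-node=28 request above indicates 28 cores per node, so the 9x3 layout leaves one core idle. The rank/thread split can be changed by adjusting -np and the OpenMP thread count together; the 7x4 split below is purely illustrative:

```bash
# Alternative decomposition: 7 MPI ranks x 4 OpenMP threads = 28 threads.
OMP_NUM_THREADS=4
export OMP_NUM_THREADS
mpirun -np 7 gmx mdrun -deffnm meta -ntomp 4 -pin on -plumed plumed.dat
```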
Job Gromacs OpenMP GPU
Gromacs 2016.6
The mdrun-omp-cuda.sh script runs a Gromacs job on 4 cores with 1 CUDA GPU:
- mdrun-omp-cuda.sh
```bash
#!/bin/bash --login
#SBATCH --job-name=mdrun_omp_cuda
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --gres=gpu:1
#SBATCH --time=0-24:00:00
#SBATCH --mem=16G
#SBATCH --partition=gpu
#SBATCH --qos=gpu
#SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1

module load gnu
module load openmpi
module load gromacs/2016.6-gpu

# Set OMP_NUM_THREADS to the same value as --cpus-per-task with a fallback in
# case it isn't set. SLURM_CPUS_PER_TASK is set to the value of --cpus-per-task,
# but only if --cpus-per-task is explicitly set.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}

gmx mdrun -deffnm topology -pin on -dlb auto -plumed plumed.dat
```
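The <account> placeholder can be filled in either by editing the script or at submission time, since command-line sbatch options override the corresponding #SBATCH directives; for example (myproject is a placeholder account name):

```bash
# Submit against a specific account without editing the script.
sbatch --account=myproject mdrun-omp-cuda.sh
```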
Job Gromacs MPI GPU
Gromacs 2019.6
Script mdrun-mpi-omp-gpu-4x2.sh to request one node and start 4 MPI processes and 2 OpenMP threads for each MPI process (8 threads overall) with 2 CUDA GPUs:
- mdrun-mpi-omp-gpu-4x2.sh
```bash
#!/bin/bash --login
#SBATCH --job-name=mdrun_mpi_omp_gpu
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --gres=gpu:2
#SBATCH --time=0-24:00:00
#SBATCH --mem=16G
#SBATCH --partition=gpu
#SBATCH --qos=gpu
#SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1

module load gnu7
module load openmpi3
module load gromacs/2019.6-gpu

OMP_NUM_THREADS=2
export OMP_NUM_THREADS

mpirun -np 4 gmx mdrun -deffnm meta -dlb auto -plumed plumed.dat
```
Gromacs 2021.7
Script mdrun-mpi-omp-gpu-4x2.sh to request one node and start 4 MPI processes and 2 OpenMP threads for each MPI process (8 threads overall) with 2 CUDA GPUs:
- mdrun-mpi-omp-gpu-4x2.sh
```bash
#!/bin/bash --login
#SBATCH --job-name=mdrun_mpi_omp_gpu
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --gres=gpu:2
#SBATCH --time=0-24:00:00
#SBATCH --mem=16G
#SBATCH --partition=gpu
#SBATCH --qos=gpu
#SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1

module load gnu8
module load openmpi3
module load gromacs/2021.7-gpu

OMP_NUM_THREADS=2
export OMP_NUM_THREADS

mpirun -np 4 gmx mdrun -deffnm meta -dlb auto -plumed plumed.dat
```
Gromacs with Apptainer
Available versions of Gromacs:
Gromacs |
---|
2025.1 |
Examples
The following examples demonstrate using the NGC GROMACS container to run the STMV benchmark.
Download the STMV benchmark:
```bash
wget https://zenodo.org/record/3893789/files/GROMACS_heterogeneous_parallelization_benchmark_info_and_systems_JCP.tar.gz
tar xf GROMACS_heterogeneous_parallelization_benchmark_info_and_systems_JCP.tar.gz
cd GROMACS_heterogeneous_parallelization_benchmark_info_and_systems_JCP/stmv
```
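The mdrun calls below pass neither -s nor -deffnm, so GROMACS falls back to the default run input name topol.tpr; assuming the archive layout has not changed, a quick check that the input is in place:

```bash
# The benchmark directory should contain the prepared run input file.
ls -lh topol.tpr
```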
Job Gromacs CPU
Script slurm-gromacs-2025.1-cpu.sh to request one node, start 8 thread-MPI tasks with 6 OpenMP threads for each task (48 threads overall), and run the STMV benchmark:
- slurm-gromacs-2025.1-cpu.sh
```bash
#!/bin/bash --login
#SBATCH --job-name=mdrun_cpu
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=6
#SBATCH --time=0-04:00:00
#SBATCH --mem=32G
#SBATCH --partition=cpu_guest
#SBATCH --qos=cpu_guest
##SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1
test $SLURM_NNODES -eq 1 || exit 1

module load apptainer
module load gromacs/2025.1/cpu

apptainer run \
  --bind "$PWD:/host_pwd" \
  --pwd /host_pwd \
  "$GROMACS_CONTAINER" \
  gmx mdrun \
    -v \
    -ntmpi $SLURM_TASKS_PER_NODE \
    -ntomp $SLURM_CPUS_PER_TASK \
    -nb cpu \
    -pme cpu \
    -npme 1 \
    -update cpu \
    -bonded cpu \
    -nsteps 100000 \
    -resetstep 90000 \
    -noconfout \
    -dlb no \
    -nstlist 300 \
    -pin on
```
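After the job finishes, the achieved throughput can be read from the mdrun log (written to md.log in the submission directory, since no -g or -deffnm is given); for example:

```bash
# The end of the mdrun log reports the benchmark throughput in ns/day.
grep "Performance:" md.log
```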
Job Gromacs GPU
Script slurm-gromacs-2025.1-gpu.sh to request one node, start 8 thread-MPI tasks with 6 OpenMP threads for each task (48 threads overall) with 8 CUDA GPUs, and run the STMV benchmark:
- slurm-gromacs-2025.1-gpu.sh
```bash
#!/bin/bash --login
#SBATCH --job-name=mdrun_gpu
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=6
#SBATCH --time=0-00:30:00
#SBATCH --mem=8G
#SBATCH --gres=gpu:a5000_mm1:8
#SBATCH --partition=gpu_guest
#SBATCH --qos=gpu_guest
##SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1
test $SLURM_NNODES -eq 1 || exit 1

module load apptainer
module load gromacs/2025.1/gpu

apptainer run \
  --nv \
  --bind "$PWD:/host_pwd" \
  --pwd /host_pwd \
  "$GROMACS_CONTAINER" \
  gmx mdrun \
    -v \
    -ntmpi $SLURM_TASKS_PER_NODE \
    -ntomp $SLURM_CPUS_PER_TASK \
    -nb gpu \
    -pme gpu \
    -npme 1 \
    -update gpu \
    -bonded gpu \
    -nsteps 100000 \
    -resetstep 90000 \
    -noconfout \
    -dlb no \
    -nstlist 300 \
    -pin on
```
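If the run does not pick up any GPUs, it can help to verify that the devices are visible inside the container; a minimal check, reusing the GROMACS_CONTAINER variable set by the gromacs/2025.1/gpu module:

```bash
# Check that --nv passes the NVIDIA driver and the allocated GPUs through
# to the container.
apptainer exec --nv "$GROMACS_CONTAINER" nvidia-smi
```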