calcoloscientifico:userguide:gromacs

^ Gromacs  ^ Plumed  ^ Compiler   ^ MPI             ^ CUDA SDK  ^ CUDA RTL  ^ CUDA Architectures  ^
| 4.5.7    |         | GNU 5.4.0  | OpenMPI 1.10.7  |           |           |                     |
| 5.1.4    | 2.3.8   | GNU 5.4.0  | OpenMPI 1.10.7  |           |           |                     |
| 5.1.5    | 2.3.8   | GNU 5.4.0  | OpenMPI 1.10.7  |           |           |                     |
| 2016.4   | 2.3.8   | GNU 5.4.0  | OpenMPI 1.10.7  | 10.0.130  | 10.0.130  | 6.0, 7.0            |
| 2016.6   | 2.3.8   | GNU 5.4.0  | OpenMPI 1.10.7  | 10.0.130  | 10.0.130  | 6.0, 7.0            |
| 2018.6   | 2.4.8   | GNU 5.4.0  | OpenMPI 1.10.7  | 10.0.130  | 10.0.130  | 6.0, 7.0            |
| 2018.6   | 2.4.8   | GNU 7.3.0  | OpenMPI 3.1.4   | 10.0.130  | 10.0.130  | 6.0, 7.0            |
| 2019.6   | 2.8.3   | GNU 5.4.0  | OpenMPI 1.10.7  | 10.0.130  | 10.0.130  | 6.0, 7.0            |
| 2019.6   | 2.8.3   | GNU 7.3.0  | OpenMPI 3.1.4   | 10.0.130  | 10.0.130  | 6.0, 7.0            |
| 2020.7   | 2.8.3   | GNU 8.3.0  | OpenMPI 3.1.6   | 12.0.0    | 12.0.0    | 6.0, 7.0, 8.0       |
| 2021.4   | 2.8.3   | GNU 8.3.0  | OpenMPI 3.1.6   | 12.0.0    | 12.0.0    | 6.0, 7.0, 8.0       |
| 2021.7   | 2.8.3   | GNU 8.3.0  | OpenMPI 3.1.6   | 12.0.0    | 12.0.0    | 6.0, 7.0, 8.0       |
| 2022.6   | 2.8.3   | GNU 8.3.0  | OpenMPI 3.1.6   | 12.0.0    | 12.0.0    | 6.0, 7.0, 8.0       |
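
Each build in the table is provided as an environment module, with the matching compiler and MPI modules loaded before the Gromacs module itself, as in the script excerpts below. A quick way to check which Gromacs modules are actually installed (generic Environment Modules commands, not specific to this page):

<code bash>
# List the installed Gromacs modules and their exact names
module avail gromacs

# Example combination using the module names that appear later on this page
module load gnu openmpi gromacs/5.1.5-cpu
module list
</code>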
  
</note>
  
=== Gromacs 5.1.5 ===
  
Script ''mdrun-mpi-omp.sh'' to exclusively request one or more nodes (tested with --nodes=1 and --nodes=2) and start multiple MPI processes (tested with --ntasks-per-node=6 and --ntasks-per-node=8; the number of OpenMP threads will be calculated automatically if --cpus-per-task is not explicitly set):
module load gnu
module load openmpi
module load gromacs/5.1.5-cpu

# Set OMP_NUM_THREADS to the same value as --cpus-per-task with a fallback in
# case --cpus-per-task is not explicitly set
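
The remainder of the script is not shown in this excerpt. A minimal sketch of how such a fallback is commonly derived from Slurm environment variables (an assumption for illustration, not the cluster's actual ''mdrun-mpi-omp.sh''):

<code bash>
# Sketch only: take the OpenMP thread count from --cpus-per-task when given,
# otherwise divide the CPUs of the (exclusively allocated) node by the number
# of MPI tasks per node
if [ -n "$SLURM_CPUS_PER_TASK" ]; then
    export OMP_NUM_THREADS="$SLURM_CPUS_PER_TASK"
else
    export OMP_NUM_THREADS=$(( SLURM_CPUS_ON_NODE / SLURM_NTASKS_PER_NODE ))
fi

# Hypothetical launch line; the input name "topol" is a placeholder
mpirun gmx mdrun -deffnm topol -ntomp "$OMP_NUM_THREADS"
</code>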
  
mpirun -np 4 gmx mdrun -deffnm meta -dlb auto -plumed plumed.dat
</code>
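
The ''-plumed plumed.dat'' option makes mdrun read a PLUMED input file. Purely as an illustration (this file is not part of the guide and the collective variable is hypothetical), a minimal metadynamics input could look like:

<code - plumed.dat>
# Hypothetical example: metadynamics on the distance between atoms 1 and 10
d: DISTANCE ATOMS=1,10
METAD ARG=d SIGMA=0.05 HEIGHT=1.2 PACE=500
PRINT ARG=d STRIDE=100 FILE=COLVAR
</code>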

===== Gromacs with Apptainer =====

Available versions of Gromacs:

^ Gromacs                                                      ^
| [[http://manual.gromacs.org/documentation/2025.1|2025.1]]    |

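Loading the ''gromacs/2025.1/cpu'' or ''gromacs/2025.1/gpu'' module is expected to define the ''GROMACS_CONTAINER'' variable used by the job scripts below (an assumption based on those scripts). A quick interactive check:

<code bash>
module load apptainer
module load gromacs/2025.1/cpu

# Path of the container image provided by the module
echo "$GROMACS_CONTAINER"

# Run gmx inside the container to confirm the GROMACS version
apptainer exec "$GROMACS_CONTAINER" gmx --version
</code>
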
==== Examples ====

The following examples demonstrate using the [[https://catalog.ngc.nvidia.com/orgs/hpc/containers/gromacs|NGC GROMACS]] container to run the ''STMV'' benchmark.

Download the ''STMV'' benchmark:

<code bash>
wget https://zenodo.org/record/3893789/files/GROMACS_heterogeneous_parallelization_benchmark_info_and_systems_JCP.tar.gz
tar xf GROMACS_heterogeneous_parallelization_benchmark_info_and_systems_JCP.tar.gz
cd GROMACS_heterogeneous_parallelization_benchmark_info_and_systems_JCP/stmv
</code>

==== Job Gromacs CPU ====

Script ''slurm-gromacs-2025.1-cpu.sh'' to request one node, start 8 thread-MPI tasks with 6 OpenMP threads each (48 threads overall), and run the ''STMV'' benchmark:

<code bash slurm-gromacs-2025.1-cpu.sh>
#!/bin/bash --login
#SBATCH --job-name=mdrun_cpu
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=6
#SBATCH --time=0-04:00:00
#SBATCH --mem=16G
#SBATCH --partition=cpu_guest
#SBATCH --qos=cpu_guest
##SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1
test $SLURM_NNODES -eq 1 || exit 1

module load apptainer
module load gromacs/2025.1/cpu

apptainer run \
    --bind "$PWD:/host_pwd" \
    --pwd /host_pwd \
    "$GROMACS_CONTAINER" \
    gmx mdrun \
        -v \
        -ntmpi $SLURM_TASKS_PER_NODE \
        -ntomp $SLURM_CPUS_PER_TASK \
        -nb cpu \
        -pme cpu \
        -npme 1 \
        -update cpu \
        -bonded cpu \
        -nsteps 100000 \
        -resetstep 90000 \
        -noconfout \
        -dlb no \
        -nstlist 300 \
        -pin on
</code>
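
The job can be submitted from the ''stmv'' benchmark directory, since the script binds the current working directory into the container; for example:

<code bash>
cd GROMACS_heterogeneous_parallelization_benchmark_info_and_systems_JCP/stmv
sbatch slurm-gromacs-2025.1-cpu.sh

# Monitor the job; stdout/stderr go to mdrun_cpu.o<jobid> / mdrun_cpu.e<jobid>
squeue -u "$USER"
</code>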

==== Job Gromacs GPU ====

Script ''slurm-gromacs-2025.1-gpu.sh'' to request one node with 8 CUDA GPUs, start 8 thread-MPI tasks with 6 OpenMP threads each (48 threads overall), and run the ''STMV'' benchmark:

<code bash slurm-gromacs-2025.1-gpu.sh>
#!/bin/bash --login
#SBATCH --job-name=mdrun_gpu
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=6
#SBATCH --time=0-00:30:00
#SBATCH --mem=8G
#SBATCH --gres=gpu:a5000_mm1:8
#SBATCH --partition=gpu_guest
#SBATCH --qos=gpu_guest
##SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1
test $SLURM_NNODES -eq 1 || exit 1

module load apptainer
module load gromacs/2025.1/gpu

apptainer run \
    --nv \
    --bind "$PWD:/host_pwd" \
    --pwd /host_pwd \
    "$GROMACS_CONTAINER" \
    gmx mdrun \
        -v \
        -ntmpi $SLURM_TASKS_PER_NODE \
        -ntomp $SLURM_CPUS_PER_TASK \
        -nb gpu \
        -pme gpu \
        -npme 1 \
        -update gpu \
        -bonded gpu \
        -nsteps 100000 \
        -resetstep 90000 \
        -noconfout \
        -dlb no \
        -nstlist 300 \
        -pin on
</code>
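
Submission works as in the CPU case. Once the job finishes, the benchmark throughput can be read from the default mdrun log (''md.log'', since neither ''-deffnm'' nor ''-g'' is given):

<code bash>
sbatch slurm-gromacs-2025.1-gpu.sh

# After completion: performance summary in ns/day and hour/ns
grep -B1 "^Performance" md.log
</code>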
  