  
mpirun -np 4 gmx mdrun -deffnm meta -dlb auto -plumed plumed.dat
</code>

===== Gromacs with Apptainer =====

Available versions of Gromacs:

^ Gromacs                                                      ^
| [[http://manual.gromacs.org/documentation/2025.1|2025.1]]    |

==== Examples ====

The following examples demonstrate using the [[https://catalog.ngc.nvidia.com/orgs/hpc/containers/gromacs|NGC GROMACS]] container to run the ''STMV'' benchmark.
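The job scripts below take the container image path from the ''$GROMACS_CONTAINER'' environment variable set by the ''gromacs/2025.1'' modules. If those modules are unavailable, the image can be pulled from NGC by hand; this is a minimal sketch, and the ''2025.1'' tag is an assumption about the NGC repository naming:

<code bash>
# Hypothetical manual pull; check the NGC catalog for the exact tag.
module load apptainer
apptainer pull gromacs-2025.1.sif docker://nvcr.io/hpc/gromacs:2025.1
export GROMACS_CONTAINER="$PWD/gromacs-2025.1.sif"
</code>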

Download the ''STMV'' benchmark:

<code bash>
wget https://zenodo.org/record/3893789/files/GROMACS_heterogeneous_parallelization_benchmark_info_and_systems_JCP.tar.gz
tar xf GROMACS_heterogeneous_parallelization_benchmark_info_and_systems_JCP.tar.gz
cd GROMACS_heterogeneous_parallelization_benchmark_info_and_systems_JCP/stmv
</code>
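
The ''stmv'' directory should contain a pre-built run input file, ''topol.tpr'', which ''gmx mdrun'' picks up by its default name (no ''grompp'' step is needed). A quick sanity check before submitting, assuming the archive layout is unchanged:

<code bash>
# Verify that the STMV run input extracted correctly
test -f topol.tpr && echo "STMV input ready" || echo "topol.tpr missing"
</code>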

==== Job Gromacs CPU ====

Script ''slurm-gromacs-2025.1-cpu.sh'' requests one node, starts 8 thread-MPI tasks with 6 OpenMP threads per task (48 threads overall), and runs the ''STMV'' benchmark:

<code bash slurm-gromacs-2025.1-cpu.sh>
#!/bin/bash --login
#SBATCH --job-name=mdrun_cpu
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=6
#SBATCH --time=0-04:00:00
#SBATCH --mem=16G
#SBATCH --partition=cpu_guest
#SBATCH --qos=cpu_guest
##SBATCH --account=<account>

# Abort unless running as a login shell, under Slurm, on a single node
shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1
test $SLURM_NNODES -eq 1 || exit 1

module load apptainer
module load gromacs/2025.1/cpu

# Run mdrun inside the container, binding the submission directory;
# one thread-MPI rank per Slurm task, one OpenMP thread per allocated CPU
apptainer run \
    --bind "$PWD:/host_pwd" \
    --pwd /host_pwd \
    "$GROMACS_CONTAINER" \
    gmx mdrun \
        -v \
        -ntmpi $SLURM_TASKS_PER_NODE \
        -ntomp $SLURM_CPUS_PER_TASK \
        -nb cpu \
        -pme cpu \
        -npme 1 \
        -update cpu \
        -bonded cpu \
        -nsteps 100000 \
        -resetstep 90000 \
        -noconfout \
        -dlb no \
        -nstlist 300 \
        -pin on
</code>
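
Submit with ''sbatch''; the ''%x.o%j'' pattern names the output and error files after the job name and job id:

<code bash>
sbatch slurm-gromacs-2025.1-cpu.sh
# Follow the run; replace <jobid> with the id printed by sbatch
tail -f mdrun_cpu.o<jobid>
</code>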

==== Job Gromacs GPU ====

Script ''slurm-gromacs-2025.1-gpu.sh'' requests one node, starts 8 thread-MPI tasks with 6 OpenMP threads per task (48 threads overall) on 8 CUDA GPUs, and runs the ''STMV'' benchmark:

<code bash slurm-gromacs-2025.1-gpu.sh>
#!/bin/bash --login
#SBATCH --job-name=mdrun_gpu
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=6
#SBATCH --time=0-00:30:00
#SBATCH --mem=8G
#SBATCH --gres=gpu:a5000_mm1:8
#SBATCH --partition=gpu_guest
#SBATCH --qos=gpu_guest
##SBATCH --account=<account>

# Abort unless running as a login shell, under Slurm, on a single node
shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1
test $SLURM_NNODES -eq 1 || exit 1

module load apptainer
module load gromacs/2025.1/gpu

# Run mdrun inside the container; --nv exposes the NVIDIA devices,
# and nonbonded, PME, update and bonded work are all offloaded to GPU
apptainer run \
    --nv \
    --bind "$PWD:/host_pwd" \
    --pwd /host_pwd \
    "$GROMACS_CONTAINER" \
    gmx mdrun \
        -v \
        -ntmpi $SLURM_TASKS_PER_NODE \
        -ntomp $SLURM_CPUS_PER_TASK \
        -nb gpu \
        -pme gpu \
        -npme 1 \
        -update gpu \
        -bonded gpu \
        -nsteps 100000 \
        -resetstep 90000 \
        -noconfout \
        -dlb no \
        -nstlist 300 \
        -pin on
</code>
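
At the end of the run, ''gmx mdrun'' appends a performance summary to its log file (''md.log'' by default, since ''-deffnm'' is not used here); ''-resetstep 90000'' restarts the timers so the reported throughput covers only the last 10000 steps:

<code bash>
# The final lines of the log report the achieved throughput (ns/day)
grep 'Performance:' md.log
</code>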
  