====== Gromacs ======
  
^ Gromacs  ^ Plumed  ^ Compiler   ^ MPI             ^ CUDA SDK  ^ CUDA RTL  ^ CUDA Architectures  ^
| 4.5.7    |         | GNU 5.4.0  | OpenMPI 1.10.7  |           |           |                     |
| 5.1.4    | 2.3.    | GNU 5.4.0  | OpenMPI 1.10.7  |           |           |                     |
| 5.1.5    | 2.3.    | GNU 5.4.0  | OpenMPI 1.10.7  |           |           |                     |
| 2016.4   | 2.3.    | GNU 5.4.0  | OpenMPI 1.10.7  | 10.0.130  | 10.0.130  | 6.0, 7.0            |
| 2016.6   | 2.3.    | GNU 5.4.0  | OpenMPI 1.10.7  | 10.0.130  | 10.0.130  | 6.0, 7.0            |
| 2018.6   | 2.4.    | GNU 5.4.0  | OpenMPI 1.10.7  | 10.0.130  | 10.0.130  | 6.0, 7.0            |
| 2018.6   | 2.4.    | GNU 7.3.0  | OpenMPI 3.1.4   | 10.0.130  | 10.0.130  | 6.0, 7.0            |
| 2018.8   | 2.4.    | GNU 5.4.0  | OpenMPI 1.10.7  | 10.0.130  | 10.0.130  | 6.0, 7.0            |
| 2018.8   | 2.4.    | GNU 7.3.0  | OpenMPI 3.1.4   | 10.0.130  | 10.0.130  | 6.0, 7.0            |
| 2019.4   | 2.8.3   | GNU 5.4.0  | OpenMPI 1.10.7  | 10.0.130  | 10.0.130  | 6.0, 7.0            |
| 2019.4   | 2.8.3   | GNU 7.3.0  | OpenMPI 3.1.4   | 10.0.130  | 10.0.130  | 6.0, 7.0            |
| 2019.6   | 2.8.3   | GNU 5.4.0  | OpenMPI 1.10.7  | 10.0.130  | 10.0.130  | 6.0, 7.0            |
| 2019.6   | 2.8.3   | GNU 7.3.0  | OpenMPI 3.1.4   | 10.0.130  | 10.0.130  | 6.0, 7.0            |
| 2020.7   | 2.8.3   | GNU 8.3.0  | OpenMPI 3.1.6   | 12.0.0    | 12.0.0    | 6.0, 7.0, 8.0       |
| 2021.4   | 2.8.3   | GNU 8.3.0  | OpenMPI 3.1.6   | 12.0.0    | 12.0.0    | 6.0, 7.0, 8.0       |
| 2021.7   | 2.8.3   | GNU 8.3.0  | OpenMPI 3.1.6   | 12.0.0    | 12.0.0    | 6.0, 7.0, 8.0       |
| 2022.6   | 2.8.3   | GNU 8.3.0  | OpenMPI 3.1.6   | 12.0.0    | 12.0.0    | 6.0, 7.0, 8.0       |
  
| 2016.4   | gnu/5.4.0 openmpi/1.10.7   | gpu     | 2016.4-gpu  |
| 2016.4   | gnu/5.4.0 openmpi/1.10.7   | knl     | 2016.4-knl  |
| 2016.4   | gnu/5.4.0 openmpi/1.10.7   | skl     | 2016.4-skl  |
| 2016.6   | gnu/5.4.0 openmpi/1.10.7   | bdw     | 2016.6-bdw  |
| 2016.6   | gnu/5.4.0 openmpi/1.10.7   | bdw     | 2016.6-cpu  |
| 2016.6   | gnu/5.4.0 openmpi/1.10.7   | gpu     | 2016.6-gpu  |
| 2016.6   | gnu/5.4.0 openmpi/1.10.7   | knl     | 2016.6-knl  |
| 2016.6   | gnu/5.4.0 openmpi/1.10.7   | skl     | 2016.6-skl  |
| 2018.6   | gnu/5.4.0 openmpi/1.10.7   | bdw     | 2018.6-bdw  |
| 2018.6   | gnu/5.4.0 openmpi/1.10.7   | bdw     | 2018.6-cpu  |
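To use one of the builds above, the prerequisite compiler and MPI modules listed in the second column are loaded first, followed by the Gromacs module itself. A minimal interactive sketch (the 2016.6 Broadwell build is used only as an example; pick the row that matches your partition):

<code bash>
# Sketch: list the available Gromacs builds, then load one together with
# its prerequisite modules (names taken from the table above).
module avail gromacs
module load gnu/5.4.0 openmpi/1.10.7
module load gromacs/2016.6-bdw
gmx --version    # quick sanity check that the wrapper is on the PATH
</code>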
  
<note>
If required, the Plumed and CUDA modules are automatically loaded.
</note>
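For a GPU-enabled build, for instance, a single module load should therefore pull in the matching Plumed and CUDA modules (a sketch using the 2016.6 GPU build from the table above; ''module list'' shows what was actually loaded):

<code bash>
# Sketch: load a GPU build; the corresponding Plumed and CUDA modules
# are expected to be loaded automatically (see the note above).
module load gnu/5.4.0 openmpi/1.10.7
module load gromacs/2016.6-gpu
module list
</code>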
  
  
<note>
Loading the gromacs/4.5.7-bdw or gromacs/4.5.7-cpu modules provides the additional tools energy2bfac, g_mmpbsa and trj_cavity.
</note>
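After loading one of these modules, a quick check that the extra tools are actually on the PATH (a sketch; ''-h'' only prints each tool's help text):

<code bash>
# Sketch: confirm the additional analysis tools shipped with this build.
module load gnu openmpi gromacs/4.5.7-cpu
which energy2bfac g_mmpbsa trj_cavity
g_mmpbsa -h
</code>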
  
  
<note>
Loading the gromacs/5.1.4-bdw or gromacs/5.1.4-cpu modules provides the additional tools energy2bfac, g_mmpbsa and trj_cavity.
</note>
  
  
<note>
Loading the gromacs/5.1.5-bdw or gromacs/5.1.5-cpu modules provides the additional tools energy2bfac, g_mmpbsa and trj_cavity.
</note>
  
  
<code bash mdrun-omp.sh>
#!/bin/bash --login
#SBATCH --job-name=mdrun_omp
#SBATCH --output=%x.o%j
# ...
#SBATCH --mem=120G
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1

module load gnu
# ...
gmx mdrun -deffnm topology -pin on
</code>
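Submission and monitoring follow the usual Slurm workflow (a sketch; ''<jobid>'' is the number printed by ''sbatch''):

<code bash>
# Sketch: submit the script and follow the job output.
sbatch mdrun-omp.sh
squeue -u $USER
tail -f mdrun_omp.o<jobid>    # file name comes from --job-name and --output=%x.o%j
</code>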
  
=== Gromacs 2016.6 ===
  
<code bash mdrun-omp.sh>
#!/bin/bash --login
#SBATCH --job-name=mdrun_omp
#SBATCH --output=%x.o%j
# ...
#SBATCH --mem=120G
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1

module load gnu
# ...
gmx mdrun -deffnm topology -pin on -plumed plumed.dat
</code>
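The ''-plumed plumed.dat'' option expects a PLUMED input file in the working directory. A minimal sketch, created here with a here-document (the collective variable, atom indices and output file name are placeholders, not taken from this guide):

<code bash>
# Sketch: a minimal PLUMED input that monitors one interatomic distance.
cat > plumed.dat <<'EOF'
d1: DISTANCE ATOMS=1,10
PRINT ARG=d1 STRIDE=500 FILE=COLVAR
EOF
</code>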
  
==== Job Gromacs MPI OpenMP ====
  
=== Gromacs 5.1.5 ===
  
Script ''mdrun-mpi-omp.sh'' to exclusively request one or more nodes (tested with --nodes=1 and --nodes=2) and start multiple MPI processes (tested with --ntasks-per-node=6 and --ntasks-per-node=8; the number of OpenMP threads will be calculated automatically if --cpus-per-task is not explicitly set):
  
<code bash mdrun-mpi-omp.sh>
#!/bin/bash --login
#SBATCH --job-name=mdrun_mpi_omp
#SBATCH --output=%x.o%j
# ...
#SBATCH --mem=120G
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1

module load gnu
module load openmpi
module load gromacs/5.1.5-cpu

# Set OMP_NUM_THREADS to the same value as --cpus-per-task with a fallback in
# ...
</code>
  
<code bash mdrun-mpi-omp.sh>
#!/bin/bash --login
#SBATCH --job-name=mdrun_mpi_omp
#SBATCH --output=%x.o%j
# ...
#SBATCH --mem=120G
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1

module load gnu
# ...
</code>
  
<code bash mdrun-mpi-omp-9x3.sh>
#!/bin/bash --login
#SBATCH --job-name=mdrun_mpi_omp_9x3
#SBATCH --output=%x.o%j
# ...
#SBATCH --time=0-24:00:00
#SBATCH --mem=80G
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1

module load gnu7
module load openmpi3
module load gromacs/2019.6-cpu

module list
# ...
</code>
<note>
The Plumed module is automatically loaded.
</note>
  
  
<code bash mdrun-mpi-omp-9x3.sh>
#!/bin/bash --login
#SBATCH --job-name=mdrun_mpi_omp_9x3
#SBATCH --output=%x.o%j
# ...
#SBATCH --time=0-24:00:00
#SBATCH --mem=80G
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1

module load gnu8
module load openmpi3
module load gromacs/2021.7-cpu

module list
# ...
mpirun -np 9 gmx mdrun -deffnm meta -pin on -plumed plumed.dat
</code>
  
==== Job Gromacs OpenMP GPU ====
  
<code bash mdrun-omp-cuda.sh>
#!/bin/bash --login
#SBATCH --job-name=mdrun_omp_cuda
#SBATCH --output=%x.o%j
# ...
#SBATCH --mem=16G
#SBATCH --partition=gpu
#SBATCH --qos=gpu
#SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1

module load gnu
# ...
gmx mdrun -deffnm topology -pin on -dlb auto -plumed plumed.dat
</code>
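To verify which GPU Slurm assigned to the job, a couple of lines can be added after the module commands (a sketch; it assumes the GPU nodes expose ''nvidia-smi'' and that Slurm sets ''CUDA_VISIBLE_DEVICES''):

<code bash>
# Sketch: report the GPU(s) visible to this job.
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
nvidia-smi --query-gpu=name,memory.total --format=csv
</code>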
  
==== Job Gromacs MPI GPU ====
  
<code bash mdrun-mpi-omp-gpu-4x2.sh>
#!/bin/bash --login
#SBATCH --job-name=mdrun_mpi_omp_gpu
#SBATCH --output=%x.o%j
# ...
#SBATCH --mem=16G
#SBATCH --partition=gpu
#SBATCH --qos=gpu
#SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1

module load gnu7
# ...
mpirun -np 4 gmx mdrun -deffnm meta -dlb auto -plumed plumed.dat
</code>
  
=== Gromacs 2021.7 ===
  
<code bash mdrun-mpi-omp-gpu-4x2.sh>
#!/bin/bash --login
#SBATCH --job-name=mdrun_mpi_omp_gpu
#SBATCH --output=%x.o%j
# ...
#SBATCH --mem=16G
#SBATCH --partition=gpu
#SBATCH --qos=gpu
#SBATCH --account=<account>

shopt -q login_shell || exit 1
test -n "$SLURM_NODELIST" || exit 1

module load gnu8
# ...
</code>