
Differences

These are the differences between the selected revision and the current version of the page.

Previous revision: calcoloscientifico:userguide:gromacs [20/08/2025 17:36] – [Job Gromacs GPU] fabio.spataro
Current version: calcoloscientifico:userguide:gromacs [20/08/2025 18:00] – [Job Gromacs CPU] fabio.spataro
Line 523:
 ==== Examples ====
  
-The following examples demonstrate using the NGC GROMACS container to run the ''STMV'' benchmark.
+The following examples demonstrate using the [[https://catalog.ngc.nvidia.com/orgs/hpc/containers/gromacs|NGC GROMACS]] container to run the ''STMV'' benchmark.
  
 Download the ''STMV'' benchmark:
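
The download command itself falls outside the hunks shown here. A minimal sketch, assuming a generic archive layout (the URL below is a placeholder, not the one from the elided part of the page):

<code bash>
# Placeholder URL: the real location is given in the part of the guide not shown in this diff.
wget https://example.org/stmv.tar.gz
tar xzf stmv.tar.gz   # unpacks the STMV input files (e.g. a .tpr run input)
</code>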
Line 535:
 ==== Job Gromacs CPU ====
  
-Script ''slurm-gromacs-2025.1-cpu.sh'' to request one node and start 8 MPI thread and 6 OpenMP threads for each MPI thread (48 threads overall) and run the ''STMV'' benchmark:
+Script ''slurm-gromacs-2025.1-cpu.sh'' to request one node, start 8 thread-MPI tasks with 6 OpenMP threads per task (48 threads overall), and run the ''STMV'' benchmark:
  
 <code bash slurm-gromacs-2025.1-cpu.sh>
Line 546:
 #SBATCH --cpus-per-task=6
 #SBATCH --time=0-04:00:00
-#SBATCH --mem=32G
+#SBATCH --mem=16G
 #SBATCH --partition=cpu_guest
 #SBATCH --qos=cpu_guest
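
For context, a minimal sketch of the ''mdrun'' invocation such a CPU script typically ends with, matching ''--ntasks-per-node=8'' and ''--cpus-per-task=6'' above (the container image and input file names are assumptions, since the script body is elided from this diff):

<code bash>
# 8 thread-MPI ranks x 6 OpenMP threads each = 48 threads, as requested above.
# gromacs.sif and topol.tpr are hypothetical names.
apptainer exec gromacs.sif gmx mdrun -ntmpi 8 -ntomp 6 -s topol.tpr
</code>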
Line 581:
 ==== Job Gromacs GPU ====
  
-Script ''slurm-gromacs-2025.1-gpu.sh'' to request one node and start 8 MPI thread and 6 OpenMP threads for each MPI process (48 threads thread) with 8 CUDA GPU's and run the ''STMV'' benchmark:
+Script ''slurm-gromacs-2025.1-gpu.sh'' to request one node, start 8 thread-MPI tasks with 6 OpenMP threads per task (48 threads overall) on 8 CUDA GPUs, and run the ''STMV'' benchmark:
  
 <code bash slurm-gromacs-2025.1-gpu.sh>
Line 591:
 #SBATCH --ntasks-per-node=8
 #SBATCH --cpus-per-task=6
-#SBATCH --time=0-04:00:00
+#SBATCH --time=0-00:30:00
 #SBATCH --mem=8G
 #SBATCH --gres=gpu:a5000_mm1:8
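
Likewise, a sketch of a GPU-offloaded ''mdrun'' call consistent with the 8 GPUs requested via ''--gres'' (these are standard GROMACS offload flags; image and input names are hypothetical, and the elided script may use different options):

<code bash>
# Offload non-bonded, PME and update work to the 8 requested GPUs:
# one thread-MPI rank per GPU, 6 OpenMP threads per rank.
apptainer exec --nv gromacs.sif \
    gmx mdrun -ntmpi 8 -ntomp 6 -nb gpu -pme gpu -npme 1 -update gpu -s topol.tpr
</code>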