  
<html><center><b>
<font face="Arial" color="#606060" size="2">
Scientific Computing of the University of Parma, <em>in collaboration with INFN</em>
<br><font size="5"> HPC Benchmark </font></font> </b></center><p></html>
  
  
  
<code bash>
#!/bin/sh
# Two processes on the same host, same socket
  
#SBATCH --output=%x.o%j
  
#SBATCH  --account=T_HPC18A
##SBATCH  --reservation corso_hpc18a_7
  
## Print the list of the assigned resources
echo "#SLURM_JOB_NODELIST: $SLURM_JOB_NODELIST"
  
module load intel intelmpi
  
-CMD="mpirun   IMB-MPI1  pingpong  -off_cache -1"+CMD="mpirun   /hpc/group/T_HPC18A/bin/IMB-MPI1  pingpong  -off_cache -1"
 echo "# $CMD" echo "# $CMD"
-eval $CMD > IMB-N1.dat+eval $CMD > IMB-N1-BDW.dat
 </code> </code>
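
The script above can be submitted like any other batch job; a short sketch (the file name ''imb-n1-bdw.slurm'' is only illustrative, not part of the original page):

<code bash>
# Submit the single-node BDW pingpong job and inspect the result file when it finishes
sbatch imb-n1-bdw.slurm
squeue -u $USER            # wait until the job leaves the queue
head -40 IMB-N1-BDW.dat    # IMB pingpong table: latency and bandwidth per message size
</code>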
  
  
<code bash>
#!/bin/sh
# Two processes on the same host, same socket
 echo "#SLURM_JOB_NODELIST: $SLURM_JOB_NODELIST" echo "#SLURM_JOB_NODELIST: $SLURM_JOB_NODELIST"
  
module load intel intelmpi
  
CMD="mpirun /hpc/group/T_HPC18A/bin/IMB-MPI1.knl pingpong -off_cache -1"
 echo "# $CMD" echo "# $CMD"
 eval $CMD > IMB-N1-KNL.dat eval $CMD > IMB-N1-KNL.dat
</code>

<code bash>
 echo "#SLURM_JOB_NODELIST: $SLURM_JOB_NODELIST" echo "#SLURM_JOB_NODELIST: $SLURM_JOB_NODELIST"
  
module load intel intelmpi
  
mpirun hostname
  
-CMD="mpirun   IMB-MPI1  pingpong  -off_cache -1"+CMD="mpirun   /hpc/group/T_HPC18A/bin/IMB-MPI1  pingpong  -off_cache -1"
 echo "# $CMD" echo "# $CMD"
-eval $CMD > IMB-N2.dat+eval $CMD > IMB-N2-opa.dat 

# Force TCP for the inter-node run (instead of Omni-Path)
export I_MPI_FABRICS=shm:tcp

CMD="mpirun /hpc/group/T_HPC18A/bin/IMB-MPI1 pingpong -off_cache -1"
echo "# $CMD"
eval $CMD > IMB-N2-tcp.dat
</code>
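
The two output files can be compared directly; a short sketch that extracts the peak pingpong bandwidth from each run, assuming the standard IMB-MPI1 table layout (bandwidth in MB/s in the fourth column of each data row):

<code bash>
# Peak pingpong bandwidth for the Omni-Path and TCP runs
for f in IMB-N2-opa.dat IMB-N2-tcp.dat; do
    awk '/^ *[0-9]/ && $4 > max { max = $4 } END { printf "%s: %.0f MB/s\n", FILENAME, max }' "$f"
done
</code>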
Results 2018 (Intel compiler):
  
{{:calcoloscientifico:imb.png?200|}}

Results 2017 (Intel compiler):
  
{{:calcoloscientifico:imb-mpi1-n1.png?200|}}
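
Plots like the ones above can be regenerated from the ''.dat'' files; a minimal gnuplot-from-bash sketch (file names taken from the two-node job above, column layout assumed to be the standard IMB pingpong table):

<code bash>
# Bandwidth vs message size for the Omni-Path and TCP runs
gnuplot <<'EOF'
set terminal png size 800,600
set output "imb-n2.png"
set logscale x 2
set xlabel "message size (bytes)"
set ylabel "bandwidth (MB/s)"
plot "IMB-N2-opa.dat" using 1:4 with linespoints title "OPA", \
     "IMB-N2-tcp.dat" using 1:4 with linespoints title "TCP"
EOF
</code>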
===== mpi_latency, mpi_bandwidth =====
  
  
  module load intel intelmpi
  #module load gnu openmpi
  
  cp /hpc/share/samples/mpi/mpi_latency.c   .
  cp /hpc/share/samples/mpi/mpi_bandwidth.c .
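
The two samples can then be compiled with an MPI compiler wrapper; a minimal sketch, assuming the Intel MPI wrapper ''mpiicc'' (with GNU/OpenMPI use ''mpicc'' instead):

  mpiicc mpi_latency.c   -o mpi_latency
  mpiicc mpi_bandwidth.c -o mpi_bandwidth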
  
  
Script ''mpi_lat_band.slurm'':
  
<code bash>
#SBATCH --partition=bdw
  
### 2 nodes (OPA or TCP)
#SBATCH  -N2 --tasks-per-node=1
### 1 node (SHM)
##SBATCH  -N1 --tasks-per-node=2
#SBATCH  --exclusive

#SBATCH  --account=T_2018_HPCCALCPAR
##SBATCH  --reservation hpcprogpar_20190517
  
## Print the list of the assigned resources
  
export  I_MPI_FABRICS=shm:tcp
mpirun  mpi_latency    > mpi_latency_TCP.dat
mpirun  mpi_bandwidth  > mpi_bandwidth_TCP.dat
</code>
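
The job is submitted with ''sbatch''; once it completes, the TCP result files written by the script can be inspected, for example:

<code bash>
# Submit the latency/bandwidth job and look at the TCP results
sbatch mpi_lat_band.slurm
squeue -u $USER                  # wait until the job finishes
tail -n 5 mpi_latency_TCP.dat mpi_bandwidth_TCP.dat
</code>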
  
Results 2019 (Intel compiler):
  
^  Latency (µs)  ^^^  Bandwidth (MB/s)  ^^^
^  SHM  ^  OPA  ^  TCP  ^  SHM  ^  OPA  ^  TCP  ^
|  3  |  3  |  86  |  7170  |  6600  |  117  |
===== NBODY =====
  
  
  