===== Crystal =====

==== Crystal14 serial job ====

Script ''slurm-runcry14.sh'':
<code bash slurm-runcry14.sh>
#!/bin/bash
#SBATCH --job-name=runcry14
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --partition=vrt
#SBATCH --qos=vrt
#SBATCH --mem=8G
#SBATCH --time=0-00:30:00
##SBATCH --account=<account>

module load crystal/14/1.0.4

runcry14 test00 test00
</code>

==== Crystal14 MPI job ====

Script ''slurm-runmpi14.sh'':

<code bash slurm-runmpi14.sh>
#!/bin/bash
#SBATCH --job-name=runmpi14
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --mem=8G
#SBATCH --time=0-00:30:00
##SBATCH --account=<account>

module load crystal/14/1.0.4

srun -n $SLURM_NTASKS hostname -s | sort > machines.LINUX
uniq machines.LINUX > nodes.par

runmpi14 $SLURM_NTASKS test00 test00

rm -f machines.LINUX
rm -f nodes.par
</code>
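The two lines before ''runmpi14'' build the node list that the MPI wrapper reads: ''sort'' groups the hostnames reported by each task, and ''uniq'' collapses them to one entry per node. A minimal simulation of that step, with made-up hostnames (''wn01'', ''wn02'' are illustrative, not real cluster nodes) standing in for the ''srun ... hostname -s'' output:

```shell
# Stand-in for `srun -n $SLURM_NTASKS hostname -s`:
# four tasks spread over two (hypothetical) nodes, wn01 and wn02.
printf 'wn02\nwn01\nwn02\nwn01\n' | sort > machines.LINUX

# Collapse adjacent duplicates: one line per distinct node.
uniq machines.LINUX > nodes.par

cat nodes.par
```

''uniq'' only merges adjacent duplicate lines, which is why the ''sort'' step is required first.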

==== Crystal23 serial job ====

Script ''slurm-runcry23.sh'':

<code bash slurm-runcry23.sh>
#!/bin/bash
#SBATCH --job-name=runcry23
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --partition=vrt
#SBATCH --qos=vrt
#SBATCH --mem=8G
#SBATCH --time=0-00:30:00
##SBATCH --account=<ACCOUNT>

module load gnu8 openmpi4
module load crystal/23/1.0.1

runcry23 test00 test00
</code>

==== Crystal23 OpenMP job ====

Script ''slurm-runcry23OMP.sh'':

<code bash slurm-runcry23OMP.sh>
#!/bin/bash --login
#SBATCH --job-name=runcry23OMP
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=16
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --mem=8G
#SBATCH --time=0-00:30:00
##SBATCH --account=<ACCOUNT>

module load gnu8 openmpi4
module load crystal/23/1.0.1-omp

runcry23OMP test00 test00
</code>
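The page does not say how ''runcry23OMP'' picks its thread count; OpenMP binaries commonly read ''OMP_NUM_THREADS''. A hedged sketch of tying the thread count to the ''--cpus-per-task=16'' allocation above (here ''SLURM_CPUS_PER_TASK'' is set by hand to mimic the job environment, where Slurm exports it automatically):

```shell
# Mimic the job environment: inside a real job Slurm exports this itself.
SLURM_CPUS_PER_TASK=16

# One OpenMP thread per allocated CPU (assumption: the wrapper does not
# already set this on its own).
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
echo "$OMP_NUM_THREADS"
```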

==== Crystal23 MPI job ====

Script ''slurm-runPcry23.sh'':

<code bash slurm-runPcry23.sh>
#!/bin/bash --login
#SBATCH --job-name=runPcry23
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --mem=8G
#SBATCH --time=0-00:30:00
##SBATCH --account=<ACCOUNT>

module load gnu8 openmpi4
module load crystal/23/1.0.1

srun -n $SLURM_NTASKS hostname -s | sort > machines.LINUX
uniq machines.LINUX > nodes.par

runPcry23 $SLURM_NTASKS test00 test00

rm -f machines.LINUX
rm -f nodes.par
</code>

==== Crystal23 MPI job with OpenMP ====

Script ''slurm-runPcry23OMP.sh'':

<code bash slurm-runPcry23OMP.sh>
#!/bin/bash --login
#SBATCH --job-name=runPcry23OMP
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
#SBATCH --cpus-per-task=4
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --mem=8G
#SBATCH --time=0-00:30:00
##SBATCH --account=<ACCOUNT>

module load gnu8 openmpi4
module load crystal/23/1.0.1-omp

srun -n $SLURM_NTASKS hostname -s | sort > machines.LINUX
uniq machines.LINUX > nodes.par

runPcry23OMP $SLURM_NTASKS test00 test00

rm -f machines.LINUX
rm -f nodes.par
</code>
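As a sanity check on the hybrid request above (2 nodes, 2 MPI tasks per node, 4 threads per task), the task and core counts work out as follows:

```shell
# Resources requested by slurm-runPcry23OMP.sh above:
nodes=2
ntasks_per_node=2
cpus_per_task=4

ntasks=$(( nodes * ntasks_per_node ))       # what $SLURM_NTASKS will be
total_cores=$(( ntasks * cpus_per_task ))   # cores reserved in total
echo "$ntasks MPI tasks, $total_cores cores"
```

So ''runPcry23OMP'' is launched with 4 MPI tasks while the allocation spans 16 cores; the remaining factor of 4 is taken up by OpenMP threads within each task.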
  
calcoloscientifico/userguide/crystal.txt · Last modified: 08/04/2024 18:11 by fabio.spataro