Crystal
Crystal14 serial job
Script slurm-runcry14.sh:

- slurm-runcry14.sh
#!/bin/bash
#SBATCH --job-name=runcry14
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --partition=vrt
#SBATCH --qos=vrt
#SBATCH --mem=8G
#SBATCH --time=0-00:30:00
##SBATCH --account=<account>

module load crystal/14/1.0.4

runcry14 test00 test00
Crystal14 MPI job
Script slurm-runmpi14.sh:

- slurm-runmpi14.sh
#!/bin/bash
#SBATCH --job-name=runmpi14
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --mem=8G
#SBATCH --time=0-00:30:00
##SBATCH --account=<account>

module load crystal/14/1.0.4

srun -n $SLURM_NTASKS hostname -s | sort > machines.LINUX
uniq machines.LINUX > nodes.par

runmpi14 $SLURM_NTASKS test00 test00

rm -f machines.LINUX
rm -f nodes.par
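The srun/uniq pair in the script prepares the two node files before runmpi14 is called: machines.LINUX holds one hostname per MPI task (sorted), and nodes.par collapses the sorted list to one line per node. The transformation can be seen with hypothetical hostnames wn01 and wn02:

```shell
# machines.LINUX as srun would produce it on 2 nodes x 4 tasks:
# one sorted hostname per MPI task
printf 'wn01\nwn01\nwn01\nwn01\nwn02\nwn02\nwn02\nwn02\n' > machines.LINUX

# nodes.par keeps each node once (the input is sorted, so uniq suffices)
uniq machines.LINUX > nodes.par

cat nodes.par
# wn01
# wn02

rm -f machines.LINUX nodes.par
```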
Crystal23 serial job
Script slurm-runcry23.sh:

- slurm-runcry23.sh
#!/bin/bash
#SBATCH --job-name=runcry23
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --partition=vrt
#SBATCH --qos=vrt
#SBATCH --mem=8G
#SBATCH --time=0-00:30:00
##SBATCH --account=<ACCOUNT>

module load gnu8 openmpi4
module load crystal/23/1.0.1

runcry23 test00 test00
Crystal23 MPI job
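A possible slurm-runmpi23.sh, modeled on the Crystal14 MPI script above: the module lines follow the Crystal23 serial script, while the runmpi23 wrapper name and its arguments are an assumption to be checked against the installed Crystal23 version.

```shell
#!/bin/bash
#SBATCH --job-name=runmpi23
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --mem=8G
#SBATCH --time=0-00:30:00
##SBATCH --account=<ACCOUNT>

module load gnu8 openmpi4
module load crystal/23/1.0.1

# Assumed wrapper name, analogous to runmpi14 above
runmpi23 $SLURM_NTASKS test00 test00
```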
Crystal23 MPI job with OpenMP
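A hybrid MPI/OpenMP variant additionally reserves cores per MPI task with --cpus-per-task and exports OMP_NUM_THREADS from the Slurm allocation. This is a sketch under the same assumptions as above (runmpi23 wrapper name, module lines from the Crystal23 serial script); it also assumes an OpenMP-enabled Crystal23 build.

```shell
#!/bin/bash
#SBATCH --job-name=runmpi23-omp
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=2
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --mem=8G
#SBATCH --time=0-00:30:00
##SBATCH --account=<ACCOUNT>

module load gnu8 openmpi4
module load crystal/23/1.0.1

# One OpenMP thread per core reserved with --cpus-per-task
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Assumed wrapper name; requires an OpenMP-enabled Crystal23 binary
runmpi23 $SLURM_NTASKS test00 test00
```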
calcoloscientifico/userguide/crystal.1712591452.txt.gz · Last modified: 08/04/2024 17:50 by fabio.spataro