calcoloscientifico:userguide:crystal

Differences

These are the differences between the selected revision and the current version of the page.

Previous revision: calcoloscientifico:userguide:crystal [13/04/2022 19:42] fabio.spataro (created)
Current version: calcoloscientifico:userguide:crystal [08/04/2024 18:11] fabio.spataro

===== Crystal =====

In the current version the page contains only links to the dedicated Crystal14 and Crystal23 pages:

==== Crystal14 ====

[[calcoloscientifico:userguide:crystal:14|Crystal14]]

==== Crystal23 ====

[[calcoloscientifico:userguide:crystal:23|Crystal23]]

The previous revision documented the job scripts directly; the removed content follows.

==== Crystal14 serial job ====

Script ''slurm-runcry14.sh'':

<code bash slurm-runcry14.sh>
#!/bin/bash
#SBATCH --job-name=runcry14
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --partition=vrt
#SBATCH --mem=8G
#SBATCH --time=0-00:30:00
#SBATCH --account=<account>

module load crystal/14/1.0.4

runcry14 test00 test00
</code>
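
The script can be submitted with ''sbatch'' from the directory containing the Crystal input files. A minimal sketch, using only standard SLURM commands (the job id is printed by ''sbatch''):

<code bash>
# Submit the serial job
sbatch slurm-runcry14.sh

# Check the state of your jobs
squeue -u $USER

# Standard output and error end up in runcry14.o<jobid> and runcry14.e<jobid>
# (from --job-name, --output=%x.o%j and --error=%x.e%j above)
</code>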

==== Crystal14 MPI job ====

Script ''slurm-runmpi14.sh'':

<code bash slurm-runmpi14.sh>
#!/bin/bash
#SBATCH --job-name=runmpi14
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --partition=cpu
#SBATCH --mem=8G
#SBATCH --time=0-00:30:00
#SBATCH --account=<account>

module load crystal/14/1.0.4

#export CRY14P_MACH="$PWD"                     # defined in the module
#export CRY14_SCRDIR="/node/$USER/crystal/14"  # defined differently in the module

# Build the list of nodes taking part in the parallel run
srun -n$SLURM_NTASKS hostname -s | sort > machines.LINUX
uniq machines.LINUX > nodes.par

runmpi14 $SLURM_NTASKS test00 test00

rm -f machines.LINUX
rm -f nodes.par
</code>
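
To scale the run, only the resource request needs to change; ''runmpi14'' receives the total number of MPI processes through ''$SLURM_NTASKS'' (nodes × tasks per node). A sketch of the lines to edit in ''slurm-runmpi14.sh'', with purely illustrative values:

<code bash>
# Example: 4 nodes with 8 MPI processes each, i.e. SLURM_NTASKS = 32
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=8
# The rest of slurm-runmpi14.sh stays unchanged.
</code>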

The previous revision also kept an older version of the documentation as a DokuWiki comment in the page source; it is reproduced below.

===== Crystal14 job (old version) =====

==== Crystal14 MPI job (old version) ====

Script ''crystal14.sh'' for submitting the MPI version of Crystal14. It requests 4 nodes with 8 cores and starts 8 MPI processes per node:

<code bash>
#!/bin/sh

#SBATCH --job-name="crystal14"        #< Job name
#SBATCH --partition=cpu               #< Resource request
#SBATCH --nodes=4
#SBATCH --ntasks=8
#SBATCH --time=0-168:00:00

#< Charge resources to account
#SBATCH --account=<account>
#SBATCH --mem=64G

#< Input files directory
CRY14_INP_DIR='input'

#< Output files directory
CRY14_OUT_DIR='output'

#< Input files prefix
CRY14_INP_PREFIX='test'

#< Input wave function file prefix
CRY14_F9_PREFIX='test'

source /hpc/share/applications/crystal14
</code>

We recommend creating a folder for each simulation; each folder must contain a copy of the ''crystal14.sh'' script.
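
A minimal sketch of this setup (the folder name ''sim01'' is purely illustrative):

<code bash>
# Create a folder for the simulation and copy the submission script into it
mkdir sim01
cp crystal14.sh sim01/
cd sim01

# Put the Crystal input files in the 'input' subfolder (see the note below)
mkdir -p input
</code>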

<note>
The script defines four variables:

  * **CRY14_INP_DIR**: the input file or files must be in the ''input'' subfolder of the current directory. To use the current directory instead, comment out the line defining CRY14_INP_DIR; to use a different subfolder, change its value.
  * **CRY14_OUT_DIR**: the output files are created in the ''output'' subfolder of the current directory. To use the current directory instead, comment out the line defining CRY14_OUT_DIR; to use a different subfolder, change its value.
  * **CRY14_INP_PREFIX**: the input file or files have a prefix that must match the value of CRY14_INP_PREFIX. The string ''test'' is purely indicative and does not correspond to a real case.
  * **CRY14_F9_PREFIX**: the input wave function file, with extension ''F9'', is the result of a previous run; its prefix must match the value of CRY14_F9_PREFIX. The string ''test'' is purely indicative and does not correspond to a real case.

A customization sketch is shown after this note.

The ''crystal14.sh'' script in turn includes the system script ''/hpc/software/bin/hpc-pbs-crystal14'', which cannot be modified by the user.
</note>
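
For example, to take the input files from the current directory and use a prefix other than ''test'', only the variable block of ''crystal14.sh'' needs to change. A sketch with purely illustrative names:

<code bash>
#< Input files are taken from the current directory
#  (the CRY14_INP_DIR definition is commented out)
#CRY14_INP_DIR='input'

#< Output files go to a custom subfolder
CRY14_OUT_DIR='results'

#< Prefix of the input files (illustrative name)
CRY14_INP_PREFIX='mgo'

#< Prefix of the wave function file mgo.F9 from a previous run (illustrative name)
CRY14_F9_PREFIX='mgo'

source /hpc/share/applications/crystal14
</code>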

== Submission of the shell script ==

Navigate to the folder containing ''crystal14.sh'' and run the following command to submit the script to the job scheduler:

<code>
sbatch ./crystal14.sh
</code>

== Analysis of the files produced by Crystal14 during job execution ==

During job execution a temporary ''tmp'' folder is created, containing two files:

<code>
nodes.par
machines.LINUX
</code>

The ''nodes.par'' file contains the names of the nodes taking part in the parallel computation.

The ''machines.LINUX'' file contains the same node names, each repeated as many times as the number of MPI processes started on that node.
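
As a purely illustrative example, for a hypothetical job that started two MPI processes on each of the nodes ''wn81'' and ''wn82'', the two files would look like this:

<code>
$ cat machines.LINUX
wn81
wn81
wn82
wn82

$ cat nodes.par
wn81
wn82
</code>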

To locate the temporary folders produced by Crystal14 during job execution, run the following command directly from the login node:

<code>
eval ls -d1 /hpc/node/wn{$(seq -s, 81 95)}/$USER/crystal/* 2>/dev/null
</code>

<note>
Be careful: the previous command contains the names of the currently available compute nodes. This list, and the corresponding command, may change in the future.
</note>

To check the contents of the files produced by Crystal14 while the job is running, the user can move into one of the folders listed by the previous command.

At the end of the job the two files ''machines.LINUX'' and ''nodes.par'' are deleted. The temporary ''tmp'' folder is deleted only if it is empty.

It is therefore not necessary to log in via SSH to the nodes taking part in the computation to check the contents of the files produced by Crystal14.
  