===== Relion =====

=== Short guide to using Relion (compiled with GNU tools) ===

Connect to the server [[calcoloscientifico:userguide:gui|gui.hpc.unipr.it]] with

  ssh -X nome.cognome@gui.hpc.unipr.it

(on Linux) or with MobaXterm or Remote Desktop (on Windows).

Open a terminal and enter the following commands:

<code>
[user@ui03 ~]$ newgrp <group>
[user@ui03 ~]$ cd $GROUP
[user@ui03 <group>]$ cd $USER
[user@ui03 <user>]$ mkdir -p relion/test
[user@ui03 <user>]$ cd relion/test
[user@ui03 test]$ module load gnu8 openmpi3 relion
[user@ui03 test]$ module list

Currently Loaded Modules:
  1) gnu8/8.3.0         3) ucx/1.14.0       5) ctffind/4.1.13   7) chimera/1.14   9) relion/4.0.1-cpu
  2) libfabric/1.13.1   4) openmpi3/3.1.6   6) resmap/1.1.4     8) topaz/0.2.5

[user@ui03 test]$ echo $RELION_QSUB_TEMPLATE
/hpc/share/applications/gnu8/openmpi3/relion/4.0.1/cpu/bin/sbatch.sh
[user@ui03 test]$ relion
</code>

We assume that the directory ''/hpc/group/<group>/<user>/relion/test'' contains an example case to test how Relion works. Replace ''<group>'' with the name of your group and ''<user>'' with the name of your user.

The file ''/hpc/share/applications/gnu8/openmpi3/relion/4.0.1/cpu/bin/sbatch.sh'' is the script used to submit the job to the SLURM queue manager.

=== GPU processing ===

If you intend to launch a processing step that uses GPU acceleration, in

  * ''RELION/2D classification/Compute''
  * ''RELION/3D initial model/Compute''
  * ''RELION/3D classification/Compute''
  * ''RELION/3D auto refine/Compute''
  * ''RELION/3D multi-body/Compute''

set

| Use GPU acceleration? | Yes |
| Which GPUs to use: | $GPUID |

{{ :calcoloscientifico:cluster:softwareapplicativo:relion-gpu-compute.jpg?direct&200 |}}

Set the various parameters in

  * ''RELION/2D classification/Running''
  * ''RELION/3D initial model/Running''
  * ''RELION/3D classification/Running''
  * ''RELION/3D auto refine/Running''
  * ''RELION/3D multi-body/Running''

In particular:

| Submit to queue? | Yes |
| Queue name: | **gpu** |
| Queue submit command: | sbatch |
| Total run time: | D-HH:MM:SS (estimated) |
| Charge resources used to: | |
| Real memory required per node: | <n>G (estimated) |
| Generic consumable resources: | **gpu:<type>:<n>** (from 1 to 6) |
| Additional (extra5) SBATCH directives: | --nodes=<n> (optional) |
| Additional (extra6) SBATCH directives: | --ntasks-per-node=<n> (optional) |
| Additional (extra7) SBATCH directives: | --reservation=<name> (optional) |
| Standard submission script: | /hpc/share/applications/gnu8/openmpi3/relion/4.0.1/**gpu**/bin/sbatch.sh |
| Current job: | |
| Additional arguments: | (optional) |

{{ :calcoloscientifico:cluster:softwareapplicativo:relion-gpu-running-run.jpg?direct&200 |}}

Submit the job with "Run!". In the terminal window the following message appears:

  Submitted batch job <jobid>

Check the status of the queues with the command (to be launched in a second terminal window):

  hpc-squeue -u $USER

Cancel the job with "Delete":

{{ :calcoloscientifico:cluster:softwareapplicativo:relion-job-delete.jpg?direct&200 |}}

Then cancel the job in the queue (Relion does not do this automatically):

  scancel <jobid>

== Submit to the gpu_guest partition ==

To submit the job to the ''gpu_guest'' partition:

| Queue name: | **gpu_guest** |
| Standard submission script: | /hpc/share/applications/gnu8/openmpi3/relion/4.0.1/**gpu**/bin/sbatch.sh |
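For orientation, the submission that Relion performs behind the scenes is an ordinary SLURM batch job built from the "Standard submission script". A minimal hand-written equivalent for the ''gpu'' partition might look as follows. This is only a sketch: the partition name and GRES syntax come from the tables above, while the job name, resource numbers, STAR file, output directory, and the ''relion_refine_mpi'' command line are illustrative assumptions.

<code bash>
#!/bin/bash
#SBATCH --job-name=relion-test
#SBATCH --partition=gpu          # "Queue name" in the Running tab
#SBATCH --gres=gpu:2             # "Generic consumable resources" (1 to 6 GPUs)
#SBATCH --ntasks=3               # "Number of MPI processors"
#SBATCH --cpus-per-task=4        # "Number of threads" per MPI process
#SBATCH --time=0-02:00:00        # "Total run time", D-HH:MM:SS
#SBATCH --mem=32G                # "Real memory required per node"

module load gnu8 openmpi3 relion

# Illustrative command only: the GUI assembles the real relion_refine_mpi
# call from the job-type tabs and submits it through the standard script.
mpirun relion_refine_mpi --i particles.star --o Class3D/job001/run --gpu ""
</code>

In practice you never write this file yourself: filling in the "Running" tab and pressing "Run!" makes Relion generate an equivalent script and pass it to ''sbatch''.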
=== CPU processing ===

== Submit to the cpu partition ==

Some of the commands launched by Relion support CPU acceleration, which is activated automatically.

If you intend to launch a processing step that uses the CPU only, in

  * ''RELION/2D classification/Compute''
  * ''RELION/3D initial model/Compute''
  * ''RELION/3D classification/Compute''
  * ''RELION/3D auto refine/Compute''
  * ''RELION/3D multi-body/Compute''

set

| Use GPU acceleration? | No |

{{ :calcoloscientifico:cluster:softwareapplicativo:relion-cpu-compute.jpg?direct&200 |}}

Set the various parameters in

  * ''RELION/2D classification/Running''
  * ''RELION/3D initial model/Running''
  * ''RELION/3D classification/Running''
  * ''RELION/3D auto refine/Running''
  * ''RELION/3D multi-body/Running''

In particular:

| Submit to queue? | Yes |
| Queue name: | **cpu** |
| Queue submit command: | sbatch |
| Total run time: | D-HH:MM:SS (estimated) |
| Charge resources used to: | |
| Real memory required per node: | <n>G (estimated) |
| Generic consumable resources: | |
| Additional (extra5) SBATCH directives: | --nodes=<n> (optional) |
| Additional (extra6) SBATCH directives: | --ntasks-per-node=<n> (optional) |
| Additional (extra7) SBATCH directives: | --reservation=<name> (optional) |
| Standard submission script: | /hpc/share/applications/gnu8/openmpi3/relion/4.0.1/**cpu**/bin/sbatch.sh |
| Current job: | |
| Additional arguments: | (optional) |

{{ :calcoloscientifico:cluster:softwareapplicativo:relion-cpu-running-run.jpg?direct&200 |}}

Submit the job with "Run!". In the terminal window the following message appears:

  Submitted batch job <jobid>

Check the status of the queues with the command (to be launched in a second terminal window):

  hpc-squeue -u $USER

Cancel the job with "Delete":

{{ :calcoloscientifico:cluster:softwareapplicativo:relion-job-delete.jpg?direct&200 |}}

Then cancel the job in the queue (Relion does not do this automatically):

  scancel <jobid>

== Submit to the knl partition ==

To submit the job to the ''knl'' partition:

| Queue name: | **knl** |
| Standard submission script: | /hpc/share/applications/gnu8/openmpi3/relion/4.0.1/**knl**/bin/sbatch.sh |

== The choice of parameters ==

The number of CPUs required is equal to the product of ''Number of MPI processors'' and ''Number of threads''. The following limits must be respected, otherwise the submitted job will remain in the queue indefinitely (see the worked example after this table):

^ Partition ^ Number of CPUs ^ Number of GPUs ^
| gpu | >= 1 | 1-6 |
| gpu_guest | >= 1 | 1-2 |
| cpu | >= 2 | 0 |
| knl | >= 2 | 0 |

The number of allocated nodes depends on the number of CPUs required, on the number of CPUs per node (which depends on the [[calcoloscientifico:userguide#slurm_partitions|type of node]]), and on the availability of free or partially occupied nodes.

The number of nodes can be specified with the SBATCH directive ''--nodes''. The number of tasks per node can be specified with the SBATCH directive ''--ntasks-per-node''.

For more information on the SBATCH directives (to be launched in a second terminal window):

  man sbatch

To check how efficiently the requested resources were used (to be launched in a second terminal window after the job has ended):

  seff <jobid>
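As a worked example of the rule above, with made-up numbers: setting ''Number of MPI processors'' to 4 and ''Number of threads'' to 8 in the GUI requests 4 × 8 = 32 CPUs, which satisfies the "at least 2 CPUs" limit of the ''cpu'' partition. Whether 32 CPUs fit on a single node depends on the node type; assuming they do, the optional extra directives would map to SLURM as in this sketch:

<code bash>
# Hypothetical sizing for the cpu partition:
#   Number of MPI processors = 4, Number of threads = 8
#   Total CPUs = 4 x 8 = 32 (>= 2, so acceptable for the cpu partition)
#SBATCH --partition=cpu
#SBATCH --ntasks=4               # one SLURM task per MPI process
#SBATCH --cpus-per-task=8        # threads per MPI process
#SBATCH --nodes=1                # extra5 (optional): keep the job on one node
#SBATCH --ntasks-per-node=4      # extra6 (optional): all MPI processes on it
</code>

Leaving ''--nodes'' and ''--ntasks-per-node'' unset lets SLURM spread the tasks over however many nodes are free, which is usually fine; fixing them is useful mainly to avoid straddling nodes.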
== The use of MOTIONCOR2 with Relion ==

In ''RELION/Motion correction/Motion'' set

| Use RELION's own implementation? | No |
| MOTIONCOR2 executable: | "$RELION_MOTIONCOR2_EXECUTABLE" or "$(which MotionCor2)" |
| Which GPUs to use: | $GPUID |

When SLURM starts the job submitted with "Run!" in the "Running" tab, the "Standard submission script" loads the "relion" module, which defines the environment variable RELION_MOTIONCOR2_EXECUTABLE.

Multiple MotionCor2 processes must not share a GPU; otherwise they can crash or produce broken outputs (e.g. black images). The ''Number of MPI processors'' must therefore match the ''Number of GPUs''.

== The use of CTFFIND-4.1 with Relion ==

In ''RELION/CTF estimation/CTFFIND-4.1'' set

| Use CTFFIND-4.1? | Yes |
| CTFFIND-4.1 executable: | "$RELION_CTFFIND_EXECUTABLE" or "$(which ctffind)" |

When SLURM starts the job submitted with "Run!" in the "Running" tab, the "Standard submission script" loads the "relion" module, which defines the environment variable RELION_CTFFIND_EXECUTABLE.

== The use of Gctf with Relion ==

In ''RELION/CTF estimation/Gctf'' set

| Use Gctf instead? | Yes |
| Gctf executable: | "$RELION_GCTF_EXECUTABLE" or "$(which Gctf)" |
| Which GPUs to use: | $GPUID |

When SLURM starts the job submitted with "Run!" in the "Running" tab, the "Standard submission script" loads the "relion" module, which defines the environment variable RELION_GCTF_EXECUTABLE.

== The use of ResMap with Relion ==

In ''RELION/Local resolution/ResMap'' set

| Use ResMap? | Yes |
| ResMap executable: | "$RELION_RESMAP_EXECUTABLE" or "$(which ResMap)" |

When SLURM starts the job submitted with "Run!" in the "Running" tab, the "Standard submission script" loads the "relion" module, which defines the environment variable RELION_RESMAP_EXECUTABLE.

===== ResMap =====

[[http://resmap.sourceforge.net|ResMap]] can also be used independently of Relion.

CPU version 1.1.4:

  module load resmap

[[https://sourceforge.net/projects/resmap-latest|ResMap-Latest]] supports the use of NVIDIA GPUs.

GPU version 1.95 (requires CUDA 8.0):

  module load cuda/8.0 resmap

GPU version 1.95 (requires CUDA 9.0):

  module load cuda/9.0 resmap

===== Chimera =====

[[https://www.cgl.ucsf.edu/chimera|UCSF Chimera]] can be used independently of Relion, either in combination with ResMap or on its own.

  module load chimera
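As a quick sanity check after loading the modules, you can verify that the helper programs referenced in the sections above are actually available in your environment. This sketch only uses the variable and executable names quoted earlier in this guide:

<code bash>
module load gnu8 openmpi3 relion

# Each variable should expand to the path of the corresponding program.
echo "$RELION_MOTIONCOR2_EXECUTABLE"
echo "$RELION_CTFFIND_EXECUTABLE"
echo "$RELION_GCTF_EXECUTABLE"
echo "$RELION_RESMAP_EXECUTABLE"

# Equivalently, check that the executables are on the PATH.
which MotionCor2 ctffind Gctf ResMap chimera
</code>

If any of these prints nothing or reports "no ... in PATH", recheck the loaded modules with ''module list'' before submitting jobs.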