=== Short guide to using Relion (compiled with GNU tools) ===
  
Connect to the server [[calcoloscientifico:userguide:gui|gui.hpc.unipr.it]] using:

  ssh -X nome.cognome@gui.hpc.unipr.it (on Linux)

or

  MobaXterm or Remote Desktop (on Windows)

The ''-X'' option enables the X11 forwarding needed to display the Relion GUI.
  
Open a terminal and enter the following commands:
<code>
newgrp: group '<GROUP>'
  
(<GROUP>)[user@ui03 ~]$ cd $GROUP
  
(<GROUP>)[user@ui03 <GROUP>]$ cd $USER
  
(<GROUP>)[user@ui03 <USER>]$ mkdir -p relion/test
  
(<GROUP>)[user@ui03 <USER>]$ cd relion/test
  
(<GROUP>)[user@ui03 test]$ module load gnu8 openmpi3 relion

(<GROUP>)[user@ui03 test]$ module list

Currently Loaded Modules:
  1) gnu8/8.3.0         3) ucx/1.14.0       5) ctffind/4.1.13   7) chimera/1.14   9) relion/4.0.1-cpu
  2) libfabric/1.13.1   4) openmpi3/3.1.6   6) resmap/1.1.4     8) topaz/0.2.5

(<GROUP>)[user@ui03 test]$ echo $RELION_QSUB_TEMPLATE
/hpc/share/applications/gnu8/openmpi3/relion/4.0.1/cpu/bin/sbatch.sh
  
(<GROUP>)[user@ui03 test]$ relion
</code>
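If the module versions on the cluster differ from those shown here, you can list the Relion builds actually installed (standard Environment Modules usage; the exact module names available may vary):

<code>
# List the available Relion modules
module avail relion
</code>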
  
We assume that the directory
  
''/hpc/group/<GROUP>/<USER>/relion/test''
  
contains an example case to test how Relion works.
  
<note>
Replace <GROUP> with the name of your group and <USER> with your user name.
</note>
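If you are unsure of your group name, the standard ''id'' command prints the groups your account belongs to (generic Linux usage, not specific to this cluster):

<code>
# Print the group memberships of the current user
id -Gn
</code>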
  
The file
  
''/hpc/share/applications/gnu8/openmpi3/relion/4.0.1/cpu/bin/sbatch.sh''
  
is the script to submit the job to the SLURM queue manager.
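To see exactly which SLURM directives Relion will generate, you can inspect the template directly (using the ''$RELION_QSUB_TEMPLATE'' variable shown above):

<code>
# Display the submission template that Relion fills in at run time
cat $RELION_QSUB_TEMPLATE
</code>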
|     Which GPUs to use:|$GPUID  |
  
{{ :calcoloscientifico:cluster:softwareapplicativo:relion-gpu-compute.jpg?direct&200 |}}
  
Set the various parameters in
|               Charge resources used to: | <account>                                                                 |
|          Real memory required per node: | <quantity>G (estimated)                                                   |
|           Generic consumable resources: | **gpu:<type of gpu>:<quantity per node>** (from 1 to 6)                  |
|  Additional (extra5) SBATCH directives: | --nodes=<number of nodes> (optional)                                     |
|  Additional (extra6) SBATCH directives: | --ntasks-per-node=<number of tasks per node> (optional)                  |
|  Additional (extra7) SBATCH directives: | --reservation=<reservation name> (optional)                              |
|             Standard submission script: | /hpc/share/applications/gnu8/openmpi3/relion/4.0.1/**gpu**/bin/sbatch.sh |
|                            Current job: | <job name>                                                               |
|                   Additional arguments: | <options to add to the command that will be executed> (optional)         |
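As a rough orientation, these fields map onto ordinary SBATCH directives; the sketch below shows the kind of job header the template produces, with placeholder values copied from the table above (the exact layout of the generated script is an assumption, inspect ''$RELION_QSUB_TEMPLATE'' for the authoritative version):

<code>
#!/bin/bash
# Hypothetical job header assembled from the GUI fields above
#SBATCH --partition=gpu                               # Queue name
#SBATCH --account=<account>                           # Charge resources used to
#SBATCH --mem=<quantity>G                             # Real memory required per node
#SBATCH --gres=gpu:<type of gpu>:<quantity per node>  # Generic consumable resources
#SBATCH --nodes=<number of nodes>                     # extra5 (optional)
#SBATCH --ntasks-per-node=<number of tasks per node>  # extra6 (optional)
#SBATCH --reservation=<reservation name>              # extra7 (optional)
#SBATCH --job-name=<job name>                         # Current job
</code>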
  
{{ :calcoloscientifico:cluster:softwareapplicativo:relion-gpu-running-run.jpg?direct&200 |}}
  
Submit the job with "Run!".
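After submission you can follow the job state from any terminal (standard SLURM usage, not specific to Relion):

<code>
# List your jobs: R = running, PD = pending
squeue -u $USER
</code>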
Cancel the job with "Delete":
  
{{ :calcoloscientifico:cluster:softwareapplicativo:relion-job-delete.jpg?direct&200 |}}
  
Cancel the job in the queue (Relion does not do this automatically):
To submit the job to the ''gpu_guest'' partition:
  
|                  Queue name: | **gpu_guest**                                                             |
|  Standard submission script: | /hpc/share/applications/gnu8/openmpi3/relion/4.0.1/**gpu**/bin/sbatch.sh  |
  
=== CPU processing ===
  
== Submit to cpu partition ==
  
<note>
|  Use GPU acceleration?|No |
  
{{ :calcoloscientifico:cluster:softwareapplicativo:relion-cpu-compute.jpg?direct&200 |}}
  
Set the various parameters in
  
|                        Submit to queue? | Yes                                                                       |
|                             Queue name: | **cpu**                                                                   |
|                   Queue submit command: | sbatch                                                                    |
|                         Total run time: | D-HH:MM:SS (estimated)                                                    |
|  Additional (extra6) SBATCH directives: | --ntasks-per-node=<number of tasks per node> (optional)                  |
|  Additional (extra7) SBATCH directives: | --reservation=<reservation name> (optional)                              |
|             Standard submission script: | /hpc/share/applications/gnu8/openmpi3/relion/4.0.1/**cpu**/bin/sbatch.sh |
|                            Current job: | <job name>                                                               |
|                   Additional arguments: | <options to add to the command that will be executed> (optional)         |
  
{{ :calcoloscientifico:cluster:softwareapplicativo:relion-cpu-running-run.jpg?direct&200 |}}
  
Submit the job with "Run!".
Cancel the job with "Delete":
  
{{ :calcoloscientifico:cluster:softwareapplicativo:relion-job-delete.jpg?direct&200 |}}
  
Cancel the job in the queue (Relion does not do this automatically):
  
''scancel <SLURM_JOB_ID>''
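If several jobs are queued, ''scancel'' also accepts a user filter (standard SLURM usage, assuming you want to remove all of your jobs at once):

<code>
# Cancel every job belonging to the current user
scancel -u $USER
</code>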
- 
== Submit to knl partition ==
To submit the work on the ''knl'' partition:
  
|                  Queue name: | **knl**                                                                   |
|  Standard submission script: | /hpc/share/applications/gnu8/openmpi3/relion/4.0.1/**knl**/bin/sbatch.sh  |
  
== The choice of parameters ==
| gpu        |            >= 1 |             1-6 |
| gpu_guest  |            >= 1 |             1-2 |
| cpu        |            >= 2 |               0 |
| knl        |            >= 2 |               0 |
  
The number of allocated nodes depends on the number of CPUs required, the number of CPUs per node (depending on the [[calcoloscientifico:userguide#slurm_partitions|type of node]]), and the availability of free or partially occupied nodes.
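To check how many CPUs and GPUs the nodes of a partition actually provide before choosing these values, ''sinfo'' can print the node layout (standard SLURM usage; substitute the partition you intend to use):

<code>
# Hostname, CPU count and generic resources (GPUs) for each node of the gpu partition
sinfo -N -p gpu -o "%N %c %G"
</code>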