calcoloscientifico:cluster:softwareapplicativo:relion · 29/05/2023 11:55 (current version) · federico.prost
newgrp: group '<GROUP>'

(<GROUP>)[user@ui03 ~]$ cd $GROUP

(<GROUP>)[user@ui03 <GROUP>]$ cd $USER

(<GROUP>)[user@ui03 <USER>]$ mkdir -p relion/test

(<GROUP>)[user@ui03 <USER>]$ cd relion/test

(<GROUP>)[user@ui03 test]$ module load gnu8 openmpi3 relion

(<GROUP>)[user@ui03 test]$ module list

Currently Loaded Modules:
  1) gnu8/8.3.0         3) ucx/1.14.0       5) ctffind/4.1.13   7) chimera/1.14   9) relion/4.0.1-cpu
  2) libfabric/1.13.1   4) openmpi3/3.1.    6) resmap/1.1.4     8) topaz/0.2.5

(<GROUP>)[user@ui03 test]$ echo $RELION_QSUB_TEMPLATE
/hpc/share/applications/gnu8/openmpi3/relion/4.0.1/cpu/bin/sbatch.sh

(<GROUP>)[user@ui03 test]$ relion
</code>
  
We assume that the directory

''/hpc/group/<GROUP>/<USER>/relion/test''

contains an example case to test how Relion works.
  
<note>
Replace <GROUP> with the name of your group and <USER> with your username.
</note>

The file

''/hpc/share/applications/gnu8/openmpi3/relion/4.0.1/cpu/bin/sbatch.sh''

is the script to submit the job to the SLURM queue manager.
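Relion does not submit this script verbatim: it first substitutes placeholders of the form ''XXXnameXXX'' with the values entered in the Running tab, then hands the result to ''sbatch''. A minimal sketch of what such a template can look like (illustrative only — the placeholder names follow Relion's template convention, but the actual contents of the site script may differ):

```shell
#!/bin/bash
# Illustrative Relion Slurm template -- NOT the actual site script.
# Relion replaces each XXX...XXX placeholder with the values set in the GUI
# before handing the file to sbatch.
#SBATCH --job-name=XXXnameXXX          # "Current job" field
#SBATCH --partition=XXXqueueXXX        # "Queue name" field
#SBATCH --ntasks=XXXmpinodesXXX        # number of MPI procs
#SBATCH --cpus-per-task=XXXthreadsXXX  # threads per MPI proc
#SBATCH --output=XXXoutfileXXX
#SBATCH --error=XXXerrfileXXX
#SBATCH XXXextra5XXX                   # "Additional (extra5) SBATCH directives" field
mpirun XXXcommandXXX                   # the Relion command line built by the GUI
```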
|     Which GPUs to use:|$GPUID  |

{{ :calcoloscientifico:cluster:softwareapplicativo:relion-gpu-compute.jpg?direct&200 |}}

Set the various parameters in
|               Charge resources used to: | <account>                                                                 |
|          Real memory required per node: | <quantity>G (estimated)                                                   |
|           Generic consumable resources: | **gpu:<type of gpu>:<quantity per node>** (from 1 to 6)                   |
|  Additional (extra5) SBATCH directives: | --nodes=<number of nodes> (optional)                                      |
|  Additional (extra6) SBATCH directives: | --ntasks-per-node=<number of tasks per node> (optional)                   |
|  Additional (extra7) SBATCH directives: | --reservation=<reservation name> (optional)                               |
|             Standard submission script: | /hpc/share/applications/gnu8/openmpi3/relion/4.0.1/**gpu**/bin/sbatch.sh  |
|                            Current job: | <job name>                                                                |
|                   Additional arguments: | <options to add to the command that will be executed> (optional)          |
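For reference, these fields translate into ''#SBATCH'' directives in the generated batch script roughly as follows (all values below are made-up examples, not site defaults):

```shell
# Sketch of the Slurm directives implied by the GUI fields above (example values).
#SBATCH --partition=gpu           # Queue name
#SBATCH --account=my_account      # Charge resources used to
#SBATCH --mem=64G                 # Real memory required per node
#SBATCH --gres=gpu:2              # Generic consumable resources (gpu:<type>:<n> if a type is given)
#SBATCH --nodes=1                 # Additional (extra5) directive, optional
#SBATCH --ntasks-per-node=2       # Additional (extra6) directive, optional
```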
  
{{ :calcoloscientifico:cluster:softwareapplicativo:relion-gpu-running-run.jpg?direct&200 |}}

Submit the job with "Run!".

Cancel the job with "Delete":

{{ :calcoloscientifico:cluster:softwareapplicativo:relion-job-delete.jpg?direct&200 |}}

Cancel the job in the queue (Relion does not do this automatically):

''scancel <SLURM_JOB_ID>''
To submit the job to the ''gpu_guest'' partition:

|                  Queue name: | **gpu_guest**                                                             |
|  Standard submission script: | /hpc/share/applications/gnu8/openmpi3/relion/4.0.1/**gpu**/bin/sbatch.sh  |
  
=== CPU processing ===

|  Use GPU acceleration?|No |
  
{{ :calcoloscientifico:cluster:softwareapplicativo:relion-cpu-compute.jpg?direct&200 |}}

Set the various parameters in
|  Additional (extra6) SBATCH directives: | --ntasks-per-node=<number of tasks per node> (optional)                   |
|  Additional (extra7) SBATCH directives: | --reservation=<reservation name> (optional)                               |
|             Standard submission script: | /hpc/share/applications/gnu8/openmpi3/relion/4.0.1/**cpu**/bin/sbatch.sh  |
|                            Current job: | <job name>                                                                |
|                   Additional arguments: | <options to add to the command that will be executed> (optional)          |
  
{{ :calcoloscientifico:cluster:softwareapplicativo:relion-cpu-running-run.jpg?direct&200 |}}

Submit the job with "Run!".

Cancel the job with "Delete":

{{ :calcoloscientifico:cluster:softwareapplicativo:relion-job-delete.jpg?direct&200 |}}

Cancel the job in the queue (Relion does not do this automatically):

''scancel <SLURM_JOB_ID>''
== Submit to knl partition ==

To submit the work on the ''knl'' partition:

|                  Queue name: | **knl**                                                                   |
|  Standard submission script: | /hpc/share/applications/gnu8/openmpi3/relion/4.0.1/**knl**/bin/sbatch.sh  |
  
== The choice of parameters ==

| gpu        |            >= 1 |             1-6 |
| gpu_guest  |            >= 1 |             1-2 |
| cpu        |            >= 2 |               0 |
| knl        |            >= 2 |               0 |

The number of allocated nodes depends on the number of CPUs requested, the number of CPUs per node (depending on the [[calcoloscientifico:userguide#slurm_partitions|type of node]]), and the availability of free or partially occupied nodes.
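As a rule of thumb, Slurm needs at least ceil(tasks / tasks-per-node) nodes for a job. A quick shell sketch of that arithmetic (generic calculation, not cluster-specific):

```shell
# Minimum node count for a given total task count and tasks per node:
# nodes = ceil(ntasks / ntasks_per_node), using integer arithmetic.
ntasks=10
ntasks_per_node=4
nodes=$(( (ntasks + ntasks_per_node - 1) / ntasks_per_node ))
echo "$nodes"   # prints 3
```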