calcoloscientifico:userguide:alphafold

Differences

These are the differences between the selected revision and the current version of the page.

calcoloscientifico:userguide:alphafold [24/01/2025 18:26] – [Alphafold3] fabio.spataro
calcoloscientifico:userguide:alphafold [06/02/2025 19:46] (current version) fabio.spataro
Line 13: Line 13:
 <code>
 /hpc/share/containers/apptainer/alphafold/3.0.1/alphafold-3.0.1.sif
 +</code>
 +
 +=== Alphafold3 GPU demo ===
 +
 +<code>
 +mkdir -p demo/af_input
 +cp -p /hpc/share/containers/apptainer/alphafold/3/af_input/fold_input.json demo/af_input
 +cp -p /hpc/share/containers/apptainer/alphafold/3.0.1/slurm-alphafold-gpu-a100_40g.sh demo
 +cd demo
 +sbatch slurm-alphafold-gpu-a100_40g.sh
 </code>
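
The demo copies a ready-made ''fold_input.json''. To fold a different protein, a custom input file in the upstream AlphaFold 3 JSON format can be placed in ''af_input'' instead; a minimal sketch (the job name and amino acid sequence below are placeholders, not part of the cluster documentation):

<code bash>
mkdir -p demo/af_input

# Minimal AlphaFold 3 input: one protein chain, one model seed.
# Replace the placeholder name and sequence with your own data.
cat > demo/af_input/fold_input.json << 'EOF'
{
  "name": "my_protein",
  "modelSeeds": [1],
  "sequences": [
    {
      "protein": {
        "id": ["A"],
        "sequence": "MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSFPTTKTYFPHFDLSHGSAQVKGHGKKVADALTNAVAHVDDMPNALSALSDLHAHKLRVDPVNFKLLSHCLLVTLAAHLPAEFTPAVHASLDKFLASVSTVLTSKYR"
      }
    }
  ],
  "dialect": "alphafold3",
  "version": 1
}
EOF
</code>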
  
Line 36: Line 46:
 </code>
  
-Script ''slurm-alphafold.sh'' to run ''alphafold'' on 1 node with 1 GPU (8 tasks per node):
+Script ''slurm-alphafold-gpu-a100_40g.sh'' to run ''alphafold'' on 1 node with 1 A100 (40 GB) GPU (8 tasks per node):
  
-<code bash slurm-alphafold.sh>
+<code bash slurm-alphafold-gpu-a100_40g.sh>
 #!/bin/bash --login
 #SBATCH --job-name=alphafold
Line 62: Line 72:
 test -n "$ALPHAFOLD_CONTAINER" || exit 1
  
-ALPHAFOLD_N_CPU=$SLURM_CPUS_PER_TASK
+set -x
 + 
 +ALPHAFOLD_JSON_INPUT_FILE='fold_input.json'
 ALPHAFOLD_INPUT_DIR="$PWD/af_input"
 ALPHAFOLD_OUTPUT_DIR="$PWD/af_output/${SLURM_JOB_NAME}.d${SLURM_JOB_ID}"
Line 69: Line 81:
  
 apptainer exec \
-    --bind '/opt/hpc/system/nvidia/driver:/usr/local/nvidia/bin' \
-    --bind '/opt/hpc/system/nvidia/driver:/usr/local/nvidia/lib' \
+    --nv \
     --bind "$ALPHAFOLD_INPUT_DIR:/root/af_input" \     --bind "$ALPHAFOLD_INPUT_DIR:/root/af_input" \
     --bind "$ALPHAFOLD_OUTPUT_DIR:/root/af_output" \     --bind "$ALPHAFOLD_OUTPUT_DIR:/root/af_output" \
-    --bind "$ALPHAFOLD_MODEL_DIR:/root/models" \ 
-    --bind "$ALPHAFOLD_DB_DIR:/root/public_databases" \ 
     "$ALPHAFOLD_CONTAINER" \     "$ALPHAFOLD_CONTAINER" \
     python /app/alphafold/run_alphafold.py \     python /app/alphafold/run_alphafold.py \
-    --json_path=/root/af_input/fold_input.json \
+    --json_path="/root/af_input/$ALPHAFOLD_JSON_INPUT_FILE" \
     --model_dir=/root/models \
     --db_dir=/root/public_databases \
-    --pdb_database_path=/root/public_databases/mmcif_files \
-    --output_dir=/root/af_output \
-    --jackhmmer_n_cpu=$ALPHAFOLD_N_CPU \
-    --nhmmer_n_cpu=$ALPHAFOLD_N_CPU
+    --db_dir=/root/public_databases_fallback \
+    --output_dir=/root/af_output
 </code>
  
 The processing result will be saved in the ''af_output'' folder.
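
Once the job is submitted, it can be followed with standard SLURM commands, and the results end up in a job-specific subdirectory of ''af_output'' (named ''${SLURM_JOB_NAME}.d${SLURM_JOB_ID}'' as set in the script); a short sketch with a placeholder job ID:

<code bash>
# List your queued and running jobs ("alphafold" is the job name set in the script)
squeue -u "$USER"

# After the job finishes, inspect the output directory;
# 123456 is a placeholder for the actual SLURM job ID.
ls -l af_output/alphafold.d123456
</code>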
 +
 +Scripts for specific NVIDIA GPU models to run ''alphafold'' on 1 node with 1 GPU (8 tasks per node):
 +
 +^ GPU  ^ Path  ^
 +| NVIDIA [[https://github.com/google-deepmind/alphafold3/blob/main/docs/performance.md#nvidia-p100|P100 (12 GB)]]  | ''/hpc/share/containers/apptainer/alphafold/3.0.1/slurm-alphafold-gpu-p100.sh''  |
 +| NVIDIA [[https://github.com/google-deepmind/alphafold3/blob/main/docs/performance.md#nvidia-v100|V100 (32 GB)]]  | ''/hpc/share/containers/apptainer/alphafold/3.0.1/slurm-alphafold-gpu_guest-v100_hylab.sh''  |
 +| NVIDIA [[https://github.com/google-deepmind/alphafold3/blob/main/docs/performance.md#nvidia-a100-40-gb|A100 (40 GB)]]  | ''/hpc/share/containers/apptainer/alphafold/3.0.1/slurm-alphafold-gpu-a100_40g.sh''  |
 +| NVIDIA [[https://github.com/google-deepmind/alphafold3/blob/main/docs/performance.md#accelerator-hardware-requirements|A100 (80 GB)]]  | ''/hpc/share/containers/apptainer/alphafold/3.0.1/slurm-alphafold-gpu-a100_80g.sh''  |
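
The GPU-specific scripts listed above can presumably be submitted the same way as in the demo, reusing an existing ''af_input'' directory; for example, with the V100 script from the table:

<code bash>
# Copy the V100 batch script next to the af_input directory and submit it
cp -p /hpc/share/containers/apptainer/alphafold/3.0.1/slurm-alphafold-gpu_guest-v100_hylab.sh .
sbatch slurm-alphafold-gpu_guest-v100_hylab.sh
</code>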
  
 === Documentation ===
  
-Version 3.0.1 full help options:
+How to get a list of all flags of ''run_alphafold.py'' (version 3.0.1):
 + 
 +<code bash> 
 +module load apptainer 
 +module load alphafold/3.0.1 
 + 
 +apptainer exec "$ALPHAFOLD_CONTAINER" python /app/alphafold/run_alphafold.py --helpfull 
 +</code> 
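
To check a single option without reading the whole listing, the help output can be filtered with ''grep''; a small sketch using ''--db_dir'' as an example flag:

<code bash>
apptainer exec "$ALPHAFOLD_CONTAINER" python /app/alphafold/run_alphafold.py --helpfull 2>&1 \
    | grep -A 2 -- '--db_dir'
</code>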
 + 
 +List of all flags of ''run_alphafold.py'' (version 3.0.1):
  
 <code>
Line 114: Line 138:
   --db_dir: Path to the directory containing the databases. Can be specified multiple times to search multiple directories in order.;
     repeat this option to specify a list of values
-    (default: "['/home/sti_calcolo/public_databases']")+    (default: "['/hpc/home/sti_calcolo/public_databases']")
   --flash_attention_implementation: <triton|cudnn|xla>: Flash attention implementation to use. 'triton' and 'cudnn' uses a Triton and cuDNN flash attention
     implementation, respectively. The Triton kernel is fastest and has been tested more thoroughly. The Triton and cuDNN kernels require Ampere GPUs or later.
Line 142: Line 166:
     (default: '${DB_DIR}/mgy_clusters_2022_05.fa')
   --model_dir: Path to the model to use for inference.
-    (default: '/home/sti_calcolo/models')
+    (default: '/hpc/home/sti_calcolo/models')
   --nhmmer_binary_path: Path to the Nhmmer binary.
     (default: '/hmmer/bin/nhmmer')