mkdir -p demo/af_input
cp -p /hpc/share/containers/apptainer/alphafold/3/af_input/fold_input.json demo/af_input
cp -p /hpc/share/containers/apptainer/alphafold/3.0.1/slurm-alphafold-gpu-a100_40g.sh demo
cd demo
sbatch slurm-alphafold-gpu-a100_40g.sh
</code>
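The copied ''fold_input.json'' is a ready-made example in the AlphaFold 3 JSON input format. As a rough sketch of that format, a custom input could be written like this (the file name, job name and sequence below are placeholders, not taken from the demo file):

<code bash>
# Illustrative only: a minimal AlphaFold 3 input, written from inside the demo directory.
# The demo already provides a working fold_input.json; adjust name and sequence for real runs.
cat > af_input/my_input.json <<'EOF'
{
  "name": "Example monomer",
  "modelSeeds": [1],
  "sequences": [
    {
      "protein": {
        "id": "A",
        "sequence": "MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSF"
      }
    }
  ],
  "dialect": "alphafold3",
  "version": 1
}
EOF
</code>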
Script ''slurm-alphafold-gpu-a100_40g.sh'' to run ''alphafold'' on 1 node with 1 A100 (40 GB) GPU (8 tasks per node):

<code bash slurm-alphafold-gpu-a100_40g.sh>
#!/bin/bash --login
#SBATCH --job-name=alphafold
#SBATCH --partition=gpu
#SBATCH --qos=gpu
#SBATCH --gres=gpu:a100_40g:1
##SBATCH --account=<account>
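# Note: '##SBATCH' lines are ignored by Slurm; remove one '#' and set your
# project account above if your jobs must be charged to a specific account.
# The complete script can be copied from the path listed in the table below.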
</code>
^ GPU ^ Path ^
| NVIDIA [[https://github.com/google-deepmind/alphafold3/blob/main/docs/performance.md#nvidia-p100|P100 (12 GB)]] | ''/hpc/share/containers/apptainer/alphafold/3.0.1/slurm-alphafold-gpu-p100.sh'' |
| NVIDIA [[https://github.com/google-deepmind/alphafold3/blob/main/docs/performance.md#nvidia-v100|V100 (32 GB)]] | ''/hpc/share/containers/apptainer/alphafold/3.0.1/slurm-alphafold-gpu_guest-v100_hylab.sh'' |
| NVIDIA [[https://github.com/google-deepmind/alphafold3/blob/main/docs/performance.md#nvidia-a100-40-gb|A100 (40 GB)]] | ''/hpc/share/containers/apptainer/alphafold/3.0.1/slurm-alphafold-gpu-a100_40g.sh'' |
| NVIDIA [[https://github.com/google-deepmind/alphafold3/blob/main/docs/performance.md#accelerator-hardware-requirements|A100 (80 GB)]] | ''/hpc/share/containers/apptainer/alphafold/3.0.1/slurm-alphafold-gpu-a100_80g.sh'' |
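For example, to run the same demo on a P100 (12 GB) GPU, copy and submit the corresponding script instead (assuming ''demo/af_input'' has already been prepared as above):

<code bash>
cp -p /hpc/share/containers/apptainer/alphafold/3.0.1/slurm-alphafold-gpu-p100.sh demo
cd demo
sbatch slurm-alphafold-gpu-p100.sh
</code>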
=== Documentation ===