Apptainer is already available on the HPC cluster, and users are encouraged to try it on their own systems as well.
These instructions are intended for users who wish to install Apptainer on their own Linux system.
On a Red Hat Enterprise Linux system, Apptainer can be installed by the administrator with the following command:
yum install https://github.com/apptainer/apptainer/releases/download/v1.1.8/apptainer-1.1.8-1.x86_64.rpm
On a Debian/Ubuntu system Apptainer can be installed by the administrator with the following command:
wget -qc https://github.com/apptainer/apptainer/releases/download/v1.1.8/apptainer_1.1.8_amd64.deb
sudo dpkg -i apptainer_1.1.8_amd64.deb
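After installation, it is worth checking that the apptainer binary is on the PATH and reports the expected version (the output line below is illustrative):
apptainer --version
# apptainer version 1.1.8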
The administrator executed the following commands on the GPU worker nodes, in a directory containing the NVIDIA driver setup executable (for example NVIDIA-Linux-x86_64-510.47.03.run):
bash /hpc/share/tools/nvidia/driver/extract_nvidia_driver.sh 510.47.03 /opt/hpc/system/nvidia/driver-510.47.03
ln -sfn driver-510.47.03 /opt/hpc/system/nvidia/driver
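To confirm that the extraction succeeded, check that the symlink resolves to the versioned directory (the output shown is illustrative):
ls -l /opt/hpc/system/nvidia/driver
# ... /opt/hpc/system/nvidia/driver -> driver-510.47.03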
Users who install Apptainer on their own Linux system can change the NVIDIA driver version and the installation path:
bash extract_nvidia_driver.sh <driver_version> </path/to/nvidia/driver_version>
ln -sfn <driver_version> </path/to/nvidia/driver>
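For example, to extract a hypothetical driver version 535.154.05 under $HOME/nvidia (both the version and the prefix here are placeholders, not cluster defaults):
bash extract_nvidia_driver.sh 535.154.05 "$HOME/nvidia/driver-535.154.05"
ln -sfn driver-535.154.05 "$HOME/nvidia/driver"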
On the login node of the HPC cluster run the following command:
srun --nodes=1 --ntasks-per-node=2 --partition=cpu --pty bash
On the worker node run the following commands:
module load apptainer/1.0
container='/hpc/share/applications/amber/20/amber-20-cpu'
bind="${GROUP:+$GROUP,}${ARCHIVE:+$ARCHIVE,}${SCRATCH:+$SCRATCH,}$(test -d /node && echo /node)"
export APPTAINERENV_PS1='(\[\e[93;40m\]$APPTAINER_NAME\[\e[0m\])[\u@\h \W]\$ '
apptainer shell ${bind:+--bind $bind} "$container.sif"
Inside the Apptainer container:
(amber-20-cpu.sif)[user@wn01 ~]$ which pmemd.MPI
/usr/local/amber/bin/pmemd.MPI
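Beyond the interactive shell, the same binary can be invoked non-interactively with apptainer exec. The following is a sketch only: it assumes the image provides its own MPI runtime, and the Amber input files (md.in, prmtop, inpcrd) are placeholders:
module load apptainer/1.0
container='/hpc/share/applications/amber/20/amber-20-cpu'
bind="${GROUP:+$GROUP,}${ARCHIVE:+$ARCHIVE,}${SCRATCH:+$SCRATCH,}$(test -d /node && echo /node)"
# Run pmemd.MPI on the 2 tasks requested from Slurm; input/output file names are placeholders.
apptainer exec ${bind:+--bind $bind} "$container.sif" \
    mpirun -np 2 pmemd.MPI -O -i md.in -p prmtop -c inpcrd -o md.out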
Users who want to try Apptainer on their system will have to copy the Apptainer image /hpc/share/applications/amber/20/amber-20-cpu.sif from the HPC cluster to their system and appropriately modify the values of the container and bind variables.
Users who have installed a binary package do not have to load the apptainer module.
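A minimal sketch of that workflow, assuming a login host named hpc.example.org and a local data directory to bind (both are placeholders):
scp user@hpc.example.org:/hpc/share/applications/amber/20/amber-20-cpu.sif .
container="$PWD/amber-20-cpu"
bind="$HOME/data"   # bind only paths that exist on the local system
apptainer shell ${bind:+--bind $bind} "$container.sif"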
On the login node of the HPC cluster run the following command:
srun --nodes=1 --ntasks-per-node=2 --partition=gpu --gres=gpu:1 --pty bash
On the worker node run the following commands:
module load apptainer/1.0
container='/hpc/share/applications/amber/20/amber-20-gpu'
bind="${GROUP:+$GROUP,}${ARCHIVE:+$ARCHIVE,}${SCRATCH:+$SCRATCH,}$(test -d /node && echo /node)"
bind="${bind:+$bind,}/opt/hpc/system/nvidia/driver:/usr/local/nvidia/lib,/opt/hpc/system/nvidia/driver:/usr/local/nvidia/bin"
export APPTAINERENV_PS1='(\[\e[92;40m\]$APPTAINER_NAME\[\e[0m\])[\u@\h \W]\$ '
apptainer shell ${bind:+--bind $bind} "$container.sif"
Inside the Apptainer container:
(amber-20-gpu.sif)[user@wn41 ~]$ which pmemd.cuda
/usr/local/amber/bin/pmemd.cuda
(amber-20-gpu.sif)[user@wn41 ~]$ nvidia-smi -L
GPU 0: Tesla P100-PCIE-12GB (UUID: GPU-72c0a29f-52d9-dd84-6ebd-dddff9150862)
GPU 1: Tesla P100-PCIE-12GB (UUID: GPU-712c408a-aea1-bedc-0017-e8b596a19813)
GPU 2: Tesla P100-PCIE-12GB (UUID: GPU-28abb0c0-4b8e-4dc1-c900-363178a9fdab)
GPU 3: Tesla P100-PCIE-12GB (UUID: GPU-9bdb3f49-0a09-ffd8-34b2-f6bec452c96c)
GPU 4: Tesla P100-PCIE-12GB (UUID: GPU-7f865f50-b609-1530-4473-a350cd4cd020)
GPU 5: Tesla P100-PCIE-12GB (UUID: GPU-ac67e6d2-f6c2-5c20-34f8-61542b33b030)
GPU 6: Tesla P100-PCIE-12GB (UUID: GPU-199d33ff-3754-7d4a-bda7-f26dbc536ed1)
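As in the CPU case, the GPU binary can also be run non-interactively. This sketch reuses the container and bind variables built above; the Amber input and output file names are again placeholders:
# Run the CUDA build of pmemd on the GPU allocated by Slurm.
apptainer exec ${bind:+--bind $bind} "$container.sif" \
    pmemd.cuda -O -i md.in -p prmtop -c inpcrd -o md.out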
Users who want to try Apptainer on their system will have to copy the Apptainer image /hpc/share/applications/amber/20/amber-20-gpu.sif from the HPC cluster to their system and appropriately modify the values of the container and bind variables.
Users who have installed a binary package do not have to load the apptainer module.
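On a personal system where the NVIDIA driver is installed in the standard location, the manual driver bind shown above is usually unnecessary: Apptainer's --nv flag locates and binds the host GPU driver automatically. A minimal sketch, after copying the image as described:
apptainer shell --nv amber-20-gpu.sif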
The slurm-bactgen.sh script below lists the packages present in the bactgen environment and prints the help of tormes, on one node (1 task, 4 CPUs per task):
#!/bin/bash
#SBATCH --job-name=bactgen
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=4
#SBATCH --time=0-01:00:00
#SBATCH --mem=8G
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#SBATCH --account=<account>

module load apptainer
module load bactgen

apptainer run "$CONTAINER" micromamba list
echo '─────────────────────────────────────────────────────────────────────────────────────────'
apptainer run "$CONTAINER" tormes --help
echo '─────────────────────────────────────────────────────────────────────────────────────────'
Edit the slurm-bactgen.sh script and submit it with the following command:
sbatch slurm-bactgen.sh
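Once submitted, the job can be monitored with standard Slurm commands. Given the --output=%x.o%j directive, the output file is named bactgen.o<jobid> (the job ID below is illustrative):
squeue -u $USER        # check whether the job is pending or running
cat bactgen.o1234567   # inspect the output once the job has run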