Apptainer
Apptainer is already available on the HPC cluster.
Users are also encouraged to use it on their own systems.
Install a binary package
These instructions are intended for users who wish to install Apptainer on their Linux system.
On a RedHat Enterprise Linux system, Apptainer can be installed by the administrator with the following command:
yum install https://github.com/apptainer/apptainer/releases/download/v1.0.1/apptainer-1.0.1-1.x86_64.rpm
On a Debian/Ubuntu system, Apptainer can be installed by the administrator with the following commands (apt cannot install directly from a URL, so the package is downloaded first):
wget https://github.com/apptainer/apptainer/releases/download/v1.0.1/apptainer_1.0.1_amd64.deb
sudo apt install ./apptainer_1.0.1_amd64.deb
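A quick way to check the installation is to print the Apptainer version and, optionally, run a trivial container pulled from Docker Hub (the alpine image below is just an arbitrary test image and requires network access):
apptainer --version
# Optional smoke test: pull a small image from Docker Hub and run a command in it
apptainer exec docker://alpine cat /etc/os-release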
Extract NVIDIA driver
The following commands
bash /hpc/share/tools/nvidia/driver/extract_nvidia_driver.sh 510.47.03 /opt/hpc/system/nvidia/driver-510.47.03
ln -sfn driver-510.47.03 /opt/hpc/system/nvidia/driver
were executed by the administrator on the GPU worker nodes, in a directory containing the NVIDIA driver setup executable (for example NVIDIA-Linux-x86_64-510.47.03.run).
Users who install Apptainer on their own Linux system can change the NVIDIA driver version and the installation path:
bash extract_nvidia_driver.sh <driver_version> </path/to/nvidia/driver_version>
ln -sfn <driver_version> </path/to/nvidia/driver>
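For example, a user whose system runs driver version 510.47.03 could extract it under a directory in their home (the paths below are only illustrative):
# Run in the directory that contains NVIDIA-Linux-x86_64-510.47.03.run
bash extract_nvidia_driver.sh 510.47.03 "$HOME/nvidia/driver-510.47.03"
ln -sfn driver-510.47.03 "$HOME/nvidia/driver"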
Apptainer on a worker node with CPU
On the login node of the HPC cluster run the following command:
srun --nodes=1 --ntasks-per-node=2 --partition=cpu --pty bash
On the worker node run the following commands:
module load apptainer/1.0
container='/hpc/share/applications/amber/20/amber-20-cpu'
bind="${GROUP:+$GROUP,}${ARCHIVE:+$ARCHIVE,}${SCRATCH:+$SCRATCH,}$(test -d /node && echo /node)"
export APPTAINERENV_PS1='(\[\e[93;40m\]$APPTAINER_NAME\[\e[0m\])[\u@\h \W]\$ '
apptainer shell ${bind:+--bind $bind} "$container.sif"
Inside the Apptainer container:
(amber-20-cpu.sif)[user@wn01 ~]$ which pmemd.MPI
/usr/local/amber/bin/pmemd.MPI
Users who want to try Apptainer on their own system have to copy the Apptainer image /hpc/share/applications/amber/20/amber-20-cpu.sif from the HPC cluster to their system and adjust the values of the container and bind variables accordingly. Users who have installed a binary package do not need to load the apptainer module.
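The same container can also be used non-interactively. The sketch below is a minimal Slurm batch script that reuses the module, container and bind settings shown above and runs a single command with apptainer exec instead of opening a shell (resource values are only an example):
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2
#SBATCH --partition=cpu

module load apptainer/1.0
container='/hpc/share/applications/amber/20/amber-20-cpu'
bind="${GROUP:+$GROUP,}${ARCHIVE:+$ARCHIVE,}${SCRATCH:+$SCRATCH,}$(test -d /node && echo /node)"
# apptainer exec runs the given command inside the container and exits
apptainer exec ${bind:+--bind $bind} "$container.sif" which pmemd.MPI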
Apptainer on a worker node with GPU
On the login node of the HPC cluster run the following command:
srun --nodes=1 --ntasks-per-node=2 --partition=gpu --gres=gpu:1 --pty bash
On the worker node run the following commands:
module load apptainer/1.0
container='/hpc/share/applications/amber/20/amber-20-gpu'
bind="${GROUP:+$GROUP,}${ARCHIVE:+$ARCHIVE,}${SCRATCH:+$SCRATCH,}$(test -d /node && echo /node)"
bind="${bind:+$bind,}/opt/hpc/system/nvidia/driver:/usr/local/nvidia/lib,/opt/hpc/system/nvidia/driver:/usr/local/nvidia/bin"
export APPTAINERENV_PS1='(\[\e[92;40m\]$APPTAINER_NAME\[\e[0m\])[\u@\h \W]\$ '
apptainer shell ${bind:+--bind $bind} "$container.sif"
Inside the Apptainer container:
(amber-20-gpu.sif)[user@wn41 ~]$ which pmemd.cuda
/usr/local/amber/bin/pmemd.cuda
(amber-20-gpu.sif)[user@wn41 ~]$ nvidia-smi -L
GPU 0: Tesla P100-PCIE-12GB (UUID: GPU-72c0a29f-52d9-dd84-6ebd-dddff9150862)
GPU 1: Tesla P100-PCIE-12GB (UUID: GPU-712c408a-aea1-bedc-0017-e8b596a19813)
GPU 2: Tesla P100-PCIE-12GB (UUID: GPU-28abb0c0-4b8e-4dc1-c900-363178a9fdab)
GPU 3: Tesla P100-PCIE-12GB (UUID: GPU-9bdb3f49-0a09-ffd8-34b2-f6bec452c96c)
GPU 4: Tesla P100-PCIE-12GB (UUID: GPU-7f865f50-b609-1530-4473-a350cd4cd020)
GPU 5: Tesla P100-PCIE-12GB (UUID: GPU-ac67e6d2-f6c2-5c20-34f8-61542b33b030)
GPU 6: Tesla P100-PCIE-12GB (UUID: GPU-199d33ff-3754-7d4a-bda7-f26dbc536ed1)
Users who want to try Apptainer on their own system have to copy the Apptainer image /hpc/share/applications/amber/20/amber-20-gpu.sif from the HPC cluster to their system and adjust the values of the container and bind variables accordingly. Users who have installed a binary package do not need to load the apptainer module.
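On a personal system with a regular NVIDIA driver installation, the manual driver bind above is usually not needed: Apptainer's --nv option binds the host NVIDIA libraries and device files automatically. A minimal sketch, assuming the image has already been copied to the current directory:
# --nv makes the host GPU driver visible inside the container
apptainer shell --nv amber-20-gpu.sif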
CityChrone
CityChrone from a Rocky Linux 8.5 Docker image
Singularity Definition File:
- citychrone-rocky-8.5.def
BootStrap: docker
From: rockylinux:8.5

%environment
    export PATH=/miniconda3/bin:$PATH

%runscript
    exec vcontact "$@"

%post
    dnf -y update
    dnf -y install scl-utils
    dnf -y install gcc-toolset-9
    scl enable gcc-toolset-9 bash
    dnf -y install git cmake3 zlib-devel wget

    # Install miniconda
    wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
    bash Miniconda3-latest-Linux-x86_64.sh -b -f -p /miniconda3/
    rm Miniconda3-latest-Linux-x86_64.sh

    # pull the conda functions in
    . /miniconda3/etc/profile.d/conda.sh
    # and make pip, etc. available while in %post
    export PATH="/miniconda3/bin:$PATH"

    # CONDA install
    conda install \
        --yes \
        --channel conda-forge \
        --strict-channel-priority \
        pandas matplotlib folium gdal jupyter numba colorama geopy shapely tqdm pymongo geojson protobuf pyproj

    # Help conda resolving Python "import"
    conda update --all

    # PIP install
    pip install \
        --no-deps \
        gtfs-realtime-bindings pyspark mpire

    # OSRM
    git clone https://github.com/Project-OSRM/osrm-backend.git
    cd osrm-backend
    mkdir -p build
    cd build
    cmake3 .. -DENABLE_MASON=ON -DCMAKE_CXX_COMPILER=/opt/rh/gcc-toolset-9/root/usr/bin/g++
    make
    make install
Singularity Build script:
- singularity-build-rocky-8.5.sh
#!/bin/bash
module load singularity/3.8.6
export SINGULARITY_CACHEDIR="/node/$USER/singularity/.singularity_cache"
export SINGULARITY_PULLFOLDER="/node/$USER/singularity/.singularity_images"
export SINGULARITY_TMPDIR="/node/$USER/singularity/.singularity_tmp"
export SINGULARITY_LOCALCACHEDIR="/node/$USER/singularity/.singularity_localcache"
export TMPDIR="/node/$USER/singularity/.tmp"
mkdir -p "$SINGULARITY_CACHEDIR"
mkdir -p "$SINGULARITY_PULLFOLDER"
mkdir -p "$SINGULARITY_TMPDIR"
mkdir -p "$SINGULARITY_LOCALCACHEDIR"
mkdir -p "$TMPDIR"
singularity build --fakeroot "/node/$USER/singularity/citychrone-rocky-8.5.sif" citychrone-rocky-8.5.def
mv "/node/$USER/singularity/citychrone-rocky-8.5.sif" .
Singularity Run script:
- singularity-run-rocky-8.5.sh
#!/bin/bash
module load singularity/3.8.6
singularity shell citychrone-rocky-8.5.sif
Interactive session:
srun --nodes=1 --ntasks-per-node=2 --partition=cpu --mem=8G --time=02:00:00 --pty bash
Launch Singularity Build script:
bash singularity-build-rocky-8.5.sh
Launch Singularity Run script:
bash singularity-run-rocky-8.5.sh
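Single commands can also be run in the image without an interactive shell, using singularity exec; for example, to check the OSRM tools built in %post and the conda-installed Python packages (both commands are only illustrative checks):
# Print the usage of the OSRM extractor compiled in the definition file
singularity exec citychrone-rocky-8.5.sif osrm-extract --help
# Verify that part of the conda-installed Python stack is importable
singularity exec citychrone-rocky-8.5.sif python -c 'import pandas, folium, pymongo'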
CityChrone from an Ubuntu Focal (20.04 LTS) Docker image
Singularity Definition File:
- citychrone-ubuntu-focal.def
BootStrap: docker
From: ubuntu:focal

%environment
    export PATH=/miniconda3/bin:$PATH
    export DEBIAN_FRONTEND=noninteractive
    export TZ='Europe/Rome'

%runscript
    exec vcontact "$@"

%post
    DEBIAN_FRONTEND=noninteractive
    TZ='Europe/Rome'
    ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
    apt-get update && \
    apt-get install -y automake build-essential bzip2 wget git default-jre unzip \
        build-essential git cmake pkg-config \
        libbz2-dev libstxxl-dev libstxxl1v5 libxml2-dev \
        libzip-dev libboost-all-dev lua5.2 liblua5.2-dev libtbb-dev

    # Install miniconda
    wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
    bash Miniconda3-latest-Linux-x86_64.sh -b -f -p /miniconda3/
    rm Miniconda3-latest-Linux-x86_64.sh

    # pull the conda functions in
    . /miniconda3/etc/profile.d/conda.sh
    # and make pip, etc. available while in %post
    export PATH="/miniconda3/bin:$PATH"

    # CONDA install
    conda install \
        --yes \
        --channel conda-forge \
        --strict-channel-priority \
        pandas matplotlib folium gdal jupyter numba colorama geopy shapely tqdm pymongo geojson protobuf pyproj

    # Help conda resolving Python "import"
    conda update --all

    # PIP install
    pip install \
        --no-deps \
        gtfs-realtime-bindings pyspark mpire

    # OSRM
    git clone https://github.com/Project-OSRM/osrm-backend.git
    cd osrm-backend
    mkdir -p build
    cd build
    cmake .. -DCMAKE_BUILD_TYPE=Release
    cmake --build .
    cmake --build . --target install
Singularity Build script:
- singularity-build-ubuntu-focal.sh
#!/bin/bash
module load singularity/3.8.6
export SINGULARITY_CACHEDIR="/node/$USER/singularity/.singularity_cache"
export SINGULARITY_PULLFOLDER="/node/$USER/singularity/.singularity_images"
export SINGULARITY_TMPDIR="/node/$USER/singularity/.singularity_tmp"
export SINGULARITY_LOCALCACHEDIR="/node/$USER/singularity/.singularity_localcache"
export TMPDIR="/node/$USER/singularity/.tmp"
mkdir -p "$SINGULARITY_CACHEDIR"
mkdir -p "$SINGULARITY_PULLFOLDER"
mkdir -p "$SINGULARITY_TMPDIR"
mkdir -p "$SINGULARITY_LOCALCACHEDIR"
mkdir -p "$TMPDIR"
singularity build --fakeroot "/node/$USER/singularity/citychrone-ubuntu-focal.sif" citychrone-ubuntu-focal.def
mv "/node/$USER/singularity/citychrone-ubuntu-focal.sif" .
Singularity Run script:
- singularity-run-ubuntu-focal.sh
#!/bin/bash
module load singularity/3.8.6
singularity shell citychrone-ubuntu-focal.sif
Interactive session:
srun --nodes=1 --ntasks-per-node=2 --partition=cpu --mem=8G --time=02:00:00 --pty bash
Launch Singularity Build script:
bash singularity-build-ubuntu-focal.sh
Launch Singularity Run script:
bash singularity-run-ubuntu-focal.sh
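As with the Rocky image, the Ubuntu container can also run one-off commands; for example, starting the Jupyter server installed via conda (the port is arbitrary, and from a worker node it would typically be reached through an SSH tunnel):
# Start Jupyter inside the container without opening a browser (example port)
singularity exec citychrone-ubuntu-focal.sif jupyter notebook --no-browser --ip=127.0.0.1 --port=8888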