===== Amber =====

==== Amber 20 ====

=== sander ===

Script ''slurm-amber-20-cpu.sh'' to run ''sander.MPI'' on 2 nodes (16 tasks per node):

<code bash>
#!/bin/bash --login
#SBATCH --job-name=amber20cpu
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
#SBATCH --time=0-01:00:00
#SBATCH --mem=16G
#SBATCH --partition=cpu
#SBATCH --constraint='broadwell|skylake'
#SBATCH --account=

shopt -q login_shell || exit 1

module load gnu8 openmpi4
module load apptainer/1.0

container='/hpc/share/applications/amber/20/amber-20-cpu'
# Bind the group, archive, scratch and node-local areas into the container when they exist.
bind="${GROUP:+$GROUP,}${ARCHIVE:+$ARCHIVE,}${SCRATCH:+$SCRATCH,}$(test -d /node && echo /node)"

mpirun \
    apptainer exec ${bind:+--bind $bind} "$container.sif" \
    sander.MPI -p complex_azd.prmtop -c complex_azd.inpcrd -ref complex_azd.inpcrd -i min1.in -o min1.out -r min1.crd -O
</code>

The argument to the ''--account='' option must be the account you intend to use; the same applies to the GPU script below.

=== pmemd.cuda ===

Script ''slurm-amber-20-gpu.sh'' to run ''pmemd.cuda'' on 1 node with 1 GPU (4 tasks per node):

<code bash>
#!/bin/bash --login
#SBATCH --job-name=amber20gpu
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --gres=gpu:1
#SBATCH --time=0-01:00:00
#SBATCH --mem=16G
#SBATCH --partition=gpu
#SBATCH --account=

shopt -q login_shell || exit 1

module load apptainer/1.0

container='/hpc/share/applications/amber/20/amber-20-gpu'
# Bind the group, archive, scratch and node-local areas into the container when they exist.
bind="${GROUP:+$GROUP,}${ARCHIVE:+$ARCHIVE,}${SCRATCH:+$SCRATCH,}$(test -d /node && echo /node)"
# Also bind the host NVIDIA driver directory so the CUDA runtime in the container can use it.
bind="${bind:+$bind,}/opt/hpc/system/nvidia/driver:/usr/local/nvidia/lib,/opt/hpc/system/nvidia/driver:/usr/local/nvidia/bin"

apptainer exec ${bind:+--bind $bind} "$container.sif" \
    pmemd.cuda -i min.in -o test-min.out -p test.prmtop -c test.inpcrd -r test-min.rst -ref test.inpcrd -e test-min.en -inf test-min.mdinfo -O

apptainer exec ${bind:+--bind $bind} "$container.sif" \
    pmemd.cuda -i test_amber_MD.in -o test-MD.out -p test.prmtop -c test-min.rst -r test-MD.rst -x test-MD.nc -e test-MD.en -inf test-MD.mdinfo -O
</code>

=== pdb4amber ===

Run the following commands on the login node to open a shell on a compute node:

<code bash>
srun \
    --nodes=1 \
    --ntasks-per-node=4 \
    --time=0-01:00:00 \
    --mem=16G \
    --partition=cpu \
    --qos=cpu \
    --constraint='broadwell|skylake' \
    --account= \
    --pty \
    bash
</code>

As usual, the argument to the ''--account='' option must be the account you intend to use.

On the compute node run a shell within the ''amber/20/cpu'' container:

<code bash>
module load apptainer/1.0
module load amber/20/cpu
apptainer shell "$CONTAINER"
</code>

Inside the container run ''pdb4amber'':

<code bash>
pdb4amber
</code>

To exit the container run ''exit''. To exit the shell on the compute node run ''exit'' again.
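For reference, a typical ''pdb4amber'' invocation inside the container might look like the following sketch; the file names ''protein.pdb'' and ''protein_amber.pdb'' are illustrative and not part of the site setup:

<code bash>
# Illustrative example: protein.pdb is a placeholder input structure.
# Write an Amber-ready copy of the structure to protein_amber.pdb.
pdb4amber -i protein.pdb -o protein_amber.pdb
</code>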