
NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems.


NAMD jobs on BDW

Multicore NAMD job (single BDW node)

Example script slurm-namd-bdw.sh to run multicore NAMD on a single BDW node using 16 cores and at most 32 GB of memory:

slurm-namd-bdw.sh
#!/bin/bash
#SBATCH --job-name=NAMD
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --mem=32G
#SBATCH --time=1-00:00:00
#SBATCH --partition=bdw
#
# Edit next line
#SBATCH --account=<account>
 
module load namd/2.12/bdw
 
if [ ! -e "${SLURM_JOB_NAME}.conf" ]; then
    echo "Error: file '${SLURM_JOB_NAME}.conf' not found." 1>&2
    exit 1
fi
 
namd2 +p ${SLURM_TASKS_PER_NODE} "${SLURM_JOB_NAME}.conf" > "${SLURM_JOB_NAME}.log"

Submission example:

sbatch --job-name=t1r1_min1 slurm-namd-bdw.sh
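The script looks for a NAMD configuration file named after the job (here t1r1_min1.conf). As a rough illustration only, a minimal minimization input might resemble the sketch below; all file names and parameter values are placeholders, not taken from this guide, and a real run will need the topology, coordinate, and parameter files of your own system:

```
# t1r1_min1.conf -- illustrative minimal NAMD input (all values are placeholders)
structure          t1r1.psf            ;# PSF topology file
coordinates        t1r1.pdb            ;# initial coordinates
paraTypeCharmm     on                  ;# CHARMM-format parameters
parameters         par_all36_prot.prm
temperature        310
cutoff             12.0
outputName         t1r1_min1           ;# prefix for NAMD output files
minimize           1000                ;# energy minimization steps
```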

Multicore NAMD job (single BDW node, simplified version)

Example script slurm-namd-bdw.sh, with the job name hard-coded, to run multicore NAMD on a single BDW node using 16 cores and at most 32 GB of memory:

slurm-namd-bdw.sh
#!/bin/bash
#SBATCH --job-name=t1r1_min1
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --mem=32G
#SBATCH --time=1-00:00:00
#SBATCH --partition=bdw
#
# Edit next line
#SBATCH --account=<account>
 
module load namd/2.12/bdw
 
namd2 +p ${SLURM_TASKS_PER_NODE} t1r1_min1.conf > t1r1_min1.log

Submission example:

sbatch slurm-namd-bdw.sh

Multicore NAMD job (single BDW node, exclusive mode)

Example script slurm-namd.sh to run multicore NAMD on a single BDW node in exclusive mode (all cores and all memory of the node):

slurm-namd.sh
#!/bin/bash
#SBATCH --job-name=NAMD
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --mem=0
#SBATCH --exclusive
#SBATCH --time=1-00:00:00
#SBATCH --partition=bdw
#
# Edit next line
#SBATCH --account=<account>
 
module load namd/2.12/bdw
 
if [ ! -e "${SLURM_JOB_NAME}.conf" ]; then
    echo "Error: file '${SLURM_JOB_NAME}.conf' not found." 1>&2
    exit 1
fi
 
namd2 +p ${SLURM_TASKS_PER_NODE} "${SLURM_JOB_NAME}.conf" > "${SLURM_JOB_NAME}.log"

Submission example:

sbatch --job-name=t1r1_min1 slurm-namd.sh
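These scripts pass ${SLURM_TASKS_PER_NODE} directly to namd2 +p. On a single-node allocation the variable is a plain number, but Slurm documents a compound form such as 16(x2) for multi-node jobs. A small defensive sketch (pure shell, illustrative variable names) strips the repetition suffix before use:

```shell
# SLURM_TASKS_PER_NODE may look like "16" (one node) or "16(x2)" (two nodes
# with 16 tasks each). Keep only the leading count for namd2's +p option.
SLURM_TASKS_PER_NODE="16(x2)"          # example value; Slurm sets this in a real job
NPROC="${SLURM_TASKS_PER_NODE%%(*}"    # drop everything from the first "(" onward
echo "${NPROC}"                        # -> 16
```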

Multicore NAMD job (single BDW node, exclusive mode, simplified version)

Example script slurm-namd-bdw.sh, with the job name hard-coded, to run multicore NAMD on a single BDW node in exclusive mode:

slurm-namd-bdw.sh
#!/bin/bash
#SBATCH --job-name=t1r1_min1
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --mem=0
#SBATCH --exclusive
#SBATCH --time=1-00:00:00
#SBATCH --partition=bdw
#
# Edit next line
#SBATCH --account=<account>
 
module load namd/2.12/bdw
 
namd2 +p ${SLURM_TASKS_PER_NODE} t1r1_min1.conf > t1r1_min1.log

Submission example:

sbatch slurm-namd-bdw.sh

NAMD jobs on KNL

Multicore NAMD job (single KNL node)

Example script slurm-namd-knl.sh to run multicore NAMD on a single KNL node in exclusive mode using 64 cores:

slurm-namd-knl.sh
#!/bin/bash
#SBATCH --job-name=NAMD
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=64
#SBATCH --exclusive
#SBATCH --mem=0
#SBATCH --time=1-00:00:00
#SBATCH --partition=knl
#
# Edit next line
#SBATCH --account=<account>
 
module load namd/2.12/knl
 
if [ ! -e "${SLURM_JOB_NAME}.conf" ]; then
    echo "Error: file '${SLURM_JOB_NAME}.conf' not found." 1>&2
    exit 1
fi
 
namd2 +p ${SLURM_TASKS_PER_NODE} "${SLURM_JOB_NAME}.conf" > "${SLURM_JOB_NAME}.log"

Submission example:

sbatch --job-name=t1r1_min1 slurm-namd-knl.sh

NAMD jobs on GPU

Multicore NAMD job (single GPU node)

Example script slurm-namd-gpu.sh to run multicore NAMD on a single GPU node using 8 cores, 2 GPUs, and at most 32 GB of memory:

slurm-namd-gpu.sh
#!/bin/bash
#SBATCH --job-name=NAMD
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --gres=gpu:2
#SBATCH --mem=32G
#SBATCH --time=1-00:00:00
#SBATCH --partition=gpu
#
# Edit next line
#SBATCH --account=<account>
 
module load namd/2.12/gpu
 
if [ ! -e "${SLURM_JOB_NAME}.conf" ]; then
    echo "Error: file '${SLURM_JOB_NAME}.conf' not found." 1>&2
    exit 1
fi
 
namd2 +p ${SLURM_TASKS_PER_NODE} "${SLURM_JOB_NAME}.conf" > "${SLURM_JOB_NAME}.log"

Submission example:

sbatch --job-name=t1r1_min1 slurm-namd-gpu.sh

Submission example overriding the options specified in the launch script (sbatch command-line options take precedence over the #SBATCH directives in the script):

sbatch --job-name=t1r1_min1 --ntasks-per-node=16 --gres=gpu:4 --mem=64G slurm-namd-gpu.sh
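The GPU build of NAMD normally uses all GPUs visible to the job; CUDA builds of NAMD also accept a +devices list to bind to specific devices. A hypothetical sketch (device indices and core count are illustrative; inside a real job Slurm typically exports CUDA_VISIBLE_DEVICES with the granted GPUs) builds the launch line explicitly:

```shell
# Pass the GPUs granted by Slurm explicitly to NAMD's +devices option
# (supported by CUDA builds of NAMD).
CUDA_VISIBLE_DEVICES="0,1"   # example value; set by Slurm inside a real GPU job
NPROC=8                      # should match --ntasks-per-node
CMD="namd2 +p ${NPROC} +devices ${CUDA_VISIBLE_DEVICES} t1r1_min1.conf"
echo "${CMD}"                # -> namd2 +p 8 +devices 0,1 t1r1_min1.conf
```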
calcoloscientifico/userguide/namd.txt · Last modified: 30/06/2022 12:52 by fabio.spataro