Before you run your batch script, you may execute
newgrp G_ABAQUS
to start a new shell with G_ABAQUS as your primary group, so that new files are owned by that group. Your working area is the group directory
/hpc/group/G_ABAQUS
Create and use your own sub-directory there. Load the Abaqus module,
module load abaqus
and submit from this new shell.
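Put together, a typical session might look like the following sketch (the sub-directory name $USER is only an example; slurm-abaqus.sh is one of the scripts below):

newgrp G_ABAQUS                       # new shell with primary group G_ABAQUS
mkdir -p /hpc/group/G_ABAQUS/$USER    # create your own sub-directory (example name)
cd /hpc/group/G_ABAQUS/$USER
module load abaqus
sbatch slurm-abaqus.sh                # submit one of the example scripts below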
By default, Abaqus writes its scratch files to the directory named by the $TMPDIR environment variable, or to /tmp if $TMPDIR is not defined. During the analysis a subdirectory is created under this directory to hold the analysis scratch files; its name is constructed from the user's user name, the job ID, and the job's process number. The subdirectory and its contents are deleted upon completion of the analysis.
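If you want the scratch files somewhere else, the scratch option accepts an explicit directory, as some of the example scripts below do with scratch="$SCRATCH". A minimal sketch, reusing the testverita job from the examples:

# Show where Abaqus will put scratch files by default
echo "${TMPDIR:-/tmp}"

# Point Abaqus at an explicit scratch directory instead
abaqus job=testverita scratch="${TMPDIR:-/tmp}" interactive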
Example script slurm-abaqus.sh
to run Abaqus on 1 node, 28 cores, 0 GPUs:
#!/bin/bash
#SBATCH --job-name=abaqus
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
#SBATCH --mem=94G
#SBATCH --time=3-00:00:00
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#
# Edit next line
#SBATCH --account=<account>

echo "$SLURM_JOB_NODELIST"

# Modules required by Abaqus (choose one of the following two possibilities)
#module load intel impi abaqus
module load gnu openmpi abaqus

ulimit -s unlimited
ulimit -s

abaqus job=testverita interactive ask_delete=OFF cpus=$SLURM_NTASKS_PER_NODE
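Once the account line is edited, submission and monitoring look like this (the job id is whatever sbatch prints; the output file name follows the %x.o%j pattern from the header):

sbatch slurm-abaqus.sh      # prints: Submitted batch job <jobid>
squeue -u $USER             # check the queue status of your jobs
tail -f abaqus.o<jobid>     # follow the job output (%x.o%j -> abaqus.o<jobid>)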
Example script slurm-abaqus-gpu.sh
to run Abaqus on 1 node, 6 cores, 1 GPU:
#!/bin/bash
#SBATCH --job-name=abaqus_gpu
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=6
#SBATCH --gres=gpu:a100_80g:1
#SBATCH --mem=30G
#SBATCH --time=1-00:00:00
#SBATCH --partition=gpu
#SBATCH --qos=gpu
#
# Edit next line
#SBATCH --account=<account>

echo "$SLURM_JOB_NODELIST"

# Modules required by Abaqus (choose one of the following two possibilities)
#module load intel impi cuda abaqus
module load gnu openmpi cuda abaqus

ulimit -s unlimited
ulimit -s

abaqus job=testverita interactive ask_delete=OFF cpus=$SLURM_NTASKS_PER_NODE gpus=1
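To verify that the GPU was actually granted, you can add a check after the module load line; a small sketch (Slurm sets CUDA_VISIBLE_DEVICES for jobs requesting --gres=gpu:...):

echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
nvidia-smi    # should list the allocated A100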
Example script slurm-abaqus-fort.sh
to run Abaqus on 1 node, 2 cores, 0 GPUs with a Fortran subroutine:
#!/bin/bash
#SBATCH --job-name=abaqus_fort
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2
#SBATCH --mem=8G
#SBATCH --time=2-00:00:00
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#
# Edit next line
#SBATCH --account=<account>

echo "$SLURM_JOB_NODELIST"

# Modules required by Abaqus (choose one of the following two possibilities)
module load gnu intel/ps-xe-2017.2 impi/2017.2 abaqus
#module load gnu openmpi abaqus

ulimit -s unlimited
ulimit -s

abaqus \
  job="$SLURM_JOB_NAME" \
  input=<input_file> \
  user=<user_file> \
  scratch="$SCRATCH" \
  interactive \
  ask_delete=OFF \
  cpus=$SLURM_NTASKS_PER_NODE
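User subroutines are compiled at run time, which is why this script loads the Intel compiler and MPI modules instead of the default pair. A quick sanity check, assuming intel/ps-xe-2017.2 provides the ifort compiler:

module load gnu intel/ps-xe-2017.2 impi/2017.2 abaqus
which ifort    # the Fortran compiler Abaqus needs to build the user subroutine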
Submission example (one node, more than one task per node):
sbatch --account=G_ABAQUS --job-name=MODEL12-01 --partition=cpu --qos=cpu --ntasks-per-node=6 --mem=24G slurm-abaqus-fort.sh
Submission example (one node, one task per node):
sbatch --account=G_ABAQUS --job-name=MODEL12-02 --partition=vrt --qos=vrt --ntasks-per-node=1 --mem=16G slurm-abaqus-fort.sh
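Options passed on the sbatch command line override the corresponding #SBATCH lines in the script. To confirm what was actually allocated, inspect the job with scontrol (the job id is whatever sbatch printed):

scontrol show job <jobid> | grep -E 'NumCPUs|MinMemory|Partition'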
Example script slurm-abaqus-user.sh
to run Abaqus on 1 node, 2 cores, 0 GPUs with a user-provided subroutine:
#!/bin/bash
#SBATCH --job-name=abaqus_user
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2
#SBATCH --mem=8G
#SBATCH --time=2-00:00:00
#SBATCH --partition=cpu
#SBATCH --qos=cpu
#
# Edit next line
#SBATCH --account=<account>

echo "$SLURM_JOB_NODELIST"

# Modules required by Abaqus (choose one of the following two possibilities)
module load gnu intel/ps-xe-2017.2 openmpi abaqus
#module load gnu openmpi abaqus

ulimit -s unlimited
ulimit -s

abaqus \
  job="$SLURM_JOB_NAME" \
  ${abaqus_input:+input="$abaqus_input" }\
  ${abaqus_user:+user="$abaqus_user" }\
  scratch="$SCRATCH" \
  interactive \
  ask_delete=OFF \
  cpus=$SLURM_NTASKS_PER_NODE
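The ${abaqus_input:+...} and ${abaqus_user:+...} expansions insert the input= and user= options only when the corresponding variable is set and non-empty, so the same script also works without a user subroutine. A quick illustration in a plain shell:

# ${var:+word} expands to word only if var is set and non-empty
abaqus_input=MODEL12
unset abaqus_user
echo abaqus ${abaqus_input:+input="$abaqus_input" }${abaqus_user:+user="$abaqus_user" }interactive
# prints: abaqus input=MODEL12 interactive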
Submission example (one node, more than one task per node); the variables are exported first so that ${abaqus_input} expands on the sbatch command line and both variables are inherited by the job:
export abaqus_input=MODEL12 abaqus_user=UTIL56
sbatch --account=G_ABAQUS --job-name=${abaqus_input}-03 --partition=cpu --qos=cpu --ntasks-per-node=6 --mem=24G slurm-abaqus-user.sh
Submission example (one node, one task per node):
export abaqus_input=MODEL12 abaqus_user=UTIL56
sbatch --account=G_ABAQUS --job-name=${abaqus_input}-04 --partition=vrt --qos=vrt --ntasks-per-node=1 --mem=16G slurm-abaqus-user.sh
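After submission, standard output and error land in files named after the %x.o%j and %x.e%j patterns of the script header, i.e. <job-name>.o<jobid> and <job-name>.e<jobid>. For the last example above:

squeue -u $USER               # find the job id
tail -f MODEL12-04.o<jobid>   # follow standard output
tail -f MODEL12-04.e<jobid>   # watch for errors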