Abaqus

Before you run your batch script, you may execute

newgrp G_ABAQUS

Your current working directory is now /hpc/group/G_ABAQUS. Create and use your own sub-directory there.
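For example (the sub-directory name is purely illustrative):

mkdir -p mymodel
cd mymodel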

Load the Abaqus module

module load abaqus

and submit from this new shell.

Command line options

  • job: job name
  • input: input file
  • user: user-provided source file or object file
  • scratch: full path name of the directory to be used for scratch files. The default value on Linux is the value of the $TMPDIR environment variable or /tmp if $TMPDIR is not defined. During the analysis a subdirectory will be created under this directory to hold the analysis scratch files. The name of the subdirectory is constructed from the user's user name, the job ID, and the job's process number. The subdirectory and its contents are deleted upon completion of the analysis.
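For instance, a manually composed command line combining these options might look like the following (the file names are placeholders and the scratch path is just an example):

abaqus job=myjob input=myjob.inp user=mysub.f scratch="$TMPDIR"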

Job Abaqus MPI

Example script slurm-abaqus.sh to run Abaqus on 1 node, 28 cores, 0 GPUs:

slurm-abaqus.sh
#!/bin/bash
#SBATCH --job-name=abaqus
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
#SBATCH --mem=94G
#SBATCH --time=3-00:00:00
#SBATCH --partition=cpu
#
# Edit next line
#SBATCH --account=<account>
 
echo "$SLURM_JOB_NODELIST"
 
# Modules required by Abaqus (choose one of the following two possibilities)
#module load intel impi abaqus
module load gnu openmpi abaqus
 
ulimit -s unlimited
ulimit -s
 
abaqus job=testverita interactive ask_delete=OFF cpus=$SLURM_NTASKS_PER_NODE
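After editing the account line, the script can be submitted from your sub-directory and monitored with the usual SLURM commands, for example:

sbatch slurm-abaqus.sh
squeue -u $USER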

Job Abaqus MPI with GPU

Example script slurm-abaqus-gpu.sh to run Abaqus on 1 node, 6 cores, 1 GPU:

slurm-abaqus-gpu.sh
#!/bin/bash
#SBATCH --job-name=abaqus_gpu
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=6
#SBATCH --gres=gpu:1
#SBATCH --mem=30G
#SBATCH --time=24:00:00
#SBATCH --partition=gpu
#
# Edit next line
#SBATCH --account=<account> 
 
echo "$SLURM_JOB_NODELIST"
 
# Modules required by Abaqus (choose one of the following two possibilities)
#module load intel impi cuda abaqus
module load gnu openmpi cuda abaqus
 
ulimit -s unlimited
ulimit -s
 
abaqus job=testverita interactive ask_delete=OFF cpus=$SLURM_NTASKS_PER_NODE gpus=1
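If you want to check which GPU was actually assigned to the job, a couple of diagnostic lines can be added before the abaqus command (assuming nvidia-smi is available on the GPU nodes):

nvidia-smi
echo "$CUDA_VISIBLE_DEVICES"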

Job Abaqus MPI with Fortran

Example script slurm-abaqus-fort.sh to run Abaqus on 1 node, 2 cores, 0 GPUs with a Fortran subroutine:

slurm-abaqus-fort.sh
#!/bin/bash
#SBATCH --job-name=abaqus_fort
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2
#SBATCH --mem=8G
#SBATCH --time=2-00:00:00
#SBATCH --partition=cpu
#
# Edit next line
#SBATCH --account=<account>
 
echo "$SLURM_JOB_NODELIST"
 
# Modules required by Abaqus (choose one of the following two possibilities)
module load gnu intel/ps-xe-2017.2 impi/2017.2 abaqus
#module load gnu openmpi abaqus
 
ulimit -s unlimited
ulimit -s
 
abaqus \
job="$SLURM_JOB_NAME" \
input=<input_file> \
user=<user_file> \
scratch="$SCRATCH" \
interactive \
ask_delete=OFF \
cpus=$SLURM_NTASKS_PER_NODE
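With the placeholders filled in, the command might read as follows (MODEL12.inp and UTIL56.f are purely illustrative file names):

abaqus \
job="$SLURM_JOB_NAME" \
input=MODEL12.inp \
user=UTIL56.f \
scratch="$SCRATCH" \
interactive \
ask_delete=OFF \
cpus=$SLURM_NTASKS_PER_NODE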

Submission example (one node, more than one task per node):

sbatch --account=G_ABAQUS --job-name=MODEL12-01 --partition=bdw --ntasks-per-node=6 --mem=24G slurm-abaqus-fort.sh

Submission example (one node, one task per node):

sbatch --account=G_ABAQUS --job-name=MODEL12-02 --partition=vrt --ntasks-per-node=1 --mem=16G slurm-abaqus-fort.sh

Job Abaqus MPI with user subroutine

Example script slurm-abaqus-user.sh to run Abaqus on 1 node, 2 cores, 0 GPUs with a user-provided subroutine:

slurm-abaqus-user.sh
#!/bin/bash
#SBATCH --job-name=abaqus_user
#SBATCH --output=%x.o%j
#SBATCH --error=%x.e%j
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2
#SBATCH --mem=8G
#SBATCH --time=2-00:00:00
#SBATCH --partition=cpu
#
# Edit next line
#SBATCH --account=<account>
 
echo "$SLURM_JOB_NODELIST"
 
# Modules required by Abaqus (choose one of the following two possibilities)
module load gnu intel/ps-xe-2017.2 openmpi abaqus
#module load gnu openmpi abaqus
 
ulimit -s unlimited
ulimit -s
 
abaqus \
job="$SLURM_JOB_NAME" \
${abaqus_input:+input="$abaqus_input" }\
${abaqus_user:+user="$abaqus_user" }\
scratch="$SCRATCH" \
interactive \
ask_delete=OFF \
cpus=$SLURM_NTASKS_PER_NODE

Submission example (one node, more than one task per node):

export abaqus_input=MODEL12
export abaqus_user=UTIL56
sbatch --account=G_ABAQUS --job-name=${abaqus_input}-03 --partition=cpu --ntasks-per-node=6 --mem=24G slurm-abaqus-user.sh

Submission example (one node, one task per node):

export abaqus_input=MODEL12
export abaqus_user=UTIL56
sbatch --account=G_ABAQUS --job-name=${abaqus_input}-04 --partition=vrt --ntasks-per-node=1 --mem=16G slurm-abaqus-user.sh
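Alternatively, the two variables can be passed directly on the sbatch command line with --export instead of exporting them in the shell (the values and the job name suffix below are the same illustrative examples used above):

sbatch --export=ALL,abaqus_input=MODEL12,abaqus_user=UTIL56 --account=G_ABAQUS --job-name=MODEL12-05 --partition=cpu --ntasks-per-node=6 --mem=24G slurm-abaqus-user.sh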