Slurm

Slurm is an open-source job scheduler for Linux and Unix-like kernels.

SLURM Directives

SLURM directives are job options that constrain the job to the conditions specified. Directives are identified by the syntax #SBATCH <flag>. Commonly used flags are listed below.

Flags

Resource   Syntax                       Example                       Description
Account    --account=<account>          --account=slurmgeneral        entity to which resources are charged (see Compute Resources for available accounts)
Partition  --partition=<partition>      --partition=slurm-general-01  where job resources are allocated (see Compute Resources for available partitions)
Job Name   --job-name=<name>            --job-name=testprogram        name of the job to be queued
Tasks      --ntasks=<number>            --ntasks=2                    number of tasks, useful for commands to be run in parallel
Memory     --mem=<size>[units]          --mem=1gb                     memory to be allocated for the job
Output     --output=<filename>          --output=testprogram.log      name of the job output file
Time       --time=<hh:mm:ss>            --time=01:00:00               time limit for the job
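
Directives must appear at the top of a batch script, before the first executable command; sbatch stops reading #SBATCH lines once the script proper begins. A minimal header sketch (the flag values are illustrative):

directive placement example

#!/bin/bash -l
#SBATCH --job-name=testprogram    # directives are read by sbatch before the job starts
#SBATCH --time=01:00:00           # one-hour time limit

echo "started"                    # first executable command; #SBATCH lines after this point are ignored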

SRUN

srun is used to submit jobs for execution in real time; it is also used to create job steps.


srun example

srun --partition=slurm-general-01 --account=slurmgeneral --pty /bin/bash     # shell on a compute node, specifying the partition
                                                                             # and account (applicable if assigned multiple accounts)
srun --pty /bin/bash                                                         # shell on a compute node; the default partition and
                                                                             # account are used when not specified. This is not
                                                                             # recommended, as submitted jobs can be left in a
                                                                             # pending state due to incorrect permissions
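
Within a batch allocation, srun launches job steps. A minimal sketch of two steps running in parallel (hostname stands in for a real command):

job step example

#!/bin/bash -l
#SBATCH --ntasks=2           # allocate two tasks
srun --ntasks=1 hostname &   # first job step, using one task
srun --ntasks=1 hostname &   # second job step, runs in parallel
wait                         # block until both steps finish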

SBATCH

sbatch is a command used to submit jobs to SLURM via batch scripts.


batch script example

#!/bin/bash -l                             # login shell (required for lmod)
#SBATCH --job-name=testprogram             # job name
#SBATCH --partition=slurm-general-01       # partition to run the job on
#SBATCH --account=slurmgeneral             # only applicable if the user is assigned multiple accounts
#SBATCH --ntasks=1                         # number of tasks to run in parallel
#SBATCH --mem=1gb                          # request 1gb of memory
#SBATCH --output=testprogram.log           # output and error log

date
sleep 10
module use /mnt/lmod_modules/Linux/
module load miniconda3
python someProgram.py                      # run the program (someProgram.py is a placeholder)
date


submitting a job using sbatch

sbatch myprogram.sh                                                      # queue a job using a batch script
sbatch --partition=slurm-general-01 --account=slurmgeneral myprogram.sh  # specify the partition and account on the command line
                                                                         # when they are not set by SLURM directives in the script
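
After submission, the standard SLURM client commands can check on or cancel the job (the job ID 12345 below is illustrative):

checking and cancelling a submitted job

squeue -u $USER    # list your pending and running jobs
scancel 12345      # cancel a job by its job ID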

Compute Resources

The ODU CS department HPC cluster comprises multiple partitions where users can submit jobs. Each partition can only be accessed by users who are assigned to the partition's respective account; not all partitions are accessible to all users.

Resources

Cluster        Partition         Account
slurm-cluster  slurm-general-01  slurmgeneral
slurm-cluster  slurm-general-02  slurmgeneral
slurm-cluster  haoresearch       shaoresearch
slurm-cluster  lusiliresearch    lliresearch
slurm-cluster  wangresearch      fwangresearch
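
To confirm a partition name before submitting, sinfo lists the cluster's partitions and their node states:

viewing partitions

sinfo    # list partitions and node states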

Troubleshooting

How to view assigned accounts

sacctmgr show association -p user=$username    # substitute your username for $username; -p prints parsable output
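
If a submitted job is stuck in the pending state (see the srun note above), the scheduler's reason can be inspected (job ID 12345 is illustrative):

How to check why a job is pending

squeue -j 12345 -l         # long output includes the job state
scontrol show job 12345    # the Reason field explains why the job is pending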