The Slurm Workload Manager, or more simply Slurm, is what Research Computing uses for scheduling jobs on our cluster SPORC and the Ocho. Slurm makes allocating resources and keeping tabs on the progress of your jobs easy. This documentation will cover some of the basic commands you will need to know to start running your jobs.
To run jobs, you need to connect to sporcsubmit.rc.rit.edu using either SSH or FastX.
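For example, replacing abc1234 with your own RIT username, an SSH connection from a terminal looks like this:

ssh abc1234@sporcsubmit.rc.rit.edu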
sinfo
Reports the state of the partitions and nodes managed by Slurm.
[abc1234@sporcsubmit ~]$ sinfo
PARTITION    AVAIL  TIMELIMIT   NODES  STATE  NODELIST
tier1        up     10-00:00:0      1  down*  skl-a-08
tier1        up     10-00:00:0      1  mix    skl-a-60
tier1        up     10-00:00:0     12  alloc  skl-a-[01-04,07,09-15]
tier1        up     10-00:00:0     20  idle   skl-a-[05-06,16-32,61]
tier2        up     10-00:00:0      1  down*  skl-a-08
...
onboard      up     10-00:00:0     27  idle   skl-a-[33-59]
interactive  up     2-00:00:00      1  mix    theocho

- PARTITION: the name of the partition
- AVAIL: whether the partition is up or down
- TIMELIMIT: the maximum length a job will run in the format Days-Hours:Minutes:Seconds
- NODES: the number of nodes of that configuration
- STATE: down* if jobs cannot be run on the node, idle if it is available for jobs, alloc if all the CPUs in the partition are allocated to jobs, or mix if some CPUs on the nodes are allocated and others are idle.
- NODELIST: specific nodes associated with that partition.
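A couple of quick usage notes (these are standard sinfo options, not from the original examples): you can limit the output to a single partition with -p, or get a per-node view with -N -l:

[abc1234@sporcsubmit ~]$ sinfo -p onboard
[abc1234@sporcsubmit ~]$ sinfo -N -l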
sbatch
Submits a script to Slurm so a job can be scheduled. A job will wait in the pending state until the requested resources are available.
Every script you submit to Slurm through sbatch should have the following options specified in the file:
#SBATCH -J <jobName>
Sets the name of the job.
#SBATCH -t Days-Hours:Minutes:Seconds
Sets the time limit for the job. Other acceptable time formats include:
- Minutes
- Minutes:Seconds
- Hours:Minutes:Seconds
- Days-Hours
- Days-Hours:Minutes
#SBATCH -p <partition>
Specifies which partition to run your job on. Choices include:
- tier1
- tier2
- tier3
- onboard
- debug
Run my-accounts to see which partitions you can run jobs on.
#SBATCH -A <accountName>
Specifies which account the job's resource usage is charged to.
#SBATCH --mem=<size[units]>
Sets the amount of memory the job requests. Units can be given as m for MB (the default), g for GB, or t for TB.
#SBATCH -o <filename.o>
File that the job's standard output is written to.
#SBATCH -e <filename.e>
File that the job's standard error is written to.
#SBATCH --mail-user=<email>
Email address that job notifications are sent to.
#SBATCH --mail-type=<type>
Which job events trigger an email: BEGIN, END, FAIL, or ALL.
#SBATCH -n <number>
Number of tasks to launch, used for MPI jobs.
#SBATCH -c <ncpus>
Number of CPUs to allocate per task.
#SBATCH --gres=gpu[:type:count]
Requests GPUs for the job, optionally of a specific type and count.
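Putting several of these options together, here is a hedged sketch of a job header that also requests email notifications and a single GPU (whether GPUs are available, and in which partition, is cluster-specific; the account, partition, and email placeholders must be filled in, for example using my-accounts):

#!/bin/bash -l
#SBATCH -J gpu_test
#SBATCH -o gpu_test.o
#SBATCH -e gpu_test.e
#SBATCH -t 0-4:0:0
#SBATCH -A <account_name>
#SBATCH -p <partition>
#SBATCH -c 1
#SBATCH --mem=8g
#SBATCH --gres=gpu:1
#SBATCH --mail-user=<email>
#SBATCH --mail-type=ALL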
Example Bash Script
The following is the example script slurm-single-core.sh. This can be found by running grab-examples when you log into SPORC.
#!/bin/bash -l
# NOTE the -l flag!
# This is an example job file for a single core CPU bound program.
# Note that all of the following statements below that begin
# with #SBATCH are actually commands to the SLURM scheduler.
# Please copy this file to your home directory and modify it
# to suit your needs.
#
# If you need any help, please email rc-help@rit.edu
#
# Name of the job - You'll probably want to customize this.
#SBATCH -J test
# Standard out and Standard Error output files
#SBATCH -o test.o
#SBATCH -e test.e
# To send emails, set the address below and remove one of the '#' signs.
##SBATCH --mail-user=<email>
# notify on state change: BEGIN, END, FAIL, OR ALL
# 5 days is the run time MAX, anything over will be KILLED unless you talk with RC
# Request 4 days and 5 hours
#SBATCH -t 4-5:0:0
# Put the job in the appropriate partition matching the account and request one core
#SBATCH -A <account_name> -p <onboard, tier1, tier2, tier3> -c 1
# Job memory requirements in MB=m (default), GB=g, or TB=t
#SBATCH --mem=3g
#
# Your job script goes below this line.
#
echo "(${HOSTNAME}) sleeping for 1 minute to simulate work (ish)"
sleep 60
echo "(${HOSTNAME}) Ahhh, alarm clock!"
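As an aside that is not part of the example script above, Slurm also exports environment variables such as SLURM_JOB_ID, SLURM_JOB_NODELIST, and SLURM_CPUS_ON_NODE into the job's environment; lines like the following could be added below the "Your job script goes below this line" comment to record where and with what resources the job ran:

# Optional additions (not in the original example): log the job ID, node list, and CPU count
echo "Job ${SLURM_JOB_ID} running on ${SLURM_JOB_NODELIST}"
echo "CPUs allocated on this node: ${SLURM_CPUS_ON_NODE}"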
Running sbatch
[abc1234@sporcsubmit ~]$ sbatch slurm-mpi.sh
Submitted batch job 2914

- If no filename is specified, sbatch will read the script from standard input
- The number after "job" is the job_id
- See squeue and sacct for how to check the progress of the job
See Using the Cluster - Advanced Usage for topics such as loops and dependent jobs. Some documentation will also give you example bash scripts for your specific program.
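As a small taste of dependent jobs (covered in detail on the Advanced Usage page), sbatch's standard --dependency option can chain submissions; the script name and job ID below are placeholders:

[abc1234@sporcsubmit ~]$ sbatch --dependency=afterok:<job_id> second_step.sh

Here second_step.sh will stay pending until the job with <job_id> finishes successfully.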
srun
srun is used for jobs that require MPI. It schedules your job on the Slurm scheduler, similar to sbatch. To use it, create an sbatch file like the example above and add srun ./<mpi_program> below the sbatch commands. Then run the sbatch file as you normally would.
Small srun Example
#!/bin/bash -l
# NOTE the -l flag!
# This is an example job file for a multi-core MPI job.
# If you need any help, please email rc-help@rit.edu
#
# Name of the job
#SBATCH -J mpi_test
# Standard out and Standard Error output files
#SBATCH -o mpi_test.o
#SBATCH -e mpi_test.e
# Put the job in the appropriate partition matching the account and request FOUR cores
#SBATCH -A <account_name> -p <onboard, tier1, tier2, tier3> -n 4
# Job memory requirements in MB=m (default), GB=g, or TB=t
#SBATCH --mem=3g
#
# Your job script goes below this line.
#
srun ./mpi_program
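As a quick sanity check that is not part of the example above, srun launches one copy of the given command per requested task, so replacing the MPI program with a trivial command shows how many tasks you actually received:

# With -n 4 above, this prints the node name four times, once per task
srun hostname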
sinteractive
If you need user interaction or are only running something once, run `sinteractive`. It will ask you for the resources you require and then connect you to the scheduled node. If you don't know what that entails, just try it. Be sure to end your sinteractive session by running exit when you're done; otherwise you will be holding resources that other users could be using. For the full process, see our documentation.
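For background only (this is not the documented RC workflow): sinteractive is a site-provided wrapper, and plain Slurm can start an interactive shell with srun's --pty option, which is roughly what such wrappers do under the hood:

srun -p interactive -t 1:00:00 --mem=2g --pty bash -l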
squeue
Lists the state of all jobs being run or scheduled to run.
[abc1234@sporcsubmit ~]$ squeue
 JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
2714_1     tier3    myjob  abc1234 PD       0:00      1 (JobHeldAdmin)
2714_2     tier3    myjob  abc1234 PD       0:00      1 (JobHeldAdmin)
...
   384     tier1  new_job  def5678  R 2-09:14:40      1 skl-a-18
  1492 interacti _interac  aaa0000  R    1:24:23      1 theocho

- JOBID: number id associated with the job
- PARTITION: name of the partition running the job
- NAME: name of the job run with sbatch or sinteractive
- USER: who submitted the job
- ST: state of the job, PD for pending, R for running
- TIME: how long the job has been running in the format Days-Hours:Minutes:Seconds
- NODES: number of nodes allocated to the job
- NODELIST(REASON): either the name of the node running the job or the reason the job is not running, such as JobHeldAdmin (job is prevented from running by the administrator). Other reasons and their explanations can be found in the official Slurm documentation for squeue.
- Use squeue -u username to view only the jobs from a specific user
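A couple of handy variations using standard squeue and watch options (not specific to this page):

[abc1234@sporcsubmit ~]$ squeue -u abc1234 -t PENDING
[abc1234@sporcsubmit ~]$ watch -n 30 squeue -u abc1234

The first lists only your pending jobs; the second refreshes the listing every 30 seconds.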
scancel
Signals or cancels a job. One or more jobs separated by spaces may be specified.
[abc1234@sporcsubmit ~]$ scancel job_id[_array_id]
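scancel also accepts standard filters, which can be convenient but should be used with care; for example, to cancel every job you own or every job with a given name:

[abc1234@sporcsubmit ~]$ scancel -u abc1234
[abc1234@sporcsubmit ~]$ scancel -n myjob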
sacct
Lists the jobs that are running or have been run.
[abc1234@sporcsubmit ~]$ sacct
       JobID   JobName  Partition    Account  AllocCPUS      State ExitCode
------------ --------- ---------- ---------- ---------- ---------- --------
2912         job_tests      tier3 job_tester          2  COMPLETED      0:0
2912.batch       batch            job_tester          2  COMPLETED      0:0
2912.extern     extern            job_tester          2  COMPLETED      0:0
2913             jobs2      tier3 job_tester          1     FAILED      1:0
2913.batch       batch            job_tester          1     FAILED      1:0
2913.extern     extern            job_tester          1  COMPLETED      0:0

- sacct -j <jobID> will display only the one or more jobs listed
- sacct -A <accountName> will display only the jobs run by one or more comma-separated accounts
- Failed jobs will have an exit code other than 0. 1 is used for general failures. Some exit codes have special meanings which can be looked up online.
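The columns shown can also be customized with sacct's standard --format option; the field selection below is just an example:

[abc1234@sporcsubmit ~]$ sacct -j 2912 --format=JobID,JobName,Elapsed,MaxRSS,State,ExitCode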
my-accounts
Although not a part of Slurm, my-accounts allows you to see all the accounts associated with your username, which is helpful when you want to charge resource allocation to a certain account.
[abc1234@sporcsubmit ~]$ my-accounts
  Account Name Expired QOS       Allowed Partitions
- ------------ ------- --------- ------------------
* my_acct      false   qos_tier3 tier3,debug,interactive

If there are any further questions, or there is an issue with the documentation, please contact rc-help@rit.edu for additional assistance.

This wiki page is deprecated. You can find this documentation on our new documentation site: https://research-computing.git-pages.rit.edu/docs/basic_slurm_commands.html