Slurm Commands
Below are some commonly used Slurm commands with examples of how to use them. For more information about the many options and output formats, see the man page for each command.
| Command | Description | Example |
|---|---|---|
| sinfo | List all partitions/queues and limits | sinfo |
| squeue | List all queued jobs | squeue |
| | - list my jobs | squeue -u $USER |
| | - show info on a particular job | squeue -j &lt;jobid&gt; |
| | - show estimated start time for a job | squeue -j &lt;jobid&gt; --start |
| | - list all jobs using a specific account | squeue -A myproj_id |
| mybalance | List summary of core-hour balances of all my accounts | mybalance |
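The squeue entries above combine naturally with its output-format options. A minimal sketch, reusing the myproj_id placeholder account from the table; the format specifiers (%i job ID, %P partition, %j name, %T state, %M elapsed time, %R reason/nodelist) are standard squeue options:

```bash
# List only my jobs with a compact custom column layout.
squeue -u $USER -o "%.10i %.9P %.20j %.8T %.10M %R"

# Count queued/running jobs charged to the project account used above
# (-h suppresses the header line so wc -l counts only job rows).
squeue -A myproj_id -h | wc -l
```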
| Command | Description | Example |
|---|---|---|
| sbatch | Submit a job script | sbatch myjob.sh |
| | - submit a script to use 5 nodes | sbatch -N 5 myjob.sh |
| | - submit a job that depends on the successful completion of other jobs | sbatch -d afterok:&lt;jobid&gt;[:&lt;jobid&gt;...] myjob.sh |
| scancel | Cancel a job | scancel &lt;jobid&gt; |
| sattach | Attach a terminal to the standard output of a running job (job step 0) | sattach &lt;jobid&gt;.0 |
| scontrol | - prevent a queued job from running | scontrol hold &lt;jobid&gt; |
| | - release a held job | scontrol release &lt;jobid&gt; |
| | - display detailed info about a specific job | scontrol show jobid &lt;jobid&gt; |
| srun | Run a parallel job (typically within an allocation created by a job script) | |
| | - run a 2-node interactive job for 30 minutes | srun -N 2 -A myproj_id -t 00:30:00 -p DevQ --pty bash |
| | - run an MPI application within a Slurm submit script, using all cores allocated on all nodes (here assuming 40 cores per node) | srun -n $(($SLURM_JOB_NUM_NODES * 40)) ./my_mpi_app |
| | - run a hybrid MPI/OpenMP application using 1 MPI process per node | srun -n $SLURM_JOB_NUM_NODES --ntasks-per-node=1 ./my_hybrid_mpi_app |
| | - run a command within an already running job, e.g. to check CPU/memory usage (see the sketch below for how these pieces fit together) | srun --jobid &lt;jobid&gt; ps u |
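To tie the sbatch and srun rows together, here is a minimal sketch of what the myjob.sh script referenced above might contain. The #SBATCH directives mirror the flags in the table; myproj_id, DevQ, my_mpi_app and the 40-cores-per-node figure are the placeholders used in the examples, not fixed values for your system:

```bash
#!/bin/sh
# Minimal sketch of myjob.sh (placeholders: myproj_id, DevQ, my_mpi_app).
#SBATCH -N 2                # request 2 nodes
#SBATCH -t 00:30:00         # 30-minute walltime limit
#SBATCH -A myproj_id        # project account to charge
#SBATCH -p DevQ             # partition/queue to submit to

# Launch the MPI application on all allocated cores,
# assuming 40 cores per node as in the srun example above.
srun -n $(($SLURM_JOB_NUM_NODES * 40)) ./my_mpi_app
```

The dependency example pairs well with sbatch's --parsable option, which prints just the job ID of the submitted job, so one stage can be chained to the next without copying IDs by hand (stage1.sh and stage2.sh are hypothetical script names):

```bash
# Submit a two-stage pipeline: stage2.sh runs only if stage1.sh
# completes successfully.
jobid=$(sbatch --parsable stage1.sh)
sbatch -d afterok:$jobid stage2.sh
```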