====== Interactive jobs ======
As described in [[peregrine:job_management:scheduling_system#starting_tasks|the section about srun on the page about job scripts]], srun can be used in your job script to launch tasks. However, you can also use srun on the login nodes to run tasks interactively on one of the compute nodes, without having to write a job script. srun accepts largely the same options as sbatch, i.e. the options you would otherwise put in your job script, allowing you to specify the requirements for the interactive job.
The following simple example demonstrates how this works; it just runs the command ''hostname'' on the allocated compute node:
<code>
p123456@login1:~ srun --ntasks=1 --time=00:00:10 --partition=short hostname
srun: job 56789 queued and waiting for resources
srun: job 56789 has been allocated resources
pg-node004
</code>
As the output shows, the command was executed on a compute node (pg-node004), and you can directly see what happens: the output of the command is printed to your screen instead of being saved to a file.
In the same way you can run any other application, and you can even interact with it if necessary. It is even possible to launch a shell on a compute node, in case you want to work interactively there:
<code>
p123456@login1:~ srun --ntasks=1 --time=01:00:00 --partition=regular --pty bash -i
srun: job 45678 queued and waiting for resources
srun: job 45678 has been allocated resources
p123456@pg-node004:~
</code>
As the last line of this output shows, you are now running a shell on a compute node and you can start working. Do note that the more time you request for your job, the longer you may have to wait before it starts. And since you do not know exactly when your job will start, this may not be very convenient.
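If you are waiting for an interactive job to be scheduled, you can ask Slurm for an estimated start time with ''squeue --start''. A sketch (the job ID shown is hypothetical; use the ID that srun reports for your job):

<code>
p123456@login1:~ squeue --start --jobs=45678
</code>

Note that the reported start time is only an estimate: it can move forward if other jobs finish early, or backward if higher-priority jobs enter the queue.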
**N.B.: interactive jobs currently don't (always) use the software stack built for the allocated nodes, you can work around this by first running ''unset SW_STACK_ARCH && module restore'' after the job has started.**