Running jobs
Submitting jobs
To submit an existing job script, use the sbatch command. The simplest invocation of sbatch is:
sbatch jobscript
This will submit the file named jobscript, containing the #SBATCH parameters and the commands to run, to the system for execution.
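As an illustration, a minimal job script might look like the sketch below. The job name, resource values, and the program being run are placeholders and should be adapted to your own situation:

```shell
#!/bin/bash
#SBATCH --job-name=my_job        # name shown in the queue (placeholder)
#SBATCH --time=00:10:00          # wall-clock time limit (hh:mm:ss)
#SBATCH --ntasks=1               # number of tasks to run
#SBATCH --mem=1G                 # memory required for the job

# The commands to run; "my_program" is a placeholder.
./my_program
```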
You can also pass extra arguments to sbatch. For instance, to send the job to a specific partition without editing your job script, use the same kind of options on the command line:
sbatch --partition=gpu jobscript
Note that if an option is defined both in your job script and as an sbatch command-line argument, the command-line value takes precedence. The last example will therefore always send the job to the "gpu" partition, regardless of what you may have defined in the job script itself. For more detailed (and complete) examples, please look at the Examples/templates section.
Job environment
Jobs always start in the directory from which they were submitted. Do note that your environment (e.g. loaded modules) will not be transferred to the job, unlike on the Peregrine cluster, where the submission environment was inherited. This means that you should always load the required modules in your job script.
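As a sketch, a job script that loads its own modules could look like the following; the module name and version are placeholders, not actual recommendations:

```shell
#!/bin/bash
#SBATCH --time=00:10:00
#SBATCH --ntasks=1

# Load the required modules explicitly; do not rely on the
# environment that was active when the job was submitted.
# "Python/3.10.4" is a placeholder module name.
module purge
module load Python/3.10.4

# "my_script.py" is a placeholder for your own program.
python my_script.py
```

Starting with module purge ensures the job begins from a clean module environment, which keeps the script reproducible regardless of what was loaded at submission time.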
If you really do want to change this behavior (which we do not recommend, as it breaks the reproducibility of your scripts), you can add the following to your job script:
#SBATCH --export=ALL
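In context, such a job script could look like the sketch below; with --export=ALL the job inherits whatever environment (including loaded modules) was active at submission time. The resource values and program name are placeholders:

```shell
#!/bin/bash
#SBATCH --time=00:10:00
#SBATCH --ntasks=1
#SBATCH --export=ALL   # inherit the submission-time environment (not recommended)

# Because the environment is inherited, modules loaded before running
# sbatch are available here without explicit "module load" lines.
./my_program
```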