==== Nodes and cores ====
The requirements for nodes (full computers) and cores can be given using the parameters ''%%--%%nodes'' and ''%%--%%ntasks'' (or ''%%--%%ntasks-per-node'' for a fixed number of tasks on each node).
Note that if you only use ''%%--%%ntasks'' (without specifying ''%%--%%nodes''), Slurm is free to distribute the tasks over any number of nodes.
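As a minimal sketch (the node and task counts here are arbitrary examples, not recommended values), a request for one node with four tasks would look like this in the job script:
<code>
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
</code>
Requesting only ''%%--%%ntasks=4'' instead would leave the placement of those four tasks up to Slurm.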
==== Multinode jobs requiring faster networking ====

If your application uses MPI and may benefit from a high bandwidth (the amount of data transferred per second) and/or a low latency (the amount of time it takes for the first bit to arrive), you can send the job to the ''parallel'' partition by adding the following line to your job script:
| + | < | ||
| + | #SBATCH --partition=parallel | ||
| + | </ | ||
Since only limited resources are available in this partition, there are two important guidelines:
  - When using just a few cores, you might as well run your application on a single node
  - It would be wise to test the performance difference between a job running on the regular nodes and the Omni-Path nodes, since there may be more capacity available in the ''regular'' partition; a simple way to do this comparison is sketched below
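For example, assuming your job script is stored in a (hypothetical) file called ''jobscript.sh'' and that the default partition is indeed called ''regular'', you can submit the same script to both partitions and compare the reported run times:
<code>
# Submit to the regular nodes
sbatch --partition=regular jobscript.sh
# Submit the same script to the Omni-Path nodes
sbatch --partition=parallel jobscript.sh
</code>
A ''%%--%%partition'' option given on the ''sbatch'' command line overrides the value set inside the script itself, so the script does not have to be edited between the two runs.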
==== Memory ====
The following table gives an overview and description of other useful parameters that can be used:
^Parameter ^Description ^
|%%--%%job-name |Specify a name for the job, which will be shown in the job overview |
|%%--%%output |Specify a file in which the output of the job is to be stored; the string ''%j'' in the file name will be replaced by the job ID |
|%%--%%partition|Specify in which partition the job has to run |
| < | < | ||
| #SBATCH --job-name=my_first_slurm_job | #SBATCH --job-name=my_first_slurm_job | ||
| - | #SBATCH --mail-type=BEGIN, | ||
| - | #SBATCH --mail-user=some@user.com | ||
| #SBATCH --output=job-%j.log | #SBATCH --output=job-%j.log | ||
| #SBATCH --partition=short | #SBATCH --partition=short | ||
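As a usage sketch (the file name ''my_job.sh'' is just an example), a script with such a header is submitted with ''sbatch'', after which the chosen job name shows up in the job overview printed by ''squeue'':
<code>
sbatch my_job.sh
squeue -u $USER
</code>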
module purge
module load GROMACS/2021.5-foss-2021b
srun gmx_mpi mdrun
</code>
This script will ask for 2 nodes and 4 tasks per node. The maximum runtime is 2 days and 12 hours. The amount of memory available for the job is almost 4 GiB per node. Once the job is executed, it will first load the module for GROMACS 2021.5. To start a parallel (MPI) run, we use srun (instead of mpirun) to start all GROMACS processes on the allocated nodes.
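The ''#SBATCH'' header belonging to this script is not repeated here; as a sketch, a header matching the description above (the exact values are assumptions reconstructed from that description, not copied from the original script) could look like:
<code>
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --time=2-12:00:00
#SBATCH --mem=4000
</code>
Here ''%%--%%mem'' is interpreted in megabytes, so 4000 MB corresponds to just under 4 GiB per node.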