If your application is using MPI and may benefit from a high bandwidth (the amount of data transferred per second) and/or low latency (the amount of time it takes for the first bit to arrive) you can send the job to the ''parallel'' partition:
<code>
#SBATCH --partition=parallel
</code>
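For illustration only (the node and task counts below are arbitrary examples, not recommendations, and ''my_mpi_program'' is a placeholder for your own executable), a job script for this partition could look like:
<code>
#!/bin/bash
#SBATCH --job-name=mpi_example
#SBATCH --partition=parallel
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --time=01:00:00

# srun starts one MPI rank per requested task on the allocated nodes
# (my_mpi_program is a placeholder for your own MPI executable)
srun ./my_mpi_program
</code>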
Since there are only limited resources available in this partition, there are two important guidelines:
The following table gives an overview and description of other useful parameters that can be used:
^Parameter ^Description ^
|%%--%%job-name |Specify a name for the job, which will be shown in the job overview |
|%%--%%output |Specify a file in which the output of the job will be stored |
|%%--%%partition|Specify in which partition the job has to run |
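These options can be set in the job script itself as ''#SBATCH'' lines, as in the example below, but ''sbatch'' also accepts them on the command line, where they take precedence over the values in the script. A hypothetical invocation (''jobscript.sh'' is just a placeholder name):
<code>
# Command-line options override the matching #SBATCH lines inside jobscript.sh
sbatch --job-name=test_run --partition=short jobscript.sh
</code>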
<code>
#SBATCH --job-name=my_first_slurm_job
#SBATCH --output=job-%j.log
#SBATCH --partition=short
</code>

<code>
module purge
module load GROMACS/2021.5-foss-2021b
srun gmx_mpi mdrun
</code>
This script will ask for 2 nodes and 4 tasks per node. The maximum runtime is 2 days and 12 hours. The amount of memory available for the job is almost 4 GiB per node. Once the job is executed, it will first load the module for GROMACS 2021.5. To start a parallel (MPI) run, we use srun (instead of mpirun) to start all GROMACS processes on the allocated nodes.
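The resource requests described here would correspond to ''#SBATCH'' directives along the following lines (a sketch of what such a request could look like, not the literal header of the script above; ''%%--%%mem'' is given in MB by default, so 4000 MB is just under 4 GiB per node):
<code>
#SBATCH --nodes=2              # 2 nodes
#SBATCH --ntasks-per-node=4    # 4 (MPI) tasks per node
#SBATCH --time=2-12:00:00      # 2 days and 12 hours
#SBATCH --mem=4000             # almost 4 GiB of memory per node
</code>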