<?xml version="1.0" encoding="UTF-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="https://wiki.hpc.rug.nl/lib/exe/css.php?s=feed" type="text/css"?>
<rdf:RDF
    xmlns="http://purl.org/rss/1.0/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel rdf:about="https://wiki.hpc.rug.nl/feed.php">
        <title>CIT Research Documentation - habrok:advanced_job_management</title>
        <description>University of Groningen</description>
        <link>https://wiki.hpc.rug.nl/</link>
        <image rdf:resource="https://wiki.hpc.rug.nl/_media/wiki/logo.png" />
        <dc:date>2026-04-14T14:32:06+00:00</dc:date>
        <items>
            <rdf:Seq>
                <rdf:li rdf:resource="https://wiki.hpc.rug.nl/habrok/advanced_job_management/blas_threads?rev=1772204517&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.hpc.rug.nl/habrok/advanced_job_management/checking_job_performance?rev=1679490560&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.hpc.rug.nl/habrok/advanced_job_management/getting_information_about_jobs_nodes_partitions?rev=1679491131&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.hpc.rug.nl/habrok/advanced_job_management/interactive_jobs?rev=1736331510&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.hpc.rug.nl/habrok/advanced_job_management/job_arrays?rev=1774620592&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.hpc.rug.nl/habrok/advanced_job_management/job_dependencies?rev=1608631688&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.hpc.rug.nl/habrok/advanced_job_management/job_prioritization?rev=1705315718&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.hpc.rug.nl/habrok/advanced_job_management/many_file_jobs?rev=1752070981&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.hpc.rug.nl/habrok/advanced_job_management/passing_parameters_to_a_job_script?rev=1773668491&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.hpc.rug.nl/habrok/advanced_job_management/rtx_pro_6000_gpu_nodes?rev=1772187818&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.hpc.rug.nl/habrok/advanced_job_management/running_jobs_on_gpus?rev=1772187624&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.hpc.rug.nl/habrok/advanced_job_management/special_partitions?rev=1768385224&amp;do=diff"/>
                <rdf:li rdf:resource="https://wiki.hpc.rug.nl/habrok/advanced_job_management/start?rev=1681826336&amp;do=diff"/>
            </rdf:Seq>
        </items>
    </channel>
    <image rdf:about="https://wiki.hpc.rug.nl/_media/wiki/logo.png">
        <title>CIT Research Documentation</title>
        <link>https://wiki.hpc.rug.nl/</link>
        <url>https://wiki.hpc.rug.nl/_media/wiki/logo.png</url>
    </image>
    <item rdf:about="https://wiki.hpc.rug.nl/habrok/advanced_job_management/blas_threads?rev=1772204517&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2026-02-27T15:01:57+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Controlling the number of threads for OpenBLAS and Intel MKL</title>
        <link>https://wiki.hpc.rug.nl/habrok/advanced_job_management/blas_threads?rev=1772204517&amp;do=diff</link>
        <description>Controlling the number of threads for OpenBLAS and Intel MKL

The number of threads for the OpenBLAS and Intel MKL numerical libraries is set to 1 when their module is loaded.
This prevents their internal parallelization from interfering with the parallelization of the code that uses these libraries.</description>
    </item>
    <item rdf:about="https://wiki.hpc.rug.nl/habrok/advanced_job_management/checking_job_performance?rev=1679490560&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2023-03-22T13:09:20+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Checking job performance</title>
        <link>https://wiki.hpc.rug.nl/habrok/advanced_job_management/checking_job_performance?rev=1679490560&amp;do=diff</link>
        <description>Checking job performance

Sometimes it is useful to take a closer look at the performance of jobs using the tools that Linux provides.
We will not describe these tools in detail, but just mention a few.

Logging in to compute nodes

Using the ssh command line tool it is possible to log in to nodes where one of your jobs is running. Please note that you can only log in to these nodes; a connection to any other node will be refused.</description>
    </item>
    <item rdf:about="https://wiki.hpc.rug.nl/habrok/advanced_job_management/getting_information_about_jobs_nodes_partitions?rev=1679491131&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2023-03-22T13:18:51+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Getting information about jobs, nodes and partitions</title>
        <link>https://wiki.hpc.rug.nl/habrok/advanced_job_management/getting_information_about_jobs_nodes_partitions?rev=1679491131&amp;do=diff</link>
        <description>Getting information about jobs, nodes and partitions

There are multiple commands available that show information about jobs, nodes, partitions or accounting. All of these commands support many different options. We will give a short overview of some of the useful commands and options. For more information about any of the commands, click on its name to go to the documentation page on the SLURM website.</description>
    </item>
    <item rdf:about="https://wiki.hpc.rug.nl/habrok/advanced_job_management/interactive_jobs?rev=1736331510&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-01-08T10:18:30+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Interactive jobs</title>
        <link>https://wiki.hpc.rug.nl/habrok/advanced_job_management/interactive_jobs?rev=1736331510&amp;do=diff</link>
        <description>Interactive jobs

As described in the section about srun on the page about job scripts, srun can be used in your job script to launch tasks. However, you can also use srun on the login nodes to run tasks interactively on one of the compute nodes, without having to write a job script. srun accepts more or less the same arguments as sbatch, i.e. the same options you can put in your job script, allowing you to specify the requirements for this interactive job.</description>
    </item>
    <item rdf:about="https://wiki.hpc.rug.nl/habrok/advanced_job_management/job_arrays?rev=1774620592&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2026-03-27T14:09:52+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Job arrays</title>
        <link>https://wiki.hpc.rug.nl/habrok/advanced_job_management/job_arrays?rev=1774620592&amp;do=diff</link>
        <description>Job arrays

Job arrays allow you to easily submit a whole set of very similar jobs with a single job script. All jobs need to have the same resource requirements. The job array allows you to define a range of numbers; the length of this range determines how many jobs will be submitted. Furthermore, each job receives one of the numbers in this range through an environment variable.</description>
    </item>
    <item rdf:about="https://wiki.hpc.rug.nl/habrok/advanced_job_management/job_dependencies?rev=1608631688&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2020-12-22T10:08:08+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Job dependencies</title>
        <link>https://wiki.hpc.rug.nl/habrok/advanced_job_management/job_dependencies?rev=1608631688&amp;do=diff</link>
        <description>Job dependencies

SLURM allows you to define different kinds of dependencies between jobs, for instance to make sure that certain jobs will run only if another job succeeded/failed. The dependencies can be added to your job script using the following line:</description>
    </item>
    <item rdf:about="https://wiki.hpc.rug.nl/habrok/advanced_job_management/job_prioritization?rev=1705315718&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2024-01-15T10:48:38+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Job prioritization</title>
        <link>https://wiki.hpc.rug.nl/habrok/advanced_job_management/job_prioritization?rev=1705315718&amp;do=diff</link>
        <description>Job prioritization

The SLURM scheduler uses a priority-based scheduling method. For each submitted job a priority is calculated; how this is done can be found below. The waiting job with the highest priority will, in principle, start first, except when a smaller/shorter job can start without delaying a job with a higher priority. In order to get some insight into why your job is waiting and what its position in the queue is, several commands can be used. They can be found on this pag…</description>
    </item>
    <item rdf:about="https://wiki.hpc.rug.nl/habrok/advanced_job_management/many_file_jobs?rev=1752070981&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2025-07-09T14:23:01+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Many File Jobs</title>
        <link>https://wiki.hpc.rug.nl/habrok/advanced_job_management/many_file_jobs?rev=1752070981&amp;do=diff</link>
        <description>Many File Jobs

This page of the wiki is aimed at helping you reduce the number of files you work with if they number in the thousands. Jobs with many files put a lot of load on the cluster&#039;s I/O, slowing things down for everyone, including yourself. Since most jobs requiring many thousands of files fall within the field of data science and therefore make use of Python, most of the methods described here target that use case; however, at least one universal method will be explored as well.</description>
    </item>
    <item rdf:about="https://wiki.hpc.rug.nl/habrok/advanced_job_management/passing_parameters_to_a_job_script?rev=1773668491&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2026-03-16T13:41:31+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Passing parameters to a job script</title>
        <link>https://wiki.hpc.rug.nl/habrok/advanced_job_management/passing_parameters_to_a_job_script?rev=1773668491&amp;do=diff</link>
        <description>Passing parameters to a job script

It is possible to provide parameters to the sbatch command:


sbatch jobscript.sh parameter1 parameter2

When the job starts, it will start the jobscript.sh as “jobscript.sh parameter1 parameter2”. In your (Bash) script you can access the parameters in the usual way, e.g. with $1 and $2 to get the individual values or $* to get all parameter values.</description>
    </item>
    <item rdf:about="https://wiki.hpc.rug.nl/habrok/advanced_job_management/rtx_pro_6000_gpu_nodes?rev=1772187818&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2026-02-27T10:23:38+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Nvidia RTX Pro 6000 GPU Nodes</title>
        <link>https://wiki.hpc.rug.nl/habrok/advanced_job_management/rtx_pro_6000_gpu_nodes?rev=1772187818&amp;do=diff</link>
        <description>Nvidia RTX Pro 6000 GPU Nodes

On 2026-02-27 Hábrók was upgraded with three new GPU nodes equipped with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs with 96GB of VRAM. Each node has 8 GPUs, two Zen 5 AMD EPYC 9575F 64-Core CPUs for a total of 128 cores and approximately 1.5TB of DDR5 memory. This makes them particularly suitable for AI workloads, for example, inference and large language models.</description>
    </item>
    <item rdf:about="https://wiki.hpc.rug.nl/habrok/advanced_job_management/running_jobs_on_gpus?rev=1772187624&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2026-02-27T10:20:24+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Running jobs on GPUs</title>
        <link>https://wiki.hpc.rug.nl/habrok/advanced_job_management/running_jobs_on_gpus?rev=1772187624&amp;do=diff</link>
        <description>Running jobs on GPUs

If you want your job to make use of a special resource like a GPU, you will have to request it explicitly. This can be done using the Slurm option:


#SBATCH --gpus-per-node=n


Where n is the number of GPUs you want to use per node.</description>
    </item>
    <item rdf:about="https://wiki.hpc.rug.nl/habrok/advanced_job_management/special_partitions?rev=1768385224&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2026-01-14T10:07:04+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Group specific partitions</title>
        <link>https://wiki.hpc.rug.nl/habrok/advanced_job_management/special_partitions?rev=1768385224&amp;do=diff</link>
        <description>Group specific partitions

Group specific nodes

Some groups and institutes have bought their own extensions of Peregrine and Hábrók. These nodes have been put into special partitions. They share the same storage and software environment as all the other nodes.</description>
    </item>
    <item rdf:about="https://wiki.hpc.rug.nl/habrok/advanced_job_management/start?rev=1681826336&amp;do=diff">
        <dc:format>text/html</dc:format>
        <dc:date>2023-04-18T13:58:56+00:00</dc:date>
        <dc:creator>Anonymous (anonymous@undisclosed.example.com)</dc:creator>
        <title>Advanced Job Management</title>
        <link>https://wiki.hpc.rug.nl/habrok/advanced_job_management/start?rev=1681826336&amp;do=diff</link>
        <description>Advanced Job Management
advanced_job_management index</description>
    </item>
</rdf:RDF>
