====== Hábrók Changelog ======
  
This page records major changes to Hábrók, which are mostly carried out during scheduled maintenance periods. This changelog has been kept since August 13th, 2025, although changes were also made during earlier scheduled maintenance periods and on demand as necessary.

===== 2026-02-27 =====

  * New GPU nodes have been made available. See [[..:advanced_job_management:rtx_pro_6000_gpu_nodes]] for details.
  * The default number of threads for OpenBLAS and MKL has been set to 1. This prevents their automatic parallelization from interfering with the parallelization of the software that uses these libraries. See [[..:advanced_job_management:blas_threads]] for details; a sketch of how to override the default is shown below.
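
As a minimal sketch (not taken from the official Hábrók documentation): a job that relies on BLAS-level parallelism can override the new default of 1 by exporting the standard OpenBLAS and MKL environment variables; the thread count and the application name below are placeholders.

<code bash>
#!/bin/bash
#SBATCH --job-name=blas_threads_example
#SBATCH --cpus-per-task=4
#SBATCH --time=00:10:00

# Override the cluster default of 1 BLAS thread (example values only).
# OPENBLAS_NUM_THREADS and MKL_NUM_THREADS are the standard environment
# variables read by OpenBLAS and MKL respectively.
export OPENBLAS_NUM_THREADS=${SLURM_CPUS_PER_TASK}
export MKL_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# Placeholder program name; replace with the actual application.
srun ./my_application
</code>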

===== 2026-02-18 =====

  * The [[..:connecting_to_the_system:web_portal|]] has been updated from Open OnDemand v4.0.1 to Open OnDemand v4.1.0.

===== 2026-01-26 =====

  * Similarly to what was done on 2026-01-06, the memory of the compute nodes was again reduced slightly, because the issue had not yet been resolved. This means that you may need to adjust your memory request if you want to use all available memory on a node; a sketch is shown below.
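
As a minimal sketch, assuming a standard Slurm setup (not official Hábrók guidance): a job that previously requested the full per-node memory can either request an explicit amount below the new limit or let Slurm grant all memory currently available on the node. The 100G value and the application name are placeholders.

<code bash>
#!/bin/bash
#SBATCH --job-name=memory_example
#SBATCH --nodes=1

# Option 1: request an explicit amount below the reduced per-node limit.
# 100G is only an example; check the current limit with
# "scontrol show node <nodename>" and adjust accordingly.
#SBATCH --mem=100G

# Option 2 (alternative): "--mem=0" asks Slurm for all memory available
# on the allocated node, whatever the current limit is.
##SBATCH --mem=0

srun ./my_application
</code>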

===== 2026-01-06 =====

  * The memory on the compute nodes available to jobs has been reduced to prevent nodes from running out of memory, which caused system services to crash and led to issues on the affected nodes. This means that you may need to adjust your memory request if you want to use all available memory on a node.
  * The nodes bought by GELIFES are out of support, and the ''gelifes'' partition has therefore been decommissioned.

===== 2025-10-24 =====

  * Long and intensive processes running on the login, interactive, and interactive GPU nodes are now automatically killed to ensure that everyone can access these resources fairly. See the details [[..:connecting_to_the_system:login_nodes#long_process_termination|here]].
  
===== 2025-09-18 =====

  * The maintenance for 15 September had to be extended because of a vulnerability in the Linux kernel, which needed to be patched urgently. In order to patch the issue more quickly, we changed from Rocky Linux to AlmaLinux on the compute and user interface nodes. Since both Linux distributions are derived from the same release of Red Hat Enterprise Linux, this does not affect any other installed software.
  * The Lmod package for the module system needed to be downgraded, as the latest release introduced issues that caused the module system to fail.
  * We've equipped 24 regular nodes with a Cornelis Networks Omni-Path adapter and migrated them to the ''parallel'' partition as "omni" nodes. We've also extended the ''regularshort'' and ''regularmedium'' partitions to share nodes with the ''parallel'' partition. This extends the available capacity for jobs that run on more than one node and require a fast, low-latency interconnect; a sketch of such a job is shown below.
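
As a minimal sketch, assuming Slurm and an MPI application (the module and program names are placeholders, not actual Hábrók module names), a multi-node job targeting the ''parallel'' partition could look like this:

<code bash>
#!/bin/bash
#SBATCH --partition=parallel
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --time=01:00:00

# Placeholder module and binary names; replace with the actual ones.
module load foss
srun ./my_mpi_application
</code>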