habrok:advanced_job_management:special_partitions — revision [2025/08/22 15:09] ([Group specific nodes] Formatting, pedro) → current revision [2026/01/14 10:07] ([GELIFES nodes], pedro)
sacctmgr add user <username> account=<account> fairshare=1
</code>
Here <username> should be replaced by the userid that is to be added to the account, and <account> by the name of the account, e.g. ''digitallab'' or ''caos''. The fairshare should by default be set to 1.
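
As a concrete illustration, the filled-in command could look like this (the userid ''p123456'' here is hypothetical; substitute a real userid and account name):
<code>
sacctmgr add user p123456 account=digitallab fairshare=1
</code>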
  
In order to verify whether a user has already been added to the account, the “Account” column in the output of the following command should show one row with “users” and one with the special account:
===== GELIFES nodes =====
  
Until the beginning of January 2026, Hábrók included nodes originally purchased by GELIFES for the Peregrine cluster. These were 64 core AMD EPYC 7601 nodes, running at 2.2 GHz, with 512 GB of memory. Because these nodes came from an earlier purchase, they were older than the other Hábrók compute nodes and their support has ended. Consequently, they have been decommissioned and the ''gelifes'' partition is no longer available.
  
Please use the ''regular'' partition instead.
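
A minimal job script targeting the ''regular'' partition might look like the sketch below (the job name, time limit and resource values are illustrative; adjust them to your workload):
<code>
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=regular
#SBATCH --time=01:00:00
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G

# Run the actual workload
srun ./my_program
</code>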
  