====== Login nodes ======
  
Hábrók has five login nodes that can be used to connect to the system. Besides providing redundancy (you can always try another one if one of them is down), they also serve different purposes.
  
===== Login nodes =====
  
''login1.hb.hpc.rug.nl'' and ''login2.hb.hpc.rug.nl'' are the default login nodes, used by most users. You can use them to connect to the system, copy your files, submit jobs, compile your code, et cetera. You should not use them to test your applications, since this might slow down the node and hinder other users who are trying to log in. The login nodes are also smaller machines than the compute nodes.
  
We have set up two of these login nodes to increase the availability of the service.
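
For example, connecting to one of the login nodes and copying a file to the cluster could look like the sketch below (the username ''p123456'' and the file name are placeholders):

<code bash>
# Log in to the first login node; replace p123456 with your own username
ssh p123456@login1.hb.hpc.rug.nl

# Copy a local file to your home directory on the cluster
scp mydata.csv p123456@login1.hb.hpc.rug.nl:~/
</code>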
  
===== Interactive nodes =====
  
On Hábrók two interactive nodes have been configured: ''interactive1.hb.hpc.rug.nl'' and ''interactive2.hb.hpc.rug.nl''.
  
The interactive nodes are about half the size of a default compute node, and they allow for a bit more testing. If you just want to run your program for a couple of minutes, these are the machines to use. Do keep in mind that these are also shared machines and other people may want to do some testing as well. So, if you need to do longer and/or more intensive tests, these tasks should be submitted as jobs.
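
As a minimal sketch of such a job (assuming the Slurm scheduler; the job name, resource requests and program below are placeholders that should be adapted to your own work), a batch script could look like this:

<code bash>
#!/bin/bash
#SBATCH --job-name=short_test   # placeholder job name
#SBATCH --time=00:30:00         # requested wall clock time (hh:mm:ss)
#SBATCH --cpus-per-task=2       # number of CPU cores
#SBATCH --mem=4G                # requested memory

# Replace this with the program you actually want to test
./my_program
</code>

Such a script can then be submitted from a login or interactive node with ''sbatch jobscript.sh''.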
  
To prevent a single user from using all of the capacity, CPU and memory limits are in place.
  
===== Interactive GPU nodes =====
  
Finally, the interactive GPU nodes, ''gpu1.hb.hpc.rug.nl'' and ''gpu2.hb.hpc.rug.nl'', are login nodes equipped with a GPU. You can use them to develop and test your GPU applications.

These machines each have an NVIDIA L40S GPU, which can be shared by multiple users. The tool ''nvidia-smi'' will show whether the GPU is in use.
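
For example, after logging in to one of the GPU nodes, you can inspect the GPU and its current usage with the following commands:

<code bash>
# Show the GPU model, its memory usage and the processes currently using it
nvidia-smi

# Refresh this overview every five seconds while your test is running
watch -n 5 nvidia-smi
</code>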

Please keep in mind that these are also shared machines, and other users may want to use the GPU as well. So, allow everyone to make use of these GPUs and do not perform long runs here. Long runs should be submitted as jobs to the scheduler.

===== Periodic reboots =====

In order to prevent the login/interactive nodes from being filled up with temporary files and long-running processes, these nodes are rebooted every other week on Monday morning at 6:00 CE(S)T. The odd-numbered nodes (''login1'', ''interactive1'', ''gpu1'') are rebooted in odd weeks, and the even-numbered nodes (''login2'', ''interactive2'', ''gpu2'') are rebooted in even weeks.