If you want to run a Large Language Model (LLM) on Habrok, here's one relatively easy way to do it.

1. Log in with your account on Habrok (obviously):

 ```bash
 ssh pnumber@login1.hb.hpc.rug.nl
 ```
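
If you log in often, an optional shortcut on your own machine (not on Habrok) saves some typing; the alias name `habrok` below is just an example, and `pnumber` stands for your own account:

 ```bash
 # Optional, run on your own machine: add a "habrok" alias to your SSH config,
 # so that "ssh habrok" does the same as the full command above
 printf '\nHost habrok\n    HostName login1.hb.hpc.rug.nl\n    User pnumber\n' >> ~/.ssh/config
 ```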

2. Start an interactive job on an A100 node (single GPU):

 ```bash
 srun --nodes=1 --ntasks=1 --partition=gpushort --mem=120G --time=04:00:00 --gres=gpu:a100:1 --pty bash
 ```
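
The flags above request one node, one task, 120 GB of memory, a four-hour time limit and a single A100 GPU on the `gpushort` partition. Once the job starts you get a shell on the compute node; a quick sanity check (assuming the NVIDIA driver tools are on the node's path, as they normally are on GPU nodes) is:

 ```bash
 # Should show exactly one A100 plus its driver and CUDA versions
 nvidia-smi
 ```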

3. Load the Python and CUDA modules:

 ```bash
 module load Python/3.11.5-GCCcore-13.2.0 CUDA/12.1.1
 ```
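
Module names include exact versions, so if these particular versions disappear after a software update, the module system itself can tell you what is available and what is currently loaded:

 ```bash
 # List the Python modules installed on the cluster
 module avail Python
 # Show everything loaded in this session (including dependencies pulled in)
 module list
 ```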

4. Create a virtual environment (only once):

 ```bash
 python3 -m venv .env
 ```
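
The name `.env` is just the one used in this guide; any directory name works, and the environment only has to be created once. To check that it was built against the Python module loaded above:

 ```bash
 # Should report Python 3.11.x, matching the loaded module
 .env/bin/python --version
 ```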

5. Activate the venv:

 ```bash
 source .env/bin/activate
 ```
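
After activation, `python` and `pip` should resolve to the copies inside the environment; if they point elsewhere, the activation did not take effect in the current shell:

 ```bash
 # Both should print a path inside the .env directory
 which python
 which pip
 ```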

6. Upgrade `pip` (optional):

 ```bash
 pip install --upgrade pip
 ```
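
As a final check, `pip --version` reports both the pip version and the Python installation it belongs to, which should be the one inside `.env`:

 ```bash
 # Prints the pip version and the path of the Python it is tied to (should be inside .env)
 pip --version
 ```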