{{indexmenu_n>2}}
===== FAQ =====
==== Will my Peregrine account work on Hábrók? ====
Because there are quite a number of inactive accounts on Peregrine, we have decided not to migrate accounts to Hábrók automatically, so your Peregrine account will not work on Hábrók out of the box.
If you want to use the new cluster, you need to request access to it by using the [[https://iris.service.rug.nl/|Self-Service Portal IRIS]].
Please go to Research and Innovation Support → Computing and Research Support Facilities → High Performance Computing Cluster → Request Hábrók Account.
==== Will my data be automatically moved from Peregrine to Hábrók? ====
We will not automatically move your data from Peregrine to Hábrók. The Peregrine file systems ''/home'' and ''/data'' will be made available read-only on Hábrók for three months after Peregrine shuts down; within this period you will have to move your data to permanent storage on Hábrók.
**The data on Peregrine /scratch will not be migrated, since it is temporary space only.**
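As a quick sanity check on a Hábrók login node, you can list your old Peregrine ''/data'' directory under the read-only mount described in the next question (''p123456'' below is just a placeholder for your own directory name):

<code>
# List the old Peregrine /data directory on a Hábrók login node
# (read-only mount; p123456 is a placeholder for your own directory)
ls -l /mnt/pg-data/p123456
</code>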
==== How do I migrate data to Hábrók? ====
The best tool for copying data from one location to another is ''rsync''. Here is an example showing how to synchronize a directory with files from the Peregrine ''/data'' file system, which is available under ''/mnt/pg-data'' on the login nodes, to the new ''/projects'' file system on Hábrók:
<code>
rsync -av /mnt/pg-data/p123456/important_data/ /projects/p123456/important_data/
</code>
Note the slashes at the end of both the source and the destination: with a trailing slash on the source, ''rsync'' copies the contents of the directory instead of creating an extra ''important_data'' directory inside the destination. The following flags have been used:
  * ''-a'': archive mode, copy everything recursively while preserving file ownership and permissions
  * ''-v'': verbose, list the files as they are being transferred
You can also enable compression using ''-z'', but this will only speed up the transfer of highly compressible data. Since sufficient bandwidth should be available for these transfers, compression will probably only add overhead.
The best thing about using ''rsync'' is that you can restart the transfer in case of failures, and ''rsync'' will just continue where it stopped.
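For example, if a transfer was interrupted, rerunning the same command will pick up where it left off; adding ''--dry-run'' first shows what would still be transferred without copying anything (same placeholder paths as above):

<code>
# Show what still needs to be transferred, without copying anything
rsync -av --dry-run /mnt/pg-data/p123456/important_data/ /projects/p123456/important_data/

# Rerun the actual transfer; files that have already arrived are skipped
rsync -av /mnt/pg-data/p123456/important_data/ /projects/p123456/important_data/
</code>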
==== How do I migrate data from a group folder to Hábrók? ====
The group folders will also be available, at ''/mnt/pg-data/pg-group'', and we are currently creating new groups on Hábrók. These groups follow a naming pattern similar to the one on Peregrine, e.g. ''pg-group'' becomes ''hb-group''. We will then add the users to these new groups, and the new group will be made the owner of the folder ''/mnt/pg-data/pg-group''. From that point on, the data can be copied over to Hábrók using ''rsync'', as explained above. The location of the group folder on Hábrók will be ''/projects/hb-group''.
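As a sketch, using the placeholder group names from above (''pg-group'' on Peregrine, ''hb-group'' on Hábrók), the copy would look like this:

<code>
# Copy a Peregrine group folder to its new location on Hábrók
# (pg-group and hb-group are placeholders for your actual group names)
rsync -av /mnt/pg-data/pg-group/ /projects/hb-group/
</code>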
==== How do I solve a processor or OS not supporting certain instructions? ====
**Short answer**: ensure you compile your program on the same CPU architecture as the compute node you will then run it on.
When you try to run an application, you may encounter an error telling you that the operating system or the processor does not support certain instructions. This is often accompanied by a list of acronyms corresponding to those instructions. An example error message:
<code>
Please verify that both the operating system and the processor support Intel(R) X87, CMOV, MMX, FXSAVE, SSE, SSE2, SSE3, SSSE3, SSE4_1, SSE4_2, MOVBE, POPCNT, AVX, F16C, FMA, BMI, LZCNT, AVX2, AVX512F, AVX512DQ, ADX, AVX512CD, AVX512BW and AVX512VL instructions.
</code>
This happens because your program was compiled on a system that supports these instructions, but is running on one that does not. Most likely you compiled your program on a different type of node than the one you are now running it on.
To get around this, make sure you compile your program on the same type of system it will run on. For example, if you need to run your program on one of the ''himem'' nodes, you can submit a job that compiles your program, followed by subsequent jobs that use the resulting executables. Alternatively, you can start an interactive session on a node with the target CPU architecture and compile your code there. For example, to compile your program for the ''himem'' nodes you could run ''srun --time=01:00:00 --partition=himem --pty /bin/bash''. This will queue an interactive session of up to one hour in which you can compile your code.
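For the first approach, a minimal compile job could look like the sketch below. The ''foss'' module and the source file ''myprog.c'' are only placeholders; adapt them to the toolchain and code you actually use:

<code>
#!/bin/bash
#SBATCH --job-name=compile_myprog
#SBATCH --partition=himem
#SBATCH --time=00:30:00
#SBATCH --cpus-per-task=1

# Load a compiler toolchain (placeholder module name; use the one you need)
module load foss

# Compile on a himem node, so the binary targets that node's instruction set
gcc -O2 -march=native -o myprog myprog.c
</code>

Submit it with ''sbatch'' and run the resulting ''myprog'' in subsequent jobs on the same partition; ''-march=native'' makes the compiler target the CPU of the node the compilation runs on.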