Workflow and data storage

In this section we will describe the basic workflow for working on the cluster. This workflow consists of five steps (a sketch with example commands follows the list):

  1. Copy input data to the system-wide storage area
  2. Prepare the job script:
    1. Define requirements
    2. Transfer input data to the fast local storage of the node being used
    3. Run the program on the input data
    4. Transfer output data back to the central storage
  3. Submit the computational task to the job scheduling system
  4. Check the status and results of your calculations
  5. Copy results back to your local system or archival storage
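
As a rough sketch, and assuming the job scheduling system is SLURM (as on most clusters of this type), the five steps correspond to commands like the ones below. The hostname, user name and directory names are only placeholders for this example; the actual addresses, paths and job script contents are explained in the following sections.

  # Step 1: from your own machine, copy the input data to the cluster.
  # <username>, the hostname and the paths below are placeholders.
  scp -r input_data <username>@login1.hb.hpc.rug.nl:/scratch/<username>/myproject/

  # Step 2: prepare a job script (an example is shown further down this page).

  # Step 3: on the cluster, submit the job script to the scheduler.
  sbatch jobscript.sh

  # Step 4: check the status of the job and, once it has finished, its output
  # file (slurm-<jobid>.out is the default name used by the scheduler).
  squeue -u $USER
  cat slurm-<jobid>.out

  # Step 5: from your own machine, copy the results back.
  scp -r <username>@login1.hb.hpc.rug.nl:/scratch/<username>/myproject/results .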

This means that you'll need to know about the following topics:

  1. Data storage areas, including how to use the temporary high-performance local storage
  2. Data transfers
  3. Finding information about available software, or getting your software on the system
  4. Running computations using the job scheduler
  5. Checking the results of the computations

In this section we will focus on data storage; the next sections will delve deeper into the other topics, including the command-line interface, on which several of the steps above rely.

For most applications users need to work with data. This can be parameters for a program that needs to be run, for example to set up a simulation, or input data that needs to be analyzed. And, finally, running simulations or data analyses will produce data containing the results of the computations.

Hábrók has its own storage system, which is decoupled from the university's desktop storage systems. Although it would be nice to be able to access data from your desktop system directly on Hábrók, this is currently not possible. Technically it would be challenging, and there would also be performance issues once people start doing heavier processing on the desktop storage systems.

Since the storage is decoupled, data needs to be transferred to and from the system. Input data needs to be transferred to the Hábrók processing storage area. Any results that need to be further analyzed or stored for a longer period of time need to be transferred from Hábrók to some local or archival storage.
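
From a Linux or macOS machine such transfers can be done with, for example, rsync over SSH, which only copies files that have changed and can resume interrupted transfers. The details, including the actual login node addresses, are covered in the section on data transfers; the hostname and paths below are placeholders.

  # Copy a directory with input data from your own machine to the cluster.
  rsync -av --progress input_data/ <username>@login1.hb.hpc.rug.nl:/scratch/<username>/input_data/

  # Copy results from the cluster back to your own machine.
  rsync -av --progress <username>@login1.hb.hpc.rug.nl:/scratch/<username>/results/ results/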

Hábrók currently has three storage areas with different capabilities. On each storage area limits are enforced on the amount of data stored, to ensure that each user has some space and that the file systems do not suddenly fill up completely.

On this page we will give a short description of each; more details can be found at Storage areas.

The home area is where users can store settings, programs and small data sets.

For larger data sets each user has access to a space on the scratch file system. This area is only meant for data that is ready for processing, or for recent results from processing. It is not meant for long-term storage. THERE IS NO BACKUP!!

The Hábrók nodes also have local disks, which can only be used by calculations running on that specific machine. This means that this is also temporary space. These local disks are based on solid-state (SSD) storage, which makes them much faster than the scratch area for most use cases.

We therefore advise users to copy their input data sets from the scratch area to the local disk at the beginning of the job. The job can then run using the local disk for reading and writing data. At the end of the job the output has to be written back to the scratch area. Staging the data to the local disk is especially important if your input data is read multiple times or consists of many (>1000) files; similar considerations apply to the output data.
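
As an illustration, a job script following this advice could look roughly like the sketch below. It assumes a SLURM batch script in which the environment variable TMPDIR points to job-specific space on the local disk, and it uses placeholder paths and a placeholder program name; the exact directives and paths for Hábrók are covered in the sections on job scripts.

  #!/bin/bash
  #SBATCH --job-name=local_disk_example
  #SBATCH --time=01:00:00
  #SBATCH --cpus-per-task=1
  #SBATCH --mem=4GB

  # Copy the input data from the shared scratch area to the fast local disk.
  cp -r /scratch/$USER/myproject/input_data $TMPDIR/

  # Run the program using the local disk for reading and writing data.
  # my_program is a placeholder for the actual command; the input and output
  # paths are relative to the local disk.
  cd $TMPDIR
  my_program input_data > results.txt

  # Copy the results back to the scratch area before the job ends;
  # the local disk is cleaned up when the job finishes.
  cp results.txt /scratch/$USER/myproject/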

Besides the storage directly available on all nodes of the cluster, some external storage areas can be accessed from the login nodes. These areas are described below.

On the login nodes the /projects file system is mounted. Users can get access to space in this area based on a fair use model. Additional storage is allocated on request, and above a certain threshold costs are involved.
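
Because /projects is accessed from the login nodes, data that a job needs is typically staged from /projects to the scratch area on a login node before the job is submitted, and results can be archived back afterwards. A minimal sketch, with placeholder directory names:

  # On a login node: stage input data from the project space to scratch.
  rsync -av /projects/<project>/input_data/ /scratch/$USER/myproject/input_data/

  # After the job has finished: archive the results back to the project space.
  rsync -av /scratch/$USER/myproject/results/ /projects/<project>/results/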

The Research Data Management system can be used from the Hábrók nodes. You can find more information about the RDMS on its dedicated wiki pages: https://wiki.hpc.rug.nl/rdms/start


Next section: Connecting to the system