There is also a limit on the number of files that can be stored. This is to reduce the load on the file system metadata server, which keeps track of the data about files (time of access, change, size, location, etc.). Handling a huge number of files is a challenge for most shared file systems, and accessing a huge number of files will lead to performance bottlenecks.
  
The best way of handling data sets with many (> 10,000) files is not to store them on /scratch as is, but as (compressed) archive files. These files can then be extracted to the fast local storage on the compute nodes at the beginning of a job. You can find more details and examples in our dedicated [[habrok:advanced_job_management:many_file_jobs|page]] on this topic.
  
When the processing is performed on the fast local storage, the job performance will be much better.
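
As a minimal sketch of this workflow, the job script below extracts an archive from /scratch to the node-local storage, runs the processing there, and copies the results back. It assumes the scheduler sets ''$TMPDIR'' to a directory on the fast local storage; the archive name (''dataset.tar.gz''), the directory layout, and the ''process_data'' command are hypothetical placeholders to adapt to your own data and software.

<code bash>
#!/bin/bash
#SBATCH --job-name=many_files_example
#SBATCH --time=01:00:00
#SBATCH --ntasks=1

# Assumption: $TMPDIR points at the fast node-local storage;
# adjust if your environment uses a different location.

# Extract the (compressed) archive from /scratch to the local storage.
# The archive name and path are placeholders.
tar -xzf /scratch/$USER/dataset.tar.gz -C "$TMPDIR"

# Run the processing on the extracted files; the directory layout
# and the command are placeholders for your own workload.
cd "$TMPDIR/dataset"
./process_data

# Pack the results into a single archive and copy it back to /scratch
# before the job ends.
tar -czf /scratch/$USER/results.tar.gz results
</code>

Note that node-local storage is typically cleaned up when the job ends, which is why the results are archived back to /scratch as the last step.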