==== The program efficiency is low. Check the file in- and output pattern of your application ====

The following tips may help in reducing the problem:
  - It may be possible to modify the application or its settings to limit the amount of data read or written.
  - It may be possible to use a different file system:
      - Using /home is probably the slowest option. Switching to /data or /scratch may help, because more hard disks are involved in these file systems.
      - The parallel shared file systems for /data and /scratch are very good at handling large files, but struggle with many small files. For such use cases, using the local file system on the nodes may be a solution, as explained in more detail [[habrok:advanced_job_management:many_file_jobs|here]]. This means that you copy your input data to ''$TMPDIR'' at the beginning of the job and copy the relevant output from ''$TMPDIR'' back to your /home or /data area at the end of the job. The process works best for many files if the input and output data are stored in (compressed) archive files like tar(.gz) or zip files; a sketch of such a job script is shown below this list.
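
As an illustration, a ''$TMPDIR''-based job could look roughly like the sketch below. This assumes a SLURM batch script; the archive names (''input.tar.gz'', ''results.tar.gz''), the /data paths, the placeholder command ''my_program'' and the requested resources are examples that you will have to adapt to your own situation.

<code bash>
#!/bin/bash
#SBATCH --job-name=tmpdir_example
#SBATCH --time=01:00:00
#SBATCH --ntasks=1

# Copy the compressed input archive to the local disk of the node
cp /data/$USER/input.tar.gz $TMPDIR/

# Unpack the input on the local file system
cd $TMPDIR
tar xzf input.tar.gz

# Run your application on the local copy of the data;
# my_program and its arguments are placeholders
my_program --input input/ --output output/

# Pack the results and copy them back before the job ends
tar czf results.tar.gz output/
cp results.tar.gz /data/$USER/
</code>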
  
==== The program efficiency is very low. Your program does not seem to run in parallel ====