  
From the moment that a job is submitted, you can request relevant information about this job using the jobinfo command. If you have forgotten the ID of the job that you want information about, you can list all jobs that you have submitted with ''squeue'' (see above), or with [[habrok:advanced_job_management:getting_information_about_jobs_nodes_partitions|sacct or sstat]]. The jobinfo command basically combines the relevant output of the ''squeue'', ''sacct'' and ''sstat'' commands. It is also possible to use these commands themselves, especially if you want more detailed information about your jobs, such as information about available node partitions, a list of all your submitted jobs, a list of jobs that are in the queue, or information about a node (that your job is running on).
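For example, the following standard Slurm commands can be used to look up your jobs; the job ID 633658 is only an illustration:

<code>
# List all your jobs that are queued or running
squeue -u $USER

# Show accounting information for a job (also works after it has finished)
sacct -j 633658 --format=JobID,JobName,State,Elapsed,MaxRSS

# Show statistics for a job that is still running
sstat -j 633658 --format=JobID,AveCPU,MaxRSS
</code>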

The code for the jobinfo command is available at: https://github.com/rug-cit-hpc/hb-jobinfo
  
After you have submitted a job, you can request this information using the command:
<code>
jobinfo <jobid>
</code>
\\
E.g. ''%%jobinfo 633658%%'' will give the following information:
  
<code>
Job ID                         : 633658
Name                           : My_job
User                           : p_number
Partition                      : regularlong
Nodes                          : node[6-7,14,19]
Number of Nodes                : 4
Cores                          : 16
Number of Tasks                : 4
State                          : COMPLETED
Submit                         : 2024-04-01T12:46:52
Start                          : 2024-04-01T16:15:22
End                            : 2024-04-05T20:30:22
Reserved walltime              : 10-00:00:00
Used walltime                  :  4-04:15:00
Used CPU time                  : 14-22:06:02 (Efficiency: 22.33%)
% User (Computation)           : 99.77%
% System (I/O)                 :  0.23%
Total memory reserved          : 40G
Maximum memory used            : 8.71G
Hints and tips      :
 1) The program efficiency is low. Your program is not using the assigned cores
    effectively. Please check if you are using all the cores you requested.
    You may also need to check the file in- and output pattern of your program.
 2) You requested much more CPU memory than your program used.
    Please reduce the requested amount of memory.
 *) For more information on these issues see:
    https://wiki.hpc.rug.nl/habrok/additional_information/job_hints
</code>
  
The jobinfo command supports the option ''-l'', which will show more advanced statistics.
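For example, to show these extended statistics for the job from the example above:

<code>
jobinfo -l 633658
</code>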
===== Interpreting jobinfo output =====
  
This information shows that the job has run for more than 4 days, while 10 days were requested. With this knowledge, similar jobs can be submitted with sbatch while requesting less time for the resources. By doing so, the SLURM scheduler might be able to schedule your job earlier than it would for a 10 day request.
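For example, a similar job could request a shorter walltime in its job script; the 5 day value below is only an illustration:

<code>
#SBATCH --time=5-00:00:00
</code>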
  
An important metric is the Efficiency. This is related to the number of requested cores (which are requested with ''--ntasks'', ''--ntasks-per-node'', and/or ''--cpus-per-task'' in the batch script). The number of cores requested in this example is 16. For an efficient job, the used CPU time should be almost 16 times the used walltime. In this case the used CPU time is much lower, leading to an efficiency of only 22.33%. This suggests that only 4 of the 16 requested cores were actually used. Given that the job was run on four nodes with four tasks, this means that either only one node was actually used, or that only a single CPU core per task was used. If the program was started with ''srun'', it should have been started on each node, which makes it quite probable that the tasks did not employ multithreading to make use of the extra cores. How to fix this should be checked in the documentation of the program.
The low efficiency also results in a hint being displayed.
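As an illustration, the resources of this example could have been requested as follows in the batch script; whether the four cores per task are actually used depends on the program supporting multithreading, and ''my_program'' is only a placeholder:

<code>
#SBATCH --nodes=4
#SBATCH --ntasks=4
#SBATCH --cpus-per-task=4

# Make the number of cores per task available to multithreaded (e.g. OpenMP) programs
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# srun starts one task on each of the four allocated nodes
srun ./my_program
</code>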
  
Not using the resources you requested is problematic, because somebody else could have used them instead. Furthermore, all allocated resources are attributed to your cluster usage, which reduces the priority of your next jobs more than necessary. Requesting more resources than necessary may also increase the waiting time for your job, as it will take more time for these resources to become available.
  
Finally, we look at the amount of memory reserved. Each standard node has 512GB of memory and 128 cores, meaning that there is on average 4GB per core available. For simple jobs this should be more than enough. If you do request more than 4GB of memory, it might be useful to look at the "Maximum memory used" reported by jobinfo afterwards, to check whether you really needed the extra memory. You can then adjust the requested amount of memory for similar future jobs.
In this case a maximum of 8.71G was used, so requesting 40GB is not very efficient. Since the amount requested per core is only 2.5GB, however, this is not a big issue here.
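For example, a similar job could request an amount of memory closer to what was actually used. The value below is only an illustration; note that ''--mem'' requests memory per node, while ''--mem-per-cpu'' requests memory per allocated core:

<code>
# 1GB per core, i.e. 16GB in total for the 16 cores of the example job
#SBATCH --mem-per-cpu=1G
</code>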
  
===== jobinfo GPU example =====

Here is the output of a job that was using a GPU:
<code>
Job ID                         : 833913
Name                           : gpu_job
User                           : s_number
Partition                      : gpumedium
Nodes                          : a100gpu5
Number of Nodes                : 1
Cores                          : 16
Number of Tasks                : 1
State                          : COMPLETED
Submit                         : 2024-05-11T18:44:22
Start                          : 2024-05-11T18:46:03
End                            : 2024-05-11T21:14:37
Reserved walltime              : 06:00:00
Used walltime                  : 02:28:34
Used CPU time                  : 23:20:49 (Efficiency: 58.93%)
% User (Computation)           : 86.69%
% System (I/O)                 : 13.31%
Total memory reserved          : 16G
Maximum memory used            : 4.29G
Requested GPUs                 : a100=1
Allocated GPUs                 : a100=1
Max GPU utilization            : 35%
Max GPU memory used            : 3.76G
</code>
  
For a GPU job, information about the GPU memory usage, GPU utilization and requested GPU resources is shown. The GPU utilization is the maximum utilization that was measured over the job's lifetime. Unfortunately this number may therefore not be very informative, as there may have been long periods with much lower GPU utilization.
As you can see, CPU memory and GPU memory are reported separately, as they are different types of memory: CPU memory is the main memory connected to the CPU, while GPU memory is separate memory on the GPU board.
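As a rough sketch of how these two types of memory appear in a job request: the CPU memory is requested explicitly in the batch script, while the GPU memory comes with the GPU itself and is not requested separately. The options below are standard Slurm ones and the values are only illustrative; check the GPU job documentation for the exact form to use on Hábrók:

<code>
# CPU (main) memory for the job
#SBATCH --mem=16G

# One A100 GPU; its on-board memory is included with the GPU
#SBATCH --gpus-per-node=a100:1
</code>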