====== ORCA ======

===== Accessing ORCA =====

ORCA is available for free on Hábrók, but the developers require that users agree to the //End User License Agreement (EULA) for the ORCA software//. Without doing so, attempting to load the ORCA modules will result in an error. To confirm your agreement, please open a ticket with us at [[hpc@rug.nl]].
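
Once your agreement has been registered, you can check which ORCA versions are installed, using the standard ''module'' command:

<code bash>
# List the ORCA modules available on Hábrók
module avail ORCA
</code>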

===== Using ORCA =====
  
The following example can be used as a job script for running ORCA jobs:
  
<code bash orca_example_job.sh>
#!/bin/bash
#SBATCH --job-name=orca
# 12 tasks of 1 core each, matching "%pal nprocs 12" in the input file
#SBATCH --ntasks=12
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1gb

module load ORCA/6.1.1-gompi-2023b-avx2
$EBROOTORCA/bin/orca my_orca_file.inp
</code>
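Save this script as ''orca_example_job.sh'' and submit it with ''sbatch'':

<code bash>
sbatch orca_example_job.sh
</code>
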
This job script requests a total of 12x1 cores. It does not specify whether or how many tasks should run on the same node: this can be controlled with additional options like ''%%--ntasks-per-node%%'' and ''%%--nodes%%'', if necessary, as sketched below.
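
For example, adding the following lines to the job script would force all 12 tasks onto a single node (assuming a node with at least 12 free cores is available):

<code bash>
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=12
</code>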
  
Note that ORCA itself will handle the parallelization by calling ''mpirun'', so you do not need to use ''mpirun'' or ''srun'' in your job script. You do have to call the ''orca'' executable with the full pathname, which you can easily do by using ''$EBROOTORCA'' (a variable that points to the ORCA installation directory).
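
For instance, after loading the ORCA module you can check what ''$EBROOTORCA'' points to:

<code bash>
# Print the ORCA installation directory set by the module
echo $EBROOTORCA
</code>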
  
Finally, make sure to use the same number of cores in your ORCA input file as requested in the job script, e.g.:
  
<code txt my_orca_file.inp>
 %pal nprocs 12 end
 ! RHF TightSCF PModel