  
In order to run the script, you will first have to create it. Open your text editor of choice and copy the highlighted code below into the new file. Save the file with the name: ''whisper_runall.sh''.

**Note:** The PyTorch module needed to install Whisper has changed due to an update to Whisper's dependencies. The module displayed in the screenshots is the previous version. Please make sure to **use the version of the module you find in the text**.
  
  
----
  
<code>
#!/bin/bash
#SBATCH --time=08:00:00
#SBATCH --gpus-per-node=1
#SBATCH --mem=16000

module load PyTorch/2.1.2-foss-2023a-CUDA-12.1.1
source $HOME/.envs/whisper/bin/activate
whisper $HOME/whisper_audio/* --model large-v2 --output_dir $HOME/whisper_output/
</code>
----
  
The next two lines make sure that the virtual environment and the dependencies that Whisper needs to run are correctly loaded:
  
  * ''module load PyTorch/2.1.2-foss-2023a-CUDA-12.1.1''
  
This line loads the software packages that Whisper needs to run. Please do not modify it; otherwise, the script will not load the correct dependencies.
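
If you want to verify that this module is available on the cluster (for example after Whisper's dependencies change again, as mentioned in the note above), a minimal sketch using the standard ''module'' command:

<code>
module avail PyTorch    # list the PyTorch modules installed on the cluster
</code>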
  
++++ Click to display the script |
<code>
#!/bin/bash
#SBATCH --time=08:00:00
#SBATCH --gpus-per-node=1
#SBATCH --mem=16000

module load PyTorch/2.1.2-foss-2023a-CUDA-12.1.1
source $HOME/.envs/whisper/bin/activate
whisper $HOME/whisper_audio/* --model large-v2 --language English --output_dir $HOME/whisper_output/
</code>
++++
\\
  
Whisper is also capable of translating audio from any language into English. To let the program know that you want a translation instead of a transcription, you need to specify which ''%%--%%task'' the program should perform. The script below is already edited to perform a translation. Please keep in mind that the output will **only** contain the translation, and **the original-language text will not be included in the output files**. If you need the original as a means of comparison, you can either run the general script on the audio first, or run a forced-language script (see above) before you run the translation.

When you save the script, you can call it ''whisper_translate.sh''. To execute it, simply type ''sbatch whisper_translate.sh'' into the terminal and follow the same steps as for the general script (see [[dcc:itsol:whisper:running|here]]).
  
**Note**: Regardless of whether you run the transcription or the translation first, the file names of the output files will be exactly the same. To keep the second operation (translation or transcription) from overwriting the first, you need to rename the output files before running it. This way, the output of your first operation remains untouched by the second operation.
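
A minimal sketch of such a renaming step, assuming the default output directory ''$HOME/whisper_output/'' and a hypothetical ''transcript_'' prefix of your choice:

<code>
cd $HOME/whisper_output/
# add a prefix to every output file so the next run cannot overwrite it
for f in *; do mv "$f" "transcript_$f"; done
</code>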
  
++++ Click to display the script |
<code>
#!/bin/bash
#SBATCH --time=08:00:00
#SBATCH --gpus-per-node=1
#SBATCH --mem=16000

module load PyTorch/2.1.2-foss-2023a-CUDA-12.1.1
source $HOME/.envs/whisper/bin/activate
whisper $HOME/whisper_audio/* --model large-v2 --task translate --output_dir $HOME/whisper_output/
</code>
++++