Whisper Guide

Attention!

Whisper is used at the user's own risk. The setup illustrated in this guide creates a locally run instance of Whisper that does not send data outside the local environment. Please make sure to handle your data correctly. If you have any doubts about how to do so, please start by following the steps on this page of our wiki: Data management safety measures.

If you have any questions on Whisper that this guide does not answer, please feel free to send us a message at dcc@rug.nl.

Introduction

This guide walks UG staff and students through setting up a personal speech-to-text transcription workflow on University of Groningen infrastructure, based on the OpenAI Whisper automatic speech recognition (ASR) model running on the Hábrók High Performance Computing (HPC) cluster.

Transcribing spoken audio to text is usually a very time-consuming manual process. The UG offers a licensed version of F4 Transkript on the University Workplace as an aid for manual transcription, but does not offer automatic speech recognition software.

This guide is offered by the DCC to help researchers process their research data as efficiently as possible while optimizing data protection: audio files stay on UG storage instead of being sent to cloud services. For technical aspects, the service is supported by the Data Science and HPC team of the CIT. If you wish to read more about the detailed functionality of Whisper, please refer to the documentation in the Whisper Git repository.

Should you have any further questions about using or setting up Whisper on the Hábrók HPC cluster, please contact the DCC at dcc@rug.nl.

→ Move to the next step