Earnings-21 is a real-world benchmark dataset containing 39 hours of long-form English speech, sourced from public earnings calls.
This directory contains a set of scripts to evaluate the performance of whisper.cpp on the Earnings-21 corpus.
(Prerequisite) Compile whisper-cli and prepare the Whisper model in ggml format.
$ # Execute the commands below in the project root dir.
$ cmake -B build
$ cmake --build build --config Release
$ ./models/download-ggml-model.sh tiny
Consult whisper.cpp/README.md for more details.
Download the audio files.
$ make get-audio
Set up the environment to compute WER score.
$ pip install -r requirements.txt
For example, if you use virtualenv, you can set it up as follows:
$ python3 -m venv venv
$ . venv/bin/activate
$ pip install -r requirements.txt
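The benchmark scores transcripts with word error rate (WER). The actual scoring is done by the packages in requirements.txt; as an illustration only, the metric can be sketched in plain Python as word-level edit distance divided by the number of reference words:

```python
# Illustrative WER sketch (word-level Levenshtein distance / reference length).
# The real benchmark uses the packages from requirements.txt, not this code.

def wer(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("good morning and welcome", "good morning welcome"))  # 0.25: one deletion over four words
```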
Run the benchmark test.
$ make
To customize the benchmark, create eval.conf and override the variables, for example:
WHISPER_MODEL = large-v3-turbo
WHISPER_FLAGS = --no-prints --threads 8 --language en --output-txt
Check out eval.mk for more details.
Earnings-21 provides a small but representative subset (approximately 10 hours of audio) for evaluating ASR systems quickly.
To switch to the subset, create eval.conf and add the following line:
EARNINGS21_EVAL10 = yes
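The subset switch can be combined with the other overrides. For example, an eval.conf that evaluates the 10-hour subset with the large-v3-turbo model (using only the variables shown above) could look like:

```
EARNINGS21_EVAL10 = yes
WHISPER_MODEL = large-v3-turbo
WHISPER_FLAGS = --no-prints --threads 8 --language en --output-txt
```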
To run the benchmark with Voice Activity Detection (VAD), first download a VAD model:
$ # Execute the commands below in the project root dir.
$ ./models/download-vad-model.sh silero-v6.2.0
Then create eval.conf with the following content (note that the VAD model path is relative to this directory):
WHISPER_FLAGS = --no-prints --language en --output-txt --vad --vad-model ../../models/ggml-silero-v6.2.0.bin