---
language:
- eu
license: apache-2.0
base_model: openai/whisper-medium
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Medium Basque
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_13_0 eu
type: mozilla-foundation/common_voice_13_0
config: eu
split: test
args: eu
metrics:
- name: Wer
type: wer
value: 14.119648426424725
---
# Whisper Medium Basque
## Model summary
**Whisper Medium Basque** is an automatic speech recognition (ASR) model for **Basque (eu)** speech. It is fine-tuned from [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the **Basque portion of Mozilla Common Voice 13.0**, achieving a **Word Error Rate (WER) of 14.12%** on the Common Voice test split.
This model offers a balance between transcription accuracy and computational requirements, providing significantly improved ASR performance over smaller Whisper variants while remaining practical for offline or batch processing.
---
## Model description
* **Architecture:** Transformer-based encoder–decoder (Whisper)
* **Base model:** openai/whisper-medium
* **Language:** Basque (eu)
* **Task:** Automatic Speech Recognition (ASR)
* **Output:** Text transcription in Basque
* **Decoding:** Autoregressive sequence-to-sequence decoding
This medium-sized model leverages Whisper’s multilingual pretraining and is fine-tuned on Basque speech data, delivering higher transcription quality for a low-resource language while remaining manageable for typical GPU or CPU environments.
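As a concrete illustration of the decoding path, the sketch below runs the encoder–decoder directly rather than through the `pipeline` helper shown later. It assumes the placeholder repo ID `HiTZ/whisper-medium-eu` from the usage section below and a 16 kHz mono `audio.wav`; adjust both for your setup.

```python
import soundfile as sf
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Placeholder repo ID; replace with the actual model repository.
model_id = "HiTZ/whisper-medium-eu"
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

# Load a 16 kHz mono clip; resample beforehand if your audio differs.
speech, sampling_rate = sf.read("audio.wav")

# Encoder input: log-Mel features; the decoder then emits tokens autoregressively.
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")
forced_ids = processor.get_decoder_prompt_ids(language="basque", task="transcribe")
generated = model.generate(inputs.input_features, forced_decoder_ids=forced_ids)

print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```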
---
## Intended use
### Primary use cases
* High-quality transcription of Basque audio recordings
* Offline or batch ASR pipelines
* Research and development in Basque ASR
* Media, educational, and archival transcription tasks
### Intended users
* Researchers working on Basque or low-resource ASR
* Developers building Basque speech applications
* Academic and institutional users
### Out-of-scope use
* Real-time or low-latency ASR without additional optimization
* Speech translation tasks
* Safety-critical applications without validation
---
## Limitations and known issues
* Performance may degrade on:
* Noisy or low-quality recordings
* Conversational or spontaneous speech
* Accents underrepresented in Common Voice
* While the model is highly accurate for its size, errors can still occur under challenging acoustic conditions
* Dataset biases from Common Voice may be reflected in outputs
Users are encouraged to evaluate the model on their own data before deployment.
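For such a spot check, a minimal WER sketch with the `evaluate` library (which needs `jiwer` installed) is shown below. The file names and reference transcripts are hypothetical, and the repo ID is the same placeholder used in the usage section; real comparisons should apply the same text normalization to predictions and references.

```python
import evaluate
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="HiTZ/whisper-medium-eu")
wer = evaluate.load("wer")  # requires the jiwer package

# Hypothetical labeled samples: (audio path, reference transcript).
samples = [
    ("clip_0001.wav", "kaixo mundua"),
    ("clip_0002.wav", "eguraldi ona dago gaur"),
]

# Lowercasing is a naive stand-in for proper text normalization.
predictions = [pipe(path)["text"].lower() for path, _ in samples]
references = [ref for _, ref in samples]

print(f"WER: {100 * wer.compute(predictions=predictions, references=references):.2f}%")
```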
---
## Training and evaluation data
### Training data
* **Dataset:** Mozilla Common Voice 13.0 (Basque subset)
* **Data type:** Crowd-sourced, read speech
* **Preprocessing** (see the sketch after this list):
* Audio resampled to 16 kHz
* Text normalized using Whisper tokenizer
* Filtering of invalid or problematic samples
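The exact training script is not reproduced here, but a plausible sketch of these preprocessing steps with `datasets` and `WhisperProcessor` follows; Common Voice is gated on the Hub, so you may need to accept its terms first.

```python
from datasets import load_dataset, Audio
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-medium", language="basque", task="transcribe"
)

# Common Voice ships 48 kHz MP3s; cast_column resamples to 16 kHz on access.
cv = load_dataset("mozilla-foundation/common_voice_13_0", "eu", split="train")
cv = cv.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    audio = batch["audio"]
    # Log-Mel input features for the encoder.
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    # Tokenized target text for the decoder.
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch

cv = cv.map(prepare, remove_columns=cv.column_names)
```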
### Evaluation data
* **Dataset:** Mozilla Common Voice 13.0 (Basque test split)
* **Metric:** Word Error Rate (WER)
---
## Evaluation results
| Metric | Value |
| ---------- | ---------- |
| WER (eval) | **14.12%** |
These results indicate strong transcription performance for a medium-sized Whisper model fine-tuned for Basque.
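For context, WER = (S + D + I) / N, where S, D, and I are word substitutions, deletions, and insertions relative to the reference and N is the number of reference words; a WER of 14.12% corresponds to roughly one word-level error for every seven reference words.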
---
## Training procedure
### Training hyperparameters
* Learning rate: 1e-5
* Optimizer: Adam (β1=0.9, β2=0.999, ε=1e-8)
* LR scheduler: Linear
* Warmup steps: 500
* Training steps: 10,000
* Train batch size: 64
* Evaluation batch size: 32
* Seed: 42
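The training script itself is not included in this repository; the sketch below is a hypothetical reconstruction of the configuration above using `Seq2SeqTrainingArguments`, offered only as a starting point for reproduction.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-eu",  # hypothetical output path
    learning_rate=1e-5,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=10_000,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    evaluation_strategy="steps",  # matches the per-1000-step results below
    eval_steps=1000,
    predict_with_generate=True,   # generate text during eval so WER can be computed
)
```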
### Training results (summary)
| Training Loss | Epoch | Step | Validation Loss | WER (%) |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.0206 | 4.02 | 1000 | 0.2998 | 16.9995 |
| 0.0036 | 9.01 | 2000 | 0.3235 | 15.5211 |
| 0.0018 | 14.01 | 3000 | 0.3454 | 14.9905 |
| 0.0013 | 19.01 | 4000 | 0.3538 | 14.9439 |
| 0.0013 | 24.01 | 5000 | 0.3587 | 14.8568 |
| 0.0002 | 29.0 | 6000 | 0.3799 | 14.4153 |
| 0.0001 | 33.02 | 7000 | 0.3937 | 14.2067 |
| 0.0001 | 38.02 | 8000 | 0.4050 | 14.1946 |
| 0.0001 | 43.01 | 9000 | 0.4119 | 14.1196 |
| 0.0001 | 48.01 | 10000 | 0.4150 | 14.1358 |
---
## Framework versions
- Transformers 4.33.0.dev0
- PyTorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
---
## How to use
```python
from transformers import pipeline

# Placeholder repo ID; replace with the actual model repository if it differs.
hf_model = "HiTZ/whisper-medium-eu"
device = 0  # GPU index; set to -1 to run on CPU

pipe = pipeline(
    task="automatic-speech-recognition",
    model=hf_model,
    device=device,
)

# The pipeline decodes the file and resamples it to 16 kHz internally.
result = pipe("audio.wav")
print(result["text"])
```
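Whisper processes audio in 30-second windows, so longer recordings should be transcribed in chunked mode; pinning the language and task also avoids occasional misdetection on short clips. A variant under the same placeholder repo ID:

```python
from transformers import pipeline

pipe = pipeline(
    task="automatic-speech-recognition",
    model="HiTZ/whisper-medium-eu",  # same placeholder repo ID as above
    device=0,                        # -1 for CPU
    chunk_length_s=30,               # enable chunked long-form transcription
)

result = pipe(
    "long_audio.wav",
    generate_kwargs={"language": "basque", "task": "transcribe"},
)
print(result["text"])
```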
---
## Ethical considerations and risks
* This model transcribes speech and may process personal data.
* Users should ensure compliance with applicable data protection laws (e.g., GDPR).
* The model should not be used for surveillance or non-consensual audio processing.
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{dezuazo2025whisperlmimprovingasrmodels,
title={Whisper-LM: Improving ASR Models with Language Models for Low-Resource Languages},
author={Xabier de Zuazo and Eva Navas and Ibon Saratxaga and Inma Hernáez Rioja},
year={2025},
eprint={2503.23542},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Please check the related paper preprint at
[arXiv:2503.23542](https://arxiv.org/abs/2503.23542)
for more details.
---
## License
This model is available under the
[Apache-2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
You are free to use, modify, and redistribute this model, provided you retain
the license notice and credit the original creators.
---
## Contact and attribution
* Fine-tuning and evaluation: HiTZ/Aholab - Basque Center for Language Technology
* Base model: OpenAI Whisper
* Dataset: Mozilla Common Voice
For questions or issues, please open an issue in the model repository.
## Funding
This project, with reference 2022/TL22/00215335, has been partially funded by the Ministerio de Transformación Digital and by the Plan de Recuperación, Transformación y Resiliencia (funded by the European Union through NextGenerationEU) via [ILENIA](https://proyectoilenia.es/), and by the [IkerGaitu](https://www.hitz.eus/iker-gaitu/) project funded by the Basque Government.
This model was trained at [Hyperion](https://scc.dipc.org/docs/systems/hyperion/overview/), one of the high-performance computing (HPC) systems hosted by the DIPC Supercomputing Center.