Whisper Medium Spanish
Model summary
Whisper Medium Spanish is an automatic speech recognition (ASR) model for Spanish (es), fine-tuned from openai/whisper-medium on the Spanish subset of Mozilla Common Voice 13.0. It achieves a Word Error Rate (WER) of 5.4088% on the evaluation split.
This model offers higher accuracy than Whisper Small while remaining more efficient than Whisper Large variants, making it suitable for both batch and near real-time transcription of Spanish speech.
Model description
- Architecture: Transformer-based encoder–decoder (Whisper Medium)
- Base model: openai/whisper-medium
- Language: Spanish (es)
- Task: Automatic Speech Recognition (ASR)
- Output: Text transcription in Spanish
- Decoding: Autoregressive sequence-to-sequence decoding
The medium-sized model balances accuracy and speed, and handles conversational Spanish better than the smaller Whisper variants.
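The pipeline example further below is the simplest entry point. As a minimal lower-level sketch of the encoder–decoder decoding path (assuming the HiTZ/whisper-medium-es repository ID, a local `audio.wav` file, and librosa for audio loading), the model can also be used directly:

```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration
import librosa

# Load the processor (feature extractor + tokenizer) and the fine-tuned model.
processor = WhisperProcessor.from_pretrained("HiTZ/whisper-medium-es")
model = WhisperForConditionalGeneration.from_pretrained("HiTZ/whisper-medium-es")

# "audio.wav" is a placeholder path; Whisper expects 16 kHz mono input.
speech, _ = librosa.load("audio.wav", sr=16000)

# Log-Mel features go through the encoder; the decoder then generates
# Spanish text tokens autoregressively.
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
forced_ids = processor.get_decoder_prompt_ids(language="spanish", task="transcribe")
predicted_ids = model.generate(inputs.input_features, forced_decoder_ids=forced_ids)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```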
Intended use
Primary use cases
- Batch or streaming transcription of Spanish speech
- Research on Spanish ASR
- Applications requiring moderate-to-high transcription accuracy without full-large model compute
Limitations
Accuracy may drop for:
- Noisy environments or overlapping speakers
- Strong regional accents not well represented in Common Voice
- Extremely fast or slurred speech
Not intended for legal, medical, or other safety-critical transcription.
Training and evaluation data
Dataset: Mozilla Common Voice 13.0 (Spanish subset)
Data type: Crowd-sourced read speech
Preprocessing (see the code sketch below):
- Audio resampled to 16 kHz
- Text tokenized with Whisper tokenizer
- Removal of invalid or corrupted samples
Evaluation metric: Word Error Rate (WER) on held-out evaluation set
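The exact preparation script is not included in this card; the following is a minimal sketch of the preprocessing steps listed above, assuming the Hugging Face datasets and transformers libraries and access to the gated mozilla-foundation/common_voice_13_0 dataset:

```python
from datasets import load_dataset, Audio
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-medium", language="spanish", task="transcribe"
)

# Spanish split of Common Voice 13.0; audio is decoded and resampled to 16 kHz on access.
cv_es = load_dataset("mozilla-foundation/common_voice_13_0", "es", split="train")
cv_es = cv_es.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    audio = batch["audio"]
    # Log-Mel input features for the encoder.
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    # Target transcription tokenized with the Whisper tokenizer.
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch

cv_es = cv_es.map(prepare, remove_columns=cv_es.column_names)
```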
Evaluation results
| Metric | Value |
|---|---|
| WER (eval) | 5.4088% |
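A minimal sketch of how such a WER score can be computed with the evaluate library (the reference and prediction lists here are placeholders, not evaluation data):

```python
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder lists: ground-truth transcriptions vs. model output.
references = ["hola mundo", "buenos días"]
predictions = ["hola mundo", "buenos dias"]

# WER is reported as a percentage in the tables of this card.
wer = 100 * wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.4f}%")
```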
Training procedure
Training hyperparameters
- Learning rate: 1e-5
- Optimizer: Adam (β1=0.9, β2=0.999, ε=1e-8)
- LR scheduler: Linear
- Warmup steps: 500
- Training steps: 10000
- Train batch size: 64
- Eval batch size: 32
- Seed: 42
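As a sketch, these hyperparameters map onto Seq2SeqTrainingArguments roughly as follows; the output directory and the evaluation/save cadence are assumptions, not values taken from the original training script:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-es",   # assumed output path
    learning_rate=1e-5,
    warmup_steps=500,
    max_steps=10_000,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    lr_scheduler_type="linear",
    seed=42,
    # The Adam betas/epsilon listed above match the Trainer defaults.
    evaluation_strategy="steps",
    eval_steps=1_000,                   # assumed; matches the 1000-step cadence in the table below
    save_steps=1_000,
    predict_with_generate=True,
    metric_for_best_model="wer",
    greater_is_better=False,
)
```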
Training results (summary)
| Training Loss | Epoch | Step | Validation Loss | WER (%) |
|---|---|---|---|---|
| 0.0917 | 2.0 | 1000 | 0.1944 | 6.8560 |
| 0.0927 | 4.0 | 2000 | 0.1817 | 6.1439 |
| 0.0456 | 6.01 | 3000 | 0.1805 | 6.2626 |
| 0.0343 | 8.01 | 4000 | 0.2097 | 6.1773 |
| 0.0046 | 10.01 | 5000 | 0.2292 | 5.9374 |
| 0.0829 | 12.01 | 6000 | 0.1814 | 6.0644 |
| 0.0021 | 14.01 | 7000 | 0.2318 | 5.7096 |
| 0.0288 | 16.01 | 8000 | 0.1871 | 5.5755 |
| 0.1297 | 18.02 | 9000 | 0.1831 | 5.6885 |
| 0.0377 | 20.02 | 10000 | 0.1915 | 5.4088 |
Framework versions
- Transformers 4.33.0.dev0
- PyTorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
Example usage
```python
from transformers import pipeline

hf_model = "HiTZ/whisper-medium-es"
device = 0  # GPU index; use -1 for CPU

pipe = pipeline(
    task="automatic-speech-recognition",
    model=hf_model,
    device=device,
)

result = pipe("audio.wav")
print(result["text"])
```
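For recordings longer than Whisper's 30-second window, the same pipeline object can chunk the input; below is a short sketch using the pipeline's chunking and timestamp options (the file name is a placeholder):

```python
# Chunked transcription of a long recording, with segment timestamps.
result = pipe(
    "long_audio.wav",
    chunk_length_s=30,
    return_timestamps=True,
)
print(result["text"])
for chunk in result["chunks"]:
    print(chunk["timestamp"], chunk["text"])
```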
Ethical considerations and risks
- This model transcribes speech and may process personal data.
- Users should ensure compliance with applicable data protection laws (e.g., GDPR).
- The model should not be used for surveillance or non-consensual audio processing.
Citation
If you use this model in your research, please cite:
```bibtex
@misc{dezuazo2025whisperlmimprovingasrmodels,
  title={Whisper-LM: Improving ASR Models with Language Models for Low-Resource Languages},
  author={Xabier de Zuazo and Eva Navas and Ibon Saratxaga and Inma Hernáez Rioja},
  year={2025},
  eprint={2503.23542},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
Please check the related paper preprint, arXiv:2503.23542, for more details.
License
This model is available under the Apache-2.0 License. You are free to use, modify, and distribute this model as long as you credit the original creators.
Contact and attribution
- Fine-tuning and evaluation: HiTZ/Aholab - Basque Center for Language Technology
- Base model: OpenAI Whisper
- Dataset: Mozilla Common Voice
For questions or issues, please open an issue in the model repository.