Whisper Small Galician

Model summary

Whisper Small Galician is an automatic speech recognition (ASR) model for Galician (gl) speech. It is fine-tuned from openai/whisper-small on the Galician subset of Mozilla Common Voice 13.0 and achieves a Word Error Rate (WER) of 10.99% on the Common Voice evaluation split.

The model offers a good balance between transcription accuracy and computational cost, making it suitable for small- to medium-scale Galician ASR tasks.


Model description

  • Architecture: Transformer-based encoder–decoder (Whisper)
  • Base model: openai/whisper-small
  • Language: Galician (gl)
  • Task: Automatic Speech Recognition (ASR)
  • Output: Text transcription in Galician
  • Decoding: Autoregressive sequence-to-sequence decoding

The small variant leverages Whisper's multilingual pretraining; fine-tuning on Galician speech data yields accurate transcription at moderate resource cost, suitable for research, education, and media applications.
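For finer control than the pipeline shown in "How to use" below, the checkpoint can also be loaded with the Transformers Whisper classes directly. The following is a minimal sketch: the file name audio.wav is a placeholder, and librosa is only one of several ways to obtain a 16 kHz mono waveform.

import librosa
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Load the fine-tuned checkpoint and its processor (feature extractor + tokenizer).
processor = WhisperProcessor.from_pretrained("HiTZ/whisper-small-gl")
model = WhisperForConditionalGeneration.from_pretrained("HiTZ/whisper-small-gl")

# Force Galician transcription so the multilingual decoder does not
# auto-detect a different language.
forced_ids = processor.get_decoder_prompt_ids(language="gl", task="transcribe")

# Load a mono waveform at the 16 kHz rate Whisper expects ("audio.wav" is a placeholder).
waveform, _ = librosa.load("audio.wav", sr=16000)
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    generated = model.generate(inputs.input_features, forced_decoder_ids=forced_ids)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])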


Intended use

Primary use cases

  • Accurate transcription of Galician audio recordings
  • Offline or batch ASR pipelines
  • Research and development in Galician ASR
  • Media, educational, and archival transcription tasks

Intended users

  • Researchers working on Galician or low-resource ASR
  • Developers building Galician speech applications
  • Academic or institutional users

Out-of-scope use

  • Real-time or low-latency ASR without optimization
  • Speech translation tasks
  • Safety-critical applications without validation

Limitations and known issues

  • Performance may degrade on:
    • Noisy or low-quality recordings
    • Conversational or spontaneous speech
    • Accents underrepresented in Common Voice
  • Transcription errors may still occur under challenging acoustic conditions
  • Dataset biases from Common Voice may be reflected in outputs

Users are encouraged to evaluate the model on their own data before deployment.


Training and evaluation data

Training data

  • Dataset: Mozilla Common Voice 13.0 (Galician subset)
  • Data type: Crowd-sourced, read speech
  • Preprocessing (see the sketch below):
    • Audio resampled to 16 kHz
    • Text normalized with the Whisper tokenizer
    • Invalid or problematic samples filtered out
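A minimal data-loading sketch of the steps above, assuming the Hugging Face datasets library and an account that has accepted the Common Voice terms on the Hub; the emptiness filter is only an illustration of the filtering step:

from datasets import load_dataset, Audio

# Galician subset of Common Voice 13.0; requires accepting the dataset
# terms on the Hugging Face Hub and an authenticated session.
cv_gl = load_dataset("mozilla-foundation/common_voice_13_0", "gl", split="train")

# Resample every clip to the 16 kHz rate Whisper expects.
cv_gl = cv_gl.cast_column("audio", Audio(sampling_rate=16_000))

# Illustrative filter: drop rows with an empty reference sentence.
cv_gl = cv_gl.filter(lambda ex: len(ex["sentence"].strip()) > 0)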

Evaluation data

  • Dataset: Mozilla Common Voice 13.0 (Galician evaluation split)
  • Metric: Word Error Rate (WER)

Evaluation results

Metric       Value
WER (eval)   10.99%

This reflects the expected performance of a small Whisper model fine-tuned for Galician.
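To reproduce the metric on your own held-out data, WER can be computed with the evaluate library. The file name and reference transcript below are placeholders:

import evaluate
from transformers import pipeline

wer_metric = evaluate.load("wer")  # jiwer-backed WER implementation
pipe = pipeline("automatic-speech-recognition", model="HiTZ/whisper-small-gl")

# Placeholder (audio file, reference transcript) pairs; substitute your own data.
samples = [("clip_0001.wav", "un exemplo de transcrición de referencia")]

predictions = [pipe(path)["text"] for path, _ in samples]
references = [ref for _, ref in samples]

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {100 * wer:.2f}%")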


Training procedure

Training hyperparameters

  • Learning rate: 1e-5
  • Optimizer: Adam (β1=0.9, β2=0.999, ε=1e-8)
  • LR scheduler: Linear
  • Warmup steps: 500
  • Training steps: 5,000
  • Train batch size: 64
  • Evaluation batch size: 32
  • Seed: 42
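As an illustration, these hyperparameters map onto Transformers' Seq2SeqTrainingArguments roughly as follows. The output directory and evaluation cadence are assumptions (the eval steps match the results table below), and the listed batch size may instead be an effective size reached via gradient accumulation:

from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-gl",   # assumed, not stated in the card
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    per_device_train_batch_size=64,    # may be an effective size via gradient accumulation
    per_device_eval_batch_size=32,
    seed=42,
    evaluation_strategy="steps",
    eval_steps=1000,                   # matches the evaluation cadence in the table below
    predict_with_generate=True,        # decode with generate() so WER can be computed
)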

Training results (summary)

Training Loss  Epoch  Step  Validation Loss  WER
0.0214         4.04   1000  0.2737           11.5394
0.0024         9.04   2000  0.3159           11.0565
0.0010         14.04  3000  0.3370           10.9944
0.0007         19.04  4000  0.3497           11.0151
0.0006         24.04  5000  0.3555           10.9875

Framework versions

  • Transformers 4.33.0.dev0
  • PyTorch 2.0.1+cu117
  • Datasets 2.14.4
  • Tokenizers 0.13.3

How to use

from transformers import pipeline

model_id = "HiTZ/whisper-small-gl"
device = 0  # GPU index; set to -1 to run on CPU

pipe = pipeline(
    task="automatic-speech-recognition",
    model=model_id,
    device=device,
)

# Transcribe an audio file; common formats are decoded automatically
# (ffmpeg must be available on the system).
result = pipe("audio.wav")
print(result["text"])
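Whisper natively processes 30-second windows. For longer recordings, the pipeline can chunk the input and stitch the transcripts back together; the file name below is a placeholder:

# Chunked long-form transcription with segment timestamps.
result = pipe("long_audio.wav", chunk_length_s=30, return_timestamps=True)
print(result["text"])
for chunk in result["chunks"]:
    print(chunk["timestamp"], chunk["text"])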

Ethical considerations and risks

  • This model transcribes speech and may process personal data.
  • Users should ensure compliance with applicable data protection laws (e.g., GDPR).
  • The model should not be used for surveillance or non-consensual audio processing.

Citation

If you use this model in your research, please cite:

@misc{dezuazo2025whisperlmimprovingasrmodels,
  title={Whisper-LM: Improving ASR Models with Language Models for Low-Resource Languages},
  author={Xabier de Zuazo and Eva Navas and Ibon Saratxaga and Inma Hernáez Rioja},
  year={2025},
  eprint={2503.23542},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}

For more details, please see the related preprint, arXiv:2503.23542.


License

This model is released under the Apache 2.0 license. You are free to use, modify, and distribute it, provided you credit the original creators.


Contact and attribution

  • Fine-tuning and evaluation: HiTZ/Aholab - Basque Center for Language Technology
  • Base model: OpenAI Whisper
  • Dataset: Mozilla Common Voice

For questions or issues, please open an issue in the model repository.
