asierhv committed (verified)
Commit 048b950 · 1 Parent(s): 5fa34b6

added description and "how to use" example

Files changed (1):
  1. README.md +129 -41
README.md CHANGED
@@ -28,47 +28,93 @@ model-index:
  value: 5.070020005715919
---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
# Whisper Large Catalan

- This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the mozilla-foundation/common_voice_13_0 ca dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.1458
- - Wer: 5.0700

## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

## Training and evaluation data

- More information needed

## Training procedure

### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 1e-05
- - train_batch_size: 32
- - eval_batch_size: 16
- - seed: 42
- - gradient_accumulation_steps: 2
- - total_train_batch_size: 64
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 500
- - training_steps: 20000
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1059 | 1.02 | 1000 | 0.1744 | 7.6342 |
| 0.0159 | 3.02 | 2000 | 0.1943 | 7.3850 |
@@ -91,27 +137,57 @@ The following hyperparameters were used during training:
| 0.0356 | 37.0 | 19000 | 0.1458 | 5.0700 |
| 0.0132 | 39.0 | 20000 | 0.1310 | 5.1941 |

- ### Framework versions
-
- - Transformers 4.33.0.dev0
- - Pytorch 2.0.1+cu117
- - Datasets 2.14.4
- - Tokenizers 0.13.3

## Citation

- If you use these models in your research, please cite:

```bibtex
@misc{dezuazo2025whisperlmimprovingasrmodels,
-   title={Whisper-LM: Improving ASR Models with Language Models for Low-Resource Languages},
-   author={Xabier de Zuazo and Eva Navas and Ibon Saratxaga and Inma Hernáez Rioja},
-   year={2025},
-   eprint={2503.23542},
-   archivePrefix={arXiv},
-   primaryClass={cs.CL},
-   url={https://arxiv.org/abs/2503.23542},
}
```
@@ -119,9 +195,21 @@ Please, check the related paper preprint in
[arXiv:2503.23542](https://arxiv.org/abs/2503.23542)
for more details.

- ## Licensing

This model is available under the
[Apache-2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
You are free to use, modify, and distribute this model as long as you credit
- the original creators.

  value: 5.070020005715919
---

# Whisper Large Catalan

+ ## Model summary
+
+ **Whisper Large Catalan** is an automatic speech recognition (ASR) model for **Catalan (ca)** speech. It is fine-tuned from [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the **Catalan subset of Mozilla Common Voice 13.0**, achieving a **Word Error Rate (WER) of 5.070%** on the evaluation split.
+
+ This model is suitable for high-accuracy transcription and offers greater model capacity than the medium variant.
+
+ ---

## Model description

+ * **Architecture:** Transformer-based encoder–decoder (Whisper)
+ * **Base model:** openai/whisper-large
+ * **Language:** Catalan (ca)
+ * **Task:** Automatic Speech Recognition (ASR)
+ * **Output:** Text transcription in Catalan
+ * **Decoding:** Autoregressive sequence-to-sequence decoding
+
+ Fine-tuned to improve transcription quality on Catalan audio.
+
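To make the decoding bullet above concrete, here is a minimal sketch of the same transcription flow at the processor/model level rather than through the pipeline shown later. The repository ID and audio file name are placeholders, and librosa is used only as one convenient way to load 16 kHz audio; none of these come from the model card itself.

```python
# Minimal sketch: explicit feature extraction and autoregressive decoding.
import librosa
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model_id = "HiTZ/whisper-large-ca"  # placeholder repo ID, as in the usage example below
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

# Load audio at the 16 kHz sampling rate the feature extractor expects.
speech, _ = librosa.load("audio.wav", sr=16_000)

# Convert the waveform into log-Mel input features.
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

# Force Catalan transcription instead of language detection or translation.
forced_ids = processor.get_decoder_prompt_ids(language="ca", task="transcribe")

# Autoregressive generation of token IDs, decoded back to text.
predicted_ids = model.generate(inputs.input_features, forced_decoder_ids=forced_ids)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```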
+ ---
+
+ ## Intended use
+
+ ### Primary use cases
+
+ * High-accuracy transcription of Catalan audio
+ * Research and development in Catalan ASR
+ * Media, educational, or accessibility applications
+
+ ### Out-of-scope use
+
+ * Real-time transcription without optimization
+ * Speech translation
+ * Safety-critical applications without further validation
+
+ ---
+
+ ## Limitations and known issues
+
+ * Performance may degrade on:
+   * Noisy or low-quality recordings
+   * Conversational or spontaneous speech
+   * Regional dialects not well represented in Common Voice
+ * Occasional transcription errors on difficult audio
+
+ ---

## Training and evaluation data

+ * **Dataset:** Mozilla Common Voice 13.0 (Catalan subset)
+ * **Data type:** Crowd-sourced, read speech
+ * **Preprocessing:**
+   * Audio resampled to 16 kHz
+   * Text normalized using the Whisper tokenizer
+   * Filtering of invalid or problematic samples
+ * **Evaluation metric:** Word Error Rate (WER) on the held-out evaluation set
+
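As a rough illustration of the preprocessing above, the sketch below loads the Catalan subset of Common Voice 13.0 and resamples it to 16 kHz with the `datasets` library. The split name is an assumption, and the exact text normalization and filtering used for training are not reproduced here.

```python
# Sketch: load the Catalan subset of Common Voice 13.0 and resample to 16 kHz.
# Accessing this dataset requires accepting its terms on the Hugging Face Hub.
from datasets import Audio, load_dataset

cv_ca = load_dataset("mozilla-foundation/common_voice_13_0", "ca", split="test")

# Decode audio at 16 kHz, the rate Whisper's feature extractor expects.
cv_ca = cv_ca.cast_column("audio", Audio(sampling_rate=16_000))

sample = cv_ca[0]["audio"]
print(sample["array"].shape, sample["sampling_rate"])
```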
+ ---
+
+ ## Evaluation results
+
+ | Metric | Value |
+ | ---------- | ---------- |
+ | WER (eval) | **5.070%** |
+
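For reference, a WER figure of the kind reported above can be computed with the `evaluate` library; the prediction and reference lists below are placeholders, not outputs of this model.

```python
# Sketch: word error rate (WER) computed with the `evaluate` library.
# `predictions` would normally come from running the model on the eval split.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["bon dia a tothom"]  # placeholder model outputs
references = ["bon dia a tothom"]   # placeholder ground-truth transcripts

# `compute` returns a fraction; multiply by 100 to express it as a percentage.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.3f}%")
```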
+ ---

## Training procedure

### Training hyperparameters

+ * Learning rate: 1e-5
+ * Optimizer: Adam (β1=0.9, β2=0.999, ε=1e-8)
+ * LR scheduler: Linear
+ * Warmup steps: 500
+ * Training steps: 20,000
+ * Train batch size: 32
+ * Eval batch size: 16
+ * Gradient accumulation steps: 2 (effective train batch size: 64)
+ * Seed: 42
+
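These settings map roughly onto Hugging Face `Seq2SeqTrainingArguments` as sketched below. This is for orientation only, not the exact training script; the output directory and any option not listed in the card (such as mixed precision or evaluation cadence) are assumptions.

```python
# Sketch: the hyperparameters above expressed as Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-ca",  # assumption, not from the card
    learning_rate=1e-5,
    warmup_steps=500,
    max_steps=20_000,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,    # effective train batch size: 64
    lr_scheduler_type="linear",
    seed=42,
)
```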
+ ### Training results (summary)
+
+ | Training Loss | Epoch | Step | Validation Loss | WER |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1059 | 1.02 | 1000 | 0.1744 | 7.6342 |
| 0.0159 | 3.02 | 2000 | 0.1943 | 7.3850 |
| 0.0356 | 37.0 | 19000 | 0.1458 | 5.0700 |
| 0.0132 | 39.0 | 20000 | 0.1310 | 5.1941 |

+ ---
+
+ ## Framework versions
+
+ - Transformers 4.33.0.dev0
+ - PyTorch 2.0.1+cu117
+ - Datasets 2.14.4
+ - Tokenizers 0.13.3
+
+ ---
+
+ ## How to use
+
+ ```python
+ from transformers import pipeline
+
+ hf_model = "HiTZ/whisper-large-ca"  # replace with actual repo ID
+ device = 0  # set to -1 for CPU
+
+ pipe = pipeline(
+     task="automatic-speech-recognition",
+     model=hf_model,
+     device=device,
+ )
+
+ result = pipe("audio.wav")
+ print(result["text"])
+ ```
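For recordings longer than Whisper's 30-second window, the same pipeline can be run in chunked mode; the sketch below reuses the placeholder repository ID from the example above.

```python
# Sketch: chunked long-form transcription with the same pipeline.
# chunk_length_s splits long audio into 30 s windows that are transcribed
# and stitched back together; return_timestamps adds per-chunk timings.
from transformers import pipeline

pipe = pipeline(
    task="automatic-speech-recognition",
    model="HiTZ/whisper-large-ca",  # placeholder repo ID, as above
    chunk_length_s=30,
    device=0,  # set to -1 for CPU
)

result = pipe("long_audio.wav", return_timestamps=True)
print(result["text"])
for chunk in result["chunks"]:
    print(chunk["timestamp"], chunk["text"])
```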
+
+ ---
+
+ ## Ethical considerations and risks
+
+ * This model transcribes speech and may process personal data.
+ * Users should ensure compliance with applicable data protection laws (e.g., GDPR).
+ * The model should not be used for surveillance or non-consensual audio processing.
+
+ ---

## Citation

+ If you use this model in your research, please cite:

```bibtex
@misc{dezuazo2025whisperlmimprovingasrmodels,
+   title={Whisper-LM: Improving ASR Models with Language Models for Low-Resource Languages},
+   author={Xabier de Zuazo and Eva Navas and Ibon Saratxaga and Inma Hernáez Rioja},
+   year={2025},
+   eprint={2503.23542},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
}
```

Please, check the related paper preprint in
[arXiv:2503.23542](https://arxiv.org/abs/2503.23542)
for more details.

+ ---
+
+ ## License

This model is available under the
[Apache-2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
You are free to use, modify, and distribute this model as long as you credit
+ the original creators.
+
+ ---
+
+ ## Contact and attribution
+
+ * Fine-tuning and evaluation: HiTZ/Aholab - Basque Center for Language Technology
+ * Base model: OpenAI Whisper
+ * Dataset: Mozilla Common Voice
+
+ For questions or issues, please open an issue in the model repository.