whisper-small-sesotho

This model is a fine-tuned version of openai/whisper-small. The fine-tuning dataset is not named in this card (the model name suggests a Sesotho speech corpus). It achieves the following results on the evaluation set:

  • Loss: 0.5607
  • WER: 0.4212

Model description

More information needed

Intended uses & limitations

More information needed
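
Although the card leaves intended uses unspecified, the model can be loaded for Sesotho speech recognition with the standard Transformers ASR pipeline. A minimal inference sketch; the audio filename is a hypothetical placeholder:

```python
# Minimal inference sketch; "example_sesotho.wav" is a placeholder.
# The ASR pipeline decodes and resamples input audio for Whisper.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="misterkissi/whisper-small-sesotho",
)

result = asr("example_sesotho.wav")  # hypothetical audio file
print(result["text"])
```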

Training and evaluation data

More information needed
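
The training data is not identified. Purely for illustration, a typical preparation step for Whisper fine-tuning loads an audio dataset and resamples it to 16 kHz; the dataset ID below is an assumption:

```python
# Illustrative only: the actual dataset behind this model is not
# specified in the card. "path/to/sesotho_speech" is a hypothetical ID.
from datasets import Audio, load_dataset

ds = load_dataset("path/to/sesotho_speech")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))  # Whisper expects 16 kHz audio
```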

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP
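
The card does not include the training script; a sketch of how these hyperparameters would map onto Seq2SeqTrainingArguments in the usual Transformers Seq2SeqTrainer recipe (the output directory is an assumption):

```python
# Hypothetical reconstruction from the hyperparameters listed above;
# the actual training configuration is not part of the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-sesotho",  # assumed output path
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    seed=42,
    optim="adamw_torch",  # AdamW; betas=(0.9, 0.999) and eps=1e-8 are the defaults
    fp16=True,            # "Native AMP" mixed-precision training
)
```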

Training results

| Training Loss | Epoch   | Step | Validation Loss | WER    |
|--------------:|--------:|-----:|----------------:|-------:|
| 1.5799        | 8.4746  | 500  | 0.4938          | 0.4363 |
| 0.0106        | 16.9492 | 1000 | 0.5006          | 0.3685 |
| 0.0018        | 25.4237 | 1500 | 0.5144          | 0.3372 |
| 0.001         | 33.8983 | 2000 | 0.5209          | 0.4070 |
| 0.0002        | 42.3729 | 2500 | 0.5338          | 0.4846 |
| 0.0002        | 50.8475 | 3000 | 0.5435          | 0.4885 |
| 0.0001        | 59.3220 | 3500 | 0.5503          | 0.4632 |
| 0.0001        | 67.7966 | 4000 | 0.5554          | 0.4104 |
| 0.0001        | 76.2712 | 4500 | 0.5590          | 0.4217 |
| 0.0001        | 84.7458 | 5000 | 0.5607          | 0.4212 |
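
For reference, WER here is the standard word error rate: (substitutions + deletions + insertions) divided by the number of reference words. A sketch of computing it with the Hugging Face evaluate library (the card does not state which tool produced the numbers above, and the strings below are placeholders):

```python
# Sketch of WER computation; the example strings are placeholders,
# not real model outputs.
import evaluate

wer_metric = evaluate.load("wer")
score = wer_metric.compute(
    predictions=["hypothetical transcribed text"],
    references=["hypothetical reference transcript"],
)
print(f"WER: {score:.4f}")
```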

Framework versions

  • Transformers 4.52.4
  • Pytorch 2.6.0+cu124
  • Datasets 2.14.4
  • Tokenizers 0.21.2