tarteel-ai/whisper-base-ar-quran

This model is a fine-tuned version of openai/whisper-base; the training dataset is not specified in the card. It achieves the following results on the evaluation set:

  • Loss: 0.0839
  • Wer: 5.7544
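The Wer figure above is the word error rate, i.e. the word-level edit distance between the model's transcription and the reference, divided by the number of reference words (here expressed as a percentage). As a minimal illustration of how such a score is computed (model cards typically use a library such as `jiwer` or `evaluate` rather than hand-rolled code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage: word-level Levenshtein
    distance divided by the number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return 100.0 * dp[len(ref)][len(hyp)] / len(ref)

# One substitution and one deletion against a 4-word reference -> 50.0
print(wer("a b c d", "a x c"))
```

A Wer of 5.7544 therefore means roughly 5.75 word-level errors per 100 reference words on the evaluation set.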

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • total_train_batch_size: 128
  • total_eval_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP
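Two of these values are derived rather than set directly: the total train batch size is the per-device batch size times the number of devices (16 × 8 = 128), and `lr_scheduler_type: linear` with 500 warmup steps means the learning rate ramps linearly from 0 to 1e-4 over the first 500 steps, then decays linearly to 0 at step 5000 (the behavior of `get_linear_schedule_with_warmup` in `transformers`). A small sketch of that schedule:

```python
def linear_schedule_lr(step: int,
                       base_lr: float = 1e-4,
                       warmup_steps: int = 500,
                       total_steps: int = 5000) -> float:
    """Linear warmup to base_lr over warmup_steps, then linear
    decay to 0 at total_steps (mirrors lr_scheduler_type=linear)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# Effective (total) train batch size: per-device batch * num devices
total_train_batch_size = 16 * 8  # -> 128

print(linear_schedule_lr(250))   # mid-warmup: half of base_lr
print(linear_schedule_lr(500))   # peak: base_lr
print(linear_schedule_lr(5000))  # end of training: 0.0
```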

Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1092        | 0.05  | 250  | 0.1969          | 13.3890 |
| 0.0361        | 0.1   | 500  | 0.1583          | 10.6375 |
| 0.0192        | 0.15  | 750  | 0.1109          | 8.8468  |
| 0.0144        | 0.2   | 1000 | 0.1157          | 7.9754  |
| 0.008         | 0.25  | 1250 | 0.1000          | 7.5360  |
| 0.0048        | 1.03  | 1500 | 0.0933          | 6.8227  |
| 0.0113        | 1.08  | 1750 | 0.0955          | 6.9638  |
| 0.0209        | 1.13  | 2000 | 0.0824          | 6.3586  |
| 0.0043        | 1.18  | 2250 | 0.0830          | 6.3444  |
| 0.002         | 1.23  | 2500 | 0.1015          | 6.3025  |
| 0.0013        | 2.01  | 2750 | 0.0863          | 6.0639  |
| 0.0014        | 2.06  | 3000 | 0.0905          | 6.0213  |
| 0.0018        | 2.11  | 3250 | 0.0864          | 6.0293  |
| 0.0008        | 2.16  | 3500 | 0.0887          | 5.9308  |
| 0.0029        | 2.21  | 3750 | 0.0777          | 5.9159  |
| 0.0022        | 2.26  | 4000 | 0.0847          | 5.8749  |
| 0.0005        | 3.05  | 4250 | 0.0827          | 5.8352  |
| 0.0003        | 3.1   | 4500 | 0.0826          | 5.7800  |
| 0.0006        | 3.15  | 4750 | 0.0833          | 5.7625  |
| 0.0003        | 3.2   | 5000 | 0.0839          | 5.7544  |

Framework versions

  • Transformers 4.26.0.dev0
  • Pytorch 1.13.0+cu117
  • Datasets 2.7.1.dev0
  • Tokenizers 0.13.2