ymoslem committed
Commit c02fffe
1 Parent(s): 3dc94d0

Update README.md

Files changed (1)
  1. README.md +17 -6
README.md CHANGED
```diff
@@ -6,9 +6,16 @@ tags:
 metrics:
 - bleu
 - wer
+- chrf
 model-index:
 - name: whisper-tiny-ga2en-v1.2
   results: []
+datasets:
+- ymoslem/IWSLT2023-GA-EN
+language:
+- ga
+- en
+library_name: transformers
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -17,11 +24,11 @@ should probably proofread and complete it, then remove this comment. -->
 # whisper-tiny-ga2en-v1.2
 
 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
-It achieves the following results on the evaluation set:
-- Loss: 2.7774
-- Bleu: 15.38
-- Chrf: 32.04
-- Wer: 93.6515
+The best model (this version) based on ChrF is at checkpoint 900, epoch 13.24, and it achieves the following results on the evaluation set:
+- Loss: 2.7626
+- Bleu: 15.36
+- Chrf: 32.05
+- Wer: 93.1112
 
 ## Model description
 
@@ -37,6 +44,10 @@ More information needed
 
 ## Training procedure
 
+### Experiment
+
+- learning_rate: 0.0001
+
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -71,4 +82,4 @@ The following hyperparameters were used during training:
 - Transformers 4.39.2
 - Pytorch 2.2.1+cu121
 - Datasets 2.18.0
-- Tokenizers 0.15.2
+- Tokenizers 0.15.2
```
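The Wer figure reported in the updated evaluation results is word error rate. As a minimal sketch (not the exact evaluation script used for this card, which likely relies on a library such as `jiwer` or `evaluate`), WER is the word-level Levenshtein edit distance between hypothesis and reference, divided by the reference length:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage: word-level Levenshtein
    distance (substitutions + insertions + deletions) divided by
    the number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + sub,  # substitution or match
            )
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# Two deleted words out of six reference words -> WER of ~33.33
print(wer("the cat sat on the mat", "the cat sat mat"))
```

A WER above 100 (not the case here, but possible) simply means the hypothesis required more edits than there are reference words, e.g. due to many insertions.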