---
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
model-index:
- name: MAScIR_elderly_whisper-medium-LoRA-data-augmented
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# MAScIR_elderly_whisper-medium-LoRA-data-augmented

This model is a LoRA fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium); the training dataset is not specified in this card.
It achieves the following results on the evaluation set:
- Loss: 0.0358
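
The model name and tags suggest this repository holds a PEFT LoRA adapter on top of `openai/whisper-medium`. Below is a minimal, hedged inference sketch under that assumption; the adapter repository id is a placeholder and the loading path is not documented by this card.

```python
# Hedged sketch, not an official usage example: assumes this repo contains a PEFT LoRA
# adapter for openai/whisper-medium. The adapter id below is a placeholder.
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

BASE_ID = "openai/whisper-medium"
ADAPTER_ID = "<user>/MAScIR_elderly_whisper-medium-LoRA-data-augmented"  # placeholder repo id

processor = WhisperProcessor.from_pretrained(BASE_ID)
base = WhisperForConditionalGeneration.from_pretrained(BASE_ID)
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attach the LoRA weights
model.eval()

def transcribe(audio_array, sampling_rate=16000):
    """Transcribe a mono 16 kHz waveform (numpy array or list of floats)."""
    inputs = processor(audio_array, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        ids = model.generate(input_features=inputs.input_features)
    return processor.batch_decode(ids, skip_special_tokens=True)[0]
```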

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged reproduction sketch follows this list):
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
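
A sketch of a `Seq2SeqTrainer` setup mirroring these hyperparameters is given below. The LoRA rank, alpha, dropout, and target modules are not reported in this card, so those values are assumptions, as is the 100-step evaluation interval inferred from the results table.

```python
# Hedged reproduction sketch only: mirrors the listed hyperparameters; all LoRA
# settings and the evaluation cadence are assumptions, not values from this card.
from transformers import WhisperForConditionalGeneration, Seq2SeqTrainingArguments, Seq2SeqTrainer
from peft import LoraConfig, get_peft_model

base_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")

lora_config = LoraConfig(
    r=32,                                 # assumption: rank not reported
    lora_alpha=64,                        # assumption
    lora_dropout=0.05,                    # assumption
    target_modules=["q_proj", "v_proj"],  # assumption: common Whisper attention projections
)
model = get_peft_model(base_model, lora_config)

training_args = Seq2SeqTrainingArguments(
    output_dir="MAScIR_elderly_whisper-medium-LoRA-data-augmented",
    learning_rate=1e-3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=200,
    num_train_epochs=3,
    evaluation_strategy="steps",  # assumption: every 100 steps, per the results table below
    eval_steps=100,
    logging_steps=100,
    remove_unused_columns=False,  # commonly needed when wrapping Whisper with PEFT
)

# trainer = Seq2SeqTrainer(model=model, args=training_args,
#                          train_dataset=train_ds, eval_dataset=eval_ds,
#                          data_collator=data_collator)
# trainer.train()
```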

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8119        | 0.09  | 100  | 0.2422          |
| 0.7907        | 0.19  | 200  | 0.2357          |
| 0.6762        | 0.28  | 300  | 0.2311          |
| 0.7081        | 0.38  | 400  | 0.2256          |
| 0.5623        | 0.47  | 500  | 0.1946          |
| 0.569         | 0.57  | 600  | 0.1697          |
| 9.0833        | 0.66  | 700  | 8.1242          |
| 6.1681        | 0.76  | 800  | 5.9288          |
| 5.5565        | 0.85  | 900  | 4.9360          |
| 2.0714        | 0.95  | 1000 | 0.2584          |
| 0.6051        | 1.04  | 1100 | 0.2062          |
| 0.485         | 1.14  | 1200 | 0.1824          |
| 0.637         | 1.23  | 1300 | 0.1522          |
| 0.5521        | 1.33  | 1400 | 0.1371          |
| 0.3999        | 1.42  | 1500 | 0.1331          |
| 0.4788        | 1.52  | 1600 | 0.1344          |
| 0.3738        | 1.61  | 1700 | 0.0952          |
| 0.3046        | 1.71  | 1800 | 0.0871          |
| 0.4335        | 1.8   | 1900 | 0.0770          |
| 0.3876        | 1.9   | 2000 | 0.0654          |
| 0.4226        | 1.99  | 2100 | 0.0638          |
| 0.2651        | 2.09  | 2200 | 0.0612          |
| 0.2075        | 2.18  | 2300 | 0.0541          |
| 0.2464        | 2.28  | 2400 | 0.0473          |
| 0.1797        | 2.37  | 2500 | 0.0482          |
| 0.2393        | 2.47  | 2600 | 0.0428          |
| 0.1764        | 2.56  | 2700 | 0.0396          |
| 0.1398        | 2.66  | 2800 | 0.0390          |
| 0.1855        | 2.75  | 2900 | 0.0382          |
| 0.232         | 2.85  | 3000 | 0.0369          |
| 0.2           | 2.94  | 3100 | 0.0358          |


### Framework versions

- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3