PereLluis13 committed
Commit 712c47e
Parent(s): 3b1702c

Update README.md

Added WER using the evaluation script.
README.md CHANGED
---
language: el
datasets:
…
    metrics:
    - name: Test WER
      type: wer
      value: 20.89
---

# Wav2Vec2-Large-XLSR-53-greek

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Greek using the [Common Voice](https://huggingface.co/datasets/common_voice) and [CSS10](https://github.com/Kyubyong/css10) datasets.
When using this model, make sure that your speech input is sampled at 16 kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "el", split="test")

processor = Wav2Vec2Processor.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek")
model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
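The `predicted_ids` above come from greedy CTC decoding: `processor.batch_decode` collapses repeated tokens and drops the CTC blank before mapping ids to characters. A toy, self-contained sketch of that collapse rule (the token ids and the blank id `0` are made up for illustration):

```python
BLANK_ID = 0  # hypothetical blank token id

def ctc_collapse(ids):
    """Collapse repeated ids, then drop blanks (greedy CTC post-processing)."""
    out = []
    prev = None
    for i in ids:
        if i != prev and i != BLANK_ID:
            out.append(i)
        prev = i
    return out

# Repeats collapse, but a blank between repeats keeps both occurrences.
print(ctc_collapse([0, 7, 7, 0, 7, 5, 5, 0]))  # -> [7, 7, 5]
```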

## Evaluation

The model can be evaluated as follows on the Greek test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "el", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek")
model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\'\�]'

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run inference on the test set and decode the predictions.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
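The sentence normalization in `speech_file_to_array_fn` can be checked in isolation. A minimal sketch using the same regex on a made-up Greek sentence (the sentence is illustrative, not taken from Common Voice):

```python
import re

# Same punctuation-stripping pattern as in the evaluation script above.
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\'\�]'

sentence = "Γειά σου, Κόσμε!"
cleaned = re.sub(chars_to_ignore_regex, '', sentence).lower()
print(cleaned)  # -> "γειά σου κόσμε"
```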

**Test Result**: 20.89 %
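The reported WER is the word-level edit distance between each prediction and its reference, divided by the number of reference words. A minimal standalone sketch of the per-sentence computation (not the `datasets` metric implementation):

```python
def word_error_rate(prediction: str, reference: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    hyp, ref = prediction.split(), reference.split()
    # d[i][j]: edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One deletion against a 4-word reference:
print(word_error_rate("the cat sat", "the cat sat down"))  # -> 0.25
```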

## Training

…

The Common Voice `train`, `validation`, and CSS10 datasets were used for training.

As suggested by Florian Zimmermeister.

The script used for training can be found in [run_common_voice.py](examples/research_projects/wav2vec2/run_common_voice.py), still pending a PR. The only changes are to `speech_file_to_array_fn`. Batch size was kept at 32 (using `gradient_accumulation_steps`) on one of the [OVH](https://www.ovh.com/) machines, with a V100 GPU (thank you very much, [OVH](https://www.ovh.com/)). The model trained for 40 epochs: the first 20 used the `train+validation` splits, and the `extra` split with the CSS10 data was added at the 20th epoch.
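The batch-size arrangement above can be expressed as a `transformers.TrainingArguments` fragment. This is a hypothetical sketch: the per-device batch size and accumulation factor below are assumptions, since only their product of 32 (and the 40 epochs) is stated above.

```python
from transformers import TrainingArguments

# Hypothetical split of the effective batch size: any
# per_device_train_batch_size * gradient_accumulation_steps
# combination equal to 32 matches the description above.
training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xlsr-53-greek",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,  # effective batch size: 8 * 4 = 32
    num_train_epochs=40,
)
```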