comodoro committed on
Commit: 2fcb483
1 Parent(s): 91b4cda

Update README.md

Files changed (1)
  1. README.md +24 -21
README.md CHANGED
@@ -1,30 +1,33 @@
- language: cs
- datasets:
- - common_voice
- metrics:
- - wer
  tags:
- - audio
  - automatic-speech-recognition
- - speech
  - xlsr-fine-tuning-week
- license: apache-2.0
  model-index:
- - name: {Czech Wav2Vec2 XLSR 300M}
  results:
  - task:
- name: Speech Recognition
  type: automatic-speech-recognition
  dataset:
- name: Common Voice cs
  type: common_voice
  args: cs
  metrics:
  - name: Test WER
  type: wer
- value: {wer_result_on_test} #TODO (IMPORTANT): replace {wer_result_on_test} with the WER error rate you achieved on the common_voice test set. It should be in the format XX.XX (don't add the % sign here). **Please** remember to fill out this value after you evaluated your model, so that your model appears on the leaderboard. If you fill out this model card before evaluating your model, please remember to edit the model card afterward to fill in your value
  ---
-
  # Wav2Vec2-Large-XLSR-53-Czech

  Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
@@ -67,9 +70,9 @@ print("Reference:", test_dataset[:2]["sentence"])
  ```


- ## Evaluation#TODO

- The model can be evaluated as follows on the Czech test data of Common Voice.


  ```python
@@ -79,14 +82,14 @@ from datasets import load_dataset, load_metric
  from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
  import re

- test_dataset = load_dataset("common_voice", "{lang_id}", split="test") #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site.
  wer = load_metric("wer")

- processor = Wav2Vec2Processor.from_pretrained("{model_id}") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
- model = Wav2Vec2ForCTC.from_pretrained("{model_id}") #TODO: replace {model_id} with your model id. The model id consists of {your_username}/{your_modelname}, *e.g.* `elgeish/wav2vec2-large-xlsr-53-arabic`
  model.to("cuda")

- chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' # TODO: adapt this list to include all special characters you removed from the data
  resampler = torchaudio.transforms.Resample(48_000, 16_000)

  # Preprocessing the datasets.
@@ -116,11 +119,11 @@ result = test_dataset.map(evaluate, batched=True, batch_size=8)
  print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
  ```

- **Test Result**: XX.XX % # TODO: write output of print here. IMPORTANT: Please remember to also replace {wer_result_on_test} at the top of with this value here. tags.


  ## Training

  The Common Voice `train` and `validation` datasets were used for training

- The script used for training can be found [here](...) # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here.
 
+ ---
+ language:
+ - cs
+ license: apache-2.0
  tags:
  - automatic-speech-recognition
+ - common_voice
+ - generated_from_trainer
+ - robust-speech-event
  - xlsr-fine-tuning-week
+ datasets:
+ - common_voice
  model-index:
+ - name: Czech comodoro Wav2Vec2 XLSR 300M CV6.1
  results:
  - task:
+ name: Automatic Speech Recognition
  type: automatic-speech-recognition
  dataset:
+ name: Common Voice 6.1
  type: common_voice
  args: cs
  metrics:
  - name: Test WER
  type: wer
+ value: 22.20
+ - name: Test CER
+ type: cer
+ value: 5.1
  ---
  # Wav2Vec2-Large-XLSR-53-Czech

  Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Czech using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
 
  ```


+ ## Evaluation

+ The model can be evaluated as follows on the Czech test data of Common Voice 6.1


  ```python
 
  from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
  import re

+ test_dataset = load_dataset("common_voice", "cs", split="test")
  wer = load_metric("wer")

+ processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
+ model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
  model.to("cuda")

+ chars_to_ignore_regex = '[\,\?\.\!\-\;\:\/\"\“\„\%\”\�\–\'\`\«\»\—\’\…]'
  resampler = torchaudio.transforms.Resample(48_000, 16_000)

  # Preprocessing the datasets.
 
  print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
  ```

+ **Test Result**: 22.20 %


  ## Training

  The Common Voice `train` and `validation` datasets were used for training

+ # TODO The script used for training can be found [here](...)
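
The context lines of this diff skip the body of the evaluation script between `# Preprocessing the datasets.` and the final WER print. For reference, below is a minimal sketch of what the elided part typically looks like in XLSR fine-tuning week model cards, combined with the lines visible in the diff. The helper names `speech_file_to_array_fn` and the exact body of `evaluate` are assumptions based on the standard template, not text from this commit; only `evaluate`, the model id `comodoro/wav2vec2-xls-r-300m-cs`, and the surrounding setup lines appear in the diff itself.

```python
# Sketch of the full evaluation flow; helper bodies are assumed from the
# common XLSR fine-tuning week template, not taken from this commit.
import re

import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "cs", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
model = Wav2Vec2ForCTC.from_pretrained("comodoro/wav2vec2-xls-r-300m-cs")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\/\"\“\„\%\”\�\–\'\`\«\»\—\’\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)


def speech_file_to_array_fn(batch):
    # Strip punctuation, lower-case the transcript, and resample audio to 16 kHz.
    batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch


test_dataset = test_dataset.map(speech_file_to_array_fn)


def evaluate(batch):
    # Batched greedy (argmax) CTC decoding on GPU.
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"),
                       attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch


result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```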