PereLluis13 committed on
Commit
5bc097c
1 Parent(s): 6cc5f19

Update with newer checkpoint epoch 4

Files changed (2)
  1. README.md +23 -6
  2. pytorch_model.bin +1 -1
README.md CHANGED
@@ -28,7 +28,7 @@ model-index:
 
 # Wav2Vec2-Large-XLSR-53-ca
 
-Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on catalan using the [Common Voice](https://huggingface.co/datasets/common_voice)dataset.
+Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Catalan using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
 When using this model, make sure that your speech input is sampled at 16kHz.
 
 ## Usage
@@ -113,15 +113,32 @@ def evaluate(batch):
     return batch
 
 result = test_dataset.map(evaluate, batched=True, batch_size=8)
-
-print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
+import jiwer
+
+# Chunk the WER computation due to memory issues, taken from https://huggingface.co/pcuenq/wav2vec2-large-xlsr-53-es
+def chunked_wer(targets, predictions, chunk_size=None):
+    if chunk_size is None: return jiwer.wer(targets, predictions)
+    start = 0
+    end = chunk_size
+    H, S, D, I = 0, 0, 0, 0
+    while start < len(targets):
+        chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
+        H = H + chunk_metrics["hits"]
+        S = S + chunk_metrics["substitutions"]
+        D = D + chunk_metrics["deletions"]
+        I = I + chunk_metrics["insertions"]
+        start += chunk_size
+        end += chunk_size
+    return float(S + D + I) / float(H + S + D)
+
+print("WER: {:.2f}".format(100 * chunked_wer(result["sentence"], result["pred_strings"], chunk_size=4000)))
 ```
 
-**Test Result**: XX.XX % # TODO: write output of print here. IMPORTANT: Please remember to also replace {wer_result_on_test} at the top of with this value here. tags.
+**Test Result**: 15.20 %
 
 
 ## Training
 
-The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training.
+The Common Voice `train` and `validation` splits were used for training. During the second epoch, training was halted by a memory issue and resumed with a smaller batch size; gradient accumulation steps were scaled up to keep the effective batch size at 32 throughout training.
 
-The script used for training can be found [here](...) # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here.
+The script used for training can be found [here](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py). Slight modifications were made to speed up ordering by length during training; they are described [here](https://discuss.huggingface.co/t/spanish-asr-fine-tuning-wav2vec2/4586/6). Another model trained for Catalan is available [here](https://huggingface.co/ccoreilly/wav2vec2-large-xlsr-catala); it may perform better than this one, since it was trained on extra data and for longer. However, because it used different splits that include part of the Common Voice test set, this model should be used to obtain a baseline on the Common Voice dataset.
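The `chunked_wer` helper in the diff above works because WER decomposes into per-chunk counts of hits, substitutions, deletions, and insertions that can be summed before the final division. As an illustrative, self-contained sketch of that accumulation (using a plain word-level Levenshtein alignment instead of `jiwer`; `word_measures` is a hypothetical stand-in for `jiwer.compute_measures`):

```python
# Illustrative sketch: accumulate hit/substitution/deletion/insertion counts
# per chunk, then combine them at the end, mirroring the jiwer-based
# chunked_wer in the README diff. Not the actual evaluation code.

def word_measures(target, prediction):
    """Count hits, substitutions, deletions, insertions between two sentences."""
    ref, hyp = target.split(), prediction.split()
    # Levenshtein DP over words; each cell is (edits, hits, subs, dels, ins).
    prev = [(j, 0, 0, 0, j) for j in range(len(hyp) + 1)]
    for i in range(1, len(ref) + 1):
        cur = [(i, 0, 0, i, 0)]
        for j in range(1, len(hyp) + 1):
            if ref[i - 1] == hyp[j - 1]:
                e, h, s, d, ins = prev[j - 1]
                cur.append((e, h + 1, s, d, ins))
            else:
                sub, dele, inse = prev[j - 1], prev[j], cur[j - 1]
                best = min(sub, dele, inse, key=lambda t: t[0])
                if best is sub:
                    cur.append((best[0] + 1, best[1], best[2] + 1, best[3], best[4]))
                elif best is dele:
                    cur.append((best[0] + 1, best[1], best[2], best[3] + 1, best[4]))
                else:
                    cur.append((best[0] + 1, best[1], best[2], best[3], best[4] + 1))
        prev = cur
    _, h, s, d, ins = prev[-1]
    return h, s, d, ins

def chunked_wer(targets, predictions, chunk_size=None):
    """Overall WER = (S + D + I) / (H + S + D), accumulated chunk by chunk."""
    H = S = D = I = 0
    step = chunk_size or len(targets)
    for start in range(0, len(targets), step):
        for t, p in zip(targets[start:start + step], predictions[start:start + step]):
            h, s, d, i = word_measures(t, p)
            H += h; S += s; D += d; I += i
    return (S + D + I) / (H + S + D)
```

Because only integer counts are kept per chunk, the result is identical for any `chunk_size`, while peak memory stays bounded by the chunk length.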
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:09a799507e51872c6c4be7ea8e404b5b3613a30c806a9909897b14cc87f934a2
+oid sha256:ad46cc2cbbcb1cef52c8bc48c61155e79d08e54afa3a6385ea8baf54dccb7bfa
 size 1262282327
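The training note in the README diff above keeps the effective batch size at 32 by scaling gradient accumulation steps whenever the per-device batch size drops. A minimal sketch of that bookkeeping (the batch sizes below are hypothetical examples, not the actual training configuration):

```python
# Keep the effective batch size constant when the per-device batch shrinks:
# effective_batch == per_device_batch * accumulation_steps.
# All numbers are hypothetical, not taken from the actual run.

def scaled_accumulation_steps(effective_batch, per_device_batch):
    """Gradient accumulation steps so that
    per_device_batch * steps == effective_batch."""
    if effective_batch % per_device_batch != 0:
        raise ValueError("effective batch must be a multiple of the per-device batch")
    return effective_batch // per_device_batch

# E.g. batch size 8 needs 4 accumulation steps (8 * 4 = 32);
# halving the batch to 4 doubles the steps to 8 (4 * 8 = 32).
print(scaled_accumulation_steps(32, 8))  # 4
print(scaled_accumulation_steps(32, 4))  # 8
```

With `run_common_voice.py`, these quantities correspond to the `--per_device_train_batch_size` and `--gradient_accumulation_steps` arguments.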