---
language: vi
datasets:
- vivos
- common_voice
metrics:
- wer
pipeline_tag: automatic-speech-recognition
tags:
- audio
- speech
- Transformer
license: cc-by-nc-4.0
model-index:
- name: Wav2vec2 Base Vietnamese 160h
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice vi
      type: common_voice
      args: vi
    metrics:
    - name: Test WER
      type: wer
      value: 0
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 8.0
      type: mozilla-foundation/common_voice_8_0
      args: vi
    metrics:
    - name: Test WER
      type: wer
      value: 0
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: VIVOS
      type: vivos
      args: vi
    metrics:
    - name: Test WER
      type: wer
      value: 0
---
# FINE-TUNE WAV2VEC 2.0 FOR SPEECH RECOGNITION
## Table of contents
1. [Documentation](#documentation)
2. [Installation](#installation)
3. [Usage](#usage)
4. [Logs and Visualization](#logs)
<a name = "documentation" ></a>
## Documentation
If you need a simple way to fine-tune the Wav2vec 2.0 model for speech recognition on your own datasets, you have come to the right place.
<br>
All documents related to this repo can be found here:
- [Wav2vec2ForCTC](https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2ForCTC)
- [Tutorial](https://huggingface.co/blog/fine-tune-wav2vec2-english)
- [Code reference](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py)
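For quick orientation, below is a minimal inference sketch using the `Wav2Vec2ForCTC` API linked above. It is a sketch, not this repo's training code: the checkpoint name is only an example of a CTC model with a matching processor, and the random waveform stands in for real 16 kHz audio.
```
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Example checkpoint; substitute the path to your own fine-tuned model.
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# Dummy one-second waveform at 16 kHz; replace with real audio samples.
speech = np.random.randn(16000).astype(np.float32)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits  # (batch, time, vocab)

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```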
<a name = "installation" ></a>
## Installation
```
pip install -r requirements.txt
```
<a name = "usage" ></a>
## Usage
1. Prepare your dataset
- Your dataset can be in <b>.txt</b> or <b>.csv</b> format.
- The <b>path</b> and <b>transcript</b> columns are compulsory. The <b>path</b> column contains the paths to your stored audio files; depending on where your dataset is located, these can be absolute or relative paths. The <b>transcript</b> column contains the transcript corresponding to each audio path.
- Check out our [data_example.csv](dataset/data_example.csv) file for more information, and see the CSV sketch after this list.
2. Configure the config.toml file
3. Run
- Start training:
```
python train.py -c config.toml
```
- Resume training from a saved checkpoint:
```
python train.py -c config.toml -r
```
- Load a specific model and start training:
```
python train.py -c config.toml -p path/to/your/model.tar
```
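As referenced in step 1, here is a minimal sketch of producing a compatible CSV with pandas. The audio paths and Vietnamese transcripts below are made-up placeholders; only the <b>path</b> and <b>transcript</b> columns are required.
```
import pandas as pd

# Hypothetical rows; point `path` at your actual audio files.
rows = [
    {"path": "audio/sample_001.wav", "transcript": "xin chào"},
    {"path": "audio/sample_002.wav", "transcript": "cảm ơn bạn rất nhiều"},
]
pd.DataFrame(rows).to_csv("data.csv", index=False)
```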
<a name = "logs" ></a>
## Logs and Visualization
Logs generated during training are stored, and you can visualize them with TensorBoard by running:
```
# specify the <name> set in config.toml
tensorboard --logdir ~/saved/<name>

# specify a port, e.g. 8080
tensorboard --logdir ~/saved/<name> --port 8080
```
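If you want to push extra scalars into the same TensorBoard run, a minimal sketch with `torch.utils.tensorboard` follows. This assumes the event files live under `saved/<name>`; the run name and loss values are illustrative only.
```
from torch.utils.tensorboard import SummaryWriter

# Hypothetical run directory; match the <name> you set in config.toml.
writer = SummaryWriter(log_dir="saved/my_run")
for step, loss in enumerate([2.31, 1.87, 1.52]):  # dummy loss values
    writer.add_scalar("train/loss", loss, step)   # shows under the Scalars tab
writer.close()
```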