## Training procedure
The model is pre-trained with wav2vec 2.0 (https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train for 70 epochs with a batch size of 128, using the same hyper-parameters across all model sizes.
The downstream models are fine-tuned as follows:
Stage 1:
```
python wenet/bin/train.py --gpu 0,1,2,3,4,5,6,7 \
```
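The `--gpu` flag above lists eight device ids. As a minimal sketch of how such a flag is commonly interpreted in multi-GPU training (the `parse_gpus` helper and the assumption that the batch size of 128 is global are illustrative, not taken from wenet's source):

```python
# Hypothetical sketch: mapping a comma-separated --gpu list to a
# per-process view of training. Not wenet's actual launcher code.

def parse_gpus(arg: str) -> list[int]:
    """Split a comma-separated GPU id string into integer device ids."""
    return [int(x) for x in arg.split(",") if x]

gpus = parse_gpus("0,1,2,3,4,5,6,7")
world_size = len(gpus)             # one worker process per listed device
per_gpu_batch = 128 // world_size  # assumes 128 is the global batch size
```

Under these assumptions, eight workers would each see a per-device batch of 16; whether wenet treats the batch size as global or per-device is not stated in this excerpt.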