Commit 7f3cc26
1 Parent(s): 8301504
Update README.md
README.md CHANGED
@@ -7,12 +7,44 @@ language: et
 license: cc-by-4.0
 ---

-
+# Estonian Espnet2 ASR model

+## Model description
 This is a general-purpose Estonian ASR model trained in the Lab of Language Technology at TalTech.

+## Intended uses & limitations

-
+This model is intended for general-purpose speech recognition, such as broadcast conversations, interviews, talks, etc.
+
+
+## How to use
+```python
+import soundfile
+from espnet2.bin.asr_inference import Speech2Text
+# load the pretrained model from the Hugging Face Hub
+model = Speech2Text.from_pretrained(
+    "TalTechNLP/espnet2_estonian"
+)
+# read an audio file and run recognition on it
+speech, rate = soundfile.read("speech.wav")
+text, *_ = model(speech)
+```
+
+#### Limitations and bias
+
+## Training data
+
+## Training procedure
+
+
+## Evaluation results
+
+
+
+### BibTeX entry and citation info
+
+
+#### Citing ESPnet
 ```BibTex
 @inproceedings{watanabe2018espnet,
 author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},