mtspeech committed
Commit 643e234 · verified · 1 Parent(s): ecadfd3

Update README.md

Files changed (1):
  1. README.md (+3 -3)
README.md CHANGED
@@ -14,13 +14,13 @@ tags:
  - speech-recognition
  ---
 
- # MooER (摩耳): LLM-based Speech Recognition and Translation Models from Moore Threads
+ # MooER (摩耳): an LLM-based Speech Recognition and Translation Model from Moore Threads
 
  **Online Demo**: https://mooer-speech.mthreads.com:10077/
 
  ## 📖 Introduction
 
- We introduce **MooER (摩耳)**: LLM-based speech recognition and translation models developed by Moore Threads. With the *MooER* framework, you can transcribe the speech into text (speech recognition or, ASR), and translate it into other languages (speech translation or, AST) in a end-to-end manner. The performance of *MooER* is demonstrated in the subsequent section, along with our insights into model configurations, training strategies, and more, provided in our [technical report](https://arxiv.org/abs/2408.05101).
+ We introduce **MooER (摩耳)**: an LLM-based speech recognition and translation model developed by Moore Threads. With the *MooER* framework, you can transcribe the speech into text (speech recognition or, ASR), and translate it into other languages (speech translation or, AST) in a end-to-end manner. The performance of *MooER* is demonstrated in the subsequent section, along with our insights into model configurations, training strategies, and more, provided in our [technical report](https://arxiv.org/abs/2408.05101).
 
  For the usage of the model files, please refer to our [GitHub](https://github.com/MooreThreads/MooER)
 
@@ -239,7 +239,7 @@ If you find MooER useful for your research, please 🌟 this repo and cite our work
 
  ```bibtex
  @article{liang2024mooer,
- title = {MooER: LLM-based Speech Recognition and Translation Models from Moore Threads},
+ title = {MooER: LLM-based Speech Recognition and Translation Models from Moore Thread},
  author = {Zhenlin Liang, Junhao Xu, Yi Liu, Yichao Hu, Jian Li, Yajun Zheng, Meng Cai, Hua Wang},
  journal = {arXiv preprint arXiv:2408.05101},
  url = {https://arxiv.org/abs/2408.05101},