---
license: cc-by-nc-4.0
language:
- en
- fr
---
These are the pretrained models used in the paper [The Interpreter Understands Your Meaning: End-to-end Spoken Language Understanding Aided by Speech Translation](https://arxiv.org/abs/2305.09652). They are the most important and most time-consuming models to train, and they are used for further fine-tuning in our experiments. They include the ASR and ST pretrained models, as well as the jointly fine-tuned SLURP models.
|
For details about how to use and fine-tune these models, see the code [here](https://github.com/idiap/translation-aided-slu). |
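
As a minimal sketch of fetching one of these checkpoints programmatically, the snippet below uses the `huggingface_hub` library; the repository ID and checkpoint filename are placeholders, not the actual names, so substitute the ones for the model you need.

```python
# Minimal sketch: download a pretrained checkpoint from the Hugging Face Hub.
# The repo_id and filename below are placeholders -- replace them with the
# actual model repository and checkpoint file you want to fine-tune from.
from huggingface_hub import hf_hub_download

checkpoint_path = hf_hub_download(
    repo_id="your-namespace/your-model-repo",  # placeholder repository ID
    filename="checkpoint.pt",                  # placeholder checkpoint filename
)
print(f"Checkpoint downloaded to {checkpoint_path}")
```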
|