---
license: cc-by-nc-4.0
language:
- en
- fr
---

These are the pretrained models used in the paper
[The Interpreter Understands Your Meaning: End-to-end Spoken Language Understanding Aided by Speech Translation](https://arxiv.org/abs/2305.09652).
They are the most important and time-consuming models to train, and they serve as the starting points for further fine-tuning in our experiments; they include the ASR and ST pretrained models as well as the jointly fine-tuned SLURP models.
For details on how to use and fine-tune these models, see the code [here](https://github.com/idiap/translation-aided-slu).
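
As a minimal sketch of getting started, the checkpoint files in this repository can be fetched with `huggingface_hub` and then passed to the fine-tuning code linked above. The `repo_id` and `filename` below are placeholders (this card does not list the exact file names), and the assumption that the checkpoints are standard PyTorch files is ours; check the repository's file listing and the GitHub code for the actual names and loading procedure.

```python
# Sketch: download a pretrained checkpoint from this repository and inspect it
# before fine-tuning with https://github.com/idiap/translation-aided-slu.
# NOTE: repo_id and filename are placeholders; replace them with this
# repository's actual id and the checkpoint file you want to use.
from huggingface_hub import hf_hub_download
import torch

ckpt_path = hf_hub_download(
    repo_id="<org>/<this-repo>",      # placeholder: this model repository's id
    filename="asr_pretrained.pt",     # placeholder: e.g. one of the ASR/ST checkpoints
)

# Assuming the file is a regular PyTorch checkpoint (an assumption, not stated
# on this card), load it on CPU and peek at its top-level keys.
state = torch.load(ckpt_path, map_location="cpu")
if isinstance(state, dict):
    print(list(state)[:10])
```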