Update README.md
README.md

```diff
@@ -78,6 +78,10 @@ model-index:
 
 # SpeechLLM
 
+[![github](https://img.shields.io/badge/-Github-black?logo=github)](https://github.com/skit-ai/SpeechLLM.git)
+[![Open in Colab](https://img.shields.io/badge/Open%20in%20Colab-F9AB00?logo=googlecolab&color=blue)](https://colab.research.google.com/drive/1uqhRl36LJKA4IxnrhplLMv0wQ_f3OuBM?usp=sharing)
+
+
 ![](./speechllm.png)
 
 SpeechLLM is a multi-modal LLM trained to predict the metadata of the speaker's turn in a conversation. speechllm-2B model is based on HubertX audio encoder and TinyLlama LLM. The model predicts the following:
```