zolicsaki committed
Commit 3f9861a
1 Parent(s): b65763a

Update README.md

Files changed (1)
  1. README.md +11 −7
README.md CHANGED
@@ -26,6 +26,7 @@ SambaLingo-Slovenian-Chat is a human aligned chat model trained in Slovenian and
 - **Language(s):** Slovenian, English
 - **Finetuned from model:** [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf)
 - **Try this model:** [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space)
+- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
 - **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
 
 ## Getting Started
@@ -99,6 +100,9 @@ The DPO phase was done on the [ultrafeedback](https://huggingface.co/datasets/Hu
 ## Tokenizer Details
 We extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
 
+## Evaluation
+For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
+
 ## Uses
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
 
@@ -141,12 +145,12 @@ We would like to give a special thanks to the following groups:
 
 ## Cite SambaLingo
 ```
-@software{sambalingo,
-title = {{SambaLingo: Open Source Language Experts}},
-author = {SambaNova Systems},
-url = {https://huggingface.co/sambanovasystems/SambaLingo-Slovenian-Chat}
-month = {2},
-year = {2024},
-version = {1.0},
+@misc{csaki2024sambalingo,
+      title={SambaLingo: Teaching Large Language Models New Languages},
+      author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
+      year={2024},
+      eprint={2404.05829},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+}
 ```
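
As a side note on the "Tokenizer Details" context in the second hunk (vocabulary extended from 32,000 to 57,000 tokens): below is a minimal sketch of how new-language tokens could be added to a Llama tokenizer and the embedding matrix resized with the Hugging Face `transformers` API. The token list is a hypothetical placeholder, and this is only an illustration of the mechanics, not necessarily the exact procedure used to build SambaLingo.

```python
# Minimal sketch: extend a Llama tokenizer with new-language tokens and
# resize the model's embeddings to match. The token list below is a
# placeholder; SambaLingo added up to 25,000 non-overlapping tokens
# drawn from the target-language corpus.
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical frequent Slovenian subwords not already in the vocabulary.
new_tokens = ["prebivalstvo", "gospodarstvo", "življenje"]
num_added = tokenizer.add_tokens(
    [t for t in new_tokens if t not in tokenizer.get_vocab()]
)

# Grow the input/output embeddings so the new token ids have rows.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; vocab size is now {len(tokenizer)}")
```

The rows created by `resize_token_embeddings` start from the model's default initialization, so the new tokens only become useful after continued pretraining on target-language text.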