---
language:
- en
- zh
- id
- th
- vi
- ms
- lo
datasets:
- cerebras/SlimPajama-627B
- Skywork/SkyPile-150B
- allenai/MADLAD-400
- cc100
- CohereForAI/aya_dataset
- CohereForAI/aya_collection
- Open-Orca/OpenOrca
tags:
- multilingual
- sea
- sailor
- sft
- chat
- instruction
license: apache-2.0
base_model: sail/Sailor-7B
---

<div align="center">
<img src="banner_sailor.jpg" width="700"/>
</div>

Sailor is a suite of Open Language Models tailored for South-East Asia (SEA), focusing on languages such as 🇮🇩Indonesian, 🇹🇭Thai, 🇻🇳Vietnamese, 🇲🇾Malay, and 🇱🇦Lao.
Developed with careful data curation, Sailor models are designed to understand and generate text across the diverse linguistic landscape of the SEA region.
Built from [Qwen 1.5](https://huggingface.co/collections/Qwen/qwen15-65c0a2f577b1ecb76d786524), Sailor encompasses models of varying sizes, spanning from 0.5B to 7B, for different requirements.
We further fine-tune the base models on open-source datasets to obtain instruction-tuned models, namely Sailor-Chat.
Benchmarking results demonstrate Sailor's proficiency in tasks such as question answering, commonsense reasoning, and other tasks in SEA languages.

> The logo was generated by MidJourney

## Model Summary
- **Model Collections:** [Base Model & Chat Model](https://huggingface.co/collections/sail/sailor-65e19a749f978976f1959825)
- **Project Website:** [sailorllm.github.io](https://sailorllm.github.io/)
- **Codebase:** [github.com/sail-sg/sailor-llm](https://github.com/sail-sg/sailor-llm)
- **Technical Report:** Coming Soon

## Training details
Sailor is crafted by continually pre-training language models such as the remarkable Qwen 1.5 models, which already perform well on SEA languages.
The pre-training corpus heavily leverages publicly available corpora, including
[SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B),
[SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B),
[CC100](https://huggingface.co/datasets/cc100) and [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400).
The instruction-tuning corpora are all publicly available, including
[aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection),
[aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) and
[OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca).

By employing aggressive data deduplication and careful data cleaning on the collected corpus, we obtained a high-quality dataset spanning various languages.
Through systematic experiments to determine the mixture weights of the different languages, Sailor models are trained on 200B to 400B tokens, depending on model size.
This approach boosts their performance on SEA languages while maintaining proficiency in English and Chinese without significant compromise.
Finally, we continually pre-train the Qwen1.5-0.5B model on 400 billion tokens and the other models on 200 billion tokens to obtain the Sailor models.
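
As a purely illustrative sketch of the deduplication idea (this is **not** the Sailor pipeline, which is more aggressive and likely also handles near-duplicates), exact document-level deduplication can be done by hashing normalized text; the `deduplicate` helper below is hypothetical:

```python
import hashlib

def normalize(text: str) -> str:
    # Collapse whitespace and lowercase so trivially different copies collide.
    return " ".join(text.lower().split())

def deduplicate(documents):
    """Keep the first occurrence of each exact (normalized) document."""
    seen = set()
    unique = []
    for doc in documents:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

corpus = [
    "Model bahasa adalah model probabilistik.",
    "Model  bahasa adalah model probabilistik.",  # near-identical copy
    "Sailor mendukung bahasa-bahasa Asia Tenggara.",
]
print(len(deduplicate(corpus)))  # -> 2
```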

## Requirements
The code for Sailor is included in the latest Hugging Face transformers release, and we advise you to install `transformers>=4.37.0`.
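
If you are unsure whether your environment meets this requirement, a quick check along these lines can help (a minimal sketch; `packaging` is installed as a dependency of `transformers`):

```python
# Minimal sanity check for the transformers version requirement above.
from packaging.version import Version

import transformers

assert Version(transformers.__version__) >= Version("4.37.0"), (
    f"transformers {transformers.__version__} found; please upgrade to >=4.37.0"
)
```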

## Quickstart

Here is a code snippet showing how to load the tokenizer and model, and how to generate text.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("sail/Sailor-7B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("sail/Sailor-7B")

input_message = "Model bahasa adalah model probabilistik"
# The given Indonesian input translates to: "A language model is a probabilistic model".

model_inputs = tokenizer([input_message], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=64
)

# Strip the prompt tokens so only the newly generated continuation is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
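
For the instruction-tuned Sailor-Chat models mentioned above, prompts should follow the chat format the model was tuned with. The sketch below is an assumption-laden example: the checkpoint name `sail/Sailor-7B-Chat` and the use of the tokenizer's built-in chat template (including the `user` role) should be verified against the chat model's own card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

# NOTE: the checkpoint name and chat template below are assumptions;
# consult the chat model's card for the authoritative prompt format.
model = AutoModelForCausalLM.from_pretrained("sail/Sailor-7B-Chat", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("sail/Sailor-7B-Chat")

messages = [
    {"role": "user", "content": "Apa itu model bahasa?"}  # Indonesian: "What is a language model?"
]
# Render the conversation with the tokenizer's chat template and append
# the generation prompt so the model responds as the assistant.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

generated_ids = model.generate(input_ids, max_new_tokens=128)
response = tokenizer.batch_decode(
    generated_ids[:, input_ids.shape[-1]:], skip_special_tokens=True
)[0]
print(response)
```

Unlike the base model, which simply continues the unfinished Indonesian sentence in the Quickstart, the chat variant answers free-form questions.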

# License

Sailor is distributed under the terms of the Apache License 2.0.
There are no restrictions on research or commercial use, but usage must comply with the [Qwen License](https://huggingface.co/Qwen/Qwen1.5-1.8B/blob/main/LICENSE).

# Contact Us

If you have any questions, please raise an issue or contact us at [doulx@sea.com](mailto:doulx@sea.com) or [liuqian@sea.com](mailto:liuqian@sea.com).