Update README: add data collection details and fix typos
README.md
## Model Description

LLaMA-2-7B-32K-Chat is an open-source, long-context chat model finetuned from [LLaMA-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K) on high-quality instruction and chat data.

We built LLaMA-2-7B-32K-Chat with fewer than 200 lines of Python using the [Together API](https://together.ai/blog/api-announcement), and we also make the recipe fully available.

For more details, please refer to our [GitHub repo](https://github.com/togethercomputer/LLaMA-2-32K-Chat).

We hope that this can enable everyone to finetune their own version of [LLaMA-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K) — play with [Together API](https://together.ai/blog/api-announcement) and give us feedback!

## Data Collection Details

LLaMA-2-7B-32K-Chat is fine-tuned on a combination of two parts:

1. **19K single- and multi-round conversations generated by human instructions and [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) outputs**.

   We collected the dataset following the distillation paradigm used by Alpaca, Vicuna, WizardLM, and Orca: producing instructions by querying a powerful LLM (a minimal sketch of this querying step appears after the list below).

   This dataset is also released [here](https://huggingface.co/datasets/togethercomputer/llama-instruct).

   We also share the complete collection recipe [here](https://github.com/togethercomputer/LLaMA-2-32K-Chat).

2. **4K instructions of summarization from the BookSum dataset**.

   BookSum is a unique dataset designed to address the challenges of long-form narrative summarization.

   This dataset features source documents from the literature domain, including novels, plays, and stories, and offers human-written, highly abstractive summaries.

   Here we focus on chapter-level data. BookSum poses a unique set of challenges, requiring the model to read through each chapter in full.
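
To make the distillation step above concrete, here is a minimal sketch of what the querying loop might look like. It is not the actual recipe: the endpoint URL, request fields, and model id are assumptions for illustration, and the complete, authoritative collection script is in the [GitHub repo](https://github.com/togethercomputer/LLaMA-2-32K-Chat).

```python
# Hypothetical sketch of the distillation step described above: send seed
# instructions to a powerful chat model and keep its answers as training
# conversations. The endpoint URL, payload fields, and model id are
# illustrative assumptions; the actual collection code lives in the recipe repo.
import json
import os

import requests

API_URL = "https://api.together.xyz/inference"          # assumed endpoint
HEADERS = {"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"}

seed_instructions = [
    "Summarize the plot of Hamlet in three sentences.",
    "Explain the difference between a process and a thread.",
]

records = []
for instruction in seed_instructions:
    payload = {
        "model": "togethercomputer/llama-2-70b-chat",   # assumed model id
        "prompt": f"[INST] {instruction} [/INST]",      # Llama-2 chat prompt format
        "max_tokens": 512,
    }
    response = requests.post(API_URL, headers=HEADERS, json=payload)
    response.raise_for_status()
    # The exact response schema depends on the API version; store it verbatim here.
    records.append({"instruction": instruction, "raw_response": response.json()})

with open("llama_instruct_sample.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```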

## Model Usage

We encourage you to try out this model using the [Together API](https://together.ai/blog/api-announcement). The updated inference stack allows for efficient inference.

Alternatively, you can load the model directly from the Hugging Face model hub using:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the fp16 weights; trust_remote_code lets transformers
# run the custom modeling code shipped with this checkpoint.
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/LLaMA-2-7B-32K")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/LLaMA-2-7B-32K", trust_remote_code=True, torch_dtype=torch.float16)
```
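
After loading, a generation call along the following lines can be used. This is a usage sketch rather than part of the original snippet: the device placement and sampling parameters are illustrative choices, and the prompt follows the conversational format shown in the next section.

```python
# Usage sketch (not from the original card): move the fp16 model to a GPU and
# generate a reply. Sampling parameters below are illustrative, not tuned values.
model = model.to("cuda")  # assumes a CUDA device; fp16 weights generally need one

prompt = "[INST] Summarize the history of the printing press in two paragraphs. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```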

The model is also hosted on the [Together Playground](https://api.together.xyz/playground). You can play with the model simply by using a prompt formatted as:

```
[INST] <your instruction here> [/INST]
```
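
For multi-turn conversations, a small helper along the following lines can assemble the prompt. Note the assumption: follow-up turns are taken to simply continue the `[INST] ... [/INST]` pattern, matching the common Llama-2 chat convention, which is not spelled out above.

```python
# Hypothetical helper: build a multi-turn prompt by repeating the
# [INST] ... [/INST] pattern. The multi-turn layout is an assumption here.
def build_prompt(turns):
    """turns: list of (user_message, assistant_reply) pairs; the final reply may be None."""
    prompt = ""
    for user_message, assistant_reply in turns:
        prompt += f"[INST] {user_message} [/INST]"
        if assistant_reply is not None:
            prompt += f" {assistant_reply} "
    return prompt.strip()

print(build_prompt([
    ("Name three long-context use cases.", "Summarization, retrieval QA, and codebase analysis."),
    ("Expand on the first one.", None),
]))
```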