yuchenglu committed
Commit 65e7cb3
1 Parent(s): c9cd6ba

Update README.md

Files changed (1)
1. README.md +6 -6
README.md CHANGED
@@ -20,20 +20,20 @@ We hope that this can enable everyone to finetune their own version of [LLaMA-2-

 LLaMA-2-7B-32K-Chat is fine-tuned over a combination of two parts:
 1. **19K single- and multi-round conversations generated by human instructions and [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) outputs**.
-We collected the dataset following the distillation paradigm that is used by Alpaca, Vicuna, WizardLM, Orca — producing instructions by querying a powerful LLM.
-This dataset is also released [here](https://huggingface.co/datasets/togethercomputer/llama-instruct).
-We also share the complete collection recipe [here](https://github.com/togethercomputer/LLaMA-2-32K-Chat).
+We collected the dataset following the distillation paradigm used by Alpaca, Vicuna, WizardLM, and Orca, producing instructions by querying a powerful LLM (in this case, [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)).
+The complete dataset is also released [here](https://huggingface.co/datasets/togethercomputer/llama-instruct).
+We also share the complete recipe for the data collection process [here](https://github.com/togethercomputer/LLaMA-2-32K-Chat).

 2. **4K instructions of summarization from the BookSum dataset**.
 BookSum is a unique dataset designed to address the challenges of long-form narrative summarization.
 This dataset features source documents from the literature domain, including novels, plays, and stories, and offers human-written, highly abstractive summaries.
 Here, we focus on chapter-level data. BookSum poses a unique set of challenges, necessitating that the model comprehensively read through each chapter.
+We used 4K of these instructions in our fine-tuning.


 ## Model Usage

-We encourage you to try out this model using the [Together API](https://together.ai/blog/api-announcement).
-The updated inference stack allows for efficient inference.
+We encourage you to try out this model using the [Together API](https://together.ai/blog/api-announcement). The updated inference stack allows for efficient inference.
 Alternatively, you can load the model directly from the Hugging Face model hub using

 ```python
@@ -43,7 +43,7 @@ tokenizer = AutoTokenizer.from_pretrained("togethercomputer/LLaMA-2-7B-32K-Chat"
 model = AutoModelForCausalLM.from_pretrained("togethercomputer/LLaMA-2-7B-32K-Chat", trust_remote_code=True, torch_dtype=torch.float16)
 ```

-The model is also hosted on [Together Playground](https://api.together.xyz/playground). You can simply play with the model by using prompt formatted by
+The model is also hosted on [Together Playground](https://api.together.xyz/playground). You can simply play with the model using a prompt formatted as:

 ```
 [INST] <your instruction here> [/INST]
 ```
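As a rough illustration of the distillation-style collection described in the diff above, the sketch below queries a teacher model for responses to human-written instructions and stores the resulting pairs. This is a minimal sketch under stated assumptions: the seed instructions and output path are hypothetical placeholders, and the released recipe in the linked LLaMA-2-32K-Chat repository is the authoritative reference.

```python
# Minimal sketch of distillation-style instruction collection.
# Assumptions: the seed instructions and the output path are hypothetical;
# see the linked recipe repository for the actual pipeline.
import json

from transformers import pipeline

# Teacher model whose outputs are distilled, per the README text.
generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-70b-chat-hf",
    torch_dtype="auto",
    device_map="auto",
)

# Hypothetical human-written seed instructions.
seed_instructions = [
    "Explain the difference between a process and a thread.",
    "Summarize the plot of Hamlet in three sentences.",
]

pairs = []
for instruction in seed_instructions:
    prompt = f"[INST] {instruction} [/INST]"  # Llama-2 chat prompt format
    completion = generator(prompt, max_new_tokens=512, do_sample=True)[0]["generated_text"]
    response = completion[len(prompt):].strip()  # keep only the newly generated reply
    pairs.append({"instruction": instruction, "response": response})

with open("llama_instruct_pairs.json", "w") as f:
    json.dump(pairs, f, indent=2)
```

Multi-round conversations follow the same pattern, alternating [INST] ... [/INST] turns with model replies before querying for the next response.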
 
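For completeness, here is a self-contained version of the usage snippet shown in the diff, extended with a generation call that applies the [INST] prompt format. The instruction text and generation settings are illustrative assumptions rather than recommendations from the model card, and float16 inference assumes a CUDA device.

```python
# Self-contained sketch: load the model as in the card, then generate from an
# [INST]-formatted prompt. Settings here are illustrative, not prescribed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/LLaMA-2-7B-32K-Chat")
model = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/LLaMA-2-7B-32K-Chat",
    trust_remote_code=True,
    torch_dtype=torch.float16,
).to("cuda")  # float16 inference assumes a GPU

prompt = "[INST] Summarize the plot of Moby-Dick in three sentences. [/INST]"  # hypothetical instruction
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```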