---
license: apache-2.0
---

**Paper**: [https://arxiv.org/pdf/2310.06694.pdf](https://arxiv.org/pdf/2310.06694.pdf)
**Code**: https://github.com/princeton-nlp/LLM-Shearing
**Models**: [Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B), [Sheared-LLaMA-2.7B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B)

## Training information
This is the instruction-tuned version of [princeton-nlp/Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B). We trained the base model on 10,000 instruction-response pairs sampled from the ShareGPT dataset (first turns only), using the following prompt template:

> You are a helpful assistant. Write a response that appropriately completes the request.\n\n### Input:\n{input}\n\n### Response:
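
As a minimal sketch, a query can be wrapped in this template before generation; the `PROMPT_TEMPLATE` constant and `build_prompt` helper below are illustrative names, not part of the released code:

```
# Illustrative helper: wraps a user query in the instruction-tuning template above.
PROMPT_TEMPLATE = (
    "You are a helpful assistant. Write a response that appropriately "
    "completes the request.\n\n### Input:\n{input}\n\n### Response:"
)

def build_prompt(query: str) -> str:
    """Fill the {input} slot of the template with the user's query."""
    return PROMPT_TEMPLATE.format(input=query)

print(build_prompt("What is structured pruning?"))
```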

This model can be loaded with `transformers.LlamaForCausalLM` as follows:

```
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B-ShareGPT")
```
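
For example, a full generation call might look like this (a sketch that assumes `AutoTokenizer` for the tokenizer; the query and generation settings are illustrative):

```
from transformers import AutoTokenizer, LlamaForCausalLM

# Load the instruction-tuned model and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B-ShareGPT")
model = LlamaForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-1.3B-ShareGPT")

# Wrap the query in the same template used during instruction tuning.
prompt = (
    "You are a helpful assistant. Write a response that appropriately "
    "completes the request.\n\n### Input:\nWhat is structured pruning?\n\n### Response:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```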

## Bibtex

If you find our model useful, consider citing us with:
```
@article{xia2023sheared,
  title={Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning},
  author={Xia, Mengzhou and Gao, Tianyu and Zeng, Zhiyuan and Chen, Danqi},
  journal={arXiv preprint arXiv:2310.06694},
  year={2023}
}
```