GGUF
Composer
MosaicML
llm-foundry
mav23 committed
Commit
9927e71
1 Parent(s): c873862

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +196 -0
  3. mpt-7b-8k-instruct.Q4_0.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+mpt-7b-8k-instruct.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,196 @@
---
license: apache-2.0
datasets:
- competition_math
- knkarthick/dialogsum
- mosaicml/dolly_hhrlhf
- duorc
- emozilla/quality
- scrolls/summ_screen_fd
- spider
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---

# MPT-7B-Instruct-8k

MPT-7B-Instruct-8k is a model for long-form instruction following, especially question-answering on, and summarization of, longer documents.
It was built by finetuning [MPT-7B-8k](https://huggingface.co/mosaicml/mpt-7b-8k) on [Dolly HHRLHF](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf), a dataset derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. It was also trained on [Competition Math](https://huggingface.co/datasets/competition_math), [Duorc](https://huggingface.co/datasets/duorc), [CoT GSM8k](https://huggingface.co/datasets/conceptofmind/cot_submix_original), [Qasper](https://huggingface.co/datasets/allenai/qasper), [Quality](https://huggingface.co/datasets/emozilla/quality), [Summ Screen FD](https://huggingface.co/datasets/tau/scrolls) and [Spider](https://huggingface.co/datasets/spider).
This is the same dataset mix that [MPT-30B-Instruct](https://huggingface.co/mosaicml/mpt-30b-instruct) was trained on.
* License: Apache 2.0

This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.

## Model Date

July 18, 2023

## Model License

Apache 2.0

## Documentation

* [Blog post: MPT-7B-8k](https://www.mosaicml.com/blog/long-context-mpt-7b-8k)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!

## How to Use

This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.

```python
import transformers

model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b-instruct-8k',
    trust_remote_code=True
)
```

Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.

To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:

```python
import torch
import transformers

name = 'mosaicml/mpt-7b-instruct-8k'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'  # change this to use triton-based FlashAttention
config.init_device = 'cuda:0'  # for fast initialization directly on GPU

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,  # load model weights in bfloat16
    trust_remote_code=True
)
```

The model was initially trained with a sequence length of 2048, with an additional pretraining stage for sequence-length adaptation up to 8192. However, ALiBi enables users to increase the maximum sequence length even further during finetuning and/or inference. For example:

```python
import transformers

name = 'mosaicml/mpt-7b-instruct-8k'

config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384  # (input + output) tokens can now be up to 16384

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    trust_remote_code=True
)
```

This model was trained with the MPT-7B-chat tokenizer, which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional ChatML tokens.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-7b-8k')
```
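
To confirm that the extra ChatML tokens are present, you can inspect the tokenizer's special tokens. A small check, assuming the markers follow the standard ChatML names (`<|im_start|>`, `<|im_end|>`):

```python
# The token names below are an assumption based on the ChatML convention;
# if these print real ids rather than None, the tokens are registered.
print(tokenizer.additional_special_tokens)
print(tokenizer.convert_tokens_to_ids(['<|im_start|>', '<|im_end|>']))
```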

The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).

```python
import torch  # needed for the autocast context below
from transformers import pipeline

with torch.autocast('cuda', dtype=torch.bfloat16):
    inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.batch_decode(outputs, skip_special_tokens=True))

# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
    print(
        pipe('Here is a recipe for vegan banana bread:\n',
             max_new_tokens=100,
             do_sample=True,
             use_cache=True))
```

## Model Description

The architecture is a modification of a standard decoder-only transformer.

The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases

| Hyperparameter  | Value |
|-----------------|-------|
| n_parameters    | 6.7B  |
| n_layers        | 32    |
| n_heads         | 32    |
| d_model         | 4096  |
| vocab size      | 50432 |
| sequence length | 2048  |

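Because ALiBi replaces positional embeddings with a fixed, head-specific linear bias on the attention scores, its only "parameters" are per-head slopes with a closed-form definition. A minimal sketch of the standard slope formula from the ALiBi paper, applied to the 32 heads above (an illustration, not the llm-foundry implementation):

```python
def alibi_slopes(n_heads: int) -> list:
    # For n_heads a power of two, ALiBi gives head i (1-indexed) the slope
    # 2^(-8 * i / n_heads): a geometric sequence starting at 2^(-8 / n_heads).
    start = 2.0 ** (-8.0 / n_heads)
    return [start ** (i + 1) for i in range(n_heads)]

# The bias added to head h's attention score at query/key distance d is
# -slopes[h] * d; no learned positional parameters are involved.
slopes = alibi_slopes(32)  # n_heads = 32 for this model
print(slopes[0], slopes[-1])  # largest and smallest slopes
```
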
## Data Mix

The model was trained on the following data mix:

| Data Source            | Number of Tokens in Source | Proportion |
|------------------------|----------------------------|------------|
| competition_math       | 1.6 M                      | 3.66%      |
| cot_gsm8k              | 3.36 M                     | 7.67%      |
| dialogsum              | 0.1 M                      | 0.23%      |
| dolly_hhrlhf           | 5.89 M                     | 13.43%     |
| duorc                  | 7.8 M                      | 17.80%     |
| qasper                 | 8.72 M                     | 19.90%     |
| quality                | 11.29 M                    | 25.78%     |
| scrolls/summ_screen_fd | 4.97 M                     | 11.33%     |
| spider                 | 0.089 M                    | 0.20%      |

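The proportions are each source's share of the total token count (about 43.8 M tokens) and can be reproduced, up to rounding of the listed counts, with a few lines of Python:

```python
# Token counts in millions, copied from the table above.
token_counts = {
    'competition_math': 1.6, 'cot_gsm8k': 3.36, 'dialogsum': 0.1,
    'dolly_hhrlhf': 5.89, 'duorc': 7.8, 'qasper': 8.72,
    'quality': 11.29, 'scrolls/summ_screen_fd': 4.97, 'spider': 0.089,
}

total = sum(token_counts.values())  # ~43.8 M tokens
for source, tokens in token_counts.items():
    print(f'{source}: {tokens / total:.2%}')
```
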
### Training Configuration

This model was trained on 8 80GB A100s for about 6.3 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.

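As an illustration of that setup, here is a minimal PyTorch sketch of FSDP sharding combined with AdamW. This is not MosaicML's training code (which lives in llm-foundry and is built on Composer), and the hyperparameters are placeholders:

```python
import torch
import torch.distributed as dist
import transformers
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Assumes a torchrun-style launch that sets RANK, WORLD_SIZE, etc.
dist.init_process_group(backend='nccl')
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-7b-8k', trust_remote_code=True
)

# Shard parameters, gradients, and optimizer state across the GPUs.
model = FSDP(model, device_id=torch.cuda.current_device())

# Placeholder hyperparameters; the actual values are not published here.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
```
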
## Limitations and Biases

_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_

MPT-7B-Instruct-8k can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Instruct-8k was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

## Acknowledgements

This model was finetuned by the MosaicML NLP team.

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## MosaicML Platform

If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://www.mosaicml.com/get-started?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b-8k).

## Citation

Please cite this model using the following format:

```
@online{MosaicML2023Introducing,
    author  = {MosaicML NLP Team},
    title   = {Introducing MPT-30B: Raising the bar for open-source foundation models},
    year    = {2023},
    url     = {www.mosaicml.com/blog/mpt-30b},
    note    = {Accessed: 2023-06-22},
    urldate = {2023-06-22}
}
```
mpt-7b-8k-instruct.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ef74d0319bd9a041c6bf691286d43079f390cd287606c85f1b718847526f3230
size 3796178624
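
The uploaded file is a Q4_0 GGUF quantization of the model (about 3.8 GB). As a minimal sketch of how such a file is typically used, here is an example with the `llama-cpp-python` bindings, assuming the installed llama.cpp build supports the MPT architecture and the file has been downloaded locally:

```python
from llama_cpp import Llama

llm = Llama(
    model_path='mpt-7b-8k-instruct.Q4_0.gguf',  # path to the downloaded file
    n_ctx=8192,  # the model's adapted context length
)

output = llm('Here is a recipe for vegan banana bread:\n', max_tokens=100)
print(output['choices'][0]['text'])
```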