nico-che julien-c HF staff committed on
Commit
daf17fc
0 Parent(s)

Duplicate from gpt2

Co-authored-by: Julien Chaumond <julien-c@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,10 @@
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tar.gz filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ model.safetensors filter=lfs diff=lfs merge=lfs -text
64-8bits.tflite ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c966da3b74697803352ca7c6f2f220e7090a557b619de9da0c6b34d89f7825c1
+ size 125162496
64-fp16.tflite ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1ceafd82e733dd4b21570b2a86cf27556a983041806c033a55d086e0ed782cd3
+ size 248269688
64.tflite ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cfcd510b239d90b71ee87d4e57a5a8c2d55b2a941e5d9fe5852298268ddbe61b
+ size 495791932
README.md ADDED
@@ -0,0 +1,167 @@
+ ---
+ language: en
+ tags:
+ - exbert
+
+ license: mit
+ ---
+
+
+ # GPT-2
+
+ Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
+
+ Pretrained model on the English language using a causal language modeling (CLM) objective. It was introduced in
+ [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
+ and first released on [this page](https://openai.com/blog/better-language-models/).
+
+ Disclaimer: The team releasing GPT-2 also wrote a
+ [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
+ has been written by the Hugging Face team to complement the information they provided and give specific examples of bias.
+
+ ## Model description
+
+ GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
+ means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
+ of publicly available data), with an automatic process generating the inputs and labels from those texts. More
+ precisely, it was trained to guess the next word in sentences.
+
+ Inputs are sequences of continuous text of a certain length, and the targets are the same sequence
+ shifted one token (word or piece of word) to the right. Internally, the model uses a masking mechanism to make sure the
+ predictions for token `i` only use the inputs from `1` to `i` and not the future tokens.
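+
+ As a rough illustration of this objective (a minimal sketch added for clarity, not part of the upstream OpenAI model card), passing the input ids as `labels` to `GPT2LMHeadModel` makes the library compute this shifted next-token loss:
+
+ ```python
+ # Illustrative only: the labels are shifted one position internally,
+ # so the loss below is the next-token cross-entropy described above.
+ from transformers import GPT2Tokenizer, GPT2LMHeadModel
+
+ tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
+ model = GPT2LMHeadModel.from_pretrained('gpt2')
+
+ encoded = tokenizer("Hello, I'm a language model,", return_tensors='pt')
+ outputs = model(**encoded, labels=encoded['input_ids'])
+ print(outputs.loss)  # causal language modeling loss for this sequence
+ ```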
+
+ This way, the model learns an inner representation of the English language that can then be used to extract features
+ useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating text from a
+ prompt.
+
+ This is the **smallest** version of GPT-2, with 124M parameters.
+
+ **Related Models:** [GPT-2 Large](https://huggingface.co/gpt2-large), [GPT-2 Medium](https://huggingface.co/gpt2-medium) and [GPT-2 XL](https://huggingface.co/gpt2-xl)
+
+ ## Intended uses & limitations
+
+ You can use the raw model for text generation or fine-tune it on a downstream task. See the
+ [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
+
+ ### How to use
+
+ You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
+ set a seed for reproducibility:
+
+ ```python
+ >>> from transformers import pipeline, set_seed
+ >>> generator = pipeline('text-generation', model='gpt2')
+ >>> set_seed(42)
+ >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
+
+ [{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
+ {'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
+ {'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
+ {'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
+ {'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
+ ```
+
+ Here is how to use this model to get the features of a given text in PyTorch:
+
+ ```python
+ from transformers import GPT2Tokenizer, GPT2Model
+ tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
+ model = GPT2Model.from_pretrained('gpt2')
+ text = "Replace me with any text you'd like."
+ encoded_input = tokenizer(text, return_tensors='pt')
+ output = model(**encoded_input)
+ ```
+
+ and in TensorFlow:
+
+ ```python
+ from transformers import GPT2Tokenizer, TFGPT2Model
+ tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
+ model = TFGPT2Model.from_pretrained('gpt2')
+ text = "Replace me with any text you'd like."
+ encoded_input = tokenizer(text, return_tensors='tf')
+ output = model(encoded_input)
+ ```
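+
+ If you want to fine-tune the checkpoint on a downstream task rather than only extract features, a rough sketch could look like the following (an addition to this card rather than an official recipe, assuming you already have a tokenized `datasets` dataset named `train_dataset`):
+
+ ```python
+ from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
+                           GPT2TokenizerFast, Trainer, TrainingArguments)
+
+ tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
+ model = GPT2LMHeadModel.from_pretrained('gpt2')
+
+ # mlm=False makes the collator copy the input ids as labels for causal LM training.
+ collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
+ args = TrainingArguments(output_dir='gpt2-finetuned', num_train_epochs=1,
+                          per_device_train_batch_size=2)
+ trainer = Trainer(model=model, args=args, data_collator=collator,
+                   train_dataset=train_dataset)  # train_dataset: your tokenized texts
+ trainer.train()
+ ```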
+
+ ### Limitations and bias
+
+ The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
+ unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their
+ [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
+
+ > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
+ > that require the generated text to be true.
+ >
+ > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
+ > not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
+ > study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
+ > and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
+ > levels of caution around use cases that are sensitive to biases around human attributes.
+
+ Here's an example of how the model can have biased predictions:
+
+ ```python
+ >>> from transformers import pipeline, set_seed
+ >>> generator = pipeline('text-generation', model='gpt2')
+ >>> set_seed(42)
+ >>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
+
+ [{'generated_text': 'The White man worked as a mannequin for'},
+ {'generated_text': 'The White man worked as a maniser of the'},
+ {'generated_text': 'The White man worked as a bus conductor by day'},
+ {'generated_text': 'The White man worked as a plumber at the'},
+ {'generated_text': 'The White man worked as a journalist. He had'}]
+
+ >>> set_seed(42)
+ >>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
+
+ [{'generated_text': 'The Black man worked as a man at a restaurant'},
+ {'generated_text': 'The Black man worked as a car salesman in a'},
+ {'generated_text': 'The Black man worked as a police sergeant at the'},
+ {'generated_text': 'The Black man worked as a man-eating monster'},
+ {'generated_text': 'The Black man worked as a slave, and was'}]
+ ```
+
+ This bias will also affect all fine-tuned versions of this model.
+
+ ## Training data
+
+ The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
+ pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
+ this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
+ 40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
+ [here](https://github.com/openai/gpt-2/blob/master/domains.txt).
+
+ ## Training procedure
+
+ ### Preprocessing
+
+ The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE), which handles unicode characters, with
+ a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
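+
+ These preprocessing numbers can be checked directly against the tokenizer files shipped in this repository (a small illustrative snippet, not part of the original card):
+
+ ```python
+ from transformers import GPT2TokenizerFast
+
+ tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
+ print(tokenizer.vocab_size)        # 50257
+ print(tokenizer.model_max_length)  # 1024, the pretraining sequence length
+ # Byte-level BPE maps any string to byte pieces, so no token is ever unknown:
+ print(tokenizer.tokenize("Hello, unicode ✓"))
+ ```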
+
+ The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
+ details of training.
+
+ ## Evaluation results
+
+ The model achieves the following results without any fine-tuning (zero-shot):
+
+ | Dataset | LAMBADA (PPL) | LAMBADA (ACC) | CBT-CN (ACC) | CBT-NE (ACC) | WikiText2 (PPL) | PTB (PPL) | enwiki8 (BPB) | text8 (BPC) | WikiText103 (PPL) | 1BW (PPL) |
+ |:------------:|:-------------:|:-------------:|:------------:|:------------:|:---------------:|:---------:|:-------------:|:-----------:|:-----------------:|:---------:|
+ | GPT-2 (124M) | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
+
+
+ ### BibTeX entry and citation info
+
+ ```bibtex
+ @article{radford2019language,
+   title={Language Models are Unsupervised Multitask Learners},
+   author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
+   year={2019}
+ }
+ ```
+
+ <a href="https://huggingface.co/exbert/?model=gpt2">
+ 	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
+ </a>
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "activation_function": "gelu_new",
+   "architectures": [
+     "GPT2LMHeadModel"
+   ],
+   "attn_pdrop": 0.1,
+   "bos_token_id": 50256,
+   "embd_pdrop": 0.1,
+   "eos_token_id": 50256,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "gpt2",
+   "n_ctx": 1024,
+   "n_embd": 768,
+   "n_head": 12,
+   "n_layer": 12,
+   "n_positions": 1024,
+   "resid_pdrop": 0.1,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "task_specific_params": {
+     "text-generation": {
+       "do_sample": true,
+       "max_length": 50
+     }
+   },
+   "vocab_size": 50257
+ }
flax_model.msgpack ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:192e8257ae9e8f796f764630f4a488a6a16d1461762d62b49ef7405df951a283
+ size 497764120
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "bos_token_id": 50256,
+   "eos_token_id": 50256,
+   "transformers_version": "4.26.0.dev0",
+   "_from_model_config": true
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:248dfc3911869ec493c76e65bf2fcf7f615828b0254c12b473182f0f81d3a707
+ size 548105171
onnx/config.json ADDED
@@ -0,0 +1,38 @@
+ {
+   "_name_or_path": "gpt2",
+   "activation_function": "gelu_new",
+   "architectures": [
+     "GPT2LMHeadModel"
+   ],
+   "attn_pdrop": 0.1,
+   "bos_token_id": 50256,
+   "embd_pdrop": 0.1,
+   "eos_token_id": 50256,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "gpt2",
+   "n_ctx": 1024,
+   "n_embd": 768,
+   "n_head": 12,
+   "n_inner": null,
+   "n_layer": 12,
+   "n_positions": 1024,
+   "reorder_and_upcast_attn": false,
+   "resid_pdrop": 0.1,
+   "scale_attn_by_inverse_layer_idx": false,
+   "scale_attn_weights": true,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "task_specific_params": {
+     "text-generation": {
+       "do_sample": true,
+       "max_length": 50
+     }
+   },
+   "transformers_version": "4.30.2",
+   "use_cache": true,
+   "vocab_size": 50257
+ }
onnx/decoder_model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e3fc9615868ff8f5e0429b892a0f6ca692784ba6c4ca31c4e9ee8218e7cce34f
+ size 653665842
onnx/decoder_model_merged.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e6fc046fe5a7cfeeb8bb3c7d4c1b6a8bd90ced7339a75d5567957e3bc9d48abe
+ size 655189339
onnx/decoder_with_past_model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:570d958241de81f12d82a8358dfc0b408a7bf44ff2bd10ac4a97dab24a8118db
+ size 653672649
onnx/generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 50256,
+   "eos_token_id": 50256,
+   "transformers_version": "4.30.2"
+ }
onnx/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
onnx/special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "unk_token": "<|endoftext|>"
+ }
onnx/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
onnx/tokenizer_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "add_prefix_space": false,
+   "bos_token": "<|endoftext|>",
+   "clean_up_tokenization_spaces": true,
+   "eos_token": "<|endoftext|>",
+   "model_max_length": 1024,
+   "tokenizer_class": "GPT2Tokenizer",
+   "unk_token": "<|endoftext|>"
+ }
onnx/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7c5d3f4b8b76583b422fcb9189ad6c89d5d97a094541ce8932dce3ecabde1421
+ size 548118077
rust_model.ot ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:adf0adedbf4016b249550f866c66a3b3a3d09c8b3b3a1f6e5e9a265d94e0270e
+ size 702517648
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d08c1307f7dfae6f878e0a2ca5715d587d2640530db8ef96fc0c1fc474dd9fee
+ size 497933648
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
vocab.json ADDED
The diff for this file is too large to render. See raw diff