Henk717 committed on
Commit
cc2db0d
1 Parent(s): b837088

Update README.md

---
license: apache-2.0
---
# GPT-J-Skein - GGML Edition

This is the GGML port of our classic GPT-J-Skein model, a 6B model focused on text adventures with additional novel data.
It was a beloved text adventure and even novel-writing model; back in the day, people used the "anti You bias" userscript to enhance its writing ability.
Later it was remade as Skein-20B, which we also intend to convert to GGUF.

### GGML in 2024, really?
Yes. GPT-J never saw adoption by Llamacpp, and until this changes we have to rely on older code that originated from the pygmalioncpp project and still lives on in KoboldCpp today.
This model release was tested to work in KoboldCpp 1.66, but due to the age of the format it does come with limitations.

### What are the limitations of this conversion?
This format dates back to a time when K quants did not exist yet, so you will only be able to use the regular quants or the FP16 version.
Likewise, many modern features are missing from the engine: you can still use smartcontext, but you can't use context shifting.
You can offload if you have a CUDA-compatible GPU (ROCm is untested but may work); for full acceleration, every layer must be on the GPU.
For non-Nvidia GPUs you can use CLBlast to speed up prompt processing; Vulkan does not support these older GGML models, as it does not exist in our legacy code.
Rope scaling, even though it is a much newer feature, should be compatible, and we also expect some of the more modern samplers to work.

### I don't use KoboldCpp, can I use it in X?
No, this upload is only meant for use with KoboldCpp.
If you haven't tried KoboldCpp yet, go give it a try! You can find it at https://koboldai.org/cpp

### How was this conversion done?
Inside KoboldCpp's source code you can find `otherarch/tools/convert_hf_gptj.py`.
The companion `quantize_gptj` tool can be compiled by running `make tools` in the KoboldCpp source code root directory.

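For reference, the two steps might look roughly like this. This is a hypothetical invocation: the exact arguments of the conversion script and quantizer are assumptions on our part, so check each tool's usage output before running.

```
# Convert the Hugging Face checkpoint to legacy GGML FP16 (arguments are assumptions)
python otherarch/tools/convert_hf_gptj.py /path/to/GPT-J-6B-Skein

# Build the quantizer, then produce a regular (non-K) quant from the FP16 file
make tools
./quantize_gptj gpt-j-skein-f16.bin gpt-j-skein-q4_0.bin q4_0
```
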
---

# Model Card for GPT-J-6B-Skein

# Model Details

## Model Description

- **Developed by:** KoboldAI
- **Shared by [Optional]:** KoboldAI
- **Model type:** Text Generation
- **Language(s) (NLP):** English
- **License:** Apache License 2.0
- **Related Models:** [GPT-J 6B](https://huggingface.co/EleutherAI/gpt-j-6B?text=My+name+is+Mariama%2C+my+favorite)
- **Parent Model:** GPT-J
- **Resources for more information:**
    - [GitHub Repo](https://github.com/kingoflolz/mesh-transformer-jax)
    - [Associated Model Doc](https://huggingface.co/docs/transformers/main/en/model_doc/gptj#transformers.GPTJForCausalLM)

# Uses

## Direct Use

This model is designed for creative story generation. It can understand both free-form text and text written in interactive fiction style with actions starting with "> You", such as:

```
You become aware of her breathing -- the slight expansion of her ribs, the soft exhalation -- natural, and yet somehow studied. "Ah -- by the way," she says, in a way that utterly fails to be casual, "have you seen the artist out there? -- My artist, that is."

"No," you respond, uneasy. You open your mouth and close it again.

> You ask about the experience of waking up
```
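When driving the model programmatically, the same "> You" convention can be reproduced by appending actions to the running story. A minimal sketch (the helper functions are our own illustration, not part of any KoboldAI API):

```python
def format_action(action: str) -> str:
    """Format a player action in the interactive-fiction style the model expects."""
    # Actions are written in second person and prefixed with "> You"
    return f"\n> You {action.strip()}\n"


def build_prompt(story: str, action: str) -> str:
    """Append an action to the running story to form the next generation prompt."""
    return story.rstrip() + format_action(action)


story = '"No," you respond, uneasy. You open your mouth and close it again.'
prompt = build_prompt(story, "ask about the experience of waking up")
```

The resulting `prompt` string is what you would feed to the model for the next continuation.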

## Downstream Use [Optional]

More information needed

## Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people.

# Bias, Risks, and Limitations

The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.

GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case, GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.

As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.

See the [GPT-J 6B model card](https://huggingface.co/EleutherAI/gpt-j-6B?text=My+name+is+Mariama%2C+my+favorite) for more information.

## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.

# Training Details

## Training Data

The data are mostly composed of light novels from the dataset of the [KoboldAI/GPT-Neo-2.7B-Horni-LN](https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Horni-LN) model and assorted interactive fiction. The dataset uses `[Themes: <comma-separated list of genres>]` for tagging, which means that if similar text is placed in the context, the model will attempt to generate text in the specified style(s). For more details about the dataset, consult [this document](https://wandb.ai/ve-forbryderne/skein/runs/files/files/datasets/README.txt).
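A context exploiting this tagging scheme can be assembled like so (a sketch using the `[Themes: ...]` format described above; the helper function is hypothetical):

```python
def themes_tag(genres: list[str]) -> str:
    """Build a Skein-style themes tag from a list of genre names."""
    # The dataset tags text as [Themes: <comma-separated list of genres>]
    return f"[Themes: {', '.join(genres)}]"


# Placing the tag at the start of the context steers the style of the output
context = themes_tag(["dark fantasy", "mystery"]) + "\nYou step into the abandoned chapel."
```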

## Training Procedure

### Preprocessing

The data were preprocessed using the Python package ftfy to eliminate, as far as possible, non-ASCII punctuation characters and encoding errors. The interactive fiction in the dataset also underwent deduplication, since interactive fiction logs often contain duplicate text from, for example, visiting the same in-game area several times. spaCy was used for grammatical analysis with the purpose of reformatting the actions commonly found in old text adventure games into more complete sentences. There was also some manual elimination of things such as "thank you for playing" messages and title messages.

### Speeds, Sizes, Times

Training took approximately 14 hours in total, with the average speed being 5265 tokens per second.
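From those two figures, the total number of tokens seen during fine-tuning can be estimated (a back-of-the-envelope calculation, assuming the quoted average speed held for the full run):

```python
hours = 14
tokens_per_second = 5265

# 14 h * 3600 s/h * 5265 tok/s = 265,356,000 tokens, i.e. roughly 265 million
total_tokens = hours * 3600 * tokens_per_second
```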

# Evaluation

## Testing Data, Factors & Metrics

### Testing Data

More information needed

### Factors

### Metrics

More information needed

## Results

More information needed

# Model Examination

More information needed

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Technical Specifications [optional]

## Model Architecture and Objective

More information needed

## Compute Infrastructure

More information needed

### Hardware

More information needed

### Software

https://github.com/kingoflolz/mesh-transformer-jax

# Citation

**BibTeX:**

```
@misc{mesh-transformer-jax,
  author = {Wang, Ben},
  title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
  howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
  year = 2021,
  month = May
}
```

# Glossary [optional]

More information needed

# More Information [optional]

More information needed

# Model Card Authors [optional]

KoboldAI in collaboration with Ezi Ozoani and the Hugging Face team

# Model Card Contact

More information needed

# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("KoboldAI/GPT-J-6B-Skein")

model = AutoModelForCausalLM.from_pretrained("KoboldAI/GPT-J-6B-Skein")
```
</details>