Update README.md
README.md CHANGED
@@ -4,44 +4,19 @@ license: other
 commercial: no
 inference: false
 ---
-# OPT
 ## Model description
-
 
-
-The data can be divided in 6 different datasets:
-- Literotica (everything with 4.5/5 or higher)
-- Sexstories (everything with 90 or higher)
-- Dataset-G (private dataset of X-rated stories)
-- Doc's Lab (all stories)
-- Pike Dataset (novels with "adult" rating)
-- SoFurry (collection of various animals)
 
-
-
-### How to use
-You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
-```py
->>> from transformers import pipeline
->>> generator = pipeline('text-generation', model='KoboldAI/OPT-13B-Erebus')
->>> generator("Welcome Captain Janeway, I apologize for the delay.", do_sample=True, min_length=50)
-[{'generated_text': 'Welcome Captain Janeway, I apologize for the delay."\nIt\'s all right," Janeway said. "I\'m certain that you\'re doing your best to keep me informed of what\'s going on."'}]
 ```
 
-
-
 
 ### License
-OPT-13B is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
-
-### BibTeX entry and citation info
-```
-@misc{zhang2022opt,
-  title={OPT: Open Pre-trained Transformer Language Models},
-  author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
-  year={2022},
-  eprint={2205.01068},
-  archivePrefix={arXiv},
-  primaryClass={cs.CL}
-}
-```
 commercial: no
 inference: false
 ---
+# OPT-13B-Erebus-4bit-128g
 ## Model description
+**Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
 
+This is a 4-bit GPTQ quantization of OPT-13B-Erebus. Original Model: **https://huggingface.co/KoboldAI/OPT-13B-Erebus**
 
+### Quantization Information
+Quantized with: https://github.com/0cc4m/GPTQ-for-LLaMa
 ```
+python repos/gptq/opt.py --wbits 4 models/KoboldAI_OPT-13B-Erebus c4 --groupsize 128 --save models/KoboldAI_OPT-13B-Erebus/4bit-128g.pt
 
+python repos/gptq/opt.py --wbits 4 models/KoboldAI_OPT-13B-Erebus c4 --groupsize 128 --save_safetensors models/KoboldAI_OPT-13B-Erebus/4bit-128g.safetensors
+```
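
For scale, the arithmetic behind the `--wbits 4 --groupsize 128` settings above can be sketched in a few lines (illustrative estimates only; real GPTQ checkpoints also carry `qzeros`, `g_idx`, biases, and embeddings, so the actual file will be somewhat larger):

```py
# Back-of-envelope size for OPT-13B at 4 bits with one fp16 scale per
# 128-weight group. Illustrative arithmetic, not a measured file size.
n_params = 13e9                              # approximate OPT-13B weight count
packed_gb = n_params * 4 / 8 / 1e9           # 4 bits per packed weight
scales_gb = n_params / 128 * 16 / 8 / 1e9    # fp16 scale per group of 128
fp16_gb = n_params * 16 / 8 / 1e9            # dense fp16 baseline
print(f"~{packed_gb + scales_gb:.1f} GB quantized vs ~{fp16_gb:.0f} GB fp16")
```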
 
 ### License
+OPT-13B is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
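
The resulting `4bit-128g.safetensors` can be sanity-checked without standing up the whole model. A minimal sketch, assuming a local copy of the file; the `qweight`/`qzeros`/`scales` tensor names are the usual GPTQ-for-LLaMa export layout and are an assumption here, not verified against this exact checkpoint:

```py
# Sketch: peek inside the GPTQ checkpoint (pip install safetensors torch).
# Tensor names (qweight/qzeros/scales per linear layer) are typical of
# GPTQ-for-LLaMa exports and are an assumption, not verified for this file.
from safetensors.torch import load_file

state_dict = load_file("models/KoboldAI_OPT-13B-Erebus/4bit-128g.safetensors")

for name, tensor in list(state_dict.items())[:8]:
    # 4-bit weights are packed into int32 `qweight` tensors; `scales` and
    # `qzeros` hold the per-group (--groupsize 128) dequantization parameters.
    print(f"{name}: shape={tuple(tensor.shape)}, dtype={tensor.dtype}")
```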