---
language: 
  - en
datasets:
  - natural_instructions
  - the_pile
  - cot
  - Muennighoff/P3
tags:
- ctranslate2
- int8
- float16
- gpt
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 0.1
widget:
- text: "Is this review positive or negative? Review: Best cast iron skillet you will ever buy. Answer:"
  example_title: "Sentiment analysis"
- text: "Where is Zurich? Ans:"
  example_title: "Question Answering"
---
# Fast-Inference with CTranslate2
Speed up inference while reducing memory usage by 2x-4x using int8 inference in C++ on CPU or GPU.

Quantized version of [togethercomputer/GPT-JT-6B-v0](https://huggingface.co/togethercomputer/GPT-JT-6B-v0).
```bash
pip install "hf-hub-ctranslate2>=2.0.6" "ctranslate2>=3.13.0"
```
Converted on 2023-05-19 using
```bash
ct2-transformers-converter --model togethercomputer/GPT-JT-6B-v0 --output_dir /home/michael/tmp-ct2fast-GPT-JT-6B-v0 --force --copy_files merges.txt tokenizer.json README.md tokenizer_config.json vocab.json special_tokens_map.json added_tokens.json .gitattributes --quantization float16
```
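
The converted checkpoint can also be loaded directly with the CTranslate2 Python API instead of the wrapper shown below. A minimal sketch, assuming the repository has been downloaded to a local folder named `ct2fast-GPT-JT-6B-v0`:
```python
import ctranslate2
from transformers import AutoTokenizer

# Local folder containing the converted model (assumed download location)
generator = ctranslate2.Generator(
    "ct2fast-GPT-JT-6B-v0", device="cuda", compute_type="int8_float16"
)
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/GPT-JT-6B-v0")

# CTranslate2 generators take token strings, not ids
start_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode("Where is Zurich? Ans:"))
results = generator.generate_batch([start_tokens], max_length=32, sampling_temperature=0.1)
print(tokenizer.decode(results[0].sequences_ids[0]))
```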

Checkpoint compatible with [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2):
- `compute_type=int8_float16` for `device="cuda"` 
- `compute_type=int8`  for `device="cpu"`

```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer

model_name = "michaelfeil/ct2fast-GPT-JT-6B-v0"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
        # load in int8 on CUDA
        model_name_or_path=model_name, 
        device="cuda",
        compute_type="int8_float16",
        tokenizer=AutoTokenizer.from_pretrained("togethercomputer/GPT-JT-6B-v0")
)
outputs = model.generate(
    text=["How do you call a fast Flan-ingo?", "User: How are you doing? Bot:"],
)
print(outputs)
```
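
The same wrapper runs on CPU with the `int8` compute type from the list above; a minimal sketch mirroring the CUDA call:
```python
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub
from transformers import AutoTokenizer

# load in int8 on CPU
model = GeneratorCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-GPT-JT-6B-v0",
    device="cpu",
    compute_type="int8",
    tokenizer=AutoTokenizer.from_pretrained("togethercomputer/GPT-JT-6B-v0"),
)
print(model.generate(text=["Where is Zurich? Ans:"]))
```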

# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original Hugging Face repository.

# Original description
    

# Quick Start

```python
from transformers import pipeline

pipe = pipeline(model='togethercomputer/GPT-JT-6B-v0')

pipe("Where is Zurich? Ans:")
```
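
The hosted widget above is configured with `temperature: 0.1`. To approximate that behaviour locally, the usual `transformers` generation arguments can be passed to the pipeline (a sketch; the argument values are assumptions, not recommendations from the original authors):
```python
from transformers import pipeline

pipe = pipeline(model="togethercomputer/GPT-JT-6B-v0")

# Sentiment prompt from the widget, sampled at the widget's low temperature
out = pipe(
    "Is this review positive or negative? Review: Best cast iron skillet you will ever buy. Answer:",
    do_sample=True,
    temperature=0.1,
    max_new_tokens=5,
)
print(out[0]["generated_text"])
```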