---
license: llama3
language:
- en
library_name: CTranslate2
pipeline_tag: text-generation
tags:
- facebook
- meta
- llama
- llama-3
- kaltcit
- cat
- ct2
- quantized model
- int8
base_model: turboderp/llama3-turbcat-instruct-8b
---
# CTranslate2 int8 version of turbcat 8b

This is an int8_float16 quantization of [turbcat 8b](https://huggingface.co/turboderp/llama3-turbcat-instruct-8b)\
See more on CTranslate2: [Docs](https://opennmt.net/CTranslate2/index.html) | [Github](https://github.com/OpenNMT/CTranslate2)

This model and its dataset were created by [Kaltcit](discord://discord.com/users/550000146289524737), an admin of the [Exllama](https://discord.gg/NSFwVuCjRq) Discord server.

This model was converted to ct2 format using the following command:
```
ct2-transformers-converter --model kat_turbcat --output_dir turbcat-ct2 --quantization int8_float16 --low_cpu_mem_usage
```

***No conversion is needed when using the model from this repository, as it is already in ct2 format.***
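
Once the repository is downloaded, the quantized model can be loaded with the CTranslate2 Python API. A minimal sketch is below; the local directory name `turbcat-ct2` and the generation parameters are assumptions for illustration, and the tokenizer is taken from the original base model repository:

```python
import ctranslate2
import transformers

# Load the ct2 model directory (use device="cpu" if no GPU is available).
generator = ctranslate2.Generator("turbcat-ct2", device="cuda")

# The ct2 directory holds only the converted weights, so the tokenizer
# comes from the original Hugging Face repository.
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "turboderp/llama3-turbcat-instruct-8b"
)

prompt = "Write a haiku about cats."

# CTranslate2 expects token strings, not ids.
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

results = generator.generate_batch(
    [tokens],
    max_length=128,
    sampling_temperature=0.7,
    sampling_topk=40,
)

# Decode the generated token ids back into text.
print(tokenizer.decode(results[0].sequences_ids[0]))
```

Running this requires the model weights and a working CTranslate2 install, so it is a usage sketch rather than a self-contained script.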