---
license: llama3
language:
- en
library_name: CTranslate2
pipeline_tag: text-generation
tags:
- facebook
- meta
- llama
- llama-3
- kaltcit
- cat
- ct2
- quantized model
- int8
base_model: turboderp/llama3-turbcat-instruct-8b
---
# CTranslate2 int8 version of turbcat 8b

This is an int8_float16 quantization of turbcat 8b.
See more on CTranslate2: [Docs](https://opennmt.net/CTranslate2/) | [Github](https://github.com/OpenNMT/CTranslate2)
This model and its dataset were created by Kaltcit, an admin of the Exllama Discord server.
This model was converted to ct2 format using the following command:
```
ct2-transformers-converter --model kat_turbcat --output_dir turbcat-ct2 --quantization int8_float16 --low_cpu_mem_usage
```
No conversion is needed when using the model from this repository, as it is already in ct2 format.
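For reference, loading the converted model and generating text from Python might look like the sketch below. The local directory name `turbcat-ct2`, the choice of tokenizer repo, and the sampling settings are illustrative assumptions, not a fixed recipe.

```python
# Minimal sketch: load the int8_float16 CTranslate2 model and generate one reply.
# Assumptions: local model dir "turbcat-ct2", a CUDA GPU, tokenizer taken from
# the base model repo, and illustrative sampling parameters.
import ctranslate2
from transformers import AutoTokenizer

generator = ctranslate2.Generator("turbcat-ct2", device="cuda", compute_type="int8_float16")
tokenizer = AutoTokenizer.from_pretrained("turboderp/llama3-turbcat-instruct-8b")

# Build a Llama 3 chat prompt and convert it to the token strings CTranslate2 expects.
messages = [{"role": "user", "content": "Why do cats purr?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
tokens = tokenizer.convert_ids_to_tokens(input_ids)

results = generator.generate_batch(
    [tokens],
    max_length=256,
    sampling_topk=10,
    sampling_temperature=0.7,
    include_prompt_in_result=False,
)
print(tokenizer.decode(results[0].sequences_ids[0]))
```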