---
datasets:
- Yukang/LongAlpaca-16k-length
library_name: transformers
pipeline_tag: text-generation
base_model: mattshumer/Llama-3-8B-16K
---

# Llama-3-8B-16K-GGUF
- This is a quantized version of [mattshumer/Llama-3-8B-16K](https://huggingface.co/mattshumer/Llama-3-8B-16K), created using llama.cpp

# Model Description
This is an extended-context (16K) version of LLaMA 3 8B (base, not instruct). It was trained for five hours on 8x A6000 GPUs using the `Yukang/LongAlpaca-16k-length` dataset.

`rope_theta` was set to `1000000.0`. Training was done with Axolotl.
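For intuition on why raising `rope_theta` helps with longer contexts: in the standard RoPE formulation, each pair of head dimensions rotates at an inverse frequency of `theta ** (-2i / head_dim)`, so a larger base slows the rotation and keeps positions beyond the original window distinguishable. The sketch below illustrates this numerically; the comparison base of `500000.0` is an assumption used only for contrast, not a value stated in this card.

```python
import math

def rope_inv_freqs(theta: float, head_dim: int):
    """Inverse frequencies for rotary position embeddings (RoPE):
    inv_freq_i = theta ** (-2i / head_dim) for i in 0 .. head_dim/2 - 1."""
    return [theta ** (-2 * i / head_dim) for i in range(head_dim // 2)]

# Llama 3 8B has head_dim = 128 (hidden size 4096 / 32 heads).
# Comparing a smaller base (assumed here for illustration) against the
# 1000000.0 used by this model shows that every rotation slows down.
base_freqs = rope_inv_freqs(500000.0, 128)   # assumed comparison base
long_freqs = rope_inv_freqs(1000000.0, 128)  # value used for this 16K model

# With a larger theta, every inverse frequency is no larger than before,
# so angular positions advance more slowly per token.
assert all(l <= b for l, b in zip(long_freqs, base_freqs))
```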