---
datasets:
  - Yukang/LongAlpaca-16k-length
library_name: transformers
pipeline_tag: text-generation
base_model: mattshumer/Llama-3-8B-16K
---

# Llama-3-8B-16K-GGUF

## Model Description

This repository provides GGUF files for mattshumer/Llama-3-8B-16K, an extended (16K) context version of Llama 3 8B (base, not instruct). The model was trained for five hours on 8x A6000 GPUs using the Yukang/LongAlpaca-16k-length dataset.
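
As a usage illustration, here is a minimal sketch that loads one of the GGUF files with llama-cpp-python and requests the full 16K context window. The filename and quantization level shown are assumptions, not part of this card; substitute whichever GGUF file you download from this repository.

```python
# Minimal inference sketch with llama-cpp-python.
# The model_path below is hypothetical -- use the GGUF file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-8B-16K.Q4_K_M.gguf",  # hypothetical filename/quant
    n_ctx=16384,  # request the extended 16K context window
)

output = llm(
    "Summarize the following document:\n\n<your long document here>",
    max_tokens=256,
)
print(output["choices"][0]["text"])
```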

`rope_theta` was set to 1000000.0. The model was trained with Axolotl.
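
To verify the RoPE setting yourself, the sketch below inspects the config of the original (non-GGUF) checkpoint with transformers. It assumes the base model listed above is accessible and that its config exposes `rope_theta`, as Llama-style configs normally do.

```python
# Quick check of the RoPE base frequency on the original FP16 checkpoint
# (not part of this GGUF repo).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("mattshumer/Llama-3-8B-16K")
print(config.rope_theta)               # expected: 1000000.0, per the card above
print(config.max_position_embeddings)  # context length as configured in the checkpoint
```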