# Llama 2 7B, fine-tuned on Panorama media
This repo contains the QLoRA adapter.
Prompt format:

```
Write a hypothetical news story based on the given headline
### Title:
{prompt}
### Text:
```
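A minimal usage sketch, assuming the adapter is applied on top of the base model with `peft` (plus `transformers` and `bitsandbytes` installed, and access to the gated Llama 2 weights); the headline and generation parameters are illustrative, not settings from training:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "lapki/Llama-2-7b-panorama-QLoRA"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# Load the base model in 4-bit; the full training-time config is listed below.
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    ),
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)

# Fill the prompt template with a headline (hypothetical example).
headline = "Scientists teach pigeons to review pull requests"
prompt = (
    "Write a hypothetical news story based on the given headline\n"
    f"### Title:\n{headline}\n"
    "### Text:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```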
## Training procedure
The following `bitsandbytes` quantization config was used during training (a sketch of the equivalent code follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
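For reference, the same settings expressed as a `transformers` `BitsAndBytesConfig`; the fields above map directly onto its constructor arguments:

```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,                      # 4-bit base weights for QLoRA
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,   # half-precision compute
)
```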
## Framework versions
- PEFT 0.5.0.dev0
## Additional information
Thanks to its5Q for the dataset and help.
## Base model

meta-llama/Llama-2-7b-hf