Vijayendra/llama3-8b-lora-cyclic-attention
Tags: PEFT · PyTorch · Safetensors · llama · 4-bit precision · bitsandbytes · arxiv:1910.09700
llama3-8b-lora-cyclic-attention / tokenizer.json
Vijayendra: "Upload fine-tuned LoRA model with cyclic attention" (commit 22876b4, verified, 4 months ago)
File size: 9.09 MB