Vijayendra/llama3-8b-lora-cyclic-attention
Tags: PEFT · PyTorch · Safetensors · llama · 4-bit precision · bitsandbytes · arxiv:1910.09700
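Given the PEFT, bitsandbytes, and 4-bit precision tags, a minimal loading sketch might look like the following. The base model id is an assumption inferred from the repo name (the page does not state it); the adapter is attached with the standard peft API, and the tokenizer files listed below ship with the repo.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B"  # assumption: base model inferred from the repo name
adapter_id = "Vijayendra/llama3-8b-lora-cyclic-attention"

# 4-bit NF4 quantization via bitsandbytes, matching the repo's 4-bit precision tag
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# tokenizer.json, tokenizer_config.json, and special_tokens_map.json are in this
# repo, so the tokenizer can be loaded from the adapter id directly.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Hello, world", return_tensors="pt").to(base.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```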
Branch: main · 1 contributor · History: 6 commits
Latest commit: Update peft_config.json · 374d2e2 (verified) · Vijayendra · about 2 months ago
File                         Size        Last commit                                           Date
.gitattributes               1.52 kB     initial commit                                        3 months ago
README.md                    5.1 kB      Upload fine-tuned LoRA model with cyclic attention    3 months ago
adapter_config.json          731 Bytes   Update adapter_config.json                            3 months ago
adapter_model.safetensors    40 Bytes    Upload fine-tuned LoRA model with cyclic attention    3 months ago   (LFS)
config.json                  1.17 kB     Upload fine-tuned LoRA model with cyclic attention    3 months ago
peft_config.json             717 Bytes   Update peft_config.json                               about 2 months ago
pytorch_model.bin            5.87 GB     Upload fine-tuned LoRA model with cyclic attention    3 months ago   (LFS, pickle)
special_tokens_map.json      464 Bytes   Upload fine-tuned LoRA model with cyclic attention    3 months ago
tokenizer.json               9.09 MB     Upload fine-tuned LoRA model with cyclic attention    3 months ago
tokenizer_config.json        50.6 kB     Upload fine-tuned LoRA model with cyclic attention    3 months ago

Note: pytorch_model.bin is a pickled checkpoint. Detected pickle imports (5):
torch.BFloat16Storage, collections.OrderedDict, torch.FloatStorage,
torch.ByteStorage, torch._utils._rebuild_tensor_v2.
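Because pytorch_model.bin is a pickled checkpoint, one cautious way to inspect it is to restrict unpickling to tensor primitives. A sketch, assuming the file is fetched locally first; torch.load with weights_only=True permits standard tensor-rebuilding globals like the five imports detected above, but refuses pickles that would execute arbitrary code.

```python
import torch
from huggingface_hub import hf_hub_download

# Download the checkpoint from this repo (one way to obtain a local path)
path = hf_hub_download(
    repo_id="Vijayendra/llama3-8b-lora-cyclic-attention",
    filename="pytorch_model.bin",
)

# weights_only=True restricts unpickling to tensor primitives, so the five
# detected imports above load fine while arbitrary pickle code is rejected.
state_dict = torch.load(path, map_location="cpu", weights_only=True)

# Print a few entries to see what the checkpoint actually contains
for name, tensor in list(state_dict.items())[:10]:
    print(f"{name}: {tuple(tensor.shape)} {tensor.dtype}")
```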