LoRA-Learns-Less-and-Forgets-Less

Full finetuning and LoRA adapters for Llama-2-7B finetuned on Magicoder-Evol-Instruct-110K.
Organization Card
This organization hosts the model weights associated with the TMLR 2024 publication LoRA Learns Less and Forgets Less (Biderman et al., 2024). This work was done in collaboration with Databricks Mosaic AI Research.
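As a minimal sketch of how one of the LoRA adapters below might be loaded on top of the Llama-2-7B base model, assuming the Hugging Face `transformers` and `peft` libraries are installed: the adapter id is taken from the model list below, while the base checkpoint id `meta-llama/Llama-2-7b-hf` is an assumption (the paper finetunes Llama-2-7B, but the exact base repo is not stated on this page).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint id
adapter_id = "LoRA-TMLR-2024/magicoder-lora-rank-16-alpha-32"  # from the list below

# Load the frozen base model and tokenizer, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)

# Optionally fold the adapter weights into the base model for plain inference.
model = model.merge_and_unload()
```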
Models (17)

- LoRA-TMLR-2024/magicoder-lora-rank-16-alpha-32
- LoRA-TMLR-2024/starcoder-full-finetuning-lr-1e-05-20B-token
- LoRA-TMLR-2024/starcoder-lora-rank-256-20B-tokens
- LoRA-TMLR-2024/openwebmath-full-finetuning-lr-1e-05-20B-tokens
- LoRA-TMLR-2024/openwebmath-lora-rank-256-20B-tokens
- LoRA-TMLR-2024/openwebmath-lora-rank-16-20B-tokens
- LoRA-TMLR-2024/starcoder-lora-rank-16-20B-tokens
- LoRA-TMLR-2024/magicoder-lora-rank-64-alpha-128
- LoRA-TMLR-2024/magicoder-lora-rank-256-alpha-512
- LoRA-TMLR-2024/magicoder-full-finetuning-lr-5e-05
Datasets: none public yet.