---
base_model:
- unsloth/Meta-Llama-3.1-8B-Instruct
- RLHFlow/Llama3.1-8B-PRM-Mistral-Data
library_name: transformers
tags:
- mergekit
- peft
---
# Mistral-Data-r128-LoRA

This is a LoRA adapter extracted from a language model using [mergekit](https://github.com/arcee-ai/mergekit).
## LoRA Details

This LoRA adapter was extracted from [RLHFlow/Llama3.1-8B-PRM-Mistral-Data](https://huggingface.co/RLHFlow/Llama3.1-8B-PRM-Mistral-Data) and uses [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) as a base.
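To apply the adapter at inference time, a minimal sketch using `transformers` and `peft` follows. The adapter path is a placeholder (not part of the original card); substitute a local directory or the Hub repo id where this adapter lives.

```python
# Minimal usage sketch: load the base model and layer the extracted
# LoRA weights on top of it. ADAPTER_PATH is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "unsloth/Meta-Llama-3.1-8B-Instruct"
ADAPTER_PATH = "Mistral-Data-r128-LoRA"  # local path or Hub repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype="auto")

# Apply the extracted LoRA adapter on top of the base weights.
model = PeftModel.from_pretrained(base, ADAPTER_PATH)
```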
### Parameters

The following command was used to extract this LoRA adapter:

```sh
mergekit-extract-lora RLHFlow/Llama3.1-8B-PRM-Mistral-Data unsloth/Meta-Llama-3.1-8B-Instruct OUTPUT_PATH --no-lazy-unpickle --skip-undecomposable --rank=128 --extend-vocab --model_name=Mistral-Data-r128-LoRA --verbose
```
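Because the adapter encodes a rank-128 approximation of the difference between the finetuned model and the base, merging it back into the base approximately reconstructs the finetuned weights. A hedged sketch using peft's `merge_and_unload`, building on the loading example above; the output directory is a placeholder:

```python
# Sketch: fold the LoRA weights back into the base model. The result should
# approximate RLHFlow/Llama3.1-8B-PRM-Mistral-Data up to the rank-128 truncation.
merged = model.merge_and_unload()  # `model` from the loading example above
merged.save_pretrained("OUTPUT_PATH")  # placeholder output directory
```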