This LoRA model was trained on a combined ~50 MB of datasets containing various conversations, mainly generated by GPT-4. The model shows clear overfitting after 6 epochs. The base model is decapoda-research/llama-7b-hf. You can use https://github.com/tloen/alpaca-lora to run it.
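If you prefer to load the adapter directly with the Hugging Face PEFT library instead of the alpaca-lora scripts, a minimal sketch could look like the one below. The adapter id `your-user/this-lora` and the Alpaca-style prompt are placeholders, not part of this repo; substitute this model's actual Hub id and your own prompt format.

```python
# Sketch: load the base LLaMA model, then attach this LoRA adapter with PEFT.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model_id = "decapoda-research/llama-7b-hf"

tokenizer = LlamaTokenizer.from_pretrained(base_model_id)
model = LlamaForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the `accelerate` package
)

# Attach the LoRA weights on top of the base model.
# "your-user/this-lora" is a placeholder for this adapter's Hub id.
model = PeftModel.from_pretrained(model, "your-user/this-lora")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nHello!\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```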