---
license: apache-2.0
datasets:
- totally-not-an-llm/everything-sharegptformat-morecleaned
language:
- en
pipeline_tag: text-generation
---

This is [OpenLLaMA 3B V2](https://huggingface.co/openlm-research/open_llama_3b_v2) finetuned on [EverythingLM Data (ShareGPT format, more cleaned)](https://huggingface.co/datasets/totally-not-an-llm/everything-sharegptformat-morecleaned) for 1 epoch.

Prompt template:

```
### HUMAN:
{prompt}

### RESPONSE:
```

Note: Don't expect this model to be good; I was just starting out with finetuning. So please don't roast me!
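
Below is a minimal usage sketch with `transformers`, assuming the prompt template above; the repo ID is a placeholder, and generation settings are only examples to adjust for your hardware.

```python
# Minimal usage sketch. Assumptions: "your-username/openllama-3b-v2-everythinglm"
# is a placeholder repo ID, and dtype/device settings depend on your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/openllama-3b-v2-everythinglm"  # placeholder, replace with the actual model ID

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Build the prompt using the template shown above.
prompt = "### HUMAN:\nWhat is the capital of France?\n\n### RESPONSE:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```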