---
license: mit
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- viewv/LLaMA-13B
- tatsu-lab/alpaca
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- alpaca
- vicuna
- finetuned
- unfiltered
language:
- en
---
## Model Description
- This model is **filtered** and **quantized** to a 4-bit binary file.
- This 13B model has been trained on the Alpaca dataset and fine-tuned on the Vicuna dataset.
- This model works with [llama.cpp](https://github.com/ggerganov/llama.cpp); see the usage sketch below.
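The snippet below is a minimal sketch of loading the quantized binary through the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings for llama.cpp, assuming the file is compatible with your llama.cpp build. The filename `ggml-model-q4_0.bin` and the Alpaca-style prompt template are assumptions for illustration, not part of this card; substitute the path of the file you downloaded and your preferred prompt format.

```python
# Minimal sketch: run the 4-bit quantized model via llama-cpp-python.
from llama_cpp import Llama

# Hypothetical filename -- point this at the quantized binary you downloaded.
llm = Llama(model_path="./ggml-model-q4_0.bin")

# Alpaca-style prompt template (an assumption; adjust to your use case).
prompt = (
    "### Instruction:\n"
    "Explain what 4-bit quantization does to a language model.\n\n"
    "### Response:\n"
)

# Generate a completion and print the text of the first choice.
output = llm(prompt, max_tokens=128)
print(output["choices"][0]["text"])
```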