---
license: mit
datasets:
- garage-bAInd/Open-Platypus
- databricks/databricks-dolly-15k
- timdettmers/openassistant-guanaco
language:
- en
pipeline_tag: text-generation
---
# GPT2_platypus-dolly-guanaco
**gpt2_platypus-dolly-guanaco** is an instruction fine-tuned model based on the GPT-2 transformer architecture.
### Benchmark Metrics
| Metric | gpt2_platypus-dolly-guanaco | GPT-2 (base) |
|-----------------------|-------|-------|
| Avg. | **30.18** | 29.9 |
| ARC (25-shot) | **23.21** | 21.84 |
| HellaSwag (10-shot) | 31.04 | **31.6** |
| MMLU (5-shot) | **26.16** | 25.86 |
| TruthfulQA (0-shot) | 40.31 | **40.67** |
We used the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, pinned to the same version as the Hugging Face LLM Leaderboard. See below for a sketch of how to reproduce the benchmark results.
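As a rough guide, a single leaderboard task can be re-run through the harness's Python API. This is a hedged sketch: the `hf-causal` backend and `simple_evaluate` call match older harness releases such as the one the leaderboard used, and the exact commit and task flags may differ.

```python
# Sketch: reproducing the 25-shot ARC score with lm-evaluation-harness.
# The exact harness version used by the HF LLM Leaderboard may differ.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",  # Hugging Face causal-LM backend (older harness naming)
    model_args="pretrained=lgaalves/gpt2_platypus-dolly-guanaco",
    tasks=["arc_challenge"],
    num_fewshot=25,  # matches the 25-shot ARC setting in the table above
)
print(results["results"])
```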
### Model Details
* **Trained by**: Luiz G A Alves
* **Model type:** **gpt2_platypus-dolly-guanaco** is an auto-regressive language model based on the GPT-2 transformer architecture.
* **Language(s)**: English
### How to use
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="lgaalves/gpt2_platypus-dolly-guanaco")
question = "What is a large language model?"
answer = pipe(question)
print(answer[0]["generated_text"])
```
Alternatively, you can load the model directly:
```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("lgaalves/gpt2_platypus-dolly-guanaco")
model = AutoModelForCausalLM.from_pretrained("lgaalves/gpt2_platypus-dolly-guanaco")
```
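Once loaded this way, generation goes through `model.generate`. A minimal sketch; the sampling parameters below are illustrative assumptions, not settings used to train or evaluate this model:

```python
# Minimal generation sketch; sampling parameters are illustrative only.
inputs = tokenizer("What is a large language model?", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=100,                   # cap on newly generated tokens
    do_sample=True,                       # sample instead of greedy decoding
    top_p=0.9,                            # nucleus sampling
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 defines no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```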
### Training Dataset
`lgaalves/gpt2_platypus-dolly-guanaco` was trained using 3 datasets:
- [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
- [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
- [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco)
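To inspect or reuse the same data, the three datasets can be fetched with the `datasets` library. A sketch only: the prompt formatting and mixing actually used for training are not documented in this card.

```python
# Sketch: loading the three instruction datasets used for fine-tuning.
# How they were formatted and mixed for training is not documented here.
from datasets import load_dataset

platypus = load_dataset("garage-bAInd/Open-Platypus", split="train")
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
guanaco = load_dataset("timdettmers/openassistant-guanaco", split="train")

print(len(platypus), len(dolly), len(guanaco))
```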
### Training Procedure
`lgaalves/gpt2_platypus-dolly-guanaco` was instruction fine-tuned using LoRA on a single T4 GPU on Google Colab; training took about one hour.
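For reference, a LoRA setup along these lines can be built with the `peft` library. This is a hedged sketch: the rank, alpha, dropout, and target modules below are assumptions, since the actual hyperparameters were not published with this card.

```python
# Sketch of a LoRA setup for GPT-2 with peft.
# All hyperparameters are illustrative assumptions, not the author's values.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
lora_config = LoraConfig(
    r=16,                       # rank of the LoRA update matrices (assumed)
    lora_alpha=32,              # scaling factor (assumed)
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```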
### Intended uses, limitations & biases
You can use the raw model for text generation or fine-tune it for a downstream task. The model was not extensively tested and may produce false information. Its training data contains a lot of unfiltered content from the internet, which is far from neutral.