---
license: cc-by-sa-3.0
inference: false
language:
- en
library_name: transformers
pipeline_tag: text2text-generation
datasets:
- pszemraj/dolly_hhrlhf-text2text
tags:
- instruct
- dolly_hhrlhf
---
# bart-base-instruct: dolly_hhrlhf
<a href="https://colab.research.google.com/gist/pszemraj/a0c0a8cc24abfbf609f75f9d5c56c348/bart-base-instruct-example.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the pszemraj/dolly_hhrlhf-text2text dataset.
## Model description
This is a text2text model fine-tuned on a [modified dataset for text2text generation](https://huggingface.co/datasets/pszemraj/dolly_hhrlhf-text2text) derived from the relatively more permissive [mosaicml/dolly_hhrlhf](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) dataset.
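To get a sense of the input/output format, you can inspect the dataset directly. A minimal sketch, assuming only that the dataset loads with the `datasets` library (the column names are whatever the dataset defines):

```python
# quick look at the training data; column names depend on the dataset itself
from datasets import load_dataset

ds = load_dataset("pszemraj/dolly_hhrlhf-text2text")
print(ds)              # available splits and columns
print(ds["train"][0])  # one prompt/response example
```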
Basic usage in Python:
```python
# pip install -q transformers accelerate
from transformers import pipeline, GenerationConfig
model_name = "pszemraj/bart-base-instruct-dolly_hhrlhf"
assistant = pipeline(
    "text2text-generation",
    model_name,
    device_map="auto",
)
cfg = GenerationConfig.from_pretrained(model_name)
# pass an 'instruction' as the prompt to the pipeline
prompt = "Write a guide on how to become a ninja while working a 9-5 job."
result = assistant(prompt, generation_config=cfg)[0]["generated_text"]
print(result)
```
> Using the saved generation config is optional; you can substitute other generation parameters instead, as in the sketch below.
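If you skip the saved config, generation parameters can be passed to the pipeline call directly. A minimal sketch with illustrative values (these are not the model's tuned defaults):

```python
# illustrative generation parameters; adjust to taste
result = assistant(
    prompt,
    max_length=192,
    num_beams=4,
    no_repeat_ngram_size=3,
    early_stopping=True,
)[0]["generated_text"]
print(result)
```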
## Intended uses & limitations
- this model is **not** tuned with RLHF or similar alignment methods, and may produce offensive or incorrect outputs
- the model is relatively small (~600 MB), so its "cognition" abilities are rather limited
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3.0
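For context, the values above roughly correspond to a `Seq2SeqTrainingArguments` setup like the sketch below; this is a hypothetical reconstruction under assumptions, not the original training script:

```python
# hypothetical reconstruction of the training arguments; not the original script
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-base-instruct-dolly_hhrlhf",
    learning_rate=4e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,   # effective batch size of 64
    num_train_epochs=3.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    seed=42,
)
```

The Adam betas (0.9, 0.999) and epsilon (1e-08) listed above are the library defaults, so they are not set explicitly in this sketch.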