---
license: apache-2.0
tags:
- generated_from_trainer
- distilgpt2
- email generation
- email
datasets:
- aeslc
- postbot/multi_emails
widget:
- text: 'Good Morning Professor Beans,
Hope you are doing well. I just wanted to reach out and ask if differential calculus
will be on the exam'
example_title: email to prof
- text: 'Hey <NAME>,
Thank you for signing up for my weekly newsletter. Before we get started, you''ll
have to confirm your email address.'
example_title: newsletter
- text: 'Hi <NAME>,
I hope this email finds you well. I wanted to reach out and ask about office hours'
example_title: office hours
- text: 'Greetings <NAME>,
I hope you had a splendid evening at the Company sausage eating festival. I am
reaching out because'
example_title: festival
- text: 'Good Morning Harold,
I was wondering when the next'
example_title: event
- text: URGENT - I need the TPS reports
example_title: URGENT
- text: 'Hi Archibald,
I hope this email finds you extremely well.'
example_title: emails that find you
- text: 'Hello there.
I just wanted to reach out and check in to'
example_title: checking in
- text: 'Hello <NAME>,
I hope this email finds you well. I wanted to reach out and see if you''ve enjoyed
your time with us'
example_title: work well
- text: 'Hi <NAME>,
I hope this email finds you well. I wanted to reach out and see if we could catch
up'
example_title: catch up
- text: I'm <NAME> and I just moved into the area and wanted to reach out and get
some details on where I could get groceries and
example_title: grocery
parameters:
min_length: 4
max_length: 128
length_penalty: 0.8
no_repeat_ngram_size: 2
do_sample: false
num_beams: 8
early_stopping: true
repetition_penalty: 5.5
base_model: distilgpt2
---
# distilgpt2-emailgen
Why write the rest of your email when you can generate it?
```python
from transformers import pipeline

model_tag = "postbot/distilgpt2-emailgen"
generator = pipeline(
    'text-generation',
    model=model_tag,
)

prompt = """
Hello,
Following up on the bubblegum shipment."""

# greedy decoding, generating up to 64 tokens
result = generator(
    prompt,
    max_length=64,
    do_sample=False,
    early_stopping=True,
)
print(result[0]['generated_text'])
```
- try it in a [Google Colab](https://colab.research.google.com/gist/pszemraj/91df57e0c2caf1d5273b78576ad2853e/postbot-distilgpt2-emailgen-demo.ipynb) notebook
- Use it in bash/cmd [with this gist](https://gist.github.com/pszemraj/c1b0a76445418b6bbddd5f9633d1bb7f) :)
> For this model, formatting matters. Results may differ (significantly) between a structured multi-line prompt like the one above and a single-line prompt such as `prompt = "Hey, just wanted to ..."`.
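
To get output closer to what the hosted widget produces, you can also pass the decoding settings listed in the `parameters` block of the card metadata (beam search instead of greedy decoding). A minimal sketch, reusing the `generator` pipeline from the snippet above and one of the widget prompts:

```python
# decoding kwargs mirror the `parameters` block in the card metadata
prompt = """
Good Morning Harold,
I was wondering when the next"""

result = generator(
    prompt,
    min_length=4,
    max_length=128,
    length_penalty=0.8,
    no_repeat_ngram_size=2,
    do_sample=False,
    num_beams=8,
    early_stopping=True,
    repetition_penalty=5.5,
)
print(result[0]['generated_text'])
```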
## Model description
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on a dataset of 50k emails, including the classic `aeslc` dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6247
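  (for reference, a cross-entropy loss of 2.6247 corresponds to a perplexity of roughly exp(2.6247) ≈ 13.8)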
## Intended uses & limitations
The intended use of this model is to provide suggestions to "autocomplete" the rest of your email. Said another way, it should serve as a **tool to write predictable emails faster**. It is not intended to write entire emails; at least **some input** is required to guide the direction of the model.
Please verify the model's suggestions for A) false claims and B) negated statements before accepting/sending anything.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 5
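
The training script itself is not included in this card; the sketch below is one plausible way the hyperparameters above could map onto `transformers.TrainingArguments`. Dataset loading, tokenization, and the data collator are elided, and the `output_dir` name is an assumption. The Adam betas/epsilon listed above are the library defaults, so they are not set explicitly here.

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")  # used in the (elided) tokenization step
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# hypothetical mapping of the hyperparameters listed above
args = TrainingArguments(
    output_dir="distilgpt2-emailgen",  # assumption
    learning_rate=6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=32,  # accumulates to the total_train_batch_size of 256
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    # train_dataset / eval_dataset: the tokenized ~50k-email corpus (not shown)
)
# trainer.train()
```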
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8299 | 1.0 | 248 | 2.7971 |
| 2.6984 | 2.0 | 496 | 2.6826 |
| 2.7022 | 3.0 | 744 | 2.6361 |
| 2.6436 | 4.0 | 992 | 2.6245 |
| 2.6195 | 5.0 | 1240 | 2.6247 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_postbot__distilgpt2-emailgen).
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 24.89 |
| ARC (25-shot) | 21.76 |
| HellaSwag (10-shot) | 27.52 |
| MMLU (5-shot) | 25.97 |
| TruthfulQA (0-shot) | 46.17 |
| Winogrande (5-shot) | 51.62 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 1.16 |