# Model Description

The t5-base-c4jfleg model was created by fine-tuning the T5-base model on the [**JFLEG dataset**](https://huggingface.co/datasets/jfleg) and the [**C4 200M dataset**](https://huggingface.co/datasets/liweili/c4_200m), using around 3,000 examples from each, with the objective of grammar correction.

Google's original [**T5-base**](https://huggingface.co/t5-base) model was pre-trained on the [**C4 dataset**](https://huggingface.co/datasets/c4).

The T5 model was presented in [**Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer**](https://arxiv.org/pdf/1910.10683.pdf) by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu.

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("team-writing-assistant/t5-base-c4jfleg")
model = AutoModelForSeq2SeqLM.from_pretrained("team-writing-assistant/t5-base-c4jfleg")
```

## Examples

Input: My grammar are bad.

Output: My grammar is bad.

Input: Speed of light is fastest than speed of sound

Output: Speed of light is faster than speed of sound.