Update README.md
# grammar-synthesis-small - beta
This model is a fine-tuned version of [pszemraj/grammar-synthesis-small-WIP](https://huggingface.co/pszemraj/grammar-synthesis-small-WIP) for grammar correction on an expanded version of the [JFLEG](https://paperswithcode.com/dataset/jfleg) dataset.

Usage in Python (after `pip install transformers`):

```python
from transformers import pipeline

# load the grammar correction model as a text2text generation pipeline
corrector = pipeline(
    'text2text-generation',
    'pszemraj/grammar-synthesis-small',
)

raw_text = 'i can has cheezburger'
results = corrector(raw_text)
print(results)
```
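
The `text2text-generation` pipeline forwards extra keyword arguments to `model.generate()`, so decoding can be tuned at call time. A minimal sketch of this (the parameter values below are illustrative, not this model's tuned defaults):

```python
from transformers import pipeline

corrector = pipeline(
    'text2text-generation',
    'pszemraj/grammar-synthesis-small',
)

# generation kwargs are passed through to model.generate();
# the values here are illustrative, not tuned defaults
results = corrector(
    'i can has cheezburger',
    num_beams=4,
    max_length=64,
    repetition_penalty=1.2,
)

# the pipeline returns a list of dicts with a 'generated_text' key
print(results[0]['generated_text'])
```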
## Model description

The intent is to create a text2text language model that successfully completes "single-shot grammar correction" on potentially grammatically incorrect text **that could have many mistakes**, with the important qualifier that **it does not semantically change text/information that IS grammatically correct.**
Compare some of the heavier-error examples on [other grammar correction models](https://huggingface.co/models?dataset=dataset:jfleg) to see the difference :)
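
A quick way to sanity-check that already-correct text is left alone is to run a grammatical sentence through the corrector and compare input to output. A minimal sketch (the exact output is not guaranteed to match character-for-character):

```python
from transformers import pipeline

corrector = pipeline(
    'text2text-generation',
    'pszemraj/grammar-synthesis-small',
)

correct_text = 'The weather is lovely today.'
result = corrector(correct_text)[0]['generated_text']

# ideally, correction is a no-op on already-correct input
print(result == correct_text, '|', result)
```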
## Limitations
- dataset: `cc-by-nc-sa-4.0`
- model: `apache-2.0`
- this is **still a work-in-progress**, and while it is probably useful for "single-shot grammar correction" in many cases, **give the outputs a glance for correctness, ok?**
## Use Cases
Obviously, this section is quite general: there are many things one can use "single-shot grammar correction" for. Some ideas or use cases:
1. Correcting highly error-prone LM outputs. Some examples would be audio transcription (ASR) (in fact, some of the training examples are literally this) or something like handwriting OCR.
   - To be investigated further; depending on what model/system is used, it _might_ be worth applying this after OCR on typed characters.
2. Correcting/infilling the output of text generation models so it is cohesive and free of obvious errors that break conversational immersion. I use this on the outputs of [this OPT 2.7B chatbot-esque model of myself](https://huggingface.co/pszemraj/opt-peter-2.7B); see the sketch after this list.
> An example of this model running on CPU with beam search:

```
original response:
ive heard it attributed to a bunch of different philosophical schools, including stoicism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to
synthesizing took 306.12 seconds
Final response in 1294.857 s:
I've heard it attributed to a bunch of different philosophical schools, including solipsism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to speak)
```

_Note: I have some other logic that removes any period at the end of the final sentence in this chatbot setting [to avoid coming off as passive-aggressive](https://www.npr.org/2020/09/05/909969004/before-texting-your-kid-make-sure-to-double-check-your-punctuation)._
3. Somewhat related to #2 above: fixing/correcting so-called [tortured phrases](https://arxiv.org/abs/2107.06751) that are dead giveaways that text was generated by a language model. _Note that **some** of these are not fixed, especially as they venture into domain-specific terminology (e.g., "irregular timberland" instead of "Random Forest")._
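
As a rough illustration of use cases 1 and 2, the corrector can be chained after any noisy text source (ASR, OCR, or a text generator) as a post-processing step. A minimal sketch; the generator below is a small placeholder model, not the OPT chatbot linked above:

```python
from transformers import pipeline

# any noisy text source works here: ASR output, OCR output, or a
# generator; 'gpt2' is just a small placeholder for the sketch
generator = pipeline('text-generation', 'gpt2')
corrector = pipeline(
    'text2text-generation',
    'pszemraj/grammar-synthesis-small',
)

prompt = 'the best way to learn a language is'
raw = generator(prompt, max_new_tokens=40)[0]['generated_text']

# post-process the raw generation with single-shot grammar correction
cleaned = corrector(raw)[0]['generated_text']
print(cleaned)
```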
## Training and evaluation data
More information needed 😉
|
99 |
|