---
base_model: google/flan-t5-large
library_name: peft
datasets:
- jhu-clsp/jfleg
language:
- en
pipeline_tag: text2text-generation
tags:
- text-generation-inference
- grammar
---

This model is part of the [GrammarCorrector](https://github.com/akhmat-s/GrammarCorrector) tool.
The article "[FlanT5 from scratch for the grammar correction tool](https://medium.com/@akhmat-s/flant5-from-scratch-for-the-grammar-correction-tool-deadba9a6778)" describes how this model was trained.


The primary objective of the experiment was to develop a highly effective tool using relatively small models, minimal datasets, and constrained computational resources.

To accomplish this goal, we implemented two key strategies:
- [Perplexity-based data pruning](https://arxiv.org/abs/2405.20541) with small reference models (sketched below).
- A simple sampling-and-voting method for [multiple LLM agents](https://arxiv.org/abs/2402.05120) (sketched below).
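
The pruning step can be illustrated roughly as follows. This is a sketch under assumptions: the `google/flan-t5-small` reference model, the keep fraction, and the example pairs are illustrative, not the exact setup used for this model.

```python
# Rough sketch of perplexity-based data pruning with a small reference model.
# Assumptions: reference model choice, keep fraction, and example pairs are illustrative.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

ref_name = "google/flan-t5-small"
tok = AutoTokenizer.from_pretrained(ref_name)
ref = AutoModelForSeq2SeqLM.from_pretrained(ref_name).eval()

def perplexity(source: str, target: str) -> float:
    """Perplexity of the corrected sentence given the source, under the reference model."""
    enc = tok(source, return_tensors="pt")
    labels = tok(target, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = ref(**enc, labels=labels).loss
    return torch.exp(loss).item()

pairs = [
    ("He go to school every days.", "He goes to school every day."),
    ("She have two cat.", "She has two cats."),
    ("They was happy.", "They were happy."),
]
scores = [perplexity(src, tgt) for src, tgt in pairs]
keep = int(0.7 * len(pairs))  # keep the lowest-perplexity fraction (assumption)
ranked = sorted(zip(pairs, scores), key=lambda x: x[1])
pruned = [pair for pair, _ in ranked[:keep]]
```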
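
A minimal sketch of the sampling-and-voting idea: draw several sampled corrections for the same input and return the most frequent one. The number of samples and the sampling hyperparameters are assumptions.

```python
# Minimal sketch of sampling-and-voting over multiple generations ("agents").
# Assumptions: n_samples and the sampling hyperparameters are illustrative.
from collections import Counter

def correct_with_voting(model, tokenizer, text: str, n_samples: int = 5) -> str:
    inputs = tokenizer(text, return_tensors="pt")
    candidates = []
    for _ in range(n_samples):
        out = model.generate(**inputs, do_sample=True, top_p=0.9, max_new_tokens=64)
        candidates.append(tokenizer.decode(out[0], skip_special_tokens=True))
    # Majority vote: return the most frequent sampled correction.
    return Counter(candidates).most_common(1)[0][0]
```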