
A more recent version can be found here. Training smaller and/or comparably sized models is a WIP.

t5-v1_1-base-ft-jflAUG

GOAL: a more robust and generalized grammar and spelling correction model that corrects everything in a single shot. It should have minimal impact on the semantics of correct sentences (i.e., it does not change things that do not need to be changed).

  • this model (at least in preliminary testing) can handle a large number of errors in the source text (e.g., raw audio transcriptions) and still produce cohesive results.
  • it is a fine-tuned version of google/t5-v1_1-base on an expanded version of the JFLEG dataset; a usage sketch follows below.
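
As a text-to-text model, the checkpoint can be loaded with the transformers pipeline API. The sketch below is illustrative: the input sentence and the generation settings (max_length, num_beams) are assumptions, not values from this card.

```python
from transformers import pipeline

# load the fine-tuned checkpoint as a text-to-text pipeline
corrector = pipeline(
    "text2text-generation",
    model="pszemraj/t5-v1_1-base-ft-jflAUG",
)

# hypothetical noisy input, e.g. a raw audio transcription
text = "i wnt to the stor yesterday an buyed some fruits"
result = corrector(text, max_length=128, num_beams=4)
print(result[0]["generated_text"])
```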

Model description

  • this is a WIP. This fine-tuned model is v1.
  • long term: a generalized grammar and spelling correction model that can handle many error types at the same time.
  • currently, it seems to be more of a "gibberish to mostly correct English" translator.

Intended uses & limitations

  • try some tests with the examples here
  • limitations observed so far: sentence fragments are not corrected (at least when entered individually), and more complicated pronoun agreement (they/he/her, etc.) is not always fixed; see the probe sketch after this list.
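
To probe these limitations, a few test sentences can be run through the same pipeline. The probe sentences below are hypothetical instances of the failure categories above, not examples from the card.

```python
from transformers import pipeline

corrector = pipeline("text2text-generation", model="pszemraj/t5-v1_1-base-ft-jflAUG")

# hypothetical probes for the limitations noted above
probes = [
    "Because of the rain.",                        # sentence fragment
    "Each student should bring they own laptop.",  # pronoun agreement
]
for p in probes:
    print(corrector(p, max_length=64)[0]["generated_text"])
```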

Training and evaluation data

  • trained as text-to-text
  • JFLEG dataset + additional selected and/or generated grammar corrections (a loading sketch for the base dataset follows below)
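
The base JFLEG data can be inspected with the datasets library. This sketch assumes the public jfleg dataset id on the Hugging Face Hub and does not reproduce the additional selected/generated corrections used for this model.

```python
from datasets import load_dataset

# JFLEG pairs each source sentence with several human-written corrections
jfleg = load_dataset("jfleg", split="validation")

example = jfleg[0]
print(example["sentence"])     # original, possibly ungrammatical text
print(example["corrections"])  # list of reference corrections
```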

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 6e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.05
  • num_epochs: 5
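
For reference, the settings above map onto transformers Seq2SeqTrainingArguments roughly as sketched below. This is a reconstruction, not the original training script; output_dir and anything not listed above are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./t5-v1_1-base-ft-jflAUG",  # hypothetical path
    learning_rate=6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # yields the reported total batch size of 64
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=5,
)
```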

Framework versions

  • Transformers 4.17.0
  • Pytorch 1.10.0+cu111
  • Datasets 2.0.0
  • Tokenizers 0.11.6