---
library_name: transformers
language: en
license: mit
---
# BART-base-ocr
This model is released as part of the paper [Leveraging LLMs for Post-OCR Correction of Historical Newspapers](https://aclanthology.org/2024.lt4hala-1.14/) and is designed to correct OCR errors in historical text. [BART-base](https://huggingface.co/facebook/bart-base) is fine-tuned for post-OCR correction of historical English using [BLN600](https://aclanthology.org/2024.lrec-main.219/), a parallel corpus of 19th-century newspaper articles with machine and human transcriptions.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

model = AutoModelForSeq2SeqLM.from_pretrained('pykale/bart-base-ocr')
tokenizer = AutoTokenizer.from_pretrained('pykale/bart-base-ocr')

# The pipeline handles device placement; use device=-1 (or omit it) to run on CPU.
generator = pipeline('text2text-generation', model=model, tokenizer=tokenizer, device='cuda', max_length=1024)

# OCR output containing recognition errors (e.g. "wits'fined" for "was fined").
ocr = "The defendant wits'fined £5 and costs."
pred = generator(ocr)[0]['generated_text']
print(pred)
```
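For the example above, the intended correction is along the lines of `The defendant was fined £5 and costs.` Note that full newspaper articles can exceed the pipeline's `max_length` of 1,024 tokens, so longer passages need to be corrected in pieces. Below is a minimal sketch of one way to do that; the `correct_long_text` helper and its character-based chunk size are illustrative assumptions, not part of the released model or the paper:

```python
# Illustrative sketch (not from the paper): correct a long OCR passage by
# splitting it into roughly fixed-size chunks and rejoining the corrections.
def correct_long_text(generator, text, max_chars=400):
    # Naive whitespace-based chunking; splitting on sentence boundaries
    # would better preserve context for the model.
    words, chunks, current, length = text.split(), [], [], 0
    for w in words:
        if current and length + len(w) + 1 > max_chars:
            chunks.append(' '.join(current))
            current, length = [], 0
        current.append(w)
        length += len(w) + 1
    if current:
        chunks.append(' '.join(current))
    # Correct each chunk independently, as in the single-sentence example above.
    corrected = [generator(chunk)[0]['generated_text'] for chunk in chunks]
    return ' '.join(corrected)
```

The chunk size is a trade-off: larger chunks give the model more surrounding context, but risk truncation at the 1,024-token limit.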
## Citation
```bibtex
@inproceedings{thomas-etal-2024-leveraging,
    title = "Leveraging {LLM}s for Post-{OCR} Correction of Historical Newspapers",
    author = "Thomas, Alan and Gaizauskas, Robert and Lu, Haiping",
    editor = "Sprugnoli, Rachele and Passarotti, Marco",
    booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://aclanthology.org/2024.lt4hala-1.14",
    pages = "116--121",
}
```