Andy Janco committed on
Commit 62b5fa1
1 Parent(s): 91ab1e2

Update README.md

Files changed (1)
  1. README.md +32 -24
README.md CHANGED
@@ -35,9 +35,9 @@ task_ids: []
  # Dataset Card for Pages of Early Soviet Performance (PESP)
- This dataset was created as part of the [Early Soviet Performance](https://cdh.princeton.edu/projects/pages-early-soviet-performance/) project at Princeton and is an effort to generate useful research data from a previously scanned [collection of illustrated periodicals](https://dpul.princeton.edu/slavic/catalog?f%5Breadonly_collections_ssim%5D%5B%5D=Russian+Illustrated+Periodicals) held by Princeton's Firestone Library. Our work focused on document segmentation and the prediction of images, text, and mixed text in the document images. The mixedtext category refers to segments where the typeface and text layout are mixed with other visual elements. At a practical level, this category identifies sections that present problems for OCR. It also highlights the experimental use of text, images, and other elements by the editors and has research value.
 
- For each of the ten journals of interest in Princeton's digital collections (DPUL), we identified the IIIF manifest URI. Using those manifests, we downloaded each of the 24,000 document images. The [`IIIF_URIs.json`](https://huggingface.co/datasets/ajanco/pesp/blob/main/IIIF_URIs.json) file in this repository can be used to fetch the images from the Princeton Library IIIF servers.
  ## Journal manifests
  - [Эрмитаж](https://figgy.princeton.edu/concern/scanned_resources/6b561fbb-ba28-4afb-91d2-d77b8728d7d9/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/6b561fbb-ba28-4afb-91d2-d77b8728d7d9/manifest)
@@ -54,28 +54,36 @@ For each of the ten journals of interest in Princeton's digital collections (DPU
  ## Model
- Using [makesense.ai](https://www.makesense.ai/) and a custom active learning application called ["Mayakovsky"](https://github.com/CDH-ITMO-Periodicals-Project/mayakovsky) we generated training data to add the new labels to a [YOLOv5 model](https://docs.ultralytics.com/tutorials/train-custom-datasets/).
  ## Dataset
- With the trained and fine-tuned model, we generated predictions for each of the images in the collection. The dataset contains an entry for each image with the following fields:
- - filename, the image name (ex. 'Советский театр_1932 No. 4_16') with journal name, year, issue, page.
- - dpul, the URL for the image's journal in Digital Princeton University Library
- - journal, the journal name
- - year, the year of the journal issue
- - issue, the issue for the image
- - URI, the IIIF URI used to fetch the image from Princeton's IIIF server
- - yolo, the raw model prediction (ex '3 0.1655 0.501396 0.311'), in Yolo's normalized xywh format (<object-class> <x> <y> <width> <height>). The labels are 'image'=0, 'mixedtext'=1, 'title'=2, 'textblock'=3.
- - yolo_predictions, a List with a dictionary for each of the model's predictions with fields for:
- - label, the predicted label
- - x, the x-value location of the center point of the prediction
- - y, the y-value location of the center point of the prediction
- - w, the total width of the prediction's bounding box
- - h, the total height of the prediction's bounding box
- - abbyy_text, the text extracted from the predicted document segment using ABBY FineReader. Note that due to costs, only about 800 images have this data
- - tesseract_text, the text extracted from the predicted document segment using Tesseract.
- - vision_text, the text extracted from the predicted document segment using Google Vision.
-
-
-
- 'filename', 'dpul', 'journal', 'year', 'issue', 'uri', 'yolo', 'yolo_predictions', 'text', 'images_meta'
 
  # Dataset Card for Pages of Early Soviet Performance (PESP)
+ This dataset was created as part of the [Early Soviet Performance](https://cdh.princeton.edu/projects/pages-early-soviet-performance/) project at Princeton and is an effort to generate useful research data from a previously scanned [collection of illustrated periodicals](https://dpul.princeton.edu/slavic/catalog?f%5Breadonly_collections_ssim%5D%5B%5D=Russian+Illustrated+Periodicals) held by Princeton's Firestone Library. Our work focused on document segmentation and the prediction of images, text, titles, and mixed text in the document images. The mixedtext category refers to segments where the typeface and text layout are mixed with other visual elements. This category identifies sections that present problems for OCR and also highlights the experimental use of text, images, and other elements in the documents.
 
+ For each of the ten journals of interest in Princeton's digital collections (DPUL), we identified the IIIF manifest URI. Using those manifests, we downloaded each of the 24,000 document images.
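As a rough sketch of how the images can be retrieved, the snippet below walks one of the journal manifests listed in the next section and downloads its first page image. It assumes the IIIF Presentation 2.x manifest layout (sequences → canvases → images) served by Figgy; the output filename is only an example.

```python
import requests

# IIIF manifest for Эрмитаж (taken from the journal manifests list below).
MANIFEST_URI = (
    "https://figgy.princeton.edu/concern/scanned_resources/"
    "6b561fbb-ba28-4afb-91d2-d77b8728d7d9/manifest"
)

manifest = requests.get(MANIFEST_URI).json()

# Collect the full-size image URL for every canvas (page),
# assuming the IIIF Presentation 2.x structure: sequences -> canvases -> images.
image_urls = [
    canvas["images"][0]["resource"]["@id"]
    for sequence in manifest["sequences"]
    for canvas in sequence["canvases"]
]

# Download the first page as an example.
with open("page_0.jpg", "wb") as out:
    out.write(requests.get(image_urls[0]).content)
```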
  ## Journal manifests
  - [Эрмитаж](https://figgy.princeton.edu/concern/scanned_resources/6b561fbb-ba28-4afb-91d2-d77b8728d7d9/manifest?manifest=https://figgy.princeton.edu/concern/scanned_resources/6b561fbb-ba28-4afb-91d2-d77b8728d7d9/manifest)
 
  ## Model
+ Using [makesense.ai](https://www.makesense.ai/) and a custom active learning application called ["Mayakovsky"](https://github.com/CDH-ITMO-Periodicals-Project/mayakovsky), we generated training data for a [YOLOv5 model](https://docs.ultralytics.com/tutorials/train-custom-datasets/). The model was fine-tuned on the new labels, and predictions were generated for all images in the collection.
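As an illustration of how predictions in this format can be generated from a fine-tuned checkpoint, the sketch below uses the Ultralytics YOLOv5 hub interface; the weights path is a placeholder, not a file in this repository.

```python
import torch

# Load a fine-tuned YOLOv5 checkpoint through the Ultralytics hub interface.
# 'pesp_best.pt' is a placeholder path for the fine-tuned weights.
model = torch.hub.load("ultralytics/yolov5", "custom", path="pesp_best.pt")

# Run inference on a downloaded page image and inspect the predictions
# in normalized xywh form (the same format stored in the dataset).
results = model("page_0.jpg")
print(results.pandas().xywhn[0])
```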
+
+ ## OCR
+
+ Using the model's predictions for image, title, text, and mixedtext segments, we cropped each segment to its predicted bounding box and ran OCR on it with Tesseract, Google Vision, and ABBYY FineReader Engine. Because the output of these OCR engines can be difficult to compare, the document segments provide a common denominator for comparing OCR outputs. Having three variations of the extracted text can also be useful for experiments with OCR post-correction.
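A minimal sketch of the crop-then-OCR step for a single predicted segment, using Pillow and pytesseract for the Tesseract pass (the Google Vision and ABBYY FineReader calls need their own credentials and are omitted). The prediction values are invented for the example and use the same fields as the yolo_predictions entries described under Dataset below.

```python
from PIL import Image
import pytesseract

def crop_segment(page_path, pred):
    """Crop one predicted segment (normalized center x/y, width/height) from a page image."""
    page = Image.open(page_path)
    W, H = page.size
    left = (pred["x"] - pred["w"] / 2) * W
    top = (pred["y"] - pred["h"] / 2) * H
    right = (pred["x"] + pred["w"] / 2) * W
    bottom = (pred["y"] + pred["h"] / 2) * H
    return page.crop((int(left), int(top), int(right), int(bottom)))

# Hypothetical prediction for a text block on a downloaded page image.
pred = {"label": "textblock", "x": 0.5, "y": 0.25, "w": 0.8, "h": 0.3}
segment = crop_segment("page_0.jpg", pred)
text = pytesseract.image_to_string(segment, lang="rus")  # Tesseract OCR, Russian
print(text)
```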
 
  ## Dataset
+ The dataset contains an entry for each image with the following fields:
+ - filename: the image name (ex. 'Советский театр_1932 No. 4_16') with journal name, year, issue, and page.
+ - dpul: the URL for the image's journal in the Digital Princeton University Library
+ - journal: the journal name
+ - year: the year of the journal issue
+ - issue: the issue for the image
+ - URI: the IIIF URI used to fetch the image from Princeton's IIIF server
+ - yolo: the raw model prediction (ex '3 0.1655 0.501396 0.311'), in YOLO's normalized xywh format (<object-class> <x> <y> <width> <height>). The labels are 'image'=0, 'mixedtext'=1, 'title'=2, 'textblock'=3.
+ - yolo_predictions: a list with a dictionary for each of the model's predictions (see the parsing sketch after this list), with fields for:
+   - label: the predicted label
+   - x: the x-value of the center point of the prediction
+   - y: the y-value of the center point of the prediction
+   - w: the total width of the prediction's bounding box
+   - h: the total height of the prediction's bounding box
+ - abbyy_text: the text extracted from the predicted document segment using ABBYY FineReader. Note that due to costs, only about 800 images have this data.
+ - tesseract_text: the text extracted from the predicted document segment using Tesseract.
+ - vision_text: the text extracted from the predicted document segment using Google Vision.
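The yolo string and the yolo_predictions list carry the same information; the sketch below shows how raw prediction lines can be parsed into the dictionary form, using the label mapping given above. The example values are invented; a full prediction line has five fields.

```python
LABELS = {0: "image", 1: "mixedtext", 2: "title", 3: "textblock"}

def parse_yolo(raw: str):
    """Convert raw YOLO lines ('<object-class> <x> <y> <width> <height>') into yolo_predictions-style dicts."""
    predictions = []
    for line in raw.strip().splitlines():
        cls, x, y, w, h = line.split()
        predictions.append({
            "label": LABELS[int(cls)],
            "x": float(x),
            "y": float(y),
            "w": float(w),
            "h": float(h),
        })
    return predictions

# Invented example values for illustration.
print(parse_yolo("3 0.1655 0.5014 0.311 0.120"))
# -> [{'label': 'textblock', 'x': 0.1655, 'y': 0.5014, 'w': 0.311, 'h': 0.12}]
```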
+
+ ## Usage
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset('ajanco/pesp')
+ ```
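Each record then exposes the fields described above; for example, assuming the default split is named 'train':

```python
record = dataset["train"][0]  # assumes a single 'train' split

print(record["filename"], record["journal"], record["year"], record["issue"])
for pred in record["yolo_predictions"]:
    print(pred["label"], pred["x"], pred["y"], pred["w"], pred["h"])
```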