---
license: other
license_name: idl-train
license_link: LICENSE
task_categories:
- image-to-text
size_categories:
- 10M<n<100M
---
# Dataset Card for Industry Documents Library (IDL)

## Dataset Description

- **Point of Contact (curators):** [Kate Tasker, UCSF](mailto:kate.tasker@ucsf.edu)
- **Point of Contact (Hugging Face):** [Pablo Montalvo](mailto:pablo@huggingface.co)

### Dataset Summary

The Industry Documents Library (IDL) is a document dataset filtered from the [UCSF documents library](https://www.industrydocuments.ucsf.edu/), with 19 million pages kept as valid samples.
Each document is stored as a collection of files: a pdf, a tiff image rendering the same contents, a json file containing extensive Textract OCR annotations from the [idl_data](https://github.com/furkanbiten/idl_data) project, and a .ocr file with the original, older OCR annotation.
<center>
    <img src="https://huggingface.co/datasets/pixparse/IDL-wds/resolve/main/doc_images/idl_page_example.png" alt="An addendum from an internal legal document" width="600" height="300">
    <p><em>An example page of one pdf document from the Industry Documents Library. </em></p>
</center>


### Usage

For faster downloads, you can use the `huggingface_hub` library directly. Make sure `hf_transfer` is installed before downloading, and check that you have enough disk space locally.

```python
import os

os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import HfApi

hf = HfApi()
hf.snapshot_download("pixparse/IDL-wds", repo_type="dataset", local_dir_use_symlinks=False)
```



The example below uses `chug` to build a training loader over the webdataset shards. (Note that pdf rendering backends differ in licensing: `fitz`/PyMuPDF is AGPL-licensed, whereas `pypdf` is available under a more permissive license.)

```python
from chug import create_wds_loader, create_doc_anno_pipe

# image_preprocess and anno_preprocess are placeholders for your own
# transforms (e.g. from pixparse.data): callables turning images into
# tensors and annotations into tensors of labels and targets.
decoder = create_doc_anno_pipe(
    image_preprocess=image_preprocess,  # callable of transforms to image tensors
    anno_preprocess=anno_preprocess,  # callable of transforms to label/target tensors
    image_key="pdf",
    image_fmt="RGB",
)

loader = create_wds_loader(
    "/my_data/idl-train-*.tar",
    decoder,
    is_train=True,
    resampled=False,
    start_interval=0,
    num_samples=2159432,
    workers=8,
    batch_size=32,  # adjust to your architecture's capacity
    seed=seed,  # set a seed
    world_size=world_size,  # get world_size from your training environment
)
```

Additionally, a metadata file `_pdfa-english-train-info-minimal.json` lists the samples per shard (same basename, with a `.json` or `.pdf` extension), as well as the count of files per shard.


#### Words and lines document metadata

Initially, we obtained the raw data from the IDL API and combined it with the `idl_data` annotations. This information is then reshaped into lines organized in reading order, under the `lines` key. We keep the non-reshaped word and bounding box information under the `word` key, should users want to apply their own heuristic.
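As a simplified sketch of that reshaping (not the exact pipeline code; the fixed tolerance and grouping rule here are illustrative assumptions), words can be grouped into lines by the top coordinate of their bounding boxes and then ordered left to right:

```python
def group_words_into_lines(words, bboxes, y_tolerance=0.01):
    """Group words into lines by the top coordinate of their bounding boxes.

    words: list of strings; bboxes: list of [left, top, width, height] in
    page-relative coordinates. Returns a list of line strings.
    Simplified illustration, not the exact reshaping used to build IDL.
    """
    entries = sorted(zip(words, bboxes), key=lambda e: (e[1][1], e[1][0]))
    lines, current, current_top = [], [], None
    for word, (left, top, width, height) in entries:
        if current_top is None or abs(top - current_top) <= y_tolerance:
            # Same line: remember the word with its x-position for later sorting.
            current.append((left, word))
            current_top = top if current_top is None else current_top
        else:
            # New line: flush the previous one in left-to-right order.
            lines.append(" ".join(w for _, w in sorted(current)))
            current, current_top = [(left, word)], top
    if current:
        lines.append(" ".join(w for _, w in sorted(current)))
    return lines


words = ["Addendum", "This", "Agreement"]
bboxes = [[0.46, 0.118, 0.08, 0.011],
          [0.12, 0.133, 0.03, 0.014],
          [0.16, 0.133, 0.07, 0.014]]
print(group_words_into_lines(words, bboxes))
# ['Addendum', 'This Agreement']
```

Real documents need a more robust rule (e.g. vertical-overlap ratios rather than a fixed tolerance), which is one reason the raw `word` entries are kept in the dataset.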

We obtain an approximate reading order by looking at the frequency peaks of the leftmost word x-coordinate: a frequency peak means that many lines start at the same horizontal position, so we keep track of the x-coordinate of each column identified this way. If no peaks are found, the document is assumed to be readable in plain, single-column order.
The code used to detect columns is shown below.
```python
import numpy as np
import scipy.ndimage
import scipy.signal


def get_columnar_separators(page, min_prominence=0.3, num_bins=10, kernel_width=1):
    """
    Identifies the x-coordinates that best separate columns by analyzing the derivative of a histogram
    of the 'left' values (xmin) of bounding boxes.

    Args:
        page (dict): Page data with 'bbox' containing bounding boxes of words.
        min_prominence (float): The required prominence of peaks in the histogram.
        num_bins (int): Number of bins to use for the histogram.
        kernel_width (int): The width of the Gaussian kernel used for smoothing the histogram.

    Returns:
        separators (list): The x-coordinates that separate the columns, if any.
    """
    try:
        left_values = [b[0] for b in page['bbox']]
        hist, bin_edges = np.histogram(left_values, bins=num_bins)
        hist = scipy.ndimage.gaussian_filter1d(hist, kernel_width)
        # Pad both ends with the minimum so peaks at the borders are still detected.
        min_val = min(hist)
        hist = np.insert(hist, [0, len(hist)], min_val)
        bin_width = bin_edges[1] - bin_edges[0]
        bin_edges = np.insert(bin_edges, [0, len(bin_edges)], [bin_edges[0] - bin_width, bin_edges[-1] + bin_width])

        peaks, _ = scipy.signal.find_peaks(hist, prominence=min_prominence * np.max(hist))
        derivatives = np.diff(hist)

        separators = []
        if len(peaks) > 1:
            # Find the index of the maximum derivative between consecutive peaks:
            # the steepest rise after a trough marks the start of the next column.
            for i in range(len(peaks) - 1):
                peak_left = peaks[i]
                peak_right = peaks[i + 1]
                max_deriv_index = np.argmax(derivatives[peak_left:peak_right]) + peak_left
                separator_x = bin_edges[max_deriv_index + 1]
                separators.append(separator_x)
    except Exception:
        separators = []
    return separators
```

This way, columnar documents can be separated more cleanly. It is a basic heuristic, but it should improve the overall readability of the documents.

<div style="text-align: center;">
    <img src="https://huggingface.co/datasets/pixparse/IDL-wds/resolve/main/doc_images/arrows_plot_straight.png" alt="Numbered bounding boxes on a document" style="width: 300px; height: 150px; display: inline-block;">
    <img src="https://huggingface.co/datasets/pixparse/IDL-wds/resolve/main/doc_images/bounding_boxes_straight.png" alt="A simple representation of reading order" style="width: 300px; height: 150px; display: inline-block;">
</div>
<p style="text-align: center;"><em>Standard reading order for a single-column document.</em></p>
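To make the peak idea concrete without the scipy machinery, here is a small dependency-free sketch (a hypothetical helper, not the dataset's code) that places a separator between the two most populated bins of the leftmost x-coordinates:

```python
def simple_column_separator(left_values, num_bins=10):
    """Return an approximate x-coordinate separating two columns, or None.

    Dependency-free illustration of the histogram-peak idea: bin the leftmost
    x-coordinates, find the two most populated bins, and place the separator
    midway between their centers. Not the heuristic used to build the dataset.
    """
    lo, hi = min(left_values), max(left_values)
    width = (hi - lo) / num_bins or 1.0
    counts = [0] * num_bins
    for x in left_values:
        counts[min(int((x - lo) / width), num_bins - 1)] += 1
    # The two most populated bins, in index order.
    a, b = sorted(sorted(range(num_bins), key=counts.__getitem__)[-2:])
    if counts[a] == 0 or counts[b] == 0 or b - a < 2:
        return None  # no two well-separated clusters found
    centers = [lo + (a + 0.5) * width, lo + (b + 0.5) * width]
    return sum(centers) / 2


# Lines starting near x=0.1 (left column) and x=0.55 (right column):
lefts = [0.10, 0.11, 0.10, 0.12, 0.55, 0.56, 0.55, 0.57]
print(simple_column_separator(lefts))  # a value between the two columns
```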



For each pdf document, we store statistics on the number of pages and the number of valid samples per shard. A valid sample is one that can be encoded and then decoded, a check we ran on every sample.

This instance of IDL is in [webdataset](https://github.com/webdataset/webdataset) .tar format. It can be used with the `webdataset` library or current releases of Hugging Face `datasets`.
Here is an example using the `streaming` parameter. We recommend downloading the dataset rather than streaming it, to save bandwidth.

```python
from datasets import load_dataset

dataset = load_dataset('pixparse/IDL-wds', streaming=True)
print(next(iter(dataset['train'])).keys())
# dict_keys(['__key__', '__url__', 'json', 'ocr', 'pdf', 'tif'])
```

### Data, metadata and statistics

The metadata for each document is formatted as follows: each `pdf` is paired with a `json` file with the structure below. Entries have been shortened for readability.

```json
{
  "pages": [
    {
      "text": [
        "COVIDIEN",
        "Mallinckrodt",
        "Addendum",
        "This Addendum to the Consulting Agreement (the \"Agreement\") of July 28, 2010 (\"Effective Date\") by",
        "and between David Brushwod, R.Ph., J.D., with an address at P.O. Box 100496, Gainesville, FL 32610-"
      ],
      "bbox": [
        [0.185964, 0.058857, 0.092199, 0.011457],
        [0.186465, 0.079529, 0.087209, 0.009247],
        [0.459241, 0.117854, 0.080015, 0.011332],
        [0.117109, 0.13346, 0.751004, 0.014365],
        [0.117527, 0.150306, 0.750509, 0.012954]
      ],
      "poly": [
        [
          {"X": 0.185964, "Y": 0.058857}, {"X": 0.278163, "Y": 0.058857}, {"X": 0.278163, "Y": 0.070315}, {"X": 0.185964, "Y": 0.070315}
        ],
        [
          {"X": 0.186465, "Y": 0.079529}, {"X": 0.273673, "Y": 0.079529}, {"X": 0.273673, "Y": 0.088777}, {"X": 0.186465, "Y": 0.088777}
        ],
        [
          {"X": 0.459241, "Y": 0.117854}, {"X": 0.539256, "Y": 0.117854}, {"X": 0.539256, "Y": 0.129186}, {"X": 0.459241, "Y": 0.129186}
        ],
        [
          {"X": 0.117109, "Y": 0.13346}, {"X": 0.868113, "Y": 0.13346}, {"X": 0.868113, "Y": 0.147825}, {"X": 0.117109, "Y": 0.147825}
        ],
        [
          {"X": 0.117527, "Y": 0.150306}, {"X": 0.868036, "Y": 0.150306}, {"X": 0.868036, "Y": 0.163261}, {"X": 0.117527, "Y": 0.163261}
        ]
      ],
      "score": [
        0.9939, 0.5704, 0.9961, 0.9898, 0.9935
      ]
    }
  ]
}

```


The top-level key, `pages`, is a list of every page in the document (the example above shows only one). `text` is a list of lines on the page; the entry at the same index in `bbox` gives each line's bounding box in `left, top, width, height` format, with coordinates relative to the page size. `poly` is the corresponding four-corner polygon.

`score` is the confidence score for each line obtained with Textract.
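For axis-aligned boxes, the `poly` entries can be recovered from `bbox`. A quick sketch of the conversion, and of scaling to absolute pixel coordinates for a chosen page size (hypothetical helpers, not part of the dataset tooling):

```python
def bbox_to_poly(bbox):
    """Convert a [left, top, width, height] box (page-relative coordinates)
    to the four-corner polygon format used in the 'poly' field."""
    left, top, width, height = bbox
    return [
        {"X": left, "Y": top},                    # top-left
        {"X": left + width, "Y": top},            # top-right
        {"X": left + width, "Y": top + height},   # bottom-right
        {"X": left, "Y": top + height},           # bottom-left
    ]


def bbox_to_pixels(bbox, page_width, page_height):
    """Scale a relative bbox to absolute pixel coordinates (x0, y0, x1, y1)."""
    left, top, width, height = bbox
    return (left * page_width, top * page_height,
            (left + width) * page_width, (top + height) * page_height)


# The "COVIDIEN" line from the metadata example above:
poly = bbox_to_poly([0.185964, 0.058857, 0.092199, 0.011457])
print(poly[1])  # matches the second corner of the first 'poly' entry
```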
### Data Splits

#### Train
* `idl-train-*.tar`
* Downloaded on 2023/12/16
* 3000 shards, 3144726 samples, 19174595 pages

## Additional Information

### Dataset Curators

Pablo Montalvo, Ross Wightman

### Licensing Information

While the Industry Documents Library is a public archive of documents and audiovisual materials, companies or individuals hold the rights to the information they created, meaning material cannot be “substantially” reproduced in books or other media without the copyright holder’s permission.

The use of copyrighted material, including reproduction, is governed by United States copyright law (Title 17, United States Code). The law may permit the “fair use” of a copyrighted work, including the making of a photocopy, “for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship or research.” 17 U.S.C. § 107.

The Industry Documents Library makes its collections available under court-approved agreements with the rightsholders or under the fair use doctrine, depending on the collection.

According to the US Copyright Office, when determining whether a particular use comes under "fair use" you must consider the following:

- the purpose and character of the use, including whether it is of a commercial nature or for nonprofit educational purposes;
- the nature of the copyrighted work itself;
- how much of the work you are using in relation to the copyrighted work as a whole (1 page of a 1000-page work, or 1 print advertisement vs. an entire 30-second advertisement);
- the effect of the use upon the potential market for or value of the copyrighted work. (For additional information see the US Copyright Office Fair Use Index.)

Each user of this website is responsible for ensuring compliance with applicable copyright laws. Persons obtaining, or later using, a copy of copyrighted material in excess of “fair use” may become liable for copyright infringement. By accessing this website, the user agrees to hold harmless the University of California, its affiliates and their directors, officers, employees and agents from all claims and expenses, including attorneys’ fees, arising out of the use of this website by the user.

For more in-depth information on copyright and fair use, visit the [Stanford University Libraries’ Copyright and Fair Use website.](https://fairuse.stanford.edu/) 

If you hold copyright to a document or documents in our collections and have concerns about our inclusion of this material, please see the IDL Take-Down Policy or contact us with any questions.

Within the dataset, the Industry Documents Library API reports the following permission counts per file, showing that all documents are now public (none are currently "confidential" or "privileged", only formerly):

```json
{
  "public/no restrictions": 3005133,
  "public/formerly confidential": 264978,
  "public/formerly privileged": 30063,
  "public/formerly privileged/formerly confidential": 669,
  "public/formerly confidential/formerly privileged": 397
}
```
### Citation Information