---
language:
  - en
license: cc-by-nc-4.0
size_categories:
  - 10K<n<100K
task_categories:
  - image-to-text
  - visual-question-answering
  - image-classification
pretty_name: PatFig
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
dataset_info:
  features:
    - name: image
      dtype: image
    - name: image_name
      dtype: string
    - name: pub_number
      dtype: string
    - name: title
      dtype: string
    - name: figs_norm
      sequence: string
    - name: short_description
      sequence: string
    - name: long_description
      sequence: string
    - name: short_description_token_count
      dtype: int64
    - name: long_description_token_count
      dtype: int64
    - name: draft_class
      dtype: string
    - name: cpc_class
      dtype: string
    - name: relevant_terms
      list:
        - name: element_identifier
          dtype: string
        - name: terms
          sequence: string
    - name: associated_claims
      dtype: string
    - name: compound
      dtype: bool
    - name: references
      sequence: string
  splits:
    - name: train
      num_bytes: 1998632864.066
      num_examples: 17386
    - name: test
      num_bytes: 118291788
      num_examples: 998
  download_size: 1735361199
  dataset_size: 2116924652.066
---

# PatFig Dataset

*(PatFig Dataset logo)*

## Table of Contents

- [Introduction](#introduction)
- [Dataset Description](#dataset-description)
- [Usage](#usage)
- [Challenges and Considerations](#challenges-and-considerations)
- [License and Usage Guidelines](#license-and-usage-guidelines)
- [Cite as](#cite-as)

## Introduction

The PatFig Dataset is a curated collection of over 18,000 patent images from more than 7,000 European patent applications, spanning the year 2020. It aims to provide a comprehensive resource for research and applications in image captioning, abstract reasoning, patent analysis, and automated document processing. The overarching goal of this dataset is to advance research in visually situated language understanding towards a more holistic consumption of visual and textual data.

## Dataset Description

### Overview

This dataset includes patent figures accompanied by short and long captions, reference numerals, corresponding terms, and a minimal set of claims, offering a detailed insight into the depicted inventions.

### Structure

- **Image Files**: Technical drawings, block diagrams, flowcharts, plots, and grayscale photographs.
- **Captions**: Each figure is accompanied by a short and a long caption describing its content and context.
- **Reference Numerals and Terms**: Key components in the figures are linked to their descriptions through reference numerals.
- **Minimal Set of Claims**: Claim sentences summarizing the interactions among the elements within each figure.
- **Metadata**: Image names, publication numbers, titles, figure identifiers, and more. Detailed descriptions of the fields are available in the Dataset Documentation. A minimal loading sketch follows this list.
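
As a minimal sketch of how these fields surface when the dataset is loaded with the Hugging Face `datasets` library (the repository id `lcolonn/patfig` is assumed from this card's namespace; adjust it if the dataset is hosted elsewhere):

```python
# Minimal loading sketch; field names follow the dataset_info metadata above.
from datasets import load_dataset

ds = load_dataset("lcolonn/patfig")  # assumed repo id; DatasetDict with "train" and "test"

example = ds["train"][0]
print(example["image_name"], example["pub_number"], example["title"])
print(example["short_description"])   # sequence of short caption sentences
print(example["relevant_terms"])      # reference numerals mapped to their terms
print(example["image"].size)          # decoded PIL image of the patent figure
```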

### Categories

The dataset is categorized according to the International Patent Classification (IPC) system, ensuring a diverse representation of technological domains.

## Usage

The PatFig Dataset is intended for use in patent image analysis, document image processing, visual question answering tasks, and image captioning in technical contexts. Users are encouraged to explore innovative applications in related fields.

- PatFig Image Captioning Version
- PatFig VQA Version
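
For an image-captioning setup, each figure can be paired with its caption text. The sketch below reuses the assumed `lcolonn/patfig` repository id and the field names from the metadata above; joining the caption sentences into a single string is an illustrative choice, not something prescribed by the dataset.

```python
# Sketch: building (image, caption) pairs from the short captions.
from datasets import load_dataset

train = load_dataset("lcolonn/patfig", split="train")  # assumed repo id

def add_caption(example):
    # short_description is stored as a sequence of sentences; join them into one string.
    example["caption"] = " ".join(example["short_description"])
    return example

train = train.map(add_caption)
print(train[0]["caption"])
```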

## Challenges and Considerations

Users should be aware of challenges such as interpreting compound figures. PatFig was built automatically with machine-learning and deep-learning methods, so the data may contain noise, as discussed in the corresponding paper.
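
Under the same repository-id assumption as above, compound figures can be set aside using the boolean `compound` field declared in the metadata, for example when multi-panel images should be excluded:

```python
# Sketch: filtering out compound (multi-panel) figures via the `compound` flag.
from datasets import load_dataset

train = load_dataset("lcolonn/patfig", split="train")  # assumed repo id
single_figures = train.filter(lambda ex: not ex["compound"])
print(f"{len(single_figures)} of {len(train)} training figures are non-compound")
```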

## License and Usage Guidelines

The dataset is released under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. It is intended for non-commercial use, and users must adhere to the license terms.

## Cite as

```bibtex
@inproceedings{aubakirova2023patfig,
  title={PatFig: Generating Short and Long Captions for Patent Figures},
  author={Aubakirova, Dana and Gerdes, Kim and Liu, Lufei},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={2843--2849},
  year={2023}
}
```