---
dataset_info:
  features:
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': B-ENERGY_KJ_100G
          '2': I-ENERGY_KJ_100G
          '3': B-VITAMIN_D_SERVING
          '4': I-VITAMIN_D_SERVING
          '5': B-SODIUM_SERVING
          '6': I-SODIUM_SERVING
          '7': B-PROTEINS_SERVING
          '8': I-PROTEINS_SERVING
          '9': B-ADDED_SUGARS_SERVING
          '10': I-ADDED_SUGARS_SERVING
          '11': B-CALCIUM_SERVING
          '12': I-CALCIUM_SERVING
          '13': B-FAT_SERVING
          '14': I-FAT_SERVING
          '15': B-ENERGY_KJ_SERVING
          '16': I-ENERGY_KJ_SERVING
          '17': B-SUGARS_100G
          '18': I-SUGARS_100G
          '19': B-SATURATED_FAT_SERVING
          '20': I-SATURATED_FAT_SERVING
          '21': B-SERVING_SIZE
          '22': I-SERVING_SIZE
          '23': B-SALT_SERVING
          '24': I-SALT_SERVING
          '25': B-ENERGY_KCAL_SERVING
          '26': I-ENERGY_KCAL_SERVING
          '27': B-FAT_100G
          '28': I-FAT_100G
          '29': B-SUGARS_SERVING
          '30': I-SUGARS_SERVING
          '31': B-FIBER_SERVING
          '32': I-FIBER_SERVING
          '33': B-TRANS_FAT_SERVING
          '34': I-TRANS_FAT_SERVING
          '35': B-POTASSIUM_SERVING
          '36': I-POTASSIUM_SERVING
          '37': B-CARBOHYDRATES_100G
          '38': I-CARBOHYDRATES_100G
          '39': B-POTASSIUM_100G
          '40': I-POTASSIUM_100G
          '41': B-IRON_SERVING
          '42': I-IRON_SERVING
          '43': B-CHOLESTEROL_100G
          '44': I-CHOLESTEROL_100G
          '45': B-TRANS_FAT_100G
          '46': I-TRANS_FAT_100G
          '47': B-ADDED_SUGARS_100G
          '48': I-ADDED_SUGARS_100G
          '49': B-FIBER_100G
          '50': I-FIBER_100G
          '51': B-CALCIUM_100G
          '52': I-CALCIUM_100G
          '53': B-SODIUM_100G
          '54': I-SODIUM_100G
          '55': B-ENERGY_KCAL_100G
          '56': I-ENERGY_KCAL_100G
          '57': B-CHOLESTEROL_SERVING
          '58': I-CHOLESTEROL_SERVING
          '59': B-CARBOHYDRATES_SERVING
          '60': I-CARBOHYDRATES_SERVING
          '61': B-SALT_100G
          '62': I-SALT_100G
          '63': B-VITAMIN_D_100G
          '64': I-VITAMIN_D_100G
          '65': B-SATURATED_FAT_100G
          '66': I-SATURATED_FAT_100G
          '67': B-PROTEINS_100G
          '68': I-PROTEINS_100G
          '69': B-IRON_100G
          '70': I-IRON_100G
  - name: tokens
    sequence: string
  - name: bboxes
    sequence:
      sequence: int64
  - name: image
    dtype: image
  - name: meta
    struct:
    - name: barcode
      dtype: string
    - name: image_id
      dtype: string
    - name: image_url
      dtype: string
    - name: split
      dtype: string
    - name: ocr_url
      dtype: string
    - name: batch
      dtype: string
    - name: label_studio_id
      dtype: int64
    - name: checked
      dtype: bool
    - name: usda_table
      dtype: bool
    - name: nutrition_text
      dtype: bool
    - name: no_nutrition_table
      dtype: bool
    - name: comment
      dtype: string
  splits:
  - name: train
    num_bytes: 607157648.1712618
    num_examples: 2884
  - name: test
    num_bytes: 41894719.82873824
    num_examples: 199
  download_size: 635258020
  dataset_size: 649052368
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
license: cc-by-sa-3.0
task_categories:
- token-classification
tags:
- food
size_categories:
- 1K<n<10K
---
# Nutrient extraction dataset
This dataset contains annotated images of nutrition tables. It was built to train a model that extracts nutrient values from nutrition tables, as part of the Nutrisight project. It contains ~3k samples in total (2,884 for training and 199 for testing). For more information about the project, please refer to the nutrisight directory in the openfoodfacts-ai GitHub repository.
The images were collected from the Open Food Facts database and annotated by a team of professional annotators. The dataset is meant for training and evaluating LayoutLM-like models: OCR is expected to be performed prior to prediction. The target task is token classification: the model should assign a single label to each token in the input image.
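For example, one row of this dataset can be encoded for such a model roughly as follows. This is a minimal sketch assuming the Hugging Face `transformers` library and the `microsoft/layoutlmv3-base` checkpoint; neither is prescribed by the dataset, and `sample` stands for a single row (see the field descriptions below).

```python
from transformers import AutoProcessor, AutoModelForTokenClassification

# apply_ocr=False: tokens and bounding boxes are already provided by the dataset.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = AutoModelForTokenClassification.from_pretrained(
    "microsoft/layoutlmv3-base",
    num_labels=71,  # 35 B-/I- label pairs + the "O" label
)

def encode(sample):
    # The boxes are already in the 0-1000 coordinate range expected by LayoutLM-family models.
    return processor(
        sample["image"].convert("RGB"),
        sample["tokens"],
        boxes=sample["bboxes"],
        word_labels=sample["ner_tags"],
        truncation=True,
        return_tensors="pt",
    )

# outputs = model(**encode(sample))  # loss + token-classification logits
```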
We use the BIO tagging scheme. We only annotate the values (plus their unit) in the nutrition table, not the nutrient names; all other tokens are annotated as "O". Nutrient values can be given per 100 g or per serving, so each nutrient has two label types, one suffixed with "_100G" and the other with "_SERVING".
The values are annotated with the following labels. The 'B-' and 'I-' prefixes are omitted for readability, so the real number of labels is twice the number listed below, plus the "O" label. An illustrative tagging example follows the list.
- ADDED_SUGARS_SERVING
- CALCIUM_100G
- CALCIUM_SERVING
- CARBOHYDRATES_100G
- CARBOHYDRATES_SERVING
- CHOLESTEROL_SERVING
- ENERGY_KCAL_100G
- ENERGY_KCAL_SERVING
- ENERGY_KJ_100G
- ENERGY_KJ_SERVING
- FAT_100G
- FAT_SERVING
- FIBER_100G
- FIBER_SERVING
- IRON_SERVING
- POTASSIUM_SERVING
- PROTEINS_100G
- PROTEINS_SERVING
- SALT_100G
- SALT_SERVING
- SATURATED_FAT_100G
- SATURATED_FAT_SERVING
- SERVING_SIZE
- SODIUM_100G
- SODIUM_SERVING
- SUGARS_100G
- SUGARS_SERVING
- TRANS_FAT_100G
- TRANS_FAT_SERVING
- VITAMIN_D_100G
- VITAMIN_D_SERVING
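As an illustration of the tagging scheme (hypothetical tokens, not taken from the dataset), a row reading "Energy 1046 kJ / 250 kcal" in a per-100 g column would be tagged as:

```python
tokens = ["Energy", "1046", "kJ", "/", "250", "kcal"]
# Only the value + unit spans carry B-/I- labels; the nutrient name and separators are "O".
ner_tags = ["O", "B-ENERGY_KJ_100G", "I-ENERGY_KJ_100G", "O",
            "B-ENERGY_KCAL_100G", "I-ENERGY_KCAL_100G"]
```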
The following fields are available for each sample:

- `ner_tags`: a list of label IDs, one per token in the input image. The label IDs are integers; the mapping from label IDs to label names is stored in the Hugging Face dataset metadata and is automatically available when loading the dataset with the `datasets` library (see the loading sketch after this list).
- `tokens`: a list of tokens in the input image, extracted using the Google Cloud Vision API.
- `bboxes`: a list of bounding boxes, one per token, in the format `[x_min, y_min, x_max, y_max]`, where `(x_min, y_min)` is the top-left corner and `(x_max, y_max)` is the bottom-right corner of the box. The bounding boxes are in the same order as the tokens, and the coordinates are normalized between 1 and 1000 (excluded). They were also extracted from the Google Cloud Vision OCR result.
- `image`: the image.
- `meta`: a dictionary containing the following fields:
  - `barcode`: the barcode of the product.
  - `image_id`: the ID of the image (a digit, specific to the product).
  - `image_url`: the URL of the image.
  - `split`: the split of the image (train or test).
  - `ocr_url`: the URL of the OCR result.
  - `batch`: the annotation batch (annotations were performed in batches of ~100 samples).
  - `label_studio_id`: the ID of the task in Label Studio.
  - `checked`: whether a second annotator checked the annotation.
  - `usda_table`: whether the nutrition table is a USDA-like table, as annotated by the annotators.
  - `nutrition_text`: whether the nutrition information has a text structure (not a table), as annotated by the annotators.
  - `no_nutrition_table`: whether the image contains no nutrition table, as annotated by the annotators.
  - `comment`: a free-text comment from the annotators.
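A minimal loading sketch follows. The Hub repository ID below is a placeholder rather than the dataset's actual path; everything else uses the standard `datasets` API.

```python
from datasets import load_dataset

# NOTE: placeholder repository ID; replace with the actual Hub path of this dataset.
ds = load_dataset("openfoodfacts/nutrient-extraction", split="train")

# The ClassLabel feature inside `ner_tags` carries the id -> name mapping described above.
label_names = ds.features["ner_tags"].feature.names
id2label = dict(enumerate(label_names))

sample = ds[0]
print(sample["tokens"][:5])
print([id2label[i] for i in sample["ner_tags"][:5]])
print(sample["bboxes"][0])  # [x_min, y_min, x_max, y_max], normalized to the 1-1000 range
```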
The dataset (including the images) is licensed under the Creative Commons Attribution-ShareAlike 3.0 license (CC BY-SA 3.0).