
Dataset origin: https://zenodo.org/records/5034605

MultiSubs: A Large-scale Multimodal and Multilingual Dataset

Introduction

MultiSubs is a dataset of multilingual subtitles gathered from the OPUS OpenSubtitles dataset, which in turn was sourced from opensubtitles.org. We have supplemented some text fragments of the subtitles (currently, visually salient nouns) with web images.

Please refer to the paper below for a more detailed description of the dataset.

⚠ The images are available to download as a separate zip file.

Disclaimer

The MultiSubs dataset is provided as is. As the dataset relies on external sources such as movie subtitles and BabelNet, the corpus itself may contain offensive language and images, such as curse words and images depicting nudity or sexual organs, among others. While these are kept to a minimum, they inevitably exist.

Content

monolingual/

Contains the original English dataset as described in paragraph 1 of Section 3 in the paper.

en.sents.tok and en.sents.det

Tokenised and detokenised versions of English sentences, each containing at least one occurrence of a fragment from en.fragments below.

Format: sentence-id \t sentence where sentence-id is imdb-id#sentence-number.

There are 10,241,617 English sentences (paragraph 1 of Section 3 in the paper).
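To make the file format concrete, below is a minimal Python sketch for loading the sentence files into a dictionary. The relative path and UTF-8 encoding are assumptions about your local copy of the dataset.

def load_sentences(path):
    """Load a sentence file into a dict mapping sentence ID to sentence."""
    sentences = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            sent_id, sent = line.rstrip("\n").split("\t", 1)
            sentences[sent_id] = sent  # sent_id looks like "417#1" (imdb-id#sentence-number)
    return sentences

sents_det = load_sentences("monolingual/en.sents.det")
print(sents_det["417#1"])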

en.fragments

List of fragments extracted from English sentences that (1) are single-word nouns; (2) have an imageability score of at least 500; and (3) occur more than once. There are 13,268,536 fragment instances and 4,099 unique tokens. See paragraph 1 of Section 3 in the paper.

Format: sentence-id \t fragment-start \t fragment-end \t fragment-type \t raw-fragment \t tokenised-fragment \t POS-sequence

e.g. 417#1 \t 2 \t 5 \t N \t Trip \t trip \t NN

  • fragment-start and fragment-end are the start/end character indices for the fragment in en.sents.det (see the parsing sketch below).
  • fragment-type is always N (single-word nouns) for this release.
  • POS-sequence is the PoS tag sequence for the fragment (a single tag for single-word nouns).
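Building on the loader above, this sketch iterates over en.fragments and checks each fragment against its sentence. Treating fragment-end as the index of the last character (inclusive) is an inference from the example above, where "Trip" spans characters 2 to 5.

import itertools

def iter_fragments(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            sent_id, start, end, ftype, raw, tok, pos = line.rstrip("\n").split("\t")
            yield sent_id, int(start), int(end), ftype, raw, tok, pos

# Check the first few fragments against the detokenised sentences loaded earlier.
for sent_id, start, end, ftype, raw, tok, pos in itertools.islice(
        iter_fragments("monolingual/en.fragments"), 5):
    assert sents_det[sent_id][start:end + 1] == raw  # inclusive end index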

multilingual/

en-[es|pt_br|fr|de].sents.json

Aligned bilingual corpus of Spanish [es], (Brazilian) Portuguese [pt_br], French [fr] and German [de] sentences, respectively.

The JSON object is a dictionary, where the key is the ID of the aligned sentence, and the value is a dictionary in the following format:

{
    "src": { 
        "det": "A detokenised sentence in the source language (English).",
        "tok": "a tokenised sentence in the source language ( english ) ."
    },
    
    "trg": {
        "det": "The aligned sentence in the target language, detokenised.",
        "tok": "the aligned sentence in the target language , detokenised ."
    }
}
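A minimal sketch for iterating over one of these files, using the English-German pair as an example (the file path is an assumption):

import json

with open("multilingual/en-de.sents.json", encoding="utf-8") as f:
    aligned = json.load(f)

for sent_id, pair in list(aligned.items())[:3]:
    print(sent_id)
    print("  src:", pair["src"]["det"])
    print("  trg:", pair["trg"]["det"])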
en-[es|pt_br|fr|de].fragments.json

The fragments extracted from the sentences above.

The JSON object is a list, where each item is a fragment in the following format:

{  "sentId": "6617#en{7}:de{14}",
   "srcSentId": "6617#7",
   "srcCharStart": 4,
   "srcCharEnd": 6,
   "srcFragment": "man",
   "trgFragment": "mann",
   "trgTokenIndex": 2,
   "trgFragmentList": ["mann", "mensch", "männer"],
   "synsets": "bn:00001533n;bn:00044576n;bn:00053096n;bn:00053097n;bn:00053099n;bn:03478581n"
}
  • srcCharStart and srcCharEnd are the positions of the first and last character of the fragment in the detokenised English sentence respectively (starts from 0).

  • trgTokenIndex is the position of the token in the tokenised sentence in the target language (starts from 0).

  • trgFragmentList is a list of plausible (sense-disambiguated) translations for this fragment in the target language.

  • synsets are the inferred BabelNet synset ID(s) for the fragment. Multiple synsets are separated by semicolons. This semicolon-separated string can be used as the key to query all images for this fragment in images.json below.

images.json

A JSON dictionary.

Key: BabelNet synset IDs separated by semicolons (see above). Value: List of image IDs associated with this set of synset IDs.
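The sketch below joins the two files: it loads the fragments for the English-German pair and uses each fragment's synsets string as the key into images.json (file paths assumed).

import json

with open("multilingual/en-de.fragments.json", encoding="utf-8") as f:
    fragments = json.load(f)
with open("multilingual/images.json", encoding="utf-8") as f:
    images = json.load(f)

for frag in fragments[:5]:
    image_ids = images.get(frag["synsets"], [])
    print(frag["srcFragment"], "->", frag["trgFragment"], f"({len(image_ids)} images)")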

en-[es|pt_br|fr|de].intersect.json

A JSON dictionary. Gives the IDs of sentences in each intersect_N subset, as described in Section 3.1 of the paper.

Key: "intersect1", "intersect2", "intersect3", "intersect4"

Value: List of sentence IDs for each subset. The number of sentences should correspond to those reported in Table 1 of the paper.
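A quick way to check those counts against Table 1, again using the German pair as an example:

import json

with open("multilingual/en-de.intersect.json", encoding="utf-8") as f:
    intersect = json.load(f)

for key in ("intersect1", "intersect2", "intersect3", "intersect4"):
    print(key, len(intersect[key]))  # compare with Table 1 of the paper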

human_eval/

Results of the human evaluation of the "Gap Filling Game" (Section 5 of the paper).

results.json

Format:

{
  challengeId: {
    "subtitleId": id,
    "userId": userId,
    "consensus": "intersectN",
    "word": correctWord,
    "correctAttempt": howManyAttempts (0 if failed after 3 attempts)
    "guess1": {"word": word, "score": score} ,
    "guess2": {"word": word, "score": score},
    "guess3": {"word": word, "score": score}
  }
}
  • "guess2" and "guess3" may be absent depending on "correctAttempt".
results_detailed.json

This version has more details, including the sentences and image(s) shown to the user. Sixteen instances present in results.json are missing from this version, as some of their information has been lost.

Additional fields:

  • "images": ["IMAGE1", "IMAGE2", "IMAGE3", ...]
  • "leftContext": "The left portion of the sentence before the missing word"
  • "rightContext": "The right portion of the sentence after the missing word"

The first image in the "images" list is shown in Attempt 2 (one image).

tasks/fill_in_the_blank/

Dataset for the fill-in-the-blank task (Section 6.1)

sents.json

A JSON list. Each item in the list is a dictionary representing a sentence for the fill-in-the-blank task.

There are 4,383,978 sentences in this file, although only 4,377,772 are used.

Blanks are marked as <_> in the sentences.

Format for each item:

{"sentId": "417#en{2}:pt_br{2}", 
 "word": "hall", 
 "wordLower": "hall",
 "sent": {
    "det": "The astronomers are assembled in a large <_> embellished with instruments.", 
    "tok": "the astronomers are assembled in a large <_> embellished with instruments ."
  }, 
  "synsets": "bn:00004493n;bn:00042664n", 
  "intersect": "intersect=1", 
  "imageId": "4E6B2547DF16BB40DB0036159E1CBF0BA12127752D3C447E7CE8BFB3", 
  "srcSentId": "417#2", 
  "srcCharStart": 41, 
  "srcCharEnd": 44
}
  • wordLower is the lowercased version of the token.
  • intersect gives the specific subset this sentence belongs to. You can retrieve a list of sentences belonging to a subset using intersect.json instead (below).
  • imageId is the image randomly selected for the sentence, while keeping the training, test and validation images disjoint. This is described at the end of Section 6.1.2 in the paper. You may use this if you are training/testing a model that takes a single image as input, to keep your experiments comparable. imageId may be null if not used in the split.
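The sketch below reconstructs a full sentence by filling the blank with the gold word, and cross-checks the character offsets; that the offsets also index the filled sentence is an inference from the example above ("hall" at characters 41 to 44).

import json

with open("tasks/fill_in_the_blank/sents.json", encoding="utf-8") as f:
    sents = json.load(f)

item = sents[0]
filled = item["sent"]["det"].replace("<_>", item["word"], 1)
assert filled[item["srcCharStart"]:item["srcCharEnd"] + 1] == item["word"]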
intersect.json

A JSON dictionary listing the sentences contained in each intersect=N subset.

Keys: "intersect=1", "intersect=2", "intersect=3", "intersect=4"

Value: List of indices, pointing to the sentences in sents.json

splits.json

A JSON dictionary, containing the different train/test splits.

Keys: Main splits:

  • "train" - all training (4277772 instances)
  • "val" - all validation (5000 instances)
  • "test" - all test (5000 instances)

Training subsets (first paragraph of Section 6.1.4 in the paper):

  • "trainIntersect=1" - training subset where intersect=1 (2,499,265 instances)
  • "trainIntersect=2" - training subset where intersect=2 (1,252,886 instances)
  • "trainIntersect=3" - training subset where intersect=3 (462,860 instances)
  • "trainIntersect=4" - training subset where intersect=4 (62,761 instances)

Validation and test subsets (second paragraph of Section 6.1.4 in the paper). The test results reported in the paper are based on these subsets.

  • "valSubset" - subset of the validation set (3,143 instances)
  • "testSubset" - subset of the test set (3,262 instances)

Values: List of indices, pointing to the sentences in sents.json for each split.
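Putting it together, a minimal sketch that materialises the splits by indexing into sents.json:

import json

with open("tasks/fill_in_the_blank/sents.json", encoding="utf-8") as f:
    sents = json.load(f)
with open("tasks/fill_in_the_blank/splits.json", encoding="utf-8") as f:
    splits = json.load(f)

train = [sents[i] for i in splits["train"]]
test_subset = [sents[i] for i in splits["testSubset"]]
print(len(train), len(test_subset))  # expect 4,277,772 and 3,262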

tasks/lexical_translation/

Dataset for the lexical translation task (Section 6.2).

en-[es|pt_br|fr|de].sents.json

Same as the fill-in-the-blank task above, with two additional keys:

  • "target" for the exact word in the target language.
  • "positiveTargets" for a list of acceptable words in the target language.
en-[es|pt_br|fr|de].intersect.json

Same as the fill-in-the-blank task above.

en-[es|pt_br|fr|de].splits.json

Same as the fill-in-the-blank task above.

Number of instances:

es

  • train: 2,356,787
  • val: 5,000
  • test: 5,000
  • valSubset: 3,172
  • testSubset: 3,117

pt_br

  • train: 1,950,455
  • val: 5,000
  • test: 5,000
  • valSubset: 3,084
  • testSubset: 3,167

fr

  • train: 1,143,608
  • val: 5,000
  • test: 5,000
  • valSubset: 2,930
  • testSubset: 2,944

de

  • train: 405,759
  • val: 5,000
  • test: 5,000
  • valSubset: 3,047
  • testSubset: 3,007

Citation

I hope that you do something useful and impactful with this dataset to really move the field forward, and not just publish papers for the sake of it.

Please cite the following paper if you use this dataset in your work:

Josiah Wang, Pranava Madhyastha, Josiel Figueiredo, Chiraag Lala, Lucia Specia (2021). MultiSubs: A Large-scale Multimodal and Multilingual Dataset. CoRR, abs/2103.01910. Available at: https://arxiv.org/abs/2103.01910

@article{DBLP:journals/corr/abs-2103-01910,
  author    = {Josiah Wang and
               Pranava Madhyastha and
               Josiel Figueiredo and
               Chiraag Lala and
               Lucia Specia},
  title     = {MultiSubs: {A} Large-scale Multimodal and Multilingual Dataset},
  journal   = {CoRR},
  volume    = {abs/2103.01910},
  year      = {2021},
  url       = {https://arxiv.org/abs/2103.01910},
  archivePrefix = {arXiv},
  eprint    = {2103.01910},
  timestamp = {Thu, 04 Mar 2021 17:00:40 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2103-01910.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}