ghomasHudson committed on
Commit
42a2105
1 Parent(s): f092b02

Remove confirm from google drive link as fixed in library
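Background: the hard-coded confirm token was a workaround for Google Drive's virus-scan interstitial on large files; recent versions of the datasets download manager resolve that confirmation step themselves, so the parameter can be dropped from the links. As a rough illustration of what the workaround stood in for, here is a hypothetical, library-agnostic download helper using requests. The helper name and token extraction are illustrative only, not the library's actual code.

import re
import requests

def download_from_google_drive(url: str, dest: str) -> None:
    """Hypothetical sketch of the old 'confirm' workaround: large Drive files
    answer with an HTML virus-scan warning page instead of the file, and a
    confirm token had to be echoed back to proceed with the download."""
    session = requests.Session()
    response = session.get(url, stream=True)
    # If Drive answered with the warning page, look for a confirm token
    # (in cookies or in the page body) and retry with it appended.
    token = next(
        (v for k, v in response.cookies.items() if k.startswith("download_warning")),
        None,
    )
    if token is None and "text/html" in response.headers.get("Content-Type", ""):
        match = re.search(r"confirm=([0-9A-Za-z_-]+)", response.text)
        token = match.group(1) if match else None
    if token is not None:
        response = session.get(f"{url}&confirm={token}", stream=True)
    # Stream the payload to disk in 1 MiB chunks.
    with open(dest, "wb") as f:
        for chunk in response.iter_content(chunk_size=1 << 20):
            f.write(chunk)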

Files changed (1): muld.py +4 -4
muld.py CHANGED
@@ -38,7 +38,7 @@ The NarrativeQA Reading Comprehension Challenge Dataset consists of user-submitt
  publisher={MIT Press}
  }""",
  "urls": {
- datasets.Split.TRAIN: "https://drive.google.com/uc?export=download&confirm=yTib&id=1sUXIC6lmk9Khp2mnr9VZwQ-StDlHqTw1",
+ datasets.Split.TRAIN: "https://drive.google.com/uc?export=download&id=1sUXIC6lmk9Khp2mnr9VZwQ-StDlHqTw1",
  datasets.Split.VALIDATION: "https://drive.google.com/uc?&confirm=yTib&export=download&id=1xdXEhLHtcqOZh0FbPhY_dnvNMg2bALtm",
  datasets.Split.TEST: "https://drive.google.com/uc?confirm=yTib&export=download&id=1BPBXyfYWVGtOXVQv_hlqtvbT25rTQzGu",
  }
@@ -49,8 +49,8 @@ The NarrativeQA Reading Comprehension Challenge Dataset consists of user-submitt
  The HotpotQA dataset consists of questions from crowd workers which require information from multiple Wikipedia articles in order to answer, thus testing the ability for models to perform multi-hop question answering. The data is commonly presented as a list of paragraphs containing relevant information plus a setting where the addition of 'distractor paragraphs' fully test the ability of the model to comprehend which information is relevant to the question asked. To transform this into a long document, we expand each paragraph with its full Wikipedia page as well as adding additional distractor articles
  from similar topics (randomly chosen from links on the existing pages) in order to meet the 10,000 token minimum length requirement for this benchmark. These articles are shuffled and concatenated to form the model input.""",
  "urls": {
- datasets.Split.TRAIN: "https://drive.google.com/uc?export=download&confirm=yTib&id=1OlGRyCEL9JhwIQIKViaWIXCOB_pwj8xU",
- datasets.Split.VALIDATION: "https://drive.google.com/uc?export=download&confirm=yTib&id=1_Svtg6PycBpezDYJ78zcJqLa8Ohnk6Gq"
+ datasets.Split.TRAIN: "https://drive.google.com/uc?export=download&id=1OlGRyCEL9JhwIQIKViaWIXCOB_pwj8xU",
+ datasets.Split.VALIDATION: "https://drive.google.com/uc?export=download&id=1_Svtg6PycBpezDYJ78zcJqLa8Ohnk6Gq"
  }
  },
 
@@ -85,7 +85,7 @@ The Open Subtitles corpus (Lison et al., 2018) consists of aligned subtitles
  Style change detection is the task of identifying the points where the author changes in a document constructed from the work of multiple authors. We use stories contributed to the fanfiction website Archive of Our Own, which contains a large number of works submitted by fans of popular films, tv, game, and book characters.
  """,
  "urls": {
- datasets.Split.TRAIN: "https://drive.google.com/uc?export=download&id=1R29IQ_bFLw3_6DYLtP7YWFTGe7FQAevT&confirm=yTib",
+ datasets.Split.TRAIN: "https://drive.google.com/uc?export=download&id=1R29IQ_bFLw3_6DYLtP7YWFTGe7FQAevT",
  datasets.Split.VALIDATION: "https://drive.google.com/uc?export=download&id=1B_RkTaMMOQXfJ7nDFCpq8GAth7yiW7vF",
  datasets.Split.TEST: "https://drive.google.com/uc?export=download&id=1-1eULJlV9nGrAwpdaEr5Ykchwfxn06kj"
  }
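
For orientation, these "urls" dictionaries are the per-split download sources that the loading script hands to the datasets download manager, which now handles Google Drive's confirmation step on its own. Below is a minimal sketch of how such a mapping is typically wired into a datasets.GeneratorBasedBuilder; the builder name, feature schema, and line-by-line parsing are illustrative stand-ins, not the actual muld.py implementation.

import datasets

# Illustrative per-split URL mapping, mirroring the "urls" dicts in the diff above.
_NARRATIVE_QA_URLS = {
    datasets.Split.TRAIN: "https://drive.google.com/uc?export=download&id=1sUXIC6lmk9Khp2mnr9VZwQ-StDlHqTw1",
    datasets.Split.VALIDATION: "https://drive.google.com/uc?&confirm=yTib&export=download&id=1xdXEhLHtcqOZh0FbPhY_dnvNMg2bALtm",
    datasets.Split.TEST: "https://drive.google.com/uc?confirm=yTib&export=download&id=1BPBXyfYWVGtOXVQv_hlqtvbT25rTQzGu",
}


class MuldSketch(datasets.GeneratorBasedBuilder):
    """Hypothetical, stripped-down builder showing how the split URLs are consumed."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"input": datasets.Value("string"), "output": datasets.Value("string")}
            )
        )

    def _split_generators(self, dl_manager):
        # download_and_extract accepts a dict and returns a dict of local paths;
        # the library resolves Google Drive's "confirm" interstitial itself.
        paths = dl_manager.download_and_extract(_NARRATIVE_QA_URLS)
        return [
            datasets.SplitGenerator(name=split, gen_kwargs={"filepath": path})
            for split, path in paths.items()
        ]

    def _generate_examples(self, filepath):
        # Placeholder parser; the real muld.py reads its own file format here.
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"input": line.rstrip("\n"), "output": ""}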