url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | draft | pull_request | is_pull_request
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/189/comments | https://api.github.com/repos/huggingface/datasets/issues/189/events | https://github.com/huggingface/datasets/issues/189 | 624,048,881 | MDU6SXNzdWU2MjQwNDg4ODE= | 189 | [Question] BERT-style multiple choice formatting | {
"login": "sarahwie",
"id": 8027676,
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sarahwie",
"html_url": "https://github.com/sarahwie",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi @sarahwie, can you details this a little more?\r\n\r\nI'm not sure I understand what you refer to and what you mean when you say \"Previously, this was done by passing a list of InputFeatures to the dataloader instead of a list of InputFeature\"",
"I think I've resolved it. For others' reference: to convert from using the [`MultipleChoiceDataset` class](https://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/examples/multiple-choice/utils_multiple_choice.py#L82)/[`run_multiple_choice.py`](https://github.com/huggingface/transformers/blob/a34a9896ac2a4a33ff9cd805c76eed914c8d8965/examples/multiple-choice/run_multiple_choice.py) script in Huggingface Transformers, I've done the following for hellaswag:\r\n\r\n1. converted the `convert_examples_to_features()` function to only take one input and return a dictionary rather than a list:\r\n```\r\ndef convert_examples_to_features(example, tokenizer, max_length):\r\n\r\n choices_inputs = defaultdict(list)\r\n for ending_idx, ending in enumerate(example['endings']['ending']):\r\n text_a = example['ctx']\r\n text_b = ending\r\n\r\n inputs = tokenizer.encode_plus(\r\n text_a,\r\n text_b,\r\n add_special_tokens=True,\r\n max_length=max_length,\r\n pad_to_max_length=True,\r\n return_overflowing_tokens=True,\r\n )\r\n if \"num_truncated_tokens\" in inputs and inputs[\"num_truncated_tokens\"] > 0:\r\n logger.info(\r\n \"Attention! you are cropping tokens (swag task is ok). \"\r\n \"If you are training ARC and RACE and you are poping question + options,\"\r\n \"you need to try to use a bigger max seq length!\"\r\n )\r\n\r\n for key in inputs:\r\n choices_inputs[key].append(inputs[key])\r\n \r\n choices_inputs['label'] = int(example['label'])\r\n\r\n return choices_inputs\r\n```\r\n2. apply this directly (instance-wise) to dataset, convert dataset to torch tensors. Dataset is then ready to be passed to `Trainer` instance.\r\n\r\n```\r\ndataset['train'] = dataset['train'].map(lambda x: convert_examples_to_features(x, tokenizer, max_length), batched=False)\r\ncolumns = ['input_ids', 'token_type_ids', 'attention_mask', 'label']\r\ndataset['train'].set_format(type='torch', columns=columns)\r\n```"
] | 1,590,383,465,000 | 1,590,431,908,000 | 1,590,431,908,000 | NONE | null | Hello, I am wondering what the equivalent formatting of a dataset should be to allow for multiple-choice answering prediction, BERT-style. Previously, this was done by passing a list of `InputFeatures` to the dataloader instead of a list of `InputFeature`, where `InputFeatures` contained lists of length equal to the number of answer choices in the MCQ instead of single items. I'm a bit confused about what the output of my feature conversion function should be when using `dataset.map()` to ensure similar behavior.
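In short, the shape that worked (matching the `convert_examples_to_features` resolution quoted in the comments of this row) is a dict where each tokenizer key maps to a list with one entry per answer choice, plus an integer `label`. A purely illustrative sketch:
```python
# Illustrative shape only (the literal values are placeholders):
# each key holds one encoded sequence per answer choice.
example_features = {
    "input_ids": [[101, 2023], [101, 2003], [101, 1037], [101, 3231]],
    "token_type_ids": [[0, 0], [0, 0], [0, 0], [0, 0]],
    "attention_mask": [[1, 1], [1, 1], [1, 1], [1, 1]],
    "label": 2,  # index of the correct choice
}
```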
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/189/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/188/comments | https://api.github.com/repos/huggingface/datasets/issues/188/events | https://github.com/huggingface/datasets/issues/188 | 623,890,430 | MDU6SXNzdWU2MjM4OTA0MzA= | 188 | When will the remaining math_dataset modules be added as dataset objects | {
"login": "tylerroost",
"id": 31251196,
"node_id": "MDQ6VXNlcjMxMjUxMTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31251196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tylerroost",
"html_url": "https://github.com/tylerroost",
"followers_url": "https://api.github.com/users/tylerroost/followers",
"following_url": "https://api.github.com/users/tylerroost/following{/other_user}",
"gists_url": "https://api.github.com/users/tylerroost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tylerroost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tylerroost/subscriptions",
"organizations_url": "https://api.github.com/users/tylerroost/orgs",
"repos_url": "https://api.github.com/users/tylerroost/repos",
"events_url": "https://api.github.com/users/tylerroost/events{/privacy}",
"received_events_url": "https://api.github.com/users/tylerroost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"On a similar note it would be nice to differentiate between train-easy, train-medium, and train-hard",
"Hi @tylerroost, we don't have a timeline for this at the moment.\r\nIf you want to give it a look we would be happy to review a PR on it.\r\nAlso, the library is one week old so everything is quite barebones, in particular the doc.\r\nYou should expect some bumps on the road.\r\n\r\nTo get you started, you can check the datasets scripts in the `./datasets` folder on the repo and find the one on math_datasets that will need to be modified. Then you should check the original repository on the math_dataset to see where the other files to download are located and what is the expected format for the various parts of the dataset.\r\n\r\nTo get a general overview on how datasets scripts are written and used, you can read the nice tutorial on how to add a new dataset for TensorFlow Dataset [here](https://www.tensorflow.org/datasets/add_dataset), our API is not exactly identical but it can give you a high-level overview.",
"Thanks I'll give it a look"
] | 1,590,335,212,000 | 1,590,346,428,000 | 1,590,346,428,000 | NONE | null | Currently only `algebra_linear_1d` is supported. Is there a timeline for supporting the other modules? If no timeline is established, how can I help? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/188/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/187/comments | https://api.github.com/repos/huggingface/datasets/issues/187/events | https://github.com/huggingface/datasets/issues/187 | 623,627,800 | MDU6SXNzdWU2MjM2Mjc4MDA= | 187 | [Question] How to load wikipedia ? Beam runner ? | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"I have seen that somebody is hard working on easierly loadable wikipedia. #129 \r\nMaybe I should wait a few days for that version ?",
"Yes we (well @lhoestq) are very actively working on this."
] | 1,590,229,132,000 | 1,590,365,522,000 | 1,590,365,522,000 | CONTRIBUTOR | null | When `nlp.load_dataset('wikipedia')`, I got
* `WARNING:nlp.builder:Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided. Please pass a nlp.DownloadConfig(beam_runner=...) object to the builder.download_and_prepare(download_config=...) method. Default values will be used.`
* `AttributeError: 'NoneType' object has no attribute 'size'`
Could somebody tell me what I should do?
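For reference, the warning seems to suggest something like the following. This is a sketch only: `"DirectRunner"` and the exact keyword names are assumptions taken from the warning text and the `load_dataset` signature visible in the traceback below.
```python
import nlp

# Sketch: follow the warning and pass an explicit (local) Beam runner.
# "DirectRunner" and the keyword names are assumptions, not verified API.
dl_config = nlp.DownloadConfig(beam_runner="DirectRunner")
dataset = nlp.load_dataset("wikipedia", "20200501.aa", download_config=dl_config)
```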
# Env
On Colab,
```
git clone https://github.com/huggingface/nlp
cd nlp
pip install -q .
```
```
%pip install -q apache_beam mwparserfromhell
-> ERROR: pydrive 1.3.1 has requirement oauth2client>=4.0.0, but you'll have oauth2client 3.0.0 which is incompatible.
ERROR: google-api-python-client 1.7.12 has requirement httplib2<1dev,>=0.17.0, but you'll have httplib2 0.12.0 which is incompatible.
ERROR: chainer 6.5.0 has requirement typing-extensions<=3.6.6, but you'll have typing-extensions 3.7.4.2 which is incompatible.
```
```
pip install -q apache-beam[interactive]
ERROR: google-colab 1.0.0 has requirement ipython~=5.5.0, but you'll have ipython 5.10.0 which is incompatible.
```
# The whole message
```
WARNING:nlp.builder:Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided. Please pass a nlp.DownloadConfig(beam_runner=...) object to the builder.download_and_prepare(download_config=...) method. Default values will be used.
Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wikipedia/20200501.aa/1.0.0...
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process()
44 frames
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker.invoke_process()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window()
/usr/local/lib/python3.6/dist-packages/apache_beam/io/iobase.py in process(self, element, init_result)
1081 writer.write(e)
-> 1082 return [window.TimestampedValue(writer.close(), timestamp.MAX_TIMESTAMP)]
1083
/usr/local/lib/python3.6/dist-packages/apache_beam/io/filebasedsink.py in close(self)
422 def close(self):
--> 423 self.sink.close(self.temp_handle)
424 return self.temp_shard_path
/usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in close(self, writer)
537 if len(self._buffer[0]) > 0:
--> 538 self._flush_buffer()
539 if self._record_batches_byte_size > 0:
/usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in _flush_buffer(self)
569 for b in x.buffers():
--> 570 size = size + b.size
571 self._record_batches_byte_size = self._record_batches_byte_size + size
AttributeError: 'NoneType' object has no attribute 'size'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
<ipython-input-9-340aabccefff> in <module>()
----> 1 dset = nlp.load_dataset('wikipedia')
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
518 download_mode=download_mode,
519 ignore_verifications=ignore_verifications,
--> 520 save_infos=save_infos,
521 )
522
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
370 verify_infos = not save_infos and not ignore_verifications
371 self._download_and_prepare(
--> 372 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
373 )
374 # Sync info
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
770 with beam.Pipeline(runner=beam_runner, options=beam_options,) as pipeline:
771 super(BeamBasedBuilder, self)._download_and_prepare(
--> 772 dl_manager, pipeline=pipeline, verify_infos=False
773 ) # TODO{beam} verify infos
774
/usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in __exit__(self, exc_type, exc_val, exc_tb)
501 def __exit__(self, exc_type, exc_val, exc_tb):
502 if not exc_type:
--> 503 self.run().wait_until_finish()
504
505 def visit(self, visitor):
/usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in run(self, test_runner_api)
481 return Pipeline.from_runner_api(
482 self.to_runner_api(use_fake_coders=True), self.runner,
--> 483 self._options).run(False)
484
485 if self._options.view_as(TypeOptions).runtime_type_check:
/usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in run(self, test_runner_api)
494 finally:
495 shutil.rmtree(tmpdir)
--> 496 return self.runner.run_pipeline(self, self._options)
497
498 def __enter__(self):
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/direct/direct_runner.py in run_pipeline(self, pipeline, options)
128 runner = BundleBasedDirectRunner()
129
--> 130 return runner.run_pipeline(pipeline, options)
131
132
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_pipeline(self, pipeline, options)
553
554 self._latest_run_result = self.run_via_runner_api(
--> 555 pipeline.to_runner_api(default_environment=self._default_environment))
556 return self._latest_run_result
557
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_via_runner_api(self, pipeline_proto)
563 # TODO(pabloem, BEAM-7514): Create a watermark manager (that has access to
564 # the teststream (if any), and all the stages).
--> 565 return self.run_stages(stage_context, stages)
566
567 @contextlib.contextmanager
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_stages(self, stage_context, stages)
704 stage,
705 pcoll_buffers,
--> 706 stage_context.safe_coders)
707 metrics_by_stage[stage.name] = stage_results.process_bundle.metrics
708 monitoring_infos_by_stage[stage.name] = (
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in _run_stage(self, worker_handler_factory, pipeline_components, stage, pcoll_buffers, safe_coders)
1071 cache_token_generator=cache_token_generator)
1072
-> 1073 result, splits = bundle_manager.process_bundle(data_input, data_output)
1074
1075 def input_for(transform_id, input_id):
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in process_bundle(self, inputs, expected_outputs)
2332
2333 with UnboundedThreadPoolExecutor() as executor:
-> 2334 for result, split_result in executor.map(execute, part_inputs):
2335
2336 split_result_list += split_result
/usr/lib/python3.6/concurrent/futures/_base.py in result_iterator()
584 # Careful not to keep a reference to the popped future
585 if timeout is None:
--> 586 yield fs.pop().result()
587 else:
588 yield fs.pop().result(end_time - time.monotonic())
/usr/lib/python3.6/concurrent/futures/_base.py in result(self, timeout)
430 raise CancelledError()
431 elif self._state == FINISHED:
--> 432 return self.__get_result()
433 else:
434 raise TimeoutError()
/usr/lib/python3.6/concurrent/futures/_base.py in __get_result(self)
382 def __get_result(self):
383 if self._exception:
--> 384 raise self._exception
385 else:
386 return self._result
/usr/local/lib/python3.6/dist-packages/apache_beam/utils/thread_pool_executor.py in run(self)
42 # If the future wasn't cancelled, then attempt to execute it.
43 try:
---> 44 self._future.set_result(self._fn(*self._fn_args, **self._fn_kwargs))
45 except BaseException as exc:
46 # Even though Python 2 futures library has #set_exection(),
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in execute(part_map)
2329 self._registered,
2330 cache_token_generator=self._cache_token_generator)
-> 2331 return bundle_manager.process_bundle(part_map, expected_outputs)
2332
2333 with UnboundedThreadPoolExecutor() as executor:
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in process_bundle(self, inputs, expected_outputs)
2243 process_bundle_descriptor_id=self._bundle_descriptor.id,
2244 cache_tokens=[next(self._cache_token_generator)]))
-> 2245 result_future = self._worker_handler.control_conn.push(process_bundle_req)
2246
2247 split_results = [] # type: List[beam_fn_api_pb2.ProcessBundleSplitResponse]
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in push(self, request)
1557 self._uid_counter += 1
1558 request.instruction_id = 'control_%s' % self._uid_counter
-> 1559 response = self.worker.do_instruction(request)
1560 return ControlFuture(request.instruction_id, response)
1561
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/sdk_worker.py in do_instruction(self, request)
413 # E.g. if register is set, this will call self.register(request.register))
414 return getattr(self, request_type)(
--> 415 getattr(request, request_type), request.instruction_id)
416 else:
417 raise NotImplementedError
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/sdk_worker.py in process_bundle(self, request, instruction_id)
448 with self.maybe_profile(instruction_id):
449 delayed_applications, requests_finalization = (
--> 450 bundle_processor.process_bundle(instruction_id))
451 monitoring_infos = bundle_processor.monitoring_infos()
452 monitoring_infos.extend(self.state_cache_metrics_fn())
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/bundle_processor.py in process_bundle(self, instruction_id)
837 for data in data_channel.input_elements(instruction_id,
838 expected_transforms):
--> 839 input_op_by_transform_id[data.transform_id].process_encoded(data.data)
840
841 # Finish all operations.
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/bundle_processor.py in process_encoded(self, encoded_windowed_values)
214 decoded_value = self.windowed_coder_impl.decode_from_stream(
215 input_stream, True)
--> 216 self.output(decoded_value)
217
218 def try_split(self, fraction_of_remainder, total_buffer_size):
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.Operation.output()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.Operation.output()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.SingletonConsumerSet.receive()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.process()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.process()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner._reraise_augmented()
/usr/local/lib/python3.6/dist-packages/future/utils/__init__.py in raise_with_traceback(exc, traceback)
417 if traceback == Ellipsis:
418 _, _, traceback = sys.exc_info()
--> 419 raise exc.with_traceback(traceback)
420
421 else:
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker.invoke_process()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window()
/usr/local/lib/python3.6/dist-packages/apache_beam/io/iobase.py in process(self, element, init_result)
1080 for e in bundle[1]: # values
1081 writer.write(e)
-> 1082 return [window.TimestampedValue(writer.close(), timestamp.MAX_TIMESTAMP)]
1083
1084
/usr/local/lib/python3.6/dist-packages/apache_beam/io/filebasedsink.py in close(self)
421
422 def close(self):
--> 423 self.sink.close(self.temp_handle)
424 return self.temp_shard_path
/usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in close(self, writer)
536 def close(self, writer):
537 if len(self._buffer[0]) > 0:
--> 538 self._flush_buffer()
539 if self._record_batches_byte_size > 0:
540 self._write_batches(writer)
/usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in _flush_buffer(self)
568 for x in arrays:
569 for b in x.buffers():
--> 570 size = size + b.size
571 self._record_batches_byte_size = self._record_batches_byte_size + size
AttributeError: 'NoneType' object has no attribute 'size' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles']
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/187/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/186/comments | https://api.github.com/repos/huggingface/datasets/issues/186/events | https://github.com/huggingface/datasets/issues/186 | 623,595,180 | MDU6SXNzdWU2MjM1OTUxODA= | 186 | Weird-ish: Not creating unique caches for different phases | {
"login": "zphang",
"id": 1668462,
"node_id": "MDQ6VXNlcjE2Njg0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zphang",
"html_url": "https://github.com/zphang",
"followers_url": "https://api.github.com/users/zphang/followers",
"following_url": "https://api.github.com/users/zphang/following{/other_user}",
"gists_url": "https://api.github.com/users/zphang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zphang/subscriptions",
"organizations_url": "https://api.github.com/users/zphang/orgs",
"repos_url": "https://api.github.com/users/zphang/repos",
"events_url": "https://api.github.com/users/zphang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zphang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Looks like a duplicate of #120.\r\nThis is already fixed on master. We'll do a new release on pypi soon",
"Good catch, it looks fixed.\r\n"
] | 1,590,216,058,000 | 1,590,265,338,000 | 1,590,265,337,000 | NONE | null | Sample code:
```python
import nlp
dataset = nlp.load_dataset('boolq')
def func1(x):
return x
def func2(x):
return None
train_output = dataset["train"].map(func1)
valid_output = dataset["validation"].map(func1)
print()
print(len(train_output), len(valid_output))
# Output: 9427 9427
```
The map method in both cases seems to be pointing to the same cache, so the second call, made on the validation data, returns the processed train data from the cache.
What's weird is that the following doesn't seem to be an issue:
```python
train_output = dataset["train"].map(func2)
valid_output = dataset["validation"].map(func2)
print()
print(len(train_output), len(valid_output))
# 9427 3270
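# A possible workaround on affected versions (sketch only, assuming `map`
# accepts a `cache_file_name` argument): point each split at its own cache
# file so the two calls cannot collide, e.g.:
#   train_output = dataset["train"].map(func1, cache_file_name="train_func1.arrow")
#   valid_output = dataset["validation"].map(func1, cache_file_name="valid_func1.arrow")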
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/186/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/185/comments | https://api.github.com/repos/huggingface/datasets/issues/185/events | https://github.com/huggingface/datasets/pull/185 | 623,172,484 | MDExOlB1bGxSZXF1ZXN0NDIxODkxNjY2 | 185 | [Commands] In-detail instructions to create dummy data folder | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"awesome !"
] | 1,590,150,385,000 | 1,590,156,395,000 | 1,590,156,394,000 | MEMBER | null | ### Dummy data command
This PR adds a new command `python nlp-cli dummy_data <path_to_dataset_folder>` that gives detailed instructions on how to add the dummy data files.
It would be great if you could try it out by moving the current dummy_data folder of any dataset in `./datasets` with `mv datasets/<dataset_name>/dummy_data datasets/<dataset_name>/dummy_data_copy` and running the command `python nlp-cli dummy_data ./datasets/<dataset_name>` to see if you like the instructions.
### CONTRIBUTING.md
Also, the CONTRIBUTING.md has been made cleaner, including a new section on "How to add a dataset".
### Current PRs
It would be nice to try out whether this command helps current PRs that add a dataset, *e.g.* #169. I will comment on those PRs. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/185/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/185",
"html_url": "https://github.com/huggingface/datasets/pull/185",
"diff_url": "https://github.com/huggingface/datasets/pull/185.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/185.patch",
"merged_at": 1590156394000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/184 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/184/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/184/comments | https://api.github.com/repos/huggingface/datasets/issues/184/events | https://github.com/huggingface/datasets/pull/184 | 623,120,929 | MDExOlB1bGxSZXF1ZXN0NDIxODQ5MTQ3 | 184 | Use IndexError instead of ValueError when index out of range | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,590,144,222,000 | 1,590,654,678,000 | 1,590,654,678,000 | CONTRIBUTOR | null | **The default `__iter__` needs `IndexError`.**
When I wanted to create a wrapper of an arrow dataset to adapt it to fastai,
I didn't know how to initialize it, so I used object composition instead of inheritance.
I wrote something like this.
```
class HF_dataset():
    def __init__(self, arrow_dataset):
        self.dset = arrow_dataset

    def __getitem__(self, i):
        # my_get_item stands in for whatever per-item processing is needed
        return self.my_get_item(self.dset, i)
```
But `for sample in my_dataset:` gave me `ValueError(f"Index ({key}) outside of table length ({self._data.num_rows}).")`. This is because the default `__iter__` only stops when it catches an `IndexError`.
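A minimal sketch of why that happens, in plain Python and independent of `nlp` (the class and values are illustrative):
```python
# Python's legacy iteration protocol calls __getitem__(0), __getitem__(1), ...
# and treats IndexError -- and only IndexError -- as the end of iteration.
class Wrapper:
    def __init__(self, items):
        self.items = items

    def __getitem__(self, i):
        if i >= len(self.items):
            raise IndexError(i)  # ends the implicit for-loop cleanly
        return self.items[i]

for sample in Wrapper([1, 2, 3]):
    print(sample)  # prints 1, 2, 3, then stops; a ValueError here would propagate
```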
You can also see my [work](https://github.com/richardyy1188/Pretrain-MLM-and-finetune-on-GLUE-with-fastai/blob/master/GLUE_with_fastai.ipynb) that uses fastai2 to show/load batches from huggingface/nlp GLUE datasets
So I hope we can raise `IndexError` instead, so that other people who want to wrap it for any purpose won't be caught out by this caveat.
BTW, I super appreciate your work, both transformers and nlp save my life. 💖💖💖💖💖💖💖
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/184/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/184",
"html_url": "https://github.com/huggingface/datasets/pull/184",
"diff_url": "https://github.com/huggingface/datasets/pull/184.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/184.patch",
"merged_at": 1590654678000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/183/comments | https://api.github.com/repos/huggingface/datasets/issues/183/events | https://github.com/huggingface/datasets/issues/183 | 623,054,270 | MDU6SXNzdWU2MjMwNTQyNzA= | 183 | [Bug] labels of glue/ax are all -1 | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"This is the test set given by the Glue benchmark. The labels are not provided, and therefore set to -1.",
"Ah, yeah. Why it didn’t occur to me. 😂\nThank you for your comment."
] | 1,590,137,016,000 | 1,590,185,645,000 | 1,590,185,645,000 | CONTRIBUTOR | null | ```
ax = nlp.load_dataset('glue', 'ax')
for i in range(30): print(ax['test'][i]['label'], end=', ')
```
```
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/183/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/182 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/182/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/182/comments | https://api.github.com/repos/huggingface/datasets/issues/182/events | https://github.com/huggingface/datasets/pull/182 | 622,646,770 | MDExOlB1bGxSZXF1ZXN0NDIxNDcxMjg4 | 182 | Update newsroom.py | {
"login": "yoavartzi",
"id": 3289873,
"node_id": "MDQ6VXNlcjMyODk4NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3289873?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoavartzi",
"html_url": "https://github.com/yoavartzi",
"followers_url": "https://api.github.com/users/yoavartzi/followers",
"following_url": "https://api.github.com/users/yoavartzi/following{/other_user}",
"gists_url": "https://api.github.com/users/yoavartzi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yoavartzi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoavartzi/subscriptions",
"organizations_url": "https://api.github.com/users/yoavartzi/orgs",
"repos_url": "https://api.github.com/users/yoavartzi/repos",
"events_url": "https://api.github.com/users/yoavartzi/events{/privacy}",
"received_events_url": "https://api.github.com/users/yoavartzi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,590,080,863,000 | 1,590,165,503,000 | 1,590,165,503,000 | CONTRIBUTOR | null | Updated the URL for Newsroom download so it's more robust to future changes. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/182/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/182",
"html_url": "https://github.com/huggingface/datasets/pull/182",
"diff_url": "https://github.com/huggingface/datasets/pull/182.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/182.patch",
"merged_at": 1590165503000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/181/comments | https://api.github.com/repos/huggingface/datasets/issues/181/events | https://github.com/huggingface/datasets/issues/181 | 622,634,420 | MDU6SXNzdWU2MjI2MzQ0MjA= | 181 | Cannot upload my own dataset | {
"login": "korakot",
"id": 3155646,
"node_id": "MDQ6VXNlcjMxNTU2NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3155646?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/korakot",
"html_url": "https://github.com/korakot",
"followers_url": "https://api.github.com/users/korakot/followers",
"following_url": "https://api.github.com/users/korakot/following{/other_user}",
"gists_url": "https://api.github.com/users/korakot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/korakot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/korakot/subscriptions",
"organizations_url": "https://api.github.com/users/korakot/orgs",
"repos_url": "https://api.github.com/users/korakot/repos",
"events_url": "https://api.github.com/users/korakot/events{/privacy}",
"received_events_url": "https://api.github.com/users/korakot/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"It's my misunderstanding. I cannot just upload a csv. I need to write a dataset loading script too.",
"I now try with the sample `datasets/csv` folder. \r\n\r\n nlp-cli upload csv\r\n\r\nThe error is still the same\r\n\r\n```\r\n2020-05-21 17:20:56.394659: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\r\nAbout to upload file /content/csv/csv.py to S3 under filename csv/csv.py and namespace korakot\r\nAbout to upload file /content/csv/dummy/0.0.0/dummy_data.zip to S3 under filename csv/dummy/0.0.0/dummy_data.zip and namespace korakot\r\nProceed? [Y/n] y\r\nUploading... This might take a while if files are large\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/nlp-cli\", line 33, in <module>\r\n service.run()\r\n File \"/usr/local/lib/python3.6/dist-packages/nlp/commands/user.py\", line 234, in run\r\n token=token, filename=filename, filepath=filepath, organization=self.args.organization\r\n File \"/usr/local/lib/python3.6/dist-packages/nlp/hf_api.py\", line 141, in presign_and_upload\r\n urls = self.presign(token, filename=filename, organization=organization)\r\n File \"/usr/local/lib/python3.6/dist-packages/nlp/hf_api.py\", line 132, in presign\r\n return PresignedUrl(**d)\r\nTypeError: __init__() got an unexpected keyword argument 'cdn'\r\n```\r\n",
"We haven't tested the dataset upload feature yet cc @julien-c \r\nThis is on our short/mid-term roadmap though",
"Even if I fix the `TypeError: __init__() got an unexpected keyword argument 'cdn'` error, it looks like it still uploads to `https://s3.amazonaws.com/models.huggingface.co/bert/<namespace>/<dataset_name>` instead of `https://s3.amazonaws.com/datasets.huggingface.co/nlp/<namespace>/<dataset_name>`",
"@lhoestq The endpoints in https://github.com/huggingface/nlp/blob/master/src/nlp/hf_api.py should be (depending on the type of file):\r\n```\r\nPOST /api/datasets/presign\r\nGET /api/datasets/listObjs\r\nDELETE /api/datasets/deleteObj\r\nPOST /api/metrics/presign \r\nGET /api/metrics/listObjs\r\nDELETE /api/metrics/deleteObj\r\n```\r\n\r\nIn addition to this, @thomwolf cleaned up the objects with dataclasses but you should revert this and re-align to the hf_api that's in this branch of transformers: https://github.com/huggingface/transformers/pull/4632 (so that potential new JSON attributes in the API output don't break existing versions of any library)",
"New commands are\r\n```\r\nnlp-cli upload_dataset <path/to/dataset>\r\nnlp-cli upload_metric <path/to/metric>\r\nnlp-cli s3_datasets {rm, ls}\r\nnlp-cli s3_metrics {rm, ls}\r\n```\r\nClosing this issue."
] | 1,590,079,552,000 | 1,592,518,482,000 | 1,592,518,482,000 | NONE | null | I looked into `nlp-cli` and `user.py` to learn how to upload my own data.
It is supposed to work like this:
- Register to get username, password at huggingface.co
- `nlp-cli login` and type username, password
- I have a single file to upload at `./ttc/ttc_freq_extra.csv`
- `nlp-cli upload ttc/ttc_freq_extra.csv`
But I got this error.
```
2020-05-21 16:33:52.722464: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
About to upload file /content/ttc/ttc_freq_extra.csv to S3 under filename ttc/ttc_freq_extra.csv and namespace korakot
Proceed? [Y/n] y
Uploading... This might take a while if files are large
Traceback (most recent call last):
File "/usr/local/bin/nlp-cli", line 33, in <module>
service.run()
File "/usr/local/lib/python3.6/dist-packages/nlp/commands/user.py", line 234, in run
token=token, filename=filename, filepath=filepath, organization=self.args.organization
File "/usr/local/lib/python3.6/dist-packages/nlp/hf_api.py", line 141, in presign_and_upload
urls = self.presign(token, filename=filename, organization=organization)
File "/usr/local/lib/python3.6/dist-packages/nlp/hf_api.py", line 132, in presign
return PresignedUrl(**d)
TypeError: __init__() got an unexpected keyword argument 'cdn'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/181/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/180/comments | https://api.github.com/repos/huggingface/datasets/issues/180/events | https://github.com/huggingface/datasets/pull/180 | 622,556,861 | MDExOlB1bGxSZXF1ZXN0NDIxMzk5Nzg2 | 180 | Add hall of fame | {
"login": "clmnt",
"id": 821155,
"node_id": "MDQ6VXNlcjgyMTE1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/821155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clmnt",
"html_url": "https://github.com/clmnt",
"followers_url": "https://api.github.com/users/clmnt/followers",
"following_url": "https://api.github.com/users/clmnt/following{/other_user}",
"gists_url": "https://api.github.com/users/clmnt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clmnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clmnt/subscriptions",
"organizations_url": "https://api.github.com/users/clmnt/orgs",
"repos_url": "https://api.github.com/users/clmnt/repos",
"events_url": "https://api.github.com/users/clmnt/events{/privacy}",
"received_events_url": "https://api.github.com/users/clmnt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,590,072,828,000 | 1,590,165,316,000 | 1,590,165,314,000 | MEMBER | null | powered by https://github.com/sourcerer-io/hall-of-fame | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/180/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/180",
"html_url": "https://github.com/huggingface/datasets/pull/180",
"diff_url": "https://github.com/huggingface/datasets/pull/180.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/180.patch",
"merged_at": 1590165314000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/179/comments | https://api.github.com/repos/huggingface/datasets/issues/179/events | https://github.com/huggingface/datasets/issues/179 | 622,525,410 | MDU6SXNzdWU2MjI1MjU0MTA= | 179 | [Feature request] separate split name and split instructions | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"If your dataset is a collection of sub-datasets, you should probably consider having one config per sub-dataset. For example for Glue, we have sst2, mnli etc.\r\nIf you want to have multiple train sets (for example one per stage). The easiest solution would be to name them `nlp.Split(\"train_stage1\")`, `nlp.Split(\"train_stage2\")`, etc. or something like that.",
"Thanks for the tip! I ended up setting up three different versions of the dataset with their own configs.\r\n\r\nfor the named splits, I was trying with `nlp.Split(\"train-stage1\")`, which fails. Changing to `nlp.Split(\"train_stage1\")` works :) I looked for examples of what works in the code comments, it may be worth adding some examples of valid/invalid names in there?"
] | 1,590,070,251,000 | 1,590,154,268,000 | 1,590,154,267,000 | MEMBER | null | Currently, the name of an nlp.NamedSplit is parsed in arrow_reader.py and used as the instruction.
This makes it impossible to have several training sets, which can occur when:
- A dataset corresponds to a collection of sub-datasets
- A dataset was built in stages, adding new examples at each stage
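For illustration, a minimal sketch of the named-splits workaround discussed in the comments above (the stage names are hypothetical; per those comments, `nlp.Split("train-stage1")` with a hyphen fails while `nlp.Split("train_stage1")` works):

```python
import nlp

# sketch of a builder's split generators for a dataset built in stages
# (method body only; assumes the usual nlp.GeneratorBasedBuilder context)
def _split_generators(self, dl_manager):
    return [
        nlp.SplitGenerator(name=nlp.Split("train_stage1"), gen_kwargs={"stage": 1}),
        nlp.SplitGenerator(name=nlp.Split("train_stage2"), gen_kwargs={"stage": 2}),
    ]
```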
Would it be possible to have two separate fields in the Split class, a name/instruction and a unique ID that is used as the key in the builder's `split_dict`? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/179/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/178/comments | https://api.github.com/repos/huggingface/datasets/issues/178/events | https://github.com/huggingface/datasets/pull/178 | 621,979,849 | MDExOlB1bGxSZXF1ZXN0NDIwOTMyMDI5 | 178 | [Manual data] improve error message for manual data in general | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,998,245,000 | 1,589,998,732,000 | 1,589,998,730,000 | MEMBER | null | `nlp.load("xsum")` now leads to the following error message:
![Screenshot from 2020-05-20 20-05-28](https://user-images.githubusercontent.com/23423619/82481825-3587ea00-9ad6-11ea-9ca2-5794252c6ac7.png)
I guess the manual download instructions for `xsum` can also be improved. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/178/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/178",
"html_url": "https://github.com/huggingface/datasets/pull/178",
"diff_url": "https://github.com/huggingface/datasets/pull/178.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/178.patch",
"merged_at": 1589998730000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/177 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/177/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/177/comments | https://api.github.com/repos/huggingface/datasets/issues/177/events | https://github.com/huggingface/datasets/pull/177 | 621,975,368 | MDExOlB1bGxSZXF1ZXN0NDIwOTI4MzE0 | 177 | Xsum manual download instruction | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,997,761,000 | 1,589,998,610,000 | 1,589,998,609,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/177/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/177",
"html_url": "https://github.com/huggingface/datasets/pull/177",
"diff_url": "https://github.com/huggingface/datasets/pull/177.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/177.patch",
"merged_at": 1589998609000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/176/comments | https://api.github.com/repos/huggingface/datasets/issues/176/events | https://github.com/huggingface/datasets/pull/176 | 621,934,638 | MDExOlB1bGxSZXF1ZXN0NDIwODkzNDky | 176 | [Tests] Refactor MockDownloadManager | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,994,456,000 | 1,589,998,639,000 | 1,589,998,638,000 | MEMBER | null | Clean mock download manager class.
The print function was not of much help, I think.
We should think about adding a command that creates the dummy folder structure for the user. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/176/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/176",
"html_url": "https://github.com/huggingface/datasets/pull/176",
"diff_url": "https://github.com/huggingface/datasets/pull/176.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/176.patch",
"merged_at": 1589998638000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/175/comments | https://api.github.com/repos/huggingface/datasets/issues/175/events | https://github.com/huggingface/datasets/issues/175 | 621,929,428 | MDU6SXNzdWU2MjE5Mjk0Mjg= | 175 | [Manual data dir] Error message: nlp.load_dataset('xsum') -> TypeError | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,994,032,000 | 1,589,998,730,000 | 1,589,998,730,000 | CONTRIBUTOR | null | v 0.1.0 from pip
```python
import nlp
xsum = nlp.load_dataset('xsum')
```
The issue is that `dl_manager.manual_dir` is `None`:
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-42-8a32f066f3bd> in <module>
----> 1 xsum = nlp.load_dataset('xsum')
~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
515 download_mode=download_mode,
516 ignore_verifications=ignore_verifications,
--> 517 save_infos=save_infos,
518 )
519
~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
361 verify_infos = not save_infos and not ignore_verifications
362 self._download_and_prepare(
--> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
364 )
365 # Sync info
~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
397 split_dict = SplitDict(dataset_name=self.name)
398 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 399 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
400 # Checksums verification
401 if verify_infos:
~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/datasets/xsum/5c5fca23aaaa469b7a1c6f095cf12f90d7ab99bcc0d86f689a74fd62634a1472/xsum.py in _split_generators(self, dl_manager)
102 with open(dl_path, "r") as json_file:
103 split_ids = json.load(json_file)
--> 104 downloaded_path = os.path.join(dl_manager.manual_dir, "xsum-extracts-from-downloads")
105 return [
106 nlp.SplitGenerator(
~/miniconda3/envs/nb/lib/python3.7/posixpath.py in join(a, *p)
78 will be discarded. An empty last part will result in a path that
79 ends with a separator."""
---> 80 a = os.fspath(a)
81 sep = _get_sep(a)
82 path = a
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
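For reference, a sketch of passing the manual data directory explicitly via the `data_dir` argument visible in the `load_dataset` signature above (the path is a placeholder, and whether `data_dir` is routed to `manual_dir` here is an assumption):

```python
import nlp

# placeholder path to the manually downloaded and extracted xsum data
xsum = nlp.load_dataset('xsum', data_dir='/path/to/xsum-extracts-from-downloads')
```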
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/175/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/174 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/174/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/174/comments | https://api.github.com/repos/huggingface/datasets/issues/174/events | https://github.com/huggingface/datasets/issues/174 | 621,928,403 | MDU6SXNzdWU2MjE5Mjg0MDM= | 174 | nlp.load_dataset('xsum') -> TypeError | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,993,949,000 | 1,589,996,626,000 | 1,589,996,626,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/174/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/173 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/173/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/173/comments | https://api.github.com/repos/huggingface/datasets/issues/173/events | https://github.com/huggingface/datasets/pull/173 | 621,764,932 | MDExOlB1bGxSZXF1ZXN0NDIwNzUyNzQy | 173 | Rm extracted test dirs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thanks for cleaning up the extracted dummy data folders! Instead of changing the file_utils we could also just put these folders under `.gitignore` (or maybe already done?).",
"Awesome! I guess you might have to add the changes for the MockDLManager now in a different file though because of my last PR - sorry!"
] | 1,589,981,448,000 | 1,590,165,276,000 | 1,590,165,275,000 | MEMBER | null | All the dummy data used for tests were duplicated. For each dataset, we had one zip file but also its extracted directory. I removed all these directories.
Furthermore, instead of extracting next to the `dummy_data.zip` file, we now extract into the temporary `cached_dir` used for tests, so that all the extracted directories get removed after testing.
Finally, there was a bug in the `mock_download_manager` that would let it create directories with invalid names, as in #172. I fixed that by encoding URL arguments (see the sketch below). I had to rename the dummy data for `scientific_papers` and `cnn_dailymail` (the AWS tests don't pass for those two in this PR, but they will once AWS is synced, as the local ones already do).
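A sketch of the encoding idea (hypothetical, using the standard library; not the exact code from this PR):

```python
from urllib.parse import quote

# '?', '=' and '&' get percent-encoded, which yields a valid directory name on Windows
dirname = quote("uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs", safe="")
print(dirname)  # uc%3Fexport%3Ddownload%26id%3D0BwmD_VLjROrfM1BxdkxVaTY2bWs
```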
Let me know if it sounds good to you @patrickvonplaten. I'm still not entirely familiar with the mock downloader. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/173/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/173",
"html_url": "https://github.com/huggingface/datasets/pull/173",
"diff_url": "https://github.com/huggingface/datasets/pull/173.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/173.patch",
"merged_at": 1590165275000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/172 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/172/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/172/comments | https://api.github.com/repos/huggingface/datasets/issues/172/events | https://github.com/huggingface/datasets/issues/172 | 621,377,386 | MDU6SXNzdWU2MjEzNzczODY= | 172 | Clone not working on Windows environment | {
"login": "codehunk628",
"id": 51091425,
"node_id": "MDQ6VXNlcjUxMDkxNDI1",
"avatar_url": "https://avatars.githubusercontent.com/u/51091425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codehunk628",
"html_url": "https://github.com/codehunk628",
"followers_url": "https://api.github.com/users/codehunk628/followers",
"following_url": "https://api.github.com/users/codehunk628/following{/other_user}",
"gists_url": "https://api.github.com/users/codehunk628/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codehunk628/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codehunk628/subscriptions",
"organizations_url": "https://api.github.com/users/codehunk628/orgs",
"repos_url": "https://api.github.com/users/codehunk628/repos",
"events_url": "https://api.github.com/users/codehunk628/events{/privacy}",
"received_events_url": "https://api.github.com/users/codehunk628/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Should be fixed on master now :)",
"Thanks @lhoestq 👍 Now I can uninstall WSL and get back to work with windows.🙂"
] | 1,589,935,514,000 | 1,590,238,153,000 | 1,590,233,272,000 | CONTRIBUTOR | null | Cloning in a Windows environment is not working because of the use of the special character '?' in a folder name.
Please consider changing the folder name.
Reference to the folder:
`nlp/datasets/cnn_dailymail/dummy/3.0.0/3.0.0/dummy_data-zip-extracted/dummy_data/uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs/dailymail/stories/`
Error log:
`fatal: cannot create directory at 'datasets/cnn_dailymail/dummy/3.0.0/3.0.0/dummy_data-zip-extracted/dummy_data/uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs': Invalid argument`
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/172/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/171 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/171/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/171/comments | https://api.github.com/repos/huggingface/datasets/issues/171/events | https://github.com/huggingface/datasets/pull/171 | 621,199,128 | MDExOlB1bGxSZXF1ZXN0NDIwMjk0ODM0 | 171 | fix squad metric format | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"One thing for SQuAD is that I wanted to be able to use the SQuAD dataset directly in the metrics and I'm not sure it will be possible with this format.\r\n\r\n(maybe it's not really possible in general though)",
"This is kinda related to one thing I had in mind which is that we may want to be able to dump our model predictions in a `Dataset` as well so that we don't keep them in memory (and we can export them in a nice format later as well when we will have a serialization formats).\r\n\r\nMaybe this is overkill though, I haven't fully wraped my head around this.",
"I'm also perfectly fine with merging this PR in the current state and working on a larger scope later.",
"This is the format needed to run the official script directly. The format of the squad dataset is different from the input of the metric. \r\n\r\n> One thing for SQuAD is that I wanted to be able to use the SQuAD dataset directly in the metrics and I'm not sure it will be possible with this format.\r\n> \r\n> (maybe it's not really possible in general though)\r\n\r\nOk I see. I'll try to use the same format",
"Ok with this update I changed the format to fit the squad dataset format.\r\nNow you can do:\r\n```python\r\nsquad_dset = nlp.load_dataset(\"squad\")\r\nsquad_metric = nlp.load_metric(\"/Users/quentinlhoest/Desktop/hf/nlp-bis/metrics/squad\")\r\npredictions = [\r\n {\"id\": v[\"id\"], \"prediction_text\": v[\"answers\"][\"text\"][0]} # take first possible answer\r\n for v in squad_dset[\"validation\"]\r\n]\r\nsquad_metric.compute(predictions, squad_dset[\"validation\"])\r\n```"
] | 1,589,913,456,000 | 1,590,154,610,000 | 1,590,154,608,000 | MEMBER | null | The format of the squad metric was wrong.
This should fix #143.
I tested with:
```python
predictions = [
{'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'}
]
references = [
{'answers': [{'text': 'Denver Broncos'}], 'id': '56be4db0acb8001400a502ec'}
]
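# a sketch of scoring with this format (assumes nlp.load_metric("squad"),
# as shown in the comments above; not part of the original PR description)
import nlp
squad_metric = nlp.load_metric("squad")
score = squad_metric.compute(predictions, references)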
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/171/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/171",
"html_url": "https://github.com/huggingface/datasets/pull/171",
"diff_url": "https://github.com/huggingface/datasets/pull/171.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/171.patch",
"merged_at": 1590154608000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/170/comments | https://api.github.com/repos/huggingface/datasets/issues/170/events | https://github.com/huggingface/datasets/pull/170 | 621,119,747 | MDExOlB1bGxSZXF1ZXN0NDIwMjMwMDIx | 170 | Rename anli dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,905,617,000 | 1,589,977,389,000 | 1,589,977,388,000 | MEMBER | null | What we have now as the `anli` dataset is actually the αNLI dataset from the ART challenge. This name is confusing because `anli` is also the name of adversarial NLI (see [https://github.com/facebookresearch/anli](https://github.com/facebookresearch/anli)).
I renamed the current `anli` dataset to `art`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/170/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/170",
"html_url": "https://github.com/huggingface/datasets/pull/170",
"diff_url": "https://github.com/huggingface/datasets/pull/170.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/170.patch",
"merged_at": 1589977387000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/169/comments | https://api.github.com/repos/huggingface/datasets/issues/169/events | https://github.com/huggingface/datasets/pull/169 | 621,099,682 | MDExOlB1bGxSZXF1ZXN0NDIwMjE1NDkw | 169 | Adding Qanta (Quizbowl) Dataset | {
"login": "EntilZha",
"id": 1382460,
"node_id": "MDQ6VXNlcjEzODI0NjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1382460?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EntilZha",
"html_url": "https://github.com/EntilZha",
"followers_url": "https://api.github.com/users/EntilZha/followers",
"following_url": "https://api.github.com/users/EntilZha/following{/other_user}",
"gists_url": "https://api.github.com/users/EntilZha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EntilZha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EntilZha/subscriptions",
"organizations_url": "https://api.github.com/users/EntilZha/orgs",
"repos_url": "https://api.github.com/users/EntilZha/repos",
"events_url": "https://api.github.com/users/EntilZha/events{/privacy}",
"received_events_url": "https://api.github.com/users/EntilZha/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @EntilZha - sorry for waiting so long until taking action here. We created a new command and a new recipe of how to add dummy_data. Can you maybe rebase to `master` as explained in 7. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-contribute-to-nlp and check that your dummy data is correct following the instructions here: https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset ? \r\n\r\nIf the tests described in 5. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset pass we can merge the PR :-) ",
"I updated to the most recent master and followed the steps, but still having the similar error where it can't find the correct file since the path to the directory is given, rather than the individual files within them. This still something wrong about how I'm inputting the data or how the tests are reading it?",
"It's the dummy_data structure. You actually have to call the dummy data file name `dummy_data` (not .json anything). So there should not be a `dummy_data` folder but for each config only a `dummy_data` which contains your json dummy data. Can you maybe try once more - if it doesn't work I do it for you :-). ",
"Would that work if there are multiple files? In my case, I'm including something similar to squad 1.0/2.0 where we have the main dataset + an additional challenge set in different files. Would I have the zip decompress to two files in that case?",
"This dataset was actually a special case. It helped us improve the dummy data instructions :-), see #195 .Close this PR and merge #194."
] | 1,589,904,181,000 | 1,590,497,551,000 | 1,590,497,551,000 | CONTRIBUTOR | null | This PR adds the qanta question answering datasets from [Quizbowl: The Case for Incremental Question Answering](https://arxiv.org/abs/1904.04792) and [Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples](https://www.aclweb.org/anthology/Q19-1029/) (adversarial fold).
This partially continues a discussion around fixing dummy data from https://github.com/huggingface/nlp/issues/161
I ran the following code to double-check that it works and did some sanity checks on the output. The majority of the code itself is from our `allennlp` version of the dataset reader.
```python
import nlp
# Default is full question
data = nlp.load_dataset('./datasets/qanta')
# Four configs
# Primarily useful for training
data = nlp.load_dataset('./datasets/qanta', 'mode=sentences,char_skip=25')
# Primarily used in evaluation
data = nlp.load_dataset('./datasets/qanta', 'mode=first,char_skip=25')
data = nlp.load_dataset('./datasets/qanta', 'mode=full,char_skip=25')
# Primarily useful in evaluation and "live" play
data = nlp.load_dataset('./datasets/qanta', 'mode=runs,char_skip=25')
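# a quick sanity check: list the splits that were loaded
# (added for illustration, not part of the original PR)
print(list(data.keys()))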
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/169/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/169",
"html_url": "https://github.com/huggingface/datasets/pull/169",
"diff_url": "https://github.com/huggingface/datasets/pull/169.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/169.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/168/comments | https://api.github.com/repos/huggingface/datasets/issues/168/events | https://github.com/huggingface/datasets/issues/168 | 620,959,819 | MDU6SXNzdWU2MjA5NTk4MTk= | 168 | Loading 'wikitext' dataset fails | {
"login": "itay1itzhak",
"id": 25987633,
"node_id": "MDQ6VXNlcjI1OTg3NjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25987633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itay1itzhak",
"html_url": "https://github.com/itay1itzhak",
"followers_url": "https://api.github.com/users/itay1itzhak/followers",
"following_url": "https://api.github.com/users/itay1itzhak/following{/other_user}",
"gists_url": "https://api.github.com/users/itay1itzhak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itay1itzhak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itay1itzhak/subscriptions",
"organizations_url": "https://api.github.com/users/itay1itzhak/orgs",
"repos_url": "https://api.github.com/users/itay1itzhak/repos",
"events_url": "https://api.github.com/users/itay1itzhak/events{/privacy}",
"received_events_url": "https://api.github.com/users/itay1itzhak/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi, make sure you have a recent version of pyarrow.\r\n\r\nAre you using it in Google Colab? In this case, this error is probably the same as #128",
"Thanks!\r\n\r\nYes I'm using Google Colab, it seems like a duplicate then.",
"Closing as it is a duplicate",
"Hi,\r\nThe squad bug seems to be fixed, but the loading of the 'wikitext' still suffers from this problem (on Colab with pyarrow=0.17.1).",
"When you install `nlp` for the first time on a Colab runtime, it updates the `pyarrow` library that was already on colab. This update shows this message on colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n```\r\nYou just have to restart the runtime and it should be fine.",
"That was it, thanks!"
] | 1,589,893,469,000 | 1,590,529,612,000 | 1,590,529,612,000 | NONE | null | Loading the 'wikitext' dataset fails with an `AttributeError`:
Code to reproduce (From example notebook):
```python
import nlp
wikitext_dataset = nlp.load_dataset('wikitext')
```
Error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-17-d5d9df94b13c> in <module>()
11
12 # Load a dataset and print the first examples in the training set
---> 13 wikitext_dataset = nlp.load_dataset('wikitext')
14 print(wikitext_dataset['train'][0])
6 frames
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
518 download_mode=download_mode,
519 ignore_verifications=ignore_verifications,
--> 520 save_infos=save_infos,
521 )
522
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
363 verify_infos = not save_infos and not ignore_verifications
364 self._download_and_prepare(
--> 365 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
366 )
367 # Sync info
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
416 try:
417 # Prepare split will record examples associated to the split
--> 418 self._prepare_split(split_generator, **prepare_split_kwargs)
419 except OSError:
420 raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or ""))
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator)
594 example = self.info.features.encode_example(record)
595 writer.write(example)
--> 596 num_examples, num_bytes = writer.finalize()
597
598 assert num_examples == num_examples, f"Expected to write {split_info.num_examples} but wrote {num_examples}"
/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in finalize(self, close_stream)
173 def finalize(self, close_stream=True):
174 if self.pa_writer is not None:
--> 175 self.write_on_file()
176 self.pa_writer.close()
177 if close_stream:
/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_on_file(self)
124 else:
125 # All good
--> 126 self._write_array_on_file(pa_array)
127 self.current_rows = []
128
/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in _write_array_on_file(self, pa_array)
93 def _write_array_on_file(self, pa_array):
94 """Write a PyArrow Array"""
---> 95 pa_batch = pa.RecordBatch.from_struct_array(pa_array)
96 self._num_bytes += pa_array.nbytes
97 self.pa_writer.write_batch(pa_batch)
AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
```
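Per the resolution in the comments above, a recent `pyarrow` is required; on Colab, the runtime must be restarted after `pip install nlp` so that the upgraded `pyarrow` is actually imported. A quick check (a sketch, not from the original report):

```python
import pyarrow

# if this still prints the old pre-installed Colab version after
# `pip install nlp`, restart the runtime and run the loading code again
print(pyarrow.__version__)
```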
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/168/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/167/comments | https://api.github.com/repos/huggingface/datasets/issues/167/events | https://github.com/huggingface/datasets/pull/167 | 620,908,786 | MDExOlB1bGxSZXF1ZXN0NDIwMDY0NDMw | 167 | [Tests] refactor tests | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Nice !"
] | 1,589,888,612,000 | 1,589,905,032,000 | 1,589,905,030,000 | MEMBER | null | This PR separates AWS and Local tests to remove these ugly statements in the script:
```python
if "/" not in dataset_name:
logging.info("Skip {} because it is a canonical dataset")
return
```
To run an `aws` test, one should now run the following command:
```bash
pytest -s tests/test_dataset_common.py::AWSDatasetTest::test_builder_class_wmt14
```
The same `local` test can be run with:
```bash
pytest -s tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_wmt14
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/167/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/167",
"html_url": "https://github.com/huggingface/datasets/pull/167",
"diff_url": "https://github.com/huggingface/datasets/pull/167.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/167.patch",
"merged_at": 1589905030000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/166 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/166/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/166/comments | https://api.github.com/repos/huggingface/datasets/issues/166/events | https://github.com/huggingface/datasets/issues/166 | 620,850,218 | MDU6SXNzdWU2MjA4NTAyMTg= | 166 | Add a method to shuffle a dataset | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"+1 for the naming convention\r\n\r\nAbout the `shuffle` method, from my understanding it should be done in `Dataloader` (better separation between dataset processing - usage)",
"+1 for shuffle in `Dataloader`. \r\nSome `Dataloader` just store idxs of dataset and just shuffle those idxs, which might(?) be faster than do shuffle in dataset, especially when doing shuffle every epoch.\r\n\r\nAlso +1 for the naming convention.",
"As you might already know the issue of dataset shuffling came up in the nlp code [walkthrough](https://youtu.be/G3pOvrKkFuk?t=3204) by Yannic Kilcher\r\n",
"We added the `.shuffle` method :)\r\n\r\nClosing this one."
] | 1,589,882,926,000 | 1,592,924,853,000 | 1,592,924,852,000 | MEMBER | null | Could maybe be a `dataset.shuffle(generator=None, seed=None)` signature method.
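For instance (a sketch of the proposed usage, not an existing API):

```python
# assuming `dataset` is a loaded nlp dataset
shuffled = dataset.shuffle(seed=42)  # proposed: returns/caches a shuffled dataset
dataset.shuffle_(seed=42)            # proposed in-place variant, with a torch-style underscore suffix
```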
Also, we could maybe have a clear indication of which methods modify in-place and which methods return/cache a modified dataset. I kinda like the torch convention of having an underscore suffix for all the methods which modify a dataset in-place. What do you think? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/166/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/165/comments | https://api.github.com/repos/huggingface/datasets/issues/165/events | https://github.com/huggingface/datasets/issues/165 | 620,758,221 | MDU6SXNzdWU2MjA3NTgyMjE= | 165 | ANLI | {
"login": "douwekiela",
"id": 6024930,
"node_id": "MDQ6VXNlcjYwMjQ5MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6024930?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/douwekiela",
"html_url": "https://github.com/douwekiela",
"followers_url": "https://api.github.com/users/douwekiela/followers",
"following_url": "https://api.github.com/users/douwekiela/following{/other_user}",
"gists_url": "https://api.github.com/users/douwekiela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/douwekiela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/douwekiela/subscriptions",
"organizations_url": "https://api.github.com/users/douwekiela/orgs",
"repos_url": "https://api.github.com/users/douwekiela/repos",
"events_url": "https://api.github.com/users/douwekiela/events{/privacy}",
"received_events_url": "https://api.github.com/users/douwekiela/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,874,657,000 | 1,589,977,387,000 | 1,589,977,387,000 | NONE | null | Can I recommend the following:
For ANLI, use https://github.com/facebookresearch/anli. As that paper says, "Our dataset is not to be confused with abductive NLI (Bhagavatula et al., 2019), which calls itself αNLI, or ART.".
Indeed, the paper cited under what is currently called anli says in the abstract "We introduce a challenge dataset, ART".
The current naming will confuse people :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/165/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/164/comments | https://api.github.com/repos/huggingface/datasets/issues/164/events | https://github.com/huggingface/datasets/issues/164 | 620,540,250 | MDU6SXNzdWU2MjA1NDAyNTA= | 164 | Add Spanish POS and NER Datasets | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hello @mrm8488, are these datasets official datasets published in an NLP/CL/ML venue?",
"What about this one: https://github.com/ccasimiro88/TranslateAlignRetrieve?"
] | 1,589,840,301,000 | 1,590,424,125,000 | 1,590,424,125,000 | NONE | null | Hi guys,
In order to cover multilingual support, a little step could be adding the standard datasets used for Spanish NER and POS tasks.
I can provide them in raw and preprocessed formats. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/164/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/163/comments | https://api.github.com/repos/huggingface/datasets/issues/163/events | https://github.com/huggingface/datasets/issues/163 | 620,534,307 | MDU6SXNzdWU2MjA1MzQzMDc= | 163 | [Feature request] Add cos-e v1.0 | {
"login": "sarahwie",
"id": 8027676,
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sarahwie",
"html_url": "https://github.com/sarahwie",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Sounds good, @mariamabarham do you want to give a look?\r\nI think we should have two configurations so we can allow either version of the dataset to be loaded with the `1.0` version being the default maybe.\r\n\r\nCc some authors of the great cos-e: @nazneenrajani @bmccann",
"cos_e v1.0 is related to CQA v1.0 but only CQA v1.11 dataset is available on their website. Indeed their is lots of ids in cos_e v1, which are not in CQA v1.11 or the other way around.\r\n@sarahwie, @thomwolf, @nazneenrajani, @bmccann do you know where I can find CQA v1.0\r\n",
"@mariamabarham I'm also not sure where to find CQA 1.0. Perhaps it's not possible to include this version of the dataset. I'll close the issue if that's the case.",
"I do have a copy of the dataset. I can upload it to our repo.",
"Great @nazneenrajani. let me know once done.\r\nThanks",
"@mariamabarham @sarahwie I added them to the cos-e repo https://github.com/salesforce/cos-e/tree/master/data/v1.0",
"You can now do\r\n```python\r\nfrom nlp import load_dataset\r\ncos_e = load_dataset(\"cos_e\", \"v1.0\")\r\n```\r\nThanks @mariamabarham !",
"Thanks!",
"@mariamabarham Just wanted to note that default behavior `cos_e = load_dataset(\"cos_e\")` now loads `v1.0`. Not sure if this is intentional (but the flag specification does work as intended). ",
"> @mariamabarham Just wanted to note that default behavior `cos_e = load_dataset(\"cos_e\")` now loads `v1.0`. Not sure if this is intentional (but the flag specification does work as intended).\r\n\r\nIn the new version of `nlp`, if you try `cos_e = load_dataset(\"cos_e\")` it throws this error:\r\n```\r\nValueError: Config name is missing.\r\nPlease pick one among the available configs: ['v1.0', 'v1.11']\r\nExample of usage:\r\n\t`load_dataset('cos_e', 'v1.0')`\r\n```\r\nFor datasets with at least two configurations, we now force the user to pick one (no default)"
] | 1,589,839,526,000 | 1,592,349,325,000 | 1,592,333,526,000 | NONE | null | I noticed the second release of cos-e (v1.11) is included in this repo. I wanted to request inclusion of v1.0, since this is the version on which results are reported in [the paper](https://www.aclweb.org/anthology/P19-1487/), and v1.11 has noted [annotation](https://github.com/salesforce/cos-e/issues/2) [issues](https://arxiv.org/pdf/2004.14546.pdf). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/163/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/162/comments | https://api.github.com/repos/huggingface/datasets/issues/162/events | https://github.com/huggingface/datasets/pull/162 | 620,513,554 | MDExOlB1bGxSZXF1ZXN0NDE5NzQ4Mzky | 162 | fix prev files hash in map | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Awesome! ",
"Hi, yes, this seems to fix #160 -- I cloned the branch locally and verified",
"Perfect then :)"
] | 1,589,836,851,000 | 1,589,837,781,000 | 1,589,837,780,000 | MEMBER | null | Fix the `.map` issue in #160.
This makes sure the previous files are taken into account when computing the hash, so each split gets its own cache file instead of every split reusing the first one computed. A self-contained sketch of the idea (illustrative only, not the actual diff):
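```python
# Sketch: derive the map cache file from the mapped function AND the split's
# own data files, so train/validation/test no longer collide on one cache entry.
import hashlib

def map_cache_path(function_source: bytes, data_files: list, map_kwargs: dict) -> str:
    h = hashlib.md5()
    h.update(function_source)
    for filename in data_files:  # the part that was previously missing
        h.update(filename.encode("utf-8"))
    h.update(repr(sorted(map_kwargs.items())).encode("utf-8"))
    return f"cache-{h.hexdigest()}.arrow"

print(map_cache_path(b"convert_to_features", ["sst2-train.arrow"], {"batched": True}))
```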
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/162/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/162",
"html_url": "https://github.com/huggingface/datasets/pull/162",
"diff_url": "https://github.com/huggingface/datasets/pull/162.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/162.patch",
"merged_at": 1589837780000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/161/comments | https://api.github.com/repos/huggingface/datasets/issues/161/events | https://github.com/huggingface/datasets/issues/161 | 620,487,535 | MDU6SXNzdWU2MjA0ODc1MzU= | 161 | Discussion on version identifier & MockDataLoaderManager for test data | {
"login": "EntilZha",
"id": 1382460,
"node_id": "MDQ6VXNlcjEzODI0NjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1382460?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EntilZha",
"html_url": "https://github.com/EntilZha",
"followers_url": "https://api.github.com/users/EntilZha/followers",
"following_url": "https://api.github.com/users/EntilZha/following{/other_user}",
"gists_url": "https://api.github.com/users/EntilZha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EntilZha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EntilZha/subscriptions",
"organizations_url": "https://api.github.com/users/EntilZha/orgs",
"repos_url": "https://api.github.com/users/EntilZha/repos",
"events_url": "https://api.github.com/users/EntilZha/events{/privacy}",
"received_events_url": "https://api.github.com/users/EntilZha/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"usually you can replace `download` in your dataset script with `download_and_prepare()` - could you share the code for your dataset here? :-) ",
"I have an initial version here: https://github.com/EntilZha/nlp/tree/master/datasets/qanta Thats pretty close to what I'll do as a PR, but still want to do some more sanity checks/tests (just got tests passing).\r\n\r\nI figured out how to get all tests passing by adding a download command and some finagling with the data zip https://github.com/EntilZha/nlp/blob/master/tests/utils.py#L127\r\n\r\n",
"I'm quite positive that you can just replace the `dl_manager.download()` statements here: https://github.com/EntilZha/nlp/blob/4d46443b65f1f756921db8275594e6af008a1de7/datasets/qanta/qanta.py#L194 with `dl_manager.download_and_extract()` even though you don't extract anything. I would prefer to avoid adding more functions to the MockDataLoadManager and keep it as simple as possible (It's already to complex now IMO). \r\n\r\nCould you check if you can replace the `download()` function? ",
"I might be doing something wrong, but swapping those two gives this error:\r\n```\r\n> with open(path) as f:\r\nE IsADirectoryError: [Errno 21] Is a directory: 'datasets/qanta/dummy/mode=first,char_skip=25/2018.4.18/dummy_data-zip-extracted/dummy_data'\r\n\r\nsrc/nlp/datasets/qanta/3d965403133687b819905ead4b69af7bcee365865279b2f797c79f809b4490c3/qanta.py:280: IsADirectoryError\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n```\r\n\r\nSo it seems like the directory name is getting passed. Is this not functioning as expected, or is there some caching happening maybe? I deleted the dummy files and re-ran the import script with no changes. I'm digging a bit in with a debugger, but no clear reason yet",
"From what I can tell here: https://github.com/huggingface/nlp/blob/master/tests/utils.py#L115\r\n\r\n1. `data_url` is the correct http link\r\n2. `path_to_dummy_data` is a directory, which is causing the issue\r\n\r\nThat path comes from `download_dummy_data`, which I think assumes that the data comes from the zip file, but isn't aware of individual files. So it seems like it data manager needs to be aware if the url its getting is for a file or a zip/directory, and pass this information along. This might happen in `download_dummy_data`, but probably better to happen in `download_and_extract`? Maybe a simple check to see if `os.path.basename` returns the dummy data zip filename, if not then join paths with the basename of the url?",
"I think the dataset script works correctly. Just the dummy data structure seems to be wrong. I will soon add more commands that should make the create of the dummy data easier.\r\n\r\nI'd recommend that you won't concentrate too much on the dummy data.\r\nIf you manage to load the dataset correctly via:\r\n\r\n```python \r\n# use local path to qanta\r\nnlp.load_dataset(\"./datasets/qanta\")\r\n```\r\n\r\nthen feel free to open a PR and we will look into the dummy data problem together :-) \r\n\r\nAlso please make sure that the Version is in the format 1.0.0 (three numbers separated by two points) - not a date. ",
"The script loading seems to work fine so I'll work on getting a PR open after a few sanity checks on the data.\r\n\r\nOn version, we currently have it versioned with YYYY.MM.DD scheme so it would be nice to not change that, but will it cause issues?",
"> The script loading seems to work fine so I'll work on getting a PR open after a few sanity checks on the data.\r\n> \r\n> On version, we currently have it versioned with YYYY.MM.DD scheme so it would be nice to not change that, but will it cause issues?\r\n\r\nIt would cause issues for sure for the tests....not sure if it would also cause issues otherwise.\r\n\r\nI would prefer to keep the same version style as we have for other models. You could for example simply add version 1.0.0 and add a comment with the date you currently use for the versioning.\r\n\r\n What is your opinion regarding the version here @lhoestq @mariamabarham @thomwolf ? ",
"Maybe use the YYYY.MM.DD as the config name ? That's what we are doing for wikipedia",
"> Maybe use the YYYY.MM.DD as the config name ? That's what we are doing for wikipedia\r\n\r\nI'm not sure if this will work because the name should be unique and it seems that he has multiple config name in his data with the same version.\r\nAs @patrickvonplaten suggested, I think you can add a comment about the version in the data description.",
"Actually maybe our versioning format (inherited from tfds) is too strong for what we use it for?\r\nWe could allow any string maybe?\r\n\r\nI see it more and more like an identifier for the user that we will back with a serious hashing/versioning system.- so we could let the user quite free on it.",
"I'm good with either putting it in description, adding it to the config, or loosening version formatting. I mostly don't have a full conceptual grasp of what each identifier ends up meaning in the datasets code so hard to evaluate the best approach.\r\n\r\nFor background, the multiple formats is a consequence of:\r\n\r\n1. Each example is one multi-sentence trivia question\r\n2. For training, its better to treat each sentence as an example\r\n3. For evaluation, should test on: (1) first sentence, (2) full question, and (3) partial questions (does the model get the question right having seen the first half)\r\n\r\nWe use the date format for version since: (1) we expect some degree of updates since new questions come in every year and (2) the timestamp itself matches the Wikipedia dump that it is dependent on (so similar to the Wikipedia dataset).\r\n\r\nperhaps this is better discussed in https://github.com/huggingface/nlp/pull/169 or update title?"
] | 1,589,833,890,000 | 1,590,343,803,000 | null | CONTRIBUTOR | null | Hi, I'm working on adding a dataset and ran into an error because `download` is not defined on `MockDataLoaderManager`, although it is defined in `nlp/utils/download_manager.py`. Running the README step `RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_localmydatasetname` triggers the error. If I can get something to work, I can include it in my data PR once I'm done. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/161/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/160 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/160/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/160/comments | https://api.github.com/repos/huggingface/datasets/issues/160/events | https://github.com/huggingface/datasets/issues/160 | 620,448,236 | MDU6SXNzdWU2MjA0NDgyMzY= | 160 | caching in map causes same result to be returned for train, validation and test | {
"login": "dpressel",
"id": 247881,
"node_id": "MDQ6VXNlcjI0Nzg4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/247881?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dpressel",
"html_url": "https://github.com/dpressel",
"followers_url": "https://api.github.com/users/dpressel/followers",
"following_url": "https://api.github.com/users/dpressel/following{/other_user}",
"gists_url": "https://api.github.com/users/dpressel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dpressel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dpressel/subscriptions",
"organizations_url": "https://api.github.com/users/dpressel/orgs",
"repos_url": "https://api.github.com/users/dpressel/repos",
"events_url": "https://api.github.com/users/dpressel/events{/privacy}",
"received_events_url": "https://api.github.com/users/dpressel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @dpressel, \r\n\r\nthanks for posting your issue! Can you maybe add a complete code snippet that we can copy paste to reproduce the error? For example, I'm not sure where the variable `train_set` comes from in your code and it seems like you are loading multiple datasets at once? ",
"Hi, the full example was listed in the PR above, but here is the exact link:\r\n\r\nhttps://github.com/dpressel/mead-baseline/blob/3c1aa3ca062cb23f303ca98ac40b6652b37ee971/api-examples/layers-classify-hf-datasets.py\r\n\r\nThe problem is coming from\r\n```\r\n if cache_file_name is None:\r\n # we create a unique hash from the function, current dataset file and the mapping args\r\n cache_kwargs = {\r\n \"with_indices\": with_indices,\r\n \"batched\": batched,\r\n \"batch_size\": batch_size,\r\n \"remove_columns\": remove_columns,\r\n \"keep_in_memory\": keep_in_memory,\r\n \"load_from_cache_file\": load_from_cache_file,\r\n \"cache_file_name\": cache_file_name,\r\n \"writer_batch_size\": writer_batch_size,\r\n \"arrow_schema\": arrow_schema,\r\n \"disable_nullable\": disable_nullable,\r\n }\r\n cache_file_name = self._get_cache_file_path(function, cache_kwargs)\r\n```\r\nThe cached value is always the same, but I was able to change that by just renaming the function each time which seems to fix the issue.",
"Ok, I think @lhoestq has already found a solution :-) Maybe you can chime in @lhoestq ",
"This fixed my issue (I think)\r\n\r\nhttps://github.com/dpressel/mead-baseline/commit/48aa8ecde4b307bd3e7dde5fe71e43a1d4769ee1",
"> Ok, I think @lhoestq has already found a solution :-) Maybe you can chime in @lhoestq\r\n\r\nOh, awesome! I see the PR, Ill check it out",
"The PR should prevent the cache from losing track of the of the dataset type (based on the location of its data). Not sure about your second problem though (cache off).",
"Yes, with caching on, it seems to work without the function renaming hack, I mentioned this also in the PR. Thanks!"
] | 1,589,829,723,000 | 1,589,837,780,000 | 1,589,837,780,000 | NONE | null | hello,
I am working on a program that uses the `nlp` library with the `SST2` dataset.
The rough outline of the program is:
```
import nlp as nlp_datasets
...
parser.add_argument('--dataset', help='HuggingFace Datasets id', default=['glue', 'sst2'], nargs='+')
...
dataset = nlp_datasets.load_dataset(*args.dataset)
...
# Create feature vocabs
vocabs = create_vocabs(dataset.values(), vectorizers)
...
# Create a function to vectorize based on vectorizers and vocabs:
print('TS', train_set.num_rows)
print('VS', valid_set.num_rows)
print('ES', test_set.num_rows)
# factory method to create a `convert_to_features` function based on vocabs
convert_to_features = create_featurizer(vectorizers, vocabs)
train_set = train_set.map(convert_to_features, batched=True)
train_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths'])
train_loader = torch.utils.data.DataLoader(train_set, batch_size=args.batchsz)
valid_set = valid_set.map(convert_to_features, batched=True)
valid_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths'])
valid_loader = torch.utils.data.DataLoader(valid_set, batch_size=args.batchsz)
test_set = test_set.map(convert_to_features, batched=True)
test_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths'])
test_loader = torch.utils.data.DataLoader(test_set, batch_size=args.batchsz)
print('TS', train_set.num_rows)
print('VS', valid_set.num_rows)
print('ES', test_set.num_rows)
```
I'm not sure if I'm using it incorrectly, but the results are not what I expect. Namely, the `.map()` call seems to grab the dataset from the cache and then loses track of which specific dataset it is, instead using my training data for all datasets:
```
TS 67349
VS 872
ES 1821
TS 67349
VS 67349
ES 67349
```
The behavior changes if I turn off the caching but then the results fail:
```
train_set = train_set.map(convert_to_features, batched=True, load_from_cache_file=False)
...
valid_set = valid_set.map(convert_to_features, batched=True, load_from_cache_file=False)
...
test_set = test_set.map(convert_to_features, batched=True, load_from_cache_file=False)
```
Now I get the right set of features back...
```
TS 67349
VS 872
ES 1821
100%|██████████| 68/68 [00:00<00:00, 92.78it/s]
100%|██████████| 1/1 [00:00<00:00, 75.47it/s]
0%| | 0/2 [00:00<?, ?it/s]TS 67349
VS 872
ES 1821
100%|██████████| 2/2 [00:00<00:00, 77.19it/s]
```
but I think it's losing track of the original training set:
```
Traceback (most recent call last):
File "/home/dpressel/dev/work/baseline/api-examples/layers-classify-hf-datasets.py", line 148, in <module>
for x in train_loader:
File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
data = self._next_data()
File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/dpressel/anaconda3/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 338, in __getitem__
output_all_columns=self._output_all_columns,
File "/home/dpressel/anaconda3/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 294, in _getitem
outputs = self._unnest(self._data.slice(key, 1).to_pydict())
File "pyarrow/table.pxi", line 1211, in pyarrow.lib.Table.slice
File "pyarrow/public-api.pxi", line 390, in pyarrow.lib.pyarrow_wrap_table
File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 3: In chunk 0: Invalid: Length spanned by list offsets (15859698) larger than values array (length 100000)
Process finished with exit code 1
```
The full example program (minus the print statements) is here:
https://github.com/dpressel/mead-baseline/pull/620/files
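Until the hash computation is fixed, a possible workaround is to give each split its own cache file explicitly via the `cache_file_name` argument that appears in the cache-keying code quoted above (a sketch; the file names are arbitrary):
```python
# Distinct cache files per split avoid the collision described above.
train_set = train_set.map(convert_to_features, batched=True,
                          cache_file_name="sst2_train.cache.arrow")
valid_set = valid_set.map(convert_to_features, batched=True,
                          cache_file_name="sst2_valid.cache.arrow")
test_set = test_set.map(convert_to_features, batched=True,
                        cache_file_name="sst2_test.cache.arrow")
```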
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/160/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/159/comments | https://api.github.com/repos/huggingface/datasets/issues/159/events | https://github.com/huggingface/datasets/issues/159 | 620,420,700 | MDU6SXNzdWU2MjA0MjA3MDA= | 159 | How can we add more datasets to nlp library? | {
"login": "Tahsin-Mayeesha",
"id": 17886829,
"node_id": "MDQ6VXNlcjE3ODg2ODI5",
"avatar_url": "https://avatars.githubusercontent.com/u/17886829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tahsin-Mayeesha",
"html_url": "https://github.com/Tahsin-Mayeesha",
"followers_url": "https://api.github.com/users/Tahsin-Mayeesha/followers",
"following_url": "https://api.github.com/users/Tahsin-Mayeesha/following{/other_user}",
"gists_url": "https://api.github.com/users/Tahsin-Mayeesha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tahsin-Mayeesha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tahsin-Mayeesha/subscriptions",
"organizations_url": "https://api.github.com/users/Tahsin-Mayeesha/orgs",
"repos_url": "https://api.github.com/users/Tahsin-Mayeesha/repos",
"events_url": "https://api.github.com/users/Tahsin-Mayeesha/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tahsin-Mayeesha/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Found it. https://github.com/huggingface/nlp/tree/master/datasets"
] | 1,589,826,931,000 | 1,589,827,028,000 | 1,589,827,027,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/159/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/158/comments | https://api.github.com/repos/huggingface/datasets/issues/158/events | https://github.com/huggingface/datasets/pull/158 | 620,396,658 | MDExOlB1bGxSZXF1ZXN0NDE5NjUyNTQy | 158 | add Toronto Books Corpus | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,824,485,000 | 1,591,861,755,000 | 1,589,873,696,000 | CONTRIBUTOR | null | This PR adds the Toronto Books Corpus.
It only considers the TMX and plain text (Moses) files defined in the **Statistics and TMX/Moses Downloads** table [here](http://opus.nlpl.eu/Books.php). Once merged, loading a language pair should look roughly like this (the dataset and config names below are assumptions):
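```python
import nlp

# Hypothetical usage; the dataset name "books" and the "en-fr" config
# are assumptions, not confirmed by this PR.
books = nlp.load_dataset("books", "en-fr", split="train")
print(books[0])
```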
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/158/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/158",
"html_url": "https://github.com/huggingface/datasets/pull/158",
"diff_url": "https://github.com/huggingface/datasets/pull/158.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/158.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/157/comments | https://api.github.com/repos/huggingface/datasets/issues/157/events | https://github.com/huggingface/datasets/issues/157 | 620,356,542 | MDU6SXNzdWU2MjAzNTY1NDI= | 157 | nlp.load_dataset() gives "TypeError: list_() takes exactly one argument (2 given)" | {
"login": "saahiluppal",
"id": 47444392,
"node_id": "MDQ6VXNlcjQ3NDQ0Mzky",
"avatar_url": "https://avatars.githubusercontent.com/u/47444392?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saahiluppal",
"html_url": "https://github.com/saahiluppal",
"followers_url": "https://api.github.com/users/saahiluppal/followers",
"following_url": "https://api.github.com/users/saahiluppal/following{/other_user}",
"gists_url": "https://api.github.com/users/saahiluppal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saahiluppal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saahiluppal/subscriptions",
"organizations_url": "https://api.github.com/users/saahiluppal/orgs",
"repos_url": "https://api.github.com/users/saahiluppal/repos",
"events_url": "https://api.github.com/users/saahiluppal/events{/privacy}",
"received_events_url": "https://api.github.com/users/saahiluppal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"You can just run: \r\n`val = nlp.load_dataset('squad')` \r\n\r\nif you want to have just the validation script you can also do:\r\n\r\n`val = nlp.load_dataset('squad', split=\"validation\")`",
"If you want to load a local dataset, make sure you include a `./` before the folder name. ",
"This happens by just doing run all cells on colab ... I assumed the colab example is broken. ",
"Oh I see you might have a wrong version of pyarrow install on the colab -> could you try the following. Add these lines to the beginning of your notebook, restart the runtime and run it again:\r\n```\r\n!pip uninstall -y -qq pyarrow\r\n!pip uninstall -y -qq nlp\r\n!pip install -qq git+https://github.com/huggingface/nlp.git\r\n```",
"> Oh I see you might have a wrong version of pyarrow install on the colab -> could you try the following. Add these lines to the beginning of your notebook, restart the runtime and run it again:\r\n> \r\n> ```\r\n> !pip uninstall -y -qq pyarrow\r\n> !pip uninstall -y -qq nlp\r\n> !pip install -qq git+https://github.com/huggingface/nlp.git\r\n> ```\r\n\r\nTried, having the same error.",
"Can you post a link here of your colab? I'll make a copy of it and see what's wrong",
"This should be fixed in the current version of the notebook. You can try it again",
"Also see: https://github.com/huggingface/nlp/issues/222",
"I am getting this error when running this command\r\n```\r\nval = nlp.load_dataset('squad', split=\"validation\")\r\n```\r\n\r\nFileNotFoundError: [Errno 2] No such file or directory: '/root/.cache/huggingface/datasets/squad/plain_text/1.0.0/dataset_info.json'\r\n\r\nCan anybody help?",
"It seems like your download was corrupted :-/ Can you run the following command: \r\n\r\n```\r\nrm -r /root/.cache/huggingface/datasets\r\n```\r\n\r\nto delete the cache completely and rerun the download? ",
"I tried the notebook again today and it worked without barfing. 👌 "
] | 1,589,820,398,000 | 1,591,344,538,000 | 1,591,344,538,000 | NONE | null | I'm trying to load datasets from `nlp`, but there seems to be an error saying
"TypeError: list_() takes exactly one argument (2 given)"
A gist can be found here:
https://gist.github.com/saahiluppal/c4b878f330b10b9ab9762bc0776c0a6a | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/157/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/156 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/156/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/156/comments | https://api.github.com/repos/huggingface/datasets/issues/156/events | https://github.com/huggingface/datasets/issues/156 | 620,263,687 | MDU6SXNzdWU2MjAyNjM2ODc= | 156 | SyntaxError with WMT datasets | {
"login": "tomhosking",
"id": 9419158,
"node_id": "MDQ6VXNlcjk0MTkxNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9419158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomhosking",
"html_url": "https://github.com/tomhosking",
"followers_url": "https://api.github.com/users/tomhosking/followers",
"following_url": "https://api.github.com/users/tomhosking/following{/other_user}",
"gists_url": "https://api.github.com/users/tomhosking/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomhosking/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomhosking/subscriptions",
"organizations_url": "https://api.github.com/users/tomhosking/orgs",
"repos_url": "https://api.github.com/users/tomhosking/repos",
"events_url": "https://api.github.com/users/tomhosking/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomhosking/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Jeez - don't know what happened there :D Should be fixed now! \r\n\r\nThanks a lot for reporting this @tomhosking !",
"Hi @patrickvonplaten!\r\n\r\nI'm now getting the below error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-28-3206959998b9> in <module>\r\n 1 import nlp\r\n 2 \r\n----> 3 dataset = nlp.load_dataset('wmt14')\r\n 4 print(dataset['train'][0])\r\n\r\n~/.local/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 507 # Instantiate the dataset builder\r\n 508 builder_instance = builder_cls(\r\n--> 509 cache_dir=cache_dir, name=name, version=version, data_dir=data_dir, data_files=data_files, **config_kwargs,\r\n 510 )\r\n 511 \r\n\r\nTypeError: Can't instantiate abstract class Wmt with abstract methods _subsets\r\n```\r\n\r\n",
"To correct this error I think you need the master branch of `nlp`. Can you try to install `nlp` with. `WMT` was not included at the beta release of the library. \r\n\r\nCan you try:\r\n`pip install git+https://github.com/huggingface/nlp.git`\r\n\r\nand check again? ",
"That works, thanks :)\r\n\r\nThe WMT datasets are listed in by `list_datasets()` in the beta release on pypi - it would be good to only show datasets that are actually supported by that version?",
"Usually, the idea is that a dataset can be added without releasing a new version. The problem in the case of `WMT` was that some \"core\" code of the library had to be changed as well. \r\n\r\n@thomwolf @lhoestq @julien-c - How should we go about this. If we add a dataset that also requires \"core\" code changes, how do we handle the versioning? The moment a dataset is on AWS it will actually be listed with `list_datasets()` in all earlier versions...\r\n\r\nIs there a way to somehow insert the `pip version` to the HfApi() and get only the datasets that were available for this version (at the date of the release of the version) @julien-c ? ",
"We plan to have something like a `requirements.txt` per dataset to prevent user from loading dataset with old version of `nlp` or any other libraries. Right now the solution is just to keep `nlp` up to date when you want to load a dataset that leverages the latests features of `nlp`.\r\n\r\nFor datasets that are on AWS but that use features that are not released yet we should be able to filter those from the `list_dataset` as soon as we have the `requirements.txt` feature on (filter datasets that need a future version of `nlp`).\r\n\r\nShall we rename this issue to be more explicit about the problem ?\r\nSomething like `Specify the minimum version of the nlp library required for each dataset` ?",
"Closing this one.\r\nFeel free to re-open if you have other questions :)"
] | 1,589,812,698,000 | 1,595,522,515,000 | 1,595,522,515,000 | NONE | null | The following snippet produces a syntax error:
```
import nlp
dataset = nlp.load_dataset('wmt14')
print(dataset['train'][0])
```
```
Traceback (most recent call last):
File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-8-3206959998b9>", line 3, in <module>
dataset = nlp.load_dataset('wmt14')
File "/home/tom/.local/lib/python3.6/site-packages/nlp/load.py", line 505, in load_dataset
builder_cls = import_main_class(module_path, dataset=True)
File "/home/tom/.local/lib/python3.6/site-packages/nlp/load.py", line 56, in import_main_class
module = importlib.import_module(module_path)
File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/tom/.local/lib/python3.6/site-packages/nlp/datasets/wmt14/c258d646f4f5870b0245f783b7aa0af85c7117e06aacf1e0340bd81935094de2/wmt14.py", line 21, in <module>
from .wmt_utils import Wmt, WmtConfig
File "/home/tom/.local/lib/python3.6/site-packages/nlp/datasets/wmt14/c258d646f4f5870b0245f783b7aa0af85c7117e06aacf1e0340bd81935094de2/wmt_utils.py", line 659
<<<<<<< HEAD
^
SyntaxError: invalid syntax
```
Python version:
`3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0]`
Running on Ubuntu 18.04, via a Jupyter notebook | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/156/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/155/comments | https://api.github.com/repos/huggingface/datasets/issues/155/events | https://github.com/huggingface/datasets/pull/155 | 620,067,946 | MDExOlB1bGxSZXF1ZXN0NDE5Mzg1ODM0 | 155 | Include more links in README, fix typos | {
"login": "Bharat123rox",
"id": 13381361,
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bharat123rox",
"html_url": "https://github.com/Bharat123rox",
"followers_url": "https://api.github.com/users/Bharat123rox/followers",
"following_url": "https://api.github.com/users/Bharat123rox/following{/other_user}",
"gists_url": "https://api.github.com/users/Bharat123rox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bharat123rox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bharat123rox/subscriptions",
"organizations_url": "https://api.github.com/users/Bharat123rox/orgs",
"repos_url": "https://api.github.com/users/Bharat123rox/repos",
"events_url": "https://api.github.com/users/Bharat123rox/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bharat123rox/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"I fixed a conflict :) thanks !"
] | 1,589,795,228,000 | 1,590,654,717,000 | 1,590,654,717,000 | CONTRIBUTOR | null | Include more links and fix typos in README | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/155/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/155",
"html_url": "https://github.com/huggingface/datasets/pull/155",
"diff_url": "https://github.com/huggingface/datasets/pull/155.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/155.patch",
"merged_at": 1590654717000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/154 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/154/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/154/comments | https://api.github.com/repos/huggingface/datasets/issues/154/events | https://github.com/huggingface/datasets/pull/154 | 620,059,066 | MDExOlB1bGxSZXF1ZXN0NDE5Mzc4Mzgw | 154 | add Ubuntu Dialogs Corpus datasets | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,794,488,000 | 1,589,796,748,000 | 1,589,796,747,000 | CONTRIBUTOR | null | This PR adds the Ubuntu Dialog Corpus datasets version 2.0. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/154/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/154",
"html_url": "https://github.com/huggingface/datasets/pull/154",
"diff_url": "https://github.com/huggingface/datasets/pull/154.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/154.patch",
"merged_at": 1589796747000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/153/comments | https://api.github.com/repos/huggingface/datasets/issues/153/events | https://github.com/huggingface/datasets/issues/153 | 619,972,246 | MDU6SXNzdWU2MTk5NzIyNDY= | 153 | Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"As @yoavgo suggested, there should be the possibility to call a function like nlp.bib that outputs all bibtex ref from the datasets and models actually used and eventually nlp.bib.forreadme that would output the same info + versions numbers so they can be included in a readme.md file.",
"Actually, double checking with @mariamabarham, we already have this feature I think.\r\n\r\nIt's like this currently:\r\n```python\r\n>>> from nlp import load_dataset\r\n>>> \r\n>>> dataset = load_dataset('glue', 'cola', split='train')\r\n>>> print(dataset.info.citation)\r\n@article{warstadt2018neural,\r\n title={Neural Network Acceptability Judgments},\r\n author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},\r\n journal={arXiv preprint arXiv:1805.12471},\r\n year={2018}\r\n}\r\n@inproceedings{wang2019glue,\r\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\r\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\r\n note={In the Proceedings of ICLR.},\r\n year={2019}\r\n}\r\n\r\nNote that each GLUE dataset has its own citation. Please see the source to see\r\nthe correct citation for each contained dataset.\r\n```\r\n\r\nWhat do you think @dseddah?",
"Looks good but why would there be a difference between the ref in the source and the one to be printed? ",
"Yes, I think we should remove this warning @mariamabarham.\r\n\r\nIt's probably a relic of tfds which didn't have the same way to access citations. "
] | 1,589,786,662,000 | 1,589,836,696,000 | null | MEMBER | null | Meta-datasets are interesting in terms of standardized benchmarks, but they also have specific behaviors, in particular around attribution and authorship. It's very important that each specific dataset inside a meta-dataset is properly referenced, and that its citation, specific homepage, etc. are clearly visible and accessible, not only the generic citation of the meta-dataset itself.
Let's take GLUE as an example:
The configuration has the citation for each dataset included (e.g. [here](https://github.com/huggingface/nlp/blob/master/datasets/glue/glue.py#L154-L161)), but it should be copied inside the dataset info so that, when people access `dataset.info.citation`, they get both the citation for GLUE and the citation for the specific dataset inside GLUE that they have loaded. Something like this when each config's info is built (a sketch; the constant names are assumptions):
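```python
# Sketch: concatenate the per-task citation with the meta-benchmark citation
# when building the DatasetInfo, so dataset.info.citation surfaces both entries.
_GLUE_CITATION = "@inproceedings{wang2019glue, ...}"

def full_citation(task_citation: str) -> str:
    return task_citation + "\n" + _GLUE_CITATION

print(full_citation("@article{warstadt2018neural, ...}"))
```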
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/153/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/152 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/152/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/152/comments | https://api.github.com/repos/huggingface/datasets/issues/152/events | https://github.com/huggingface/datasets/pull/152 | 619,971,900 | MDExOlB1bGxSZXF1ZXN0NDE5MzA4OTE2 | 152 | Add GLUE config name check | {
"login": "Bharat123rox",
"id": 13381361,
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bharat123rox",
"html_url": "https://github.com/Bharat123rox",
"followers_url": "https://api.github.com/users/Bharat123rox/followers",
"following_url": "https://api.github.com/users/Bharat123rox/following{/other_user}",
"gists_url": "https://api.github.com/users/Bharat123rox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bharat123rox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bharat123rox/subscriptions",
"organizations_url": "https://api.github.com/users/Bharat123rox/orgs",
"repos_url": "https://api.github.com/users/Bharat123rox/repos",
"events_url": "https://api.github.com/users/Bharat123rox/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bharat123rox/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"If tests are being added, any guidance on where to add tests would be helpful!\r\n\r\nTagging @thomwolf for review",
"Looks good to me. Is this compatible with the way we are doing tests right now @patrickvonplaten ?",
"If the tests pass it should be fine :-) \r\n\r\n@Bharat123rox could you check whether the tests pass locally via: \r\n`pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_glue`",
"The test fails with an `AssertionError` because the name is not being passed to kwargs, however I'm not sure how to do that, because only the config file is being passed to the tests of all datasets?\r\n\r\nI'm guessing this is the corresponding code:\r\nhttps://github.com/huggingface/nlp/blob/2b3621bb5c78caf02c5a969b8e67fa0c145da4e6/tests/test_dataset_common.py#L141-L143\r\n\r\nAnd these are the logs:\r\n```\r\n___________________ DatasetTest.test_load_dataset_local_glue ___________________\r\n\r\nself = <tests.test_dataset_common.DatasetTest testMethod=test_load_dataset_local_glue>\r\ndataset_name = 'glue'\r\n\r\n @local\r\n def test_load_dataset_local(self, dataset_name):\r\n # test only first config\r\n if \"/\" in dataset_name:\r\n logging.info(\"Skip {} because it is not a canonical dataset\")\r\n return\r\n\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)\r\n\r\ntests/test_dataset_common.py:200:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\ntests/test_dataset_common.py:74: in check_load_dataset\r\n dataset_builder = dataset_builder_cls(config=config, cache_dir=processed_temp_dir)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nself = <nlp.datasets.glue.fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597.glue.Glue object at 0x135c0ea90>\r\nargs = ()\r\nkwargs = {'cache_dir': '/var/folders/r6/mnw5ntvn5y72j7d4s1fm273m0000gn/T/tmpa9rpq3tl', 'config': GlueConfig(name='cola', versio...linguistic theory. Each example is a sequence of words annotated\\nwith whether it is a grammatical English sentence.')}\r\n\r\n def __init__(self, *args, **kwargs):\r\n> assert ('name' in kwargs and kwargs['name'] is not None), \"Glue has to be called with a configuration name\"\r\nE AssertionError: Glue has to be called with a configuration name\r\n\r\n/usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.py:139: AssertionError\r\n----------------------------- Captured stderr call -----------------------------\r\nINFO:nlp.load:Checking ./datasets/glue/glue.py for additional imports.\r\nINFO:filelock:Lock 5209998288 acquired on ./datasets/glue/glue.py.lock\r\nINFO:nlp.load:Found main folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue\r\nINFO:nlp.load:Found specific version folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\r\nINFO:nlp.load:Found script file from ./datasets/glue/glue.py to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.py\r\nINFO:nlp.load:Found dataset infos file from ./datasets/glue/dataset_infos.json to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/dataset_infos.json\r\nINFO:nlp.load:Found metadata file for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.json\r\nINFO:filelock:Lock 5209998288 released on ./datasets/glue/glue.py.lock\r\nINFO:nlp.load:Checking ./datasets/glue/glue.py for additional imports.\r\nINFO:filelock:Lock 5196802640 acquired on 
./datasets/glue/glue.py.lock\r\nINFO:nlp.load:Found main folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue\r\nINFO:nlp.load:Found specific version folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\r\nINFO:nlp.load:Found script file from ./datasets/glue/glue.py to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.py\r\nINFO:nlp.load:Found dataset infos file from ./datasets/glue/dataset_infos.json to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/dataset_infos.json\r\nINFO:nlp.load:Found metadata file for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.json\r\nINFO:filelock:Lock 5196802640 released on ./datasets/glue/glue.py.lock\r\n------------------------------ Captured log call -------------------------------\r\nINFO nlp.load:load.py:157 Checking ./datasets/glue/glue.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 5209998288 acquired on ./datasets/glue/glue.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue\r\nINFO nlp.load:load.py:333 Found specific version folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\r\nINFO nlp.load:load.py:346 Found script file from ./datasets/glue/glue.py to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.py\r\nINFO nlp.load:load.py:356 Found dataset infos file from ./datasets/glue/dataset_infos.json to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/dataset_infos.json\r\nINFO nlp.load:load.py:367 Found metadata file for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.json\r\nINFO filelock:filelock.py:318 Lock 5209998288 released on ./datasets/glue/glue.py.lock\r\nINFO nlp.load:load.py:157 Checking ./datasets/glue/glue.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 5196802640 acquired on ./datasets/glue/glue.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue\r\nINFO nlp.load:load.py:333 Found specific version folder for dataset ./datasets/glue/glue.py at /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\r\nINFO nlp.load:load.py:346 Found script file from ./datasets/glue/glue.py to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.py\r\nINFO nlp.load:load.py:356 Found dataset infos file from ./datasets/glue/dataset_infos.json to /usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/dataset_infos.json\r\nINFO nlp.load:load.py:367 Found metadata file for dataset ./datasets/glue/glue.py at 
/usr/local/lib/python3.7/site-packages/nlp/datasets/glue/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597/glue.json\r\nINFO filelock:filelock.py:318 Lock 5196802640 released on ./datasets/glue/glue.py.lock\r\n```",
"Closing as #130 is fixed !"
] | 1,589,786,623,000 | 1,590,617,352,000 | 1,590,617,352,000 | CONTRIBUTOR | null | Fixes #130 by adding a name check to the Glue class | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/152/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/152",
"html_url": "https://github.com/huggingface/datasets/pull/152",
"diff_url": "https://github.com/huggingface/datasets/pull/152.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/152.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/151/comments | https://api.github.com/repos/huggingface/datasets/issues/151/events | https://github.com/huggingface/datasets/pull/151 | 619,968,480 | MDExOlB1bGxSZXF1ZXN0NDE5MzA2MTYz | 151 | Fix JSON tests. | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,786,258,000 | 1,589,786,512,000 | 1,589,786,511,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/151/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/151",
"html_url": "https://github.com/huggingface/datasets/pull/151",
"diff_url": "https://github.com/huggingface/datasets/pull/151.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/151.patch",
"merged_at": 1589786511000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/150/comments | https://api.github.com/repos/huggingface/datasets/issues/150/events | https://github.com/huggingface/datasets/pull/150 | 619,809,645 | MDExOlB1bGxSZXF1ZXN0NDE5MTgyODU4 | 150 | Add WNUT 17 NER dataset | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"The PR looks awesome! \r\nSince you have already added a dataset I imagine the tests as described in 5. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset all pass, right @stefan-it ?\r\n\r\nI think we are then good to merge this :-) @lhoestq ",
"Nice !\r\n\r\nOne thing though: I saw that you copied the `dataset_info.json` (one split info), which is different from the `dataset_infos.json` (split infos of all configs) that we expect.\r\n\r\nCould you generate the `dataset_infos.json` file using this command please ?\r\n```\r\npython nlp-cli test datasets/wnut_17 --save_infos --all_configs\r\n```",
"Hi @patrickvonplaten I just rebased onto latest `master` version and executed the commands. All tests passed then :)\r\n\r\n@lhoestq thanks for that hint! I've generated and added the `dataset_infos.json` and deleted `dataset_info.json`.",
"Awesome ! I guess it's ready to be merged now :)"
] | 1,589,753,944,000 | 1,590,525,479,000 | 1,590,525,479,000 | CONTRIBUTOR | null | Hi,
this PR adds the WNUT 17 dataset to `nlp`.
> Emerging and Rare entity recognition
> This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarisation), but recall on them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms. Take for example the tweet “so.. kktny in 30 mins?” - even human experts find entity kktny hard to detect and resolve. This task will evaluate the ability to detect and classify novel, emerging, singleton named entities in noisy text.
>
> The goal of this task is to provide a definition of emerging and of rare entities, and based on that, also datasets for detecting these entities.
More information about the dataset can be found on the [shared task page](https://noisy-text.github.io/2017/emerging-rare-entities.html).
The dataset is taken from their [GitHub repository](https://github.com/leondz/emerging_entities_17), because the data provided in this repository contains minor fixes in the dataset format.
## Usage
The WNUT 17 dataset can then be used in `nlp` like this:
```python
import nlp
wnut_17 = nlp.load_dataset("./datasets/wnut_17/wnut_17.py")
print(wnut_17)
```
This outputs:
```txt
'train': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 3394)
'validation': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 1009)
'test': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 1287)
```
The numbers are identical to the ones in [this paper](https://www.ijcai.org/Proceedings/2019/0702.pdf) and the same as those obtained with the `dataset` reader in Flair.
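To peek at a single example, here is a small sketch (the field names follow the Features table in the next section):
```python
import nlp

wnut_17 = nlp.load_dataset("./datasets/wnut_17/wnut_17.py")
example = wnut_17["train"][0]
# 'id', 'tokens' and 'labels' are the fields described below
print(example["id"])
print(list(zip(example["tokens"], example["labels"])))
```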
## Features
The following feature format is used to represent a sentence in the WNUT 17 dataset:
| Feature | Example | Description
| ---- | ---- | -----------------
| `id` | `0` | Number (id) of current sentence
| `tokens` | `["AHFA", "extends", "deadline"]` | List of tokens (strings) for a sentence
| `labels` | `["B-group", "O", "O"]` | List of labels (outer span)
The following labels are used in WNUT 17:
```txt
O
B-corporation
I-corporation
B-location
I-location
B-product
I-product
B-person
I-person
B-group
I-group
B-creative-work
I-creative-work
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/150/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/150/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/150",
"html_url": "https://github.com/huggingface/datasets/pull/150",
"diff_url": "https://github.com/huggingface/datasets/pull/150.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/150.patch",
"merged_at": 1590525479000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/149/comments | https://api.github.com/repos/huggingface/datasets/issues/149/events | https://github.com/huggingface/datasets/issues/149 | 619,735,739 | MDU6SXNzdWU2MTk3MzU3Mzk= | 149 | [Feature request] Add Ubuntu Dialogue Corpus dataset | {
"login": "danth",
"id": 28959268,
"node_id": "MDQ6VXNlcjI4OTU5MjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/28959268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danth",
"html_url": "https://github.com/danth",
"followers_url": "https://api.github.com/users/danth/followers",
"following_url": "https://api.github.com/users/danth/following{/other_user}",
"gists_url": "https://api.github.com/users/danth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danth/subscriptions",
"organizations_url": "https://api.github.com/users/danth/orgs",
"repos_url": "https://api.github.com/users/danth/repos",
"events_url": "https://api.github.com/users/danth/events{/privacy}",
"received_events_url": "https://api.github.com/users/danth/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"@AlphaMycelium the Ubuntu Dialogue Corpus [version 2]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator) is added. Note that it requires a manual download by following the download instructions in the [repos]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator).\r\nMaybe we can close this issue for now?"
] | 1,589,730,159,000 | 1,589,821,306,000 | 1,589,821,306,000 | NONE | null | https://github.com/rkadlec/ubuntu-ranking-dataset-creator or http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/149/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/148/comments | https://api.github.com/repos/huggingface/datasets/issues/148/events | https://github.com/huggingface/datasets/issues/148 | 619,590,555 | MDU6SXNzdWU2MTk1OTA1NTU= | 148 | _download_and_prepare() got an unexpected keyword argument 'verify_infos' | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Same error for dataset 'wiki40b'",
"Should be fixed on master :)"
] | 1,589,680,133,000 | 1,589,787,513,000 | 1,589,787,513,000 | CONTRIBUTOR | null | # Reproduce
In Colab,
```
%pip install -q nlp
%pip install -q apache_beam mwparserfromhell
import nlp
dataset = nlp.load_dataset('wikipedia')
```
I get:
```
Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wikipedia/20200501.aa/1.0.0...
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-52471d2a0088> in <module>()
----> 1 dataset = nlp.load_dataset('wikipedia')
1 frames
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
515 download_mode=download_mode,
516 ignore_verifications=ignore_verifications,
--> 517 save_infos=save_infos,
518 )
519
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
361 verify_infos = not save_infos and not ignore_verifications
362 self._download_and_prepare(
--> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
364 )
365 # Sync info
TypeError: _download_and_prepare() got an unexpected keyword argument 'verify_infos'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/148/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/148/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/147/comments | https://api.github.com/repos/huggingface/datasets/issues/147/events | https://github.com/huggingface/datasets/issues/147 | 619,581,907 | MDU6SXNzdWU2MTk1ODE5MDc= | 147 | Error with sklearn train_test_split | {
"login": "ClonedOne",
"id": 6853743,
"node_id": "MDQ6VXNlcjY4NTM3NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6853743?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ClonedOne",
"html_url": "https://github.com/ClonedOne",
"followers_url": "https://api.github.com/users/ClonedOne/followers",
"following_url": "https://api.github.com/users/ClonedOne/following{/other_user}",
"gists_url": "https://api.github.com/users/ClonedOne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ClonedOne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ClonedOne/subscriptions",
"organizations_url": "https://api.github.com/users/ClonedOne/orgs",
"repos_url": "https://api.github.com/users/ClonedOne/repos",
"events_url": "https://api.github.com/users/ClonedOne/events{/privacy}",
"received_events_url": "https://api.github.com/users/ClonedOne/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Indeed. Probably we will want to have a similar method directly in the library",
"Related: #166 "
] | 1,589,675,304,000 | 1,592,497,403,000 | 1,592,497,403,000 | NONE | null | It would be nice if we could use sklearn `train_test_split` to quickly generate subsets from the dataset objects returned by `nlp.load_dataset`. At the moment the code:
```python
data = nlp.load_dataset('imdb', cache_dir=data_cache)
f_half, s_half = train_test_split(data['train'], test_size=0.5, random_state=seed)
```
throws:
```
ValueError: Can only get row(s) (int or slice) or columns (string).
```
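In the meantime, one workaround is to split row indices with sklearn and then materialize the subsets. This is a minimal sketch only; it assumes a `Dataset.select(indices)` method is available, which this issue does not confirm:
```python
import nlp
from sklearn.model_selection import train_test_split

data = nlp.load_dataset('imdb')

# Split lists of row indices instead of the Dataset object itself
train_idx, test_idx = train_test_split(
    list(range(len(data['train']))), test_size=0.5, random_state=42
)

# `select` is assumed here to build a subset from a list of row indices
f_half = data['train'].select(train_idx)
s_half = data['train'].select(test_idx)
```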
It's not a big deal, since there are other ways to split the data, but it would be a cool thing to have. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/147/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/146/comments | https://api.github.com/repos/huggingface/datasets/issues/146/events | https://github.com/huggingface/datasets/pull/146 | 619,564,653 | MDExOlB1bGxSZXF1ZXN0NDE5MDI5MjUx | 146 | Add BERTScore to metrics | {
"login": "felixgwu",
"id": 7753366,
"node_id": "MDQ6VXNlcjc3NTMzNjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7753366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/felixgwu",
"html_url": "https://github.com/felixgwu",
"followers_url": "https://api.github.com/users/felixgwu/followers",
"following_url": "https://api.github.com/users/felixgwu/following{/other_user}",
"gists_url": "https://api.github.com/users/felixgwu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/felixgwu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felixgwu/subscriptions",
"organizations_url": "https://api.github.com/users/felixgwu/orgs",
"repos_url": "https://api.github.com/users/felixgwu/repos",
"events_url": "https://api.github.com/users/felixgwu/events{/privacy}",
"received_events_url": "https://api.github.com/users/felixgwu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,666,979,000 | 1,589,754,130,000 | 1,589,754,129,000 | CONTRIBUTOR | null | This PR adds [BERTScore](https://arxiv.org/abs/1904.09675) to metrics.
Here is an example of how to use it.
```python
import nlp
bertscore = nlp.load_metric('metrics/bertscore') # or simply nlp.load_metric('bertscore') after this is added to huggingface's s3 bucket
predictions = ['example', 'fruit']
references = [['this is an example.', 'this is one example.'], ['apple']]
results = bertscore.compute(predictions, references, lang='en')
print(results)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/146/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/146/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/146",
"html_url": "https://github.com/huggingface/datasets/pull/146",
"diff_url": "https://github.com/huggingface/datasets/pull/146.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/146.patch",
"merged_at": 1589754129000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/145/comments | https://api.github.com/repos/huggingface/datasets/issues/145/events | https://github.com/huggingface/datasets/pull/145 | 619,480,549 | MDExOlB1bGxSZXF1ZXN0NDE4OTcxMjg0 | 145 | [AWS Tests] Follow-up PR from #144 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,637,226,000 | 1,589,637,263,000 | 1,589,637,262,000 | MEMBER | null | I forgot to add this line in PR #145. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/145/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/145",
"html_url": "https://github.com/huggingface/datasets/pull/145",
"diff_url": "https://github.com/huggingface/datasets/pull/145.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/145.patch",
"merged_at": 1589637262000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/144/comments | https://api.github.com/repos/huggingface/datasets/issues/144/events | https://github.com/huggingface/datasets/pull/144 | 619,477,367 | MDExOlB1bGxSZXF1ZXN0NDE4OTY5NjA1 | 144 | [AWS tests] AWS test should not run for canonical datasets | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,636,370,000 | 1,589,636,674,000 | 1,589,636,673,000 | MEMBER | null | AWS tests should in general not run for canonical datasets. Only local tests will run in this case. This way a PR is able to pass when adding a new dataset.
This PR changes the logic to the following (an illustrative sketch follows the list):
1) All datasets that are present in `nlp/datasets` are tested only locally. This way, when someone adds a canonical dataset, the PR includes that dataset in the tests.
2) All datasets that are only present on AWS, such as `webis/tl_dr` at the moment, are tested only on AWS.
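As an illustration of this decision logic (a sketch only, not the repository's actual test code):
```python
import os

def is_canonical(dataset_name, datasets_dir="./datasets"):
    # Canonical datasets live in the repository and are tested locally;
    # anything not found there is assumed to be AWS-only.
    return os.path.isdir(os.path.join(datasets_dir, dataset_name))
```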
I think the testing structure might need a bigger refactoring and better documentation very soon.
Merging for now to unblock new PRs @thomwolf @mariamabarham . | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/144/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/144",
"html_url": "https://github.com/huggingface/datasets/pull/144",
"diff_url": "https://github.com/huggingface/datasets/pull/144.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/144.patch",
"merged_at": 1589636673000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/143/comments | https://api.github.com/repos/huggingface/datasets/issues/143/events | https://github.com/huggingface/datasets/issues/143 | 619,457,641 | MDU6SXNzdWU2MTk0NTc2NDE= | 143 | ArrowTypeError in squad metrics | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067393914,
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug",
"name": "metric bug",
"color": "25b21e",
"default": false,
"description": "A bug in a metric script"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"There was an issue in the format, thanks.\r\nNow you can do\r\n```python3\r\nsquad_dset = nlp.load_dataset(\"squad\")\r\nsquad_metric = nlp.load_metric(\"/Users/quentinlhoest/Desktop/hf/nlp-bis/metrics/squad\")\r\npredictions = [\r\n {\"id\": v[\"id\"], \"prediction_text\": v[\"answers\"][\"text\"][0]} # take first possible answer\r\n for v in squad_dset[\"validation\"]\r\n]\r\nsquad_metric.compute(predictions, squad_dset[\"validation\"])\r\n```\r\n\r\nand the expected format is \r\n```\r\nArgs:\r\n predictions: List of question-answers dictionaries with the following key-values:\r\n - 'id': id of the question-answer pair as given in the references (see below)\r\n - 'prediction_text': the text of the answer\r\n references: List of question-answers dictionaries with the following key-values:\r\n - 'id': id of the question-answer pair (see above),\r\n - 'answers': a Dict {'text': list of possible texts for the answer, as a list of strings}\r\n```"
] | 1,589,630,797,000 | 1,590,154,732,000 | 1,590,154,608,000 | MEMBER | null | `squad_metric.compute` is giving the following error:
```
ArrowTypeError: Could not convert [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}] with type list: was not a dict, tuple, or recognized null value for conversion to struct type
```
This is what my predictions and references look like (a conversion sketch follows the examples below):
```
predictions[0]
# {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'}
```
```
references[0]
# {'answers': [{'text': 'Denver Broncos'},
{'text': 'Denver Broncos'},
{'text': 'Denver Broncos'}],
'id': '56be4db0acb8001400a502ec'}
```
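Based on the resolution in the comments above, here is a minimal sketch for flattening such nested answers into the accepted format (the key names are taken from that comment):
```python
def flatten_references(references):
    # Collapse each reference's list of single-answer dicts into one
    # {'text': [...]} dict, the structure described in the comment above.
    return [
        {"id": ref["id"], "answers": {"text": [a["text"] for a in ref["answers"]]}}
        for ref in references
    ]
```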
These are structured as per the `squad_metric.compute` help string. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/143/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/143/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/142/comments | https://api.github.com/repos/huggingface/datasets/issues/142/events | https://github.com/huggingface/datasets/pull/142 | 619,450,068 | MDExOlB1bGxSZXF1ZXN0NDE4OTU0OTc1 | 142 | [WMT] Add all wmt | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,628,526,000 | 1,589,717,901,000 | 1,589,717,900,000 | MEMBER | null | This PR adds all WMT dataset scripts. At the moment the script is **not** functional for the language pairs "cs-en", "ru-en" and "hi-en" because apparently it takes up to a week to get the manual data for these datasets: see http://ufal.mff.cuni.cz/czeng.
The datasets are fully functional though for the "big" language pairs "de-en" and "fr-en".
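Loading one of these working pairs might look like this (a sketch; the exact dataset and config names, e.g. `'wmt14'` with `'de-en'`, are assumptions here, not confirmed in this PR):
```python
import nlp

# Hypothetical names: adjust to the actual WMT script/config once merged
wmt = nlp.load_dataset('wmt14', 'de-en', split='train')
print(wmt[0])
```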
Overall I think the scripts are very messy and might need a big refactoring at some point.
For now I think they are good to merge (most dataset configs can be used). I will add "cs", "ru" and "hi" when the manual data is available. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/142/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/142",
"html_url": "https://github.com/huggingface/datasets/pull/142",
"diff_url": "https://github.com/huggingface/datasets/pull/142.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/142.patch",
"merged_at": 1589717900000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/141/comments | https://api.github.com/repos/huggingface/datasets/issues/141/events | https://github.com/huggingface/datasets/pull/141 | 619,447,090 | MDExOlB1bGxSZXF1ZXN0NDE4OTUzMzQw | 141 | [Clean up] remove bogus folder | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Same for the dataset_infos.json at the project root no ?",
"Sorry guys, I haven't noticed. Thank you for mentioning it."
] | 1,589,627,622,000 | 1,589,635,467,000 | 1,589,635,466,000 | MEMBER | null | @mariamabarham - I think you accidentally placed it there. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/141/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/141",
"html_url": "https://github.com/huggingface/datasets/pull/141",
"diff_url": "https://github.com/huggingface/datasets/pull/141.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/141.patch",
"merged_at": 1589635465000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/140 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/140/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/140/comments | https://api.github.com/repos/huggingface/datasets/issues/140/events | https://github.com/huggingface/datasets/pull/140 | 619,443,613 | MDExOlB1bGxSZXF1ZXN0NDE4OTUxMzg4 | 140 | [Tests] run local tests as default | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"You are right and I think those are usual best practice :) I'm 100% fine with this^^",
"Merging this for now to unblock other PRs."
] | 1,589,626,566,000 | 1,589,635,304,000 | 1,589,635,303,000 | MEMBER | null | This PR also enables local tests by default.
I think it's safer for now to enable both local and AWS tests for every commit. The problem currently is that when we do a PR to add a dataset, the dataset is not yet on AWS and therefore not tested on the PR itself. Thus the PR will always be green even if the datasets are not correct. This PR aims at fixing this.
## Suggestion on how to commit to the repo from now on:
Now since the repo is "online", I think we should adopt a couple of best practices:
1) No direct committing to the repo anymore. Every change should be opened in a PR and be well documented so that we can find it later.
2) Every PR has to be reviewed by at least x people (I guess @thomwolf you should decide here) because we now have to be much more careful when making changes to the API for backward compatibility, etc...
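For illustration, gating tests on environment variables can be done with a pytest marker. This is a sketch mirroring the `RUN_LOCAL=1` switch used in other threads here, not the repository's actual test code:
```python
import os
import pytest

# Skip unless the caller opts in, e.g. RUN_LOCAL=1 pytest tests/...
local = pytest.mark.skipif(
    os.environ.get("RUN_LOCAL", "0") != "1",
    reason="set RUN_LOCAL=1 to run local dataset tests",
)

@local
def test_load_dataset_local_example():
    assert True  # placeholder for an actual dataset loading check
```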
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/140/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/140/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/140",
"html_url": "https://github.com/huggingface/datasets/pull/140",
"diff_url": "https://github.com/huggingface/datasets/pull/140.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/140.patch",
"merged_at": 1589635303000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/139/comments | https://api.github.com/repos/huggingface/datasets/issues/139/events | https://github.com/huggingface/datasets/pull/139 | 619,327,409 | MDExOlB1bGxSZXF1ZXN0NDE4ODc4NzMy | 139 | Add GermEval 2014 NER dataset | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Had really fun playing around with this new library :heart: ",
"That's awesome - thanks @stefan-it :-) \r\n\r\nCould you maybe rebase to master and check if all dummy data tests are fine. I should have included the local tests directly in the test suite so that all PRs are fully checked: #140 - sorry :D ",
"@patrickvonplaten Rebased it 😅\r\n\r\nHow can it test 🤔 I used:\r\n\r\n```bash\r\nRUN_SLOW=1 RUN_LOCAL=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_local_germeval_14\r\n# and\r\nRUN_SLOW=1 RUN_LOCAL=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_dataset_all_configs_local_germeval_14\r\n```\r\n\r\nand the tests still pass :)",
"Perfect, if these tests pass that's great - I'll merge the PR then :-) Was it very difficult to create the dummy data structure? "
] | 1,589,586,129,000 | 1,589,637,397,000 | 1,589,637,382,000 | CONTRIBUTOR | null | Hi,
this PR adds the GermEval 2014 NER dataset 😃
> The GermEval 2014 NER Shared Task builds on a new dataset with German Named Entity annotation [1] with the following properties:
> - The data was sampled from German Wikipedia and News Corpora as a collection of citations.
> - The dataset covers over 31,000 sentences corresponding to over 590,000 tokens.
> - The NER annotation uses the NoSta-D guidelines, which extend the Tübingen Treebank guidelines, using four main NER categories with sub-structure, and annotating embeddings among NEs such as [ORG FC Kickers [LOC Darmstadt]].
The dataset will be downloaded from the [official GermEval 2014 website](https://sites.google.com/site/germeval2014ner/data).
## Dataset format
Here's an example of the format from the original dataset:
```tsv
# http://de.wikipedia.org/wiki/Manfred_Korfmann [2009-10-17]
1 Aufgrund O O
2 seiner O O
3 Initiative O O
4 fand O O
5 2001/2002 O O
6 in O O
7 Stuttgart B-LOC O
8 , O O
9 Braunschweig B-LOC O
10 und O O
11 Bonn B-LOC O
12 eine O O
13 große O O
14 und O O
15 publizistisch O O
16 vielbeachtete O O
17 Troia-Ausstellung B-LOCpart O
18 statt O O
19 , O O
20 „ O O
21 Troia B-OTH B-LOC
22 - I-OTH O
23 Traum I-OTH O
24 und I-OTH O
25 Wirklichkeit I-OTH O
26 “ O O
27 . O O
```
The sentence is encoded as one token per line (tab-separated columns).
The first column contains either a `#` (marking a comment line that gives the source the sentence is cited from and the date it was retrieved) or the token number within the sentence.
The second column contains the token.
Columns three and four contain the named entity annotations (in the IOB2 scheme).
Outer spans are encoded in the third column, embedded/nested spans in the fourth column.
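To make the column semantics concrete, here is a minimal parsing sketch (not part of this PR; the function name is made up). It assumes exactly the layout described above, with blank lines separating sentences and `#` lines carrying the source information:
```python
# Sketch only: a tiny reader for the format described above.
def read_germeval(path):
    sentences = []
    source, tokens, labels, nested_labels = None, [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if line.startswith("#"):
                # Comment line: source URL and retrieval date.
                source = line.lstrip("# ").strip()
            elif line:
                # Data line: token number, token, outer label, nested label.
                _, token, label, nested = line.split("\t")[:4]
                tokens.append(token)
                labels.append(label)
                nested_labels.append(nested)
            elif tokens:
                # Blank line: the current sentence is complete.
                sentences.append((source, tokens, labels, nested_labels))
                tokens, labels, nested_labels = [], [], []
    if tokens:
        sentences.append((source, tokens, labels, nested_labels))
    return sentences
```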
## Features
I decided to keep most information from the dataset. That means the so-called "source" information (where the sentences come from + date information) is also returned for each sentence in the feature vector.
For each sentence in the dataset, one feature vector (`nlp.Features` definition, sketched below the table) will be returned:
| Feature | Example | Description
| ---- | ---- | -----------------
| `id` | `0` | Number (id) of current sentence
| `source` | `http://de.wikipedia.org/wiki/Manfred_Korfmann [2009-10-17]` | URL and retrieval date as string
| `tokens` | `["Schwartau", "sagte", ":"]` | List of tokens (strings) for a sentence
| `labels` | `["B-PER", "O", "O"]` | List of labels (outer span)
| `nested-labels` | `["O", "O", "O"]` | List of labels for nested span
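A feature definition matching this table could look roughly as follows. This is only a sketch, assuming the early `nlp.Features`/`nlp.Value`/`nlp.Sequence` API, and not the exact code from this PR:
```python
import nlp

# Sketch only: the exact feature API may differ between `nlp` versions.
features = nlp.Features(
    {
        "id": nlp.Value("string"),
        "source": nlp.Value("string"),
        "tokens": nlp.Sequence(nlp.Value("string")),
        "labels": nlp.Sequence(nlp.Value("string")),
        "nested-labels": nlp.Sequence(nlp.Value("string")),
    }
)
```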
## Example
The following command downloads the dataset from the official GermEval 2014 page and pre-processes it:
```bash
python nlp-cli test datasets/germeval_14 --all_configs
```
It then outputs the number of sentences for the training, development and test sets. The training set consists of 24,000 sentences, the development set of 2,200 and the test set of 5,100 sentences.
Now it can be imported and used with `nlp`:
```python
import nlp
germeval = nlp.load_dataset("./datasets/germeval_14/germeval_14.py")
assert len(germeval["train"]) == 24000
# Show first sentence of training set:
germeval["train"][0]
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/139/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/139",
"html_url": "https://github.com/huggingface/datasets/pull/139",
"diff_url": "https://github.com/huggingface/datasets/pull/139.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/139.patch",
"merged_at": 1589637382000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/138 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/138/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/138/comments | https://api.github.com/repos/huggingface/datasets/issues/138/events | https://github.com/huggingface/datasets/issues/138 | 619,225,191 | MDU6SXNzdWU2MTkyMjUxOTE= | 138 | Consider renaming to nld | {
"login": "honnibal",
"id": 8059750,
"node_id": "MDQ6VXNlcjgwNTk3NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8059750?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/honnibal",
"html_url": "https://github.com/honnibal",
"followers_url": "https://api.github.com/users/honnibal/followers",
"following_url": "https://api.github.com/users/honnibal/following{/other_user}",
"gists_url": "https://api.github.com/users/honnibal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/honnibal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/honnibal/subscriptions",
"organizations_url": "https://api.github.com/users/honnibal/orgs",
"repos_url": "https://api.github.com/users/honnibal/repos",
"events_url": "https://api.github.com/users/honnibal/events{/privacy}",
"received_events_url": "https://api.github.com/users/honnibal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"I would suggest `nlds`. NLP is a very general, broad and ambiguous term, the library is not about NLP (as in processing) per se, it is about accessing Natural Language related datasets. So the name should reflect its purpose.\r\n",
"Chiming in to second everything @honnibal said, and to add that I think the current name is going to impact the discoverability of this library. People who are looking for \"NLP Datasets\" through a search engine are going to see a library called `nlp` and think it's too broad. People who are looking to do NLP in python are going to search \"Python NLP\" and end up here, confused that this is a collection of datasets.\r\n\r\nThe names of the other huggingface libraries work because they're the only game in town: there are not very many robust, distinct libraries for `tokenizers` or `transformers` in python, for example. But there are several options for NLP in python, and adding this as a possible search result for \"python nlp\" when datasets are likely not what someone is searching for adds noise and frustrates potential users.",
"I'm also not sure whether the naming of `nlp` is the problem itself, as long as it comes with the appropriate identifier, so maybe something like `huggingface_nlp`? This is analogous to what @honnibal and spacy are doing for `spacy-transformers`. Of course, this is a \"step back\" from the recent changes/renaming of transformers, but may be some middle ground between a complete rebranding, and keeping it identifiable.",
"Interesting, thanks for sharing your thoughts.\r\n\r\nAs we’ll move toward a first non-beta release, we will pool the community of contributors/users of the library for their opinions on a good final name (like when we renamed the beautifully (?) named `pytorch-pretrained-bert`)\r\n\r\nIn the meantime, using `from nlp import load_dataset, load_metric` should work 😉",
"I feel like we are conflating two distinct subjects here:\r\n\r\n1. @honnibal's point is that using `nlp` as a package name might break existing code and bring developer usability issues in the future\r\n2. @pmbaumgartner's point is that the `nlp` package name is too broad and shouldn't be used by a package that exposes only datasets and metrics\r\n\r\n(let me know if I mischaracterize your point)\r\n\r\nI'll chime in to say that the first point is a bit silly IMO. As Python developers due to the limitations of the import system we already have to share:\r\n- a single flat namespace for packages\r\n- which also conflicts with local modules i.e. local files\r\n\r\nIf we add the constraint that this flat namespace also be shared with variable names this gets untractable pretty fast :)\r\n\r\nI also think all Python software developers/ML engineers/scientists are capable of at least a subset of:\r\n- importing only the methods that they need like @thomwolf suggested\r\n- aliasing their import\r\n- renaming a local variable",
"By the way, `nlp` will very likely not be only about datasets, and not even just about datasets and metrics.\r\n\r\nI see it as a laboratory for testing several long-term ideas about how we could do NLP in terms of research as well as open-source and community sharing, most of these ideas being too experimental/big to fit in `transformers`.\r\n\r\nSome of the directions we would like to explore are about sharing, traceability and more experimental models, as well as seeing a model as the community-based process of creating a composite entity from data, optimization, and code.\r\n\r\nWe'll see how these ideas end up being implemented and we'll better know how we should define the library when we start to dive into these topics. I'll try to get the `nlp` team to draft a roadmap on these topics at some point.",
"> If we add the constraint that this flat namespace also be shared with variable names this gets untractable pretty fast :)\r\n\r\nI'm sort of confused by your point here. The namespace *is* shared by variable names. You should not use local variables that are named the same as modules, because then you cannot use the module within the scope of your function.\r\n\r\nFor instance,\r\n\r\n```python\r\n\r\nimport nlp\r\nimport transformers\r\n\r\nnlp = transformers.pipeline(\"sentiment-analysis\")\r\n```\r\n\r\nThis is a bug: you've just overwritten the module, so now you can't use it. Or instead:\r\n\r\n```python\r\n\r\nimport transformers\r\n\r\nnlp = transformers.pipeline(\"sentiment-analysis\")\r\n# (Later, e.g. in a notebook)\r\nimport nlp\r\n```\r\n\r\nThis is also a bug: you've overwritten your variable with an import.\r\n\r\nIf you have a module named `nlp`, you should avoid using `nlp` as a variable, or you'll have bugs in some contexts and inconsistencies in other contexts. You'll have situations where you need to import differently in one module vs another, or name variables differently in one context vs another, which is bad.\r\n\r\n> importing only the methods that they need like @thomwolf suggested\r\n\r\nOkay but the same logic applies to naming the module *literally anything else*. There's absolutely no point in having a module name that's 3 letters if you always plan to do `import from`! It would be entirely better to name it `nlp_datasets` if you don't want people to do `import nlp`.\r\n\r\nAnd finally:\r\n\r\n> By the way, nlp will very likely not be only about datasets, and not even just about datasets and metrics.\r\n\r\nSo...it isn't a datasets library? https://twitter.com/Thom_Wolf/status/1261282491622731781\r\n\r\nI'm confused 😕 ",
"Dropping by as I noticed that the library has been renamed `datasets` so I wonder if the conversation above is settled (`nlp` not used anymore) :) ",
"I guess indeed",
"I'd argue that `datasets` is worse than `nlp`. Datasets should be a user specific decision and not encapsulate all of python (`pip install datasets`). If this package contained every dataset in the world (NLP / vision / etc) then it would make sense =/",
"I can't speak for the HF team @jramapuram, but as member of the community it looks to me that HF wanted to avoid the past path of changing names as scope broadened over time:\r\n\r\nRemember\r\nhttps://github.com/huggingface/pytorch-openai-transformer-lm\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT\r\nhttps://github.com/huggingface/pytorch-transformers\r\nand now\r\nhttps://github.com/huggingface/transformers\r\n\r\n;) \r\n\r\nJokes aside, seems that the library is growing in a multi-modal direction (https://github.com/huggingface/datasets/pull/363) so the current name is not that implausible. Possibly HF ambition is really to grow its community and bring here a large chunk of datasets of the world (including tabular / vision / audio?).",
"Yea I see your point. However, wouldn't scoping solve the entire problem? \r\n\r\n```python\r\nimport huggingface.datasets as D\r\nimport huggingface.transformers as T\r\n```\r\n\r\nCalling something `datasets` is akin to saying I'm going to name my package `python` --> `import python` "
] | 1,589,574,207,000 | 1,608,238,591,000 | 1,601,251,690,000 | NONE | null | Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This means the package makes `nlp` a bad variable name everywhere in the codebase. I've always used `nlp` as the canonical variable name of spaCy's `Language` objects, and this is a convention that a lot of other code has followed (Stanza, flair, etc). And actually, your `transformers` library uses `nlp` as the name for its `Pipeline` instance in your readme.
If you stick with the `nlp` name for this package, anyone who uses it will have to rewrite all of that code. If `nlp` is a bad choice of variable anywhere, it's a bad choice of variable everywhere --- because you shouldn't have to notice whether some other function uses a module when you're naming variables within a function. You want to have one convention that you can stick to everywhere.
If people use your `nlp` package and continue to use the `nlp` variable name, they'll find themselves with confusing bugs. There will be many many bits of code cut-and-paste from tutorials that give confusing results when combined with the data loading from the `nlp` library. The problem will be especially bad for shadowed modules (people might reasonably have a module named `nlp.py` within their codebase) and notebooks, as people might run notebook cells for data loading out-of-order.
I don't think it's an exaggeration to say that if your library becomes popular, we'll all be answering issues around this about once a week for the next few years. That seems pretty unideal, so I do hope you'll reconsider.
I suggest `nld` as a better name. It more accurately represents what the package actually does. It's pretty unideal to have a package named `nlp` that doesn't do any processing, and contains data about natural language generation or other non-NLP tasks. The name is equally short, and is sort of a visual pun on `nlp`, since a d is a rotated p. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/138/reactions",
"total_count": 32,
"+1": 32,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/138/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/137/comments | https://api.github.com/repos/huggingface/datasets/issues/137/events | https://github.com/huggingface/datasets/issues/137 | 619,214,645 | MDU6SXNzdWU2MTkyMTQ2NDU= | 137 | Tokenized BLEU considered harmful - Discussion on community-based process | {
"login": "kpu",
"id": 247512,
"node_id": "MDQ6VXNlcjI0NzUxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/247512?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kpu",
"html_url": "https://github.com/kpu",
"followers_url": "https://api.github.com/users/kpu/followers",
"following_url": "https://api.github.com/users/kpu/following{/other_user}",
"gists_url": "https://api.github.com/users/kpu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kpu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kpu/subscriptions",
"organizations_url": "https://api.github.com/users/kpu/orgs",
"repos_url": "https://api.github.com/users/kpu/repos",
"events_url": "https://api.github.com/users/kpu/events{/privacy}",
"received_events_url": "https://api.github.com/users/kpu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
},
{
"id": 2067400959,
"node_id": "MDU6TGFiZWwyMDY3NDAwOTU5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Metric%20discussion",
"name": "Metric discussion",
"color": "d722e8",
"default": false,
"description": "Discussions on the metrics"
}
] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"I second this request. The bottom line is that **scores produced with different reference tokenizations are not comparable**. To discourage (even inadvertent) cheating, the user should never touch the reference. The `v13a` tokenization standard is not ideal, but at least it has been consistently used at matrix.statmt.org, facilitating comparisons.\r\n\r\nSacrebleu exposes [all its data sources](https://github.com/mjpost/sacrebleu/blob/master/sacrebleu/dataset.py) and additionally provides [an API](https://github.com/mjpost/sacrebleu/blob/master/sacrebleu/__init__.py) to accessing the references, which seem to fit within the spirit of your codebase.",
"Didn't we have a slide and discussion at WMT admitting that, for production-quality models, BLEU doesn't correlate with human eval anyway?\r\n",
"Yes, there are slides like that at WMT every year :) BLEU correlates with human judgment only at coarse levels, and it seems to be getting worse when people try to use it to do model selection among high-performing neural systems.\r\n\r\nHowever, the point isn't whether BLEU is a good metric, but whether your BLEU score can be compared to other BLEU scores. They only can be compared if you use the same reference tokenization (similar to how you [can't compare LM perplexities across different segmentations](https://sjmielke.com/comparing-perplexities.htm)). sacrebleu was an attempt to get everyone to use WMT's reference tokenization (meaning, your system has to first remove its own tokenization) so that you could just compare across papers. This also prevents scores from being gamed.",
"I do not consider as a sufficient solution switching this library's default metric from BLEU to the wrapper around SacreBLEU. \r\n\r\nAs currently implemented, the wrapper allows end users to toggle SacreBLEU options, but doesn't pass along the SacreBLEU signature. As @mjpost showed in [Post18](https://www.aclweb.org/anthology/W18-6319.pdf), it's simply not credible to assume that people will stick to the defaults, therefore, the signature is necessary to be explicit about what options were used. \r\n\r\nIn addition to the `v13a` or `intl` options for the SacreBLEU `tokenize` argument, which was pointed out earlier, papers frequently differ on whether they lowercase text before scoring (`lowercase`) and the smoothing method used (`smooth_method`). BLEU scores can differ substantially (over 1 BLEU) just by changing these options. \r\n\r\nLosing the SacreBLEU signature is a regression in reproducibility and clarity.\r\n\r\n(Perhaps this should belong in a separate issue?)",
"Thanks for sharing your thoughts. This is a very important discussion.\r\n\r\nAlso one of the first items on our mid-term roadmap (we will try to clean it and share it soon) is to introduce mechanisms to get high-quality traceability and reproducibility for all the processes related to the library.\r\n\r\nSo having the signature for `sacrebleu` is really important!\r\n\r\nRegarding BLEU, I guess we can just remove it from the canonical metrics included in the repo itself (it won't prevent people to add it as \"user-metrics\" but at least we won't be promoting it).\r\n\r\nOn a more general note (definitely too large for the scope of this issue) we are wondering, with @srush in particular, how we could handle the selection of metrics/datasets with the most community-based and bottom-up approach possible. If you have opinions on this, please share!",
"Yeah, I would love to have discussions about ways this project can have an community-based, transparent process to arrive at strong default metrics. @kpu / @mjpost do you have any suggestions of how that might work or pointers to places where this is done right? Perhaps this question can be template for what is likely to be repeated for other datasets.",
"I think @bittlingmayer is referring to Figure 6 in http://statmt.org/wmt19/pdf/53/WMT02.pdf . When you look at Appendix A there are some cases where metrics fall apart at the high end and some where they correlate well. en-zh is arguably production-quality. \r\n\r\nThis could evolve into a metrics Bazaar where the value add is really the packaging and consistency: it installs/compiles the metrics for me, gives a reproducible name to use in publication (involve the authors; you don't want a different sacrebleu hash system), a version number, and evaluation of the metrics like http://ufallab.ms.mff.cuni.cz/~bojar/wmt19-metrics-task-package.tgz but run when code changes rather than once a year. ",
"While a Bazaar setup works for models / datasets, I am not sure it is ideal for metrics ? Ideal from my perspective would be to have tasks with metrics moderated by experts who document, cite, and codify known pitchfalls (as above^) and make it non-trivial for beginners to mess it up. ",
"@srush @thomwolf \r\n\r\nModelFront could provide (automated, \"QE-based\") evaluation for all the pretrained translation models you host. Not bottom-up and not valid for claiming SoTA, but independent, practical for builders and not top-down.\r\n\r\nFor that I would also suggest some diverse benchmarks (so split it out into datasets with only user-generated data, or only constants, or only UI strings, or only READMEs) which tease out known trade-offs. Even hypothetical magic eval is limited if we always reduce it to a single number.\r\n\r\nRealistically people want to know how a model compares to an API like Google Translate, Microsoft Translator, DeepL or Yandex (especially for a language pair like EN:RU, or for the many languages that only Yandex supports), and that could be done too.\r\n",
"Very important discussion.\r\nI am trying to understand the effects of tokenization.\r\nI wanted to ask which is a good practice.\r\nSacrebleu should be used on top of the tokenized output, or detokenized(raw) text?",
"Use sacrebleu on detokenized output and raw unmodified references. "
] | 1,589,573,314,000 | 1,610,016,088,000 | null | NONE | null | https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used by `mteval-v13a.pl`, the closest thing we have to a standard. Moreover, tokenizers are like window managers: they can be endlessly customized and nobody has quite the same options.
As @mjpost reported in https://www.aclweb.org/anthology/W18-6319.pdf BLEU configurations can vary by 1.8. Yet people are incorrectly putting non-comparable BLEU scores in the same table, such as Table 1 in https://arxiv.org/abs/2004.04902 .
There are a few use cases for tokenized BLEU like Thai. For Chinese, people seem to use character BLEU for better or worse.
The default easy option should be the one that's correct more often. And that is sacrebleu. Please don't make it easy for people to run what is usually the wrong option; it definitely shouldn't be `bleu`.
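For reference, a minimal sketch of that recommended workflow with the standalone `sacrebleu` package (the example strings are made up): score detokenized system output against raw, unmodified references.
```python
import sacrebleu

hypotheses = ["The cat sat on the mat."]    # detokenized system output
references = [["The cat sat on the mat."]]  # one stream of raw references

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)  # comparable across papers, given the same settings
```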
Also, I know this is inherited from TensorFlow and, paging @lmthang, they should discourage it too. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/137/reactions",
"total_count": 13,
"+1": 13,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/137/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/136/comments | https://api.github.com/repos/huggingface/datasets/issues/136/events | https://github.com/huggingface/datasets/pull/136 | 619,211,018 | MDExOlB1bGxSZXF1ZXN0NDE4NzgxNzI4 | 136 | Update README.md | {
"login": "renaud",
"id": 75369,
"node_id": "MDQ6VXNlcjc1MzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/75369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/renaud",
"html_url": "https://github.com/renaud",
"followers_url": "https://api.github.com/users/renaud/followers",
"following_url": "https://api.github.com/users/renaud/following{/other_user}",
"gists_url": "https://api.github.com/users/renaud/gists{/gist_id}",
"starred_url": "https://api.github.com/users/renaud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/renaud/subscriptions",
"organizations_url": "https://api.github.com/users/renaud/orgs",
"repos_url": "https://api.github.com/users/renaud/repos",
"events_url": "https://api.github.com/users/renaud/events{/privacy}",
"received_events_url": "https://api.github.com/users/renaud/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thanks, this was fixed with #135 :)"
] | 1,589,572,867,000 | 1,589,717,848,000 | 1,589,717,848,000 | NONE | null | small typo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/136/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/136",
"html_url": "https://github.com/huggingface/datasets/pull/136",
"diff_url": "https://github.com/huggingface/datasets/pull/136.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/136.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/135/comments | https://api.github.com/repos/huggingface/datasets/issues/135/events | https://github.com/huggingface/datasets/pull/135 | 619,206,708 | MDExOlB1bGxSZXF1ZXN0NDE4Nzc4MTMw | 135 | Fix print statement in READ.md | {
"login": "codehunk628",
"id": 51091425,
"node_id": "MDQ6VXNlcjUxMDkxNDI1",
"avatar_url": "https://avatars.githubusercontent.com/u/51091425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codehunk628",
"html_url": "https://github.com/codehunk628",
"followers_url": "https://api.github.com/users/codehunk628/followers",
"following_url": "https://api.github.com/users/codehunk628/following{/other_user}",
"gists_url": "https://api.github.com/users/codehunk628/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codehunk628/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codehunk628/subscriptions",
"organizations_url": "https://api.github.com/users/codehunk628/orgs",
"repos_url": "https://api.github.com/users/codehunk628/repos",
"events_url": "https://api.github.com/users/codehunk628/events{/privacy}",
"received_events_url": "https://api.github.com/users/codehunk628/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Indeed, thanks!"
] | 1,589,572,343,000 | 1,589,717,646,000 | 1,589,717,645,000 | CONTRIBUTOR | null | The print statement was printing a generator object instead of the names of the available datasets/metrics. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/135/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/135",
"html_url": "https://github.com/huggingface/datasets/pull/135",
"diff_url": "https://github.com/huggingface/datasets/pull/135.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/135.patch",
"merged_at": 1589717645000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/134/comments | https://api.github.com/repos/huggingface/datasets/issues/134/events | https://github.com/huggingface/datasets/pull/134 | 619,112,641 | MDExOlB1bGxSZXF1ZXN0NDE4Njk5OTYz | 134 | Update README.md | {
"login": "pranv",
"id": 8753078,
"node_id": "MDQ6VXNlcjg3NTMwNzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8753078?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pranv",
"html_url": "https://github.com/pranv",
"followers_url": "https://api.github.com/users/pranv/followers",
"following_url": "https://api.github.com/users/pranv/following{/other_user}",
"gists_url": "https://api.github.com/users/pranv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pranv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pranv/subscriptions",
"organizations_url": "https://api.github.com/users/pranv/orgs",
"repos_url": "https://api.github.com/users/pranv/repos",
"events_url": "https://api.github.com/users/pranv/events{/privacy}",
"received_events_url": "https://api.github.com/users/pranv/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"the readme got removed, closing this one"
] | 1,589,561,774,000 | 1,590,654,109,000 | 1,590,654,109,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/134/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/134",
"html_url": "https://github.com/huggingface/datasets/pull/134",
"diff_url": "https://github.com/huggingface/datasets/pull/134.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/134.patch",
"merged_at": null
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/133/comments | https://api.github.com/repos/huggingface/datasets/issues/133/events | https://github.com/huggingface/datasets/issues/133 | 619,094,954 | MDU6SXNzdWU2MTkwOTQ5NTQ= | 133 | [Question] Using/adding a local dataset | {
"login": "zphang",
"id": 1668462,
"node_id": "MDQ6VXNlcjE2Njg0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zphang",
"html_url": "https://github.com/zphang",
"followers_url": "https://api.github.com/users/zphang/followers",
"following_url": "https://api.github.com/users/zphang/following{/other_user}",
"gists_url": "https://api.github.com/users/zphang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zphang/subscriptions",
"organizations_url": "https://api.github.com/users/zphang/orgs",
"repos_url": "https://api.github.com/users/zphang/repos",
"events_url": "https://api.github.com/users/zphang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zphang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi @zphang,\r\n\r\nSo you can just give the local path to a dataset script file and it should work.\r\n\r\nHere is an example:\r\n- you can download one of the scripts in the `datasets` folder of the present repo (or clone the repo)\r\n- then you can load it with `load_dataset('PATH/TO/YOUR/LOCAL/SCRIPT.py')`\r\n\r\nDoes it make sense?",
"Could you give a more concrete example, please? \r\n\r\nI looked up wikitext dataset script from the repo. Should I just overwrite the `data_file` on line 98 to point to the local dataset directory? Would it work for different configurations of wikitext (wikitext2, wikitext103 etc.)?\r\n\r\nOr maybe we can use DownloadManager to specify local dataset location? In that case, where do we use DownloadManager instance?\r\n\r\nThanks",
"Hi @MaveriQ , although what I am doing is to commit a new dataset, but I think looking at imdb script might help.\r\nYou may want to use `dl_manager.download_custom`, give it a url(arbitrary string), a custom_download(arbitrary function) and return a path, and finally use _get sample to fetch a sample.",
"The download manager supports local directories. You can specify a local directory instead of a url and it should work.",
"Closing this one.\r\nFeel free to re-open if you have other questions :)"
] | 1,589,559,966,000 | 1,595,522,649,000 | 1,595,522,649,000 | NONE | null | Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets.
It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not exactly sure how to go about doing this.
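For reference, the replies above suggest that something like the following should work (the script path is a hypothetical placeholder):
```python
import nlp

# Point load_dataset at a local dataset script instead of a canonical name.
dataset = nlp.load_dataset("./datasets/my_dataset/my_dataset.py")
```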
A notebook/example script demonstrating this would be very helpful. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/133/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/133/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/132/comments | https://api.github.com/repos/huggingface/datasets/issues/132/events | https://github.com/huggingface/datasets/issues/132 | 619,077,851 | MDU6SXNzdWU2MTkwNzc4NTE= | 132 | [Feature Request] Add the OpenWebText dataset | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"We're experimenting with hosting the OpenWebText corpus on Zenodo for easier downloading. https://zenodo.org/record/3834942#.Xs1w8i-z2J8",
"Closing since it's been added in #660 "
] | 1,589,558,249,000 | 1,602,080,568,000 | 1,602,080,568,000 | MEMBER | null | The OpenWebText dataset is an open clone of OpenAI's WebText dataset. It can be used to train ELECTRA as is specified in the [README](https://www.github.com/google-research/electra).
More information and the download link are available [here](https://skylion007.github.io/OpenWebTextCorpus/). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/132/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/132/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/131 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/131/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/131/comments | https://api.github.com/repos/huggingface/datasets/issues/131/events | https://github.com/huggingface/datasets/issues/131 | 619,073,731 | MDU6SXNzdWU2MTkwNzM3MzE= | 131 | [Feature request] Add Toronto BookCorpus dataset | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"As far as I understand, `wikitext` is refer to `WikiText-103` and `WikiText-2` that created by researchers in Salesforce, and mostly used in traditional language modeling.\r\n\r\nYou might want to say `wikipedia`, a dump from wikimedia foundation.\r\n\r\nAlso I would like to have Toronto BookCorpus too ! Though it involves copyright problem...",
"Hi, @lhoestq, just a reminder that this is solved by #248 .😉 "
] | 1,589,557,844,000 | 1,593,379,651,000 | 1,593,379,651,000 | CONTRIBUTOR | null | I know the copyright/distribution of this one is complex, but it would be great to have! That, combined with the existing `wikitext`, would provide a complete dataset for pretraining models like BERT. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/131/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/131/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/130 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/130/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/130/comments | https://api.github.com/repos/huggingface/datasets/issues/130/events | https://github.com/huggingface/datasets/issues/130 | 619,035,440 | MDU6SXNzdWU2MTkwMzU0NDA= | 130 | Loading GLUE dataset loads CoLA by default | {
"login": "zphang",
"id": 1668462,
"node_id": "MDQ6VXNlcjE2Njg0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zphang",
"html_url": "https://github.com/zphang",
"followers_url": "https://api.github.com/users/zphang/followers",
"following_url": "https://api.github.com/users/zphang/following{/other_user}",
"gists_url": "https://api.github.com/users/zphang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zphang/subscriptions",
"organizations_url": "https://api.github.com/users/zphang/orgs",
"repos_url": "https://api.github.com/users/zphang/repos",
"events_url": "https://api.github.com/users/zphang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zphang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"As a follow-up to this: It looks like the actual GLUE task name is supplied as the `name` argument. Is there a way to check what `name`s/sub-datasets are available under a grouping like GLUE? That information doesn't seem to be readily available in info from `nlp.list_datasets()`.\r\n\r\nEdit: I found the info under `Glue.BUILDER_CONFIGS`",
"Yes so the first config is loaded by default when no `name` is supplied but for GLUE this should probably throw an error indeed.\r\n\r\nWe can probably just add an `__init__` at the top of the `class Glue(nlp.GeneratorBasedBuilder)` in the `glue.py` script which does this check:\r\n```\r\nclass Glue(nlp.GeneratorBasedBuilder):\r\n def __init__(self, *args, **kwargs):\r\n assert 'name' in kwargs and kwargs[name] is not None, \"Glue has to be called with a configuration name\"\r\n super(Glue, self).__init__(*args, **kwargs)\r\n```",
"An error is raised if the sub-dataset is not specified :)\r\n```\r\nValueError: Config name is missing.\r\nPlease pick one among the available configs: ['cola', 'sst2', 'mrpc', 'qqp', 'stsb', 'mnli', 'mnli_mismatched', 'mnli_matched', 'qnli', 'rte', 'wnli', 'ax']\r\nExample of usage:\r\n\t`load_dataset('glue', 'cola')`\r\n```"
] | 1,589,554,550,000 | 1,590,617,295,000 | 1,590,617,295,000 | NONE | null | If I run:
```python
dataset = nlp.load_dataset('glue')
```
The resultant dataset seems to be CoLA by default, without throwing any error. This is in contrast to calling:
```python
metric = nlp.load_metric("glue")
```
which throws an error telling the user that they need to specify a task in GLUE. Should the same apply for loading datasets? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/130/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/129/comments | https://api.github.com/repos/huggingface/datasets/issues/129/events | https://github.com/huggingface/datasets/issues/129 | 618,997,725 | MDU6SXNzdWU2MTg5OTc3MjU= | 129 | [Feature request] Add Google Natural Question dataset | {
"login": "elyase",
"id": 1175888,
"node_id": "MDQ6VXNlcjExNzU4ODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1175888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elyase",
"html_url": "https://github.com/elyase",
"followers_url": "https://api.github.com/users/elyase/followers",
"following_url": "https://api.github.com/users/elyase/following{/other_user}",
"gists_url": "https://api.github.com/users/elyase/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elyase/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elyase/subscriptions",
"organizations_url": "https://api.github.com/users/elyase/orgs",
"repos_url": "https://api.github.com/users/elyase/repos",
"events_url": "https://api.github.com/users/elyase/events{/privacy}",
"received_events_url": "https://api.github.com/users/elyase/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Indeed, I think this one is almost ready cc @lhoestq ",
"I'm doing the latest adjustments to make the processing of the dataset run on Dataflow",
"Is there an update to this? It will be very beneficial for the QA community!",
"Still work in progress :)\r\nThe idea is to have the dataset already processed somewhere so that the user only have to download the processed files. I'm also doing it for wikipedia.",
"Super appreciate your hard work !!\r\nI'll cross my fingers and hope easily loadable wikipedia dataset will come soon. ",
"Quick update on NQ: due to some limitations I met using apache beam + parquet I was not able to use the dataset in a nested parquet structure in python to convert it to our Apache Arrow format yet.\r\nHowever we had planned to change this conversion step anyways so we'll make just sure that it enables to process and convert the NQ dataset to arrow.",
"NQ was added in #427 🎉"
] | 1,589,552,060,000 | 1,595,510,489,000 | 1,595,510,489,000 | NONE | null | Would be great to have https://github.com/google-research-datasets/natural-questions as an alternative to SQuAD. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/129/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/129/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/128/comments | https://api.github.com/repos/huggingface/datasets/issues/128/events | https://github.com/huggingface/datasets/issues/128 | 618,951,117 | MDU6SXNzdWU2MTg5NTExMTc= | 128 | Some error inside nlp.load_dataset() | {
"login": "polkaYK",
"id": 18486287,
"node_id": "MDQ6VXNlcjE4NDg2Mjg3",
"avatar_url": "https://avatars.githubusercontent.com/u/18486287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polkaYK",
"html_url": "https://github.com/polkaYK",
"followers_url": "https://api.github.com/users/polkaYK/followers",
"following_url": "https://api.github.com/users/polkaYK/following{/other_user}",
"gists_url": "https://api.github.com/users/polkaYK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polkaYK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polkaYK/subscriptions",
"organizations_url": "https://api.github.com/users/polkaYK/orgs",
"repos_url": "https://api.github.com/users/polkaYK/repos",
"events_url": "https://api.github.com/users/polkaYK/events{/privacy}",
"received_events_url": "https://api.github.com/users/polkaYK/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Google colab has an old version of Apache Arrow built-in.\r\nBe sure you execute the \"pip install\" cell and restart the notebook environment if the colab asks for it.",
"Thanks for reply, worked fine!\r\n"
] | 1,589,547,689,000 | 1,589,548,240,000 | 1,589,548,240,000 | NONE | null | First of all, nice work!
I am going through [this overview notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb)
In simple step `dataset = nlp.load_dataset('squad', split='validation[:10%]')`
I get an error, which is connected with some inner code, I think:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-8-d848d3a99b8c> in <module>()
1 # Downloading and loading a dataset
2
----> 3 dataset = nlp.load_dataset('squad', split='validation[:10%]')
8 frames
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
515 download_mode=download_mode,
516 ignore_verifications=ignore_verifications,
--> 517 save_infos=save_infos,
518 )
519
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
361 verify_infos = not save_infos and not ignore_verifications
362 self._download_and_prepare(
--> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
364 )
365 # Sync info
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
414 try:
415 # Prepare split will record examples associated to the split
--> 416 self._prepare_split(split_generator, **prepare_split_kwargs)
417 except OSError:
418 raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or ""))
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator)
585 fname = "{}-{}.arrow".format(self.name, split_generator.name)
586 fpath = os.path.join(self._cache_dir, fname)
--> 587 examples_type = self.info.features.type
588 writer = ArrowWriter(data_type=examples_type, path=fpath, writer_batch_size=self._writer_batch_size)
589
/usr/local/lib/python3.6/dist-packages/nlp/features.py in type(self)
460 @property
461 def type(self):
--> 462 return get_nested_type(self)
463
464 @classmethod
/usr/local/lib/python3.6/dist-packages/nlp/features.py in get_nested_type(schema)
370 # Nested structures: we allow dict, list/tuples, sequences
371 if isinstance(schema, dict):
--> 372 return pa.struct({key: get_nested_type(value) for key, value in schema.items()})
373 elif isinstance(schema, (list, tuple)):
374 assert len(schema) == 1, "We defining list feature, you should just provide one example of the inner type"
/usr/local/lib/python3.6/dist-packages/nlp/features.py in <dictcomp>(.0)
370 # Nested structures: we allow dict, list/tuples, sequences
371 if isinstance(schema, dict):
--> 372 return pa.struct({key: get_nested_type(value) for key, value in schema.items()})
373 elif isinstance(schema, (list, tuple)):
374 assert len(schema) == 1, "We defining list feature, you should just provide one example of the inner type"
/usr/local/lib/python3.6/dist-packages/nlp/features.py in get_nested_type(schema)
379 # We allow to reverse list of dict => dict of list for compatiblity with tfds
380 if isinstance(inner_type, pa.StructType):
--> 381 return pa.struct(dict((f.name, pa.list_(f.type, schema.length)) for f in inner_type))
382 return pa.list_(inner_type, schema.length)
383
/usr/local/lib/python3.6/dist-packages/nlp/features.py in <genexpr>(.0)
379 # We allow to reverse list of dict => dict of list for compatiblity with tfds
380 if isinstance(inner_type, pa.StructType):
--> 381 return pa.struct(dict((f.name, pa.list_(f.type, schema.length)) for f in inner_type))
382 return pa.list_(inner_type, schema.length)
383
TypeError: list_() takes exactly one argument (2 given)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/128/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/128/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/127/comments | https://api.github.com/repos/huggingface/datasets/issues/127/events | https://github.com/huggingface/datasets/pull/127 | 618,909,042 | MDExOlB1bGxSZXF1ZXN0NDE4NTQ1MDcy | 127 | Update Overview.ipynb | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,543,208,000 | 1,589,543,247,000 | 1,589,543,245,000 | MEMBER | null | update notebook | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/127/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/127",
"html_url": "https://github.com/huggingface/datasets/pull/127",
"diff_url": "https://github.com/huggingface/datasets/pull/127.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/127.patch",
"merged_at": 1589543245000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/126/comments | https://api.github.com/repos/huggingface/datasets/issues/126/events | https://github.com/huggingface/datasets/pull/126 | 618,897,499 | MDExOlB1bGxSZXF1ZXN0NDE4NTM1Mzc5 | 126 | remove webis | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,541,920,000 | 1,589,542,284,000 | 1,589,542,226,000 | MEMBER | null | Remove webis from the datasets folder.
Our first dataset script that only lives on AWS :-) https://s3.console.aws.amazon.com/s3/buckets/datasets.huggingface.co/nlp/datasets/webis/tl_dr/?region=us-east-1 @julien-c @jplu | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/126/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/126",
"html_url": "https://github.com/huggingface/datasets/pull/126",
"diff_url": "https://github.com/huggingface/datasets/pull/126.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/126.patch",
"merged_at": 1589542226000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/125/comments | https://api.github.com/repos/huggingface/datasets/issues/125/events | https://github.com/huggingface/datasets/pull/125 | 618,869,048 | MDExOlB1bGxSZXF1ZXN0NDE4NTExNDE0 | 125 | [Newsroom] add newsroom | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,538,874,000 | 1,589,539,027,000 | 1,589,539,022,000 | MEMBER | null | I checked it with the data link of the mail you forwarded @thomwolf => works well! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/125/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/125",
"html_url": "https://github.com/huggingface/datasets/pull/125",
"diff_url": "https://github.com/huggingface/datasets/pull/125.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/125.patch",
"merged_at": 1589539022000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/124 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/124/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/124/comments | https://api.github.com/repos/huggingface/datasets/issues/124/events | https://github.com/huggingface/datasets/pull/124 | 618,864,284 | MDExOlB1bGxSZXF1ZXN0NDE4NTA3NDUx | 124 | Xsum, require manual download of some files | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,538,373,000 | 1,589,540,688,000 | 1,589,540,686,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/124/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/124",
"html_url": "https://github.com/huggingface/datasets/pull/124",
"diff_url": "https://github.com/huggingface/datasets/pull/124.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/124.patch",
"merged_at": 1589540686000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/123/comments | https://api.github.com/repos/huggingface/datasets/issues/123/events | https://github.com/huggingface/datasets/pull/123 | 618,820,140 | MDExOlB1bGxSZXF1ZXN0NDE4NDcxODU5 | 123 | [Tests] Local => aws | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"For each dataset, If there exist a `dataset_info.json`, then the command `nlp-cli test path/to/my/dataset --al_configs` is successful only if the `dataset_infos.json` is correct. The infos are correct if the size and checksums of the downloaded file are correct, and if the number of examples in each split are correct.\r\n\r\nNote: the `test` command is supposed to test the script, that's why it runs the script even if the cached files already exist. Let me know if it's good to you.",
"> For each dataset, If there exist a `dataset_info.json`, then the command `nlp-cli test path/to/my/dataset --al_configs` is successful only if the `dataset_infos.json` is correct. The infos are correct if the size and checksums of the downloaded file are correct, and if the number of examples in each split are correct.\r\n> \r\n> Note: the `test` command is supposed to test the script, that's why it runs the script even if the cached files already exist. Let me know if it's good to you.\r\n\r\nDoes it have to download the whole data to check if the checksums are correct? I guess so no? ",
"> > For each dataset, If there exist a `dataset_info.json`, then the command `nlp-cli test path/to/my/dataset --al_configs` is successful only if the `dataset_infos.json` is correct. The infos are correct if the size and checksums of the downloaded file are correct, and if the number of examples in each split are correct.\r\n> > Note: the `test` command is supposed to test the script, that's why it runs the script even if the cached files already exist. Let me know if it's good to you.\r\n> \r\n> Does it have to download the whole data to check if the checksums are correct? I guess so no?\r\n\r\nYes it has to download them all (unless they were already downloaded in which case it just uses the cached downloaded files)."
] | 1,589,533,945,000 | 1,589,537,172,000 | 1,589,537,006,000 | MEMBER | null | ## Change default test from local => aws
As a default we set `aws=True`, `local=False`, `slow=False`.
### 1. RUN_AWS=1 (default)
This runs 4 tests per dataset script.
a) Does the dataset script have a valid etag / Can it be reached on AWS?
b) Can we load its `builder_class`?
c) Can we load **all** dataset configs?
d) _Most importantly_: Can we load the dataset?
Important - we currently only test the first config of each dataset to reduce test time. Total test time is around 1min20s.
### 2. RUN_LOCAL=1 RUN_AWS=0
***This should be done when debugging dataset scripts in the `./datasets` folder.***
This only runs 1 test per dataset script, which is equivalent to aws test d): can we load the dataset from the local `datasets` directory?
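For reference, a minimal sketch of how these switches could be flipped when invoking the test suite (the env vars follow the description above; the `tests/` path is an assumption):
```python
# sketch only - env vars as described in this PR, test path assumed
import os
import pytest

os.environ["RUN_AWS"] = "0"    # "1" (the default) runs the 4 lightweight AWS tests per dataset script
os.environ["RUN_LOCAL"] = "1"  # "1" instead debugs the scripts in the local ./datasets folder
pytest.main(["tests/"])
```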
### 3. RUN_SLOW=1
We should set up to run these tests maybe once per week? @thomwolf
The `slow` tests include two more important tests.
e) Can we load the dataset with all possible configs? This test will probably fail at the moment because a lot of dummy data is missing. We should add the dummy data step by step to be sure that all configs work.
f) Test that the actual dataset can be loaded. This will take quite some time to run, but it is important to make sure that the "real" data can be loaded. It will also test whether the dataset script has the correct checksums file, which is currently not tested with `aws=True`. @lhoestq - is there an easy way to check cheaply whether the `dataset_infos.json` is correct for each dataset script? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/123/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/123",
"html_url": "https://github.com/huggingface/datasets/pull/123",
"diff_url": "https://github.com/huggingface/datasets/pull/123.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/123.patch",
"merged_at": 1589537006000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/122/comments | https://api.github.com/repos/huggingface/datasets/issues/122/events | https://github.com/huggingface/datasets/pull/122 | 618,813,182 | MDExOlB1bGxSZXF1ZXN0NDE4NDY2Mzc3 | 122 | Final cleanup of readme and metrics | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,533,252,000 | 1,630,698,009,000 | 1,589,533,342,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/122/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/122",
"html_url": "https://github.com/huggingface/datasets/pull/122",
"diff_url": "https://github.com/huggingface/datasets/pull/122.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/122.patch",
"merged_at": 1589533342000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/121/comments | https://api.github.com/repos/huggingface/datasets/issues/121/events | https://github.com/huggingface/datasets/pull/121 | 618,790,040 | MDExOlB1bGxSZXF1ZXN0NDE4NDQ4MTkx | 121 | make style | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,531,016,000 | 1,589,531,139,000 | 1,589,531,138,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/121/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/121",
"html_url": "https://github.com/huggingface/datasets/pull/121",
"diff_url": "https://github.com/huggingface/datasets/pull/121.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/121.patch",
"merged_at": 1589531138000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/120/comments | https://api.github.com/repos/huggingface/datasets/issues/120/events | https://github.com/huggingface/datasets/issues/120 | 618,737,783 | MDU6SXNzdWU2MTg3Mzc3ODM= | 120 | 🐛 `map` not working | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"I didn't assign the output 🤦♂️\r\n\r\n```python\r\ndataset.map(test)\r\n```\r\n\r\nshould be :\r\n\r\n```python\r\ndataset = dataset.map(test)\r\n```"
] | 1,589,524,988,000 | 1,589,526,158,000 | 1,589,526,158,000 | NONE | null | I'm trying to run a basic example (mapping a function to add a prefix).
[Here is the colab notebook I'm using.](https://colab.research.google.com/drive/1YH4JCAy0R1MMSc-k_Vlik_s1LEzP_t1h?usp=sharing)
```python
import nlp
dataset = nlp.load_dataset('squad', split='validation[:10%]')
def test(sample):
sample['title'] = "test prefix @@@ " + sample["title"]
return sample
print(dataset[0]['title'])
dataset.map(test)
print(dataset[0]['title'])
```
Output:
> Super_Bowl_50
> Super_Bowl_50

Expected output:
> Super_Bowl_50
> test prefix @@@ Super_Bowl_50 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/120/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/119/comments | https://api.github.com/repos/huggingface/datasets/issues/119/events | https://github.com/huggingface/datasets/issues/119 | 618,652,145 | MDU6SXNzdWU2MTg2NTIxNDU= | 119 | 🐛 Colab : type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array' | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"It's strange, after installing `nlp` on Colab, the `pyarrow` version seems fine from `pip` but not from python :\r\n\r\n```python\r\nimport pyarrow\r\n\r\n!pip show pyarrow\r\nprint(\"version = {}\".format(pyarrow.__version__))\r\n```\r\n\r\n> Name: pyarrow\r\nVersion: 0.17.0\r\nSummary: Python library for Apache Arrow\r\nHome-page: https://arrow.apache.org/\r\nAuthor: None\r\nAuthor-email: None\r\nLicense: Apache License, Version 2.0\r\nLocation: /usr/local/lib/python3.6/dist-packages\r\nRequires: numpy\r\nRequired-by: nlp, feather-format\r\n> \r\n> version = 0.14.1",
"Ok I just had to restart the runtime after installing `nlp`. After restarting, the version of `pyarrow` is fine."
] | 1,589,509,646,000 | 1,589,519,482,000 | 1,589,510,728,000 | NONE | null | I'm trying to load the CNN/DM dataset on Colab.
[Colab notebook](https://colab.research.google.com/drive/11Mf7iNhIyt6GpgA1dBEtg3cyMHmMhtZS?usp=sharing)
But I get this error:
> AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
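For reference, the comments above trace this to a stale `pyarrow` in the running kernel; a minimal check (a sketch, version numbers taken from the thread):
```python
# sketch: if the imported version lags behind what pip just installed, the Colab
# runtime still has the old pyarrow loaded; restart the runtime after installing nlp
import pyarrow

print(pyarrow.__version__)  # the thread reports 0.14.1 before restarting, 0.17.0 after
```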
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/119/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/118 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/118/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/118/comments | https://api.github.com/repos/huggingface/datasets/issues/118/events | https://github.com/huggingface/datasets/issues/118 | 618,643,088 | MDU6SXNzdWU2MTg2NDMwODg= | 118 | ❓ How to apply a map to all subsets ? | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"That's the way!"
] | 1,589,507,932,000 | 1,589,526,349,000 | 1,589,526,265,000 | NONE | null | I'm working with the CNN/DM dataset, which has 3 subsets: `train`, `test`, `validation`.
Should I apply my map function to the subsets one by one?
```python
import nlp
cnn_dm = nlp.load_dataset('cnn_dailymail')
for corpus in ['train', 'test', 'validation']:
    cnn_dm[corpus] = cnn_dm[corpus].map(my_func)
```
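For reference, the same loop can be written more compactly (a sketch, assuming the loaded object behaves like a plain `dict` of splits, as the loop above already does):
```python
# equivalent one-liner over all splits (sketch)
cnn_dm = {corpus: subset.map(my_func) for corpus, subset in cnn_dm.items()}
```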
Or is there a better way to do this? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/118/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/117 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/117/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/117/comments | https://api.github.com/repos/huggingface/datasets/issues/117/events | https://github.com/huggingface/datasets/issues/117 | 618,632,573 | MDU6SXNzdWU2MTg2MzI1NzM= | 117 | ❓ How to remove specific rows of a dataset ? | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi, you can't do that at the moment."
] | 1,589,505,906,000 | 1,620,964,939,000 | 1,589,526,272,000 | NONE | null | I saw in the [example notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb#scrollTo=efFhDWhlvSVC) how to remove a specific column:
```python
dataset.drop('id')
```
But I didn't find how to remove a specific row.
**For example, how can I remove all samples with `id` < 10?** | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/117/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/117/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/116/comments | https://api.github.com/repos/huggingface/datasets/issues/116/events | https://github.com/huggingface/datasets/issues/116 | 618,628,264 | MDU6SXNzdWU2MTg2MjgyNjQ= | 116 | 🐛 Trying to use ROUGE metric : pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323 | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067393914,
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug",
"name": "metric bug",
"color": "25b21e",
"default": false,
"description": "A bug in a metric script"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Can you share your data files or a minimally reproducible example?",
"Sure, [here is a Colab notebook](https://colab.research.google.com/drive/1uiS89fnHMG7HV_cYxp3r-_LqJQvNNKs9?usp=sharing) reproducing the error.\r\n\r\n> ArrowInvalid: Column 1 named references expected length 36 but got length 56",
"This is because `add` takes as input a batch of elements and you provided only one. I think we should have `add` for one prediction/reference and `add_batch` for a batch of predictions/references. This would make it more coherent with the way we use Arrow.\r\n\r\nLet me do this change",
"Thanks for noticing though. I was mainly used to do `.compute` directly ^^",
"Thanks @lhoestq it works :)"
] | 1,589,505,126,000 | 1,590,709,387,000 | 1,590,709,387,000 | NONE | null | I'm trying to use the ROUGE metric.
I have two files: `test.pred.tokenized` and `test.gold.tokenized`, with each line containing a sentence.
I tried:
```python
import nlp
rouge = nlp.load_metric('rouge')
with open("test.pred.tokenized") as p, open("test.gold.tokenized") as g:
    for lp, lg in zip(p, g):
        rouge.add(lp, lg)
```
But I get the following error:
> pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323
---
Full stack trace:
```
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "/home/me/.venv/transformers/lib/python3.6/site-packages/nlp/metric.py", line 224, in add
self.writer.write_batch(batch)
File "/home/me/.venv/transformers/lib/python3.6/site-packages/nlp/arrow_writer.py", line 148, in write_batch
pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema)
File "pyarrow/table.pxi", line 1550, in pyarrow.lib.Table.from_pydict
File "pyarrow/table.pxi", line 1503, in pyarrow.lib.Table.from_arrays
File "pyarrow/public-api.pxi", line 390, in pyarrow.lib.pyarrow_wrap_table
File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323
```
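For reference, the comment above explains that `add` expects batched inputs; a sketch that avoids the length mismatch (the `add_batch` method and its keyword names are assumptions based on that comment):
```python
# sketch: wrap each line pair in lists so the metric receives equal-length batches
import nlp

rouge = nlp.load_metric('rouge')
with open("test.pred.tokenized") as p, open("test.gold.tokenized") as g:
    for lp, lg in zip(p, g):
        rouge.add_batch(predictions=[lp], references=[lg])  # assumed batched API
score = rouge.compute()
```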
(`nlp` installed from source) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/116/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/115/comments | https://api.github.com/repos/huggingface/datasets/issues/115/events | https://github.com/huggingface/datasets/issues/115 | 618,615,855 | MDU6SXNzdWU2MTg2MTU4NTU= | 115 | AttributeError: 'dict' object has no attribute 'info' | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I could access the info by first accessing the different splits :\r\n\r\n```python\r\nimport nlp\r\n\r\ncnn_dm = nlp.load_dataset('cnn_dailymail')\r\nprint(cnn_dm['train'].info)\r\n```\r\n\r\nInformation seems to be duplicated between the subsets :\r\n\r\n```python\r\nprint(cnn_dm[\"train\"].info == cnn_dm[\"test\"].info == cnn_dm[\"validation\"].info)\r\n# True\r\n```\r\n\r\nIs it expected ?",
"Good point @Colanim ! What happens under the hood when running:\r\n\r\n```python\r\nimport nlp\r\n\r\ncnn_dm = nlp.load_dataset('cnn_dailymail')\r\n```\r\n\r\nis that for every split in `cnn_dailymail`, a different dataset object (which all holds the same info) is created. This has the advantages that the datasets are easily separable in a training setup. \r\nAlso note that you can load e.g. only the `train` split of the dataset via:\r\n\r\n```python\r\ncnn_dm_train = nlp.load_dataset('cnn_dailymail', split=\"train\")\r\nprint(cnn_dm_train.info)\r\n```\r\n\r\nI think we should make the `info` object slightly different when creating the dataset for each split - at the moment it contains for example the variable `splits` which should maybe be renamed to `split` and contain only one `SplitInfo` object ...\r\n"
] | 1,589,502,587,000 | 1,589,721,060,000 | 1,589,721,060,000 | NONE | null | I'm trying to access the information of the CNN/DM dataset:
```python
import nlp

cnn_dm = nlp.load_dataset('cnn_dailymail')
print(cnn_dm.info)
```
returns:
> AttributeError: 'dict' object has no attribute 'info' | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/115/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/114 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/114/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/114/comments | https://api.github.com/repos/huggingface/datasets/issues/114/events | https://github.com/huggingface/datasets/issues/114 | 618,611,310 | MDU6SXNzdWU2MTg2MTEzMTA= | 114 | Couldn't reach CNN/DM dataset | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Installing from source (instead of Pypi package) solved the problem."
] | 1,589,501,777,000 | 1,589,501,992,000 | 1,589,501,991,000 | NONE | null | I can't get the CNN/DailyMail dataset.
```python
import nlp
assert "cnn_dailymail" in [dataset.id for dataset in nlp.list_datasets()]
cnn_dm = nlp.load_dataset('cnn_dailymail')
```
[Colab notebook](https://colab.research.google.com/drive/1zQ3bYAVzm1h0mw0yWPqKAg_4EUlSx5Ex?usp=sharing)
gives the following error:
```
ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/cnn_dailymail/cnn_dailymail.py
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/114/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/113 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/113/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/113/comments | https://api.github.com/repos/huggingface/datasets/issues/113/events | https://github.com/huggingface/datasets/pull/113 | 618,590,562 | MDExOlB1bGxSZXF1ZXN0NDE4MjkxNjIx | 113 | Adding docstrings and some doc | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,498,081,000 | 1,589,498,565,000 | 1,589,498,564,000 | MEMBER | null | Some doc | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/113/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/113",
"html_url": "https://github.com/huggingface/datasets/pull/113",
"diff_url": "https://github.com/huggingface/datasets/pull/113.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/113.patch",
"merged_at": 1589498564000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/112 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/112/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/112/comments | https://api.github.com/repos/huggingface/datasets/issues/112/events | https://github.com/huggingface/datasets/pull/112 | 618,569,195 | MDExOlB1bGxSZXF1ZXN0NDE4Mjc0MTU4 | 112 | Qa4mre - add dataset | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,494,671,000 | 1,589,534,203,000 | 1,589,534,202,000 | MEMBER | null | Added dummy data test only for the first config. Will do the rest later.
I had to add some minor hacks to an important function to make it work.
There might be a cleaner way to handle it - can you take a look, @thomwolf?
"url": "https://api.github.com/repos/huggingface/datasets/issues/112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/112/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/112",
"html_url": "https://github.com/huggingface/datasets/pull/112",
"diff_url": "https://github.com/huggingface/datasets/pull/112.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/112.patch",
"merged_at": 1589534202000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/111/comments | https://api.github.com/repos/huggingface/datasets/issues/111/events | https://github.com/huggingface/datasets/pull/111 | 618,528,060 | MDExOlB1bGxSZXF1ZXN0NDE4MjQwMjMy | 111 | [Clean-up] remove under construction datastes | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,489,533,000 | 1,589,489,543,000 | 1,589,489,542,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/111/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/111",
"html_url": "https://github.com/huggingface/datasets/pull/111",
"diff_url": "https://github.com/huggingface/datasets/pull/111.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/111.patch",
"merged_at": 1589489542000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/110 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/110/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/110/comments | https://api.github.com/repos/huggingface/datasets/issues/110/events | https://github.com/huggingface/datasets/pull/110 | 618,520,325 | MDExOlB1bGxSZXF1ZXN0NDE4MjMzODIy | 110 | fix reddit tifu dummy data | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,488,657,000 | 1,589,488,814,000 | 1,589,488,813,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/110/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/110",
"html_url": "https://github.com/huggingface/datasets/pull/110",
"diff_url": "https://github.com/huggingface/datasets/pull/110.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/110.patch",
"merged_at": 1589488813000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/109/comments | https://api.github.com/repos/huggingface/datasets/issues/109/events | https://github.com/huggingface/datasets/pull/109 | 618,508,359 | MDExOlB1bGxSZXF1ZXN0NDE4MjI0MDYw | 109 | [Reclor] fix reclor | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,487,386,000 | 1,589,487,549,000 | 1,589,487,548,000 | MEMBER | null | - That's probably on me. I could have made the manual data test more flexible. @mariamabarham | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/109/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/109",
"html_url": "https://github.com/huggingface/datasets/pull/109",
"diff_url": "https://github.com/huggingface/datasets/pull/109.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/109.patch",
"merged_at": 1589487548000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/108 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/108/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/108/comments | https://api.github.com/repos/huggingface/datasets/issues/108/events | https://github.com/huggingface/datasets/pull/108 | 618,386,394 | MDExOlB1bGxSZXF1ZXN0NDE4MTIzMzc3 | 108 | convert can use manual dir as second argument | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,475,152,000 | 1,589,475,163,000 | 1,589,475,162,000 | MEMBER | null | @mariamabarham | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/108/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/108",
"html_url": "https://github.com/huggingface/datasets/pull/108",
"diff_url": "https://github.com/huggingface/datasets/pull/108.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/108.patch",
"merged_at": 1589475162000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/107 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/107/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/107/comments | https://api.github.com/repos/huggingface/datasets/issues/107/events | https://github.com/huggingface/datasets/pull/107 | 618,373,045 | MDExOlB1bGxSZXF1ZXN0NDE4MTEyNzcx | 107 | add writer_batch_size to GeneratorBasedBuilder | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Awesome that's great!"
] | 1,589,474,139,000 | 1,589,475,030,000 | 1,589,475,029,000 | MEMBER | null | You can now specify `writer_batch_size` in the builder arguments or directly in `load_dataset` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/107/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/107",
"html_url": "https://github.com/huggingface/datasets/pull/107",
"diff_url": "https://github.com/huggingface/datasets/pull/107.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/107.patch",
"merged_at": 1589475029000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/106/comments | https://api.github.com/repos/huggingface/datasets/issues/106/events | https://github.com/huggingface/datasets/pull/106 | 618,361,418 | MDExOlB1bGxSZXF1ZXN0NDE4MTAzMjM3 | 106 | Add data dir test command | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Nice - I think we can merge this. I will update the checksums for `wikihow` then as well"
] | 1,589,473,119,000 | 1,589,474,951,000 | 1,589,474,950,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/106/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/106",
"html_url": "https://github.com/huggingface/datasets/pull/106",
"diff_url": "https://github.com/huggingface/datasets/pull/106.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/106.patch",
"merged_at": 1589474950000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/105 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/105/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/105/comments | https://api.github.com/repos/huggingface/datasets/issues/105/events | https://github.com/huggingface/datasets/pull/105 | 618,345,191 | MDExOlB1bGxSZXF1ZXN0NDE4MDg5Njgz | 105 | [New structure on AWS] Adapt paths | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,471,757,000 | 1,589,471,788,000 | 1,589,471,787,000 | MEMBER | null | Some small changes so that we have the correct paths. @julien-c | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/105/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/105",
"html_url": "https://github.com/huggingface/datasets/pull/105",
"diff_url": "https://github.com/huggingface/datasets/pull/105.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/105.patch",
"merged_at": 1589471787000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/104 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/104/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/104/comments | https://api.github.com/repos/huggingface/datasets/issues/104/events | https://github.com/huggingface/datasets/pull/104 | 618,277,081 | MDExOlB1bGxSZXF1ZXN0NDE4MDMzOTY0 | 104 | Add trivia_q | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,466,439,000 | 1,594,532,060,000 | 1,589,487,812,000 | MEMBER | null | Currently tested only for one config so that the tests pass. More dummy data needs to be added later. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/104/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/104",
"html_url": "https://github.com/huggingface/datasets/pull/104",
"diff_url": "https://github.com/huggingface/datasets/pull/104.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/104.patch",
"merged_at": 1589487812000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/103 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/103/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/103/comments | https://api.github.com/repos/huggingface/datasets/issues/103/events | https://github.com/huggingface/datasets/pull/103 | 618,233,637 | MDExOlB1bGxSZXF1ZXN0NDE3OTk5MDIy | 103 | [Manual downloads] add logic proposal for manual downloads and add wikihow | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"> Wikihow is an example that needs to manually download two files as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset.\r\n> \r\n> The user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~/wikihow/manual_dir`.\r\n> \r\n> The dataset can then be loaded via:\r\n> \r\n> ```python\r\n> import nlp\r\n> nlp.load_dataset(\"wikihow\", data_dir=\"~/wikihow/manual_dir\")\r\n> ```\r\n> \r\n> I added/changed so that there are explicit error messages when using manually downloaded files.\r\n\r\nwouldn't be nicer if we can have `manual_dir/wikihow`? ",
"> > Wikihow is an example that needs to manually download two files as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset.\r\n> > The user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~/wikihow/manual_dir`.\r\n> > The dataset can then be loaded via:\r\n> > ```python\r\n> > import nlp\r\n> > nlp.load_dataset(\"wikihow\", data_dir=\"~/wikihow/manual_dir\")\r\n> > ```\r\n> > \r\n> > \r\n> > I added/changed so that there are explicit error messages when using manually downloaded files.\r\n> \r\n> wouldn't be nicer if we can have `manual_dir/wikihow`?\r\n\r\nSure, I mean the user can decide whatever he likes best :-) The path one puts in `data_dir` will be used as the path to the manual dir. `nlp.load_dataset(\"wikihow\", data_dir=\"~/manual_dir/wikihow\")` would work as well as any other path ;-) ",
"Perfect! You can merge!"
] | 1,589,463,036,000 | 1,589,466,461,000 | 1,589,466,460,000 | MEMBER | null | Wikihow is an example for which two files need to be downloaded manually, as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset.
The user can then store these files under hard-coded names (`wikihowAll.csv` and `wikihowSep.csv` in this case) in a directory of their choice, e.g. `~/wikihow/manual_dir`.
The dataset can then be loaded via:
```python
import nlp
nlp.load_dataset("wikihow", data_dir="~/wikihow/manual_dir")
```
I added/changed so that there are explicit error messages when using manually downloaded files.
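For reference, a minimal sketch of what such an explicit check could look like in a dataset script (the helper name and error wording are illustrative assumptions, not the exact code in this PR):
```python
import os

REQUIRED_FILES = ("wikihowAll.csv", "wikihowSep.csv")

def check_manual_files(data_dir):
    """Raise an explicit error if the manually downloaded files are missing."""
    data_dir = os.path.abspath(os.path.expanduser(data_dir))
    for fname in REQUIRED_FILES:
        path = os.path.join(data_dir, fname)
        if not os.path.exists(path):
            # fail early with instructions instead of a cryptic error later on
            raise FileNotFoundError(
                f"{path} does not exist. Download {fname} manually from "
                "https://github.com/mahnazkoupaee/WikiHow-Dataset, place it in "
                f"{data_dir}, then run nlp.load_dataset('wikihow', data_dir=...)."
            )
    return data_dir
```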
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/103/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/103",
"html_url": "https://github.com/huggingface/datasets/pull/103",
"diff_url": "https://github.com/huggingface/datasets/pull/103.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/103.patch",
"merged_at": 1589466460000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/102 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/102/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/102/comments | https://api.github.com/repos/huggingface/datasets/issues/102/events | https://github.com/huggingface/datasets/pull/102 | 618,231,216 | MDExOlB1bGxSZXF1ZXN0NDE3OTk3MDQz | 102 | Run save infos | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Haha that cornell dialogue dataset - that ran for 3h on my computer as well. The `generate_examples` method in this script is one of the most inefficient code samples I've ever seen :D ",
"Indeed it's been 3 hours already\r\n```73111 examples [3:07:48, 2.40 examples/s]```"
] | 1,589,462,846,000 | 1,589,470,984,000 | 1,589,470,983,000 | MEMBER | null | I replaced the old checksum file with the new `dataset_infos.json` by running the script on almost all the datasets we have. The only one that is still running on my side is the cornell dialog dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/102/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/102",
"html_url": "https://github.com/huggingface/datasets/pull/102",
"diff_url": "https://github.com/huggingface/datasets/pull/102.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/102.patch",
"merged_at": 1589470983000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/101 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/101/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/101/comments | https://api.github.com/repos/huggingface/datasets/issues/101/events | https://github.com/huggingface/datasets/pull/101 | 618,111,651 | MDExOlB1bGxSZXF1ZXN0NDE3ODk5OTQ2 | 101 | [Reddit] add reddit | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,451,902,000 | 1,589,452,045,000 | 1,589,452,044,000 | MEMBER | null | - Everything worked fine @mariamabarham. Made my computer nearly crash, but all seems to be working :-) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/101/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/101",
"html_url": "https://github.com/huggingface/datasets/pull/101",
"diff_url": "https://github.com/huggingface/datasets/pull/101.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/101.patch",
"merged_at": 1589452044000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/100 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/100/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/100/comments | https://api.github.com/repos/huggingface/datasets/issues/100/events | https://github.com/huggingface/datasets/pull/100 | 618,081,602 | MDExOlB1bGxSZXF1ZXN0NDE3ODc1MjE2 | 100 | Add per type scores in seqeval metric | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"LGTM :-) Some small suggestions to shorten the code a bit :-) ",
"Can you put the kwargs as normal kwargs instead of a dict? (And add them to the kwargs description As well)",
"@thom Is-it what you meant?",
"Yes and there is a dynamically generated doc string in the metric script KWARGS DESCRIPTION"
] | 1,589,449,072,000 | 1,589,498,495,000 | 1,589,498,494,000 | CONTRIBUTOR | null | This PR adds a bit more detail to the seqeval metric. Now the usage and output are:
```python
import nlp
met = nlp.load_metric('metrics/seqeval')
references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
met.compute(predictions, references)
#Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8}
```
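The per-type entries can then be consumed directly; for example (a small usage sketch based on the output above):
```python
results = met.compute(predictions, references)
# per-type scores are nested dicts keyed by entity type
for entity_type in ("PER", "MISC"):
    scores = results[entity_type]
    print(entity_type, scores["precision"], scores["recall"], scores["f1"])
print("overall f1:", results["overall_f1"])
```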
It is also possible to compute scores for non-IOB notations; POS tagging, for example, doesn't use this kind of notation. Add the `suffix` parameter:
```python
import nlp
met = nlp.load_metric('metrics/seqeval')
references = [['O', 'O', 'O', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']]
predictions = [['O', 'O', 'MISC', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']]
met.compute(predictions, references, metrics_kwargs={"suffix": True})
#Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.9}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/100/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/100",
"html_url": "https://github.com/huggingface/datasets/pull/100",
"diff_url": "https://github.com/huggingface/datasets/pull/100.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/100.patch",
"merged_at": 1589498494000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/99 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/99/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/99/comments | https://api.github.com/repos/huggingface/datasets/issues/99/events | https://github.com/huggingface/datasets/pull/99 | 618,026,700 | MDExOlB1bGxSZXF1ZXN0NDE3ODMxNjky | 99 | [Cmrc 2018] fix cmrc2018 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,444,523,000 | 1,589,446,182,000 | 1,589,446,181,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/99/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/99/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/99",
"html_url": "https://github.com/huggingface/datasets/pull/99",
"diff_url": "https://github.com/huggingface/datasets/pull/99.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/99.patch",
"merged_at": 1589446181000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/98 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/98/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/98/comments | https://api.github.com/repos/huggingface/datasets/issues/98/events | https://github.com/huggingface/datasets/pull/98 | 617,957,739 | MDExOlB1bGxSZXF1ZXN0NDE3Nzc3NDcy | 98 | Webis tl-dr | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Should that rather be in an organization scope, @thomwolf @patrickvonplaten ?",
"> Should that rather be in an organization scope, @thomwolf @patrickvonplaten ?\r\n\r\nI'm a bit indifferent - both would be fine for me!",
"@jplu - if creating the dummy_data is too tedious, I can do it as well :-) ",
"There is dummy_data here, no ?",
"Yeah I think naming it webis/tl_dr would be best @jplu if that works for you",
"No problem at all!! On it^^",
"> There is dummy_data here, no ?\r\n\r\nSome paths were wrong - the structure is really confusing and the error messages don't really help either - I have to think about how to make this easier to understand!\r\n\r\nHope it was ok that I fiddled with your PR !",
"> Some paths were wrong - the structure is really confusing and the error message don't really help either - I have to think about how to make this easier to understand!\r\n\r\nOh ok! I haven't noticed that sorry :(\r\n\r\n> Hope it was ok that I fiddled with your PR !\r\n\r\nOf course it was ok :)",
"@julien-c Looks like what you have in mind?\r\n\r\n```python\r\nimport nlp\r\nnlp.load_dataset(\"datasets/webis\", \"tl_dr\")\r\n\r\n#Output: Downloading and preparing dataset webis/tl_dr (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/jplu/.cache/huggingface/datasets/webis/tl_dr/1.0.0...\r\n```",
"Merging this for now. Maybe we can see whether to rename it in a different PR @julien-c ? \r\n",
"Hi, \r\nAuthor here of the webis-tldr corpus. Any plans on integrating this dataset into the hub? I remember we could access it in the previous versions of the library. If there is a particular issue that I can help with, do let me know.\r\n\r\nThanks!",
"Hi @shahbazsyed, this dataset _is_ inside the hub but it's namespaced by the organization name `webis`.\r\n\r\nYou can load it following the steps described in https://huggingface.co/datasets/webis/tl_dr\r\n\r\nHere's a Colab showcasing that it works: https://colab.research.google.com/drive/11IrzRVpnMLJZ8_UFFHLR8FhiajjAHRUU?usp=sharing\r\n\r\nThe reason the code is in S3 and not in this repo is that the dataset is namespaced under the `webis` organization. We don't have a lot of namespaced datasets yet but this should become the main way we add more datasets in the future.\r\nLet us know if that's an issue for you. Thank you!"
] | 1,589,437,338,000 | 1,599,127,221,000 | 1,589,489,656,000 | CONTRIBUTOR | null | Add the Webis TL;DR dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/98/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/98/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/98",
"html_url": "https://github.com/huggingface/datasets/pull/98",
"diff_url": "https://github.com/huggingface/datasets/pull/98.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/98.patch",
"merged_at": 1589489655000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/97 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/97/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/97/comments | https://api.github.com/repos/huggingface/datasets/issues/97/events | https://github.com/huggingface/datasets/pull/97 | 617,809,431 | MDExOlB1bGxSZXF1ZXN0NDE3NjU4MDcy | 97 | [Csv] add tests for csv dataset script | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"@thomwolf - can you check and merge if ok? "
] | 1,589,411,171,000 | 1,589,412,196,000 | 1,589,412,195,000 | MEMBER | null | Adds dummy data tests for csv. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/97/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/97/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/97",
"html_url": "https://github.com/huggingface/datasets/pull/97",
"diff_url": "https://github.com/huggingface/datasets/pull/97.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/97.patch",
"merged_at": 1589412195000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/96 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/96/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/96/comments | https://api.github.com/repos/huggingface/datasets/issues/96/events | https://github.com/huggingface/datasets/pull/96 | 617,739,521 | MDExOlB1bGxSZXF1ZXN0NDE3NjAwMjY4 | 96 | lm1b | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"I might have a different version of `isort` than others. It seems like I'm always reordering the imports of others. But isn't really a problem..."
] | 1,589,402,324,000 | 1,589,465,610,000 | 1,589,465,609,000 | CONTRIBUTOR | null | Add lm1b dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/96/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/96/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/96",
"html_url": "https://github.com/huggingface/datasets/pull/96",
"diff_url": "https://github.com/huggingface/datasets/pull/96.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/96.patch",
"merged_at": 1589465609000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/95 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/95/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/95/comments | https://api.github.com/repos/huggingface/datasets/issues/95/events | https://github.com/huggingface/datasets/pull/95 | 617,703,037 | MDExOlB1bGxSZXF1ZXN0NDE3NTY5NzA4 | 95 | Replace checksums files by Dataset infos json | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Great! LGTM :-) ",
"> Ok, really clean!\r\n> I like the logic (not a huge fan of using `_asdict_inner` but it makes sense).\r\n> I think it's a nice improvement!\r\n> \r\n> How should we update the files in the repo? Run a big job on a server or on somebody's computer who has most of the datasets already downloaded?\r\n\r\nMaybe we can split the updates among us...IMO most datasets run very quickly. \r\nI think I've downloaded 50 datasets and 80% are loaded in <5min, 15% in <1h and then `wmt` which is still downloading (since 12h). \r\nI deleted my cache because the `wmt` downloads require quite a lot of space, so I only have parts of the `wmt` datasets on my computer. \r\n\r\n@mariamabarham I guess you have downloaded most of the datasets no? "
] | 1,589,398,576,000 | 1,589,446,723,000 | 1,589,446,722,000 | MEMBER | null | ### Better verifications when loading a dataset
I replaced the `urls_checksums` directory that used to contain `checksums.txt` and `cached_sizes.txt` with a single file, `dataset_infos.json`. It's just a dict `config_name` -> `DatasetInfo`.
It simplifies and improves how verifications of checksums and split sizes are done, as they're all stored in `DatasetInfo` (one per config). Also, having access to `DatasetInfo` upfront makes it possible to check disk space before running `download_and_prepare` for a given config.
The dataset infos JSON file is human-readable; you can take a look at the squad one that I generated in this PR.
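To give a rough idea of its shape, here is an illustrative, abridged example for squad (checksum and several fields elided; this is a sketch, not the exact generated file):
```json
{
  "plain_text": {
    "description": "Stanford Question Answering Dataset (SQuAD)...",
    "citation": "...",
    "splits": {
      "train": {"name": "train", "num_examples": 87599},
      "validation": {"name": "validation", "num_examples": 10570}
    },
    "download_checksums": {
      "https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json": {
        "checksum": "..."
      }
    }
  }
}
```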
### Renaming
In line with these changes, I did some renaming:
`save_checksums` -> `save_infos`
`ignore_checksums` -> `ignore_verifications`
for example, when you are creating a dataset you have to run
```nlp-cli test path/to/my/dataset --save_infos --all_configs```
instead of
```nlp-cli test path/to/my/dataset --save_checksums --all_configs```
### And now, the fun part
We'll have to rerun the `nlp-cli test ... --save_infos --all_configs` command for all the datasets.
-----------------
Feedback appreciated! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/95/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/95/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/95",
"html_url": "https://github.com/huggingface/datasets/pull/95",
"diff_url": "https://github.com/huggingface/datasets/pull/95.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/95.patch",
"merged_at": 1589446722000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/94 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/94/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/94/comments | https://api.github.com/repos/huggingface/datasets/issues/94/events | https://github.com/huggingface/datasets/pull/94 | 617,571,340 | MDExOlB1bGxSZXF1ZXN0NDE3NDYyMTIw | 94 | Librispeech | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"@jplu - I changed this weird archieve - iter method to something simpler. It's only one file to download anyways so I don't see the point of using weird iter methods...It's a huge file though :D 30 million lines of text. Took me quite some time to download :D "
] | 1,589,385,854,000 | 1,589,405,343,000 | 1,589,405,342,000 | CONTRIBUTOR | null | Add librispeech dataset and remove some useless content. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/94/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/94/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/94",
"html_url": "https://github.com/huggingface/datasets/pull/94",
"diff_url": "https://github.com/huggingface/datasets/pull/94.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/94.patch",
"merged_at": 1589405342000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/93 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/93/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/93/comments | https://api.github.com/repos/huggingface/datasets/issues/93/events | https://github.com/huggingface/datasets/pull/93 | 617,522,029 | MDExOlB1bGxSZXF1ZXN0NDE3NDIxODUy | 93 | Cleanup notebooks and various fixes | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,381,938,000 | 1,589,382,108,000 | 1,589,382,107,000 | MEMBER | null | Fixes on datasets (more flexible), metrics (fix), and general clean-ups. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/93/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/93/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/93",
"html_url": "https://github.com/huggingface/datasets/pull/93",
"diff_url": "https://github.com/huggingface/datasets/pull/93.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/93.patch",
"merged_at": 1589382107000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/92 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/92/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/92/comments | https://api.github.com/repos/huggingface/datasets/issues/92/events | https://github.com/huggingface/datasets/pull/92 | 617,341,505 | MDExOlB1bGxSZXF1ZXN0NDE3Mjc1ODky | 92 | [WIP] add wmt14 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,366,523,000 | 1,589,627,858,000 | 1,589,627,857,000 | MEMBER | null | WMT14 takes forever to download :-/
- WMT is the first dataset that uses an abstract class IMO, so I had to modify the `load_dataset_module` a bit. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/92/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/92/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/92",
"html_url": "https://github.com/huggingface/datasets/pull/92",
"diff_url": "https://github.com/huggingface/datasets/pull/92.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/92.patch",
"merged_at": 1589627857000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/91 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/91/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/91/comments | https://api.github.com/repos/huggingface/datasets/issues/91/events | https://github.com/huggingface/datasets/pull/91 | 617,339,484 | MDExOlB1bGxSZXF1ZXN0NDE3Mjc0MjA0 | 91 | [Paracrawl] add paracrawl | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,589,366,340,000 | 1,589,366,415,000 | 1,589,366,414,000 | MEMBER | null | - Huge dataset - took ~1h to download
- Also this PR reformats all dataset scripts and adds `datasets` to `make style` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/91/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/91/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/91",
"html_url": "https://github.com/huggingface/datasets/pull/91",
"diff_url": "https://github.com/huggingface/datasets/pull/91.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/91.patch",
"merged_at": 1589366414000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/90 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/90/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/90/comments | https://api.github.com/repos/huggingface/datasets/issues/90/events | https://github.com/huggingface/datasets/pull/90 | 617,311,877 | MDExOlB1bGxSZXF1ZXN0NDE3MjUxODE0 | 90 | Add download gg drive | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"awesome - so no manual downloaded needed here? ",
"Yes exactly. It works like a standard download"
] | 1,589,363,762,000 | 1,589,373,988,000 | 1,589,364,331,000 | MEMBER | null | We can now add datasets that download from google drive | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/90/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/90/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/90",
"html_url": "https://github.com/huggingface/datasets/pull/90",
"diff_url": "https://github.com/huggingface/datasets/pull/90.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/90.patch",
"merged_at": 1589364331000
} | true |