This dataset is used in the project Firefly(流萤): a Chinese conversational large language model. Training on it produced the model firefly-1b4.

If you find this dataset helpful, please like it and star our GitHub project.

We collected 23 common Chinese datasets. For each task, several instruction templates were written by hand to ensure the quality and diversity of the data; the dataset contains 1.15 million samples in total. The data distribution is shown in the figure below:

Each sample has the following format, containing the task type, the input, and the target output:
```json
{
    "kind": "ClassicalChinese",
    "input": "将下面句子翻译成现代文:\n石中央又生一树,高百余尺,条干偃阴为五色,翠叶如盘,花径尺余,色深碧,蕊深红,异香成烟,著物霏霏。",
    "target": "大石的中央长着一棵树,一百多尺高,枝干是彩色的,树叶有盘子那样大,花的直径有一尺宽,花瓣深蓝色,花中飘出奇异的香气笼罩着周围,如烟似雾。"
}
```
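
As a minimal sketch of how to read these samples with the 🤗 `datasets` library (the repo id `YeungNLP/firefly-train-1.1M` below is an assumption based on this card and may need to be adjusted):

```python
from datasets import load_dataset

# Assumed repo id; replace with the actual dataset id if it differs.
dataset = load_dataset("YeungNLP/firefly-train-1.1M", split="train")

# Each sample is a dict with three string fields: kind, input, target.
example = dataset[0]
print(example["kind"])    # task type, e.g. "ClassicalChinese"
print(example["input"])   # instruction plus input text
print(example["target"])  # target output
```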