Tar index files for nyanko7/danbooru2023.

You can download images from both nyanko7/danbooru2023 and deepghs/danbooru_newest with the cheesechaser library:

from cheesechaser.datapool import DanbooruNewestDataPool

pool = DanbooruNewestDataPool()

# Download Danbooru original images with IDs in range(7200000, 7201000)
# (i.e. 7200000-7200999) into /data/danbooru_original.
pool.batch_download_to_directory(
    resource_ids=range(7200000, 7201000),
    dst_dir='/data/danbooru_original',
    max_workers=12,
)
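
You can also use the index files directly. An index file such as data-0000.json describes one tar archive in nyanko7/danbooru2023: its filesize, hash, and hash_lfs, plus a files mapping from archive paths (e.g. ./1000.png) to the offset, size, and sha256 of that member inside the tar. Below is a minimal sketch of extracting a single image by seeking into the tar, assuming the index and its tar have already been downloaded locally; the local filenames here are illustrative.

import hashlib
import json

# Load one index file. 'files' maps each archive path to the byte
# offset, size, and sha256 of that member inside the corresponding tar.
with open('data-0000.json') as f:
    index = json.load(f)

entry = index['files']['./1000.png']

# Seek straight to the member and read it without unpacking the whole
# archive. 'data-0000.tar' is a hypothetical local filename.
with open('data-0000.tar', 'rb') as tar:
    tar.seek(entry['offset'])
    blob = tar.read(entry['size'])

# Verify the payload against the recorded checksum before saving.
assert hashlib.sha256(blob).hexdigest() == entry['sha256']

with open('1000.png', 'wb') as out:
    out.write(blob)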