
Not a Hot-dog

(The name of this dataset is a reference to the show Silicon Valley.)

Dataset Details

Dataset Description

This dataset is a small collection of user-submitted images of objects that are not hot dogs but could be perceived as resembling one.

  • Curated by: Public users
  • Language(s) (NLP): None (no captions included)
  • License: MIT

Dataset Sources

The images provided were submitted by random internet users on Reddit.

Uses

This dataset may be used to train safety checking neural networks, or low-rank adaptation networks that might be useful for a funny joke or two.

Direct Use

This dataset does not have any captions or phrases supplied with it.

The image column contains the raw JPEG bytes of each file as read from storage.
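Because the column stores undecoded bytes, consumers have to deserialize them on their own. A minimal sketch of pulling the JPEG payload out of a row and writing it to disk (`save_row_image` is an illustrative helper, not part of this dataset; the row layout assumed here follows the field list in Dataset Structure):

```python
def save_row_image(row: dict, out_path: str) -> int:
    """Write the raw JPEG bytes from a dataset row to disk.

    Returns the number of bytes written. Raises ValueError if the
    payload does not start with the JPEG SOI marker (0xFF 0xD8).
    """
    # Depending on how the parquet is read, the column may arrive as
    # bytes or as a list of uint8 values; normalize to bytes first.
    data = bytes(row["image"])
    if data[:2] != b"\xff\xd8":
        raise ValueError("image column does not look like JPEG data")
    with open(out_path, "wb") as f:
        f.write(data)
    return len(data)
```

With Pillow installed, `PIL.Image.open(io.BytesIO(data))` decodes the same payload in memory instead of writing it out.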

Out-of-Scope Use

This data should not be used to generate offensive content.

Dataset Structure

Fields:

  • filename (str)
  • image_hash (int)
  • width (int)
  • height (int)
  • luminance (float)
  • image (bytes, JPEG)
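The fields above can be sanity-checked per row with a small stdlib-only helper. This is a sketch, not part of the dataset: `validate_row` and `EXPECTED_FIELDS` are illustrative names, and the types mirror the field list (with the image column allowed to arrive as either bytes or a list of ints, depending on the reader):

```python
EXPECTED_FIELDS = {
    "filename": str,
    "image_hash": int,
    "width": int,
    "height": int,
    "luminance": float,
    "image": (bytes, list),  # raw JPEG bytes, possibly as a list of uint8
}

def validate_row(row: dict) -> list[str]:
    """Return a list of problems with a row; an empty list means it matches."""
    problems = []
    for name, expected_type in EXPECTED_FIELDS.items():
        if name not in row:
            problems.append(f"missing field: {name}")
        elif not isinstance(row[name], expected_type):
            problems.append(
                f"bad type for {name}: {type(row[name]).__name__}"
            )
    return problems
```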

Dataset Creation

Curation Rationale

A LoRA exists that appears to have been trained on a similar dataset, but that dataset was never published. This dataset is an attempt to reproduce the same results.

Source Data

  • User-submitted photographs

Personal and Sensitive Information

Some images may contain faces or identities of individuals. By using this dataset, you agree not to attempt to discover the identity of these people.

Bias, Risks, and Limitations

This dataset inherits the biases of its contributors, as all images were hand-selected for inclusion.
