Dataset Preview

The full dataset viewer is not available; only a preview of the rows is shown. Dataset generation failed with a DatasetGenerationError whose root cause is pyarrow.lib.ArrowNotImplementedError: "Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field." The error is raised while the viewer worker converts the dataset to Parquet (convert_to_parquet → builder.download_and_prepare → ParquetWriter), i.e. when pyarrow tries to write the empty '_format_kwargs' struct column.

Previewed row (column, type, value):

_data_files (list): [ { "filename": "dataset.arrow" } ]
_fingerprint (string): 39b0c9e71948b2f3
_format_columns (sequence): [ "feat_annot_utt", "feat_id", "feat_judgments.grammar_score", "feat_judgments.intent_score", "feat_judgments.language_identification", "feat_judgments.slots_score", "feat_judgments.spelling_score", "feat_judgments.worker_id", "feat_locale", "feat_partition", "feat_scenario", "feat_slot_method.method", "feat_slot_method.slot", "feat_worker_id", "target", "text" ]
_format_kwargs (dict): {}
_format_type (null): null
_indexes (dict): {}
_output_all_columns (bool): false
_split (null): null
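
These columns correspond to the state.json metadata that datasets.Dataset.save_to_disk writes next to dataset.arrow, which suggests the repository holds raw save_to_disk output rather than plain data files; the empty '_format_kwargs' struct is exactly what the Parquet conversion cannot handle. As a hedged sketch (the repo id is taken from this page, but the directory layout inside the repository is an assumption), such a layout can be loaded locally without any Parquet conversion:

from huggingface_hub import snapshot_download
from datasets import load_from_disk

# Download the raw repository files; repo_type="dataset" targets the dataset hub.
local_dir = snapshot_download(
    repo_id="crodri/autotrain-data-massive-4-catalan",
    repo_type="dataset",
)

# load_from_disk reads the dataset.arrow/state.json pair directly, so the empty
# '_format_kwargs' struct never has to be written to Parquet. The path below is
# a guess: point it at whichever subdirectory actually contains state.json.
ds = load_from_disk(local_dir)
print(ds)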

AutoTrain Dataset for project: massive-4-catalan

Dataset Description

This dataset has been automatically processed by AutoTrain for project massive-4-catalan.

Languages

The BCP-47 code recorded for the dataset's language is unk (unknown); the samples themselves carry the ca-ES (Catalan) locale.

Dataset Structure

Data Instances

Samples from this dataset look as follows:

[
  {
    "feat_id": "1",
    "feat_locale": "ca-ES",
    "feat_partition": "train",
    "feat_scenario": 0,
    "target": 2,
    "text": "desperta'm a les nou a. m. del divendres",
    "feat_annot_utt": "desperta'm a les [time : nou a. m.] del [date : divendres]",
    "feat_worker_id": "42",
    "feat_slot_method.slot": [
      "time",
      "date"
    ],
    "feat_slot_method.method": [
      "translation",
      "translation"
    ],
    "feat_judgments.worker_id": [
      "42",
      "30",
      "3"
    ],
    "feat_judgments.intent_score": [
      1,
      1,
      1
    ],
    "feat_judgments.slots_score": [
      1,
      1,
      1
    ],
    "feat_judgments.grammar_score": [
      4,
      3,
      4
    ],
    "feat_judgments.spelling_score": [
      2,
      2,
      2
    ],
    "feat_judgments.language_identification": [
      "target",
      "target|english",
      "target"
    ]
  },
  {
    "feat_id": "2",
    "feat_locale": "ca-ES",
    "feat_partition": "train",
    "feat_scenario": 0,
    "target": 2,
    "text": "posa una alarma per d\u2019aqu\u00ed a dues hores",
    "feat_annot_utt": "posa una alarma per [time : d\u2019aqu\u00ed a dues hores]",
    "feat_worker_id": "15",
    "feat_slot_method.slot": [
      "time"
    ],
    "feat_slot_method.method": [
      "translation"
    ],
    "feat_judgments.worker_id": [
      "42",
      "30",
      "24"
    ],
    "feat_judgments.intent_score": [
      1,
      1,
      1
    ],
    "feat_judgments.slots_score": [
      1,
      1,
      1
    ],
    "feat_judgments.grammar_score": [
      4,
      4,
      4
    ],
    "feat_judgments.spelling_score": [
      2,
      2,
      2
    ],
    "feat_judgments.language_identification": [
      "target",
      "target",
      "target"
    ]
  }
]
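
The feat_annot_utt field marks slot values inline with a [slot : value] bracket syntax, as in the two samples above. A small sketch of pulling those pairs out with a regular expression (the bracket format is inferred from the samples shown here, not from a formal specification):

import re

# Matches "[slot : value]" spans as they appear in feat_annot_utt.
SLOT_RE = re.compile(r"\[\s*([^\]:]+?)\s*:\s*([^\]]+?)\s*\]")

def extract_slots(annot_utt):
    """Return (slot, value) pairs from an annotated utterance."""
    return SLOT_RE.findall(annot_utt)

print(extract_slots("desperta'm a les [time : nou a. m.] del [date : divendres]"))
# [('time', 'nou a. m.'), ('date', 'divendres')]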

Dataset Fields

The dataset has the following fields (also called "features"):

{
  "feat_id": "Value(dtype='string', id=None)",
  "feat_locale": "Value(dtype='string', id=None)",
  "feat_partition": "Value(dtype='string', id=None)",
  "feat_scenario": "ClassLabel(num_classes=18, names=['alarm', 'audio', 'calendar', 'cooking', 'datetime', 'email', 'general', 'iot', 'lists', 'music', 'news', 'play', 'qa', 'recommendation', 'social', 'takeaway', 'transport', 'weather'], id=None)",
  "target": "ClassLabel(num_classes=60, names=['alarm_query', 'alarm_remove', 'alarm_set', 'audio_volume_down', 'audio_volume_mute', 'audio_volume_other', 'audio_volume_up', 'calendar_query', 'calendar_remove', 'calendar_set', 'cooking_query', 'cooking_recipe', 'datetime_convert', 'datetime_query', 'email_addcontact', 'email_query', 'email_querycontact', 'email_sendemail', 'general_greet', 'general_joke', 'general_quirky', 'iot_cleaning', 'iot_coffee', 'iot_hue_lightchange', 'iot_hue_lightdim', 'iot_hue_lightoff', 'iot_hue_lighton', 'iot_hue_lightup', 'iot_wemo_off', 'iot_wemo_on', 'lists_createoradd', 'lists_query', 'lists_remove', 'music_dislikeness', 'music_likeness', 'music_query', 'music_settings', 'news_query', 'play_audiobook', 'play_game', 'play_music', 'play_podcasts', 'play_radio', 'qa_currency', 'qa_definition', 'qa_factoid', 'qa_maths', 'qa_stock', 'recommendation_events', 'recommendation_locations', 'recommendation_movies', 'social_post', 'social_query', 'takeaway_order', 'takeaway_query', 'transport_query', 'transport_taxi', 'transport_ticket', 'transport_traffic', 'weather_query'], id=None)",
  "text": "Value(dtype='string', id=None)",
  "feat_annot_utt": "Value(dtype='string', id=None)",
  "feat_worker_id": "Value(dtype='string', id=None)",
  "feat_slot_method.slot": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
  "feat_slot_method.method": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
  "feat_judgments.worker_id": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
  "feat_judgments.intent_score": "Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None)",
  "feat_judgments.slots_score": "Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None)",
  "feat_judgments.grammar_score": "Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None)",
  "feat_judgments.spelling_score": "Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None)",
  "feat_judgments.language_identification": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)"
}
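
feat_scenario and target are integer-coded ClassLabel fields, so the raw integers in the samples above decode to names from these lists: feat_scenario = 0 is 'alarm' and target = 2 is 'alarm_set'. A minimal sketch of decoding them with datasets.ClassLabel (only the 18 scenario names are reproduced here; target works the same way with its 60 intent names):

from datasets import ClassLabel

# Scenario names copied from the field definition above.
scenario = ClassLabel(names=[
    "alarm", "audio", "calendar", "cooking", "datetime", "email", "general", "iot",
    "lists", "music", "news", "play", "qa", "recommendation", "social", "takeaway",
    "transport", "weather",
])

print(scenario.int2str(0))          # 'alarm'  -- the feat_scenario of the samples above
print(scenario.str2int("weather"))  # 17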

Dataset Splits

This dataset is split into a train and a validation split. The split sizes are as follows:

Split name Num samples
train 11514
valid 2033