

AutoTrain Dataset for project: mt5_chinese_small_finetune

Dataset Description

This dataset has been automatically processed by AutoTrain for project mt5_chinese_small_finetune.

Languages

The BCP-47 code for the dataset's language is unk (unknown); the example texts below are in Chinese.

Dataset Structure

Data Instances

A sample from this dataset looks as follows:

[
  {
    "text": "近期，美国国会众院通过法案，重申美国对台湾的承诺。对此，中国外交部发言人表示，有关法案严重违反一个中国原则和中美三个联合公报规定，粗暴干涉中国内政，中方对此坚决反对并已向美方提出严正交涉。\n事实上，中[...]",
    "target": "望海楼美国打“台湾牌”是危险的赌博"
  },
  {
    "text": "在推进“双一流”高校建设进程中，我们要紧紧围绕为党育人、为国育才，找准问题、破解难题，以一流意识和担当精神，大力推进高校的治理能力建设。\n增强政治引领力。坚持党对高校工作的全面领导，始终把政治建设摆在[...]",
    "target": "大力推进高校治理能力建设"
  }
]
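
Since this dataset was prepared for an mT5 fine-tune, a minimal sketch of how one record's "text" and "target" fields could be tokenized for sequence-to-sequence training is shown below. The google/mt5-small checkpoint and the length settings are illustrative assumptions, not part of this card.

from transformers import AutoTokenizer

# Assumption: google/mt5-small as the base checkpoint; max_length is illustrative.
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")

sample = {
    "text": "近期，美国国会众院通过法案……",          # article body (shortened here)
    "target": "望海楼美国打“台湾牌”是危险的赌博",      # reference headline
}

# Passing text_target makes the tokenizer also encode the summary and
# return it under the "labels" key, ready for a seq2seq model.
encoded = tokenizer(
    sample["text"],
    text_target=sample["target"],
    max_length=512,
    truncation=True,
)
print(encoded["input_ids"][:10])
print(encoded["labels"][:10])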

Dataset Fields

The dataset has the following fields (also called "features"):

{
  "text": "Value(dtype='string', id=None)",
  "target": "Value(dtype='string', id=None)"
}
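
Declared with the datasets library, the same schema looks like the sketch below (Features and Value are the standard datasets classes; the variable name is illustrative).

from datasets import Features, Value

# The two string fields listed above, expressed as a datasets schema.
features = Features({
    "text": Value("string"),
    "target": Value("string"),
})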

Dataset Splits

This dataset is split into a train and a validation split. The split sizes are as follows:

Split name   Num samples
train        5850
valid        1679
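
The sketch below shows one way the splits could be loaded and checked; it assumes the repository id dddb/autotrain-data-mt5_chinese_small_finetune (taken from this card) can be read with the standard load_dataset API and that the split names match the table above.

from datasets import load_dataset

# Assumption: the repo loads with the generic loader and exposes the
# train/valid splits listed in the table above.
ds = load_dataset("dddb/autotrain-data-mt5_chinese_small_finetune")
print(ds)
print(len(ds["train"]), len(ds["valid"]))   # expected 5850 and 1679 per this card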