Honkai Impact 3rd Game Story Corpus (崩坏三游戏剧情语料)

A total of 92,421 lines of story dialogue (with speaker labels) plus narration, covering Honkai Impact 3rd from the first main-story chapter "主线1黄昏、少女、战舰" up to "主线第二部03间章:一个梦游者的苦痛".

This dataset is built from the honkai_impact_3rd_game_playthrough video dataset; an AI pipeline converts the gameplay videos into a structured text story corpus.

The AI pipeline is summarized as follows:

  1. Download the videos part by part (using BBDown to fetch the Honkai Impact 3rd story videos from Bilibili)
  2. Split each video into frames (one frame per second)
  3. Run OCR on every frame (using PaddleOCR); a sketch of steps 2-3 follows this list
  4. Run structured parsing on every frame with a VLM (using MiniCPM-V-2_6; the input is the frame image plus its OCR result, the output is structured JSON)
  5. Rule-based post-processing
    • Normalize the VLM output (e.g., denoising, dropping malformed outputs)
    • Deduplicate and merge information from intermediate frames (because each dialogue line is animated in character by character, sampling one frame per second yields many "intermediate" and "duplicate" frames: frames where a line is only half displayed, and frames whose content simply repeats. The structured information parsed from such frames is merged into the later frame that shows the complete line, using tolerant matching such as edit distance to absorb, e.g., one or two mis-recognized characters in an intermediate frame; a second sketch after this list illustrates the idea)
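
For concreteness, here is a minimal sketch of steps 2-3, assuming OpenCV for frame sampling and the PaddleOCR 2.x Python API (file naming and output handling are illustrative, not the exact pipeline code):

import cv2
from paddleocr import PaddleOCR

def sample_frames(video_path, out_dir, every_sec=1.0):
    """Step 2: save roughly one frame per second from the video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(round(fps * every_sec)))
    idx, saved = 0, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            path = f"{out_dir}/frame_{idx // step}.jpg"
            cv2.imwrite(path, frame)
            saved.append(path)
        idx += 1
    cap.release()
    return saved

# Step 3: per-frame OCR with PaddleOCR's Chinese model.
ocr = PaddleOCR(use_angle_cls=True, lang="ch")

def ocr_frame(frame_path):
    """Return the detected text lines for one frame."""
    result = ocr.ocr(frame_path, cls=True)
    lines = result[0] or []          # each entry: (box, (text, confidence))
    return [text for _box, (text, _score) in lines]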

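And a minimal sketch of the intermediate-frame merging idea from step 5, using difflib similarity as a stand-in for the edit-distance check (the real rules are more involved):

from difflib import SequenceMatcher

def is_prefix_like(partial, full, threshold=0.9):
    """True if `partial` looks like a truncated (and possibly slightly
    mis-OCR'd) version of the beginning of `full`."""
    if len(partial) > len(full):
        return False
    head = full[: len(partial)]
    return SequenceMatcher(None, partial, head).ratio() >= threshold

def merge_frames(parsed_frames):
    """Collapse consecutive intermediate/duplicate frames into the last,
    complete one. Each item is a dict like {"role": ..., "content": ...}."""
    merged = []
    for cur in parsed_frames:
        if merged:
            prev = merged[-1]
            same_role = prev.get("role") == cur.get("role")
            if same_role and is_prefix_like(prev["content"], cur["content"]):
                merged[-1] = cur          # cur supersedes the partial frame
                continue
        merged.append(cur)
    return merged
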
Note: since the corpus is produced by an AI pipeline, errors (misrecognition and the like) are unavoidable, but the overall data quality is still reasonably good.

Example 1 (Good Case)

  • Some frames
Frame a: frame_129.jpg; Frame b: frame_144.jpg

(Note: Frame a here is an "intermediate frame" where the line has not finished displaying. Thanks to the intermediate-frame merging step, the first entry in the Parsed Result below contains the complete line.)

  • Parsed Result
{"chapter": "主线32世界的止境-03", "chapter_id": 138, "utter_id": "138-22", "type": "dialogue", "role": "幽兰黛尔", "content": "倒也不能掉以轻心。对于将世界泡重新连回「虚数的末梢」这一行为来说,真正的关键在于锚点本身的稳定性。这和空间上的远近不完全是一回事。"}
{"chapter": "主线32世界的止境-03", "chapter_id": 138, "utter_id": "138-23", "type": "dialogue", "role": "琪亚娜", "content": "……锚点?那是什么?"}
{"chapter": "主线32世界的止境-03", "chapter_id": 138, "utter_id": "138-24", "type": "dialogue", "role": "幽兰黛尔", "content": "「锚点」是允许世界泡连接到其他空间的一种反演基点。举个例子的话……"}
{"chapter": "主线32世界的止境-03", "chapter_id": 138, "utter_id": "138-25", "type": "dialogue", "role": "幽兰黛尔", "content": "就像我体内的世界泡需要锚定在我自己的身上,而你的记忆空间也会固定在你的律者核心上。"}
{"chapter": "主线32世界的止境-03", "chapter_id": 138, "utter_id": "138-26", "type": "dialogue", "role": "幽兰黛尔", "content": "那边的梅博士也一定需要现实世界的某样东西来做到这一点。"}
{"chapter": "主线32世界的止境-03", "chapter_id": 138, "utter_id": "138-27", "type": "dialogue", "role": "幽兰黛尔", "content": "难道就是那座桥?"}
{"chapter": "主线32世界的止境-03", "chapter_id": 138, "utter_id": "138-28", "type": "dialogue", "role": "琪亚娜", "content": "……?"}
{"chapter": "主线32世界的止境-03", "chapter_id": 138, "utter_id": "138-29", "type": "dialogue", "role": "琪亚娜", "content": "博士完全没有提到这一点啊。"}
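
Records with this schema are easy to regroup into readable scripts. A small sketch, assuming the corpus has been loaded as a list of dicts shaped like the lines above:

from collections import defaultdict

def to_script(records):
    """Group utterances by chapter and render them as plain-text script lines."""
    chapters = defaultdict(list)
    for r in records:
        chapters[r["chapter"]].append(r)
    scripts = {}
    for chapter, utts in chapters.items():
        # utter_id looks like "138-22": chapter_id, then an in-chapter sequence number
        utts.sort(key=lambda r: int(r["utter_id"].split("-")[1]))
        lines = []
        for r in utts:
            if r["type"] == "dialogue":
                lines.append(f'{r["role"]}: {r["content"]}')
            else:  # narration
                lines.append(r["content"])
        scripts[chapter] = "\n".join(lines)
    return scripts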

Example 2 (Bad Case)

  • Some frames
Frame a: frame_130.jpg; Frame b: frame_941.jpg
  • Parsed Result
{"chapter": "开放世界-天命总部 第1章(2.3活动)(时间线-主线7攻占第三空港后)", "chapter_id": 12, "utter_id": "12-145", "type": "dialogue", "role": "爱酱", "content": "这里怎么还会有防空炮啊!"}
{"chapter": "开放世界-天命总部 第1章(2.3活动)(时间线-主线7攻占第三空港后)", "chapter_id": 12, "utter_id": "12-146", "type": "dialogue", "role": "爱酱", "content": "等我的骇入权限高了再来收拾你们!"}
{"chapter": "开放世界-天命总部 第1章(2.3活动)(时间线-主线7攻占第三空港后)", "chapter_id": 12, "utter_id": "12-147", "type": "narration", "role": "narration", "content": "剧情战场会议调查空港的中心区域[0/1]破坏生成装置"}
{"chapter": "开放世界-天命总部 第1章(2.3活动)(时间线-主线7攻占第三空港后)", "chapter_id": 12, "utter_id": "12-148", "type": "dialogue", "role": "无量塔姬子", "content": "防御系统已经解除,我们暂时安全了。但还是不知道琪亚娜在哪里。"}
{"chapter": "开放世界-天命总部 第1章(2.3活动)(时间线-主线7攻占第三空港后)", "chapter_id": 12, "utter_id": "12-149", "type": "dialogue", "role": "德丽莎", "content": "给逆的两位博士一点时间,她们侵入了空港的系统寻找线索。"}
{"chapter": "开放世界-天命总部 第1章(2.3活动)(时间线-主线7攻占第三空港后)", "chapter_id": 12, "utter_id": "12-150", "type": "dialogue", "role": "德丽莎女士", "content": "我有一个提案。"}
{"chapter": "开放世界-天命总部 第1章(2.3活动)(时间线-主线7攻占第三空港后)", "chapter_id": 12, "utter_id": "12-151", "type": "dialogue", "role": "德丽莎", "content": "你想说天父吧?爱因斯坦,你能启动它吗?"}
{"chapter": "开放世界-天命总部 第1章(2.3活动)(时间线-主线7攻占第三空港后)", "chapter_id": 12, "utter_id": "12-152", "type": "dialogue", "role": "德丽莎", "content": "并不能完全启动。"}
  • Explanation

You can see that role names are sometimes inconsistent (e.g., 德丽莎女士 vs. 德丽莎), and there are occasional OCR errors (e.g., "给逆的两位博士" should actually be "给逆熵的两位博士").

The most likely cause is that the dialogue font in these scenes is slanted rather than a regular upright typeface, which makes OCR harder than usual and increases the error rate.
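
If strictly consistent role names matter for your use case, a cheap post-hoc cleanup is to fold a rare variant into a frequent name it contains (e.g., 德丽莎女士 → 德丽莎). A rough heuristic sketch, not part of the released data:

from collections import Counter

def normalize_roles(records, min_count=50):
    """Fold rare role-name variants into a frequent name they contain
    (e.g. "德丽莎女士" -> "德丽莎"). A rough heuristic, not part of the data."""
    counts = Counter(r["role"] for r in records if r["type"] == "dialogue")
    canonical = [name for name, c in counts.most_common() if c >= min_count]
    for r in records:
        role = r.get("role", "")
        if r["type"] != "dialogue" or role in canonical:
            continue
        matches = [c for c in canonical if c and c in role]
        if matches:
            r["role"] = max(matches, key=len)   # prefer the longest contained name
    return records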

Top Speaking Roles

role  count
芽衣 4859
希儿 3942
琪亚娜 3189
符华 2564
布洛妮娅 2458
德丽莎 2091
松雀 1970
爱莉希雅 1669
幽兰黛尔 1537
薇塔 1246
凯文 1155
苏莎娜 1144
识之律者 1133
时雨绮罗 1113
爱因斯坦 1013
格蕾修 1009
奥托 999
普罗米修斯 981
特斯拉 959
渡鸦 949
希娜狄雅 887
科拉莉 860
丽塔 779
米丝忒琳 689
华 598
阿波尼亚 571
灰蛇 562
??? 537
维尔薇 520
苏 507
白及 493
帕朵菲莉丝 488
瑟莉姆 485
梅比乌斯 472
梅 446
姬子 441
人偶 433
李素裳 427
穷困潦倒乐乐酱 421
侵蚀之律者 418
赫丽娅 398
莫里亚蒂 386
薛定谔 385
樱 370
大魔术师维尔薇 360
萝莎莉娅 331
长光 302
羽兔 293

VLM Prompt

PROMPT = """This is an image of RPG game. Given associated OCR result, please help us identify the existence of story narrations and dialogues and extract them in structured format.
This is the associated OCR results:
\`\`\`ocr
{ocr}
\`\`\`

There are two types of story content you should extract:

- Narration: single line or paragraph of narration, telling the story background and plots
- Dialogue: dialogue contents spoken by a character. The speaker character name and spoken content must co-appear in the image.

Note:

- Be strict with OCR texts, you are NOT allowed to fabricate contents that are not captured by OCR results.
- The OCR often separate multiline texts, and it's your task to concatenate consecutive lines if necessary.
- There might be noisy textual contents (e.g., advertisement, UI elements, combos, etc.), which are not our interest.
- There might be texts indicating state/environment information (e.g., location, time, source, state, etc), you can extract them as well in environment field.

Please output your response in JSON structure in one of the 3 following ways:

1. In case of no desired content (neither dialogue nor narration), output a JSON dict whose type is null.

\`\`\`json
{{"type": null}}
\`\`\`

2. In case of dialogue

\`\`\`json
{{
    "type": "dialogue",
    "role": "<speaker name>",
    "content": "<spoken content>",
    "state": "<state/environment info, null if there isn't any>"
}}
\`\`\`

3. In case of narration

\`\`\`json
{{
    "type": "narration",
    "content": "<narrative content>"
}}
\`\`\`"""
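
The doubled braces around the JSON examples and the single-braced {ocr} placeholder suggest the prompt is filled with str.format. The format_template helper used in the snippet below is not shown in this card; it could plausibly be as simple as:

def format_template(ocr_lines):
    """Hypothetical helper: fill the OCR text into the PROMPT template above.
    The doubled braces in PROMPT keep the literal JSON braces intact."""
    ocr_text = ocr_lines if isinstance(ocr_lines, str) else "\n".join(ocr_lines)
    return PROMPT.format(ocr=ocr_text)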

VLM code snippet

# model / tokenizer are assumed to be MiniCPM-V-2_6 loaded beforehand via
# transformers (AutoModel / AutoTokenizer with trust_remote_code=True)
from PIL import Image
from tqdm import tqdm

# generate: one chat call per batch; each message pairs a frame image
# with its formatted OCR prompt (see format_template above)
for batch in tqdm(batches):
    msgs = [
        [{"role": "user", "content": [Image.open(b["frame_path"]), format_template(b["ocr"])]}]
        for b in batch
    ]
    # MiniCPM-V-2_6 batch inference: the images travel inside msgs, so image=None
    outputs = model.chat(
        image=None,
        msgs=msgs,
        tokenizer=tokenizer
    )
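
The prompt asks for one of three JSON shapes inside a ```json fence, so the "normalize the VLM output" step presumably begins by extracting and validating that JSON. A hedged sketch of what such a check might look like (assuming the model replies with a plain ```json fence):

import json
import re

FENCE = re.compile(r"```json\s*(.*?)\s*```", re.DOTALL)

def parse_vlm_output(text):
    """Keep only well-formed dialogue/narration records from a model reply;
    return None for anything else (a stand-in for the normalization step)."""
    m = FENCE.search(text)
    raw = m.group(1) if m else text
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(obj, dict):
        return None
    if obj.get("type") == "dialogue" and obj.get("role") and obj.get("content"):
        return obj
    if obj.get("type") == "narration" and obj.get("content"):
        return obj
    return None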