|
--- |
|
license: cc-by-nc-sa-4.0 |
|
extra_gated_prompt: >- |
|
The LongVideoBench dataset contains links to web videos for data collection |
|
purposes. LongVideoBench does not own the content linked within this dataset; |
|
all rights and copyright belong to the respective channel owners. Ensuring |
|
compliance with platform terms and conditions is the responsibility of these |
|
source channels. By accessing this dataset, you acknowledge and agree to the |
|
following terms: |
|
extra_gated_fields: |
|
I understand that LongVideoBench does not own the videos in this dataset: checkbox |
|
I understand that LongVideoBench is not the creator of the videos in this dataset: checkbox |
|
I understand that LongVideoBench may modify/delete its contents subject to the requirements of the creators or source platforms: checkbox
|
I agree to use this dataset for non-commercial use ONLY: checkbox |
|
I agree with the data license (CC-BY-NC-SA 4.0) for this dataset: checkbox
|
task_categories: |
|
- multiple-choice |
|
- visual-question-answering |
|
language: |
|
- en |
|
tags: |
|
- video understanding |
|
- long-context |
|
- multimodal |
|
pretty_name: LongVideoBench
|
--- |
|
|
|
|
|
![LongVideoBench](https://github.com/longvideobench/longvideobench.github.io/blob/main/logo.png?raw=true)
|
|
|
|
|
# Dataset Card for LongVideoBench |
|
|
|
|
|
|
|
|
|
|
|
|
Large multimodal models (LMMs) are handling increasingly long and complex inputs. However, few public benchmarks are available to assess these advances. To address this gap, we introduce LongVideoBench, a question-answering benchmark with video-language interleaved inputs of up to an hour in length. It comprises 3,763 web-collected videos with subtitles across diverse themes, designed to evaluate LMMs on long-term multimodal understanding.
|
|
|
The main challenge LongVideoBench targets is accurately retrieving and reasoning over detailed information from lengthy inputs. To probe this, we present a novel task called referring reasoning, in which each question contains a referring query that points to related video context, requiring the model to reason over the referenced details.
|
|
|
LongVideoBench includes 6,678 human-annotated multiple-choice questions across 17 categories, making it one of the most comprehensive benchmarks for long-form video understanding. Evaluations show significant challenges even for advanced proprietary models (e.g., GPT-4o, Gemini-1.5-Pro, GPT-4-Turbo), with open-source models performing worse. Performance improves only when models process more frames, establishing LongVideoBench as a valuable benchmark for future long-context LMMs. |
|
|
|
|
|
## Dataset Details |
|
|
|
### Dataset Description |
|
|
|
|
|
|
- **Curated by:** LongVideoBench Team |
|
- **Language(s) (NLP):** English |
|
- **License:** CC-BY-NC-SA 4.0 |
|
|
|
### Dataset Sources
|
|
|
|
|
|
- **Repository:** [https://github.com/longvideobench/LongVideoBench](https://github.com/longvideobench/LongVideoBench) |
|
- **Homepage:** [https://longvideobench.github.io](https://longvideobench.github.io) |
|
- **Leaderboard:** [https://huggingface.co/spaces/longvideobench/LongVideoBench](https://huggingface.co/spaces/longvideobench/LongVideoBench) |
|
|
|
## Uses |
|
|
|
|
|
|
1. Download the dataset with the Hugging Face CLI:
|
|
|
```shell |
|
huggingface-cli download longvideobench/LongVideoBench --repo-type dataset --local-dir LongVideoBench --local-dir-use-symlinks False |
|
``` |
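
Alternatively, the same download can be done from Python with `huggingface_hub.snapshot_download`. This is a minimal sketch, and the local directory name is only an example; since the dataset is gated, make sure you are logged in first (e.g. via `huggingface-cli login`).

```python
from huggingface_hub import snapshot_download

# Download the full dataset repository (tar parts, subtitles, JSON annotations)
# into a local folder. "LongVideoBench" as the target directory is just an example.
snapshot_download(
    repo_id="longvideobench/LongVideoBench",
    repo_type="dataset",
    local_dir="LongVideoBench",
)
```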
|
|
|
2. Extract the videos and subtitles from the `.tar` archives:
|
|
|
```shell |
|
cat videos.tar.part.* > videos.tar |
|
tar -xvf videos.tar |
|
tar -xvf subtitles.tar |
|
``` |
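
If you would rather script this step, a small Python sketch using only the standard library does the same reassembly and extraction (it assumes you run it inside the downloaded `LongVideoBench` directory):

```python
import glob
import shutil
import tarfile

# Reassemble the split video archive: the parts must be concatenated in name order.
with open("videos.tar", "wb") as out:
    for part in sorted(glob.glob("videos.tar.part.*")):
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)

# Extract videos and subtitles into the current directory.
for archive in ("videos.tar", "subtitles.tar"):
    with tarfile.open(archive) as tar:
        tar.extractall()
```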
|
|
|
3. Use the [LongVideoBench](https://github.com/longvideobench/LongVideoBench) dataloader to load the data from the raw MP4 files and subtitles:
|
|
|
- (a) Install the dataloader: |
|
|
|
```shell |
|
git clone https://github.com/LongVideoBench/LongVideoBench.git |
|
cd LongVideoBench |
|
pip install -e . |
|
``` |
|
- (b) Load the dataset in a Python script:
|
|
|
```python |
|
from longvideobench import LongVideoBenchDataset |
|
|
|
# validation |
|
dataset = LongVideoBenchDataset(YOUR_DATA_PATH, "lvb_val.json", max_num_frames=64) |
|
|
|
# test |
|
dataset = LongVideoBenchDataset(YOUR_DATA_PATH, "lvb_test_wo_gt.json", max_num_frames=64) |
|
|
|
print(dataset[0]["inputs"]) # A list consisting of PIL.Image and strings. |
|
``` |
|
|
|
The "inputs" are interleaved video frames and text subtitles, followed by questions and option prompts. You can then convert them to the format that your LMMs can accept. |
|
|
|
|
|
### Direct Use |
|
|
|
|
|
|
This dataset is intended for evaluating LMMs on video understanding and long-context multimodal understanding.
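
As an illustration, here is a hedged sketch of an evaluation loop over the test split, assuming the dataloader exposes the usual PyTorch-style `__len__`/`__getitem__` interface. `run_lmm` is a stand-in for your own inference code, and collecting predictions into a plain list is only an assumption here; the official submission format is described on the leaderboard.

```python
from longvideobench import LongVideoBenchDataset

def run_lmm(interleaved_inputs):
    # Stand-in for your model: should return one option letter, e.g. "A".
    raise NotImplementedError

dataset = LongVideoBenchDataset(YOUR_DATA_PATH, "lvb_test_wo_gt.json", max_num_frames=64)

predictions = []
for i in range(len(dataset)):
    predictions.append(run_lmm(dataset[i]["inputs"]))
```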
|
|
|
### Out-of-Scope Use |
|
|
|
|
|
|
We advise against using this dataset for training.
|
|
|
## Dataset Structure |
|
|
|
|
|
|
- `lvb_val.json`: Validation set annotations. |
|
|
|
- `lvb_test_wo_gt.json`: Test set annotations; the correct choices are withheld.
|
|
|
- `videos.tar.*`: Video files, split across multiple `.tar` parts (reassemble with `cat` before extracting).
|
|
|
- `subtitles.tar`: Subtitle files.
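
To take a quick look at the annotation files without assuming specific field names, a small sketch (it assumes the JSON files hold a top-level list of question records and sit in the downloaded `LongVideoBench` directory):

```python
import json

with open("LongVideoBench/lvb_val.json") as f:
    val_annotations = json.load(f)

print(len(val_annotations))       # number of validation questions
print(val_annotations[0].keys())  # annotation fields available per question
```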
|
|
|
|
|
## Dataset Card Contact |
|
|
|
haoning001@e.ntu.edu.sg |
|
|
|
|
|
## Citation

```bibtex
|
@misc{wu2024longvideobenchbenchmarklongcontextinterleaved, |
|
title={LongVideoBench: A Benchmark for Long-context Interleaved Video-Language Understanding}, |
|
author={Haoning Wu and Dongxu Li and Bei Chen and Junnan Li}, |
|
year={2024}, |
|
eprint={2407.15754}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CV}, |
|
url={https://arxiv.org/abs/2407.15754}, |
|
} |
|
``` |