arxiv:2503.11495

V-STaR: Benchmarking Video-LLMs on Video Spatio-Temporal Reasoning

Published on Mar 14
· Submitted by lwpyh on Mar 18

Abstract

Humans process video reasoning in a sequential spatio-temporal logic: we first identify the relevant frames ("when"), then analyse the spatial relationships ("where") between key objects, and finally leverage these relationships to draw inferences ("what"). But can Video Large Language Models (Video-LLMs) also "reason through a sequential spatio-temporal logic" in videos? Existing Video-LLM benchmarks primarily focus on assessing object presence and neglect relational reasoning. Consequently, it is difficult to measure whether a model truly comprehends object interactions (actions/events) in videos or merely relies on pre-trained "memory" of co-occurrences as a bias when generating answers. In this work, we introduce the Video Spatio-Temporal Reasoning (V-STaR) benchmark to address these shortcomings. The key idea is to decompose video understanding into a Reverse Spatio-Temporal Reasoning (RSTR) task that simultaneously evaluates what objects are present, when events occur, and where they are located, while capturing the underlying chain-of-thought (CoT) logic. To support this evaluation, we construct a dataset that elicits the spatio-temporal reasoning process of Video-LLMs. It contains coarse-to-fine CoT questions generated by a semi-automated GPT-4-powered pipeline, embedding explicit reasoning chains to mimic human cognition. Experiments on 14 Video-LLMs with V-STaR reveal significant gaps between current Video-LLMs and the requirements of robust and consistent spatio-temporal reasoning.
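To make the coarse-to-fine RSTR decomposition concrete, here is a minimal sketch of what a single question-chain sample could look like. The field names and answer formats below are assumptions for illustration only, not the released V-STaR schema.

```python
# Hypothetical sketch of one RSTR sample following the "what-when-where" order.
# All field names and value formats are assumptions, not the official data layout.
sample = {
    "video_id": "example_0001",
    "chain_order": "what-when-where",
    "questions": [
        {
            "level": "what",                     # coarse: which event/object is involved?
            "question": "What is the dog doing in the video?",
            "answer": "Jumping over a fence",
        },
        {
            "level": "when",                     # finer: temporal grounding
            "question": "When does the dog jump over the fence?",
            "answer": [12.4, 15.1],              # start/end time in seconds
        },
        {
            "level": "where",                    # finest: spatial grounding
            "question": "Where is the dog when it jumps?",
            "answer": [0.32, 0.18, 0.61, 0.74],  # normalized bounding box [x1, y1, x2, y2]
        },
    ],
}
```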

Community


📢 New Benchmark Release | V-STaR: Benchmarking Video-LLMs on Video Spatio-Temporal Reasoning

💡Key Innovations

V-STaR is the first benchmark explicitly designed to evaluate the spatio-temporal reasoning ability of Video-LLMs, posing questions in the context of "when", "where", and "what", spanning:

  • 9 video domains
  • 2094 spatio-temporal reasoning samples
  • 2 Reverse Spatio-Temporal Reasoning (RSTR) question chains: "what-when-where" and "what-where-when" (see the evaluation sketch after this list)
  • A GitHub MLLM reasoning collection repository: Awesome-MLLM-Reasoning-Collection
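A minimal sketch of how the two RSTR chain orders could be evaluated is shown below, reusing the hypothetical sample layout from the earlier sketch. The `query_video_llm` callable is a placeholder I introduce for illustration; it is not an API shipped with the V-STaR release.

```python
# Minimal evaluation-loop sketch for an RSTR question chain.
# `query_video_llm` is a hypothetical placeholder (video_id, prompt) -> answer string.
from typing import Callable, Dict, List

def run_chain(sample: Dict, query_video_llm: Callable[[str, str], str]) -> List[str]:
    """Ask the chain's questions in order, feeding each answer back as context."""
    context = ""
    predictions: List[str] = []
    for q in sample["questions"]:
        prompt = f"{context}Question ({q['level']}): {q['question']}"
        pred = query_video_llm(sample["video_id"], prompt)
        predictions.append(pred)
        # Carry the model's own answer forward so the chain mimics coarse-to-fine CoT.
        context += f"Q: {q['question']}\nA: {pred}\n"
    return predictions
```

The same loop applies to both the "what-when-where" and "what-where-when" orders; only the ordering of the `questions` list changes.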

V-STaR reveals a fundamental weakness in existing Video-LLMs regarding causal spatio-temporal reasoning and inspires research into improving trustworthy spatio-temporal understanding in future Video-LLMs.

👉 Try it now: GitHub | HuggingFace

