---
configs:
- config_name: train_9k
data_files:
- split: train
path: "msrvtt_train_9k.json"
- config_name: train_7k
data_files:
- split: train
path: "msrvtt_train_7k.json"
- config_name: test_1k
data_files:
- split: test
path: "msrvtt_test_1k.json"
task_categories:
- text-to-video
- text-retrieval
- video-classification
language:
- en
size_categories:
- 1K<n<10K
---
[MSRVTT](https://openaccess.thecvf.com/content_cvpr_2016/html/Xu_MSR-VTT_A_Large_CVPR_2016_paper.html) contains 10K video clips and 200K captions.
We adopt the standard `1K-A split` protocol, introduced in [JSFusion](https://openaccess.thecvf.com/content_ECCV_2018/html/Youngjae_Yu_A_Joint_Sequence_ECCV_2018_paper.html), which has since become the de facto benchmark split for text-video retrieval.
Train:
- train_7k: 7,010 videos, 140,200 captions
- train_9k: 9,000 videos, 180,000 captions
Test:
- test_1k: 1,000 videos, 1,000 captions
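
Each config above points to a plain JSON annotation file, so the splits can also be read directly with the standard library. A minimal sketch, assuming each record pairs a video id with a caption (the field names `video_id` and `caption` are assumptions for illustration, not verified against the actual `msrvtt_test_1k.json` schema):

```python
import json

# Hypothetical records mimicking the annotation layout; the real field
# names in msrvtt_test_1k.json may differ.
sample_records = [
    {"video_id": "video7010", "caption": "a man is singing on stage"},
    {"video_id": "video7011", "caption": "a dog runs through a park"},
]

# Write a small stand-in file, then load it the same way one would load
# a split file such as msrvtt_test_1k.json.
with open("msrvtt_sample.json", "w") as f:
    json.dump(sample_records, f)

with open("msrvtt_sample.json") as f:
    records = json.load(f)

print(len(records))  # one entry per video-caption pair
```

For hub-based loading, `datasets.load_dataset` with one of the config names (`train_9k`, `train_7k`, `test_1k`) selects the corresponding file automatically.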