Vision-CAIR committed 3102606 (verified) · 1 Parent(s): 8be3faa

Update README.md

Files changed (1)
  1. README.md +40 -3
README.md CHANGED
@@ -1,3 +1,40 @@
- ---
- license: bsd-3-clause
- ---
+ ---
+ license: bsd-3-clause
+ pretty_name: "TVQA-Long"
+ tags:
+ - video_understanding
+ - long-video-benchmark
+ - long-video-QA
+ ---
+
+ # Dataset Card for TVQA-Long
+
+ TVQA-Long is a benchmark for question answering over long-form videos, introduced with the Goldfish model. It extends the TVQA dataset from short clips to full TV episodes, so answering a question requires understanding and retrieving the relevant moments from an arbitrarily long video.
+
+ ### Dataset Sources
+
+ - **Repository:** https://github.com/Vision-CAIR/MiniGPT4-video
+ - **Paper:** https://arxiv.org/abs/2407.12679
+
+ ## Citation
+
+ **BibTeX:**
+ @misc{ataallah2024goldfishvisionlanguageunderstandingarbitrarily,
+   title={Goldfish: Vision-Language Understanding of Arbitrarily Long Videos},
+   author={Kirolos Ataallah and Xiaoqian Shen and Eslam Abdelrahman and Essam Sleiman and Mingchen Zhuge and Jian Ding and Deyao Zhu and Jürgen Schmidhuber and Mohamed Elhoseiny},
+   year={2024},
+   eprint={2407.12679},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV},
+   url={https://arxiv.org/abs/2407.12679},
+ }
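
For reference, a minimal sketch of fetching the dataset files from the Hub. This assumes the card belongs to a dataset repo with id `Vision-CAIR/TVQA-Long` (the repo id is an assumption; adjust it to the actual namespace):

```python
# Minimal sketch: download the TVQA-Long files from the Hugging Face Hub.
# Assumption: the dataset repo id is "Vision-CAIR/TVQA-Long".
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Vision-CAIR/TVQA-Long",
    repo_type="dataset",  # dataset repo, not a model repo
)
print(f"Dataset files downloaded to: {local_dir}")
```

`snapshot_download` mirrors the whole repo locally, which is the safe default here since benchmark repos like this typically ship raw annotation and video files rather than a loading script for `datasets.load_dataset`.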