ikodoh committed
Commit 8089af6
1 Parent(s): 5578909

first commit
.gitattributes CHANGED
@@ -53,3 +53,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.jpg filter=lfs diff=lfs merge=lfs -text
  *.jpeg filter=lfs diff=lfs merge=lfs -text
  *.webp filter=lfs diff=lfs merge=lfs -text
+ *.json filter=lfs diff=lfs merge=lfs -text
+ *.jsonl filter=lfs diff=lfs merge=lfs -text
+ *.csv filter=lfs diff=lfs merge=lfs -text
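These new rules route the JSON, JSONL, and CSV annotation files added in this commit through Git LFS. A minimal sketch of how the same patterns could be added, assuming git-lfs is installed locally:

```
# Track the annotation formats with Git LFS and stage the updated attributes file.
git lfs track "*.json" "*.jsonl" "*.csv"
git add .gitattributes
```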
README.md ADDED
@@ -0,0 +1,115 @@
+ # Large Language Models are Temporal and Causal Reasoners for Video Question Answering
+
+ This is the official implementation of Flipped-VQA (EMNLP 2023) ([arxiv](https://arxiv.org/abs/2310.15747)) ([demo](https://ikodoh.github.io/flipped_vqa_demo.html)).
+
+ > Dohwan Ko<sup>1*</sup>, Ji Soo Lee<sup>1*</sup>, Wooyoung Kang<sup>2</sup>, Byungseok Roh<sup>2</sup>, Hyunwoo J. Kim<sup>1</sup>.
+ >
+ ><sup>1</sup>Department of Computer Science and Engineering, Korea University <sup>2</sup>Kakao Brain
+
+ [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/large-language-models-are-temporal-and-causal/video-question-answering-on-next-qa)](https://paperswithcode.com/sota/video-question-answering-on-next-qa?p=large-language-models-are-temporal-and-causal) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/large-language-models-are-temporal-and-causal/video-question-answering-on-situated)](https://paperswithcode.com/sota/video-question-answering-on-situated?p=large-language-models-are-temporal-and-causal) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/large-language-models-are-temporal-and-causal/video-question-answering-on-dramaqa)](https://paperswithcode.com/sota/video-question-answering-on-dramaqa?p=large-language-models-are-temporal-and-causal) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/large-language-models-are-temporal-and-causal/video-question-answering-on-vlep)](https://paperswithcode.com/sota/video-question-answering-on-vlep?p=large-language-models-are-temporal-and-causal) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/large-language-models-are-temporal-and-causal/video-question-answering-on-tvqa)](https://paperswithcode.com/sota/video-question-answering-on-tvqa?p=large-language-models-are-temporal-and-causal)
+
+ <div align="center">
+ <img src="asset/main.png" width="900px" />
+ </div>
+
+ ## Setup
+ To install requirements, run:
+ ```
+ git clone https://github.com/mlvlab/Flipped-VQA.git
+ cd Flipped-VQA
+ mkdir pretrained
+ conda create -n flipped-vqa python=3.8
+ conda activate flipped-vqa
+ sh setup.sh
+ ```
+
+ ## Dataset & LLaMA Preparation
+
+ * You can download our preprocessed datasets (NExT-QA, STAR, DramaQA, VLEP, and TVQA) on [Hugging Face](https://huggingface.co/datasets/ikodoh/Flipped-VQA-Data) (we also provide the fine-tuned model for each dataset).
+
+ ```
+ git lfs install
+ git clone https://huggingface.co/datasets/ikodoh/Flipped-VQA-Data
+ mv ./Flipped-VQA-Data/data ./
+ mv ./Flipped-VQA-Data/checkpoint ./
+ unzip ./data/tvqa/tvqa_subtitles.zip -d ./data/tvqa
+ rm -rf Flipped-VQA-Data ./data/tvqa/tvqa_subtitles.zip
+ ```
+
+ * You can download the original LLaMA checkpoints [here](https://github.com/facebookresearch/llama/tree/llama_v1), and put them in ```./pretrained```.
+
+ ```
+ ./pretrained
+ └─ llama
+     |─ 7B
+     |   |─ consolidated.00.pth
+     |   └─ params.json
+     |─ 13B
+     |   :
+     |─ 33B
+     |   :
+     └─ tokenizer.model
+ ```
+
+ ## Training LLaMA-VQA (LLaMA + Flipped-VQA)
+
+ ### NExT-QA
+
+ ```
+ torchrun --rdzv_endpoint 127.0.0.1:1234 --nproc_per_node 4 train.py --model 7B \
+ --max_seq_len 128 --batch_size 8 --epochs 5 --warmup_epochs 2 --bias 3.5 --tau 100. --max_feats 10 --dataset nextqa \
+ --blr 9e-2 --weight_decay 0.14 --output_dir ./checkpoint/nextqa --accum_iter 2 --vaq --qav
+ ```
+
+ ### STAR
+
+ ```
+ torchrun --rdzv_endpoint 127.0.0.1:1234 --nproc_per_node 4 train.py --model 7B \
+ --max_seq_len 128 --batch_size 8 --epochs 5 --warmup_epochs 2 --bias 3 --tau 100. --max_feats 10 --dataset star \
+ --blr 9e-2 --weight_decay 0.16 --output_dir ./checkpoint/star --accum_iter 1 --vaq --qav
+ ```
+
+ ### DramaQA
+
+ ```
+ torchrun --rdzv_endpoint 127.0.0.1:1234 --nproc_per_node 4 train.py --model 7B \
+ --max_seq_len 384 --batch_size 2 --epochs 5 --warmup_epochs 2 --bias 3 --tau 100. --max_feats 10 --dataset dramaqa \
+ --blr 9e-2 --weight_decay 0.10 --output_dir ./checkpoint/dramaqa --accum_iter 8 --vaq --qav
+ ```
+
+ ### VLEP
+
+ ```
+ torchrun --rdzv_endpoint 127.0.0.1:1234 --nproc_per_node 4 train.py --model 7B \
+ --max_seq_len 256 --batch_size 4 --epochs 5 --warmup_epochs 2 --bias 3 --tau 100. --max_feats 10 --dataset vlep \
+ --blr 6e-2 --weight_decay 0.20 --output_dir ./checkpoint/vlep --accum_iter 8 --sub --qav
+ ```
+
+ ### TVQA
+
+ ```
+ torchrun --rdzv_endpoint 127.0.0.1:1234 --nproc_per_node 8 train.py --model 7B \
+ --max_seq_len 650 --batch_size 1 --epochs 5 --warmup_epochs 2 --bias 3 --tau 100. --max_feats 10 --dataset tvqa \
+ --blr 7e-2 --weight_decay 0.02 --output_dir ./checkpoint/tvqa --accum_iter 4 --sub --vaq --qav
+ ```
+
+ The fine-tuned checkpoints for each dataset are available [here](https://huggingface.co/datasets/ikodoh/Flipped-VQA-Data).
+
+ ## Evaluation
+ From the training command, simply replace ```train.py``` with ```eval.py``` and add ```--resume ./your/checkpoint.pth```.
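+
+ For example, a sketch of evaluating on NExT-QA, assuming the fine-tuned checkpoint downloaded above sits at ```./checkpoint/nextqa.pth```:
+
+ ```
+ # Same flags as the NExT-QA training command, with train.py -> eval.py and --resume added.
+ torchrun --rdzv_endpoint 127.0.0.1:1234 --nproc_per_node 4 eval.py --model 7B \
+ --max_seq_len 128 --batch_size 8 --epochs 5 --warmup_epochs 2 --bias 3.5 --tau 100. --max_feats 10 --dataset nextqa \
+ --blr 9e-2 --weight_decay 0.14 --output_dir ./checkpoint/nextqa --accum_iter 2 --vaq --qav \
+ --resume ./checkpoint/nextqa.pth
+ ```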
+
+ ## Acknowledgements
+
+ This repo is built upon [LLaMA-Adapter](https://github.com/OpenGVLab/LLaMA-Adapter).
+
+ ## Citations
+
+ ```
+ @inproceedings{ko2023large,
+     title={Large Language Models are Temporal and Causal Reasoners for Video Question Answering},
+     author={Ko, Dohwan and Lee, Ji Soo and Kang, Wooyoung and Roh, Byungseok and Kim, Hyunwoo J},
+     booktitle={EMNLP},
+     year={2023}
+ }
+ ```
+
asset/main.png ADDED

Git LFS Details

  • SHA256: 77b321b1f29db11a109b8d3f93f6e8b4b20c110122096b6fe8c3dc3fb7d3f849
  • Pointer size: 132 Bytes
  • Size of remote file: 1.38 MB
checkpoint/dramaqa.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2563e5daa3614f31d8e8806e7fdca9da96094811d2dcd6aa97ea6961d24df5fa
+ size 54058095
checkpoint/nextqa.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c6509d11e3651564bc1d30f51cabe5e50419b84610fed86524011825eca59fb4
+ size 54058095
checkpoint/star.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:de117ad5552a3f25257df644789e6432e6dae47c8f206230fc9b20a8a2203d4c
+ size 54058095
checkpoint/tvqa.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:54bbe0547765d20262c579ba7e3f37f9b5e3a737e372d1dda9a9b82f7acbfede
+ size 54055407
checkpoint/vlep.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0044346ea500cffeef940b38152b149f7fa9b1223279a5a235e49553ad1be92c
+ size 54058095
data/dramaqa/AnotherMissOhQA_test_set.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d289df433b8890edd565912a0039be754c79d44280399dcc3eaf8714fb5645b
+ size 2442248
data/dramaqa/AnotherMissOhQA_train_set.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:12a0d579824c5e5eaacba4aff4559547da6b375638d506220d32c4e83c49e5c3
+ size 11454438
data/dramaqa/AnotherMissOhQA_val_set.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5e0f84e48eb2f6141dbc28a73d8de6589ff7e101495aebd1222982e0fa8e4f7d
+ size 2467499
data/dramaqa/clipvitl14.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c07a8eb4964a484184472efb0cdee53a873c7084cf09a00475d9571f8da33a21
+ size 347845769
data/nextqa/clipvitl14.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d73ea6941a096d5497443b325e0c693ca5cb5e0e58db44db32aa614f37f80176
+ size 366419999
data/nextqa/train.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:51ae6e611fa58835f63a0beb325fe647bb197c7e320a31f0bb7a9cf969e082b2
+ size 5578444
data/nextqa/val.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:43198bdef8436b8d64a9b75d846b0987c10cbf94ebf4be325c4a4e54634d66b8
+ size 814107
data/star/STAR_test.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:769f3033a71bfd55a5042a94b1463696d563ce37447addd42b65e208ee5678fc
+ size 2591326
data/star/STAR_train.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9dc9b8fd6026154ea99cd76ae2975957dd787312d88bb35b9d42df674fc3e257
+ size 411564765
data/star/STAR_val.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c88335ce19281d882cb9b47f8126296cddee864f4ad0cf99bebc9c0a8533097c
+ size 60041967
data/star/clipvitl14.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd32069bd79014196aac82b9949ae226a9e72efb903bac3b8ee75083883453e0
+ size 452328545
data/tvqa/clipvitl14.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:26187056e6c93102b328cd1d30a7f327da3f4741d2a9a88fe27ce94e2e87b33d
+ size 7700125181
data/tvqa/tvqa_subtitles.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b711a623a41cf4bc497b096d7cf836b3cadfb590318923dafd58f84440dac4c8
+ size 23814412
data/tvqa/tvqa_test_public.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:abe4718477756e9d37832e03b30718ce9096e41c103fa8297072c503d0264287
+ size 2724100
data/tvqa/tvqa_train.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b0eb4297aa82604d7844ef011b97a125be4805668283772c93729db96dbddc02
+ size 45599736
data/tvqa/tvqa_val.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e85ca671592ad915f53dddfb643eec643efd63c291578c830e4e1e46ade0015c
+ size 5694418
data/vlep/clipvitl14.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a29f10d540d29b8d4a38defffb21b00afe4820e71db9c24ff96eef48a66e6a58
+ size 509358025
data/vlep/vlep_dev_release.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e6d12fcc25900220c95ca458a803ee03d87ebbfbb665c7817792da260fdd6c46
+ size 1078762
data/vlep/vlep_subtitles.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a67f471a16ec582daa1b545bcd246fb269239741ebd28b716f20e206f2d6e18b
+ size 11616660
data/vlep/vlep_test_release.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e3b23f549fb3133c524798cacf77ed4601b226067a1437ece3ba37f40fccc539
+ size 977615
data/vlep/vlep_train_release.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bdae52205775829db507693386b8d24807b3f708bae0f3eea14f52a96000e085
+ size 4984854