|
--- |
|
license: apache-2.0 |
|
--- |
|
> # Cloned from https://github.com/amazon-science/mm-cot |
|
|
|
# Multimodal Chain-of-Thought Reasoning in Language Models |
|
|
|
<h5 align="center"><i>"Imagine learning a textbook without figures or tables."</i></h5> |
|
|
|
Multimodal-CoT incorporates vision features into a decoupled training framework. The framework consists of two training stages: (i) rationale generation and (ii) answer inference. Both stages share the same model architecture but differ in their input and output.
|
|
|
![](vision_features/mm-cot.png) |
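
For illustration, here is a minimal sketch of the two-stage flow. The model wrappers and their `generate` calls are placeholders, not the actual interfaces in `main.py`; the prompt-format names in the comments are the ones used in the training commands below.

```python
# A minimal sketch of the decoupled two-stage flow (placeholder interfaces, not
# the repo's actual classes). Both stages share the same architecture; they
# differ only in what goes into the prompt and what is generated.

def multimodal_cot(question, context, options, vision_features,
                   rationale_model, answer_model):
    # Stage (i): rationale generation (QCM-LE): the input is the Question,
    # Context and Multiple options plus the vision features; the target is the
    # Lecture/Explanation rationale.
    rationale = rationale_model.generate(
        text=(question, context, options),
        vision=vision_features,
    )

    # Stage (ii): answer inference (QCMG-A): the Generated rationale is appended
    # to the original input, and the model predicts the Answer.
    answer = answer_model.generate(
        text=(question, context, options, rationale),
        vision=vision_features,
    )
    return rationale, answer
```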
|
|
|
|
|
## Requirements |
|
|
|
Install all required Python dependencies:
|
|
|
```
pip install -r requirements.txt
```
|
|
|
## Datasets |
|
|
|
Download the ScienceQA dataset from the following repository:
|
|
|
```
https://github.com/lupantech/ScienceQA/tree/main/data
```
|
|
|
Download the extracted vision features from [vision_features](https://drive.google.com/file/d/13B0hc_F_45-UlqPLKSgRz-ALtFQ8kIJr/view?usp=share_link) and unzip the files under the `vision_features` folder.
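
As a quick sanity check after downloading, something like the snippet below can be used. The file names are assumptions (`problems.json` follows the ScienceQA release; `detr.npy` matches the `--img_type detr` flag used in the commands), so adjust the paths to whatever you actually unzip.

```python
# Quick sanity check for the downloaded assets.
# NOTE: the paths below are assumptions, not guaranteed file names.
import json

import numpy as np

with open("data/problems.json") as f:           # ScienceQA problem annotations
    problems = json.load(f)
print(f"loaded {len(problems)} ScienceQA problems")

features = np.load("vision_features/detr.npy")  # extracted DETR vision features
print("vision feature tensor shape:", features.shape)
```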
|
|
|
## Instructions |
|
|
|
### Training |
|
|
|
```
# rationale generation
CUDA_VISIBLE_DEVICES=0,1 python main.py \
    --model allenai/unifiedqa-t5-base \
    --user_msg rationale --img_type detr \
    --bs 8 --eval_bs 4 --eval_acc 10 --output_len 512 \
    --final_eval --prompt_format QCM-LE

# answer inference
CUDA_VISIBLE_DEVICES=0,1 python main.py \
    --model allenai/unifiedqa-t5-base \
    --user_msg answer --img_type detr \
    --bs 8 --eval_bs 4 --eval_acc 10 --output_len 64 \
    --final_eval --prompt_format QCMG-A \
    --eval_le experiments/rationale_allenai-unifiedqa-t5-base_detr_QCM-LE_lr5e-05_bs16_op512_ep20/predictions_ans_eval.json \
    --test_le experiments/rationale_allenai-unifiedqa-t5-base_detr_QCM-LE_lr5e-05_bs16_op512_ep20/predictions_ans_test.json
```
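
Note that the answer-inference stage consumes the rationales produced by the first command: `--eval_le` and `--test_le` point to the prediction files written under `experiments/`, where the directory name records the settings of the rationale run.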
|
|
|
### Inference |
|
|
|
Our trained models are available at [models](https://drive.google.com/file/d/1FtTYOJPHnWnFfCxNC6M3gar4RAX5E21b/view?usp=share_link). To use our trained models, please put them under the `models` folder.
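
Based on the paths used in the commands below, the `models` folder is expected to look roughly like this (the exact contents depend on the downloaded archive):

```
models/
├── MM-CoT-UnifiedQA-base-Rationale/     # rationale-generation checkpoint
├── MM-CoT-UnifiedQA-base-Answer/        # answer-inference checkpoint
└── rationale/
    ├── predictions_ans_eval.json        # rationales passed via --eval_le
    └── predictions_ans_test.json        # rationales passed via --test_le
```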
|
|
|
```
# rationale generation
CUDA_VISIBLE_DEVICES=0,1 python main.py \
    --model allenai/unifiedqa-t5-base \
    --user_msg rationale --img_type detr \
    --bs 8 --eval_bs 4 --eval_acc 10 --output_len 512 \
    --final_eval --prompt_format QCM-LE \
    --evaluate_dir models/MM-CoT-UnifiedQA-base-Rationale

# answer inference
CUDA_VISIBLE_DEVICES=0,1 python main.py \
    --model allenai/unifiedqa-t5-base \
    --user_msg answer --img_type detr \
    --bs 8 --eval_bs 4 --eval_acc 10 --output_len 64 \
    --final_eval --prompt_format QCMG-A \
    --eval_le models/rationale/predictions_ans_eval.json \
    --test_le models/rationale/predictions_ans_test.json \
    --evaluate_dir models/MM-CoT-UnifiedQA-base-Answer
```
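
Here `--evaluate_dir` points each command at a downloaded checkpoint, so the scripts load the released models and run evaluation only, without training; `--eval_le` and `--test_le` again supply the rationale predictions consumed by the answer-inference stage.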
|
|
|
## Citing MM-CoT |
|
|
|
```
@article{zhang2023multicot,
  title={Multimodal Chain-of-Thought Reasoning in Language Models},
  author={Zhang, Zhuosheng and Zhang, Aston and Li, Mu and Zhao, Hai and Karypis, George and Smola, Alex},
  journal={arXiv preprint arXiv:2302.00923},
  year={2023}
}
```
|
|
|
## License |
|
|
|
This project is licensed under the Apache-2.0 License. |
|
|
|
## Acknowledgement |
|
|
|
Part of our code is adapted from [ScienceQA](https://github.com/lupantech/ScienceQA) and [Transformers](https://github.com/huggingface/transformers).
|
|
|
We thank Pan Lu for providing the parameter sizes of the ScienceQA baselines.
|
|