---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: options
    list: string
  - name: answer
    dtype: string
  - name: task_plan
    dtype: string
  - name: image
    dtype: image
  splits:
  - name: 3d_how_many
    num_bytes: 964232493.0
    num_examples: 654
  - name: 3d_what
    num_bytes: 944850246.0
    num_examples: 645
  - name: 3d_where
    num_bytes: 989034725.0
    num_examples: 669
  - name: 3d_what_attribute
    num_bytes: 931184419.0
    num_examples: 639
  - name: 3d_where_attribute
    num_bytes: 897312251.0
    num_examples: 609
  - name: 3d_what_distance
    num_bytes: 836764094.0
    num_examples: 585
  - name: 3d_where_distance
    num_bytes: 925465404.0
    num_examples: 645
  - name: 3d_what_attribute_distance
    num_bytes: 970396774.0
    num_examples: 678
  - name: 3d_what_size
    num_bytes: 988177167.0
    num_examples: 675
  - name: 3d_where_size
    num_bytes: 898574558.0
    num_examples: 618
  - name: 3d_what_attribute_size
    num_bytes: 993251978.0
    num_examples: 678
  - name: 2d_how_many
    num_bytes: 40708392.0
    num_examples: 606
  - name: 2d_what
    num_bytes: 46567124.0
    num_examples: 681
  - name: 2d_where
    num_bytes: 47803083.0
    num_examples: 699
  - name: 2d_what_attribute
    num_bytes: 46026755.0
    num_examples: 657
  - name: 2d_where_attribute
    num_bytes: 47675852.0
    num_examples: 636
  - name: sg_what_object
    num_bytes: 24281703.0
    num_examples: 633
  - name: sg_what_attribute
    num_bytes: 26390284.0
    num_examples: 645
  - name: sg_what_relation
    num_bytes: 27153148.0
    num_examples: 618
  download_size: 10589322704
  dataset_size: 10645850450.0
configs:
- config_name: default
  data_files:
  - split: 3d_how_many
    path: data/3d_how_many-*
  - split: 3d_what
    path: data/3d_what-*
  - split: 3d_where
    path: data/3d_where-*
  - split: 3d_what_attribute
    path: data/3d_what_attribute-*
  - split: 3d_where_attribute
    path: data/3d_where_attribute-*
  - split: 3d_what_distance
    path: data/3d_what_distance-*
  - split: 3d_where_distance
    path: data/3d_where_distance-*
  - split: 3d_what_attribute_distance
    path: data/3d_what_attribute_distance-*
  - split: 3d_what_size
    path: data/3d_what_size-*
  - split: 3d_where_size
    path: data/3d_where_size-*
  - split: 3d_what_attribute_size
    path: data/3d_what_attribute_size-*
  - split: 2d_how_many
    path: data/2d_how_many-*
  - split: 2d_what
    path: data/2d_what-*
  - split: 2d_where
    path: data/2d_where-*
  - split: 2d_what_attribute
    path: data/2d_what_attribute-*
  - split: 2d_where_attribute
    path: data/2d_where_attribute-*
  - split: sg_what_object
    path: data/sg_what_object-*
  - split: sg_what_attribute
    path: data/sg_what_attribute-*
  - split: sg_what_relation
    path: data/sg_what_relation-*
---

# Dataset Card for TaskMeAnything-v1-imageqa-2024

TaskMeAnything-v1-imageqa-2024 benchmark dataset

🌐 Website | 📑 Paper | 🤗 Huggingface | 💻 Interface

If you like our project, please give us a star ⭐ on GitHub for the latest updates.
## TaskMeAnything-v1-2024

[TaskMeAnything-v1-imageqa-2024](https://huggingface.co/datasets/weikaih/TaskMeAnything-v1-imageqa-2024) is a benchmark that reflects the current progress of multimodal language models (MLMs) by `automatically` finding tasks that SOTA MLMs struggle with, using the TaskMeAnything Top-K queries. The benchmark includes 3,279 2D questions, 7,095 3D questions, and 1,896 real-image questions that the TaskMeAnything algorithm automatically estimated to be challenging for over 12 popular MLMs. The dataset contains 19 splits, each with 600+ questions from a specific task generator in TaskMeAnything-v1. Each row includes an image, a question, options, the answer, and the corresponding task plan.

## Load TaskMeAnything-v1-2024 ImageQA Dataset

```python
import datasets

dataset_name = 'weikaih/TaskMeAnything-v1-imageqa-2024'
dataset = datasets.load_dataset(dataset_name, split=TASK_GENERATOR_SPLIT)
```

where `TASK_GENERATOR_SPLIT` is one of the task generator splits, e.g., `2d_how_many`. A short sketch of inspecting a row from a split is included at the end of this card.

## Evaluation Results

### Overall

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65cb0dcc4913057ac82a7a31/_KadJKJSHhZXXfIfePaUg.png)

### Breakdown of performance on each task type

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65cb0dcc4913057ac82a7a31/-DrQ90FuGatJE4CuHsWS9.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65cb0dcc4913057ac82a7a31/6D33K2tSc1OYF4_f6YJ63.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65cb0dcc4913057ac82a7a31/eKzh5ghGNVrCluVmnkZW0.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65cb0dcc4913057ac82a7a31/sm8dAmjxsXmJu8oeqLaeQ.png)

## Out-of-Scope Use

This dataset should not be used for training models.

## Disclaimers

**TaskMeAnything** and its associated resources are provided for research and educational purposes only. The authors and contributors make no warranties regarding the accuracy or reliability of the data and software. Users are responsible for ensuring their use complies with applicable laws and regulations. The project is not liable for any damages or losses resulting from the use of these resources.

## Contact

- Jieyu Zhang: jieyuz2@cs.washington.edu

## Citation

**BibTeX:**

```bibtex
@article{zhang2024task,
  title={Task Me Anything},
  author={Zhang, Jieyu and Huang, Weikai and Ma, Zixian and Michel, Oscar and He, Dong and Gupta, Tanmay and Ma, Wei-Chiu and Farhadi, Ali and Kembhavi, Aniruddha and Krishna, Ranjay},
  journal={arXiv preprint arXiv:2406.11775},
  year={2024}
}
```
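## Example: Inspecting a Row

Below is a minimal, illustrative sketch of loading one split and turning a row into a multiple-choice prompt. The field names (`id`, `question`, `options`, `answer`, `task_plan`, `image`) follow the schema in the dataset config above; the prompt format itself is an assumption for illustration, not the official evaluation harness.

```python
import string

import datasets

# Any of the 19 task-generator splits listed above, e.g. "3d_what", "sg_what_relation".
split_name = "2d_how_many"
ds = datasets.load_dataset("weikaih/TaskMeAnything-v1-imageqa-2024", split=split_name)

row = ds[0]
image = row["image"]  # decoded by the `image` feature (a PIL image)

# Assumed multiple-choice prompt format; adapt it to your model's chat/vision API.
letters = string.ascii_uppercase
prompt = row["question"] + "\n" + "\n".join(
    f"{letter}. {option}" for letter, option in zip(letters, row["options"])
)

print(prompt)
print("Ground-truth answer:", row["answer"])
print("Task plan:", row["task_plan"])
```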