bibtex_url (null) | proceedings (string, len 42) | bibtext (string, len 302 to 2.02k) | abstract (string, len 566 to 2.48k) | title (string, len 16 to 179) | authors (sequence, len 1 to 76) | id (string, 1 class) | type (string, 2 classes) | arxiv_id (string, len 0 to 10) | GitHub (sequence, len 1) | paper_page (string, len 0 to 40) | n_linked_authors (int64, -1 to 24) | upvotes (int64, -1 to 86) | num_comments (int64, -1 to 10) | n_authors (int64, -1 to 75) | Models (sequence, len 0 to 37) | Datasets (sequence, len 0 to 10) | Spaces (sequence, len 0 to 26) | old_Models (sequence, len 0 to 37) | old_Datasets (sequence, len 0 to 10) | old_Spaces (sequence, len 0 to 26) | paper_page_exists_pre_conf (int64, 0 to 1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=lygceqe21t | @inproceedings{
vladimirova2024fairjob,
title={FairJob: A Real-World Dataset for Fairness in Online Systems},
author={Mariia Vladimirova and Federico Pavone and Eustache Diemert},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=lygceqe21t}
} | We introduce a fairness-aware dataset for job recommendation in advertising, designed to foster research in algorithmic fairness within real-world scenarios. It was collected and prepared to comply with privacy standards and business confidentiality. An additional challenge is the lack of access to protected user attributes such as gender, for which we propose a pragmatic solution to obtain a proxy estimate. Despite being anonymized and including a proxy for a sensitive attribute, our dataset preserves predictive power and maintains a realistic and challenging benchmark. This dataset addresses a significant gap in the availability of fairness-focused resources for high-impact domains like advertising -- the actual impact being having access or not to precious employment opportunities, where balancing fairness and utility is a common industrial challenge. We also explore various stages in the advertising process where unfairness can occur and introduce a method to compute a fair utility metric for the job recommendations in online systems case from a biased dataset. Experimental evaluations of bias mitigation techniques on the released dataset demonstrate potential improvements in fairness and the associated trade-offs with utility.
The dataset is hosted at https://huggingface.co/datasets/criteo/FairJob. Source code for the experiments is hosted at https://github.com/criteo-research/FairJob-dataset/. | FairJob: A Real-World Dataset for Fairness in Online Systems | [
"Mariia Vladimirova",
"Federico Pavone",
"Eustache Diemert"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.03059 | [
"https://github.com/criteo-research/fairjob-dataset"
] | https://huggingface.co/papers/2407.03059 | 0 | 1 | 0 | 3 | [] | [
"criteo/FairJob"
] | [] | [] | [
"criteo/FairJob"
] | [] | 1 |
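The FairJob row above links its dataset on the Hugging Face Hub. A minimal loading sketch is shown below, assuming the public `criteo/FairJob` repository exposes a default configuration; the split and column names are assumptions to verify against the dataset card.

```python
from datasets import load_dataset

# Pull the FairJob fairness-aware job-recommendation dataset from the Hub.
# "criteo/FairJob" is the repository id cited in the row above; the presence of a
# "train" split and the exact feature names are assumptions, not verified here.
fairjob = load_dataset("criteo/FairJob")

print(fairjob)                         # available splits and row counts
print(fairjob["train"].column_names)   # inspect features (incl. the proxy attribute)
```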
null | https://openreview.net/forum?id=loJM1acwzf | @inproceedings{
ma2024mmlongbenchdoc,
title={{MMLONGBENCH}-{DOC}: Benchmarking Long-context Document Understanding with Visualizations},
author={Yubo Ma and Yuhang Zang and Liangyu Chen and Meiqi Chen and Yizhu Jiao and Xinze Li and Xinyuan Lu and Ziyu Liu and Yan Ma and Xiaoyi Dong and Pan Zhang and Liangming Pan and Yu-Gang Jiang and Jiaqi Wang and Yixin Cao and Aixin Sun},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=loJM1acwzf}
} | Understanding documents with rich layouts and multi-modal components is a long-standing and practical task. Recent Large Vision-Language Models (LVLMs) have made remarkable strides in various tasks, particularly in single-page document understanding (DU). However, their abilities on long-context DU remain an open problem. This work presents MMLONGBENCH-DOC, a long-context, multi-modal benchmark comprising 1,082 expert-annotated questions. Distinct from previous datasets, it is constructed upon 135 lengthy PDF-formatted documents with an average of 47.5 pages and 21,214 textual tokens. Towards comprehensive evaluation, answers to these questions rely on pieces of evidence from (1) different sources (text, image, chart, table, and layout structure) and (2) various locations (i.e., page number). Moreover, 33.7\% of the questions are cross-page questions requiring evidence across multiple pages. 20.6\% of the questions are designed to be unanswerable for detecting potential hallucinations. Experiments on 14 LVLMs demonstrate that long-context DU greatly challenges current models. Notably, the best-performing model, GPT-4o, achieves an F1 score of only 44.9\%, while the second-best, GPT-4V, scores 30.5\%. Furthermore, 12 LVLMs (all except GPT-4o and GPT-4V) even present worse performance than their LLM counterparts which are fed with lossy-parsed OCR documents. These results validate the necessity of future research toward more capable long-context LVLMs. | MMLONGBENCH-DOC: Benchmarking Long-context Document Understanding with Visualizations | [
"Yubo Ma",
"Yuhang Zang",
"Liangyu Chen",
"Meiqi Chen",
"Yizhu Jiao",
"Xinze Li",
"Xinyuan Lu",
"Ziyu Liu",
"Yan Ma",
"Xiaoyi Dong",
"Pan Zhang",
"Liangming Pan",
"Yu-Gang Jiang",
"Jiaqi Wang",
"Yixin Cao",
"Aixin Sun"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2407.01523 | [
""
] | https://huggingface.co/papers/2407.01523 | 2 | 0 | 0 | 16 | [] | [] | [] | [] | [] | [] | 1 |
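MMLongBench-Doc above reports answer-level F1 scores. For orientation, the sketch below computes the token-overlap F1 commonly used in QA evaluation; whether the benchmark scores answers with exactly this formulation is an assumption to check against its released code.

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted answer and a reference answer."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("47.5 pages on average", "an average of 47.5 pages"))  # ~0.67
```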
null | https://openreview.net/forum?id=loDHZstVP6 | @inproceedings{
xie2024finben,
title={FinBen: An Holistic Financial Benchmark for Large Language Models},
author={Qianqian Xie and Weiguang Han and Zhengyu Chen and Ruoyu Xiang and Xiao Zhang and Yueru He and Mengxi Xiao and Dong Li and Yongfu Dai and Duanyu Feng and Yijing Xu and Haoqiang Kang and Ziyan Kuang and Chenhan Yuan and Kailai Yang and Zheheng Luo and Tianlin Zhang and Zhiwei Liu and GUOJUN XIONG and Zhiyang Deng and Yuechen Jiang and Zhiyuan Yao and Haohang Li and Yangyang Yu and Gang Hu and Huang Jiajia and Xiao-Yang Liu and Alejandro Lopez-Lira and Benyou Wang and Yanzhao Lai and Hao Wang and Min Peng and Sophia Ananiadou and Jimin Huang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=loDHZstVP6}
} | LLMs have transformed NLP and shown promise in various fields, yet their potential in finance is underexplored due to a lack of comprehensive benchmarks, the rapid development of LLMs, and the complexity of financial tasks. In this paper, we introduce FinBen, the first extensive open-source evaluation benchmark, including 42 datasets spanning 24 financial tasks, covering eight critical aspects: information extraction (IE), textual analysis, question answering (QA), text generation, risk management, forecasting, decision-making, and bilingual (English and Spanish). FinBen offers several key innovations: a broader range of tasks and datasets, the first evaluation of stock trading, novel agent and Retrieval-Augmented Generation (RAG) evaluation, and two novel datasets for regulations and stock trading. Our evaluation of 21 representative LLMs, including GPT-4, ChatGPT, and the latest Gemini, reveals several key findings: While LLMs excel in IE and textual analysis, they struggle with advanced reasoning and complex tasks like text generation and forecasting. GPT-4 excels in IE and stock trading, while Gemini is better at text generation and forecasting. Instruction-tuned LLMs improve textual analysis but offer limited benefits for complex tasks such as QA. FinBen has been used to host the first financial LLMs shared task at the FinNLP-AgentScen workshop during IJCAI-2024, attracting 12 teams. Their novel solutions outperformed GPT-4, showcasing FinBen's potential to drive innovations in financial LLMs. All datasets and code are publicly available for the research community, with results shared and updated regularly on the Open Financial LLM Leaderboard. | FinBen: An Holistic Financial Benchmark for Large Language Models | [
"Qianqian Xie",
"Weiguang Han",
"Zhengyu Chen",
"Ruoyu Xiang",
"Xiao Zhang",
"Yueru He",
"Mengxi Xiao",
"Dong Li",
"Yongfu Dai",
"Duanyu Feng",
"Yijing Xu",
"Haoqiang Kang",
"Ziyan Kuang",
"Chenhan Yuan",
"Kailai Yang",
"Zheheng Luo",
"Tianlin Zhang",
"Zhiwei Liu",
"GUOJUN XIONG",
"Zhiyang Deng",
"Yuechen Jiang",
"Zhiyuan Yao",
"Haohang Li",
"Yangyang Yu",
"Gang Hu",
"Huang Jiajia",
"Xiao-Yang Liu",
"Alejandro Lopez-Lira",
"Benyou Wang",
"Yanzhao Lai",
"Hao Wang",
"Min Peng",
"Sophia Ananiadou",
"Jimin Huang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=lnuXaRpwvw | @inproceedings{
weber2024redpajama,
title={RedPajama: an Open Dataset for Training Large Language Models},
author={Maurice Weber and Daniel Y Fu and Quentin Gregory Anthony and Yonatan Oren and Shane Adams and Anton Alexandrov and Xiaozhong Lyu and Huu Nguyen and Xiaozhe Yao and Virginia Adams and Ben Athiwaratkun and Rahul Chalamala and Kezhen Chen and Max Ryabinin and Tri Dao and Percy Liang and Christopher Re and Irina Rish and Ce Zhang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=lnuXaRpwvw}
} | Large language models are increasingly becoming a cornerstone technology in artificial intelligence, the sciences, and society as a whole, yet the optimal strategies for dataset composition and filtering remain largely elusive. Many of the top-performing models lack transparency in their dataset curation and model development processes, posing an obstacle to the development of fully open language models.
In this paper, we identify three core data-related challenges that must be addressed to advance open-source language models. These include (1) transparency in model development, including the data curation process, (2) access to large quantities of high-quality data, and (3) availability of artifacts and metadata for dataset curation and analysis.
To address these challenges, we release RedPajama-V1, an open reproduction of the LLaMA training dataset. In addition, we release RedPajama-V2, a massive web-only dataset consisting of raw, unfiltered text data together with quality signals and metadata.
Together, the RedPajama datasets comprise over 100 trillion tokens spanning multiple domains and with their quality signals facilitate the filtering of data, aiming to inspire the development of numerous new datasets. To date, these datasets have already been used in the training of strong language models used in production, such as Snowflake Arctic, Salesforce's XGen and AI2's OLMo. To provide insight into the quality of RedPajama, we present a series of analyses and ablation studies with decoder-only language models with up to 1.6B parameters. Our findings demonstrate how quality signals for web data can be effectively leveraged to curate high-quality subsets of the dataset, underscoring the potential of RedPajama to advance the development of transparent and high-performing language models at scale. | RedPajama: an Open Dataset for Training Large Language Models | [
"Maurice Weber",
"Daniel Y Fu",
"Quentin Gregory Anthony",
"Yonatan Oren",
"Shane Adams",
"Anton Alexandrov",
"Xiaozhong Lyu",
"Huu Nguyen",
"Xiaozhe Yao",
"Virginia Adams",
"Ben Athiwaratkun",
"Rahul Chalamala",
"Kezhen Chen",
"Max Ryabinin",
"Tri Dao",
"Percy Liang",
"Christopher Re",
"Irina Rish",
"Ce Zhang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2411.12372 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
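RedPajama-V2, described above, pairs raw web text with per-document quality signals meant for downstream filtering. The sketch below only illustrates that filtering pattern; the record layout, signal names (`word_count`, `duplicate_fraction`), and thresholds are hypothetical placeholders rather than the dataset's actual schema.

```python
# Hypothetical records mimicking the structure the abstract describes: raw text plus
# a dict of quality signals. Names and thresholds here are illustrative assumptions.
documents = [
    {"raw_content": "buy now click here", "quality_signals": {"word_count": 4, "duplicate_fraction": 0.9}},
    {"raw_content": "a longer, mostly unique article ...", "quality_signals": {"word_count": 850, "duplicate_fraction": 0.05}},
]

def keep(doc: dict) -> bool:
    """Keep documents that look long enough and are not heavily duplicated."""
    signals = doc["quality_signals"]
    return signals["word_count"] >= 50 and signals["duplicate_fraction"] <= 0.2

filtered = [doc for doc in documents if keep(doc)]
print(f"kept {len(filtered)} of {len(documents)} documents")
```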
null | https://openreview.net/forum?id=lk7SW0bH4x | @inproceedings{
zhang2024probts,
title={Prob{TS}: Benchmarking Point and Distributional Forecasting across Diverse Prediction Horizons},
author={Jiawen Zhang and Xumeng Wen and Zhenwei Zhang and Shun Zheng and Jia Li and Jiang Bian},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=lk7SW0bH4x}
} | Delivering precise point and distributional forecasts across a spectrum of prediction horizons represents a significant and enduring challenge in the application of time-series forecasting within various industries.
Prior research on developing deep learning models for time-series forecasting has often concentrated on isolated aspects, such as long-term point forecasting or short-term probabilistic estimations. This narrow focus may result in skewed methodological choices and hinder the adaptability of these models to uncharted scenarios.
While there is a rising trend in developing universal forecasting models, a thorough understanding of their advantages and drawbacks, especially regarding essential forecasting needs like point and distributional forecasts across short and long horizons, is still lacking.
In this paper, we present ProbTS, a benchmark tool designed as a unified platform to evaluate these fundamental forecasting needs and to conduct a rigorous comparative analysis of numerous cutting-edge studies from recent years.
We dissect the distinctive data characteristics arising from disparate forecasting requirements and elucidate how these characteristics can skew methodological preferences in typical research trajectories, which often fail to fully accommodate essential forecasting needs.
Building on this, we examine the latest models for universal time-series forecasting and discover that our analyses of methodological strengths and weaknesses are also applicable to these universal models.
Finally, we outline the limitations inherent in current research and underscore several avenues for future exploration. | ProbTS: Benchmarking Point and Distributional Forecasting across Diverse Prediction Horizons | [
"Jiawen Zhang",
"Xumeng Wen",
"Zhenwei Zhang",
"Shun Zheng",
"Jia Li",
"Jiang Bian"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2310.07446 | [
"https://github.com/microsoft/probts"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
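ProbTS above benchmarks both point and distributional forecasts. A common distributional score is CRPS; the sketch below uses the standard sample-based estimator CRPS ~= E|X - y| - 0.5 * E|X - X'| over forecast draws. Which exact metrics ProbTS reports should be confirmed in its repository.

```python
import numpy as np

def sample_crps(forecast_samples: np.ndarray, observation: float) -> float:
    """Sample-based CRPS: E|X - y| - 0.5 * E|X - X'| over forecast draws X."""
    x = np.asarray(forecast_samples, dtype=float)
    term_obs = np.mean(np.abs(x - observation))
    term_pair = np.mean(np.abs(x[:, None] - x[None, :]))
    return float(term_obs - 0.5 * term_pair)

# 1,000 draws from a probabilistic forecast, scored against the realized value 1.2.
rng = np.random.default_rng(0)
print(sample_crps(rng.normal(loc=1.0, scale=0.5, size=1000), observation=1.2))
```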
null | https://openreview.net/forum?id=li3iFfkwRL | @inproceedings{
allen2024mleo,
title={M3{LEO}: A Multi-Modal, Multi-Label Earth Observation Dataset Integrating Interferometric {SAR} and Multispectral Data},
author={Matthew J Allen and Francisco Dorr and Joseph Alejandro Gallego Mejia and Laura Mart{\'\i}nez-Ferrer and Anna Jungbluth and Freddie Kalaitzis and Ra{\'u}l Ramos-Poll{\'a}n},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=li3iFfkwRL}
} | Satellite-based remote sensing has revolutionised the way we address global challenges in a rapidly evolving world. Huge quantities of Earth Observation (EO) data are generated by satellite sensors daily, but processing these large datasets for use in ML pipelines is technically and computationally challenging. Specifically, different types of EO data are often hosted on a variety of platforms, with
differing degrees of availability for Python preprocessing tools. In addition, spatial alignment across data sources and data tiling for easier handling can present significant technical hurdles for novice users. While some preprocessed Earth observation datasets exist, their content is often limited to optical or near-optical wavelength data, which is ineffective at night or in adverse weather conditions.
Synthetic Aperture Radar (SAR), an active sensing technique based on microwave length radiation, offers a viable alternative. However, the application of machine learning to SAR has been limited due to a lack of ML-ready data and pipelines, particularly for the full diversity of SAR data, including polarimetry, coherence and interferometry. In this work, we introduce M3LEO, a multi-modal, multi-label
Earth observation dataset that includes polarimetric, interferometric, and coherence SAR data derived from Sentinel-1, alongside multispectral Sentinel-2 imagery and a suite of auxiliary data describing terrain properties such as land use. M3LEO spans approximately 17M data chips, each measuring 4x4 km, across six diverse geographic regions. The dataset is complemented by a flexible PyTorch Lightning framework, with configuration management using Hydra, to accommodate its use across diverse ML applications in Earth observation. Additionally, we provide tools to process any dataset available on popular platforms such as Google Earth Engine for seamless integration with our framework. We show that the distribution shift in self-supervised embeddings is substantial across geographic regions, even when controlling for terrain properties. Data is available at huggingface.co/M3LEO, and code at github.com/spaceml-org/M3LEO. | M3LEO: A Multi-Modal, Multi-Label Earth Observation Dataset Integrating Interferometric SAR and Multispectral Data | [
"Matthew J Allen",
"Francisco Dorr",
"Joseph Alejandro Gallego Mejia",
"Laura Martínez-Ferrer",
"Anna Jungbluth",
"Freddie Kalaitzis",
"Raúl Ramos-Pollán"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.04230 | [
"https://github.com/spaceml-org/m3leo"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=l0Ydsl10ci | @inproceedings{
shomee2024impact,
title={{IMPACT}: A Large-scale Integrated Multimodal Patent Analysis and Creation Dataset for Design Patents},
author={Homaira Huda Shomee and Zhu Wang and Sathya N. Ravi and Sourav Medya},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=l0Ydsl10ci}
} | In this paper, we introduce IMPACT (Integrated Multimodal Patent Analysis and Creation Dataset for Design Patents), a large-scale multimodal patent dataset with detailed captions for design patent figures. Our dataset includes half a million design patents comprising 3.61 million figures along with captions from patents granted by the United States Patent and Trademark Office (USPTO) over a 16-year period from 2007 to 2022. We incorporate the metadata of each patent application with elaborate captions that are coherent with multiple viewpoints of designs. Even though patents themselves contain a variety of design figures, titles, and descriptions of viewpoints, we find that they lack detailed descriptions that are necessary to perform multimodal tasks such as classification and retrieval. IMPACT closes this gap thereby providing researchers with necessary ingredients to instantiate a variety of multimodal tasks. Our dataset has a huge potential for novel design inspiration and can be used with advanced computer vision models in tandem. We perform preliminary evaluations on the dataset on the popular patent analysis tasks such as classification and retrieval. Our results indicate that integrating images with generated captions significantly improves the performance of different models on the corresponding tasks. Given that design patents offer various benefits for modeling novel tasks, we propose two standard computer vision tasks that have not been investigated in analyzing patents as future directions using IMPACT as a benchmark viz., 3D Image Construction and Visual Question Answering (VQA). To facilitate research in these directions, we make our IMPACT dataset and the code/models used in this work publicly available at https://github.com/AI4Patents/IMPACT. | IMPACT: A Large-scale Integrated Multimodal Patent Analysis and Creation Dataset for Design Patents | [
"Homaira Huda Shomee",
"Zhu Wang",
"Sathya N. Ravi",
"Sourav Medya"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=kwDOxOmGE0 | @inproceedings{
li2024vrsbench,
title={{VRSB}ench: A Versatile Vision-Language Benchmark Dataset for Remote Sensing Image Understanding},
author={Xiang Li and Jian Ding and Mohamed Elhoseiny},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=kwDOxOmGE0}
} | We introduce a new benchmark designed to advance the development of general-purpose, large-scale vision-language models for remote sensing images. Although several vision-language datasets in remote sensing have been proposed to pursue this goal, existing datasets are typically tailored to single tasks, lack detailed object information, or suffer from inadequate quality control. Exploring these improvement opportunities, we present a Versatile vision-language Benchmark for Remote Sensing image understanding, termed VRSBench. This benchmark comprises 29,614 images, with 29,614 human-verified detailed captions, 52,472 object references, and 123,221 question-answer pairs. It facilitates the training and evaluation of vision-language models across a broad spectrum of remote sensing image understanding tasks. We further evaluated state-of-the-art models on this benchmark for three vision-language tasks: image captioning, visual grounding, and visual question answering. Our work aims to significantly contribute to the development of advanced vision-language models in the field of remote sensing. The data and code can be accessed at https://vrsbench.github.io. | VRSBench: A Versatile Vision-Language Benchmark Dataset for Remote Sensing Image Understanding | [
"Xiang Li",
"Jian Ding",
"Mohamed Elhoseiny"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.12384 | [
"https://github.com/lx709/vrsbench"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=kvjbFVHpny | @inproceedings{
li2024evocodebench,
title={EvoCodeBench: An Evolving Code Generation Benchmark with Domain-Specific Evaluations},
author={Jia Li and Ge Li and Xuanming Zhang and Yunfei Zhao and Yihong Dong and Zhi Jin and Binhua Li and Fei Huang and Yongbin Li},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=kvjbFVHpny}
} | How to evaluate Large Language Models (LLMs) in code generation remains an open question.
Many benchmarks have been proposed, but they have two limitations, i.e., data leakage and lack of domain-specific evaluation.
The former hurts the fairness of benchmarks, and the latter hinders practitioners from selecting superior LLMs for specific programming domains.
To address these two limitations, we propose a new benchmark - EvoCodeBench, which has the following advances:
(1) Evolving data. EvoCodeBench will be dynamically updated every period (e.g., 6 months) to avoid data leakage. This paper releases the first version - EvoCodeBench-2403, containing 275 samples from 25 repositories.
(2) A domain taxonomy and domain labels. Based on the statistics of open-source communities, we design a programming domain taxonomy consisting of 10 popular domains. Based on the taxonomy, we annotate each sample in EvoCodeBench with a domain label. EvoCodeBench provides a broad platform for domain-specific evaluations.
(3) Domain-specific evaluations. Besides the Pass@k, we compute the Domain-Specific Improvement (DSI) and define LLMs' comfort and strange domains. These evaluations help practitioners select superior LLMs in specific domains and discover the shortcomings of existing LLMs.
Besides, EvoCodeBench is collected by a rigorous pipeline and aligns with real-world repositories in multiple aspects (e.g., code distributions).
We evaluate 8 popular LLMs (e.g., gpt-4, DeepSeek Coder, StarCoder 2) on EvoCodeBench and summarize some insights. EvoCodeBench reveals the actual abilities of these LLMs in real-world repositories. For example, the highest Pass@1 of gpt-4 on EvoCodeBench-2403 is only 20.74%. Besides, we evaluate LLMs in different domains and discover their comfort and strange domains. For example, gpt-4 performs best in most domains but falls behind others in the Internet domain. StarCoder 2-15B unexpectedly performs well in the Database domain and even outperforms 33B LLMs. We release EvoCodeBench, all prompts, and LLMs' completions for further community analysis. | EvoCodeBench: An Evolving Code Generation Benchmark with Domain-Specific Evaluations | [
"Jia Li",
"Ge Li",
"Xuanming Zhang",
"Yunfei Zhao",
"Yihong Dong",
"Zhi Jin",
"Binhua Li",
"Fei Huang",
"Yongbin Li"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2410.22821 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
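The EvoCodeBench row above reports Pass@1. For reference, the widely used unbiased Pass@k estimator is sketched below; whether EvoCodeBench computes Pass@k exactly this way is an assumption, but the formula itself is the standard one used for code-generation benchmarks.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k given n generated samples per task, c of which pass the tests.

    Computes 1 - C(n - c, k) / C(n, k): the probability that at least one of k
    samples drawn without replacement from the n generations is correct.
    """
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples for a task, 2 pass -> Pass@1 equals the raw pass fraction.
print(pass_at_k(n=10, c=2, k=1))  # 0.2
```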
null | https://openreview.net/forum?id=ktYaxX12RN | @inproceedings{
nikitin2024tsgm,
title={{TSGM}: A Flexible Framework for Generative Modeling of Synthetic Time Series},
author={Alexander V Nikitin and Letizia Iannucci and Samuel Kaski},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=ktYaxX12RN}
} | Time series data are essential in a wide range of machine learning (ML) applications. However, temporal data are often scarce or highly sensitive, limiting data sharing and the use of data-intensive ML methods. A possible solution to this problem is the generation of synthetic datasets that resemble real data. In this work, we introduce Time Series Generative Modeling (TSGM), an open-source framework for the generative modeling and evaluation of synthetic time series datasets. TSGM includes a broad repertoire of machine learning methods: generative models, probabilistic, simulation-based approaches, and augmentation techniques. The framework enables users to evaluate the quality of the produced data from different angles: similarity, downstream effectiveness, predictive consistency, diversity, fairness, and privacy. TSGM is extensible and user-friendly, which allows researchers to rapidly implement their own methods and compare them in a shareable environment. The framework has been tested on open datasets and in production and proved to be beneficial in both cases. https://github.com/AlexanderVNikitin/tsgm | TSGM: A Flexible Framework for Generative Modeling of Synthetic Time Series | [
"Alexander V Nikitin",
"Letizia Iannucci",
"Samuel Kaski"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2305.11567 | [
"https://github.com/alexandervnikitin/tsgm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=knxGmi6SJi | @inproceedings{
muschalik2024shapiq,
title={shapiq: Shapley Interactions for Machine Learning},
author={Maximilian Muschalik and Hubert Baniecki and Fabian Fumagalli and Patrick Kolpaczki and Barbara Hammer and Eyke H{\"u}llermeier},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=knxGmi6SJi}
} | Originally rooted in game theory, the Shapley Value (SV) has recently become an important tool in machine learning research. Perhaps most notably, it is used for feature attribution and data valuation in explainable artificial intelligence. Shapley Interactions (SIs) naturally extend the SV and address its limitations by assigning joint contributions to groups of entities, which enhance understanding of black box machine learning models. Due to the exponential complexity of computing SVs and SIs, various methods have been proposed that exploit structural assumptions or yield probabilistic estimates given limited resources. In this work, we introduce shapiq, an open-source Python package that unifies state-of-the-art algorithms to efficiently compute SVs and any-order SIs in an application-agnostic framework. Moreover, it includes a benchmarking suite containing 11 machine learning applications of SIs with pre-computed games and ground-truth values to systematically assess computational performance across domains. For practitioners, shapiq is able to explain and visualize any-order feature interactions in predictions of models, including vision transformers, language models, as well as XGBoost and LightGBM with TreeSHAP-IQ. With shapiq, we extend shap beyond feature attributions and consolidate the application of SVs and SIs in machine learning that facilitates future research. The source code and documentation are available at https://github.com/mmschlk/shapiq. | shapiq: Shapley Interactions for Machine Learning | [
"Maximilian Muschalik",
"Hubert Baniecki",
"Fabian Fumagalli",
"Patrick Kolpaczki",
"Barbara Hammer",
"Eyke Hüllermeier"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2410.01649 | [
"https://github.com/mmschlk/shapiq"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
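As background for the shapiq entry above: the Shapley values it estimates can be computed exactly only by enumerating all coalitions, which is the exponential cost the package's approximation algorithms avoid. Below is a minimal, framework-agnostic sketch of that exact computation on a toy game; it is not shapiq's API.

```python
from itertools import combinations
from math import factorial

def exact_shapley(players, value):
    """Exact Shapley values via full coalition enumeration (exponential in len(players))."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for size in range(n):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for coalition in combinations(others, size):
                with_p = value(set(coalition) | {p})
                without_p = value(set(coalition))
                phi[p] += weight * (with_p - without_p)
    return phi

# Toy cooperative game: the full coalition is worth 1.0, any pair 0.5, else 0.
def toy_value(coalition) -> float:
    return 1.0 if len(coalition) == 3 else (0.5 if len(coalition) == 2 else 0.0)

print(exact_shapley(["a", "b", "c"], toy_value))  # symmetric game -> 1/3 each
```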
null | https://openreview.net/forum?id=kWTvdSSH5W | @inproceedings{
tschalzev2024a,
title={A Data-Centric Perspective on Evaluating Machine Learning Models for Tabular Data},
author={Andrej Tschalzev and Sascha Marton and Stefan L{\"u}dtke and Christian Bartelt and Heiner Stuckenschmidt},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=kWTvdSSH5W}
} | Tabular data is prevalent in real-world machine learning applications, and new models for supervised learning of tabular data are frequently proposed. Comparative studies assessing performance differences typically have model-centered evaluation setups with overly standardized data preprocessing. This limits the external validity of these studies, as in real-world modeling pipelines, models are typically applied after dataset-specific preprocessing and feature engineering. We address this gap by proposing a data-centric evaluation framework. We select 10 relevant datasets from Kaggle competitions and implement expert-level preprocessing pipelines for each dataset. We conduct experiments with different preprocessing pipelines and hyperparameter optimization (HPO) regimes to quantify the impact of model selection, HPO, feature engineering, and test-time adaptation. Our main findings reveal: 1) After dataset-specific feature engineering, model rankings change considerably, performance differences decrease, and the importance of model selection reduces. 2) Recent models, despite their measurable progress, still significantly benefit from manual feature engineering. This holds true for both tree-based models and neural networks. 3) While tabular data is typically considered static, samples are often collected over time, and adapting to distribution shifts can be important even in supposedly static data. These insights suggest that research efforts should be directed toward a data-centric perspective, acknowledging that tabular data requires feature engineering and often exhibits temporal characteristics. | A Data-Centric Perspective on Evaluating Machine Learning Models for Tabular Data | [
"Andrej Tschalzev",
"Sascha Marton",
"Stefan Lüdtke",
"Christian Bartelt",
"Heiner Stuckenschmidt"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.02112 | [
"https://github.com/atschalz/dc_tabeval"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=kP92Fyc6ry | @inproceedings{
jeon2024cryobench,
title={CryoBench: Datasets and Benchmarks for Heterogeneous Cryo-{EM} Reconstruction},
author={Minkyu Jeon and Rishwanth Raghu and Miro A. Astore and Geoffrey Woollard and J. Ryan Feathers and Alkin Kaz and Sonya M Hanson and Pilar Cossio and Ellen D Zhong},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=kP92Fyc6ry}
} | Cryo-electron microscopy (cryo-EM) is a powerful technique for determining high resolution 3D biomolecular structures from imaging data. As this technique can capture dynamic biomolecular complexes, 3D reconstruction methods are increasingly being developed to resolve this intrinsic structural heterogeneity. However, the absence of standardized benchmarks with ground truth structures and validation metrics limits the advancement of the field. Here, we propose CryoBench, a suite of datasets, metrics, and performance benchmarks for heterogeneous reconstruction in cryo-EM. We propose five datasets representing different sources of heterogeneity and degrees of difficulty. These include conformational heterogeneity generated from simple motions and random configurations of antibody complexes and from tens of thousands of structures sampled from a molecular dynamics simulation. We also design datasets containing compositional heterogeneity from mixtures of ribosome assembly states and 100 common complexes present in cells. We then perform a comprehensive analysis of state-of-the-art heterogeneous reconstruction tools including neural and non-neural methods and their sensitivity to noise, and propose new metrics for quantitative comparison of methods. We hope that this benchmark will be a foundational resource for analyzing existing methods and new algorithmic development in both the cryo-EM and machine learning communities. Project page: https://cryobench.cs.princeton.edu. | CryoBench: Datasets and Benchmarks for Heterogeneous Cryo-EM Reconstruction | [
"Minkyu Jeon",
"Rishwanth Raghu",
"Miro A. Astore",
"Geoffrey Woollard",
"J. Ryan Feathers",
"Alkin Kaz",
"Sonya M Hanson",
"Pilar Cossio",
"Ellen D Zhong"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=kKtalvwqBZ | @inproceedings{
wang2024benchmarking,
title={Benchmarking Structural Inference Methods for Interacting Dynamical Systems with Synthetic Data},
author={Aoran Wang and Tsz Pan Tong and Andrzej Mizera and Jun Pang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=kKtalvwqBZ}
} | Understanding complex dynamical systems begins with identifying their topological structures, which expose the organization of the systems. This requires robust structural inference methods that can deduce structure from observed behavior. However, existing methods are often domain-specific and lack a standardized, objective comparison framework. We address this gap by benchmarking 13 structural inference methods from various disciplines on simulations representing two types of dynamics and 11 interaction graph models, supplemented by a biological experimental dataset to mirror real-world application. We evaluated the methods for accuracy, scalability, robustness, and sensitivity to graph properties. Our findings indicate that deep learning methods excel with multi-dimensional data, while classical statistics and information theory based approaches are notably accurate and robust. Additionally, performance correlates positively with the graph's average shortest path length. This benchmark should aid researchers in selecting suitable methods for their specific needs and stimulate further methodological innovation. | Benchmarking Structural Inference Methods for Interacting Dynamical Systems with Synthetic Data | [
"Aoran Wang",
"Tsz Pan Tong",
"Andrzej Mizera",
"Jun Pang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=kD1kpLtrmX | @inproceedings{
madan2024benchmarking,
title={Benchmarking Out-of-Distribution Generalization Capabilities of {DNN}-based Encoding Models for the Ventral Visual Cortex.},
author={Spandan Madan and Will Xiao and Mingran Cao and Hanspeter Pfister and Margaret Livingstone and Gabriel Kreiman},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=kD1kpLtrmX}
} | We characterized the generalization capabilities of deep neural network encoding models when predicting neuronal responses from the visual cortex to flashed images. We collected MacaqueITBench, a large-scale dataset of neuronal population responses from the macaque inferior temporal (IT) cortex to over $300,000$ images, comprising $8,233$ unique natural images presented to seven monkeys over $109$ sessions. Using MacaqueITBench, we investigated the impact of distribution shifts on models predicting neuronal activity by dividing the images into Out-Of-Distribution (OOD) train and test splits. The OOD splits included variations in image contrast, hue, intensity, temperature, and saturation. Compared to the performance on in-distribution test images---the conventional way in which these models have been evaluated---models performed worse at predicting neuronal responses to out-of-distribution images, retaining as little as $20\\%$ of the performance on in-distribution test images. Additionally, the relative ranking of different models in terms of their ability to predict neuronal responses changed drastically across OOD shifts. The generalization performance under OOD shifts can be well accounted for by a simple image similarity metric---the cosine distance between image representations extracted from a pre-trained object recognition model is a strong predictor of neuronal predictivity under different distribution shifts. The dataset of images, neuronal firing rate recordings, and computational benchmarks are hosted publicly at: https://github.com/Spandan-Madan/benchmarking_ood_generalization_visual_cortex. | Benchmarking Out-of-Distribution Generalization Capabilities of DNN-based Encoding Models for the Ventral Visual Cortex. | [
"Spandan Madan",
"Will Xiao",
"Mingran Cao",
"Hanspeter Pfister",
"Margaret Livingstone",
"Gabriel Kreiman"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
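The MacaqueITBench entry above reports that the cosine distance between image embeddings predicts OOD neuronal predictivity. The sketch below only shows that distance; the embedding vectors are placeholders standing in for features from a pre-trained object recognition model.

```python
import numpy as np

def cosine_distance(u: np.ndarray, v: np.ndarray) -> float:
    """1 - cosine similarity between two embedding vectors."""
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Placeholder embeddings standing in for mean features of the in-distribution split
# and an OOD split (e.g. a hue- or contrast-shifted variant of the same images).
in_distribution_embedding = np.array([0.9, 0.1, 0.3])
ood_embedding = np.array([0.4, 0.8, 0.2])
print(cosine_distance(in_distribution_embedding, ood_embedding))
```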
null | https://openreview.net/forum?id=kChaL3rZxi | @inproceedings{
pi2024image,
title={Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions},
author={Renjie Pi and Jianshu Zhang and Jipeng Zhang and Rui Pan and Zhekai Chen and Tong Zhang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=kChaL3rZxi}
} | Image description datasets play a crucial role in the advancement of various applications such as image understanding, text-to-image generation, and text-image retrieval. Currently, image description datasets primarily originate from two sources. One source is the scraping of image-text pairs from the web. Despite their abundance, these descriptions are often of low quality and noisy. Another way is through human labeling. Datasets such as COCO are generally very short and lack details. Although detailed image descriptions can be annotated by humans, the high cost limits their quantity and feasibility. These limitations underscore the need for more efficient and scalable methods to generate accurate and detailed image descriptions. In this paper, we propose an innovative framework termed Image Textualization, which automatically produces high-quality image descriptions by leveraging existing multi-modal large language models (MLLMs) and multiple vision expert models in a collaborative manner. We conduct various experiments to validate the high quality of the descriptions constructed by our framework. Furthermore, we show that MLLMs fine-tuned on our dataset acquire an unprecedented capability of generating richer image descriptions, substantially increasing the length and detail of their output with even fewer hallucinations. | Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions | [
"Renjie Pi",
"Jianshu Zhang",
"Jipeng Zhang",
"Rui Pan",
"Zhekai Chen",
"Tong Zhang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=kBvwv92E1S | @inproceedings{
alexos2024nuclear,
title={Nuclear Fusion Diamond Polishing Dataset},
author={Antonios Alexos and Junze Liu and Shashank Galla and Sean Hayes and Kshitij Bhardwaj and Alexander Schwartz and Monika Biener and Pierre Baldi and Satish Bukkapatnam and Suhas Bhandarkar},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=kBvwv92E1S}
} | In the Inertial Confinement Fusion (ICF) process, roughly a 2mm spherical shell made of high-density carbon is used as a target for laser beams, which compress and heat it to energy levels needed for high fusion yield in nuclear fusion. These shells are polished meticulously to meet the standards for a fusion shot. However, the polishing of these shells involves multiple stages, with each stage taking several hours. To make sure that the polishing process is advancing in the right direction, we are able to measure the shell surface roughness. This measurement, however, is very labor-intensive, time-consuming, and requires a human operator. To help improve the polishing process we have released the first dataset to the public that consists of raw vibration signals with the corresponding polishing surface roughness changes. We show that this dataset can be used with a variety of neural network based methods for prediction of the change of polishing surface roughness, hence eliminating the need for the time-consuming manual process. This is the first dataset of its kind to be released in public and its use will allow the operator to make any necessary changes to the ICF polishing process for optimal results. This dataset contains the raw vibration data of multiple polishing runs with their extracted statistical features and the corresponding surface roughness values. Additionally, to generalize the prediction models to different polishing conditions, we also apply domain adaptation techniques to improve prediction accuracy for conditions unseen by the trained model. The dataset is available in \url{https://junzeliu.github.io/Diamond-Polishing-Dataset/}. | Nuclear Fusion Diamond Polishing Dataset | [
"Antonios Alexos",
"Junze Liu",
"Shashank Galla",
"Sean Hayes",
"Kshitij Bhardwaj",
"Alexander Schwartz",
"Monika Biener",
"Pierre Baldi",
"Satish Bukkapatnam",
"Suhas Bhandarkar"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=k4tuZmvSnl | @inproceedings{
gu2024mllmguard,
title={{MLLMG}uard: A Multi-dimensional Safety Evaluation Suite for Multimodal Large Language Models},
author={Tianle Gu and Zeyang Zhou and Kexin Huang and Liang Dandan and Yixu Wang and Haiquan Zhao and Yuanqi Yao and xingge qiao and Keqing wang and Yujiu Yang and Yan Teng and Yu Qiao and Yingchun Wang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=k4tuZmvSnl}
} | Powered by remarkable advancements in Large Language Models (LLMs), Multimodal Large Language Models (MLLMs) demonstrate impressive capabilities in manifold tasks.
However, the practical application scenarios of MLLMs are intricate, exposing them to potential malicious instructions and thereby posing safety risks.
While current benchmarks do incorporate certain safety considerations, they often lack comprehensive coverage and fail to exhibit the necessary rigor and robustness.
For instance, the common practice of employing GPT-4V as both the evaluator and a model to be evaluated lacks credibility, as it tends to exhibit a bias toward its own responses.
In this paper, we present MLLMGuard, a multi-dimensional safety evaluation suite for MLLMs, including a bilingual image-text evaluation dataset, inference utilities, and a lightweight evaluator.
MLLMGuard's assessment comprehensively covers two languages (English and Chinese) and five important safety dimensions (Privacy, Bias, Toxicity, Truthfulness, and Legality), each with corresponding rich subtasks.
Focusing on these dimensions, our evaluation dataset is primarily sourced from platforms such as social media, and
it integrates text-based and image-based red teaming techniques with meticulous annotation by human experts.
This can prevent inaccurate evaluation caused by data leakage when using open-source datasets and ensures the quality and challenging nature of our benchmark.
Additionally, a fully automated lightweight evaluator termed GuardRank is developed, which achieves significantly higher evaluation accuracy than GPT-4.
Our evaluation results across 13 advanced models indicate that MLLMs still have a substantial journey ahead before they can be considered safe and responsible. | MLLMGuard: A Multi-dimensional Safety Evaluation Suite for Multimodal Large Language Models | [
"Tianle Gu",
"Zeyang Zhou",
"Kexin Huang",
"Liang Dandan",
"Yixu Wang",
"Haiquan Zhao",
"Yuanqi Yao",
"xingge qiao",
"Keqing wang",
"Yujiu Yang",
"Yan Teng",
"Yu Qiao",
"Yingchun Wang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.07594 | [
"https://github.com/Carol-gutianle/MLLMGuard"
] | https://huggingface.co/papers/2406.07594 | 0 | 0 | 0 | 13 | [] | [
"Carol0110/MLLMGuard"
] | [] | [] | [
"Carol0110/MLLMGuard"
] | [] | 1 |
null | https://openreview.net/forum?id=jz2CTTCABH | @inproceedings{
baek2024a,
title={A New Multi-Source Light Detection Benchmark and Semi-Supervised Focal Light Detection},
author={Jae-Yong Baek and Yong-Sang Yoo and Seung-Hwan Bae},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=jz2CTTCABH}
} | This paper addresses a multi-source light detection (LD) problem from vehicles, traffic signals, and streetlights under driving scenarios. Although it is crucial for autonomous driving and night vision, this problem has not yet received as much attention as other object detection (OD) tasks. One of the main reasons is the absence of a publicly available LD benchmark dataset. Therefore, we construct a new large LD dataset consisting of different light sources via heavy annotation: the YouTube Driving Light Detection dataset (YDLD). Compared to the existing LD datasets, our dataset has many more images and box annotations for multi-source lights. We also provide rigorous statistical analysis and transfer learning comparison with other well-known detection benchmark datasets to prove the generality of our YDLD.
For recent object detectors, we provide extensive comparison results on YDLD. However, they tend to yield low mAP scores due to the intrinsic challenges of LD caused by the very small size and similar appearance of lights. To resolve these issues, we design a novel lightness focal loss, which penalizes misclassified samples more strongly, and a lightness spatial attention prior that reflects global scene context. In addition, we develop semi-supervised focal light detection (SS-FLD) by embedding our lightness focal loss into semi-supervised object detection (SSOD). We show that our methods consistently boost the mAP of a variety of recent detectors on YDLD. We will release both YDLD and the SS-FLD code at https://github.com/YDLD-dataset/YDLD. | A New Multi-Source Light Detection Benchmark and Semi-Supervised Focal Light Detection | [
"Jae-Yong Baek",
"Yong-Sang Yoo",
"Seung-Hwan Bae"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
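The YDLD entry above introduces a "lightness focal loss" that penalizes misclassified samples more heavily. For reference, the standard binary focal loss it builds on, FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), is sketched below; the paper's lightness-specific modification and spatial attention prior are not reproduced here.

```python
import numpy as np

def binary_focal_loss(p: np.ndarray, y: np.ndarray, gamma: float = 2.0, alpha: float = 0.25) -> float:
    """Standard binary focal loss averaged over samples.

    p: predicted probabilities of the positive class; y: 0/1 labels.
    The (1 - p_t) ** gamma factor down-weights well-classified (easy) samples.
    """
    eps = 1e-7
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))

print(binary_focal_loss(np.array([0.9, 0.3, 0.6]), np.array([1, 0, 1])))
```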
null | https://openreview.net/forum?id=ji5isUwL3r | @inproceedings{
dong2024lucidaction,
title={LucidAction: A Hierarchical and Multi-model Dataset for Comprehensive Action Quality Assessment},
author={Linfeng Dong and Wei Wang and Yu Qiao and Xiao Sun},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=ji5isUwL3r}
} | Action Quality Assessment (AQA) research confronts formidable obstacles due to limited, mono-modal datasets sourced from one-shot competitions, which hinder the generalizability and comprehensiveness of AQA models. To address these limitations, we present LucidAction, the first systematically collected multi-view AQA dataset structured on curriculum learning principles. LucidAction features a three-tier hierarchical structure, encompassing eight diverse sports events with four curriculum levels, facilitating sequential skill mastery and supporting a wide range of athletic abilities. The dataset encompasses multi-modal data, including multi-view RGB video, 2D and 3D pose sequences, enhancing the richness of information available for analysis. Leveraging a high-precision multi-view Motion Capture (MoCap) system ensures precise capture of complex movements. Meticulously annotated data, incorporating detailed penalties from professional gymnasts, ensures the establishment of robust and comprehensive ground truth annotations. Experimental evaluations employing diverse contrastive regression baselines on LucidAction elucidate the dataset's complexities. Through ablation studies, we investigate the advantages conferred by multi-modal data and fine-grained annotations, offering insights into improving AQA performance. The data and code will be openly released to support advancements in the AI sports field. | LucidAction: A Hierarchical and Multi-model Dataset for Comprehensive Action Quality Assessment | [
"Linfeng Dong",
"Wei Wang",
"Yu Qiao",
"Xiao Sun"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=jbrMS0DNaD | @inproceedings{
vendrow2024inquire,
title={{INQUIRE}: A Natural World Text-to-Image Retrieval Benchmark},
author={Edward Vendrow and Omiros Pantazis and Alexander Shepard and Gabriel Brostow and Kate E. Jones and Oisin Mac Aodha and Sara Beery and Grant Van Horn},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=jbrMS0DNaD}
} | We introduce INQUIRE, a text-to-image retrieval benchmark designed to challenge multimodal vision-language models on expert-level queries. INQUIRE includes iNaturalist 2024 (iNat24), a new dataset of five million natural world images, along with 250 expert-level retrieval queries. These queries are paired with all relevant images comprehensively labeled within iNat24, comprising 33,000 total matches. Queries span categories such as species identification, context, behavior, and appearance, emphasizing tasks that require nuanced image understanding and domain expertise. Our benchmark evaluates two core retrieval tasks: (1) INQUIRE-Fullrank, a full dataset ranking task, and (2) INQUIRE-Rerank, a reranking task for refining top-100 retrievals. Detailed evaluation of a range of recent multimodal models demonstrates that INQUIRE poses a significant challenge, with the best models failing to achieve an mAP@50 above 50%. In addition, we show that reranking with more powerful multimodal models can enhance retrieval performance, yet there remains a significant margin for improvement. By focusing on scientifically-motivated ecological challenges, INQUIRE aims to bridge the gap between AI capabilities and the needs of real-world scientific inquiry, encouraging the development of retrieval systems that can assist with accelerating ecological and biodiversity research. | INQUIRE: A Natural World Text-to-Image Retrieval Benchmark | [
"Edward Vendrow",
"Omiros Pantazis",
"Alexander Shepard",
"Gabriel Brostow",
"Kate E. Jones",
"Oisin Mac Aodha",
"Sara Beery",
"Grant Van Horn"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2411.02537 | [
"https://github.com/inquire-benchmark/INQUIRE"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
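INQUIRE above is scored with mAP@50 over retrieval results. The sketch below computes average precision truncated at rank k for a single query (mAP is the mean over queries); whether INQUIRE normalizes by all relevant matches or by min(k, #relevant) is an assumption to verify against its evaluation code.

```python
def average_precision_at_k(ranked_relevance, k=50, n_relevant=None):
    """AP@k for one query, given a ranked list of 0/1 relevance labels."""
    ranked = list(ranked_relevance)[:k]
    hits, precision_sum = 0, 0.0
    for rank, rel in enumerate(ranked, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank
    denominator = n_relevant if n_relevant is not None else max(hits, 1)
    return precision_sum / denominator

# Toy ranking for one query: relevant images retrieved at ranks 1, 3 and 6.
print(average_precision_at_k([1, 0, 1, 0, 0, 1, 0], k=50, n_relevant=3))  # ~0.72
```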
null | https://openreview.net/forum?id=jSKtxmxc0M | @inproceedings{
lin2024videogui,
title={Video{GUI}: A Benchmark for {GUI} Automation from Instructional Videos},
author={Kevin Qinghong Lin and Linjie Li and Difei Gao and Qinchen WU and Mingyi Yan and Zhengyuan Yang and Lijuan Wang and Mike Zheng Shou},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=jSKtxmxc0M}
} | Graphical User Interface (GUI) automation holds significant promise for enhancing human productivity by assisting with computer tasks. Existing task formulations primarily focus on simple tasks that can be specified by a single, language-only instruction, such as “Insert a new slide.” In this work, we introduce VideoGUI, a novel multi-modal benchmark designed to evaluate GUI assistants on visual-centric GUI tasks. Sourced from high-quality web instructional videos, our benchmark focuses on tasks involving professional and novel software (e.g., Adobe Photoshop or Stable Diffusion WebUI) and complex activities (e.g., video editing). VideoGUI evaluates GUI assistants through a hierarchical process, allowing for identification of the specific levels at which they may fail: (i) high-level planning: reconstruct procedural subtasks from visual conditions without language descriptions; (ii) middle-level planning: generate sequences of precise action narrations based on visual state (i.e., screenshot) and goals; (iii) atomic action execution: perform specific actions such as accurately clicking designated elements. For each level, we design evaluation metrics across individual dimensions to provide clear signals, such as individual performance in clicking, dragging, typing, and scrolling for atomic action execution. Our evaluation on VideoGUI reveals that even the SoTA large multimodal model GPT4o performs poorly on visual-centric GUI tasks, especially for high-level planning. The data and code are available at https://github.com/showlab/videogui. | VideoGUI: A Benchmark for GUI Automation from Instructional Videos | [
"Kevin Qinghong Lin",
"Linjie Li",
"Difei Gao",
"Qinchen WU",
"Mingyi Yan",
"Zhengyuan Yang",
"Lijuan Wang",
"Mike Zheng Shou"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2406.10227 | [
""
] | https://huggingface.co/papers/2406.10227 | 6 | 9 | 1 | 8 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=j6PTT6NB2O | @inproceedings{
xu2024stresstesting,
title={Stress-Testing Long-Context Language Models with Lifelong {ICL} and Task Haystack},
author={Xiaoyue Xu and Qinyuan Ye and Xiang Ren},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=j6PTT6NB2O}
} | We introduce Lifelong ICL, a problem setting that challenges long-context language models (LMs) to learn a sequence of language tasks through in-context learning (ICL). We further introduce Task Haystack, an evaluation suite dedicated to assessing and diagnosing how long-context LMs utilize contexts in Lifelong ICL. When given a task instruction and test inputs, long-context LMs are expected
to leverage the relevant demonstrations in the Lifelong ICL prompt, avoid distraction and interference from other tasks, and achieve test accuracies that are not significantly worse than those of the Single-task ICL baseline.
Task Haystack draws inspiration from the widely-adopted “needle-in-a-haystack” (NIAH) evaluation, but presents distinct new challenges. It requires models (1) to utilize the contexts at a deeper level, rather than resorting to simple copying and pasting; (2) to navigate through long streams of evolving topics and tasks, proxying the complexities and dynamism of contexts in real-world scenarios. Additionally, Task Haystack inherits the controllability of NIAH, providing model developers with tools and visualizations to identify model vulnerabilities effectively.
We benchmark 14 long-context LMs using Task Haystack, finding that frontier models like GPT-4o still struggle with the setting, failing on 15% of cases on average. Most open-weight models further lag behind by a large margin, with failure rates reaching up to 61%. In our controlled analysis, we identify factors such as distraction and recency bias as contributors to these failure cases. Further, performance declines when task instructions are paraphrased at test time or when ICL demonstrations are repeated excessively, raising concerns about the robustness, instruction understanding, and true context utilization of long-context LMs. We release our code and data to encourage future research that investigates and addresses these limitations. | Stress-Testing Long-Context Language Models with Lifelong ICL and Task Haystack | [
"Xiaoyue Xu",
"Qinyuan Ye",
"Xiang Ren"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.16695 | [
"https://github.com/ink-usc/lifelong-icl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=j4CRWz418M | @inproceedings{
zhu2024are,
title={Are Large Language Models Good Statisticians?},
author={Yizhang Zhu and Shiyin Du and Boyan Li and Yuyu Luo and Nan Tang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=j4CRWz418M}
} | Large Language Models (LLMs) have demonstrated impressive capabilities across a range of scientific tasks including mathematics, physics, and chemistry. Despite their successes, the effectiveness of LLMs in handling complex statistical tasks remains systematically under-explored. To bridge this gap, we introduce StatQA, a new benchmark designed for statistical analysis tasks. StatQA comprises 11,623 examples tailored to evaluate LLMs' proficiency in specialized statistical tasks and their applicability assessment capabilities, particularly for hypothesis testing methods. We systematically experiment with representative LLMs using various prompting strategies and show that even state-of-the-art models such as GPT-4o achieve a best performance of only 64.83%, indicating significant room for improvement. Notably, while open-source LLMs (e.g. LLaMA-3) show limited capability, those fine-tuned ones exhibit marked improvements, outperforming all in-context learning-based methods (e.g. GPT-4o). Moreover, our comparative human experiments highlight a striking contrast in error types between LLMs and humans: LLMs primarily make applicability errors, whereas humans mostly make statistical task confusion errors. This divergence highlights distinct areas of proficiency and deficiency, suggesting that combining LLM and human expertise could lead to complementary strengths, inviting further investigation into their collaborative potential. Our source code and data are available at https://statqa.github.io/. | Are Large Language Models Good Statisticians? | [
"Yizhang Zhu",
"Shiyin Du",
"Boyan Li",
"Yuyu Luo",
"Nan Tang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.07815 | [
"https://github.com/derrickzhuyz/statqa"
] | https://huggingface.co/papers/2406.07815 | 0 | 0 | 0 | 5 | [
"sunatte/txt2sql",
"MachoMaheen/devdock4bit"
] | [] | [
"smarttang/blingsec"
] | [
"sunatte/txt2sql",
"MachoMaheen/devdock4bit"
] | [] | [
"smarttang/blingsec"
] | 1 |
null | https://openreview.net/forum?id=itBDglVylS | @inproceedings{
shao2024nyu,
title={{NYU} {CTF} Bench: A Scalable Open-Source Benchmark Dataset for Evaluating {LLM}s in Offensive Security},
author={Minghao Shao and Sofija Jancheska and Meet Udeshi and Brendan Dolan-Gavitt and Haoran Xi and Kimberly Milner and Boyuan Chen and Max Yin and Siddharth Garg and Prashanth Krishnamurthy and Farshad Khorrami and Ramesh Karri and Muhammad Shafique},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=itBDglVylS}
} | Large Language Models (LLMs) are being deployed across various domains today. However, their capacity to solve Capture the Flag (CTF) challenges in cybersecurity has not been thoroughly evaluated. To address this, we develop a novel method to assess LLMs in solving CTF challenges by creating a scalable, open-source benchmark database specifically designed for these applications. This database includes metadata for LLM testing and adaptive learning, compiling a diverse range of CTF challenges from popular competitions. Utilizing the advanced function calling capabilities of LLMs, we build a fully automated system with an enhanced workflow and support for external tool calls. Our benchmark dataset and automated framework allow us to evaluate the performance of five LLMs, encompassing both black-box and open-source models. This work lays the foundation for future research into improving the efficiency of LLMs in interactive cybersecurity tasks and automated task planning. By providing a specialized benchmark, our project offers an ideal platform for developing, testing, and refining LLM-based approaches to vulnerability detection and resolution. Evaluating LLMs on these challenges
and comparing with human performance yields insights into the potential of AI-driven cybersecurity solutions to perform real-world threat management. We make our benchmark dataset open source to the public at https://github.com/NYU-LLM-CTF/NYU_CTF_Bench, along with our automated playground framework at https://github.com/NYU-LLM-CTF/llm_ctf_automation. | NYU CTF Bench: A Scalable Open-Source Benchmark Dataset for Evaluating LLMs in Offensive Security | [
"Minghao Shao",
"Sofija Jancheska",
"Meet Udeshi",
"Brendan Dolan-Gavitt",
"Haoran Xi",
"Kimberly Milner",
"Boyuan Chen",
"Max Yin",
"Siddharth Garg",
"Prashanth Krishnamurthy",
"Farshad Khorrami",
"Ramesh Karri",
"Muhammad Shafique"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=ioAPiwNBE9 | @inproceedings{
chen2024opencdainfty,
title={Open{CDA}-$\infty$: A Closed-loop Benchmarking Platform for End-to-end Evaluation of Cooperative Perception},
author={Chia-Ju Chen and Runsheng Xu and Wei Shao and Junshan Zhang and Zhengzhong Tu},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=ioAPiwNBE9}
} | Vehicle-to-vehicle (V2V) cooperative perception systems hold immense promise for surpassing the limitations of single-agent lidar-based frameworks in autonomous driving. While existing benchmarks have primarily focused on object detection accuracy, a critical gap remains in understanding how the upstream perception performance impacts the system-level behaviors---the ultimate goal of driving safety and efficiency. In this work, we address the crucial question of how the detection accuracy of cooperative detection models natively influences the downstream behavioral planning decisions in an end-to-end cooperative driving simulator. To achieve this, we introduce a novel simulation framework, \textbf{OpenCDA-Loop}, that integrates the OpenCDA cooperative driving simulator with the OpenCOOD cooperative perception toolkit. This feature bundle enables the holistic evaluation of perception models by running any 3D detection models inside OpenCDA in a real-time, online fashion. This enables a closed-loop simulation that directly assesses the impact of perception capabilities on safety-centric planning performance. To challenge and advance the state-of-the-art in V2V perception, we further introduce the \textbf{OPV2V-Safety} dataset, consisting of twelve challenging and pre-crash open scenarios designed following the National Highway Traffic Safety Administration (NHTSA) reports. Our findings reveal that OPV2V-Safety indeed challenges the prior state-of-the-art V2V detection models, while our safety benchmark yielded new insights on evaluating perception models as compared to the results on prior standard benchmarks. We envision that our end-to-end, closed-loop benchmarking platform will drive the community to rethink how perception models are being evaluated at the system level for the future development of safe and efficient autonomous systems. The code and benchmark will be made publicly available. | OpenCDA-∞: A Closed-loop Benchmarking Platform for End-to-end Evaluation of Cooperative Perception | [
"Chia-Ju Chen",
"Runsheng Xu",
"Wei Shao",
"Junshan Zhang",
"Zhengzhong Tu"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=iaahkRzA9f | @inproceedings{
ho2024map,
title={Map It Anywhere: Empowering {BEV} Map Prediction using Large-scale Public Datasets},
author={Cherie Ho and Jiaye Zou and Omar Alama and Sai Mitheran and Benjamin Chiang and Taneesh Gupta and Chen Wang and Nikhil Varma Keetha and Katia P. Sycara and Sebastian Scherer},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=iaahkRzA9f}
} | Top-down Bird's Eye View (BEV) maps are a popular perception representation for ground robot navigation due to their richness and flexibility for downstream tasks. While recent methods have shown promise for predicting BEV maps from First-Person View (FPV) images, their generalizability is limited to small regions captured by current autonomous vehicle-based datasets. In this context, we show that a more scalable approach towards generalizable map prediction can be enabled by using two large-scale crowd-sourced mapping platforms, Mapillary for FPV images and OpenStreetMap for BEV semantic maps.
We introduce Map It Anywhere (MIA), a data engine that enables seamless curation and modeling of labeled map prediction data from existing open-source map platforms. Using our MIA data engine, we demonstrate the ease of automatically collecting a 1.2 million FPV & BEV pair dataset encompassing diverse geographies, landscapes, environmental factors, camera models & capture scenarios. We further train a simple camera model-agnostic model on this data for BEV map prediction.
Extensive evaluations using established benchmarks and our dataset show that the data curated by MIA enables effective pretraining for generalizable BEV map prediction, with zero-shot performance far exceeding baselines trained on existing datasets by 35%. Our analysis highlights the promise of using large-scale public maps for developing & testing generalizable BEV perception, paving the way for more robust autonomous navigation.
Website: mapitanywhere.github.io | Map It Anywhere: Empowering BEV Map Prediction using Large-scale Public Datasets | [
"Cherie Ho",
"Jiaye Zou",
"Omar Alama",
"Sai Mitheran",
"Benjamin Chiang",
"Taneesh Gupta",
"Chen Wang",
"Nikhil Varma Keetha",
"Katia P. Sycara",
"Sebastian Scherer"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=iWc0qE116u | @inproceedings{
koehler2024apebench,
title={{APEB}ench: A Benchmark for Autoregressive Neural Emulators of {PDE}s},
author={Felix Koehler and Simon Niedermayr and r{\"u}diger westermann and Nils Thuerey},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=iWc0qE116u}
} | We introduce the **A**utoregressive **P**DE **E**mulator Benchmark (APEBench), a comprehensive benchmark suite to evaluate autoregressive neural emulators for solving partial differential equations. APEBench is based on JAX and provides a seamlessly integrated differentiable simulation framework employing efficient pseudo-spectral methods, enabling 46 distinct PDEs across 1D, 2D, and 3D. Facilitating systematic analysis and comparison of learned emulators, we propose a novel taxonomy for unrolled training and introduce a unique identifier for PDE dynamics that directly relates to the stability criteria of classical numerical methods. APEBench enables the evaluation of diverse neural architectures, and unlike existing benchmarks, its tight integration of the solver enables support for differentiable physics training and neural-hybrid emulators. Moreover, APEBench emphasizes rollout metrics to understand temporal generalization, providing insights into the long-term behavior of emulating PDE dynamics. In several experiments, we highlight the similarities between neural emulators and numerical simulators. The code is available at [github.com/tum-pbs/apebench](https://github.com/tum-pbs/apebench) and APEBench can be installed via `pip install apebench`. | APEBench: A Benchmark for Autoregressive Neural Emulators of PDEs | [
"Felix Koehler",
"Simon Niedermayr",
"rüdiger westermann",
"Nils Thuerey"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2411.00180 | [
"https://github.com/tum-pbs/apebench"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=iTyOWtcCU2 | @inproceedings{
chen2024stimagekm,
title={{ST}image-1K4M: A histopathology image-gene expression dataset for spatial transcriptomics},
author={Jiawen Chen and Muqing Zhou and Wenrong Wu and Jinwei Zhang and Yun Li and Didong Li},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=iTyOWtcCU2}
} | Recent advances in multi-modal algorithms have driven and been driven by the increasing availability of large image-text datasets, leading to significant strides in various fields, including computational pathology. However, in most existing medical image-text datasets, the text typically provides high-level summaries that may not sufficiently describe sub-tile regions within a large pathology image. For example, an image might cover an extensive tissue area containing cancerous and healthy regions, but the accompanying text might only specify that this image is a cancer slide, lacking the nuanced details needed for in-depth analysis. In this study, we introduce STimage-1K4M, a novel dataset designed to bridge this gap by providing genomic features for sub-tile images. STimage-1K4M contains 1,149 images derived from spatial transcriptomics data, which captures gene expression information at the level of individual spatial spots within a pathology image. Specifically, each image in the dataset is broken down into smaller sub-image tiles, with each tile paired with $15,000-30,000$ dimensional gene expressions. With $4,293,195$ pairs of sub-tile images and gene expressions, STimage-1K4M offers unprecedented granularity, paving the way for a wide range of advanced research in multi-modal data analysis and innovative applications in computational pathology, and beyond. | STimage-1K4M: A histopathology image-gene expression dataset for spatial transcriptomics | [
"Jiawen Chen",
"Muqing Zhou",
"Wenrong Wu",
"Jinwei Zhang",
"Yun Li",
"Didong Li"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.06393 | [
"https://github.com/JiawenChenn/STimage-1K4M"
] | https://huggingface.co/papers/2406.06393 | 0 | 0 | 0 | 6 | [] | [
"jiawennnn/STimage-1K4M"
] | [] | [] | [
"jiawennnn/STimage-1K4M"
] | [] | 1 |
null | https://openreview.net/forum?id=iSwK1YqO7v | @inproceedings{
li2024embodied,
title={Embodied Agent Interface: Benchmarking {LLM}s for Embodied Decision Making},
author={Manling Li and Shiyu Zhao and Qineng Wang and Kangrui Wang and Yu Zhou and Sanjana Srivastava and Cem Gokmen and Tony Lee and Li Erran Li and Ruohan Zhang and Weiyu Liu and Percy Liang and Li Fei-Fei and Jiayuan Mao and Jiajun Wu},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=iSwK1YqO7v}
} | We aim to evaluate Large Language Models (LLMs) for embodied decision making. While a significant body of work has been leveraging LLMs for decision making in embodied environments, we still lack a systematic understanding of their performances, because they are usually applied in different domains for different purposes, and built based on different inputs and outputs. Furthermore, existing evaluations tend to rely solely on a final success rate, making it difficult to pinpoint what ability is missing in LLMs and where the problem lies, which in turn, blocks embodied agents from leveraging LLMs effectively and selectively. To address these limitations, we propose a generalized interface (**Embodied Agent Interface**) that supports the formalization of various types of tasks and input-output specifications of LLM-based modules. Specifically, it allows us to unify 1) a broad set of embodied decision making tasks involving both state and temporally extended goals, 2) four commonly-used LLM-based modules for decision making: goal interpretation, subgoal decomposition, action sequencing, and transition modeling, and 3) a collection of fine-grained metrics which break down evaluation into various types of errors, such as hallucination errors, affordance errors, various types of planning errors, etc. Overall, our benchmark offers a comprehensive and systematic assessment of LLMs' performance for different subtasks, pinpointing the strengths and weaknesses in LLM-powered embodied AI systems, and providing insights for effective and selective use of LLMs in embodied decision making. | Embodied Agent Interface: Benchmarking LLMs for Embodied Decision Making | [
"Manling Li",
"Shiyu Zhao",
"Qineng Wang",
"Kangrui Wang",
"Yu Zhou",
"Sanjana Srivastava",
"Cem Gokmen",
"Tony Lee",
"Li Erran Li",
"Ruohan Zhang",
"Weiyu Liu",
"Percy Liang",
"Li Fei-Fei",
"Jiayuan Mao",
"Jiajun Wu"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2410.07166 | [
"https://github.com/embodied-agent-interface/embodied-agent-interface"
] | https://huggingface.co/papers/2410.07166 | 1 | 1 | 0 | 15 | [] | [
"Inevitablevalor/EmbodiedAgentInterface"
] | [] | [] | [
"Inevitablevalor/EmbodiedAgentInterface"
] | [] | 1 |
null | https://openreview.net/forum?id=iNYrB3ip9F | @inproceedings{
chen2024learning,
title={Learning Superconductivity from Ordered and Disordered Material Structures},
author={Pin Chen and Luoxuan Peng and Rui Jiao and Qing Mo and Zhen WANG and Wenbing Huang and Yang Liu and Yutong Lu},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=iNYrB3ip9F}
} | Superconductivity is a fascinating phenomenon observed in certain materials under certain conditions. However, some critical aspects of it, such as the relationship between superconductivity and materials' chemical/structural features, still need to be understood. Recent successes of data-driven approaches in material science strongly inspire researchers to study this relationship with them, but a corresponding dataset is still lacking. Hence, we present a new dataset for data-driven approaches, namely SuperCon3D, containing both 3D crystal structures and experimental superconducting transition temperature (Tc) for the first time. Based on SuperCon3D, we propose two deep learning methods for designing high Tc superconductors. The first is SODNet, a novel equivariant graph attention model for screening known structures, which differs from existing models in incorporating both ordered and disordered geometric content. The second is a diffusion generative model DiffCSP-SC for creating new structures, which enables high Tc-targeted generation. Extensive experiments demonstrate that both our proposed dataset and models are advantageous for designing new high Tc superconducting candidates. | Learning Superconductivity from Ordered and Disordered Material Structures | [
"Pin Chen",
"Luoxuan Peng",
"Rui Jiao",
"Qing Mo",
"Zhen WANG",
"Wenbing Huang",
"Yang Liu",
"Yutong Lu"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=iNB4uoFQJb | @inproceedings{
ding2024easyhardbench,
title={Easy2Hard-Bench: Standardized Difficulty Labels for Profiling {LLM} Performance and Generalization},
author={Mucong Ding and Chenghao Deng and Jocelyn Choo and Zichu Wu and Aakriti Agrawal and Avi Schwarzschild and Tianyi Zhou and Tom Goldstein and John Langford and Anima Anandkumar and Furong Huang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=iNB4uoFQJb}
} | Despite the abundance of datasets available for assessing large language models (LLMs), the scarcity of continuous and reliable difficulty labels for individual data points, in most cases, curtails their capacity to benchmark model generalization performance across different levels of complexity. Addressing this limitation, we present Easy2Hard, an innovative collection of 6 benchmark datasets featuring standardized difficulty labels spanning a wide range of domains, such as mathematics and programming problems, chess puzzles, and reasoning questions, providing a much-needed tool for those in need of a dataset with varying degrees of difficulty for LLM assessment. We estimate the difficulty of individual problems by leveraging the performance data of many human subjects and LLMs on prominent leaderboards. Harnessing the rich human performance data, we employ widely recognized difficulty ranking systems, including the Item Response Theory (IRT) and Glicko-2 models, to uniformly assign difficulty scores to problems. The Easy2Hard datasets distinguish themselves from previous collections by incorporating a significantly higher proportion of challenging problems, presenting a novel and demanding test for state-of-the-art LLMs. Through extensive experiments conducted with six state-of-the-art LLMs on the Easy2Hard datasets, we offer valuable insights into their performance and generalization capabilities across varying degrees of difficulty, setting the stage for future research in LLM generalization. | Easy2Hard-Bench: Standardized Difficulty Labels for Profiling LLM Performance and Generalization | [
"Mucong Ding",
"Chenghao Deng",
"Jocelyn Choo",
"Zichu Wu",
"Aakriti Agrawal",
"Avi Schwarzschild",
"Tianyi Zhou",
"Tom Goldstein",
"John Langford",
"Anima Anandkumar",
"Furong Huang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2409.18433 | [
""
] | https://huggingface.co/papers/2409.18433 | 1 | 0 | 0 | 11 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=iMtAjdGh1U | @inproceedings{
zhou2024benchx,
title={BenchX: A Unified Benchmark Framework for Medical Vision-Language Pretraining on Chest X-Rays},
author={Yang Zhou and Tan Li Hui Faith and Yanyu Xu and Sicong Leng and Xinxing Xu and Yong Liu and Rick Siow Mong Goh},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=iMtAjdGh1U}
} | Medical Vision-Language Pretraining (MedVLP) shows promise in learning generalizable and transferable visual representations from paired and unpaired medical images and reports. MedVLP can provide useful features to downstream tasks and facilitate adapting task-specific models to new setups using fewer examples. However, existing MedVLP methods often differ in terms of datasets, preprocessing, and finetuning implementations. This poses great challenges in evaluating how well a MedVLP method generalizes to various clinically-relevant tasks due to the lack of a unified, standardized, and comprehensive benchmark. To fill this gap, we propose BenchX, a unified benchmark framework that enables head-to-head comparison and systematic analysis between MedVLP methods using public chest X-ray datasets. Specifically, BenchX is composed of three components: 1) Comprehensive datasets covering nine datasets and four medical tasks; 2) Benchmark suites to standardize data preprocessing, train-test splits, and parameter selection; 3) Unified finetuning protocols that accommodate heterogeneous MedVLP methods for consistent task adaptation in classification, segmentation, and report generation, respectively. Utilizing BenchX, we establish baselines for nine state-of-the-art MedVLP methods and find that the performance of some early MedVLP methods can be enhanced to surpass more recent ones, prompting a revisiting of the developments and conclusions from prior works in MedVLP. Our code is available at https://github.com/yangzhou12/BenchX. | BenchX: A Unified Benchmark Framework for Medical Vision-Language Pretraining on Chest X-Rays | [
"Yang Zhou",
"Tan Li Hui Faith",
"Yanyu Xu",
"Sicong Leng",
"Xinxing Xu",
"Yong Liu",
"Rick Siow Mong Goh"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2410.21969 | [
"https://github.com/yangzhou12/benchx"
] | https://huggingface.co/papers/2410.21969 | 1 | 8 | 2 | 7 | [
"youngzhou12/ConVIRT",
"youngzhou12/GLoRIA",
"youngzhou12/M-FLAG",
"youngzhou12/MedKLIP",
"youngzhou12/MGCA-resnet50",
"youngzhou12/MGCA-vit",
"youngzhou12/MRM",
"youngzhou12/PTUnifier",
"youngzhou12/REFERS"
] | [] | [] | [
"youngzhou12/ConVIRT",
"youngzhou12/GLoRIA",
"youngzhou12/M-FLAG",
"youngzhou12/MedKLIP",
"youngzhou12/MGCA-resnet50",
"youngzhou12/MGCA-vit",
"youngzhou12/MRM",
"youngzhou12/PTUnifier",
"youngzhou12/REFERS"
] | [] | [] | 1 |
null | https://openreview.net/forum?id=iJAOpsXo2I | @inproceedings{
alam2024ctibench,
title={{CTIB}ench: A Benchmark for Evaluating {LLM}s in Cyber Threat Intelligence},
author={Md Tanvirul Alam and Dipkamal Bhusal and Le Nguyen and Nidhi Rastogi},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=iJAOpsXo2I}
} | Cyber threat intelligence (CTI) is crucial in today's cybersecurity landscape, providing essential insights to understand and mitigate the ever-evolving cyber threats. The recent rise of Large Language Models (LLMs) has shown potential in this domain, but concerns about their reliability, accuracy, and hallucinations persist. While existing benchmarks provide general evaluations of LLMs, there are no benchmarks that address the practical and applied aspects of CTI-specific tasks. To bridge this gap, we introduce CTIBench, a benchmark designed to assess LLMs' performance in CTI applications. CTIBench includes multiple datasets focused on evaluating knowledge acquired by LLMs in the cyber-threat landscape. Our evaluation of several state-of-the-art models on these tasks provides insights into their strengths and weaknesses in CTI contexts, contributing to a better understanding of LLM capabilities in CTI. | CTIBench: A Benchmark for Evaluating LLMs in Cyber Threat Intelligence | [
"Md Tanvirul Alam",
"Dipkamal Bhusal",
"Le Nguyen",
"Nidhi Rastogi"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2406.07599 | [
"https://github.com/xashru/cti-bench"
] | https://huggingface.co/papers/2406.07599 | 1 | 0 | 0 | 4 | [] | [
"AI4Sec/cti-bench"
] | [] | [] | [
"AI4Sec/cti-bench"
] | [] | 1 |
null | https://openreview.net/forum?id=iEN2linUr8 | @inproceedings{
liu2024iibench,
title={{II}-Bench: An Image Implication Understanding Benchmark for Multimodal Large Language Models},
author={Ziqiang Liu and Feiteng Fang and Xi Feng and Xeron Du and Chenhao Zhang and Noah Wang and yuelin bai and Qixuan Zhao and Liyang Fan and CHENGGUANG GAN and Hongquan Lin and Jiaming Li and Yuansheng Ni and Haihong Wu and Yaswanth Narsupalli and Zhigang Zheng and Chengming Li and Xiping Hu and Ruifeng Xu and Xiaojun Chen and Min Yang and Jiaheng Liu and Ruibo Liu and Wenhao Huang and Ge Zhang and Shiwen Ni},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=iEN2linUr8}
} | The rapid advancements in the development of multimodal large language models (MLLMs) have consistently led to new breakthroughs on various benchmarks. In response, numerous challenging and comprehensive benchmarks have been proposed to more accurately assess the capabilities of MLLMs. However, there is a dearth of exploration of the higher-order perceptual capabilities of MLLMs. To fill this gap, we propose the Image Implication understanding Benchmark, II-Bench, which aims to evaluate the model's higher-order perception of images. Through extensive experiments on II-Bench across multiple MLLMs, we have made significant findings. Initially, a substantial gap is observed between the performance of MLLMs and humans on II-Bench. The pinnacle accuracy of MLLMs attains 74.8%, whereas human accuracy averages 90%, peaking at an impressive 98%. Subsequently, MLLMs perform worse on abstract and complex images, suggesting limitations in their ability to understand high-level semantics and capture image details. Finally, it is observed that most models exhibit enhanced accuracy when image sentiment polarity hints are incorporated into the prompts. This observation underscores a notable deficiency in their inherent understanding of image sentiment. We believe that II-Bench will inspire the community to develop the next generation of MLLMs, advancing the journey towards expert artificial general intelligence (AGI). II-Bench is publicly available at https://huggingface.co/datasets/m-a-p/II-Bench. | II-Bench: An Image Implication Understanding Benchmark for Multimodal Large Language Models | [
"Ziqiang Liu",
"Feiteng Fang",
"Xi Feng",
"Xeron Du",
"Chenhao Zhang",
"Noah Wang",
"yuelin bai",
"Qixuan Zhao",
"Liyang Fan",
"CHENGGUANG GAN",
"Hongquan Lin",
"Jiaming Li",
"Yuansheng Ni",
"Haihong Wu",
"Yaswanth Narsupalli",
"Zhigang Zheng",
"Chengming Li",
"Xiping Hu",
"Ruifeng Xu",
"Xiaojun Chen",
"Min Yang",
"Jiaheng Liu",
"Ruibo Liu",
"Wenhao Huang",
"Ge Zhang",
"Shiwen Ni"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.05862 | [
""
] | https://huggingface.co/papers/2406.05862 | 8 | 4 | 0 | 26 | [] | [
"m-a-p/II-Bench"
] | [] | [] | [
"m-a-p/II-Bench"
] | [] | 1 |
null | https://openreview.net/forum?id=iDg6ktCf6W | @inproceedings{
gabriel2024prospect,
title={{PROSPECT} {PTM}s: Rich Labeled Tandem Mass Spectrometry Dataset of Modified Peptides for Machine Learning in Proteomics},
author={Wassim Gabriel and Omar Shouman and Eva Ayla Schr{\"o}der and Florian B{\"o}{\ss}l and Mathias Wilhelm},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=iDg6ktCf6W}
} | Post-Translational Modifications (PTMs) are changes that occur in proteins after synthesis, influencing their structure, function, and cellular behavior. PTMs are essential in cell biology; they regulate protein function and stability, are involved in various cellular processes, and are linked to numerous diseases. A particularly interesting class of PTMs are chemical modifications such as phosphorylation introduced on amino acid side chains because they can drastically alter the physicochemical properties of the peptides once they are present. One or more PTMs can be attached to each amino acid of the peptide sequence. The most commonly applied technique to detect PTMs on proteins is bottom-up Mass Spectrometry-based proteomics (MS), where proteins are digested into peptides and subsequently analyzed using Tandem Mass Spectrometry (MS/MS). While an increasing number of machine learning models are published focusing on MS/MS-related property prediction of unmodified peptides, high-quality reference data for modified peptides is missing, impeding model development for this important class of peptides. To enable researchers to train machine learning models that can accurately predict the properties of modified peptides, we introduce four high-quality labeled datasets for applying machine and deep learning to tasks in MS-based proteomics. The four datasets comprise several subgroups of peptides with 1.2 million unique modified peptide sequences and 30 unique pairs of (amino-acid, PTM), covering both experimentally introduced and naturally occurring modifications on various amino acids. We evaluate the utility and importance of the dataset by providing benchmarking results on models trained with and without modifications and highlighting the impact of including modified sequences on downstream tasks. We demonstrate that predicting the properties of modified peptides is more challenging but has a broad impact since they are often the core of protein functionality and its regulation, and they have a potential role as biomarkers in clinical applications. Our datasets contribute to applied machine learning in proteomics by enabling the research community to experiment with methods to encode PTMs as model inputs and to benchmark against reference data for model comparison. With a proper data split for three common tasks in proteomics, we provide a robust way to evaluate model performance and assess generalization on unseen modified sequences. | PROSPECT PTMs: Rich Labeled Tandem Mass Spectrometry Dataset of Modified Peptides for Machine Learning in Proteomics | [
"Wassim Gabriel",
"Omar Shouman",
"Eva Ayla Schröder",
"Florian Bößl",
"Mathias Wilhelm"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=iACMjECRjV | @inproceedings{
sun2024conceptfactory,
title={ConceptFactory: Facilitate 3D Object Knowledge Annotation with Object Conceptualization},
author={Jianhua Sun and Yuxuan Li and Longfei Xu and Nange Wang and Jiude Wei and Yining Zhang and Cewu Lu},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=iACMjECRjV}
} | We present ConceptFactory, a novel scope to facilitate more efficient annotation of 3D object knowledge by recognizing 3D objects through generalized concepts (i.e. object conceptualization), aiming at promoting machine intelligence to learn comprehensive object knowledge from both vision and robotics aspects. This idea originates from the findings in human cognition research that the perceptual recognition of objects can be explained as a process of arranging generalized geometric components (e.g. cuboids and cylinders). ConceptFactory consists of two critical parts: i) ConceptFactory Suite, a unified toolbox that adopts Standard Concept Template Library (STL-C) to drive a web-based platform for object conceptualization, and ii) ConceptFactory Asset, a large collection of conceptualized objects acquired using ConceptFactory suite. Our approach enables researchers to effortlessly acquire or customize extensive varieties of object knowledge to comprehensively study different object understanding tasks. We validate our idea on a wide range of benchmark tasks from both vision and robotics aspects with state-of-the-art algorithms, demonstrating the high quality and versatility of annotations provided by our approach. Our website is available at https://apeirony.github.io/ConceptFactory. | ConceptFactory: Facilitate 3D Object Knowledge Annotation with Object Conceptualization | [
"Jianhua Sun",
"Yuxuan Li",
"Longfei Xu",
"Nange Wang",
"Jiude Wei",
"Yining Zhang",
"Cewu Lu"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2411.00448 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=i92eyFCQHC | @inproceedings{
lu2024wildvision,
title={WildVision: Evaluating Vision-Language Models in the Wild with Human Preferences},
author={Yujie Lu and Dongfu Jiang and Wenhu Chen and William Yang Wang and Yejin Choi and Bill Yuchen Lin},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=i92eyFCQHC}
} | Recent breakthroughs in vision-language models (VLMs) emphasize the necessity of benchmarking human preferences in real-world multimodal interactions. To address this gap, we launched WildVision-Arena (WV-Arena), an online platform that collects human preferences to evaluate VLMs. We curated WV-Bench by selecting 500 high-quality samples from 8,000 user submissions in WV-Arena. WV-Bench uses GPT-4 as the judge to compare each VLM with Claude-3-Sonnet, achieving a Spearman correlation of 0.94 with the WV-Arena Elo. This significantly outperforms other benchmarks like MMVet, MMMU, and MMStar.
Our comprehensive analysis of 20K real-world interactions reveals important insights into the failure cases of top-performing VLMs. For example, we find that although GPT-4V surpasses many other models like Reka-Flash, Opus, and Yi-VL-Plus in simple visual recognition and reasoning tasks, it still faces challenges with subtle contextual cues, spatial reasoning, visual imagination, and expert domain knowledge. Additionally, current VLMs exhibit issues with hallucinations and safety when intentionally provoked. We are releasing our chat and feedback data to further advance research in the field of VLMs. | WildVision: Evaluating Vision-Language Models in the Wild with Human Preferences | [
"Yujie Lu",
"Dongfu Jiang",
"Wenhu Chen",
"William Yang Wang",
"Yejin Choi",
"Bill Yuchen Lin"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.11069 | [
""
] | https://huggingface.co/papers/2406.11069 | 6 | 13 | 2 | 6 | [] | [
"WildVision/wildvision-bench",
"WildVision/wildvision-arena-data"
] | [
"WildVision/vision-arena",
"Harsha200314/vision-arena1"
] | [] | [
"WildVision/wildvision-bench",
"WildVision/wildvision-arena-data"
] | [
"WildVision/vision-arena",
"Harsha200314/vision-arena1"
] | 1 |
null | https://openreview.net/forum?id=hwbRjslR5N | @inproceedings{
guetta2024visual,
title={Visual Riddles: a Commonsense and World Knowledge Challenge for Large Vision and Language Models},
author={Nitzan Bitton Guetta and Aviv Slobodkin and Aviya Maimon and Eliya Habba and Royi Rassin and Yonatan Bitton and Idan Szpektor and Amir Globerson and Yuval Elovici},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=hwbRjslR5N}
} | Imagine observing someone scratching their arm; to understand why, additional context would be necessary. However, spotting a mosquito nearby would immediately offer a likely explanation for the person’s discomfort, thereby alleviating the need for further information. This example illustrates how subtle visual cues can challenge our cognitive skills and demonstrates the complexity of interpreting visual scenarios. To study these skills, we present Visual Riddles, a benchmark aimed to test vision and language models on visual riddles requiring commonsense and world knowledge. The benchmark comprises 400 visual riddles, each featuring a unique image created by a variety of text-to-image models, question, ground-truth answer, textual hint, and attribution. Human evaluation reveals that existing models lag significantly behind human performance, which is at 82% accuracy, with Gemini-Pro-1.5 leading with 40% accuracy. Our benchmark comes with automatic evaluation tasks to make assessment scalable. These findings underscore the potential of Visual Riddles as a valuable resource for enhancing vision and language models’ capabilities in interpreting complex visual scenarios. Data, code, and leaderboard are available at https://visual-riddles.github.io/. | Visual Riddles: a Commonsense and World Knowledge Challenge for Large Vision and Language Models | [
"Nitzan Bitton Guetta",
"Aviv Slobodkin",
"Aviya Maimon",
"Eliya Habba",
"Royi Rassin",
"Yonatan Bitton",
"Idan Szpektor",
"Amir Globerson",
"Yuval Elovici"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.19474 | [
""
] | https://huggingface.co/papers/2407.19474 | 7 | 23 | 2 | 9 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=hqvWcQ3uzF | @inproceedings{
coursey2024ftaed,
title={{FT}-{AED}: Benchmark Dataset for Early Freeway Traffic Anomalous Event Detection},
author={Austin Coursey and JUNYI JI and Marcos Quinones Grueiro and William Barbour and Yuhang Zhang and Tyler Derr and Gautam Biswas and Daniel Work},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=hqvWcQ3uzF}
} | Early and accurate detection of anomalous events on the freeway, such as accidents, can improve emergency response and clearance. However, existing delays and mistakes from manual crash reporting records make it a difficult problem to solve. Current large-scale freeway traffic datasets are not designed for anomaly detection and ignore these challenges. In this paper, we introduce the first large-scale lane-level freeway traffic dataset for anomaly detection. Our dataset consists of a month of weekday radar detection sensor data collected in 4 lanes along an 18-mile stretch of Interstate 24 heading toward Nashville, TN, comprising over 3.7 million sensor measurements. We also collect official crash reports from the Tennessee Department of Transportation Traffic Management Center and manually label all other potential anomalies in the dataset. To show the potential for our dataset to be used in future machine learning and traffic research, we benchmark numerous deep learning anomaly detection models on our dataset. We find that unsupervised graph neural network autoencoders are a promising solution for this problem and that ignoring spatial relationships leads to decreased performance. We demonstrate that our methods can reduce reporting delays by over 10 minutes on average while detecting 75% of crashes. Our dataset and all preprocessing code needed to get started are publicly released at https://vu.edu/ft-aed/ to facilitate future research. | FT-AED: Benchmark Dataset for Early Freeway Traffic Anomalous Event Detection | [
"Austin Coursey",
"JUNYI JI",
"Marcos Quinones Grueiro",
"William Barbour",
"Yuhang Zhang",
"Tyler Derr",
"Gautam Biswas",
"Daniel Work"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.15283 | [
"https://github.com/acoursey3/freeway-anomaly-data"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=hgl4dYE76J | @inproceedings{
naik2024bucktales,
title={BuckTales: A multi-{UAV} dataset for multi-object tracking and re-identification of wild antelopes},
author={Hemal Naik and Junran Yang and Dipin Das and Margaret C Crofoot and Akanksha Rathore and Vivek H Sridhar},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=hgl4dYE76J}
} | Understanding animal behaviour is central to predicting, understanding, and mitigating impacts of natural and anthropogenic changes on animal populations and ecosystems. However, the challenges of acquiring and processing long-term, ecologically relevant data in wild settings have constrained the scope of behavioural research. The increasing availability of Unmanned Aerial Vehicles (UAVs), coupled with advances in machine learning, has opened new opportunities for wildlife monitoring using aerial tracking. However, the limited availability of datasets with wild animals in natural habitats has hindered progress in automated computer vision solutions for long-term animal tracking. Here, we introduce the first large-scale UAV dataset designed to solve the multi-object tracking (MOT) and re-identification (Re-ID) problems in wild animals, specifically the mating behaviour (or lekking) of blackbuck antelopes. Collected in collaboration with biologists, the MOT dataset includes over 1.2 million annotations including 680 tracks across 12 high-resolution (5.4K) videos, each averaging 66 seconds and featuring 30 to 130 individuals. The Re-ID dataset includes 730 individuals captured with two UAVs simultaneously. The dataset is designed to drive scalable, long-term animal behavior tracking using multiple camera sensors. By providing baseline performance with two detectors, and benchmarking several state-of-the-art tracking methods, our dataset reflects the real-world challenges of tracking wild animals in socially and ecologically relevant contexts. In making these data widely available, we hope to catalyze progress in MOT and Re-ID for wild animals, fostering insights into animal behaviour, conservation efforts, and ecosystem dynamics through automated, long-term monitoring. | BuckTales: A multi-UAV dataset for multi-object tracking and re-identification of wild antelopes | [
"Hemal Naik",
"Junran Yang",
"Dipin Das",
"Margaret C Crofoot",
"Akanksha Rathore",
"Vivek H Sridhar"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=hej9QGCHT6 | @inproceedings{
li2024densefusionm,
title={DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception},
author={Xiaotong Li and Fan Zhang and Haiwen Diao and Yueze Wang and Xinlong Wang and LINGYU DUAN},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=hej9QGCHT6}
} | Existing Multimodal Large Language Models (MLLMs) increasingly emphasize complex understanding of various visual elements, including multiple objects, text information, and spatial relations. Their development for comprehensive visual perception hinges on the availability of high-quality image-text datasets that offer diverse visual elements and thorough image descriptions. However, the scarcity of such hyper-detailed datasets currently hinders progress within the MLLM community. The bottleneck stems from the limited perceptual capabilities of current caption engines, which fall short in providing complete and accurate annotations. To facilitate the cutting-edge research of MLLMs on comprehensive vision perception, we thereby propose Perceptual Fusion, using a low-budget but highly effective caption engine for complete and accurate image descriptions. Specifically, Perceptual Fusion integrates diverse perception experts as image priors to provide explicit information on visual elements and adopts an efficient MLLM as a centric pivot to mimic advanced MLLMs' perception abilities. We carefully select 1M highly representative images from the uncurated LAION dataset and generate dense descriptions using our engine, dubbed DenseFusion-1M. Extensive experiments validate that our engine outperforms its counterparts, where the resulting dataset significantly improves the perception and cognition abilities of existing MLLMs across diverse vision-language benchmarks, especially with high-resolution images as inputs. The code and dataset are available at https://huggingface.co/datasets/BAAI/DenseFusion-1M. | DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception | [
"Xiaotong Li",
"Fan Zhang",
"Haiwen Diao",
"Yueze Wang",
"Xinlong Wang",
"LINGYU DUAN"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.08303 | [
"https://github.com/baaivision/densefusion"
] | https://huggingface.co/papers/2407.08303 | 4 | 17 | 2 | 6 | [] | [
"BAAI/DenseFusion-1M"
] | [] | [] | [
"BAAI/DenseFusion-1M"
] | [] | 1 |
null | https://openreview.net/forum?id=hceKrY4dfC | @inproceedings{
karmakar2024indoor,
title={Indoor Air Quality Dataset with Activities of Daily Living in Low to Middle-income Communities},
author={Prasenjit Karmakar and Swadhin Pradhan and Sandip Chakraborty},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=hceKrY4dfC}
} | In recent years, indoor air pollution has posed a significant threat to our society, claiming over 3.2 million lives annually. Developing nations, such as India, are most affected since lack of knowledge, inadequate regulation, and outdoor air pollution lead to severe daily exposure to pollutants. However, only a limited number of studies have attempted to understand how indoor air pollution affects developing countries like India. To address this gap, we present spatiotemporal measurements of air quality from 30 indoor sites over six months during summer and winter seasons. The sites are geographically located across four regions of type: rural, suburban, and urban, covering the typical low to middle-income population in India. The dataset contains various types of indoor environments (e.g., studio apartments, classrooms, research laboratories, food canteens, and residential households), and can provide the basis for data-driven learning model research aimed at coping with unique pollution patterns in developing countries. This unique dataset demands advanced data cleaning and imputation techniques for handling missing data due to power failure or network outages during data collection. Furthermore, through a simple speech-to-text application, we provide real-time indoor activity labels annotated by occupants. Therefore, environmentalists and ML enthusiasts can utilize this dataset to understand the complex patterns of the pollutants under different indoor activities, identify recurring sources of pollution, forecast exposure, improve floor plans and room structures of modern indoor designs, develop pollution-aware recommender systems, etc. | Indoor Air Quality Dataset with Activities of Daily Living in Low to Middle-income Communities | [
"Prasenjit Karmakar",
"Swadhin Pradhan",
"Sandip Chakraborty"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.14501 | [
"https://github.com/prasenjit52282/dalton-dataset"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=hcOq2buakM | @inproceedings{
reuel2024betterbench,
title={BetterBench: Assessing {AI} Benchmarks, Uncovering Issues, and Establishing Best Practices},
author={Anka Reuel and Amelia Hardy and Chandler Smith and Max Lamparth and Malcolm Hardy and Mykel Kochenderfer},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=hcOq2buakM}
} | AI models are increasingly prevalent in high-stakes environments, necessitating thorough assessment of their capabilities and risks. Benchmarks are popular for measuring these attributes and for comparing model performance, tracking progress, and identifying weaknesses in foundation and non-foundation models. They can inform model selection for downstream tasks and influence policy initiatives. However, not all benchmarks are the same: their quality depends on their design and usability. In this paper, we develop an assessment framework considering 40 best practices across a benchmark's life cycle and evaluate 25 AI benchmarks against it. We find that there exist large quality differences and that commonly used benchmarks suffer from significant issues. We further find that most benchmarks do not report statistical significance of their results nor can results be easily replicated. To support benchmark developers in aligning with best practices, we provide a checklist for minimum quality assurance based on our assessment. We also develop a living repository of benchmark assessments to support benchmark comparability. | BetterBench: Assessing AI Benchmarks, Uncovering Issues, and Establishing Best Practices | [
"Anka Reuel",
"Amelia Hardy",
"Chandler Smith",
"Max Lamparth",
"Malcolm Hardy",
"Mykel Kochenderfer"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=hSAu90mDkC | @inproceedings{
tang2024vilcobench,
title={Vi{LC}o-Bench: {VI}deo Language {CO}ntinual learning Benchmark},
author={Tianqi Tang and Shohreh Deldari and Hao Xue and Celso M de Melo and Flora D. Salim},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=hSAu90mDkC}
} | Video language continual learning involves continuously adapting to information from video and text inputs, enhancing a model’s ability to handle new tasks while retaining prior knowledge. This field is a relatively under-explored area, and establishing appropriate datasets is crucial for facilitating communication and research in this field. In this study, we present the first dedicated benchmark, ViLCo-Bench, designed to evaluate continual learning models across a range of video-text tasks. The dataset comprises ten-minute-long videos and corresponding language queries collected from publicly available datasets. Additionally, we introduce a novel memory-efficient framework that incorporates self-supervised learning and mimics long-term and short-term memory effects. This framework addresses challenges including memory complexity from long video clips, natural language complexity from open queries, and text-video misalignment. We posit that ViLCo-Bench, with greater complexity compared to existing continual learning benchmarks, would serve as a critical tool for exploring the video-language domain, extending beyond conventional class-incremental tasks, and addressing complex and limited annotation issues. The curated data, evaluations, and our novel method are available at https://github.com/cruiseresearchgroup/ViLCo. | ViLCo-Bench: VIdeo Language COntinual learning Benchmark | [
"Tianqi Tang",
"Shohreh Deldari",
"Hao Xue",
"Celso M de Melo",
"Flora D. Salim"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.13123 | [
"https://github.com/cruiseresearchgroup/vilco"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=hQQyetmOxs | @inproceedings{
wu2024a,
title={A Systematic Review of Neur{IPS} Dataset Management Practices},
author={Yiwei Wu and Leah Hope Ajmani and Shayne Longpre and Hanlin Li},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=hQQyetmOxs}
} | As new machine learning methods demand larger training datasets, researchers and developers face significant challenges in dataset management. Although ethics reviews, documentation, and checklists have been established, it remains uncertain whether consistent dataset management practices exist across the community. This lack of a comprehensive overview hinders our ability to diagnose and address fundamental tensions and ethical issues related to managing large datasets. We present a systematic review of datasets published at the NeurIPS Datasets and Benchmarks track, focusing on four key aspects: provenance, distribution, ethical disclosure, and licensing. Our findings reveal that dataset provenance is often unclear due to ambiguous filtering and curation processes. Additionally, a variety of sites are used for dataset hosting, but only a few offer structured metadata and version control. These inconsistencies underscore the urgent need for standardized data infrastructures for the publication and management of datasets. | A Systematic Review of NeurIPS Dataset Management Practices | [
"Yiwei Wu",
"Leah Hope Ajmani",
"Shayne Longpre",
"Hanlin Li"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=hORTHzt2cE | @inproceedings{
liu2024roleagent,
title={RoleAgent: Building, Interacting, and Benchmarking High-quality Role-Playing Agents from Scripts},
author={Jiaheng Liu and Zehao Ni and Haoran Que and Tao Sun and Noah Wang and Jian Yang and JiakaiWang and Hongcheng Guo and Z.Y. Peng and Ge Zhang and Jiayi Tian and Xingyuan Bu and Ke Xu and Wenge Rong and Junran Peng and Zhaoxiang Zhang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=hORTHzt2cE}
} | Believable agents can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication. Recently, generative agents have been proposed to simulate believable human behavior by using Large Language Models. However, the existing method heavily relies on human-annotated agent profiles (e.g., name, age, personality, relationships with others, and so on) for the initialization of each agent, which cannot be scaled up easily. In this paper, we propose a scalable RoleAgent framework to generate high-quality role-playing agents from raw scripts, which includes building and interacting stages. Specifically, in the building stage, we use a hierarchical memory system to extract and summarize the structure and high-level information of each agent from the raw script. In the interacting stage, we propose a novel mechanism with four steps to achieve a high-quality interaction between agents. Finally, we introduce a systematic and comprehensive evaluation benchmark called RoleAgentBench to evaluate the effectiveness of our RoleAgent, which includes 100 and 28 roles for 20 English and 5 Chinese scripts, respectively. Extensive experimental results on RoleAgentBench demonstrate the effectiveness of RoleAgent. | RoleAgent: Building, Interacting, and Benchmarking High-quality Role-Playing Agents from Scripts | [
"Jiaheng Liu",
"Zehao Ni",
"Haoran Que",
"Tao Sun",
"Noah Wang",
"Jian Yang",
"JiakaiWang",
"Hongcheng Guo",
"Z.Y. Peng",
"Ge Zhang",
"Jiayi Tian",
"Xingyuan Bu",
"Ke Xu",
"Wenge Rong",
"Junran Peng",
"Zhaoxiang Zhang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=hMj6jZ6JWU | @inproceedings{
zhang2024empowering,
title={Empowering and Assessing the Utility of Large Language Models in Crop Science},
author={Hang Zhang and Jiawei Sun and Renqi Chen and Wei Liu and Zhonghang Yuan and Xinzhe Zheng and Zhefan Wang and Zhiyuan Yang and Hang Yan and Han-Sen Zhong and Xiqing Wang and Wanli Ouyang and Fan Yang and Nanqing Dong},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=hMj6jZ6JWU}
} | Large language models (LLMs) have demonstrated remarkable efficacy across knowledge-intensive tasks. Nevertheless, their untapped potential in crop science presents an opportunity for advancement. To narrow this gap, we introduce CROP, which includes a novel instruction tuning dataset specifically designed to enhance LLMs’ professional capabilities in the crop science sector, along with a benchmark that serves as a comprehensive evaluation of LLMs’ understanding of the domain knowledge. The CROP dataset is curated through a task-oriented and LLM-human integrated pipeline, comprising 210,038 single-turn and 1,871 multi-turn dialogues related to crop science scenarios. The CROP benchmark includes 5,045 multiple-choice questions covering three difficulty levels. Our experiments based on the CROP benchmark demonstrate notable enhancements in crop science-related tasks when LLMs are fine-tuned with the CROP dataset. To the best of our knowledge, CROP dataset is the first-ever instruction tuning dataset in the crop science domain. We anticipate that CROP will accelerate the adoption of LLMs in the domain of crop science, ultimately contributing to global food production. | Empowering and Assessing the Utility of Large Language Models in Crop Science | [
"Hang Zhang",
"Jiawei Sun",
"Renqi Chen",
"Wei Liu",
"Zhonghang Yuan",
"Xinzhe Zheng",
"Zhefan Wang",
"Zhiyuan Yang",
"Hang Yan",
"Han-Sen Zhong",
"Xiqing Wang",
"Wanli Ouyang",
"Fan Yang",
"Nanqing Dong"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=hHA9qrGZBe | @inproceedings{
wang2024harmonic,
title={{HARMONIC}: Harnessing {LLM}s for Tabular Data Synthesis and Privacy Protection},
author={Yuxin Wang and Duanyu Feng and Yongfu Dai and Zhengyu Chen and Jimin Huang and Sophia Ananiadou and Qianqian Xie and Hao Wang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=hHA9qrGZBe}
} | Data serves as the fundamental basis for advancing deep learning. The tabular data presented in a structured format is highly valuable for modeling and training.
However, even in the era of LLM, obtaining tabular data from sensitive domains remains a challenge due to privacy or copyright concerns.
Therefore, it is urgent to explore methods for effectively using models like LLMs to generate synthetic tabular data that is privacy-preserving yet similar to the original.
In this paper, we introduce HARMONIC, a new framework for tabular data generation and evaluation by LLMs. In the data generation part of our framework, we employ fine-tuning to generate tabular data and enhance privacy, rather than the continued pre-training often used by previous small-scale LLM-based methods. In particular, we construct an instruction fine-tuning dataset based on the idea of the k-nearest neighbors algorithm to inspire LLMs to discover inter-row relationships. By such fine-tuning, LLMs are trained to remember the format and connections of the data rather than the data itself, which reduces the risk of privacy leakage. The experiments find that our tabular data generation achieves performance equivalent to existing methods but with better privacy, as measured by metrics such as MLE and DCR.
In the evaluation part of our framework, we develop DLT, a specific privacy risk metric for LLM synthetic data generation, which quantifies the extent to which the generator itself leaks data. We also develop LLE, a performance evaluation metric for downstream LLM tasks, which is more practical and credible than previous metrics.
The experiments show that our data generation method outperforms the previous methods on the DLT and LLE metrics. | HARMONIC: Harnessing LLMs for Tabular Data Synthesis and Privacy Protection | [
"Yuxin Wang",
"Duanyu Feng",
"Yongfu Dai",
"Zhengyu Chen",
"Jimin Huang",
"Sophia Ananiadou",
"Qianqian Xie",
"Hao Wang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2408.02927 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=hFVpqkRRH1 | @inproceedings{
yun2024webcode,
title={Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal {LLM}s},
author={Sukmin Yun and Haokun Lin and Rusiru Thushara and Mohammad Qazim Bhat and Yongxin Wang and Zutao Jiang and Mingkai Deng and Jinhong Wang and Tianhua Tao and Junbo Li and Haonan Li and Preslav Nakov and Timothy Baldwin and Zhengzhong Liu and Eric P. Xing and Xiaodan Liang and Zhiqiang Shen},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=hFVpqkRRH1}
} | Multimodal large language models (MLLMs) have shown impressive success across modalities such as image, video, and audio in a variety of understanding and generation tasks.
However, current MLLMs are surprisingly poor at understanding webpage screenshots and generating their corresponding HTML code.
To address this problem,
we propose Web2Code, a benchmark consisting of a new large-scale webpage-to-code dataset for instruction tuning and an evaluation framework for the webpage understanding and HTML code translation abilities of MLLMs.
For dataset construction, we leverage pretrained LLMs to enhance existing webpage-to-code datasets as well as generate a diverse pool of new webpages rendered into images.
Specifically, the inputs are webpage images and instructions, while the responses are the webpage's HTML code.
We further include diverse natural language QA pairs about the webpage content in the responses to enable a more comprehensive understanding of the web content.
To evaluate model performance in these tasks, we develop an evaluation framework for testing MLLMs' abilities in webpage understanding and web-to-code generation.
Extensive experiments show that our proposed dataset is beneficial not only to our proposed tasks but also to the general visual domain.
We hope our work will contribute to the development of general MLLMs suitable for web-based content generation and task automation.
Our data and code are available at https://github.com/MBZUAI-LLM/web2code. | Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs | [
"Sukmin Yun",
"Haokun Lin",
"Rusiru Thushara",
"Mohammad Qazim Bhat",
"Yongxin Wang",
"Zutao Jiang",
"Mingkai Deng",
"Jinhong Wang",
"Tianhua Tao",
"Junbo Li",
"Haonan Li",
"Preslav Nakov",
"Timothy Baldwin",
"Zhengzhong Liu",
"Eric P. Xing",
"Xiaodan Liang",
"Zhiqiang Shen"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.20098 | [
"https://github.com/mbzuai-llm/web2code"
] | https://huggingface.co/papers/2406.20098 | 0 | 0 | 0 | 17 | [
"LLM360/CrystalChat-7B-Web2Code",
"qazimbhat1/Crystal-based-MLLM-7B"
] | [
"MBZUAI/Web2Code"
] | [] | [
"LLM360/CrystalChat-7B-Web2Code",
"qazimbhat1/Crystal-based-MLLM-7B"
] | [
"MBZUAI/Web2Code"
] | [] | 1 |
null | https://openreview.net/forum?id=hFDdSd6hSM | @inproceedings{
jung2024do,
title={Do Counterfactually Fair Image Classifiers Satisfy Group Fairness? -- A Theoretical and Empirical Study},
author={Sangwon Jung and Sumin Yu and Sanghyuk Chun and Taesup Moon},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=hFDdSd6hSM}
} | The notion of algorithmic fairness has been actively explored from various aspects of fairness, such as counterfactual fairness (CF) and group fairness (GF). However, the exact relationship between CF and GF remains unclear, especially in image classification tasks; this is because we often cannot collect counterfactual samples regarding a sensitive attribute, which are essential for evaluating CF, from existing images (e.g., a photo of the same person but with different secondary sex characteristics). In this paper, we construct new image datasets for evaluating CF by using a high-quality image editing method and carefully labeling with human annotators. Our datasets, CelebA-CF and LFW-CF, build upon the popular image GF benchmarks; hence, we can evaluate CF and GF simultaneously. We empirically observe that CF does not imply GF in image classification, whereas previous studies on tabular datasets observed the opposite. We theoretically show that it could be due to the existence of a latent attribute $G$ that is correlated with, but not caused by, the sensitive attribute (e.g., secondary sex characteristics are highly correlated with hair length). From this observation, we propose a simple baseline, Counterfactual Knowledge Distillation (CKD), to mitigate such correlation with the sensitive attributes. Extensive experimental results on CelebA-CF and LFW-CF demonstrate that CF-achieving models satisfy GF if we successfully reduce the reliance on $G$ (e.g., using CKD). | Do Counterfactually Fair Image Classifiers Satisfy Group Fairness? – A Theoretical and Empirical Study | [
"Sangwon Jung",
"Sumin Yu",
"Sanghyuk Chun",
"Taesup Moon"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=h7Z2Q36sPk | @inproceedings{
song2024synrsd,
title={Syn{RS}3D: A Synthetic Dataset for Global 3D Semantic Understanding from Monocular Remote Sensing Imagery},
author={Jian Song and Hongruixuan Chen and Weihao Xuan and Junshi Xia and Naoto Yokoya},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=h7Z2Q36sPk}
} | Global semantic 3D understanding from single-view high-resolution remote sensing (RS) imagery is crucial for Earth observation (EO). However, this task faces significant challenges due to the high costs of annotations and data collection, as well as geographically restricted data availability. To address these challenges, synthetic data offer a promising solution by being unrestricted and automatically annotatable, thus enabling the provision of large and diverse datasets. We develop a specialized synthetic data generation pipeline for EO and introduce SynRS3D, the largest synthetic RS dataset. SynRS3D comprises 69,667 high-resolution optical images that cover six different city styles worldwide and feature eight land cover types, precise height information, and building change masks. To further enhance its utility, we develop a novel multi-task unsupervised domain adaptation (UDA) method, RS3DAda, coupled with our synthetic dataset, which facilitates the RS-specific transition from synthetic to real scenarios for land cover mapping and height estimation tasks, ultimately enabling global monocular 3D semantic understanding based on synthetic data. Extensive experiments on various real-world datasets demonstrate the adaptability and effectiveness of our synthetic dataset and the proposed RS3DAda method. SynRS3D and related codes are available at https://github.com/JTRNEO/SynRS3D. | SynRS3D: A Synthetic Dataset for Global 3D Semantic Understanding from Monocular Remote Sensing Imagery | [
"Jian Song",
"Hongruixuan Chen",
"Weihao Xuan",
"Junshi Xia",
"Naoto Yokoya"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2406.18151 | [
"https://github.com/JTRNEO/SynRS3D"
] | https://huggingface.co/papers/2406.18151 | 0 | 1 | 0 | 5 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=h3lddsY5nf | @inproceedings{
pramanick2024spiqa,
title={{SPIQA}: A Dataset for Multimodal Question Answering on Scientific Papers},
author={Shraman Pramanick and Rama Chellappa and Subhashini Venugopalan},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=h3lddsY5nf}
} | Seeking answers to questions within long scientific research articles is a crucial area of study that aids readers in quickly addressing their inquiries. However, existing question-answering (QA) datasets based on scientific papers are limited in scale and focus solely on textual content. We introduce SPIQA (Scientific Paper Image Question Answering), the first large-scale QA dataset specifically designed to interpret complex figures and tables within the context of scientific research articles across various domains of computer science. Leveraging the breadth of expertise and ability of multimodal large language models (MLLMs) to understand figures, we employ automatic and manual curation to create the dataset. We craft an information-seeking task on interleaved images and text that involves multiple images covering a wide variety of plots, charts, tables, schematic diagrams, and result visualizations. SPIQA comprises 270K questions divided into training, validation, and three different evaluation splits. Through extensive experiments with 12 prominent foundational models, we evaluate the ability of current multimodal systems to comprehend the nuanced aspects of research articles. Additionally, we propose a Chain-of-Thought (CoT) evaluation strategy with in-context retrieval that allows fine-grained, step-by-step assessment and improves model performance. We further explore the upper bounds of performance enhancement with additional textual information, highlighting its promising potential for future research and the dataset’s impact on revolutionizing how we interact with scientific literature. | SPIQA: A Dataset for Multimodal Question Answering on Scientific Papers | [
"Shraman Pramanick",
"Rama Chellappa",
"Subhashini Venugopalan"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.09413 | [
"https://github.com/google/spiqa"
] | https://huggingface.co/papers/2407.09413 | 2 | 9 | 3 | 3 | [] | [
"google/spiqa",
"sugiv/spiqa-simplified-for-fuyu8b-transfer-learning"
] | [] | [] | [
"google/spiqa",
"sugiv/spiqa-simplified-for-fuyu8b-transfer-learning"
] | [] | 1 |
null | https://openreview.net/forum?id=h18O23kQzD | @inproceedings{
repasky2024blurd,
title={{BLURD}: Benchmarking and Learning using a Unified Rendering and Diffusion Model},
author={Boris Repasky and Ehsan Abbasnejad and Anthony Dick},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=h18O23kQzD}
} | Recent advancements in pre-trained vision models have made them pivotal in computer vision, emphasizing the need for their thorough evaluation and benchmarking. This evaluation needs to consider various factors of variation, their potential biases, shortcuts, and inaccuracies that ultimately lead to disparate performance in models. Such evaluations are conventionally done using either synthetic data from 2D or 3D rendering software or real-world images in controlled settings. Synthetic methods offer full control and flexibility, while real-world methods are limited by high costs and less adaptability. Moreover, 3D rendering can't yet fully replicate real photography, creating a realism gap.
In this paper, we introduce BLURD--Benchmarking and Learning using a Unified Rendering and Diffusion Model--a novel method combining 3D rendering and Stable Diffusion to bridge this gap in representation learning. With BLURD we create a new family of datasets that allow for the creation of both 3D rendered and photo-realistic images with identical factors. BLURD, therefore, provides deeper insights into the representations learned by various CLIP backbones. The source code for creating the BLURD datasets is available at https://github.com/squaringTheCircle/BLURD | BLURD: Benchmarking and Learning using a Unified Rendering and Diffusion Model | [
"Boris Repasky",
"Ehsan Abbasnejad",
"Anthony Dick"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=gad19kaPzb | @inproceedings{
jignasu2024slicek,
title={Slice-100K: A Multimodal Dataset for Extrusion-based 3D Printing},
author={Anushrut Jignasu and Kelly O. Marshall and Ankush Kumar Mishra and Lucas Nerone Rillo and Baskar Ganapathysubramanian and Aditya Balu and Chinmay Hegde and Adarsh Krishnamurthy},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=gad19kaPzb}
} | G-code (Geometric code) or RS-274 is the most widely used computer numerical control (CNC) and 3D printing programming language. G-code provides machine instructions for the movement of the 3D printer, especially for the nozzle, stage, and extrusion of material for extrusion-based additive manufacturing. Currently, there does not exist a large repository of curated CAD models along with their corresponding G-code files for additive manufacturing. To address this issue, we present Slice-100K, a first-of-its-kind dataset of over 100,000 G-code files, along with their tessellated CAD model, LVIS (Large Vocabulary Instance Segmentation) categories, geometric properties, and renderings. We build our dataset from triangulated meshes derived from Objaverse-XL and Thingi10K datasets. We demonstrate the utility of this dataset by finetuning GPT-2 on a subset of the dataset for G-code translation from a legacy G-code format (Sailfish) to a more modern, widely used format (Marlin). Our dataset can be found here. Slice-100K will be the first step in developing a multimodal foundation model for digital manufacturing. | Slice-100K: A Multimodal Dataset for Extrusion-based 3D Printing | [
"Anushrut Jignasu",
"Kelly O. Marshall",
"Ankush Kumar Mishra",
"Lucas Nerone Rillo",
"Baskar Ganapathysubramanian",
"Aditya Balu",
"Chinmay Hegde",
"Adarsh Krishnamurthy"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.04180 | [
""
] | https://huggingface.co/papers/2407.04180 | 0 | 0 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=gViJjwRUlM | @inproceedings{
turishcheva2024retrospective,
title={Retrospective for the Dynamic Sensorium Competition for predicting large-scale mouse primary visual cortex activity from videos},
author={Polina Turishcheva and Paul G. Fahey and Michaela Vystr{\v{c}}ilov{\'a} and Laura Hansel and Rachel E Froebe and Kayla Ponder and Yongrong Qiu and Konstantin Friedrich Willeke and Mohammad Bashiri and Ruslan Baikulov and Yu Zhu and Lei Ma and Shan Yu and Tiejun Huang and Bryan M. Li and Wolf De Wulf and Nina Kudryashova and Matthias H. Hennig and Nathalie Rochefort and Arno Onken and Eric Wang and Zhiwei Ding and Andreas S. Tolias and Fabian H. Sinz and Alexander S Ecker},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=gViJjwRUlM}
} | Understanding how biological visual systems process information is challenging because of the nonlinear relationship between visual input and neuronal responses.
Artificial neural networks allow computational neuroscientists to create predictive models that connect biological and machine vision.
Machine learning has benefited tremendously from benchmarks that compare different models on the same task under standardized conditions.
However, there was no standardized benchmark to identify state-of-the-art dynamic models of the mouse visual system.
To address this gap, we established the SENSORIUM 2023 Benchmark Competition with dynamic input, featuring a new large-scale dataset from the primary visual cortex of ten mice.
This dataset includes responses from 78,853 neurons to 2 hours of dynamic stimuli per neuron, together with behavioral measurements such as running speed, pupil dilation, and eye movements.
The competition ranked models in two tracks based on predictive performance for neuronal responses on a held-out test set: one focusing on predicting in-domain natural stimuli and another on out-of-distribution (OOD) stimuli to assess model generalization.
As part of the NeurIPS 2023 Competition Track, we received more than 160 model submissions from 22 teams.
Several new architectures for predictive models were proposed, and the winning teams improved the previous state-of-the-art model by 50\%.
Access to the dataset as well as the benchmarking infrastructure will remain online at www.sensorium-competition.net. | Retrospective for the Dynamic Sensorium Competition for predicting large-scale mouse primary visual cortex activity from videos | [
"Polina Turishcheva",
"Paul G. Fahey",
"Michaela Vystrčilová",
"Laura Hansel",
"Rachel E Froebe",
"Kayla Ponder",
"Yongrong Qiu",
"Konstantin Friedrich Willeke",
"Mohammad Bashiri",
"Ruslan Baikulov",
"Yu Zhu",
"Lei Ma",
"Shan Yu",
"Tiejun Huang",
"Bryan M. Li",
"Wolf De Wulf",
"Nina Kudryashova",
"Matthias H. Hennig",
"Nathalie Rochefort",
"Arno Onken",
"Eric Wang",
"Zhiwei Ding",
"Andreas S. Tolias",
"Fabian H. Sinz",
"Alexander S Ecker"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.09100 | [
"https://github.com/ecker-lab/sensorium_2023"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=gPLE4siNjO | @inproceedings{
liu2024a,
title={A Retrospective on the Robot Air Hockey Challenge: Benchmarking Robust, Reliable, and Safe Learning Techniques for Real-world Robotics},
author={Puze Liu and Jonas G{\"u}nster and Niklas Funk and Simon Gr{\"o}ger and Dong Chen and Haitham Bou Ammar and Julius Jankowski and Ante Mari{\'c} and Sylvain Calinon and Andrej Orsula and Miguel Olivares-Mendez and Hongyi Zhou and Rudolf Lioutikov and Gerhard Neumann and Amarildo Likmeta and Amirhossein Zhalehmehrabi and Thomas Bonenfant and Marcello Restelli and Davide Tateo and Ziyuan Liu and Jan Peters},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=gPLE4siNjO}
} | Machine learning methods have a groundbreaking impact in many application domains, but their application on real robotic platforms is still limited.
Despite the many challenges associated with combining machine learning technology with robotics, robot learning remains one of the most promising directions for enhancing the capabilities of robots.
When deploying learning-based approaches on real robots, extra effort is required to address the challenges posed by various real-world factors. To investigate the key factors influencing real-world deployment and to encourage original solutions from different researchers, we organized the Robot Air Hockey Challenge at the NeurIPS 2023 conference.
We selected the air hockey task as a benchmark, encompassing low-level robotics problems and high-level tactics. Different from other machine learning-centric benchmarks, participants need to tackle practical challenges in robotics, such as the sim-to-real gap, low-level control issues, safety problems, real-time requirements, and the limited availability of real-world data. Furthermore, we focus on a dynamic environment, removing the typical assumption of quasi-static motions of other real-world benchmarks.
The competition's results show that solutions combining learning-based approaches with prior knowledge outperform those relying solely on data when real-world deployment is challenging.
Our ablation study reveals which real-world factors may be overlooked when building a learning-based solution.
The successful real-world air hockey deployment of best-performing agents sets the foundation for future competitions and follow-up research directions. | A Retrospective on the Robot Air Hockey Challenge: Benchmarking Robust, Reliable, and Safe Learning Techniques for Real-world Robotics | [
"Puze Liu",
"Jonas Günster",
"Niklas Funk",
"Simon Gröger",
"Dong Chen",
"Haitham Bou Ammar",
"Julius Jankowski",
"Ante Marić",
"Sylvain Calinon",
"Andrej Orsula",
"Miguel Olivares-Mendez",
"Hongyi Zhou",
"Rudolf Lioutikov",
"Gerhard Neumann",
"Amarildo Likmeta",
"Amirhossein Zhalehmehrabi",
"Thomas Bonenfant",
"Marcello Restelli",
"Davide Tateo",
"Ziyuan Liu",
"Jan Peters"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2411.05718 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=gP4aAi7q8S | @inproceedings{
parmar2024causalchaos,
title={CausalChaos! Dataset for Comprehensive Causal Action Question Answering Over Longer Causal Chains Grounded in Dynamic Visual Scenes},
author={Paritosh Parmar and Eric Peh and Ruirui Chen and Ting En Lam and Yuhan Chen and Elston Tan and Basura Fernando},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=gP4aAi7q8S}
} | Causal video question answering (QA) has garnered increasing interest, yet existing datasets often lack depth in causal reasoning. To address this gap, we capitalize on the unique properties of cartoons and construct CausalChaos!, a novel, challenging causal Why-QA dataset built upon the iconic "Tom and Jerry" cartoon series. Cartoons use the principles of animation that allow animators to create expressive, unambiguous causal relationships between events to form a coherent storyline. Utilizing these properties, along with thought-provoking questions and multi-level answers (answer and detailed causal explanation), our questions involve causal chains that interconnect multiple dynamic interactions between characters and visual scenes. These factors demand that models solve more challenging, yet well-defined causal relationships. We also introduce hard incorrect answer mining, including a causally confusing version that is even more challenging. While models perform well, there is much room for improvement, especially on open-ended answers. We identify more advanced/explicit causal relationship modeling and joint modeling of vision and language as the immediate areas for future efforts to focus upon. Along with the other complementary datasets, our new challenging dataset will pave the way for these developments in the field. Dataset and Code: https://github.com/LUNAProject22/CausalChaos | CausalChaos! Dataset for Comprehensive Causal Action Question Answering Over Longer Causal Chains Grounded in Dynamic Visual Scenes | [
"Paritosh Parmar",
"Eric Peh",
"Ruirui Chen",
"Ting En Lam",
"Yuhan Chen",
"Elston Tan",
"Basura Fernando"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2404.01299 | [
"https://github.com/lunaproject22/causalchaos"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=g1Zn0XPUFF | @inproceedings{
al-tahan2024unibench,
title={UniBench: Visual Reasoning Requires Rethinking Vision-Language Beyond Scaling},
author={Haider Al-Tahan and Quentin Garrido and Randall Balestriero and Diane Bouchacourt and Caner Hazirbas and Mark Ibrahim},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=g1Zn0XPUFF}
} | Significant research efforts have been made to scale and improve vision-language model (VLM) training approaches.
Yet, with an ever-growing number of benchmarks,
researchers are tasked with the heavy burden of implementing each protocol, bearing a non-trivial computational cost, and making sense of how all these benchmarks translate into meaningful axes of progress.
To facilitate a systematic evaluation of VLM progress, we introduce UniBench: a unified implementation of 50+ VLM benchmarks spanning a range of carefully categorized vision-centric capabilities from object recognition to spatial awareness, counting, and much more. We showcase the utility of UniBench for measuring progress by evaluating nearly 60 publicly available vision-language models, trained on scales of up to 12.8B samples. We find that while scaling training data or model size can boost many vision-language model capabilities, scaling offers little benefit for reasoning or relations. Surprisingly, we also discover today's best VLMs struggle on simple digit recognition and counting tasks, e.g. MNIST, which much simpler networks can solve. Where scale falls short, we find that more precise interventions, such as data quality or tailored-learning objectives offer more promise. For practitioners, we also offer guidance on selecting a suitable VLM for a given application. Finally, we release an easy-to-run UniBench code-base with the full set of 50+ benchmarks and comparisons across 59 models as well as a distilled, representative set of benchmarks that runs in 5 minutes on a single GPU. UniBench with model evaluations on all benchmarks are provided as a toolbox at: https://github.com/facebookresearch/unibench | UniBench: Visual Reasoning Requires Rethinking Vision-Language Beyond Scaling | [
"Haider Al-Tahan",
"Quentin Garrido",
"Randall Balestriero",
"Diane Bouchacourt",
"Caner Hazirbas",
"Mark Ibrahim"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2408.04810 | [
"https://github.com/facebookresearch/unibench"
] | https://huggingface.co/papers/2408.04810 | 6 | 22 | 2 | 6 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=fuD0h4R1IL | @inproceedings{
liu2024timemmd,
title={Time-{MMD}: Multi-Domain Multimodal Dataset for Time Series Analysis},
author={Haoxin Liu and Shangqing Xu and Zhiyuan Zhao and Lingkai Kong and Harshavardhan Kamarthi and Aditya B. Sasanur and Megha Sharma and Jiaming Cui and Qingsong Wen and Chao Zhang and B. Aditya Prakash},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=fuD0h4R1IL}
} | Time series data are ubiquitous across a wide range of real-world domains.
While real-world time series analysis (TSA) requires human experts to integrate numerical series data with multimodal domain-specific knowledge, most existing TSA models rely solely on numerical data, overlooking the significance of information beyond numerical series.
This oversight is due to the untapped potential of textual series data and the absence of a comprehensive, high-quality multimodal dataset.
To overcome this obstacle, we introduce Time-MMD, the first multi-domain, multimodal time series dataset covering 9 primary data domains. Time-MMD ensures fine-grained modality alignment, eliminates data contamination, and provides high usability.
Additionally, we develop MM-TSFlib, the first multimodal time series forecasting (TSF) library, seamlessly pipelining multimodal TSF evaluations based on Time-MMD for in-depth analyses.
Extensive experiments conducted on Time-MMD through MM-TSFlib demonstrate significant performance enhancements by extending unimodal TSF to multimodality, evidenced by over 15\% mean squared error reduction in general, and up to 40\% in domains with rich textual data. More importantly, our datasets and library open up broader applications, impacts, and research topics to advance TSA.
The dataset and library are available at https://github.com/AdityaLab/Time-MMD and https://github.com/AdityaLab/MM-TSFlib. | Time-MMD: Multi-Domain Multimodal Dataset for Time Series Analysis | [
"Haoxin Liu",
"Shangqing Xu",
"Zhiyuan Zhao",
"Lingkai Kong",
"Harshavardhan Kamarthi",
"Aditya B. Sasanur",
"Megha Sharma",
"Jiaming Cui",
"Qingsong Wen",
"Chao Zhang",
"B. Aditya Prakash"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.08627 | [
"https://github.com/adityalab/mm-tsflib"
] | https://huggingface.co/papers/2406.08627 | 1 | 0 | 0 | 11 | [] | [
"lamblamb/time-mmd-economy"
] | [] | [] | [
"lamblamb/time-mmd-economy"
] | [] | 1 |
null | https://openreview.net/forum?id=fq7WmnJ3iV | @inproceedings{
obi2024value,
title={Value Imprint: A Technique for Auditing the Human Values Embedded in {RLHF} Datasets},
author={Ike Obi and Rohan Pant and Srishti Shekhar Agrawal and Maham Ghazanfar and Aaron Basiletti},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=fq7WmnJ3iV}
} | LLMs are increasingly fine-tuned using RLHF datasets to align them with human preferences and values. However, very limited research has investigated which specific human values are operationalized through these datasets. In this paper, we introduce Value Imprint, a framework for auditing and classifying the human values embedded within RLHF datasets. To investigate the viability of this framework, we conducted three case study experiments by auditing the Anthropic/hh-rlhf, OpenAI WebGPT Comparisons, and Alpaca GPT-4-LLM datasets to examine the human values embedded within them. Our analysis involved a two-phase process. During the first phase, we developed a taxonomy of human values through an integrated review of prior works from philosophy, axiology, and ethics. Then, we applied this taxonomy to annotate 6,501 RLHF preferences. During the second phase, we employed the labels generated from the annotation as ground truth data for training a transformer-based machine learning model to audit and classify the three RLHF datasets. Through this approach, we discovered that information-utility values, including Wisdom/Knowledge and Information Seeking, were the most dominant human values within all three RLHF datasets. In contrast, prosocial and democratic values, including Well-being, Justice, and Human/Animal Rights, were the least represented human values. These findings have significant implications for developing language models that align with societal values and norms. We contribute our datasets to support further research in this area. | Value Imprint: A Technique for Auditing the Human Values Embedded in RLHF Datasets | [
"Ike Obi",
"Rohan Pant",
"Srishti Shekhar Agrawal",
"Maham Ghazanfar",
"Aaron Basiletti"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2411.11937 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=fgJ9OvJPZB | @inproceedings{
tsesmelis2024reassembling,
title={Re-assembling the past: The Re{PAIR} dataset and benchmark for real world 2D and 3D puzzle solving},
author={Theodore Tsesmelis and Luca Palmieri and Marina Khoroshiltseva and Adeela Islam and Gur Elkin and Ofir Itzhak Shahar and Gianluca Scarpellini and Stefano Fiorini and Yaniv Ohayon and Nadav Alali and Sinem Aslan and Pietro Morerio and Sebastiano Vascon and Elena gravina and Maria Cristina Napolitano and Giuseppe Scarpati and Gabriel zuchtriegel and Alexandra Sp{\"u}hler and Michel E. Fuchs and Stuart James and Ohad Ben-Shahar and Marcello Pelillo and Alessio Del Bue},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=fgJ9OvJPZB}
} | This paper proposes the RePAIR dataset that represents a challenging benchmark to test modern computational and data driven methods for puzzle-solving and reassembly tasks. Our dataset has unique properties that are uncommon to current benchmarks for 2D and 3D puzzle solving. The fragments and fractures are realistic, caused by a collapse of a fresco during a World War II bombing at the Pompeii archaeological park. The fragments are also eroded and have missing pieces with irregular shapes and different dimensions, challenging further the reassembly algorithms. The dataset is multi-modal providing high resolution images with characteristic pictorial elements, detailed 3D scans of the fragments and meta-data annotated by the archaeologists. Ground truth has been generated through several years of unceasing fieldwork, including the excavation and cleaning of each fragment, followed by manual puzzle solving by archaeologists of a subset of approx. 1000 pieces among the 16000 available. After digitizing all the fragments in 3D, a benchmark was prepared to challenge current reassembly and puzzle-solving methods that often solve more simplistic synthetic scenarios. The tested baselines show that there clearly exists a gap to fill in solving this computationally complex problem. | Re-assembling the past: The RePAIR dataset and benchmark for real world 2D and 3D puzzle solving | [
"Theodore Tsesmelis",
"Luca Palmieri",
"Marina Khoroshiltseva",
"Adeela Islam",
"Gur Elkin",
"Ofir Itzhak Shahar",
"Gianluca Scarpellini",
"Stefano Fiorini",
"Yaniv Ohayon",
"Nadav Alali",
"Sinem Aslan",
"Pietro Morerio",
"Sebastiano Vascon",
"Elena gravina",
"Maria Cristina Napolitano",
"Giuseppe Scarpati",
"Gabriel zuchtriegel",
"Alexandra Spühler",
"Michel E. Fuchs",
"Stuart James",
"Ohad Ben-Shahar",
"Marcello Pelillo",
"Alessio Del Bue"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2410.24010 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=fNcyFhTw2f | @inproceedings{
zhu2024advancing,
title={Advancing Video Anomaly Detection: A Concise Review and a New Dataset},
author={Liyun Zhu and Lei Wang and Arjun Raj and Tom Gedeon and Chen Chen},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=fNcyFhTw2f}
} | Video Anomaly Detection (VAD) finds widespread applications in security surveillance, traffic monitoring, industrial monitoring, and healthcare. Despite extensive research efforts, there remains a lack of concise reviews that provide insightful guidance for researchers. Such reviews would serve as quick references to grasp current challenges, research trends, and future directions. In this paper, we present such a review, examining models and datasets from various perspectives. We emphasize the critical relationship between model and dataset, where the quality and diversity of datasets profoundly influence model performance, and dataset development adapts to the evolving needs of emerging approaches. Our review identifies practical issues, including the absence of comprehensive datasets with diverse scenarios. To address this, we introduce a new dataset, Multi-Scenario Anomaly Detection (MSAD), comprising 14 distinct scenarios captured from various camera views. Our dataset has diverse motion patterns and challenging variations, such as different lighting and weather conditions, providing a robust foundation for training superior models. We conduct an in-depth analysis of recent representative models using MSAD and highlight its potential in addressing the challenges of detecting anomalies across diverse and evolving surveillance scenarios. | Advancing Video Anomaly Detection: A Concise Review and a New Dataset | [
"Liyun Zhu",
"Lei Wang",
"Arjun Raj",
"Tom Gedeon",
"Chen Chen"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2402.04857 | [
""
] | https://huggingface.co/papers/2402.04857 | 0 | 0 | 0 | 3 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=f5XZEROoGb | @inproceedings{
pardawala2024subjectiveqa,
title={Subj{ECT}ive-{QA}: Measuring Subjectivity in Earnings Call Transcripts' {QA} Through Six-Dimensional Feature Analysis},
author={Huzaifa Pardawala and Siddhant Sukhani and Agam Shah and Veer Kejriwal and Abhishek Pillai and Rohan Bhasin and Andrew DiBiasio and Tarun Mandapati and Dhruv Adha and Sudheer Chava},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=f5XZEROoGb}
} | Fact-checking is extensively studied in the context of misinformation and disinformation, addressing objective inaccuracies. However, a softer form of misinformation involves responses that are factually correct but lack certain features such as clarity and relevance. This challenge is prevalent in formal Question-Answer (QA) settings such as press conferences in finance, politics, sports, and other domains, where subjective answers can obscure transparency. Despite this, there is a lack of manually annotated datasets for subjective features across multiple dimensions. To address this gap, we introduce SubjECTive-QA, a human annotated dataset on Earnings Call Transcripts' (ECTs) QA sessions as the answers given by company representatives are often open to subjective interpretations and scrutiny. The dataset includes 49,446 annotations for long-form QA pairs across six features: Assertive, Cautious, Optimistic, Specific, Clear, and Relevant. These features are carefully selected to encompass the key attributes that reflect the tone of the answers provided during QA sessions across different domains. Our findings are that the best-performing Pre-trained Language Model (PLM), RoBERTa-base, has similar weighted F1 scores to Llama-3-70b-Chat on features with lower subjectivity, such as Relevant and Clear, with a mean difference of 2.17% in their weighted F1 scores. The models perform significantly better on features with higher subjectivity, such as Specific and Assertive, with a mean difference of 10.01% in their weighted F1 scores. Furthermore, testing SubjECTive-QA's generalizability using QAs from White House Press Briefings and Gaggles yields an average weighted F1 score of 65.97% using our best models for each feature, demonstrating broader applicability beyond the financial domain. SubjECTive-QA is publicly available under the CC BY 4.0 license. | SubjECTive-QA: Measuring Subjectivity in Earnings Call Transcripts' QA Through Six-Dimensional Feature Analysis | [
"Huzaifa Pardawala",
"Siddhant Sukhani",
"Agam Shah",
"Veer Kejriwal",
"Abhishek Pillai",
"Rohan Bhasin",
"Andrew DiBiasio",
"Tarun Mandapati",
"Dhruv Adha",
"Sudheer Chava"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2410.20651 | [
"https://github.com/gtfintechlab/subjective-qa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=f1UL4wNlw6 | @inproceedings{
brahman2024the,
title={The Art of Saying No: Contextual Noncompliance in Language Models},
author={Faeze Brahman and Sachin Kumar and Vidhisha Balachandran and Pradeep Dasigi and Valentina Pyatkin and Abhilasha Ravichander and Sarah Wiegreffe and Nouha Dziri and Khyathi Chandu and Jack Hessel and Yulia Tsvetkov and Noah A. Smith and Yejin Choi and Hannaneh Hajishirzi},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=f1UL4wNlw6}
} | Chat-based language models are designed to be helpful, yet they should not comply with every user request.
While most existing work primarily focuses on refusal of ``unsafe'' queries, we posit that the scope of noncompliance should be broadened. We introduce a comprehensive taxonomy of contextual noncompliance describing when and how models should *not* comply with user requests. Our taxonomy spans a wide range of categories including *incomplete*, *unsupported*, *indeterminate*, and *humanizing* requests (in addition to *unsafe* requests). To test noncompliance capabilities of language models, we use this taxonomy to develop a new evaluation suite of 1000 noncompliance prompts. We find that most existing models show significantly high compliance rates in certain previously understudied categories with models like GPT-4 incorrectly complying with as many as 30\% of requests.
To address these gaps, we explore different training strategies using a synthetically-generated training set of requests and expected noncompliant responses.
Our experiments demonstrate that while direct finetuning of instruction-tuned models can lead to both over-refusal and a decline in general capabilities, using parameter efficient methods like low rank adapters helps to strike a good balance between appropriate noncompliance and other capabilities. | The Art of Saying No: Contextual Noncompliance in Language Models | [
"Faeze Brahman",
"Sachin Kumar",
"Vidhisha Balachandran",
"Pradeep Dasigi",
"Valentina Pyatkin",
"Abhilasha Ravichander",
"Sarah Wiegreffe",
"Nouha Dziri",
"Khyathi Chandu",
"Jack Hessel",
"Yulia Tsvetkov",
"Noah A. Smith",
"Yejin Choi",
"Hannaneh Hajishirzi"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.12043 | [
""
] | https://huggingface.co/papers/2407.12043 | 3 | 4 | 2 | 14 | [] | [
"allenai/coconot"
] | [] | [] | [
"allenai/coconot"
] | [] | 1 |
null | https://openreview.net/forum?id=etdXLAMZoc | @inproceedings{
zhang2024libmoon,
title={Lib{MOON}: A Gradient-based MultiObjective OptimizatioN Library in PyTorch},
author={Xiaoyuan Zhang and Liang Zhao and Yingying Yu and Xi Lin and Yifan Chen and Han Zhao and Qingfu Zhang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=etdXLAMZoc}
} | Multiobjective optimization problems (MOPs) are prevalent in machine learning, with applications in multi-task learning, learning under fairness or robustness constraints, etc. Instead of reducing multiple objective functions into a scalar objective, MOPs aim to optimize for the so-called Pareto optimality or Pareto set learning, which involves optimizing more than one objective function simultaneously, over models with thousands to millions of parameters. Existing benchmark libraries for MOPs mainly focus on evolutionary algorithms, most of which are zeroth-order or meta-heuristic methods that do not effectively utilize higher-order information from objectives and cannot scale to large-scale models with millions of parameters. In light of the above challenges, this paper introduces LibMOON, the first multiobjective optimization library that supports state-of-the-art gradient-based methods, provides a fair and comprehensive benchmark, and is open-sourced for the community. | LibMOON: A Gradient-based MultiObjective OptimizatioN Library in PyTorch | [
"Xiaoyuan Zhang",
"Liang Zhao",
"Yingying Yu",
"Xi Lin",
"Yifan Chen",
"Han Zhao",
"Qingfu Zhang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2409.02969 | [
"https://github.com/xzhang2523/libmoon"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=eRleg6vy0Y | @inproceedings{
lozano2024microbench,
title={Micro-Bench: A Microscopy Benchmark for Vision-Language Understanding},
author={Alejandro Lozano and Jeffrey J Nirschl and James Burgess and Sanket Rajan Gupte and Yuhui Zhang and Alyssa Unell and Serena Yeung-Levy},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=eRleg6vy0Y}
} | Recent advances in microscopy have enabled the rapid generation of terabytes of image data in cell biology and biomedical research. Vision-language models (VLMs) offer a promising solution for large-scale biological image analysis, enhancing researchers’ efficiency, identifying new image biomarkers, and accelerating hypothesis generation and scientific discovery. However, there is a lack of standardized, diverse, and large-scale vision-language benchmarks to evaluate VLMs’ perception and cognition capabilities in biological image understanding. To address this gap, we introduce Micro-Bench, an expert-curated benchmark encompassing 24 biomedical tasks across various scientific disciplines (biology, pathology), microscopy modalities (electron, fluorescence, light), scales (subcellular, cellular, tissue), and organisms in both normal and abnormal states. We evaluate state-of-the-art biomedical, pathology, and general VLMs on Micro-Bench and find that: i) current models struggle on all categories, even for basic tasks such as distinguishing microscopy modalities; ii) current specialist models fine-tuned on biomedical data often perform worse than generalist models; iii) fine-tuning in specific microscopy domains can cause catastrophic forgetting, eroding prior biomedical knowledge encoded in their base model. iv) weight interpolation between fine-tuned and pre-trained models offers one solution to forgetting and improves general performance across biomedical tasks. We release Micro-Bench under a permissive license to accelerate the research and development of microscopy foundation models. | Micro-Bench: A Microscopy Benchmark for Vision-Language Understanding | [
"Alejandro Lozano",
"Jeffrey J Nirschl",
"James Burgess",
"Sanket Rajan Gupte",
"Yuhui Zhang",
"Alyssa Unell",
"Serena Yeung-Levy"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=eOszT2lepG | @inproceedings{
hollidt2024egosim,
title={EgoSim: An Egocentric Multi-view Simulator and Real Dataset for Body-worn Cameras during Motion and Activity},
author={Dominik Hollidt and Paul Streli and Jiaxi Jiang and Yasaman Haghighi and Changlin Qian and Xintong Liu and Christian Holz},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=eOszT2lepG}
} | Research on egocentric tasks in computer vision has mostly focused on head-mounted cameras, such as fisheye cameras or embedded cameras inside immersive headsets.
We argue that the increasing miniaturization of optical sensors will lead to the prolific integration of cameras into many more body-worn devices at various locations.
This will bring fresh perspectives to established tasks in computer vision and benefit key areas such as human motion tracking, body pose estimation, or action recognition---particularly for the lower body, which is typically occluded.
In this paper, we introduce EgoSim, a novel simulator of body-worn cameras that generates realistic egocentric renderings from multiple perspectives across a wearer's body.
A key feature of EgoSim is its use of real motion capture data to render motion artifacts, which are especially noticeable with arm- or leg-worn cameras.
In addition, we introduce MultiEgoView, a dataset of egocentric footage from six body-worn cameras and ground-truth full-body 3D poses during several activities:
119 hours of data are derived from AMASS motion sequences in four high-fidelity virtual environments, which we augment with 5 hours of real-world motion data from 13 participants using six GoPro cameras and 3D body pose references from an Xsens motion capture suit.
We demonstrate EgoSim's effectiveness by training an end-to-end video-only 3D pose estimation network.
Analyzing its domain gap, we show that our dataset and simulator substantially aid training for inference on real-world data.
EgoSim code & MultiEgoView dataset: https://siplab.org/projects/EgoSim | EgoSim: An Egocentric Multi-view Simulator and Real Dataset for Body-worn Cameras during Motion and Activity | [
"Dominik Hollidt",
"Paul Streli",
"Jiaxi Jiang",
"Yasaman Haghighi",
"Changlin Qian",
"Xintong Liu",
"Christian Holz"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=eFPxCNmI7i | @inproceedings{
pal2024semitruths,
title={Semi-Truths: A Large-Scale Dataset of {AI}-Augmented Images for Evaluating Robustness of {AI}-Generated Image detectors},
author={Anisha Pal and Julia Kruk and Mansi Phute and Manognya Bhattaram and Diyi Yang and Duen Horng Chau and Judy Hoffman},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=eFPxCNmI7i}
} | Text-to-image diffusion models have impactful applications in art, design, and entertainment, yet these technologies also pose significant risks by enabling the creation and dissemination of misinformation. Although recent advancements have produced AI-generated image detectors that claim robustness against various augmentations, their true effectiveness remains uncertain. Do these detectors reliably identify images with different levels of augmentation? Are they biased toward specific scenes or data distributions? To investigate, we introduce **Semi-Truths**, featuring $27,600$ real images, $223,400$ masks, and $1,472,700$ AI-augmented images that feature targeted and localized perturbations produced using diverse augmentation techniques, diffusion models, and data distributions. Each augmented image is accompanied by metadata for standardized and targeted evaluation of detector robustness. Our findings suggest that state-of-the-art detectors exhibit varying sensitivities to the types and degrees of perturbations, data distributions, and augmentation methods used, offering new insights into their performance and limitations. The code for the augmentation and evaluation pipeline is available at https://github.com/J-Kruk/SemiTruths. | Semi-Truths: A Large-Scale Dataset of AI-Augmented Images for Evaluating Robustness of AI-Generated Image detectors | [
"Anisha Pal",
"Julia Kruk",
"Mansi Phute",
"Manognya Bhattaram",
"Diyi Yang",
"Duen Horng Chau",
"Judy Hoffman"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2411.07472 | [
"https://github.com/j-kruk/semitruths"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=dsK5EmmomU | @inproceedings{
liu2024assemblage,
title={Assemblage: Automatic Binary Dataset Construction for Machine Learning},
author={Chang Liu and Rebecca Saul and Yihao Sun and Edward Raff and Maya Fuchs and Townsend Southard Pantano and James Holt and Kristopher Micinski},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=dsK5EmmomU}
} | Binary code is pervasive, and binary analysis is a key task in reverse engineering, malware classification, and vulnerability discovery. Unfortunately, while there exist large corpuses of malicious binaries, obtaining high-quality corpuses of benign binaries for modern systems has proven challenging (e.g., due to licensing issues). Consequently, machine learning based pipelines for binary analysis utilize either costly commercial corpuses (e.g., VirusTotal) or open-source binaries (e.g., coreutils) available in limited quantities. To address these issues, we present Assemblage: an extensible cloud-based distributed system that crawls, configures, and builds Windows PE binaries to obtain high-quality binary corpuses suitable for training state-of-the-art models in binary analysis. We have run Assemblage on AWS over the past year, producing 890k Windows PE and 428k Linux ELF binaries across 29 configurations. Assemblage is designed to be both reproducible and extensible, enabling users to publish "recipes" for their datasets, and facilitating the extraction of a wide array of features. We evaluated Assemblage by using its data to train modern learning-based pipelines for compiler provenance and binary function similarity. Our results illustrate the practical need for robust corpuses of high-quality Windows PE binaries in training modern learning-based binary analyses. | Assemblage: Automatic Binary Dataset Construction for Machine Learning | [
"Chang Liu",
"Rebecca Saul",
"Yihao Sun",
"Edward Raff",
"Maya Fuchs",
"Townsend Southard Pantano",
"James Holt",
"Kristopher Micinski"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2405.03991 | [
"https://github.com/assemblage-dataset/assemblage"
] | https://huggingface.co/papers/2405.03991 | 2 | 0 | 0 | 8 | [] | [
"changliu8541/Assemblage_PE",
"changliu8541/Assemblage_vcpkgDLL",
"changliu8541/Assemblage_LinuxELF"
] | [] | [] | [
"changliu8541/Assemblage_PE",
"changliu8541/Assemblage_vcpkgDLL",
"changliu8541/Assemblage_LinuxELF"
] | [] | 1 |
null | https://openreview.net/forum?id=djGx0hucok | @inproceedings{
ye2024fedllmbench,
title={Fed{LLM}-Bench: Realistic Benchmarks for Federated Learning of Large Language Models},
author={Rui Ye and Rui Ge and Xinyu Zhu and Jingyi Chai and Yaxin Du and Yang Liu and Yanfeng Wang and Siheng Chen},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=djGx0hucok}
} | Federated learning has enabled multiple parties to collaboratively train large language models without directly sharing their data (FedLLM).
Following this training paradigm, the community has invested substantial effort in diverse aspects of FedLLM, including frameworks, performance, and privacy.
However, there are currently no realistic datasets or benchmarks for FedLLM; previous works all rely on artificially constructed datasets, failing to capture the properties of real-world scenarios.
Addressing this, we propose FedLLM-Bench, which involves 8 training methods, 4 training datasets, and 6 evaluation metrics, to offer a comprehensive testbed for the FedLLM community.
FedLLM-Bench encompasses three datasets (e.g., user-annotated multilingual dataset) for federated instruction tuning and one dataset (e.g., user-annotated preference dataset) for federated preference alignment, whose scale of client number ranges from 38 to 747.
Our datasets incorporate several representative diversities: language, quality, quantity, instruction, length, embedding, and preference, capturing properties in real-world scenarios.
Based on FedLLM-Bench, we conduct experiments on all datasets to benchmark existing FL methods and provide empirical insights (e.g., multilingual collaboration).
We believe that our FedLLM-Bench can benefit the FedLLM community by reducing required efforts, providing a practical testbed, and promoting fair comparisons.
Code and datasets are available at https://github.com/rui-ye/FedLLM-Bench. | FedLLM-Bench: Realistic Benchmarks for Federated Learning of Large Language Models | [
"Rui Ye",
"Rui Ge",
"Xinyu Zhu",
"Jingyi Chai",
"Yaxin Du",
"Yang Liu",
"Yanfeng Wang",
"Siheng Chen"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.04845 | [
"https://github.com/rui-ye/fedllm-bench"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=dVfNPSzpnv | @inproceedings{
ma2024imdlbenco,
title={{IMDL}-BenCo: A Comprehensive Benchmark and Codebase for Image Manipulation Detection \& Localization},
author={Xiaochen Ma and Xuekang Zhu and Lei Su and Bo Du and Zhuohang Jiang and Bingkui Tong and Zeyu Lei and Xinyu Yang and Chi-Man Pun and Jiancheng Lv and Ji-Zhe Zhou},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=dVfNPSzpnv}
} | A comprehensive benchmark is yet to be established in the Image Manipulation Detection \& Localization (IMDL) field. The absence of such a benchmark leads to insufficient and misleading model evaluations, severely undermining the development of this field. However, the scarcity of open-sourced baseline models and inconsistent training and evaluation protocols make conducting rigorous experiments and faithful comparisons among IMDL models challenging.
To address these challenges, we introduce IMDL-BenCo, the first comprehensive IMDL benchmark and modular codebase. IMDL-BenCo: i) decomposes the IMDL framework into standardized, reusable components and revises the model construction pipeline, improving coding efficiency and customization flexibility; ii) fully implements or incorporates training code for state-of-the-art models to establish a comprehensive IMDL benchmark; and iii) conducts deep analysis based on the established benchmark and codebase, offering new insights into IMDL model architecture, dataset characteristics, and evaluation standards.
Specifically, IMDL-BenCo includes common processing algorithms, 8 state-of-the-art IMDL models (1 of which is reproduced from scratch), 2 sets of standard training and evaluation protocols, 15 GPU-accelerated evaluation metrics, and 3 kinds of robustness evaluation. This benchmark and codebase represent a significant leap forward in calibrating the current progress in the IMDL field and inspiring future breakthroughs.
Code is available at: https://github.com/scu-zjz/IMDLBenCo | IMDL-BenCo: A Comprehensive Benchmark and Codebase for Image Manipulation Detection & Localization | [
"Xiaochen Ma",
"Xuekang Zhu",
"Lei Su",
"Bo Du",
"Zhuohang Jiang",
"Bingkui Tong",
"Zeyu Lei",
"Xinyu Yang",
"Chi-Man Pun",
"Jiancheng Lv",
"Ji-Zhe Zhou"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | [
"https://github.com/scu-zjz/imdlbenco"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=dF22s2GoX0 | @inproceedings{
hargrave2024epicare,
title={EpiCare: A Reinforcement Learning Benchmark for Dynamic Treatment Regimes},
author={Mason Hargrave and Alex Spaeth and Logan Grosenick},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=dF22s2GoX0}
} | Healthcare applications pose significant challenges to existing reinforcement learning (RL) methods due to implementation risks, low data availability, short treatment episodes, sparse rewards, partial observations, and heterogeneous treatment effects. Despite significant interest in using RL to generate dynamic treatment regimes for longitudinal patient care scenarios, no standardized benchmark has yet been developed.
To fill this need we introduce *Episodes of Care* (*EpiCare*), a benchmark designed to mimic the challenges associated with applying RL to longitudinal healthcare settings. We leverage this benchmark to test five state-of-the-art offline RL models as well as five common off-policy evaluation (OPE) techniques.
Our results suggest that while offline RL may be capable of improving upon existing standards of care given large data availability, its applicability does not appear to extend to the moderate to low data regimes typical of healthcare settings. Additionally, we demonstrate that several OPE techniques which have become standard in the medical RL literature fail to perform adequately on our benchmark. These results suggest that the performance of RL models in dynamic treatment regimes may be difficult to meaningfully evaluate using current OPE methods, indicating that RL for this application may still be in its early stages. We hope that these results, along with the benchmark itself, will facilitate the comparison of existing methods and inspire further research into techniques that increase the practical applicability of medical RL. | EpiCare: A Reinforcement Learning Benchmark for Dynamic Treatment Regimes | [
"Mason Hargrave",
"Alex Spaeth",
"Logan Grosenick"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=dBSoa8fpV7 | @inproceedings{
dou2024peace,
title={{PEACE}: A Dataset of Pharmaceutical Care for Cancer Pain Analgesia Evaluation and Medication Decision},
author={Yutao Dou and Huimin Yu and Wei Li and Jingyang Li and Fei Xia and Jian Xiao},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=dBSoa8fpV7}
} | Over half of cancer patients experience long-term pain management challenges. Recently, interest has grown in systems for cancer pain treatment effectiveness assessment (TEA) and medication recommendation (MR) to optimize pharmacological care. These systems aim to improve treatment effectiveness by recommending personalized medication plans based on comprehensive patient information. Despite progress, current systems lack multidisciplinary treatment (MDT) team assessments of treatment and the patient's perception of medication, crucial for effective cancer pain management. Moreover, managing cancer pain medication requires multiple adjustments to the treatment plan based on the patient's evolving condition, a detail often missing in existing datasets. To tackle these issues, we designed the PEACE dataset specifically for cancer pain medication research. It includes detailed pharmacological care records for over 38,000 patients, covering demographics, clinical examination, treatment outcomes, medication plans, and patient self-perceptions. Unlike existing datasets, PEACE records not only long-term and multiple follow-ups both inside and outside hospitals but also includes patients' self-assessments of medication effects and the impact on their lives. We conducted a proof-of-concept study with 13 machine learning algorithms on the PEACE dataset for the TEA (classification task) and MR (regression task). These experiments provide valuable insights into the potential of the PEACE dataset for advancing personalized cancer pain management. The dataset is accessible at: [https://github.com/YTYTYD/PEACE]. | PEACE: A Dataset of Pharmaceutical Care for Cancer Pain Analgesia Evaluation and Medication Decision | [
"Yutao Dou",
"Huimin Yu",
"Wei Li",
"Jingyang Li",
"Fei Xia",
"Jian Xiao"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=d0gMFgrYFB | @inproceedings{
lin2024fvel,
title={{FVEL}: Interactive Formal Verification Environment with Large Language Models via Theorem Proving},
author={Xiaohan Lin and Qingxing Cao and Yinya Huang and Haiming Wang and Jianqiao Lu and Zhengying Liu and Linqi Song and Xiaodan Liang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=d0gMFgrYFB}
} | Formal verification (FV) has witnessed growing significance with current emerging program synthesis by the evolving large language models (LLMs). However, current formal verification mainly resorts to symbolic verifiers or hand-craft rules, resulting in limitations for extensive and flexible verification. On the other hand, formal languages for automated theorem proving, such as Isabelle, as another line of rigorous verification, are maintained with comprehensive rules and theorems. In this paper, we propose FVEL, an interactive Formal Verification Environment with LLMs. Specifically, FVEL transforms a given code to be verified into Isabelle, and then conducts verification via neural automated theorem proving with an LLM. The joined paradigm leverages the rigorous yet abundant formulated and organized rules in Isabelle and is also convenient for introducing and adjusting cutting-edge LLMs. To achieve this goal, we extract a large-scale FVELER. The FVELER dataset includes code dependencies and verification processes that are formulated in Isabelle, containing 758 theories, 29,304 lemmas, and 201,498 proof steps in total with in-depth dependencies. We benchmark FVELER in the FVEL environment by first fine-tuning LLMs with FVELER and then evaluating them on Code2Inv and SV-COMP. The results show that FVEL with FVELER fine-tuned Llama3-8B solves 17.39% (69→81) more problems, and Mistral-7B 12% (75→84) more problems in SV-COMP. And the proportion of proof errors is reduced. Project page: https://fveler.github.io/. | FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving | [
"Xiaohan Lin",
"Qingxing Cao",
"Yinya Huang",
"Haiming Wang",
"Jianqiao Lu",
"Zhengying Liu",
"Linqi Song",
"Xiaodan Liang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.14408 | [
"https://github.com/fveler/fvel"
] | https://huggingface.co/papers/2406.14408 | 0 | 0 | 0 | 8 | [
"sunatte/txt2sql",
"MachoMaheen/devdock4bit"
] | [] | [
"smarttang/blingsec"
] | [
"sunatte/txt2sql",
"MachoMaheen/devdock4bit"
] | [] | [
"smarttang/blingsec"
] | 1 |
null | https://openreview.net/forum?id=cy8mq7QYae | @inproceedings{
wang2024charxiv,
title={CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal {LLM}s},
author={Zirui Wang and Mengzhou Xia and Luxi He and Howard Chen and Yitao Liu and Richard Zhu and Kaiqu Liang and Xindi Wu and Haotian Liu and Sadhika Malladi and Alexis Chevalier and Sanjeev Arora and Danqi Chen},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=cy8mq7QYae}
} | Chart understanding plays a pivotal role when applying Multimodal Large Language Models (MLLMs) to real-world tasks such as analyzing scientific papers or financial reports. However, existing datasets often focus on oversimplified and homogeneous charts with template-based questions, leading to an overly optimistic measure of progress. We demonstrate that although open-source models can appear to outperform strong proprietary models on these benchmarks, a simple stress test with slightly different charts or questions deteriorates performance by up to 34.5%. In this work, we propose CharXiv, a comprehensive evaluation suite involving 2,323 natural, challenging, and diverse charts from scientific papers. CharXiv includes two types of questions: 1) descriptive questions about examining basic chart elements and 2) reasoning questions that require synthesizing information across complex visual elements in the chart. To ensure quality, all charts and questions are handpicked, curated, and verified by human experts. Our results reveal a substantial, previously underestimated gap between the reasoning skills of the strongest proprietary model (i.e., GPT-4o), which achieves 47.1% accuracy, and the strongest open-source model (i.e., InternVL Chat V1.5), which achieves 29.2%. All models lag far behind human performance of 80.5%, underscoring weaknesses in the chart understanding capabilities of existing MLLMs. We hope that CharXiv facilitates future research on MLLM chart understanding by providing a more realistic and faithful measure of progress. Project website: https://charxiv.github.io/ | CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs | [
"Zirui Wang",
"Mengzhou Xia",
"Luxi He",
"Howard Chen",
"Yitao Liu",
"Richard Zhu",
"Kaiqu Liang",
"Xindi Wu",
"Haotian Liu",
"Sadhika Malladi",
"Alexis Chevalier",
"Sanjeev Arora",
"Danqi Chen"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.18521 | [
"https://github.com/princeton-nlp/CharXiv"
] | https://huggingface.co/papers/2406.18521 | 11 | 28 | 2 | 13 | [] | [
"princeton-nlp/CharXiv"
] | [] | [] | [
"princeton-nlp/CharXiv"
] | [] | 1 |
null | https://openreview.net/forum?id=cu8FfaYriU | @inproceedings{
zhao2024a,
title={A Taxonomy of Challenges to Curating Fair Datasets},
author={Dora Zhao and Morgan Scheuerman and Pooja Chitre and Jerone Andrews and Georgia Panagiotidou and Shawn Walker and Kathleen H. Pine and Alice Xiang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=cu8FfaYriU}
} | Despite extensive efforts to create fairer machine learning (ML) datasets, there remains a limited understanding of the practical aspects of dataset curation. Drawing from interviews with 30 ML dataset curators, we present a comprehensive taxonomy of the challenges and trade-offs encountered throughout the dataset curation lifecycle. Our findings underscore overarching issues within the broader fairness landscape that impact data curation. We conclude with recommendations aimed at fostering systemic changes to better facilitate fair dataset curation practices. | A Taxonomy of Challenges to Curating Fair Datasets | [
"Dora Zhao",
"Morgan Scheuerman",
"Pooja Chitre",
"Jerone Andrews",
"Georgia Panagiotidou",
"Shawn Walker",
"Kathleen H. Pine",
"Alice Xiang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2406.06407 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=cnjmZqVpm9 | @inproceedings{
arodi2024cableinspectad,
title={CableInspect-{AD}: An Expert-Annotated Anomaly Detection Dataset},
author={Akshatha Arodi and Margaux Luck and Jean-Luc Bedwani and Aldo Zaimi and Ge Li and Nicolas Pouliot and Julien Beaudry and Ga{\'e}tan Marceau Caron},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=cnjmZqVpm9}
} | Machine learning models are increasingly being deployed in real-world contexts. However, systematic studies on their transferability to specific and critical applications are underrepresented in the research literature. An important example is visual anomaly detection (VAD) for robotic power line inspection. While existing VAD methods perform well in controlled environments, real-world scenarios present diverse and unexpected anomalies that current datasets fail to capture. To address this gap, we introduce CableInspect-AD, a high-quality, publicly available dataset created and annotated by domain experts from Hydro-Québec, a Canadian public utility. This dataset includes high-resolution images with challenging real-world anomalies, covering defects with varying severity levels. To address the challenges of collecting diverse anomalous and nominal examples for setting a detection threshold, we propose an enhancement to the celebrated PatchCore algorithm. This enhancement enables its use in scenarios with limited labeled data. We also present a comprehensive evaluation protocol based on cross-validation to assess models' performances. We evaluate our Enhanced-PatchCore for few-shot and many-shot detection, and Vision-Language Models for zero-shot detection. While promising, these models struggle to detect all anomalies, highlighting the dataset's value as a challenging benchmark for the broader research community. Project page: https://mila-iqia.github.io/cableinspect-ad/. | CableInspect-AD: An Expert-Annotated Anomaly Detection Dataset | [
"Akshatha Arodi",
"Margaux Luck",
"Jean-Luc Bedwani",
"Aldo Zaimi",
"Ge Li",
"Nicolas Pouliot",
"Julien Beaudry",
"Gaétan Marceau Caron"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2409.20353 | [
"https://github.com/mila-iqia/cableinspect-ad-code"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=catfRXDWcb | @inproceedings{
wang2024humanvid,
title={HumanVid: Demystifying Training Data for Camera-controllable Human Image Animation},
author={Zhenzhi Wang and Yixuan Li and Yanhong Zeng and Youqing Fang and Yuwei Guo and Wenran Liu and Jing Tan and Kai Chen and Tianfan Xue and Bo Dai and Dahua Lin},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=catfRXDWcb}
} | Human image animation involves generating videos from a character photo, allowing user control and unlocking the potential for video and movie production. While recent approaches yield impressive results using high-quality training data, the inaccessibility of these datasets hampers fair and transparent benchmarking. Moreover, these approaches prioritize 2D human motion and overlook the significance of camera motions in videos, leading to limited control and unstable video generation. To demystify the training data, we present HumanVid, the first large-scale high-quality dataset tailored for human image animation, which combines crafted real-world and synthetic data. For the real-world data, we compile a vast collection of real-world videos from the internet. We developed and applied careful filtering rules to ensure video quality, resulting in a curated collection of 20K high-resolution (1080P) human-centric videos. Human and camera motion annotation is accomplished using a 2D pose estimator and a SLAM-based method. To expand our synthetic dataset, we collected 10K 3D avatar assets and leveraged existing assets of body shapes, skin textures and clothings. Notably, we introduce a rule-based camera trajectory generation method, enabling the synthetic pipeline to incorporate diverse and precise camera motion annotation, which can rarely be found in real-world data. To verify the effectiveness of HumanVid, we establish a baseline model named **CamAnimate**, short for Camera-controllable Human Animation, that considers both human and camera motions as conditions. Through extensive experimentation, we demonstrate that such simple baseline training on our HumanVid achieves state-of-the-art performance in controlling both human pose and camera motions, setting a new benchmark. Demo, data and code could be found in the project website: https://humanvid.github.io/. | HumanVid: Demystifying Training Data for Camera-controllable Human Image Animation | [
"Zhenzhi Wang",
"Yixuan Li",
"Yanhong Zeng",
"Youqing Fang",
"Yuwei Guo",
"Wenran Liu",
"Jing Tan",
"Kai Chen",
"Tianfan Xue",
"Bo Dai",
"Dahua Lin"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.17438 | [
"https://github.com/zhenzhiwang/humanvid"
] | https://huggingface.co/papers/2407.17438 | 6 | 23 | 3 | 11 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=cX57Pbw8vS | @inproceedings{
geng2024benchmarking,
title={Benchmarking PtO and PnO Methods in the Predictive Combinatorial Optimization Regime},
author={Haoyu Geng and Hang Ruan and Runzhong Wang and Yang Li and YANG WANG and Lei CHEN and Junchi Yan},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=cX57Pbw8vS}
} | Predictive combinatorial optimization, where the parameters of combinatorial optimization (CO) are unknown at decision-making time, precisely models many real-world applications, including energy cost-aware scheduling and budget allocation in advertising. Tackling such a problem usually involves a prediction model and a CO solver. These two modules are integrated into the predictive CO pipeline following one of two design principles: "Predict-then-Optimize (PtO)", which learns predictions by supervised training and subsequently solves CO using the predicted coefficients, and "Predict-and-Optimize (PnO)", which directly optimizes towards the ultimate decision quality and is claimed to yield better decisions than traditional PtO approaches. However, there is no systematic benchmark of both approaches that covers the specific design choices at the module level, nor an evaluation dataset that covers representative real-world scenarios. To this end, we develop a modular framework to benchmark 11 existing PtO/PnO methods on 8 problems, including a new industrial dataset for combinatorial advertising that will be released. Our study shows that PnO approaches are better than PtO on 7 out of 8 benchmarks, but there is no silver bullet found for the specific design choices of PnO. A comprehensive categorization of current approaches and an integration of typical scenarios are provided under a unified benchmark. Therefore, this paper can serve as a comprehensive benchmark for future PnO approach development and also offer fast prototyping for application-focused development. The code is available at \url{https://github.com/Thinklab-SJTU/PredictiveCO-Benchmark}. | Benchmarking PtO and PnO Methods in the Predictive Combinatorial Optimization Regime | [
"Haoyu Geng",
"Hang Ruan",
"Runzhong Wang",
"Yang Li",
"YANG WANG",
"Lei CHEN",
"Junchi Yan"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=cR3T1ZYN8I | @inproceedings{
foteinopoulou2024a,
title={A Hitchhiker's Guide to Fine-Grained Face Forgery Detection Using Common Sense Reasoning},
author={Niki Foteinopoulou and Enjie Ghorbel and Djamila Aouada},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=cR3T1ZYN8I}
} | Explainability in artificial intelligence is crucial for restoring trust, particularly in areas like face forgery detection, where viewers often struggle to distinguish between real and fabricated content. Vision and Large Language Models (VLLM) bridge computer vision and natural language, offering numerous applications driven by strong common-sense reasoning. Despite their success in various tasks, the potential of vision and language remains underexplored in face forgery detection, where they hold promise for enhancing explainability by leveraging the intrinsic reasoning capabilities of language to analyse fine-grained manipulation areas.
For that reason, few works have recently started to frame the problem of deepfake detection as a Visual Question Answering (VQA) task, nevertheless omitting the realistic and informative open-ended multi-label setting. With the rapid advances in the field of VLLM, an exponential rise of investigations in that direction is expected.
As such, there is a need for a clear experimental methodology that converts face forgery detection to a Visual Question Answering (VQA) task to systematically and fairly evaluate different VLLM architectures. Previous evaluation studies in deepfake detection have mostly focused on the simpler binary task, overlooking evaluation protocols for multi-label fine-grained detection and text-generative models. We propose a multi-staged approach that diverges from the traditional binary evaluation protocol and conducts a comprehensive evaluation study to compare the capabilities of several VLLMs in this context.
In the first stage, we assess the models' performance on the binary task and their sensitivity to given instructions using several prompts. In the second stage, we delve deeper into fine-grained detection by identifying areas of manipulation in a multiple-choice VQA setting. In the third stage, we convert the fine-grained detection to an open-ended question and compare several matching strategies for the multi-label classification task. Finally, we qualitatively evaluate the fine-grained responses of the VLLMs included in the benchmark.
We apply our benchmark to several popular models, providing a detailed comparison of binary, multiple-choice, and open-ended VQA evaluation across seven datasets. \url{https://nickyfot.github.io/hitchhickersguide.github.io/} | A Hitchhiker's Guide to Fine-Grained Face Forgery Detection Using Common Sense Reasoning | [
"Niki Foteinopoulou",
"Enjie Ghorbel",
"Djamila Aouada"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=cLga8GStdk | @inproceedings{
bean2024lingoly,
title={{LINGOLY}: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low Resource and Extinct Languages},
author={Andrew Michael Bean and Simeon Hellsten and Harry Mayne and Jabez Magomere and Ethan A Chi and Ryan Andrew Chi and Scott A. Hale and Hannah Rose Kirk},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=cLga8GStdk}
} | In this paper, we present the LingOly benchmark, a novel benchmark for advanced reasoning abilities in large language models. Using challenging Linguistic Olympiad puzzles, we evaluate (i) capabilities for in-context identification and generalisation of linguistic patterns in very low-resource or extinct languages, and (ii) abilities to follow complex task instructions. The LingOly benchmark covers more than 90 mostly low-resource languages, minimising issues of data contamination, and contains 1,133 problems across 6 formats and 5 levels of human difficulty. We assess performance with both direct accuracy and comparison to a no-context baseline to penalise memorisation. Scores from 11 state-of-the-art LLMs demonstrate the benchmark to be challenging, and models perform poorly on the higher difficulty problems. On harder problems, even the top model only achieved 38.7% accuracy, a 24.7% improvement over the no-context baseline. Large closed models typically outperform open models, and in general, the higher resource the language, the better the scores. These results indicate, in absence of memorisation, true multi-step out-of-domain reasoning remains a challenge for current language models. | LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low Resource and Extinct Languages | [
"Andrew Michael Bean",
"Simeon Hellsten",
"Harry Mayne",
"Jabez Magomere",
"Ethan A Chi",
"Ryan Andrew Chi",
"Scott A. Hale",
"Hannah Rose Kirk"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | [
"https://github.com/am-bean/lingOly"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=cLS4fLIA5P | @inproceedings{
zhang2024webuotm,
title={Web{UOT}-1M: Advancing Deep Underwater Object Tracking with A Million-Scale Benchmark},
author={Chunhui Zhang and Li Liu and Guanjie Huang and Hao Wen and XI ZHOU and Yanfeng Wang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=cLS4fLIA5P}
} | Underwater Object Tracking (UOT) is essential for identifying and tracking submerged objects in underwater videos, but existing datasets are limited in scale and in the diversity of target categories and scenarios covered, impeding the development of advanced tracking algorithms. To bridge this gap, we take the first step and introduce WebUOT-1M, i.e., the largest public UOT benchmark to date, sourced from complex and realistic underwater environments. It comprises 1.1 million frames across 1,500 video clips filtered from 408 target categories, largely surpassing previous UOT datasets, e.g., UVOT400. Through meticulous manual annotation and verification, we provide high-quality bounding boxes for underwater targets. Additionally, WebUOT-1M includes language prompts for video sequences, expanding its application areas, e.g., underwater vision-language tracking. Given that most existing trackers are designed for open-air conditions and perform poorly in underwater environments due to domain gaps, we propose a novel framework that uses omni-knowledge distillation to train a student Transformer model effectively. To the best of our knowledge, this framework is the first to effectively transfer open-air domain knowledge to the UOT model through knowledge distillation, as demonstrated by results on both existing UOT datasets and the newly proposed WebUOT-1M. We have thoroughly tested WebUOT-1M with 30 deep trackers, showcasing its potential as a benchmark for future UOT research. The complete dataset, along with code and tracking results, is publicly accessible at https://github.com/983632847/Awesome-Multimodal-Object-Tracking. | WebUOT-1M: Advancing Deep Underwater Object Tracking with A Million-Scale Benchmark | [
"Chunhui Zhang",
"Li Liu",
"Guanjie Huang",
"Hao Wen",
"XI ZHOU",
"Yanfeng Wang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2405.19818 | [
"https://github.com/983632847/awesome-multimodal-object-tracking"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=cFyagd2Yh4 | @inproceedings{
han2024medsafetybench,
title={MedSafetyBench: Evaluating and Improving the Medical Safety of Large Language Models},
author={Tessa Han and Aounon Kumar and Chirag Agarwal and Himabindu Lakkaraju},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=cFyagd2Yh4}
} | As large language models (LLMs) develop increasingly sophisticated capabilities and find applications in medical settings, it becomes important to assess their medical safety due to their far-reaching implications for personal and public health, patient safety, and human rights. However, there is little to no understanding of the notion of medical safety in the context of LLMs, let alone how to evaluate and improve it. To address this gap, we first define the notion of medical safety in LLMs based on the Principles of Medical Ethics set forth by the American Medical Association. We then leverage this understanding to introduce MedSafetyBench, the first benchmark dataset designed to measure the medical safety of LLMs. We demonstrate the utility of MedSafetyBench by using it to evaluate and improve the medical safety of LLMs. Our results show that publicly-available medical LLMs do not meet standards of medical safety and that fine-tuning them using MedSafetyBench improves their medical safety while preserving their medical performance. By introducing this new benchmark dataset, our work enables a systematic study of the state of medical safety in LLMs and motivates future work in this area, paving the way to mitigate the safety risks of LLMs in medicine. The benchmark dataset and code are available at https://github.com/AI4LIFE-GROUP/med-safety-bench. | MedSafetyBench: Evaluating and Improving the Medical Safety of Large Language Models | [
"Tessa Han",
"Aounon Kumar",
"Chirag Agarwal",
"Himabindu Lakkaraju"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2403.03744 | [
"https://github.com/ai4life-group/med-safety-bench"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=cDYqckEt6d | @inproceedings{
jansen2024discoveryworld,
title={DiscoveryWorld: A Virtual Environment for Developing and Evaluating Automated Scientific Discovery Agents},
author={Peter Jansen and Marc-Alexandre C{\^o}t{\'e} and Tushar Khot and Erin Bransom and Bhavana Dalvi Mishra and Bodhisattwa Prasad Majumder and Oyvind Tafjord and Peter Clark},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=cDYqckEt6d}
} | Automated scientific discovery promises to accelerate progress across scientific domains, but evaluating an agent's capacity for end-to-end scientific reasoning is challenging as running real-world experiments is often prohibitively expensive or infeasible. In this work we introduce DiscoveryWorld, a virtual environment that enables benchmarking an agent's ability to perform complete cycles of novel scientific discovery in an inexpensive, simulated, multi-modal, long-horizon, and fictional setting.
DiscoveryWorld consists of 24 scientific tasks across three levels of difficulty, each with parametric variations that provide new discoveries for agents to make across runs. Tasks require an agent to form hypotheses, design and run experiments, analyze results, and act on conclusions. Task difficulties are normed to range from straightforward to challenging for human scientists with advanced degrees. DiscoveryWorld further provides three automatic metrics for evaluating performance, including: (1) binary task completion, (2) fine-grained report cards detailing procedural scoring of task-relevant actions, and (3) the accuracy of discovered explanatory knowledge.
While simulated environments such as DiscoveryWorld are low-fidelity compared to the real world, we find that strong baseline agents struggle on most DiscoveryWorld tasks, highlighting the utility of using simulated environments as proxy tasks for near-term development of scientific discovery competency in agents. | DiscoveryWorld: A Virtual Environment for Developing and Evaluating Automated Scientific Discovery Agents | [
"Peter Jansen",
"Marc-Alexandre Côté",
"Tushar Khot",
"Erin Bransom",
"Bhavana Dalvi Mishra",
"Bodhisattwa Prasad Majumder",
"Oyvind Tafjord",
"Peter Clark"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | 2406.06769 | [
"https://github.com/allenai/discoveryworld"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=c7SApXZz4b | @inproceedings{
zhang2024stronger,
title={Stronger Than You Think: Benchmarking Weak Supervision on Realistic Tasks},
author={Tianyi Zhang and Linrong Cai and Jeffrey Li and Nicholas Roberts and Neel Guha and Frederic Sala},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=c7SApXZz4b}
} | Weak supervision (WS) is a popular approach for label-efficient learning, leveraging diverse sources of noisy but inexpensive *weak labels* to automatically annotate training data. Despite heavy usage, the value of WS is challenging to benchmark due to its complexity: the knobs involved include data sources, labeling functions (LFs), aggregation techniques, called label models (LMs), and end model pipelines. Existing evaluation suites tend to be limited, focusing on particular components or specialized use cases, or relying on simplistic benchmark datasets with poor LFs, producing insights that may not generalize to real-world settings. We address these by introducing a new benchmark, BoxWRENCH, designed to more accurately reflect *real-world usage of WS.* This benchmark features (1) higher class cardinality and imbalance, (2) substantial domain expertise requirements, and (3) linguistic variations found in parallel corpora. For all tasks, LFs are written using a careful procedure aimed at mimicking real-world settings. In contrast to existing WS benchmarks, we show that supervised learning requires substantial amounts (1000+) of labeled examples to match WS in many settings. | Stronger Than You Think: Benchmarking Weak Supervision on Realistic Tasks | [
"Tianyi Zhang",
"Linrong Cai",
"Jeffrey Li",
"Nicholas Roberts",
"Neel Guha",
"Frederic Sala"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=c4NnhBi4oM | @inproceedings{
elrefaie2024drivaernet,
title={DrivAerNet++: A Large-Scale Multimodal Car Dataset with Computational Fluid Dynamics Simulations and Deep Learning Benchmarks},
author={Mohamed Elrefaie and Florin Morar and Angela Dai and Faez Ahmed},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=c4NnhBi4oM}
} | We present DrivAerNet++, the largest and most comprehensive multimodal dataset for aerodynamic car design. DrivAerNet++ comprises 8,000 diverse car designs modeled with high-fidelity computational fluid dynamics (CFD) simulations. The dataset includes diverse car configurations such as fastback, notchback, and estateback, with different underbody and wheel designs to represent both internal combustion engines and electric vehicles. Each entry in the dataset features detailed 3D meshes, parametric models, aerodynamic coefficients, and extensive flow and surface field data, along with segmented parts for car classification and point cloud data. This dataset supports a wide array of machine learning applications including data-driven design optimization, generative modeling, surrogate model training, CFD simulation acceleration, and geometric classification. With more than 39 TB of publicly available engineering data, DrivAerNet++ fills a significant gap in available resources, providing high-quality, diverse data to enhance model training, promote generalization, and accelerate automotive design processes. Along with rigorous dataset validation, we also provide ML benchmarking results on the task of aerodynamic drag prediction, showcasing the breadth of applications supported by our dataset. This dataset is set to significantly impact automotive design and broader engineering disciplines by fostering innovation and improving the fidelity of aerodynamic evaluations. Dataset and code available at: https://github.com/Mohamedelrefaie/DrivAerNet | DrivAerNet++: A Large-Scale Multimodal Car Dataset with Computational Fluid Dynamics Simulations and Deep Learning Benchmarks | [
"Mohamed Elrefaie",
"Florin Morar",
"Angela Dai",
"Faez Ahmed"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.09624 | [
"https://github.com/mohamedelrefaie/drivaernet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=c4JE1gemWc | @inproceedings{
mou2024sgbench,
title={{SG}-Bench: Evaluating {LLM} Safety Generalization Across Diverse Tasks and Prompt Types},
author={Yutao Mou and Shikun Zhang and Wei Ye},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=c4JE1gemWc}
} | Ensuring the safety of large language model (LLM) applications is essential for developing trustworthy artificial intelligence. Current LLM safety benchmarks have two limitations. First, they focus solely on either discriminative or generative evaluation paradigms while ignoring their interconnection. Second, they rely on standardized inputs, overlooking the effects of widespread prompting techniques, such as system prompts, few-shot demonstrations, and chain-of-thought prompting. To overcome these issues, we developed SG-Bench, a novel benchmark to assess the generalization of LLM safety across various tasks and prompt types. This benchmark integrates both generative and discriminative evaluation tasks and includes extended data to examine the impact of prompt engineering and jailbreak on LLM safety. Our assessment of 3 advanced proprietary LLMs and 10 open-source LLMs with the benchmark reveals that most LLMs perform worse on discriminative tasks than generative ones, and are highly susceptible to prompts, indicating poor generalization in safety alignment. We also explain these findings quantitatively and qualitatively to provide insights for future research. | SG-Bench: Evaluating LLM Safety Generalization Across Diverse Tasks and Prompt Types | [
"Yutao Mou",
"Shikun Zhang",
"Wei Ye"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2410.21965 | [
"https://github.com/MurrayTom/SG-Bench"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=brxBxj4Dv3 | @inproceedings{
wang2024noisygl,
title={Noisy{GL}: A Comprehensive Benchmark for Graph Neural Networks under Label Noise},
author={Zhonghao Wang and Danyu Sun and Sheng Zhou and Haobo Wang and Jiapei Fan and Longtao Huang and Jiajun Bu},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=brxBxj4Dv3}
} | Graph Neural Networks (GNNs) exhibit strong potential in node classification task through a message-passing mechanism. However, their performance often hinges on high-quality node labels, which are challenging to obtain in real-world scenarios due to unreliable sources or adversarial attacks. Consequently, label noise is common in real-world graph data, negatively impacting GNNs by propagating incorrect information during training. To address this issue, the study of Graph Neural Networks under Label Noise (GLN) has recently gained traction. However, due to variations in dataset selection, data splitting, and preprocessing techniques, the community currently lacks a comprehensive benchmark, which impedes deeper understanding and further development of GLN. To fill this gap, we introduce NoisyGL in this paper, the first comprehensive benchmark for graph neural networks under label noise. NoisyGL enables fair comparisons and detailed analyses of GLN methods on noisy labeled graph data across various datasets, with unified experimental settings and interface. Our benchmark has uncovered several important insights that were missed in previous research, and we believe these findings will be highly beneficial for future studies. We hope our open-source benchmark library will foster further advancements in this field. The code of the benchmark can be found in https://github.com/eaglelab-zju/NoisyGL. | NoisyGL: A Comprehensive Benchmark for Graph Neural Networks under Label Noise | [
"Zhonghao Wang",
"Danyu Sun",
"Sheng Zhou",
"Haobo Wang",
"Jiapei Fan",
"Longtao Huang",
"Jiajun Bu"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.04299 | [
"https://github.com/eaglelab-zju/noisygl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=bepcG3itGX | @inproceedings{
zeng2024libamm,
title={Lib{AMM}: Empirical Insights into Approximate Computing for Accelerating Matrix Multiplication},
author={Xianzhi Zeng and Wenchao Jiang and Shuhao Zhang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=bepcG3itGX}
} | Matrix multiplication (MM) is pivotal in fields from deep learning to scientific computing, driving the quest for improved computational efficiency. Accelerating MM encompasses strategies like complexity reduction, parallel and distributed computing, hardware acceleration, and approximate computing techniques, namely AMM algorithms. Amidst growing concerns over the resource demands of large language models (LLMs), AMM has garnered renewed focus. However, understanding the nuances that govern AMM’s effectiveness remains incomplete. This study delves into AMM by examining algorithmic strategies, operational specifics, dataset characteristics, and their application in real-world tasks. Through comprehensive testing across diverse datasets and scenarios, we analyze how these factors affect AMM’s performance, uncovering that the selection of AMM approaches significantly influences the balance between efficiency and accuracy, with factors like memory access playing a pivotal role. Additionally, dataset attributes are shown to be vital for the success of AMM in applications. Our results advocate for tailored algorithmic approaches and careful strategy selection to enhance AMM’s effectiveness. To aid in the practical application and ongoing research of AMM, we introduce LibAMM —a toolkit offering a wide range of AMM algorithms, benchmarks, and tools for experiment management. LibAMM aims to facilitate research and application in AMM, guiding future developments towards more adaptive and context-aware computational solutions. | LibAMM: Empirical Insights into Approximate Computing for Accelerating Matrix Multiplication | [
"Xianzhi Zeng",
"Wenchao Jiang",
"Shuhao Zhang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=bAxUA5r3Ss | @inproceedings{
shen2024taskbench,
title={TaskBench: Benchmarking Large Language Models for Task Automation},
author={Yongliang Shen and Kaitao Song and Xu Tan and Wenqi Zhang and Kan Ren and Siyu Yuan and Weiming Lu and Dongsheng Li and Yueting Zhuang},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=bAxUA5r3Ss}
} | In recent years, the remarkable progress of large language models (LLMs) has sparked interest in task automation, which involves decomposing complex tasks described by user instructions into sub-tasks and invoking external tools to execute them, playing a central role in autonomous agents. However, there is a lack of systematic and standardized benchmarks to promote the development of LLMs in task automation. To address this, we introduce TaskBench, a comprehensive framework to evaluate the capability of LLMs in task automation. Specifically, task automation can be divided into three critical stages: task decomposition, tool selection, and parameter prediction. To tackle the complexities inherent in these stages, we introduce the concept of Tool Graph to represent decomposed tasks and adopt a back-instruct method to generate high-quality user instructions. We propose TaskEval, a multi-faceted evaluation methodology that assesses LLM performance across these three stages. Our approach combines automated construction with rigorous human verification, ensuring high consistency with human evaluation. Experimental results demonstrate that TaskBench effectively reflects the capabilities of various LLMs in task automation. It provides insights into model performance across different task complexities and domains, pushing the boundaries of what current models can achieve. TaskBench offers a scalable, adaptable, and reliable benchmark for advancing LLM-based autonomous agents. | TaskBench: Benchmarking Large Language Models for Task Automation | [
"Yongliang Shen",
"Kaitao Song",
"Xu Tan",
"Wenqi Zhang",
"Kan Ren",
"Siyu Yuan",
"Weiming Lu",
"Dongsheng Li",
"Yueting Zhuang"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2311.18760 | [
"https://github.com/microsoft/JARVIS"
] | https://huggingface.co/papers/2311.18760 | 2 | 2 | 0 | 9 | [] | [
"microsoft/Taskbench"
] | [] | [] | [
"microsoft/Taskbench"
] | [] | 1 |
null | https://openreview.net/forum?id=b6IBmU1uzw | @inproceedings{
xia2024cares,
title={{CARES}: A Comprehensive Benchmark of Trustworthiness in Medical Vision Language Models},
author={Peng Xia and Ze Chen and Juanxi Tian and Gong Yangrui and Ruibo Hou and Yue Xu and Zhenbang Wu and Zhiyuan Fan and Yiyang Zhou and Kangyu Zhu and Wenhao Zheng and Zhaoyang Wang and Xiao Wang and Xuchao Zhang and Chetan Bansal and Marc Niethammer and Junzhou Huang and Hongtu Zhu and Yun Li and Jimeng Sun and Zongyuan Ge and Gang Li and James Zou and Huaxiu Yao},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=b6IBmU1uzw}
} | Artificial intelligence has significantly impacted medical applications, particularly with the advent of Medical Large Vision Language Models (Med-LVLMs), sparking optimism for the future of automated and personalized healthcare. However, the trustworthiness of Med-LVLMs remains unverified, posing significant risks for future model deployment. In this paper, we introduce CARES and aim to comprehensively evaluate the Trustworthiness of Med-LVLMs across the medical domain. We assess the trustworthiness of Med-LVLMs across five dimensions, including trustfulness, fairness, safety, privacy, and robustness. CARES comprises about 41K question-answer pairs in both closed and open-ended formats, covering 16 medical image modalities and 27 anatomical regions. Our analysis reveals that the models consistently exhibit concerns regarding trustworthiness, often displaying factual inaccuracies and failing to maintain fairness across different demographic groups. Furthermore, they are vulnerable to attacks and demonstrate a lack of privacy awareness. We publicly release our benchmark and code in https://github.com/richard-peng-xia/CARES. | CARES: A Comprehensive Benchmark of Trustworthiness in Medical Vision Language Models | [
"Peng Xia",
"Ze Chen",
"Juanxi Tian",
"Gong Yangrui",
"Ruibo Hou",
"Yue Xu",
"Zhenbang Wu",
"Zhiyuan Fan",
"Yiyang Zhou",
"Kangyu Zhu",
"Wenhao Zheng",
"Zhaoyang Wang",
"Xiao Wang",
"Xuchao Zhang",
"Chetan Bansal",
"Marc Niethammer",
"Junzhou Huang",
"Hongtu Zhu",
"Yun Li",
"Jimeng Sun",
"Zongyuan Ge",
"Gang Li",
"James Zou",
"Huaxiu Yao"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.06007 | [
"https://github.com/richard-peng-xia/cares"
] | https://huggingface.co/papers/2406.06007 | 2 | 2 | 0 | 24 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=b5n3lKRLzk | @inproceedings{
salter2024emgpose,
title={emg2pose: A Large and Diverse Benchmark for Surface Electromyographic Hand Pose Estimation},
author={Sasha Salter and Richard Warren and Collin Schlager and Adrian Spurr and Shangchen Han and Rohin Bhasin and Yujun Cai and Peter Walkington and Anuoluwapo Bolarinwa and Robert Wang and Nathan Danielson and Josh Merel and Eftychios A. Pnevmatikakis and Jesse D Marshall},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=b5n3lKRLzk}
} | Hands are the primary means through which humans interact with the world. Reliable and always-available hand pose inference could yield new and intuitive control schemes for human-computer interactions, particularly in virtual and augmented reality. Computer vision is effective but requires one or multiple cameras and can struggle with occlusions, limited field of view, and poor lighting. Wearable wrist-based surface electromyography (sEMG) presents a promising alternative as an always-available modality sensing muscle activities that drive hand motion. However, sEMG signals are strongly dependent on user anatomy and sensor placement; existing sEMG models have thus required hundreds of users and device placements to effectively generalize for tasks other than pose inference. To facilitate progress on sEMG pose inference, we introduce the emg2pose benchmark, which is to our knowledge the first publicly available dataset of high-quality hand pose labels and wrist sEMG recordings. emg2pose contains 2kHz, 16 channel sEMG and pose labels from a 26-camera motion capture rig for 193 users, 370 hours, and 29 stages with diverse gestures - a scale comparable to vision-based hand pose datasets. We provide competitive baselines and challenging tasks evaluating real-world generalization scenarios: held-out users, sensor placements, and stages. This benchmark provides the machine learning community a platform for exploring complex generalization problems, holding potential to significantly enhance the development of sEMG-based human-computer interactions. | emg2pose: A Large and Diverse Benchmark for Surface Electromyographic Hand Pose Estimation | [
"Sasha Salter",
"Richard Warren",
"Collin Schlager",
"Adrian Spurr",
"Shangchen Han",
"Rohin Bhasin",
"Yujun Cai",
"Peter Walkington",
"Anuoluwapo Bolarinwa",
"Robert Wang",
"Nathan Danielson",
"Josh Merel",
"Eftychios A. Pnevmatikakis",
"Jesse D Marshall"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=b57BKV8qKQ | @inproceedings{
ma2024implicit,
title={Implicit Zoo: A Large-Scale Dataset of Neural Implicit Functions for 2D Images and 3D Scenes},
author={Qi Ma and Danda Pani Paudel and Ender Konukoglu and Luc Van Gool},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=b57BKV8qKQ}
} | Neural implicit functions have demonstrated significant importance in various areas such as computer vision and graphics. Their advantages include the ability to represent complex shapes and scenes with high fidelity, smooth interpolation capabilities, and continuous representations. Despite these benefits, the development and analysis of implicit functions have been limited by the lack of comprehensive datasets and the substantial computational resources required for their implementation and evaluation. To address these challenges, we introduce "Implicit-Zoo": a large-scale dataset, requiring thousands of GPU training days, designed to facilitate research and development in this field. Our dataset includes diverse 2D and 3D scenes, such as CIFAR-10, ImageNet-1K, and Cityscapes for 2D image tasks, and the OmniObject3D dataset for 3D vision tasks. We ensure high quality through strict checks, refining or filtering out low-quality data. Using Implicit-Zoo, we showcase two immediate benefits, as it enables us to: (1) learn token locations for transformer models; and (2) directly regress the 3D camera poses of 2D images with respect to NeRF models. This in turn leads to improved performance in all three tasks of image classification, semantic segmentation, and 3D pose regression -- thereby unlocking new avenues for research. | Implicit Zoo: A Large-Scale Dataset of Neural Implicit Functions for 2D Images and 3D Scenes | [
"Qi Ma",
"Danda Pani Paudel",
"Ender Konukoglu",
"Luc Van Gool"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | [
"https://github.com/qimaqi/Implicit-Zoo"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
null | https://openreview.net/forum?id=ayF8bEKYQy | @inproceedings{
huang2024olympicarena,
title={OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent {AI}},
author={Zhen Huang and Zengzhi Wang and Shijie Xia and Xuefeng Li and Haoyang Zou and Ruijie Xu and Run-Ze Fan and Lyumanshan Ye and Ethan Chern and Yixin Ye and Yikai Zhang and Yuqing Yang and Ting Wu and Binjie Wang and Shichao Sun and Yang Xiao and Yiyuan Li and Fan Zhou and Steffi Chern and Yiwei Qin and Yan Ma and Jiadi Su and Yixiu Liu and Yuxiang Zheng and Shaoting Zhang and Dahua Lin and Yu Qiao and Pengfei Liu},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=ayF8bEKYQy}
} | The evolution of Artificial Intelligence (AI) has been significantly accelerated by advancements in Large Language Models (LLMs) and Large Multimodal Models (LMMs), gradually showcasing potential cognitive reasoning abilities in problem-solving and scientific discovery (i.e., AI4Science) once exclusive to human intellect. To comprehensively evaluate current models' performance in cognitive reasoning abilities, we introduce OlympicArena, which includes 11,163 bilingual problems across both text-only and interleaved text-image modalities. These challenges encompass a wide range of disciplines spanning seven fields and 62 international Olympic competitions, rigorously examined for data leakage. We argue that the challenges in Olympic competition problems are ideal for evaluating AI's cognitive reasoning due to their complexity and interdisciplinary nature, which are essential for tackling complex scientific challenges and facilitating discoveries. Beyond evaluating performance across various disciplines using answer-only criteria, we conduct detailed experiments and analyses from multiple perspectives. We delve into the models' cognitive reasoning abilities, their performance across different modalities, and their outcomes in process-level evaluations, which are vital for tasks requiring complex reasoning with lengthy solutions. Our extensive evaluations reveal that even advanced models like GPT-4o only achieve a 39.97\% overall accuracy (28.67\% for mathematics and 29.71\% for physics), illustrating current AI limitations in complex reasoning and multimodal integration. Through the OlympicArena, we aim to advance AI towards superintelligence, equipping it to address more complex challenges in science and beyond. We also provide a comprehensive set of resources to support AI research, including a benchmark dataset, an open-source annotation platform, a detailed evaluation tool, and a leaderboard with automatic submission features. | OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI | [
"Zhen Huang",
"Zengzhi Wang",
"Shijie Xia",
"Xuefeng Li",
"Haoyang Zou",
"Ruijie Xu",
"Run-Ze Fan",
"Lyumanshan Ye",
"Ethan Chern",
"Yixin Ye",
"Yikai Zhang",
"Yuqing Yang",
"Ting Wu",
"Binjie Wang",
"Shichao Sun",
"Yang Xiao",
"Yiyuan Li",
"Fan Zhou",
"Steffi Chern",
"Yiwei Qin",
"Yan Ma",
"Jiadi Su",
"Yixiu Liu",
"Yuxiang Zheng",
"Shaoting Zhang",
"Dahua Lin",
"Yu Qiao",
"Pengfei Liu"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.12753 | [
"https://github.com/gair-nlp/olympicarena"
] | https://huggingface.co/papers/2406.12753 | 14 | 14 | 2 | 28 | [] | [
"GAIR/OlympicArena"
] | [
"GAIR/OlympicArenaSubmission"
] | [] | [
"GAIR/OlympicArena"
] | [
"GAIR/OlympicArenaSubmission"
] | 1 |
null | https://openreview.net/forum?id=ar8aRMrmod | @inproceedings{
wei2024evaluating,
title={Evaluating Copyright Takedown Methods for Language Models},
author={Boyi Wei and Weijia Shi and Yangsibo Huang and Noah A. Smith and Chiyuan Zhang and Luke Zettlemoyer and Kai Li and Peter Henderson},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=ar8aRMrmod}
} | Language models (LMs) derive their capabilities from extensive training on diverse data, including copyrighted material. These models can memorize and generate content similar to their training data, potentially risking legal issues like copyright infringement. Therefore, model creators are motivated to develop mitigation methods that prevent generating particular copyrighted content, an ability we refer to as *copyright takedowns*. This paper introduces the first evaluation of the feasibility and side effects of copyright takedowns for LMs. We propose CoTaEval, an evaluation framework to assess the effectiveness of copyright takedown methods, the impact on the model's ability to retain uncopyrightable factual knowledge from the copyrighted content, and how well the model maintains its general utility and efficiency. We examine several strategies, including adding system prompts, decoding-time filtering interventions, and unlearning approaches. Our findings indicate that no method excels across all metrics, showing significant room for research in this unique problem setting and indicating potential unresolved challenges for live policy proposals. | Evaluating Copyright Takedown Methods for Language Models | [
"Boyi Wei",
"Weijia Shi",
"Yangsibo Huang",
"Noah A. Smith",
"Chiyuan Zhang",
"Luke Zettlemoyer",
"Kai Li",
"Peter Henderson"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2406.18664 | [
""
] | https://huggingface.co/papers/2406.18664 | 2 | 1 | 0 | 8 | [] | [] | [] | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=ap4x1kArGy | @inproceedings{
lyu2024odrl,
title={{ODRL}: A Benchmark for Off-Dynamics Reinforcement Learning},
author={Jiafei Lyu and Kang Xu and Jiacheng Xu and Mengbei Yan and Jing-Wen Yang and Zongzhang Zhang and Chenjia Bai and Zongqing Lu and Xiu Li},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=ap4x1kArGy}
} | We consider off-dynamics reinforcement learning (RL) where one needs to transfer policies across different domains with dynamics mismatch. Despite the focus on developing dynamics-aware algorithms, this field is hindered due to the lack of a standard benchmark. To bridge this gap, we introduce ODRL, the first benchmark tailored for evaluating off-dynamics RL methods. ODRL contains four experimental settings where the source and target domains can be either online or offline, and provides diverse tasks and a broad spectrum of dynamics shifts, making it a reliable platform to comprehensively evaluate the agent's adaptation ability to the target domain. Furthermore, ODRL includes recent off-dynamics RL algorithms in a unified framework and introduces some extra baselines for different settings, all implemented in a single-file manner. To unpack the true adaptation capability of existing methods, we conduct extensive benchmarking experiments, which show that no method has universal advantages across varied dynamics shifts. We hope this benchmark can serve as a cornerstone for future research endeavors. Our code is publicly available at https://github.com/OffDynamicsRL/off-dynamics-rl. | ODRL: A Benchmark for Off-Dynamics Reinforcement Learning | [
"Jiafei Lyu",
"Kang Xu",
"Jiacheng Xu",
"Mengbei Yan",
"Jing-Wen Yang",
"Zongzhang Zhang",
"Chenjia Bai",
"Zongqing Lu",
"Xiu Li"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2410.20750 | [
"https://github.com/offdynamicsrl/off-dynamics-rl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
null | https://openreview.net/forum?id=akEt8QAa6V | @inproceedings{
wang2024gta,
title={{GTA}: A Benchmark for General Tool Agents},
author={Jize Wang and Ma Zerun and Yining Li and Songyang Zhang and Cailian Chen and Kai Chen and Xinyi Le},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=akEt8QAa6V}
} | In developing general-purpose agents, significant focus has been placed on integrating large language models (LLMs) with various tools. This poses a challenge to the tool-use capabilities of LLMs. However, there are evident gaps between existing tool evaluations and real-world scenarios. Current evaluations often use AI-generated queries, single-step tasks, dummy tools, and text-only inputs, which fail to reveal the agents' real-world problem-solving abilities effectively. To address this, we propose GTA, a benchmark for **G**eneral **T**ool **A**gents, featuring three main aspects: (i) *Real user queries*: human-written queries with simple real-world objectives but implicit tool-use, requiring the LLM to reason about the suitable tools and plan the solution steps. (ii) *Real deployed tools*: an evaluation platform equipped with tools across perception, operation, logic, and creativity categories to evaluate the agents' actual task execution performance. (iii) *Real multimodal inputs*: authentic image files, such as spatial scenes, web page screenshots, tables, code snippets, and printed/handwritten materials, used as the query contexts to align with real-world scenarios closely. We designed 229 real-world tasks and executable tool chains to evaluate mainstream LLMs. Our findings show that real-world user queries are challenging for existing LLMs, with GPT-4 completing less than 50\% of the tasks and most LLMs achieving below 25\%. This evaluation reveals the bottlenecks in the tool-use capabilities of current LLMs in real-world scenarios, which is beneficial for the advancement of general-purpose tool agents. Dataset and code are available at https://github.com/open-compass/GTA. | GTA: A Benchmark for General Tool Agents | [
"Jize Wang",
"Ma Zerun",
"Yining Li",
"Songyang Zhang",
"Cailian Chen",
"Kai Chen",
"Xinyi Le"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2407.08713 | [
"https://github.com/open-compass/GTA"
] | https://huggingface.co/papers/2407.08713 | 4 | 14 | 2 | 7 | [] | [
"Jize1/GTA"
] | [] | [] | [
"Jize1/GTA"
] | [] | 1 |
null | https://openreview.net/forum?id=aekfb95slj | @inproceedings{
hao2024pinnacle,
title={{PINN}acle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving {PDE}s},
author={Zhongkai Hao and Jiachen Yao and Chang Su and Hang Su and Ziao Wang and Fanzhi Lu and Zeyu Xia and Yichi Zhang and Songming Liu and Lu Lu and Jun Zhu},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=aekfb95slj}
} | While significant progress has been made on Physics-Informed Neural Networks (PINNs), a comprehensive comparison of these methods across a wide range of Partial Differential Equations (PDEs) is still lacking. This study introduces PINNacle, a benchmarking tool designed to fill this gap. PINNacle provides a diverse dataset, comprising over 20 distinct PDEs from various domains, including heat conduction, fluid dynamics, biology, and electromagnetics. These PDEs encapsulate key challenges inherent to real-world problems, such as complex geometry, multi-scale phenomena, nonlinearity, and high dimensionality. PINNacle also offers a user-friendly toolbox, incorporating about 10 state-of-the-art PINN methods for systematic evaluation and comparison. We have conducted extensive experiments with these methods, offering insights into their strengths and weaknesses. In addition to providing a standardized means of assessing performance, PINNacle also offers an in-depth analysis to guide future research, particularly in areas such as domain decomposition methods and loss reweighting for handling multi-scale problems and complex geometry. To the best of our knowledge, it is the largest benchmark with a diverse and comprehensive evaluation that will undoubtedly foster further research in PINNs. | PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving PDEs | [
"Zhongkai Hao",
"Jiachen Yao",
"Chang Su",
"Hang Su",
"Ziao Wang",
"Fanzhi Lu",
"Zeyu Xia",
"Yichi Zhang",
"Songming Liu",
"Lu Lu",
"Jun Zhu"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | poster | 2306.08827 | [
"https://github.com/i207m/pinnacle"
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |
null | https://openreview.net/forum?id=abXaOcvujs | @inproceedings{
vogel2024wikidbs,
title={Wiki{DB}s: A Large-Scale Corpus Of Relational Databases From Wikidata},
author={Liane Vogel and Jan-Micha Bodensohn and Carsten Binnig},
booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2024},
url={https://openreview.net/forum?id=abXaOcvujs}
} | Deep learning on tabular data, and particularly tabular representation learning, has recently gained growing interest. However, representation learning for relational databases with multiple tables is still an underexplored area, which may be attributed to the lack of openly available resources. To support the development of foundation models for tabular data and relational databases, we introduce WikiDBs, a novel open-source corpus of 100,000 relational databases. Each database consists of multiple tables connected by foreign keys. The corpus is based on Wikidata and aims to follow certain characteristics of real-world databases. In this paper, we describe the dataset and our method for creating it. By making our code publicly available, we enable others to create tailored versions of the dataset, for example, by creating databases in different languages. Finally, we conduct a set of initial experiments to showcase how WikiDBs can be used to train for data engineering tasks, such as missing value imputation and column type annotation. | WikiDBs: A Large-Scale Corpus Of Relational Databases From Wikidata | [
"Liane Vogel",
"Jan-Micha Bodensohn",
"Carsten Binnig"
] | NeurIPS.cc/2024/Datasets_and_Benchmarks_Track | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | [] | [] | [] | 0 |