bibtex_url: null
proceedings: string (length 42 to 42)
bibtext: string (length 302 to 2.02k)
abstract: string (length 566 to 2.48k)
title: string (length 16 to 179)
authors: sequence (length 1 to 76)
id: string (1 class)
type: string (2 classes)
arxiv_id: string (length 0 to 10)
GitHub: sequence (length 1 to 1)
paper_page: string (length 0 to 40)
n_linked_authors: int64 (-1 to 24)
upvotes: int64 (-1 to 86)
num_comments: int64 (-1 to 10)
n_authors: int64 (-1 to 75)
Models: sequence (length 0 to 37)
Datasets: sequence (length 0 to 10)
Spaces: sequence (length 0 to 26)
old_Models: sequence (length 0 to 37)
old_Datasets: sequence (length 0 to 10)
old_Spaces: sequence (length 0 to 26)
paper_page_exists_pre_conf: int64 (0 to 1)
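The flattened column listing above describes one record per paper. Below is a minimal sketch of how such records could be loaded and filtered, assuming they are exported as a JSON-lines file with these column names (the file name is a placeholder, not part of the original dump):

```python
import json

# Placeholder path; assumes the records below are exported as one JSON object per line
# with the column names listed in the schema above.
with open("neurips2024_db_track.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

# Example query: orals with a Hugging Face paper page.
orals = [r for r in records if r.get("type") == "oral" and r.get("paper_page")]
for r in orals:
    print(r["title"], "->", r["paper_page"])
```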
null
https://openreview.net/forum?id=aXeiCbMFFJ
@inproceedings{ shao2024visual, title={Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought Reasoning}, author={Hao Shao and Shengju Qian and Han Xiao and Guanglu Song and Zhuofan Zong and Letian Wang and Yu Liu and Hongsheng Li}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=aXeiCbMFFJ} }
Multi-Modal Large Language Models (MLLMs) have demonstrated impressive performance in various VQA tasks. However, they often lack interpretability and struggle with complex visual inputs, especially when the resolution of the input image is high or when the region of interest that could provide key information for answering the question is small. To address these challenges, we collect and introduce the large-scale Visual CoT dataset comprising 438k question-answer pairs, annotated with intermediate bounding boxes highlighting key regions essential for answering the questions. Additionally, about 98k of these pairs are annotated with detailed reasoning steps. Importantly, we propose a multi-turn processing pipeline that dynamically focuses on visual inputs and provides interpretable thoughts. We also introduce the related benchmark to evaluate MLLMs in scenarios requiring specific local region identification. Extensive experiments demonstrate the effectiveness of our framework and shed light on better inference strategies. The Visual CoT dataset, benchmark, and pre-trained models are available on this [website](https://hao-shao.com/projects/viscot.html) to support further research in this area.
Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought Reasoning
[ "Hao Shao", "Shengju Qian", "Han Xiao", "Guanglu Song", "Zhuofan Zong", "Letian Wang", "Yu Liu", "Hongsheng Li" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2403.16999
[ "https://github.com/deepcs233/visual-cot" ]
https://huggingface.co/papers/2403.16999
1
3
0
8
[]
[ "deepcs233/Visual-CoT" ]
[]
[]
[ "deepcs233/Visual-CoT" ]
[]
1
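The record above lists the Hugging Face dataset "deepcs233/Visual-CoT". A hedged sketch of inspecting it with the `datasets` library follows; the split name and available configurations are assumptions and may differ from what the repository actually exposes:

```python
from datasets import load_dataset

# Dataset id taken from the record above; the split/config names are assumptions.
viscot = load_dataset("deepcs233/Visual-CoT", split="train")

print(viscot)      # expected: question-answer pairs with bounding-box annotations
print(viscot[0])   # inspect one example to see the actual field names
```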
null
https://openreview.net/forum?id=aJ1yse8GEr
@inproceedings{ kargaran2024glotcc, title={Glot{CC}: An Open Broad-Coverage CommonCrawl Corpus and Pipeline for Minority Languages}, author={Amir Hossein Kargaran and Fran{\c{c}}ois Yvon and Hinrich Schuetze}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=aJ1yse8GEr} }
The need for large text corpora has increased with the advent of pretrained language models and, in particular, the discovery of scaling laws for these models. Most available corpora have sufficient data only for languages with large dominant communities. However, there is no corpus available that (i) covers a wide range of minority languages; (ii) is generated by an open-source reproducible pipeline; and (iii) is rigorously cleaned of noise, making it trustworthy to use. We present GlotCC, a clean, document-level, 2TB general domain corpus derived from CommonCrawl, covering more than 1000 languages. We make GlotCC and the system used to generate it—including the pipeline, language identification model, and filters—available to the research community. Corpus v. 1.0 https://huggingface.co/datasets/cis-lmu/GlotCC-v1 Pipeline v. 3.0 https://github.com/cisnlp/GlotCC
GlotCC: An Open Broad-Coverage CommonCrawl Corpus and Pipeline for Minority Languages
[ "Amir Hossein Kargaran", "François Yvon", "Hinrich Schuetze" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.23825
[ "https://github.com/cisnlp/glotcc" ]
https://huggingface.co/papers/2410.23825
1
3
2
3
[]
[ "cis-lmu/GlotCC-V1" ]
[]
[]
[ "cis-lmu/GlotCC-V1" ]
[]
1
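The record above points to the corpus "cis-lmu/GlotCC-V1". A hedged sketch of streaming a small sample with the `datasets` library, so the 2TB corpus is not downloaded in full; the per-language configuration name used here is an assumption:

```python
from datasets import load_dataset

# Dataset id from the record above; "eng-Latn" is an assumed per-language config name.
glotcc = load_dataset("cis-lmu/GlotCC-V1", "eng-Latn", split="train", streaming=True)

# Stream a handful of documents instead of downloading the full corpus.
for i, doc in enumerate(glotcc):
    print(doc)     # field names (e.g. the document text) depend on the actual schema
    if i == 2:
        break
```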
null
https://openreview.net/forum?id=a7LPpyFWj2
@inproceedings{ askari2024decobench, title={{DECO}-Bench: Unified Benchmark for Decoupled Task-Agnostic Synthetic Data Release}, author={Farzaneh Askari and Lingjuan Lyu and Vivek Sharma}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=a7LPpyFWj2} }
In this work, we tackle the question of how to systematically benchmark task-agnostic decoupling methods for privacy-preserving machine learning (ML). Sharing datasets that include sensitive information often triggers privacy concerns, necessitating robust decoupling methods to separate sensitive and non-sensitive attributes. Despite the development of numerous decoupling techniques, a standard benchmark for systematically comparing these methods remains absent. Our framework integrates various decoupling techniques along with synthetic data generation and evaluation protocols within a unified system. Using our framework, we benchmark various decoupling techniques and evaluate their privacy-utility trade-offs. Finally, we release our source code, pre-trained models, and datasets of decoupled representations to foster research in this area.
DECO-Bench: Unified Benchmark for Decoupled Task-Agnostic Synthetic Data Release
[ "Farzaneh Askari", "Lingjuan Lyu", "Vivek Sharma" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=a6DteCxiw6
@inproceedings{ martinez2024codec, title={Codec Avatar Studio: Paired Human Captures for Complete, Driveable, and Generalizable Avatars}, author={Julieta Martinez and Emily Kim and Javier Romero and Timur Bagautdinov and Shunsuke Saito and Shoou-I Yu and Stuart Anderson and Michael Zollh{\"o}fer and Te-Li Wang and Shaojie Bai and Chenghui Li and Shih-En Wei and Rohan Joshi and Wyatt Borsos and Tomas Simon and Jason Saragih and Paul Theodosis and Alexander Greene and Anjani Josyula and Silvio Mano Maeta and Andrew I Jewett and Simion Venshtain and Christopher Heilman and Yueh-Tung Chen and Sidi Fu and Mohamed Ezzeldin A. Elshaer and Tingfang Du and Longhua Wu and Shen-Chi Chen and Kai Kang and Michael Wu and Youssef Emad and Steven Longay and Ashley Brewer and Hitesh Shah and James Booth and Taylor Koska and Kayla Haidle and Matthew Andromalos and Joanna Ching-Hui Hsu and Thomas Dauer and Peter Selednik and Tim Godisart and Scott Ardisson and Matthew Cipperly and Ben Humberston and Lon Farr and Bob Hansen and Peihong Guo and Dave Braun and Steven Krenn and He Wen and Lucas Evans and Natalia Fadeeva and Matthew Stewart and Gabriel Schwartz and Divam Gupta and Gyeongsik Moon and Kaiwen Guo and Yuan Dong and Yichen Xu and Takaaki Shiratori and Fabian Andres Prada Nino and Bernardo R Pires and Bo Peng and Julia Buffalini and Autumn Trimble and Kevyn Alex Anthony McPhail and Melissa Robinson Schoeller and Yaser Sheikh}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=a6DteCxiw6} }
To build photorealistic avatars that users can embody, human modelling must be complete (cover the full body), driveable (able to reproduce the current motion and appearance from the user), and generalizable (_i.e._, easily adaptable to novel identities). Towards these goals, _paired_ captures, that is, captures of the same subject obtained from systems of diverse quality and availability, are crucial. However, paired captures are rarely available to researchers outside of dedicated industrial labs: _Codec Avatar Studio_ is our proposal to close this gap. Towards generalization and driveability, we introduce a dataset of 256 subjects captured in two modalities: high resolution multi-view scans of their heads, and video from the internal cameras of a headset. Towards completeness, we introduce a dataset of 4 subjects captured in eight modalities: high quality relightable multi-view captures of heads and hands, full body multi-view captures with minimal and regular clothes, and corresponding head, hands and body phone captures. Together with our data, we also provide code and pre-trained models for different state-of-the-art human generation models. Our datasets and code are available at https://github.com/facebookresearch/ava-256 and https://github.com/facebookresearch/goliath.
Codec Avatar Studio: Paired Human Captures for Complete, Driveable, and Generalizable Avatars
[ "Julieta Martinez", "Emily Kim", "Javier Romero", "Timur Bagautdinov", "Shunsuke Saito", "Shoou-I Yu", "Stuart Anderson", "Michael Zollhöfer", "Te-Li Wang", "Shaojie Bai", "Chenghui Li", "Shih-En Wei", "Rohan Joshi", "Wyatt Borsos", "Tomas Simon", "Jason Saragih", "Paul Theodosis", "Alexander Greene", "Anjani Josyula", "Silvio Mano Maeta", "Andrew I Jewett", "Simion Venshtain", "Christopher Heilman", "Yueh-Tung Chen", "Sidi Fu", "Mohamed Ezzeldin A. Elshaer", "Tingfang Du", "Longhua Wu", "Shen-Chi Chen", "Kai Kang", "Michael Wu", "Youssef Emad", "Steven Longay", "Ashley Brewer", "Hitesh Shah", "James Booth", "Taylor Koska", "Kayla Haidle", "Matthew Andromalos", "Joanna Ching-Hui Hsu", "Thomas Dauer", "Peter Selednik", "Tim Godisart", "Scott Ardisson", "Matthew Cipperly", "Ben Humberston", "Lon Farr", "Bob Hansen", "Peihong Guo", "Dave Braun", "Steven Krenn", "He Wen", "Lucas Evans", "Natalia Fadeeva", "Matthew Stewart", "Gabriel Schwartz", "Divam Gupta", "Gyeongsik Moon", "Kaiwen Guo", "Yuan Dong", "Yichen Xu", "Takaaki Shiratori", "Fabian Andres Prada Nino", "Bernardo R Pires", "Bo Peng", "Julia Buffalini", "Autumn Trimble", "Kevyn Alex Anthony McPhail", "Melissa Robinson Schoeller", "Yaser Sheikh" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=a0WAM6q6fV
@inproceedings{ akhtar2024croissant, title={Croissant: A Metadata Format for {ML}-Ready Datasets}, author={Mubashara Akhtar and Omar Benjelloun and Costanza Conforti and Luca Foschini and Joan Giner-Miguelez and Pieter Gijsbers and Sujata Goswami and Nitisha Jain and Michalis Karamousadakis and Michael Kuchnik and Satyapriya Krishna and Sylvain Lesage and Quentin Lhoest and Pierre Marcenac and Manil Maskey and Peter Mattson and Luis Oala and Hamidah Oderinwale and Pierre Ruyssen and Tim Santos and Rajat Shinde and Elena Simperl and Arjun Suresh and Goeff Thomas and Slava Tykhonov and Joaquin Vanschoren and Susheel Varma and Jos van der Velde and Steffen Vogler and Carole-Jean Wu and Luyao Zhang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=a0WAM6q6fV} }
Data is a critical resource for machine learning (ML), yet working with data remains a key friction point. This paper introduces Croissant, a metadata format for datasets that creates a shared representation across ML tools, frameworks, and platforms. Croissant makes datasets more discoverable, portable, and interoperable, thereby addressing significant challenges in ML data management. Croissant is already supported by several popular dataset repositories, spanning hundreds of thousands of datasets, enabling easy loading into the most commonly-used ML frameworks, regardless of where the data is stored. Our initial evaluation by human raters shows that Croissant metadata is readable, understandable, complete, yet concise.
Croissant: A Metadata Format for ML-Ready Datasets
[ "Mubashara Akhtar", "Omar Benjelloun", "Costanza Conforti", "Luca Foschini", "Joan Giner-Miguelez", "Pieter Gijsbers", "Sujata Goswami", "Nitisha Jain", "Michalis Karamousadakis", "Michael Kuchnik", "Satyapriya Krishna", "Sylvain Lesage", "Quentin Lhoest", "Pierre Marcenac", "Manil Maskey", "Peter Mattson", "Luis Oala", "Hamidah Oderinwale", "Pierre Ruyssen", "Tim Santos", "Rajat Shinde", "Elena Simperl", "Arjun Suresh", "Goeff Thomas", "Slava Tykhonov", "Joaquin Vanschoren", "Susheel Varma", "Jos van der Velde", "Steffen Vogler", "Carole-Jean Wu", "Luyao Zhang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2403.19546
[ "https://github.com/mlcommons/croissant" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
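Since Croissant is a metadata format meant to make datasets loadable across tools, a hedged sketch of consuming a Croissant description with the `mlcroissant` package follows; the JSON-LD URL and record-set name are placeholders, not values from the paper:

```python
import mlcroissant as mlc

# Placeholder URL; a repository supporting Croissant would publish such a JSON-LD file.
croissant_url = "https://example.org/some-dataset/croissant.jsonld"
dataset = mlc.Dataset(jsonld=croissant_url)

# Iterate over one of the record sets declared in the metadata (name assumed).
for record in dataset.records(record_set="default"):
    print(record)
    break
```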
null
https://openreview.net/forum?id=ZsyFwzuDzD
@inproceedings{ liu2024revisiting, title={Revisiting, Benchmarking and Understanding Unsupervised Graph Domain Adaptation}, author={Meihan Liu and Zhen Zhang and Jiachen Tang and Jiajun Bu and Bingsheng He and Sheng Zhou}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=ZsyFwzuDzD} }
Unsupervised Graph Domain Adaptation (UGDA) involves the transfer of knowledge from a label-rich source graph to an unlabeled target graph under domain discrepancies. Despite the proliferation of methods designed for this emerging task, the lack of standard experimental settings and fair performance comparisons makes it challenging to understand which models perform well, and when, across different scenarios. To fill this gap, we present the first comprehensive benchmark for unsupervised graph domain adaptation named GDABench, which encompasses 16 algorithms across diverse adaptation tasks. Through extensive experiments, we observe that the performance of current UGDA models varies significantly across different datasets and adaptation scenarios. Specifically, we recognize that when the source and target graphs face significant distribution shifts, it is imperative to formulate strategies to effectively address and mitigate graph structural shifts. We also find that with appropriate neighbourhood aggregation mechanisms, simple GNN variants can even surpass state-of-the-art UGDA baselines. To facilitate reproducibility, we have developed an easy-to-use library, PyGDA, for training and evaluating existing UGDA methods, providing a standardized platform for this community. Our source code and datasets can be found at https://github.com/pygda-team/pygda.
Revisiting, Benchmarking and Understanding Unsupervised Graph Domain Adaptation
[ "Meihan Liu", "Zhen Zhang", "Jiachen Tang", "Jiajun Bu", "Bingsheng He", "Sheng Zhou" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.11052
[ "https://github.com/pygda-team/pygda" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ZZ17sBJh3w
@inproceedings{ jin2024apddv, title={{APDD}v2: Aesthetics of Paintings and Drawings Dataset with Artist Labeled Scores and Comments}, author={Xin Jin and Qianqian Qiao and Yi Lu and HuayeWang and Heng Huang and Shan Gao and Jianfei Liu and Rui Li}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=ZZ17sBJh3w} }
Datasets play a pivotal role in training visual models, facilitating the development of abstract understandings of visual features through diverse image samples and multidimensional attributes. However, in the realm of aesthetic evaluation of artistic images, datasets remain relatively scarce. Existing painting datasets are often characterized by limited scoring dimensions and insufficient annotations, thereby constraining the advancement and application of automatic aesthetic evaluation methods in the domain of painting. To bridge this gap, we introduce the Aesthetics of Paintings and Drawings Dataset (APDD), the first comprehensive collection of paintings encompassing 24 distinct artistic categories and 10 aesthetic attributes. Building upon the initial release of APDDv1, our ongoing research has identified opportunities for enhancement in data scale and annotation precision. Consequently, APDDv2 boasts an expanded image corpus and improved annotation quality, featuring detailed language comments to better cater to the needs of both researchers and practitioners seeking high-quality painting datasets. Furthermore, we present an updated version of the Art Assessment Network for Specific Painting Styles, denoted as ArtCLIP. Experimental validation demonstrates the superior performance of this revised model in the realm of aesthetic evaluation, surpassing its predecessor in accuracy and efficacy. The dataset and model are available at https://github.com/BestiVictory/APDDv2.git.
APDDv2: Aesthetics of Paintings and Drawings Dataset with Artist Labeled Scores and Comments
[ "Xin Jin", "Qianqian Qiao", "Yi Lu", "HuayeWang", "Heng Huang", "Shan Gao", "Jianfei Liu", "Rui Li" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "https://github.com/bestivictory/apddv2" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ZRMAhpZ3ED
@inproceedings{ monroc2024wfcrl, title={{WFCRL}: A Multi-Agent Reinforcement Learning Benchmark for Wind Farm Control}, author={Claire Bizon Monroc and Ana Busic and Donatien Dubuc and Jiamin Zhu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=ZRMAhpZ3ED} }
The wind farm control problem is challenging, since conventional model-based control strategies require tractable models of complex aerodynamical interactions between the turbines and suffer from the curse of dimensionality when the number of turbines increases. Recently, model-free and multi-agent reinforcement learning approaches have been used to address this challenge. In this article, we introduce WFCRL (Wind Farm Control with Reinforcement Learning), the first suite of multi-agent reinforcement learning environments for the wind farm control problem. WFCRL frames a cooperative Multi-Agent Reinforcement Learning (MARL) problem: each turbine is an agent and can learn to adjust its yaw, pitch or torque to maximize the common objective (e.g. the total power production of the farm). WFCRL also offers turbine load observations that allow optimizing farm performance while limiting structural damage to the turbines. Interfaces with two state-of-the-art farm simulators are implemented in WFCRL: a static simulator (Floris) and a dynamic simulator (FAST.Farm). For each simulator, 10 wind layouts are provided, including 5 real wind farms. Two state-of-the-art online MARL algorithms are implemented to illustrate the scaling challenges. As learning online on FAST.Farm is highly time-consuming, WFCRL offers the possibility of designing transfer learning strategies from Floris to FAST.Farm.
WFCRL: A Multi-Agent Reinforcement Learning Benchmark for Wind Farm Control
[ "Claire Bizon Monroc", "Ana Busic", "Donatien Dubuc", "Jiamin Zhu" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ZQy6dGlBay
@inproceedings{ cho2024a, title={A Benchmark Dataset for Event-Guided Human Pose Estimation and Tracking in Extreme Conditions}, author={Hoonhee Cho and Taewoo Kim and Yuhwan Jeong and Kuk-Jin Yoon}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=ZQy6dGlBay} }
Multi-person pose estimation and tracking have been actively researched by the computer vision community due to their practical applicability. However, existing human pose estimation and tracking datasets have only been successful in typical scenarios, such as those without motion blur or with well-lit conditions. These RGB-based datasets offer limited support for learning under extreme motion blur or poor lighting conditions, making them inherently vulnerable to such scenarios. As a promising solution, bio-inspired event cameras exhibit robustness in extreme scenarios due to their high dynamic range and micro-second level temporal resolution. Therefore, in this paper, we introduce a new hybrid dataset encompassing both RGB and event data for human pose estimation and tracking in two extreme scenarios: low-light and motion blur environments. The proposed Event-guided Human Pose Estimation and Tracking in eXtreme Conditions (EHPT-XC) dataset covers cases of motion blur caused by dynamic objects and low-light conditions individually as well as both simultaneously. With EHPT-XC, we aim to inspire researchers to tackle pose estimation and tracking in extreme conditions by leveraging the advantages of the event camera. The project page is available at https://github.com/Chohoonhee/EHPT-XC.
A Benchmark Dataset for Event-Guided Human Pose Estimation and Tracking in Extreme Conditions
[ "Hoonhee Cho", "Taewoo Kim", "Yuhwan Jeong", "Kuk-Jin Yoon" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ZMn2SPUgkU
@inproceedings{ chen2024verified, title={{VERIFIED}: A Video Corpus Moment Retrieval Benchmark for Fine-Grained Video Understanding}, author={Houlun Chen and Xin Wang and Hong Chen and Zeyang Zhang and Wei Feng and Bin Huang and Jia Jia and Wenwu Zhu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=ZMn2SPUgkU} }
Existing Video Corpus Moment Retrieval (VCMR) is limited to coarse-grained understanding that hinders precise video moment localization when given fine-grained queries. In this paper, we propose a more challenging fine-grained VCMR benchmark requiring methods to localize the best-matched moment from the corpus with other partially matched candidates. To improve the dataset construction efficiency and guarantee high-quality data annotations, we propose VERIFIED, an automatic \underline{V}id\underline{E}o-text annotation pipeline to generate captions with \underline{R}el\underline{I}able \underline{FI}n\underline{E}-grained statics and \underline{D}ynamics. Specifically, we resort to large language models (LLM) and large multimodal models (LMM) with our proposed Statics and Dynamics Enhanced Captioning modules to generate diverse fine-grained captions for each video. To filter out the inaccurate annotations caused by the LLM hallucination, we propose a Fine-Granularity Aware Noise Evaluator where we fine-tune a video foundation model with disturbed hard-negatives augmented contrastive and matching losses. With VERIFIED, we construct a more challenging fine-grained VCMR benchmark containing Charades-FIG, DiDeMo-FIG, and ActivityNet-FIG which demonstrate a high level of annotation quality. We evaluate several state-of-the-art VCMR models on the proposed dataset, revealing that there is still significant scope for fine-grained video understanding in VCMR.
VERIFIED: A Video Corpus Moment Retrieval Benchmark for Fine-Grained Video Understanding
[ "Houlun Chen", "Xin Wang", "Hong Chen", "Zeyang Zhang", "Wei Feng", "Bin Huang", "Jia Jia", "Wenwu Zhu" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.08593
[ "https://github.com/hlchen23/verified" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ZGMkOikEyv
@inproceedings{ wu2024detectrl, title={Detect{RL}: Benchmarking {LLM}-Generated Text Detection in Real-World Scenarios}, author={Junchao Wu and Runzhe Zhan and Derek F. Wong and Shu Yang and Xinyi Yang and Yulin Yuan and Lidia S. Chao}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=ZGMkOikEyv} }
Detecting text generated by large language models (LLMs) is of great recent interest. With zero-shot methods like DetectGPT, detection capabilities have reached impressive levels. However, the reliability of existing detectors in real-world applications remains underexplored. In this study, we present a new benchmark, DetectRL, highlighting that even state-of-the-art (SOTA) detection techniques still underperform on this task. We collected human-written datasets from domains where LLMs are particularly prone to misuse. Using popular LLMs, we generated data that better aligns with real-world applications. Unlike previous studies, we employed heuristic rules to create adversarial LLM-generated text, simulating advanced prompt usages, human revisions like word substitutions, and writing errors. Our development of DetectRL reveals the strengths and limitations of current SOTA detectors. More importantly, we analyzed the potential impact of writing styles, model types, attack methods, text lengths, and real-world human writing factors on different types of detectors. We believe DetectRL could serve as an effective benchmark for assessing detectors in real-world scenarios, evolving with advanced attack methods, thus providing a more stressful evaluation to drive the development of more efficient detectors. Data and code are publicly available at: https://github.com/NLP2CT/DetectRL.
DetectRL: Benchmarking LLM-Generated Text Detection in Real-World Scenarios
[ "Junchao Wu", "Runzhe Zhan", "Derek F. Wong", "Shu Yang", "Xinyi Yang", "Yulin Yuan", "Lidia S. Chao" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.23746
[ "https://github.com/nlp2ct/detectrl" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=YzM10FEJ2D
@inproceedings{ bassi2024touchstone, title={Touchstone Benchmark: Are We on the Right Way for Evaluating {AI} Algorithms for Medical Segmentation?}, author={Pedro R. A. S. Bassi and Wenxuan Li and Yucheng Tang and Fabian Isensee and Zifu Wang and Jieneng Chen and Yu-Cheng Chou and Yannick Kirchhoff and Maximilian Rouven Rokuss and Ziyan Huang and Jin Ye and Junjun He and Tassilo Wald and Constantin Ulrich and Michael Baumgartner and Saikat Roy and Klaus Maier-Hein and Paul F Jaeger and Yiwen Ye and Yutong Xie and Jianpeng Zhang and Ziyang Chen and Yong Xia and Zhaohu Xing and Lei Zhu and Yousef Sadegheih and Afshin Bozorgpour and Pratibha Kumari and Reza Azad and Dorit Merhof and Pengcheng Shi and Ting Ma and Yuxin Du and Fan BAI and Tiejun Huang and Bo Zhao and Haonan Wang and Xiaomeng Li and Hanxue Gu and Haoyu Dong and Jichen Yang and Maciej A Mazurowski and Saumya Gupta and Linshan Wu and Jia-Xin Zhuang and Hao Chen and Holger R Roth and Daguang Xu and Matthew B. Blaschko and Sergio Decherchi and Andrea Cavalli and Alan Yuille and Zongwei Zhou}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=YzM10FEJ2D} }
How can we test AI performance? This question seems trivial, but it isn't. Standard benchmarks often have problems such as in-distribution and small-size test sets, oversimplified metrics, unfair comparisons, and short-term outcome pressure. As a consequence, good performance on standard benchmarks does not guarantee success in real-world scenarios. To address these problems, we present Touchstone, a large-scale collaborative segmentation benchmark of 9 types of abdominal organs. This benchmark is based on 5,195 training CT scans from 76 hospitals around the world and 5,903 testing CT scans from 11 additional hospitals. This diverse test set enhances the statistical significance of benchmark results and rigorously evaluates AI algorithms across various out-of-distribution scenarios. We invited 14 inventors of 19 AI algorithms to train their algorithms, while our team, as a third party, independently evaluated these algorithms on three test sets. In addition, we also evaluated pre-existing AI frameworks—which, differing from algorithms, are more flexible and can support different algorithms—including MONAI from NVIDIA, nnU-Net from DKFZ, and numerous other open-source frameworks. We are committed to expanding this benchmark to encourage more innovation of AI algorithms for the medical domain.
Touchstone Benchmark: Are We on the Right Way for Evaluating AI Algorithms for Medical Segmentation?
[ "Pedro R. A. S. Bassi", "Wenxuan Li", "Yucheng Tang", "Fabian Isensee", "Zifu Wang", "Jieneng Chen", "Yu-Cheng Chou", "Yannick Kirchhoff", "Maximilian Rouven Rokuss", "Ziyan Huang", "Jin Ye", "Junjun He", "Tassilo Wald", "Constantin Ulrich", "Michael Baumgartner", "Saikat Roy", "Klaus Maier-Hein", "Paul F Jaeger", "Yiwen Ye", "Yutong Xie", "Jianpeng Zhang", "Ziyang Chen", "Yong Xia", "Zhaohu Xing", "Lei Zhu", "Yousef Sadegheih", "Afshin Bozorgpour", "Pratibha Kumari", "Reza Azad", "Dorit Merhof", "Pengcheng Shi", "Ting Ma", "Yuxin Du", "Fan BAI", "Tiejun Huang", "Bo Zhao", "Haonan Wang", "Xiaomeng Li", "Hanxue Gu", "Haoyu Dong", "Jichen Yang", "Maciej A Mazurowski", "Saumya Gupta", "Linshan Wu", "Jia-Xin Zhuang", "Hao Chen", "Holger R Roth", "Daguang Xu", "Matthew B. Blaschko", "Sergio Decherchi", "Andrea Cavalli", "Alan Yuille", "Zongwei Zhou" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2411.03670
[ "https://github.com/mrgiovanni/touchstone" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=YxuuzyplFZ
@inproceedings{ bandara2024eyegraph, title={EyeGraph: Modularity-aware Spatio Temporal Graph Clustering for Continuous Event-based Eye Tracking}, author={Nuwan Sriyantha Bandara and Thivya Kandappu and Argha Sen and Ila Gokarn and Archan Misra}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=YxuuzyplFZ} }
Continuous tracking of eye movement dynamics plays a significant role in developing a broad spectrum of human-centered applications, such as cognitive skills (visual attention and working memory) modeling, human-machine interaction, biometric user authentication, and foveated rendering. Recently, neuromorphic cameras have garnered significant interest in the eye-tracking research community, owing to their sub-microsecond latency in capturing intensity changes resulting from eye movements. Nevertheless, the existing approaches for event-based eye tracking suffer from several limitations: dependence on RGB frames, label sparsity, and training on datasets collected in controlled lab environments that do not adequately reflect real-world scenarios. To address these limitations, in this paper, we propose a dynamic graph-based approach that uses a neuromorphic event stream captured by Dynamic Vision Sensors (DVS) for high-fidelity tracking of pupillary movement. More specifically, first, we present EyeGraph, a large-scale multi-modal near-eye tracking dataset collected using a wearable event camera attached to a head-mounted device from 40 participants -- the dataset was curated while mimicking in-the-wild settings, accounting for varying mobility and ambient lighting conditions. Subsequently, to address the issue of label sparsity, we adopt an unsupervised topology-aware approach as a benchmark. To be specific, (a) we first construct a dynamic graph using Gaussian Mixture Models (GMM), resulting in a uniform and detailed representation of eye morphology features, facilitating accurate modeling of the pupil and iris. Then, (b) we apply a novel topologically guided, modularity-aware graph clustering approach to precisely track the movement of the pupil and address the label sparsity in event-based eye tracking. We show that our unsupervised approach has comparable performance against the supervised approaches while consistently outperforming the conventional clustering approaches.
EyeGraph: Modularity-aware Spatio Temporal Graph Clustering for Continuous Event-based Eye Tracking
[ "Nuwan Sriyantha Bandara", "Thivya Kandappu", "Argha Sen", "Ila Gokarn", "Archan Misra" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=YktwH3tOuc
@inproceedings{ longjohn2024benchmark, title={Benchmark Data Repositories for Better Benchmarking}, author={Rachel Longjohn and Markelle Kelly and Sameer Singh and Padhraic Smyth}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=YktwH3tOuc} }
In machine learning research, it is common to evaluate algorithms via their performance on standard benchmark datasets. While a growing body of work establishes guidelines for---and levies criticisms at---data and benchmarking practices in machine learning, comparatively less attention has been paid to the data repositories where these datasets are stored, documented, and shared. In this paper, we analyze the landscape of these _benchmark data repositories_ and the role they can play in improving benchmarking. This role includes addressing issues with both datasets themselves (e.g., representational harms, construct validity) and the manner in which evaluation is carried out using such datasets (e.g., overemphasis on a few datasets and metrics, lack of reproducibility). To this end, we identify and discuss a set of considerations surrounding the design and use of benchmark data repositories, with a focus on improving benchmarking practices in machine learning.
Benchmark Data Repositories for Better Benchmarking
[ "Rachel Longjohn", "Markelle Kelly", "Sameer Singh", "Padhraic Smyth" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.24100
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=YagfTP3RK6
@inproceedings{ ren2024safetywashing, title={Safetywashing: Do {AI} Safety Benchmarks Actually Measure Safety Progress?}, author={Richard Ren and Steven Basart and Adam Khoja and Alice Gatti and Long Phan and Xuwang Yin and Mantas Mazeika and Alexander Pan and Gabriel Mukobi and Ryan Hwang Kim and Stephen Fitz and Dan Hendrycks}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=YagfTP3RK6} }
Performance on popular ML benchmarks is highly correlated with model scale, suggesting that most benchmarks tend to measure a similar underlying factor of general model capabilities. However, substantial research effort remains devoted to designing new benchmarks, many of which claim to measure novel phenomena. In the spirit of the Bitter Lesson, we leverage spectral analysis to measure an underlying capabilities component, the direction in benchmark-performance-space which explains most variation in model performance. In an extensive analysis of existing safety benchmarks, we find that variance in model performance on many safety benchmarks is largely explained by the capabilities component. In response, we argue that safety research should prioritize metrics which are not highly correlated with scale. Our work provides a lens to analyze both novel safety benchmarks and novel safety methods, which we hope will enable future work to make differential progress on safety.
Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress?
[ "Richard Ren", "Steven Basart", "Adam Khoja", "Alice Gatti", "Long Phan", "Xuwang Yin", "Mantas Mazeika", "Alexander Pan", "Gabriel Mukobi", "Ryan Hwang Kim", "Stephen Fitz", "Dan Hendrycks" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
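As a hedged illustration of the spectral analysis described above (extracting a "capabilities component" from a model-by-benchmark score matrix and correlating an individual benchmark with it), the sketch below uses synthetic scores, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic score matrix: rows = models, columns = benchmarks (placeholder values).
scores = rng.normal(size=(20, 8))

# First principal direction of the centered matrix explains most performance variation.
centered = scores - scores.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
capabilities = centered @ vt[0]          # per-model score on the capabilities component

# How strongly one benchmark (column 0) correlates with that component.
r = np.corrcoef(centered[:, 0], capabilities)[0, 1]
print(f"correlation with capabilities component: {r:.2f}")
```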
null
https://openreview.net/forum?id=YXXmIHJQBN
@inproceedings{ wang2024dbinfer, title={4{DBI}nfer: A 4D Benchmarking Toolbox for Graph-Centric Predictive Modeling on {RDB}s}, author={Minjie Wang and Quan Gan and David Wipf and Zheng Zhang and Christos Faloutsos and Weinan Zhang and Muhan Zhang and Zhenkun Cai and Jiahang Li and Zunyao Mao and Yakun Song and Jianheng Tang and Yanlin Zhang and Guang Yang and Chuan Lei and Xiao Qin and Ning Li and Han Zhang and Yanbo Wang and Zizhao Zhang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=YXXmIHJQBN} }
Given a relational database (RDB), how can we predict missing column values in some target table of interest? Although RDBs store vast amounts of rich, informative data spread across interconnected tables, the progress of predictive machine learning models as applied to such tasks arguably falls well behind advances in other domains such as computer vision or natural language processing. This deficit stems, at least in part, from the lack of established/public RDB benchmarks as needed for training and evaluation purposes. As a result, related model development thus far often defaults to tabular approaches trained on ubiquitous single-table benchmarks, or on the relational side, graph-based alternatives such as GNNs applied to a completely different set of graph datasets devoid of tabular characteristics. To more precisely target RDBs lying at the nexus of these two complementary regimes, we explore a broad class of baseline models predicated on: (i) converting multi-table datasets into graphs using various strategies equipped with efficient subsampling, while preserving tabular characteristics; and (ii) trainable models with well-matched inductive biases that output predictions based on these input subgraphs. Then, to address the dearth of suitable public benchmarks and reduce siloed comparisons, we assemble a diverse collection of (i) large-scale RDB datasets and (ii) coincident predictive tasks. From a delivery standpoint, we operationalize the above four dimensions (4D) of exploration within a unified, scalable open-source toolbox called 4DBInfer; please see https://github.com/awslabs/multi-table-benchmark .
4DBInfer: A 4D Benchmarking Toolbox for Graph-Centric Predictive Modeling on RDBs
[ "Minjie Wang", "Quan Gan", "David Wipf", "Zheng Zhang", "Christos Faloutsos", "Weinan Zhang", "Muhan Zhang", "Zhenkun Cai", "Jiahang Li", "Zunyao Mao", "Yakun Song", "Jianheng Tang", "Yanlin Zhang", "Guang Yang", "Chuan Lei", "Xiao Qin", "Ning Li", "Han Zhang", "Yanbo Wang", "Zizhao Zhang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=YMAU2kJgzY
@inproceedings{ cherian2024evaluating, title={Evaluating Large Vision-and-Language Models on Children's Mathematical Olympiads}, author={Anoop Cherian and Kuan-Chuan Peng and Suhas Lohit and Joanna Matthiesen and Kevin A. Smith and Joshua B. Tenenbaum}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=YMAU2kJgzY} }
Recent years have seen significant progress in the general-purpose problem solving abilities of large vision and language models (LVLMs), such as ChatGPT, Gemini, etc.; some of these breakthroughs even seem to enable AI models to outperform human abilities in varied tasks that demand higher-order cognitive skills. Are the current large AI models indeed capable of generalized problem solving as humans do? A systematic analysis of AI capabilities for joint vision and text reasoning, however, is missing in the current scientific literature. In this paper, we make an effort towards filling this gap, by evaluating state-of-the-art LVLMs on their mathematical and algorithmic reasoning abilities using visuo-linguistic problems from children's Olympiads. Specifically, we consider problems from the Mathematical Kangaroo (MK) Olympiad, which is a popular international competition targeted at children from grades 1-12, that tests children's deeper mathematical abilities using puzzles that are appropriately gauged to their age and skills. Using the puzzles from MK, we created a dataset, dubbed SMART-840, consisting of 840 problems from years 2020-2024. With our dataset, we analyze the mathematical reasoning power of LVLMs; their responses to our puzzles offer a direct way to compare against those of children. Our results show that modern LVLMs do demonstrate increasingly powerful reasoning skills in solving problems for higher grades, but lack the foundations to correctly answer problems designed for younger children. Further analysis shows that there is no significant correlation between the reasoning capabilities of AI models and those of young children, and their capabilities appear to be based on a different type of reasoning than the cumulative knowledge that underlies children's mathematical skills.
Evaluating Large Vision-and-Language Models on Children's Mathematical Olympiads
[ "Anoop Cherian", "Kuan-Chuan Peng", "Suhas Lohit", "Joanna Matthiesen", "Kevin A. Smith", "Joshua B. Tenenbaum" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.15736
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=YFUp7zMrM9
@inproceedings{ peddi2024captaincookd, title={CaptainCook4D: A Dataset for Understanding Errors in Procedural Activities}, author={Rohith Peddi and Shivvrat Arya and Bharath Challa and Likhitha Pallapothula and Akshay Vyas and Bhavya Gouripeddi and Qifan Zhang and Jikai Wang and Vasundhara Komaragiri and Eric Ragan and Nicholas Ruozzi and Yu Xiang and Vibhav Gogate}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=YFUp7zMrM9} }
Following step-by-step procedures is an essential component of various activities carried out by individuals in their daily lives. These procedures serve as a guiding framework that helps to achieve goals efficiently, whether it is assembling furniture or preparing a recipe. However, the complexity and duration of procedural activities inherently increase the likelihood of making errors. Understanding such procedural activities from a sequence of frames is a challenging task that demands an accurate interpretation of visual information and the ability to reason about the structure of the activity. To this end, we collect a new egocentric 4D dataset, CaptainCook4D, comprising 384 recordings (94.5 hours) of people performing recipes in real kitchen environments. This dataset consists of two distinct types of activity: one in which participants adhere to the provided recipe instructions and another in which they deviate and induce errors. We provide 5.3K step annotations and 10K fine-grained action annotations and benchmark the dataset for the following tasks: error recognition, multistep localization and procedure learning.
CaptainCook4D: A Dataset for Understanding Errors in Procedural Activities
[ "Rohith Peddi", "Shivvrat Arya", "Bharath Challa", "Likhitha Pallapothula", "Akshay Vyas", "Bhavya Gouripeddi", "Qifan Zhang", "Jikai Wang", "Vasundhara Komaragiri", "Eric Ragan", "Nicholas Ruozzi", "Yu Xiang", "Vibhav Gogate" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2312.14556
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=XvHdPiKy6c
@inproceedings{ dai2024safesora, title={SafeSora: Towards Safety Alignment of Text2Video Generation via a Human Preference Dataset}, author={Josef Dai and Tianle Chen and Xuyao Wang and Ziran Yang and Taiye Chen and Jiaming Ji and Yaodong Yang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=XvHdPiKy6c} }
To mitigate the risk of harmful outputs from large vision models (LVMs), we introduce the *SafeSora* dataset to promote research on aligning text-to-video generation with human values. This dataset encompasses human preferences in text-to-video generation tasks along two primary dimensions: helpfulness and harmlessness. To capture in-depth human preferences and facilitate structured reasoning by crowdworkers, we subdivide helpfulness into 4 sub-dimensions and harmlessness into 12 sub-categories, serving as the basis for pilot annotations. The *SafeSora* dataset includes 14,711 unique prompts, 57,333 unique videos generated by 4 distinct LVMs, and 51,691 pairs of preference annotations labeled by humans. We further demonstrate the utility of the *SafeSora* dataset through several applications, including training the text-video moderation model and aligning LVMs with human preference by fine-tuning a prompt augmentation module or the diffusion model. These applications highlight its potential as the foundation for text-to-video alignment research, such as human preference modeling and the development and validation of alignment algorithms. Our project is available at https://sites.google.com/view/safe-sora. Warning: this paper contains example data that may be offensive or harmful.
SafeSora: Towards Safety Alignment of Text2Video Generation via a Human Preference Dataset
[ "Josef Dai", "Tianle Chen", "Xuyao Wang", "Ziran Yang", "Taiye Chen", "Jiaming Ji", "Yaodong Yang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.14477
[ "https://github.com/pku-alignment/safe-sora" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=XukWe15QCi
@inproceedings{ chib2024pedestrian, title={Pedestrian Trajectory Prediction with Missing Data: Datasets, Imputation, and Benchmarking}, author={Pranav singh chib and Pravendra Singh}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=XukWe15QCi} }
Pedestrian trajectory prediction is crucial for several applications such as robotics and self-driving vehicles. Significant progress has been made in the past decade thanks to the availability of pedestrian trajectory datasets, which enable trajectory prediction methods to learn from pedestrians' past movements and predict future trajectories. However, these datasets and methods typically assume that the observed trajectory sequence is complete, ignoring real-world issues such as sensor failure, occlusion, and limited fields of view that can result in missing values in observed trajectories. To address this challenge, we present TrajImpute, a pedestrian trajectory prediction dataset that simulates missing coordinates in the observed trajectory, enhancing real-world applicability. TrajImpute maintains a uniform distribution of missing data within the observed trajectories. In this work, we comprehensively examine several imputation methods to reconstruct the missing coordinates and benchmark them for imputing pedestrian trajectories. Furthermore, we provide a thorough analysis of recent trajectory prediction methods and evaluate the performance of these models on the imputed trajectories. Our experimental evaluation of the imputation and trajectory prediction methods offers several valuable insights. Our dataset provides a foundational resource for future research on imputation-aware pedestrian trajectory prediction, potentially accelerating the deployment of these methods in real-world applications. Publicly accessible links to the datasets and code files are available at https://github.com/Pranav-chib/TrajImpute.
Pedestrian Trajectory Prediction with Missing Data: Datasets, Imputation, and Benchmarking
[ "Pranav singh chib", "Pravendra Singh" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2411.00174
[ "https://github.com/pranav-chib/trajimpute" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
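As a hedged illustration of the imputation setting described above (and only a naive baseline, not one of the methods benchmarked in the paper), missing coordinates in an observed trajectory can be filled by linear interpolation:

```python
import numpy as np
import pandas as pd

# Toy observed trajectory; NaN marks missing (x, y) coordinates (synthetic values).
traj = pd.DataFrame({
    "x": [0.0, 1.0, np.nan, 3.1, np.nan, 5.0],
    "y": [0.0, 0.9, np.nan, 2.8, np.nan, 5.2],
})

# Naive baseline: linear interpolation between the nearest observed coordinates.
imputed = traj.interpolate(method="linear", limit_direction="both")
print(imputed)
```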
null
https://openreview.net/forum?id=XrKhwfPmyI
@inproceedings{ kweon2024ehrnoteqa, title={{EHRN}ote{QA}: An {LLM} Benchmark for Real-World Clinical Practice Using Discharge Summaries}, author={Sunjun Kweon and Jiyoun Kim and Heeyoung Kwak and Dongchul Cha and Hangyul Yoon and Kwang Hyun Kim and Jeewon Yang and Seunghyun Won and Edward Choi}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=XrKhwfPmyI} }
Discharge summaries in Electronic Health Records (EHRs) are crucial for clinical decision-making, but their length and complexity make information extraction challenging, especially when dealing with accumulated summaries across multiple patient admissions. Large Language Models (LLMs) show promise in addressing this challenge by efficiently analyzing vast and complex data. Existing benchmarks, however, fall short in properly evaluating LLMs' capabilities in this context, as they typically focus on single-note information or limited topics, failing to reflect the real-world inquiries required by clinicians. To bridge this gap, we introduce EHRNoteQA, a novel benchmark built on the MIMIC-IV EHR, comprising 962 different QA pairs, each linked to distinct patients' discharge summaries. Every QA pair is initially generated using GPT-4 and then manually reviewed and refined by three clinicians to ensure clinical relevance. EHRNoteQA includes questions that require information across multiple discharge summaries and covers eight diverse topics, mirroring the complexity and diversity of real clinical inquiries. We offer EHRNoteQA in two formats: open-ended and multi-choice question answering, and propose a reliable evaluation method for each. We evaluate 27 LLMs using EHRNoteQA and examine various factors affecting the model performance (e.g., the length and number of discharge summaries). Furthermore, to validate EHRNoteQA as a reliable proxy for expert evaluations in clinical practice, we measure the correlation between the LLM performance on EHRNoteQA and the LLM performance manually evaluated by clinicians. Results show that LLM performance on EHRNoteQA has a higher correlation with clinician-evaluated performance (Spearman: 0.78, Kendall: 0.62) compared to other benchmarks, demonstrating its practical relevance in evaluating LLMs in clinical settings. EHRNoteQA will be publicly available to support further research and improve LLM evaluation in clinical practice. EHRNoteQA is publicly available under PhysioNet credential access at https://doi.org/10.13026/acga-ht95, and the code is available at https://github.com/ji-youn-kim/EHRNoteQA.
EHRNoteQA: An LLM Benchmark for Real-World Clinical Practice Using Discharge Summaries
[ "Sunjun Kweon", "Jiyoun Kim", "Heeyoung Kwak", "Dongchul Cha", "Hangyul Yoon", "Kwang Hyun Kim", "Jeewon Yang", "Seunghyun Won", "Edward Choi" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2402.16040
[ "https://github.com/ji-youn-kim/ehrnoteqa" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=XmyxQaTyck
@inproceedings{ hesse2024benchmarking, title={Benchmarking the Attribution Quality of Vision Models}, author={Robin Hesse and Simone Schaub-Meyer and Stefan Roth}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=XmyxQaTyck} }
Attribution maps are one of the most established tools to explain the functioning of computer vision models. They assign importance scores to input features, indicating how relevant each feature is for the prediction of a deep neural network. While much research has gone into proposing new attribution methods, their proper evaluation remains a difficult challenge. In this work, we propose a novel evaluation protocol that overcomes two fundamental limitations of the widely used incremental-deletion protocol, i.e., the out-of-domain issue and lacking inter-model comparisons. This allows us to evaluate 23 attribution methods and how different design choices of popular vision backbones affect their attribution quality. We find that intrinsically explainable models outperform standard models and that raw attribution values exhibit a higher attribution quality than what is known from previous work. Further, we show consistent changes in the attribution quality when varying the network design, indicating that some standard design choices promote attribution quality.
Benchmarking the Attribution Quality of Vision Models
[ "Robin Hesse", "Simone Schaub-Meyer", "Stefan Roth" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.11910
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=XZYUdhMvjL
@inproceedings{ bukas2024multiorg, title={MultiOrg: A Multi-rater Organoid-detection Dataset}, author={Christina Bukas and Harshavardhan Subramanian and Fenja See and Carina Steinchen and Ivan Ezhov and Gowtham Boosarpu and Sara Asgharpour and Gerald Burgstaller and Mareike Lehmann and Florian Kofler and Marie Piraud}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=XZYUdhMvjL} }
High-throughput image analysis in the biomedical domain has gained significant attention in recent years, driving advancements in drug discovery, disease prediction, and personalized medicine. Organoids, specifically, are an active area of research, providing excellent models for human organs and their functions. Automating the quantification of organoids in microscopy images would provide an effective solution to overcome substantial manual quantification bottlenecks, particularly in high-throughput image analysis. However, there is a notable lack of open biomedical datasets, in contrast to other domains, such as autonomous driving, and, notably, only a few of them have attempted to quantify annotation uncertainty. In this work, we present MultiOrg, a comprehensive organoid dataset tailored for object detection tasks with uncertainty quantification. This dataset comprises over 400 high-resolution 2D microscopy images and curated annotations of more than 60,000 organoids. Most importantly, it includes three label sets for the test data, independently annotated by two experts at distinct time points. We additionally provide a benchmark for organoid detection, and make the best model available through an easily installable, interactive plugin for the popular image visualization tool Napari, to perform organoid quantification.
MultiOrg: A Multi-rater Organoid-detection Dataset
[ "Christina Bukas", "Harshavardhan Subramanian", "Fenja See", "Carina Steinchen", "Ivan Ezhov", "Gowtham Boosarpu", "Sara Asgharpour", "Gerald Burgstaller", "Mareike Lehmann", "Florian Kofler", "Marie Piraud" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.14612
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=XXaIoJyYs7
@inproceedings{ wu2024medjourney, title={MedJourney: Benchmark and Evaluation of Large Language Models over Patient Clinical Journey}, author={Xian Wu and Yutian Zhao and Yunyan Zhang and Jiageng Wu and Zhihong Zhu and Yingying Zhang and Yi Ouyang and Ziheng Zhang and Huimin WANG and Zhenxi Lin and Jie Yang and Shuang Zhao and Yefeng Zheng}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=XXaIoJyYs7} }
Large language models (LLMs) have demonstrated remarkable capabilities in language understanding and generation, leading to their widespread adoption across various fields. Among these, the medical field is particularly well-suited for LLM applications, as many medical tasks can be enhanced by LLMs. Despite the existence of benchmarks for evaluating LLMs in medical question-answering and exams, there remains a notable gap in assessing LLMs' performance in supporting patients throughout their entire hospital visit journey in real-world clinical practice. In this paper, we address this gap by dividing a typical patient's clinical journey into four stages: planning, access, delivery and ongoing care. For each stage, we introduce multiple tasks and corresponding datasets, resulting in a comprehensive benchmark comprising 12 datasets, of which five are newly introduced, and seven are constructed from existing datasets. This proposed benchmark facilitates a thorough evaluation of LLMs' effectiveness across the entire patient journey, providing insights into their practical application in clinical settings. Additionally, we evaluate three categories of LLMs against this benchmark: 1) proprietary LLM services such as GPT-4; 2) public LLMs like QWen; and 3) specialized medical LLMs, like HuatuoGPT2. Through this extensive evaluation, we aim to provide a better understanding of LLMs' performance in the medical domain, ultimately contributing to their more effective deployment in healthcare settings.
MedJourney: Benchmark and Evaluation of Large Language Models over Patient Clinical Journey
[ "Xian Wu", "Yutian Zhao", "Yunyan Zhang", "Jiageng Wu", "Zhihong Zhu", "Yingying Zhang", "Yi Ouyang", "Ziheng Zhang", "Huimin WANG", "Zhenxi Lin", "Jie Yang", "Shuang Zhao", "Yefeng Zheng" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=XOGosbxLrz
@inproceedings{ herde2024dopanim, title={dopanim: A Dataset of Doppelganger Animals with Noisy Annotations from Multiple Humans}, author={Marek Herde and Denis Huseljic and Lukas Rauch and Bernhard Sick}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=XOGosbxLrz} }
Human annotators typically provide annotated data for training machine learning models, such as neural networks. Yet, human annotations are subject to noise, impairing generalization performance. Methodological research on approaches counteracting noisy annotations requires corresponding datasets for a meaningful empirical evaluation. Consequently, we introduce a novel benchmark dataset, dopanim, consisting of about 15,750 animal images of 15 classes with ground truth labels. For approximately 10,500 of these images, 20 humans provided over 52,000 annotations with an accuracy of circa 67\%. Its key attributes include (1) the challenging task of classifying doppelganger animals, (2) human-estimated likelihoods as annotations, and (3) annotator metadata. We benchmark well-known multi-annotator learning approaches using seven variants of this dataset and outline further evaluation use cases such as learning beyond hard class labels and active learning. Our dataset and a comprehensive codebase are publicly available to emulate the data collection process and to reproduce all empirical results.
dopanim: A Dataset of Doppelganger Animals with Noisy Annotations from Multiple Humans
[ "Marek Herde", "Denis Huseljic", "Lukas Rauch", "Bernhard Sick" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.20950
[ "https://github.com/ies-research/multi-annotator-machine-learning" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=XBcStBjBIE
@inproceedings{ yuan2024chronomagicbench, title={ChronoMagic-Bench: A Benchmark for Metamorphic Evaluation of Text-to-Time-lapse Video Generation}, author={Shenghai Yuan and Jinfa Huang and Yongqi Xu and YaoYang Liu and Shaofeng Zhang and Yujun Shi and Rui-Jie Zhu and Xinhua Cheng and Jiebo Luo and Li Yuan}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=XBcStBjBIE} }
We propose a novel text-to-video (T2V) generation benchmark, *ChronoMagic-Bench*, to evaluate the temporal and metamorphic knowledge skills in time-lapse video generation of the T2V models (e.g. Sora and Lumiere). Compared to existing benchmarks that focus on visual quality and text relevance of generated videos, *ChronoMagic-Bench* focuses on the models’ ability to generate time-lapse videos with significant metamorphic amplitude and temporal coherence. The benchmark probes T2V models for their physics, biology, and chemistry capabilities, in a free-form text control. For these purposes, *ChronoMagic-Bench* introduces **1,649** prompts and real-world videos as references, categorized into four major types of time-lapse videos: biological, human creation, meteorological, and physical phenomena, which are further divided into 75 subcategories. This categorization ensures a comprehensive evaluation of the models’ capacity to handle diverse and complex transformations. To accurately align human preference on the benchmark, we introduce two new automatic metrics, MTScore and CHScore, to evaluate the videos' metamorphic attributes and temporal coherence. MTScore measures the metamorphic amplitude, reflecting the degree of change over time, while CHScore assesses the temporal coherence, ensuring the generated videos maintain logical progression and continuity. Based on the *ChronoMagic-Bench*, we conduct comprehensive manual evaluations of eighteen representative T2V models, revealing their strengths and weaknesses across different categories of prompts, providing a thorough evaluation framework that addresses current gaps in video generation research. More encouragingly, we create a large-scale *ChronoMagic-Pro* dataset, containing **460k** high-quality pairs of 720p time-lapse videos and detailed captions. Each caption ensures high physical content and large metamorphic amplitude, which have a far-reaching impact on the video generation community. The source data and code are publicly available on [https://pku-yuangroup.github.io/ChronoMagic-Bench](https://pku-yuangroup.github.io/ChronoMagic-Bench).
ChronoMagic-Bench: A Benchmark for Metamorphic Evaluation of Text-to-Time-lapse Video Generation
[ "Shenghai Yuan", "Jinfa Huang", "Yongqi Xu", "YaoYang Liu", "Shaofeng Zhang", "Yujun Shi", "Rui-Jie Zhu", "Xinhua Cheng", "Jiebo Luo", "Li Yuan" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2406.18522
[ "https://github.com/pku-yuangroup/chronomagic-bench" ]
https://huggingface.co/papers/2406.18522
7
40
2
10
[ "BestWishYsh/ChronoMagic-Bench" ]
[ "BestWishYsh/ChronoMagic-ProH", "BestWishYsh/ChronoMagic-Bench", "BestWishYsh/ChronoMagic-Pro" ]
[ "BestWishYsh/ChronoMagic-Bench" ]
[ "BestWishYsh/ChronoMagic-Bench" ]
[ "BestWishYsh/ChronoMagic-ProH", "BestWishYsh/ChronoMagic-Bench", "BestWishYsh/ChronoMagic-Pro" ]
[ "BestWishYsh/ChronoMagic-Bench" ]
1
null
https://openreview.net/forum?id=X90tyXDe8z
@inproceedings{ rutherford2024jaxmarl, title={Jax{MARL}: Multi-Agent {RL} Environments and Algorithms in {JAX}}, author={Alexander Rutherford and Benjamin Ellis and Matteo Gallici and Jonathan Cook and Andrei Lupu and Gar{\dh}ar Ingvarsson and Timon Willi and Ravi Hammond and Akbir Khan and Christian Schroeder de Witt and Alexandra Souly and Saptarashmi Bandyopadhyay and Mikayel Samvelyan and Minqi Jiang and Robert Tjarko Lange and Shimon Whiteson and Bruno Lacerda and Nick Hawes and Tim Rockt{\"a}schel and Chris Lu and Jakob Nicolaus Foerster}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=X90tyXDe8z} }
Benchmarks are crucial in the development of machine learning algorithms, significantly influencing reinforcement learning (RL) research through the available environments. Traditionally, RL environments run on the CPU, which limits their scalability with the computational resources typically available in academia. However, recent advancements in JAX have enabled the wider use of hardware acceleration, enabling massively parallel RL training pipelines and environments. While this has been successfully applied to single-agent RL, it has not yet been widely adopted for multi-agent scenarios. In this paper, we present JaxMARL, the first open-source, easy-to-use code base that combines GPU-enabled efficiency with support for a large number of commonly used MARL environments and popular baseline algorithms. Our experiments show that, in terms of wall clock time, our JAX-based training pipeline is up to 12,500 times faster than existing approaches. This enables efficient and thorough evaluations, potentially alleviating the evaluation crisis in the field. We also introduce and benchmark SMAX, a vectorised, simplified version of the popular StarCraft Multi-Agent Challenge, which removes the need to run the StarCraft II game engine. This not only enables GPU acceleration, but also provides a more flexible MARL environment, unlocking the potential for self-play, meta-learning, and other future applications in MARL. The code is available at https://github.com/flairox/jaxmarl.
JaxMARL: Multi-Agent RL Environments and Algorithms in JAX
[ "Alexander Rutherford", "Benjamin Ellis", "Matteo Gallici", "Jonathan Cook", "Andrei Lupu", "Garðar Ingvarsson", "Timon Willi", "Ravi Hammond", "Akbir Khan", "Christian Schroeder de Witt", "Alexandra Souly", "Saptarashmi Bandyopadhyay", "Mikayel Samvelyan", "Minqi Jiang", "Robert Tjarko Lange", "Shimon Whiteson", "Bruno Lacerda", "Nick Hawes", "Tim Rocktäschel", "Chris Lu", "Jakob Nicolaus Foerster" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2311.10090
[ "https://github.com/flairox/jaxmarl" ]
https://huggingface.co/papers/2311.10090
9
6
0
20
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=X8ItT6mGKF
@inproceedings{ fent2024man, title={{MAN} TruckScenes: A multimodal dataset for autonomous trucking in diverse conditions}, author={Felix Sebastian Fent and Fabian Kuttenreich and Florian Ruch and Farija Rizwin and Stefan Juergens and Lorenz Lechermann and Christian Nissler and Andrea Perl and Ulrich Voll and Min Yan and Markus Lienkamp}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=X8ItT6mGKF} }
Autonomous trucking is a promising technology that can greatly impact modern logistics and the environment. Ensuring its safety on public roads is one of the main duties that requires an accurate perception of the environment. To achieve this, machine learning methods rely on large datasets, but to this day, no such datasets are available for autonomous trucks. In this work, we present MAN TruckScenes, the first multimodal dataset for autonomous trucking. MAN TruckScenes allows the research community to come into contact with truck-specific challenges, such as trailer occlusions, novel sensor perspectives, and terminal environments for the first time. It comprises more than 740 scenes of 20 s each within a multitude of different environmental conditions. The sensor set includes 4 cameras, 6 lidar, 6 radar sensors, 2 IMUs, and a high-precision GNSS. The dataset's 3D bounding boxes were manually annotated and carefully reviewed to achieve a high quality standard. Bounding boxes are available for 27 object classes, 15 attributes, and a range of more than 230 m. The scenes are tagged according to 34 distinct scene tags, and all objects are tracked throughout the scene to promote a wide range of applications. Additionally, MAN TruckScenes is the first dataset to provide 4D radar data with 360° coverage and is thereby the largest radar dataset with annotated 3D bounding boxes. Finally, we provide extensive dataset analysis and baseline results. The dataset, development kit, and more will be available online.
MAN TruckScenes: A multimodal dataset for autonomous trucking in diverse conditions
[ "Felix Sebastian Fent", "Fabian Kuttenreich", "Florian Ruch", "Farija Rizwin", "Stefan Juergens", "Lorenz Lechermann", "Christian Nissler", "Andrea Perl", "Ulrich Voll", "Min Yan", "Markus Lienkamp" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.07462
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=X4nq0W2qZX
@inproceedings{ vu2024mmcows, title={MmCows: A Multimodal Dataset for Dairy Cattle Monitoring}, author={Hien Vu and Omkar Prabhune and Unmesh Raskar and Dimuth Panditharatne and Hanwook Chung and Christopher Choi and Younghyun Kim}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=X4nq0W2qZX} }
Precision livestock farming (PLF) has been transformed by machine learning (ML), enabling more precise and timely interventions that enhance overall farm productivity, animal welfare, and environmental sustainability. However, despite the availability of various sensing technologies, few datasets leverage multiple modalities, which are crucial for developing more accurate and efficient monitoring devices and ML models. To address this gap, we present MmCows, a multimodal dataset for dairy cattle monitoring. This dataset comprises a large amount of synchronized, high-quality measurement data on behavioral, physiological, and environmental factors. It includes two weeks of data collected using wearable and implantable sensors deployed on ten milking Holstein cows, such as ultra-wideband (UWB) sensors, inertial sensors, and body temperature sensors. In addition, it features 4.8 million frames of high-resolution image sequences from four isometric view cameras, as well as temperature and humidity data from environmental sensors. We also gathered milk yield data and outdoor weather conditions. One full day’s worth of image data is annotated as ground truth, totaling 20,000 frames with 213,000 bounding boxes of 16 cows, along with their 3D locations and behavior labels. An extensive analysis of MmCows is provided to evaluate the modalities individually and their complementary benefits. The release of MmCows and its benchmarks will facilitate research on multimodal monitoring of dairy cattle, thereby promoting sustainable dairy farming. The dataset and the code for benchmarks are available at https://github.com/neis-lab/mmcows.
MmCows: A Multimodal Dataset for Dairy Cattle Monitoring
[ "Hien Vu", "Omkar Prabhune", "Unmesh Raskar", "Dimuth Panditharatne", "Hanwook Chung", "Christopher Choi", "Younghyun Kim" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=X4KImMSIRq
@inproceedings{ jim{\'e}nez-s{\'a}nchez2024copycats, title={Copycats: the many lives of a publicly available medical imaging dataset}, author={Amelia Jim{\'e}nez-S{\'a}nchez and Natalia-Rozalia Avlona and Dovile Juodelyte and Th{\'e}o Sourget and Caroline Vang-Larsen and Anna Rogers and Hubert Dariusz Zaj{\k{a}}c and Veronika Cheplygina}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=X4KImMSIRq} }
Medical Imaging (MI) datasets are fundamental to artificial intelligence in healthcare. The accuracy, robustness, and fairness of diagnostic algorithms depend on the data (and its quality) used to train and evaluate the models. MI datasets used to be proprietary, but have become increasingly available to the public, including on community-contributed platforms (CCPs) like Kaggle or HuggingFace. While open data is important to enhance the redistribution of data's public value, we find that the current CCP governance model fails to uphold the quality needed and recommended practices for sharing, documenting, and evaluating datasets. In this paper, we conduct an analysis of publicly available machine learning datasets on CCPs, discussing datasets' context, and identifying limitations and gaps in the current CCP landscape. We highlight differences between MI and computer vision datasets, particularly in the potentially harmful downstream effects from poor adoption of recommended dataset management practices. We compare the analyzed datasets across several dimensions, including data sharing, data documentation, and maintenance. We find vague licenses, lack of persistent identifiers and storage, duplicates, and missing metadata, with differences between the platforms. Our research contributes to efforts in responsible data curation and AI algorithms for healthcare.
Copycats: the many lives of a publicly available medical imaging dataset
[ "Amelia Jiménez-Sánchez", "Natalia-Rozalia Avlona", "Dovile Juodelyte", "Théo Sourget", "Caroline Vang-Larsen", "Anna Rogers", "Hubert Dariusz Zając", "Veronika Cheplygina" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2402.06353
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=WffhOhYvZ0
@inproceedings{ li2024solarcube, title={SolarCube: An Integrative Benchmark Dataset Harnessing Satellite and In-situ Observations for Large-scale Solar Energy Forecasting}, author={Ruohan Li and Yiqun Xie and Xiaowei Jia and Dongdong Wang and Yanhua Li and Yingxue Zhang and Zhihao Wang and Zhili Li}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=WffhOhYvZ0} }
Solar power is a critical source of renewable energy, offering significant potential to lower greenhouse gas emissions and mitigate climate change. However, the cloud-induced variability of solar radiation reaching the earth’s surface presents a challenge for integrating solar power into the grid (e.g., storage and backup management). The new generation of geostationary satellites such as GOES-16 has become an important data source for large-scale and high temporal frequency solar radiation forecasting. However, no machine-learning-ready dataset has integrated geostationary satellite data with fine-grained solar radiation information to support forecasting model development and benchmarking with consistent metrics. We present SolarCube, a new ML-ready benchmark dataset for solar radiation forecasting. SolarCube covers 19 study areas distributed over multiple continents: North America, South America, Asia, and Oceania. The dataset supports short (i.e., 30 minutes to 6 hours) and long-term (i.e., day-ahead or longer) solar radiation forecasting at both point-level (i.e., specific locations of monitoring stations) and area-level, by processing and integrating data from multiple sources, including geostationary satellite images, physics-derived solar radiation, and ground station observations from different monitoring networks over the globe. We also evaluated a set of forecasting models for point- and image-based time-series data to develop performance benchmarks under different testing scenarios. The dataset is available at https://doi.org/10.5281/zenodo.11498739. A Python library is available to conveniently generate different variations of the dataset based on user needs, along with baseline models at https://github.com/Ruohan-Li/SolarCube.
SolarCube: An Integrative Benchmark Dataset Harnessing Satellite and In-situ Observations for Large-scale Solar Energy Forecasting
[ "Ruohan Li", "Yiqun Xie", "Xiaowei Jia", "Dongdong Wang", "Yanhua Li", "Yingxue Zhang", "Zhihao Wang", "Zhili Li" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=WUWHVN4gxk
@inproceedings{ debenedetti2024dataset, title={Dataset and Lessons Learned from the 2024 Sa{TML} {LLM} Capture-the-Flag Competition}, author={Edoardo Debenedetti and Javier Rando and Daniel Paleka and Silaghi Fineas Florin and Dragos Albastroiu and Niv Cohen and Yuval Lemberg and Reshmi Ghosh and Rui Wen and Ahmed Salem and Giovanni Cherubin and Santiago Zanella-Beguelin and Robin Schmid and Victor Klemm and Takahiro Miki and Chenhao Li and Stefan Kraft and Mario Fritz and Florian Tram{\`e}r and Sahar Abdelnabi and Lea Sch{\"o}nherr}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=WUWHVN4gxk} }
Large language model systems face significant security risks from maliciously crafted messages that aim to overwrite the system's original instructions or leak private data. To study this problem, we organized a capture-the-flag competition at IEEE SaTML 2024, where the flag is a secret string in the LLM system prompt. The competition was organized in two phases. In the first phase, teams developed defenses to prevent the model from leaking the secret. During the second phase, teams were challenged to extract the secrets hidden by the defenses proposed by the other teams. This report summarizes the main insights from the competition. Notably, we found that all defenses were bypassed at least once, highlighting the difficulty of designing a successful defense and the necessity for additional research to protect LLM systems. To foster future research in this direction, we compiled a dataset with over 137k multi-turn attack chats and open-sourced the platform.
Dataset and Lessons Learned from the 2024 SaTML LLM Capture-the-Flag Competition
[ "Edoardo Debenedetti", "Javier Rando", "Daniel Paleka", "Silaghi Fineas Florin", "Dragos Albastroiu", "Niv Cohen", "Yuval Lemberg", "Reshmi Ghosh", "Rui Wen", "Ahmed Salem", "Giovanni Cherubin", "Santiago Zanella-Beguelin", "Robin Schmid", "Victor Klemm", "Takahiro Miki", "Chenhao Li", "Stefan Kraft", "Mario Fritz", "Florian Tramèr", "Sahar Abdelnabi", "Lea Schönherr" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2406.07954
[ "https://github.com/ethz-spylab/ctf-satml24-data-analysis" ]
https://huggingface.co/papers/2406.07954
3
2
0
21
[]
[ "ethz-spylab/ctf-satml24" ]
[]
[]
[ "ethz-spylab/ctf-satml24" ]
[]
1
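The record above lists the competition chat data under the Hugging Face dataset id `ethz-spylab/ctf-satml24`. As a minimal, hedged sketch (not code from the paper), one way to inspect and load it with the standard `datasets` library is shown below; the configuration and split names are not given in this record, so they are discovered at runtime rather than assumed.

```python
# Minimal sketch (assumption: the repo id from the record above is loadable with
# the standard `datasets` library; config/split names are discovered, not assumed).
from datasets import get_dataset_config_names, load_dataset

repo_id = "ethz-spylab/ctf-satml24"

configs = get_dataset_config_names(repo_id)  # list available configurations
print(configs)

ds = load_dataset(repo_id, configs[0])       # load the first configuration
print(ds)                                    # DatasetDict showing its splits and columns
```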
null
https://openreview.net/forum?id=WTI4RJYSVm
@inproceedings{ sza{\l}ata2024a, title={A benchmark for prediction of transcriptomic responses to chemical perturbations across cell types}, author={Artur Sza{\l}ata and Andrew Benz and Robrecht Cannoodt and Mauricio Cortes and Jason Fong and Sunil Kuppasani and Richard Lieberman and Tianyu Liu and Javier A. Mas-Rosario and Rico Meinl and Jalil Nourisa and Jared Tumiel and Tin M. Tunjic and Mengbo Wang and Noah Weber and Hongyu Zhao and Benedict Anchang and Fabian J Theis and Malte D Luecken and Daniel B Burkhardt}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=WTI4RJYSVm} }
Single-cell transcriptomics has revolutionized our understanding of cellular heterogeneity and drug perturbation effects. However, its high cost and the vast chemical space of potential drugs present barriers to experimentally characterizing the effect of chemical perturbations in all the myriad cell types of the human body. To overcome these limitations, several groups have proposed using machine learning methods to directly predict the effect of chemical perturbations either across cell contexts or chemical space. However, advances in this field have been hindered by a lack of well-designed evaluation datasets and benchmarks. To drive innovation in perturbation modeling, the Open Problems Perturbation Prediction (OP3) benchmark introduces a framework for predicting the effects of small molecule perturbations on cell type-specific gene expression. OP3 leverages the Open Problems in Single-cell Analysis benchmarking infrastructure and is enabled by a new single-cell perturbation dataset, encompassing 146 compounds tested on human blood cells. The benchmark includes diverse data representations, evaluation metrics, and winning methods from our "Single-cell perturbation prediction: generalizing experimental interventions to unseen contexts" competition at NeurIPS 2023. We envision that the OP3 benchmark and competition will drive innovation in single-cell perturbation prediction by improving the accessibility, visibility, and feasibility of this challenge, thereby promoting the impact of machine learning in drug discovery.
A benchmark for prediction of transcriptomic responses to chemical perturbations across cell types
[ "Artur Szałata", "Andrew Benz", "Robrecht Cannoodt", "Mauricio Cortes", "Jason Fong", "Sunil Kuppasani", "Richard Lieberman", "Tianyu Liu", "Javier A. Mas-Rosario", "Rico Meinl", "Jalil Nourisa", "Jared Tumiel", "Tin M. Tunjic", "Mengbo Wang", "Noah Weber", "Hongyu Zhao", "Benedict Anchang", "Fabian J Theis", "Malte D Luecken", "Daniel B Burkhardt" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=WRxFVzx0uG
@inproceedings{ ashton2024windsorml, title={Windsor{ML}: High-Fidelity Computational Fluid Dynamics Dataset For Automotive Aerodynamics}, author={Neil Ashton and Jordan B. Angel and Aditya Ghate and Gaetan Kenway and Man Long Wong and Cetin C. Kiris and Astrid Walle and Danielle C. Maddix and Gary Page}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=WRxFVzx0uG} }
This paper presents a new open-source high-fidelity dataset for Machine Learning (ML) containing 355 geometric variants of the Windsor body, to help the development and testing of ML surrogate models for external automotive aerodynamics. Each Computational Fluid Dynamics (CFD) simulation was run with GPU-native high-fidelity Wall-Modeled Large-Eddy Simulations (WMLES) using a Cartesian immersed-boundary method with more than 280M cells to ensure the greatest possible accuracy. The dataset contains geometry variants that exhibit a wide range of flow characteristics that are representative of those observed on road-cars. The dataset itself contains the 3D time-averaged volume \& boundary data as well as the geometry and force \& moment coefficients. This paper discusses the validation of the underlying CFD methods as well as the contents and structure of the dataset. To the authors' knowledge, this represents the first large-scale high-fidelity CFD dataset for the Windsor body with a permissive open-source license (CC-BY-SA).
WindsorML: High-Fidelity Computational Fluid Dynamics Dataset For Automotive Aerodynamics
[ "Neil Ashton", "Jordan B. Angel", "Aditya Ghate", "Gaetan Kenway", "Man Long Wong", "Cetin C. Kiris", "Astrid Walle", "Danielle C. Maddix", "Gary Page" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.19320
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=WMzQIP70O0
@inproceedings{ miranda2024bivlc, title={Bi{VLC}: Extending Vision-Language Compositionality Evaluation with Text-to-Image Retrieval}, author={Imanol Miranda and Ander Salaberria and Eneko Agirre and Gorka Azkune}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=WMzQIP70O0} }
Existing Vision-Language Compositionality (VLC) benchmarks like SugarCrepe are formulated as image-to-text retrieval problems, where, given an image, the models need to select between the correct textual description and a synthetic hard negative text. In this work, we present the Bidirectional Vision-Language Compositionality (BiVLC) dataset. The novelty of BiVLC is to add a synthetic hard negative image generated from the synthetic text, resulting in two image-to-text retrieval examples (one for each image) and, more importantly, two text-to-image retrieval examples (one for each text). Human annotators filter out ill-formed examples ensuring the validity of the benchmark. The experiments on BiVLC uncover a weakness of current multimodal models, as they perform poorly in the text-to-image direction. In fact, when considering both retrieval directions, the conclusions obtained in previous works change significantly. In addition to the benchmark, we show that a contrastive model trained using synthetic images and texts significantly improves over the base model in SugarCrepe and in BiVLC for both retrieval directions. The gap to human performance in BiVLC confirms that Vision-Language Compositionality is still a challenging problem.
BiVLC: Extending Vision-Language Compositionality Evaluation with Text-to-Image Retrieval
[ "Imanol Miranda", "Ander Salaberria", "Eneko Agirre", "Gorka Azkune" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.09952
[ "https://github.com/imirandam/bivlc" ]
https://huggingface.co/papers/2406.09952
0
0
0
4
[ "imirandam/CLIP_TROHN-Img", "imirandam/CLIP_TROHN-Text", "imirandam/CLIP_COCO", "imirandam/CLIP_Detector", "imirandam/CLIP_TROHN-Img_Detector" ]
[ "imirandam/BiVLC", "imirandam/TROHN-Img", "imirandam/TROHN-Text" ]
[]
[ "imirandam/CLIP_TROHN-Img", "imirandam/CLIP_TROHN-Text", "imirandam/CLIP_COCO", "imirandam/CLIP_Detector", "imirandam/CLIP_TROHN-Img_Detector" ]
[ "imirandam/BiVLC", "imirandam/TROHN-Img", "imirandam/TROHN-Text" ]
[]
1
null
https://openreview.net/forum?id=WGoCZl2itU
@inproceedings{ wu2024clasheval, title={ClashEval: Quantifying the tug-of-war between an {LLM}{\textquoteright}s internal prior and external evidence}, author={Kevin Wu and Eric Wu and James Zou}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=WGoCZl2itU} }
Retrieval augmented generation (RAG) is frequently used to mitigate hallucinations and provide up-to-date knowledge for large language models (LLMs). However, given that document retrieval is an imprecise task and sometimes results in erroneous or even harmful content being presented in context, this raises the question of how LLMs handle retrieved information: If the provided content is incorrect, does the model know to ignore it, or does it recapitulate the error? Conversely, when the model's initial response is incorrect, does it always know to use the retrieved information to correct itself, or does it insist on its wrong prior response? To answer this, we curate a dataset of over 1200 questions across six domains (e.g., drug dosages, Olympic records, locations) along with content relevant to answering each question. We further apply precise perturbations to the answers in the content that range from subtle to blatant errors. We benchmark six top-performing LLMs, including GPT-4o, on this dataset and find that LLMs are susceptible to adopting incorrect retrieved content, overriding their own correct prior knowledge over 60\% of the time. However, the more unrealistic the retrieved content is (i.e., the more it deviates from the truth), the less likely the model is to adopt it. Also, the less confident a model is in its initial response (via measuring token probabilities), the more likely it is to adopt the information in the retrieved content. We exploit this finding and demonstrate simple methods for improving model accuracy where there is conflicting retrieved content. Our results highlight a difficult task and benchmark for LLMs -- namely, their ability to correctly discern when they are wrong in light of correct retrieved content and to reject cases where the provided content is incorrect. Our dataset, called ClashEval, and evaluations are open-sourced to allow for future benchmarking on top-performing models at https://github.com/kevinwu23/StanfordClashEval.
ClashEval: Quantifying the tug-of-war between an LLM’s internal prior and external evidence
[ "Kevin Wu", "Eric Wu", "James Zou" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=WEFxOm3Aez
@inproceedings{ robinson2024relbench, title={RelBench: A Benchmark for Deep Learning on Relational Databases}, author={Joshua Robinson and Rishabh Ranjan and Weihua Hu and Kexin Huang and Jiaqi Han and Alejandro Dobles and Matthias Fey and Jan Eric Lenssen and Yiwen Yuan and Zecheng Zhang and Xinwei He and Jure Leskovec}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=WEFxOm3Aez} }
We present RelBench, a public benchmark for solving predictive tasks in relational databases with deep learning. RelBench provides databases and tasks spanning diverse domains, scales, and database dimensions, and is intended to be a foundational infrastructure for future research in this direction. We use RelBench to conduct the first comprehensive empirical study of graph neural network (GNN) based predictive models on relational data, as recently proposed by Fey et al. 2024. End-to-end learned GNNs are capable of fully exploiting the predictive signal encoded in links between entities, marking a significant shift away from the dominant paradigm of manual feature engineering combined with tabular machine learning. To thoroughly evaluate GNNs against the prior gold-standard, we conduct a user study, where an experienced data scientist manually engineers features for each task. In this study, GNNs learn better models whilst reducing the human work needed by more than an order of magnitude. This result demonstrates the power of GNNs for solving predictive tasks in relational databases, opening up new research opportunities.
RelBench: A Benchmark for Deep Learning on Relational Databases
[ "Joshua Robinson", "Rishabh Ranjan", "Weihua Hu", "Kexin Huang", "Jiaqi Han", "Alejandro Dobles", "Matthias Fey", "Jan Eric Lenssen", "Yiwen Yuan", "Zecheng Zhang", "Xinwei He", "Jure Leskovec" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.20060
[ "https://github.com/snap-stanford/relbench-user-study" ]
https://huggingface.co/papers/2407.20060
5
7
3
12
[]
[ "relbench/results", "relbench/requests" ]
[ "relbench/leaderboard" ]
[]
[ "relbench/results", "relbench/requests" ]
[ "relbench/leaderboard" ]
1
null
https://openreview.net/forum?id=W8OZdhowxo
@inproceedings{ liu2024towards, title={Towards General Loop Invariant Generation: A Benchmark of Programs with Memory Manipulation}, author={Chang Liu and Xiwei Wu and Yuan Feng and Qinxiang Cao and Junchi Yan}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=W8OZdhowxo} }
Program verification is vital for ensuring software reliability, especially in the context of increasingly complex systems. Loop invariants, which remain true before and after each loop iteration, are crucial for this verification process. Traditional provers and machine learning based methods for generating loop invariants often require expert intervention or extensive labeled data, and typically only handle numerical property verification. These methods struggle with programs involving complex data structures and memory manipulations, limiting their applicability and automation capabilities. This paper introduces a new benchmark named LIG-MM, specifically for programs with complex data structures and memory manipulations. We collect 312 programs from various sources, including daily programs from college homework, the international competition (SV-COMP), benchmarks from previous papers (SLING), and programs from real-world software systems (Linux Kernel, GlibC, LiteOS, and Zephyr). Based on LIG-MM, our findings indicate that previous methods, including GPT-4, fail to automate verification for these programs. Consequently, we propose a novel LLM-SE framework that coordinates an LLM with symbolic execution, fine-tuned using self-supervised learning, to generate loop invariants. Experimental results on LIG-MM demonstrate that our LLM-SE outperforms state-of-the-art methods, offering a new direction toward automated program verification in real-world scenarios.
Towards General Loop Invariant Generation: A Benchmark of Programs with Memory Manipulation
[ "Chang Liu", "Xiwei Wu", "Yuan Feng", "Qinxiang Cao", "Junchi Yan" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2311.10483
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=W0FEprcxva
@inproceedings{ kajic2024evaluating, title={Evaluating Numerical Reasoning in Text-to-Image Models}, author={Ivana Kajic and Olivia Wiles and Isabela Albuquerque and Matthias Bauer and Su Wang and Jordi Pont-Tuset and Aida Nematzadeh}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=W0FEprcxva} }
Text-to-image generative models are capable of producing high-quality images that often faithfully depict concepts described using natural language. In this work, we comprehensively evaluate a range of text-to-image models on numerical reasoning tasks of varying difficulty, and show that even the most advanced models have only rudimentary numerical skills. Specifically, their ability to correctly generate an exact number of objects in an image is limited to small numbers, it is highly dependent on the context the number term appears in, and it deteriorates quickly with each successive number. We also demonstrate that models have poor understanding of linguistic quantifiers (such as “few” or “as many as”), the concept of zero, and struggle with more advanced concepts such as fractional representations. We bundle prompts, generated images and human annotations into GeckoNum, a novel benchmark for evaluation of numerical reasoning.
Evaluating Numerical Reasoning in Text-to-Image Models
[ "Ivana Kajic", "Olivia Wiles", "Isabela Albuquerque", "Matthias Bauer", "Su Wang", "Jordi Pont-Tuset", "Aida Nematzadeh" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.14774
[ "https://github.com/google-deepmind/geckonum_benchmark_t2i" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=VpkfxuVXwx
@inproceedings{ zhu2024privauditor, title={PrivAuditor: Benchmarking Data Protection Vulnerabilities in {LLM} Adaptation Techniques}, author={Derui Zhu and Dingfan Chen and Xiongfei Wu and Jiahui Geng and Zhuo Li and Jens Grossklags and Lei Ma}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=VpkfxuVXwx} }
Large Language Models (LLMs) are recognized for their potential to be an important building block toward achieving artificial general intelligence due to their unprecedented capability for solving diverse tasks. Despite these achievements, LLMs often underperform in domain-specific tasks without training on relevant domain data. This phenomenon, which is often attributed to distribution shifts, makes adapting pre-trained LLMs with domain-specific data crucial. However, this adaptation raises significant privacy concerns, especially when the data involved come from sensitive domains. In this work, we extensively investigate the privacy vulnerabilities of adapted (fine-tuned) LLMs and benchmark privacy leakage across a wide range of data modalities, state-of-the-art privacy attack methods, adaptation techniques, and model architectures. We systematically evaluate and pinpoint critical factors related to privacy leakage. With our organized codebase and actionable insights, we aim to provide a standardized auditing tool for practitioners seeking to deploy customized LLM applications with faithful privacy assessments.
PrivAuditor: Benchmarking Data Protection Vulnerabilities in LLM Adaptation Techniques
[ "Derui Zhu", "Dingfan Chen", "Xiongfei Wu", "Jiahui Geng", "Zhuo Li", "Jens Grossklags", "Lei Ma" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Vp6HAjrdIg
@inproceedings{ wu2024fiva, title={Fi{VA}: Fine-grained Visual Attribute Dataset for Text-to-Image Diffusion Models}, author={Tong Wu and Yinghao Xu and Ryan Po and Mengchen Zhang and Guandao Yang and Jiaqi Wang and Ziwei Liu and Dahua Lin and Gordon Wetzstein}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=Vp6HAjrdIg} }
Recent advances in text-to-image generation have enabled the creation of high-quality images with diverse applications. However, accurately describing desired visual attributes can be challenging, especially for non-experts in art and photography. An intuitive solution involves adopting favorable attributes from source images. Current methods attempt to distill identity and style from source images. However, "style" is a broad concept that includes texture, color, and artistic elements, but does not cover other important attributes like lighting and dynamics. Additionally, a simplified "style" adaptation prevents combining multiple attributes from different sources into one generated image. In this work, we formulate a more effective approach to decompose the aesthetics of a picture into specific visual attributes, letting users apply characteristics like lighting, texture, and dynamics from different images. To achieve this goal, we constructed, to the best of our knowledge, the first fine-grained visual attributes dataset (FiVA). This FiVA dataset features a well-organized taxonomy for visual attributes and includes 1M high-quality generated images with visual attribute annotations. Leveraging this dataset, we propose a fine-grained visual attributes adaptation framework (FiVA-Adapter), which decouples and adapts visual attributes from one or more source images into a generated one. This approach enhances user-friendly customization, allowing users to selectively apply desired attributes to create images that meet their unique preferences and specific content requirements.
FiVA: Fine-grained Visual Attribute Dataset for Text-to-Image Diffusion Models
[ "Tong Wu", "Yinghao Xu", "Ryan Po", "Mengchen Zhang", "Guandao Yang", "Jiaqi Wang", "Ziwei Liu", "Dahua Lin", "Gordon Wetzstein" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Vcw3vzjHDb
@inproceedings{ ying2024lean, title={Lean Workbook: A large-scale Lean problem set formalized from natural language math problems}, author={Huaiyuan Ying and Zijian Wu and Yihan Geng and JIayu Wang and Dahua Lin and Kai Chen}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=Vcw3vzjHDb} }
Large language models have demonstrated impressive capabilities across various natural language processing tasks, especially in solving mathematical problems. However, large language models are not good at math theorem proving using formal languages like Lean. A significant challenge in this area is the scarcity of training data available in these formal languages. To address this issue, we propose a novel pipeline that iteratively generates and filters synthetic data to translate natural language mathematical problems into Lean 4 statements, and vice versa. Our results indicate that the synthetic data pipeline can provide useful training data and improve the performance of LLMs in translating and understanding complex mathematical problems and proofs. Our final dataset contains about 57K formal-informal question pairs along with searched proof from the math contest forum and 21 new IMO questions. We open-source our code at \url{https://github.com/InternLM/InternLM-Math} and our data at \url{https://huggingface.co/datasets/InternLM/Lean-Workbook}.
Lean Workbook: A large-scale Lean problem set formalized from natural language math problems
[ "Huaiyuan Ying", "Zijian Wu", "Yihan Geng", "JIayu Wang", "Dahua Lin", "Kai Chen" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.03847
[ "https://github.com/internlm/internlm-math" ]
https://huggingface.co/papers/2406.03847
0
0
0
6
[]
[ "internlm/Lean-Workbook" ]
[]
[]
[ "internlm/Lean-Workbook" ]
[]
1
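The Lean Workbook record above points to the Hugging Face dataset id `internlm/Lean-Workbook`. A minimal sketch of loading it with the `datasets` library follows; this is not code from the paper, and the split and column names are assumptions to be verified by inspecting the printed output.

```python
# Minimal sketch (assumption: the repo id from the record above loads directly;
# split/column names are checked by printing rather than hard-coded).
from datasets import load_dataset

ds = load_dataset("internlm/Lean-Workbook")

print(ds)               # DatasetDict: shows available splits and column names
split = next(iter(ds))  # first split key, e.g. "train" (assumed)
print(ds[split][0])     # one formal/informal statement pair
```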
null
https://openreview.net/forum?id=Vb1vVr75JT
@inproceedings{ silberg2024unitox, title={UniTox: Leveraging {LLM}s to Curate a Unified Dataset of Drug-Induced Toxicity from {FDA} Labels}, author={Jake Silberg and Kyle Swanson and Elana Simon and Angela Zhang and Zaniar Ghazizadeh and Scott Ogden and Hisham Hamadeh and James Zou}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=Vb1vVr75JT} }
Drug-induced toxicity is one of the leading reasons new drugs fail clinical trials. Machine learning models that predict drug toxicity from molecular structure could help researchers prioritize less toxic drug candidates. However, current toxicity datasets are typically small and limited to a single organ system (e.g., cardio, renal, or liver). Creating these datasets often involved time-intensive expert curation by parsing drug labelling documents that can exceed 100 pages per drug. Here, we introduce UniTox, a unified dataset of 2,418 FDA-approved drugs with drug-induced toxicity summaries and ratings created by using GPT-4o to process FDA drug labels. UniTox spans eight types of toxicity: cardiotoxicity, liver toxicity, renal toxicity, pulmonary toxicity, hematological toxicity, dermatological toxicity, ototoxicity, and infertility. This is, to the best of our knowledge, the largest such systematic human in vivo database by number of drugs and toxicities, and the first covering nearly all non-combination FDA-approved medications for several of these toxicities. We recruited clinicians to validate a random sample of our GPT-4o annotated toxicities, and UniTox's toxicity ratings concord with clinician labelers 85-96\% of the time. Finally, we benchmark several machine learning models trained on UniTox to demonstrate the utility of this dataset for building molecular toxicity prediction models.
UniTox: Leveraging LLMs to Curate a Unified Dataset of Drug-Induced Toxicity from FDA Labels
[ "Jake Silberg", "Kyle Swanson", "Elana Simon", "Angela Zhang", "Zaniar Ghazizadeh", "Scott Ogden", "Hisham Hamadeh", "James Zou" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=VXohja0vrQ
@inproceedings{ khandekar2024medcalcbench, title={MedCalc-Bench: Evaluating Large Language Models for Medical Calculations}, author={Nikhil Khandekar and Qiao Jin and Guangzhi Xiong and Soren Dunn and Serina S Applebaum and Zain Anwar and Maame Sarfo-Gyamfi and Conrad W Safranek and Abid Anwar and Andrew Jiaxing Zhang and Aidan Gilson and Maxwell B Singer and Amisha D Dave and R. Andrew Taylor and Aidong Zhang and Qingyu Chen and Zhiyong Lu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=VXohja0vrQ} }
Current benchmarks for evaluating large language models (LLMs) in medicine are primarily focused on question-answering involving domain knowledge and descriptive reasoning. While such qualitative capabilities are vital to medical diagnosis, in real-world scenarios, doctors frequently use clinical calculators that follow quantitative equations and rule-based reasoning paradigms for evidence-based decision support. To this end, we propose MedCalc-Bench, a first-of-its-kind dataset focused on evaluating the medical calculation capability of LLMs. MedCalc-Bench contains an evaluation set of over 1000 manually reviewed instances from 55 different medical calculation tasks. Each instance in MedCalc-Bench consists of a patient note, a question requesting to compute a specific medical value, a ground truth answer, and a step-by-step explanation showing how the answer is obtained. While our evaluation results show the potential of LLMs in this area, none of them are effective enough for clinical settings. Common issues include extracting the incorrect entities, not using the correct equation or rules for a calculation task, or incorrectly performing the arithmetic for the computation. We hope our study highlights the quantitative knowledge and reasoning gaps in LLMs within medical settings, encouraging future improvements of LLMs for various clinical calculation tasks. MedCalc-Bench is publicly available at: https://github.com/ncbi-nlp/MedCalc-Bench.
MedCalc-Bench: Evaluating Large Language Models for Medical Calculations
[ "Nikhil Khandekar", "Qiao Jin", "Guangzhi Xiong", "Soren Dunn", "Serina S Applebaum", "Zain Anwar", "Maame Sarfo-Gyamfi", "Conrad W Safranek", "Abid Anwar", "Andrew Jiaxing Zhang", "Aidan Gilson", "Maxwell B Singer", "Amisha D Dave", "R. Andrew Taylor", "Aidong Zhang", "Qingyu Chen", "Zhiyong Lu" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2406.12036
[ "https://github.com/ncbi-nlp/medcalc-bench" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=VJuSeShdZA
@inproceedings{ hemmat2024hidden, title={Hidden in Plain Sight: Evaluating Abstract Shape Recognition in Vision-Language Models}, author={Arshia Hemmat and Adam Davies and Tom A. Lamb and Jianhao Yuan and Philip Torr and Ashkan Khakzar and Francesco Pinto}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=VJuSeShdZA} }
Despite the importance of shape perception in human vision, early neural image classifiers relied less on shape information for object recognition than other (often spurious) features. While recent research suggests that current large Vision-Language Models (VLMs) exhibit more reliance on shape, we find them to still be seriously limited in this regard. To quantify such limitations, we introduce IllusionBench, a dataset that challenges current cutting-edge VLMs to decipher shape information when the shape is represented by an arrangement of visual elements in a scene. Our extensive evaluations reveal that, while these shapes are easily detectable by human annotators, current VLMs struggle to recognize them, indicating important avenues for future work in developing more robust visual perception systems. The full dataset and codebase are available at: https://arshiahemmat.github.io/illusionbench/
Hidden in Plain Sight: Evaluating Abstract Shape Recognition in Vision-Language Models
[ "Arshia Hemmat", "Adam Davies", "Tom A. Lamb", "Jianhao Yuan", "Philip Torr", "Ashkan Khakzar", "Francesco Pinto" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2411.06287
[ "https://github.com/arshiahemmat/illusionbench" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=VHa0XNjWj2
@inproceedings{ maruf2024vlmbio, title={{VLM}4Bio: A Benchmark Dataset to Evaluate Pretrained Vision-Language Models for Trait Discovery from Biological Images}, author={M. Maruf and Arka Daw and Kazi Sajeed Mehrab and Harish Babu Manogaran and Abhilash Neog and Medha Sawhney and Mridul Khurana and James Balhoff and Yasin Bakis and Bahadir Altintas and Matthew J Thompson and Elizabeth G Campolongo and Josef Uyeda and Hilmar Lapp and Henry Bart and Paula Mabee and Yu Su and Wei-Lun Chao and Charles Stewart and Tanya Berger-Wolf and Wasila M Dahdul and Anuj Karpatne}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=VHa0XNjWj2} }
Images are increasingly becoming the currency for documenting biodiversity on the planet, providing novel opportunities for accelerating scientific discoveries in the field of organismal biology, especially with the advent of large vision-language models (VLMs). We ask if pre-trained VLMs can aid scientists in answering a range of biologically relevant questions without any additional fine-tuning. In this paper, we evaluate the effectiveness of $12$ state-of-the-art (SOTA) VLMs in the field of organismal biology using a novel dataset, VLM4Bio, consisting of $469K$ question-answer pairs involving $30K$ images from three groups of organisms: fishes, birds, and butterflies, covering five biologically relevant tasks. We also explore the effects of applying prompting techniques and tests for reasoning hallucination on the performance of VLMs, shedding new light on the capabilities of current SOTA VLMs in answering biologically relevant questions using images.
VLM4Bio: A Benchmark Dataset to Evaluate Pretrained Vision-Language Models for Trait Discovery from Biological Images
[ "M. Maruf", "Arka Daw", "Kazi Sajeed Mehrab", "Harish Babu Manogaran", "Abhilash Neog", "Medha Sawhney", "Mridul Khurana", "James Balhoff", "Yasin Bakis", "Bahadir Altintas", "Matthew J Thompson", "Elizabeth G Campolongo", "Josef Uyeda", "Hilmar Lapp", "Henry Bart", "Paula Mabee", "Yu Su", "Wei-Lun Chao", "Charles Stewart", "Tanya Berger-Wolf", "Wasila M Dahdul", "Anuj Karpatne" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2408.16176
[ "https://github.com/sammarfy/vlm4bio" ]
https://huggingface.co/papers/2408.16176
15
7
1
22
[]
[ "sammarfy/VLM4Bio", "imageomics/VLM4Bio" ]
[]
[]
[ "sammarfy/VLM4Bio", "imageomics/VLM4Bio" ]
[]
1
null
https://openreview.net/forum?id=UnWhcpIyUC
@inproceedings{ laine2024me, title={Me, Myself, and {AI}: The Situational Awareness Dataset ({SAD}) for {LLM}s}, author={Rudolf Laine and Bilal Chughtai and Jan Betley and Kaivalya Hariharan and Mikita Balesni and J{\'e}r{\'e}my Scheurer and Marius Hobbhahn and Alexander Meinke and Owain Evans}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=UnWhcpIyUC} }
AI assistants such as ChatGPT are trained to respond to users by saying, "I am a large language model". This raises questions. Do such models "know" that they are LLMs and reliably act on this knowledge? Are they "aware" of their current circumstances, such as being deployed to the public? We refer to a model's knowledge of itself and its circumstances as **situational awareness**. To quantify situational awareness in LLMs, we introduce a range of behavioral tests, based on question answering and instruction following. These tests form the **Situational Awareness Dataset (SAD)**, a benchmark comprising 7 task categories and over 13,000 questions. The benchmark tests numerous abilities, including the capacity of LLMs to (i) recognize their own generated text, (ii) predict their own behavior, (iii) determine whether a prompt is from internal evaluation or real-world deployment, and (iv) follow instructions that depend on self-knowledge. We evaluate 16 LLMs on SAD, including both base (pretrained) and chat models. While all models perform better than chance, even the highest-scoring model (Claude 3 Opus) is far from a human baseline on certain tasks. We also observe that performance on SAD is only partially predicted by metrics of general knowledge. Chat models, which are finetuned to serve as AI assistants, outperform their corresponding base models on SAD but not on general knowledge tasks. The purpose of SAD is to facilitate scientific understanding of situational awareness in LLMs by breaking it down into quantitative abilities. Situational awareness is important because it enhances a model's capacity for autonomous planning and action. While this has potential benefits from automation, it also introduces novel risks related to AI safety and control.
Me, Myself, and AI: The Situational Awareness Dataset (SAD) for LLMs
[ "Rudolf Laine", "Bilal Chughtai", "Jan Betley", "Kaivalya Hariharan", "Mikita Balesni", "Jérémy Scheurer", "Marius Hobbhahn", "Alexander Meinke", "Owain Evans" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.04694
[ "https://github.com/lrudl/sad" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=UZpySDOwvZ
@inproceedings{ yan2024df, title={{DF}40: Toward Next-Generation Deepfake Detection}, author={Zhiyuan Yan and Taiping Yao and Shen Chen and Yandan Zhao and Xinghe Fu and Junwei Zhu and Donghao Luo and Chengjie Wang and Shouhong Ding and Yunsheng Wu and Li Yuan}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=UZpySDOwvZ} }
We propose a new comprehensive benchmark to revolutionize the current deepfake detection field to the next generation. Predominantly, existing works identify top-notch detection algorithms and models by adhering to the common practice: training detectors on one specific dataset (*e.g.,* FF++) and testing them on other prevalent deepfake datasets. This protocol is often regarded as a "golden compass" for navigating SoTA detectors. But can these stand-out "winners" be truly applied to tackle the myriad of realistic and diverse deepfakes lurking in the real world? If not, what underlying factors contribute to this gap? In this work, we found the **dataset** (both train and test) can be the "primary culprit" due to the following: (1) *forgery diversity*: Deepfake techniques are commonly referred to as both face forgery (face-swapping and face-reenactment) and entire image synthesis (AIGC, especially face). Most existing datasets only contain partial types of them, with limited forgery methods implemented (*e.g.,* 2 swapping and 2 reenactment methods in FF++); (2) *forgery realism*: The dominant training dataset, FF++, contains out-of-date forgery techniques from the past four years. "Honing skills" on these forgeries makes it difficult to guarantee effective detection generalization toward today's SoTA deepfakes; (3) *evaluation protocol*: Most detection works perform evaluations on one type, *e.g.,* face-swapping types only, which hinders the development of universal deepfake detectors. To address this dilemma, we construct a highly diverse and large-scale deepfake detection dataset called **DF40**, which comprises **40** distinct deepfake techniques (10 times larger than FF++). We then conduct comprehensive evaluations using **4** standard evaluation protocols and **8** representative detection methods, resulting in over **2,000** evaluations. Through these evaluations, we provide an extensive analysis from various perspectives, leading to **7** new insightful findings contributing to the field. We also open up **4** valuable yet previously underexplored research questions to inspire future works. We release our dataset, code, and pre-trained weights at https://github.com/YZY-stack/DF40.
DF40: Toward Next-Generation Deepfake Detection
[ "Zhiyuan Yan", "Taiping Yao", "Shen Chen", "Yandan Zhao", "Xinghe Fu", "Junwei Zhu", "Donghao Luo", "Chengjie Wang", "Shouhong Ding", "Yunsheng Wu", "Li Yuan" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.13495
[ "https://github.com/YZY-stack/DF40" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=UYgE9IfQIV
@inproceedings{ naug2024sustaindc, title={Sustain{DC}: Benchmarking for Sustainable Data Center Control}, author={Avisek Naug and Antonio Guillen and Ricardo Luna Gutierrez and Vineet Gundecha and Cullen Bash and Sahand Ghorbanpour and Sajad Mousavi and Ashwin Ramesh Babu and Dejan Markovikj and Lekhapriya Dheeraj Kashyap and Desik Rengarajan and Soumyendu Sarkar}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=UYgE9IfQIV} }
Machine learning has driven an exponential increase in computational demand, leading to massive data centers that consume significant amounts of energy and contribute to climate change. This makes sustainable data center control a priority. In this paper, we introduce SustainDC, a set of Python environments for benchmarking multi-agent reinforcement learning (MARL) algorithms for data centers (DC). SustainDC supports custom DC configurations and tasks such as workload scheduling, cooling optimization, and auxiliary battery management, with multiple agents managing these operations while accounting for the effects of each other. We evaluate various MARL algorithms on SustainDC, showing their performance across diverse DC designs, locations, weather conditions, grid carbon intensity, and workload requirements. Our results highlight significant opportunities for improvement of data center operations using MARL algorithms. Given the increasing use of DC due to AI, SustainDC provides a crucial platform for the development and benchmarking of advanced algorithms essential for achieving sustainable computing and addressing other heterogeneous real-world challenges.
SustainDC: Benchmarking for Sustainable Data Center Control
[ "Avisek Naug", "Antonio Guillen", "Ricardo Luna Gutierrez", "Vineet Gundecha", "Cullen Bash", "Sahand Ghorbanpour", "Sajad Mousavi", "Ashwin Ramesh Babu", "Dejan Markovikj", "Lekhapriya Dheeraj Kashyap", "Desik Rengarajan", "Soumyendu Sarkar" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2408.07841
[ "https://github.com/hewlettpackard/dc-rl" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=USUkwg5pW6
@inproceedings{ boettcher2024scribbles, title={Scribbles for All: Benchmarking Scribble Supervised Segmentation Across Datasets}, author={Wolfgang Boettcher and Lukas Hoyer and Ozan Unal and Jan Eric Lenssen and Bernt Schiele}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=USUkwg5pW6} }
In this work, we introduce *Scribbles for All*, a label and training data generation algorithm for semantic segmentation trained on scribble labels. Training or fine-tuning semantic segmentation models with weak supervision has become an important topic recently and was subject to significant advances in model quality. In this setting, scribbles are a promising label type to achieve high quality segmentation results while requiring a much lower annotation effort than usual pixel-wise dense semantic segmentation annotations. The main limitation of scribbles as a source for weak supervision is the lack of challenging datasets for scribble segmentation, which hinders the development of novel methods and conclusive evaluations. To overcome this limitation, *Scribbles for All* provides scribble labels for several popular segmentation datasets and provides an algorithm to automatically generate scribble labels for any dataset with dense annotations, paving the way for new insights and model advancements in the field of weakly supervised segmentation. In addition to providing datasets and an algorithm, we evaluate state-of-the-art segmentation models on our datasets and show that models trained with our synthetic labels perform competitively with respect to models trained on manual labels. Thus, our datasets enable state-of-the-art research into methods for scribble-labeled semantic segmentation. The datasets, scribble generation algorithm, and baselines are publicly available at https://github.com/wbkit/Scribbles4All.
Scribbles for All: Benchmarking Scribble Supervised Segmentation Across Datasets
[ "Wolfgang Boettcher", "Lukas Hoyer", "Ozan Unal", "Jan Eric Lenssen", "Bernt Schiele" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2408.12489
[ "https://github.com/wbkit/scribbles4all" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=UDC8D6U7dX
@inproceedings{ koppula2024tapvidd, title={{TAPV}id-3D: A Benchmark for Tracking Any Point in 3D}, author={Skanda Koppula and Ignacio Rocco and Yi Yang and Joseph Heyward and Joao Carreira and Andrew Zisserman and Gabriel Brostow and Carl Doersch}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=UDC8D6U7dX} }
We introduce a new benchmark, TAPVid-3D, for evaluating the task of long-range Tracking Any Point in 3D (TAP-3D). While point tracking in two dimensions (TAP-2D) has many benchmarks measuring performance on real-world videos, such as TAPVid-DAVIS, three-dimensional point tracking has none. To this end, leveraging existing footage, we build a new benchmark for 3D point tracking featuring 4,000+ real-world videos, composed of three different data sources spanning a variety of object types, motion patterns, and indoor and outdoor environments. To measure performance on the TAP-3D task, we formulate a collection of metrics that extend the Jaccard-based metric used in TAP-2D to handle the complexities of ambiguous depth scales across models, occlusions, and multi-track spatio-temporal smoothness. We manually verify a large sample of trajectories to ensure correct video annotations, and assess the current state of the TAP-3D task by constructing competitive baselines using existing tracking models. We anticipate this benchmark will serve as a guidepost to improve our ability to understand precise 3D motion and surface deformation from monocular video.
TAPVid-3D: A Benchmark for Tracking Any Point in 3D
[ "Skanda Koppula", "Ignacio Rocco", "Yi Yang", "Joseph Heyward", "Joao Carreira", "Andrew Zisserman", "Gabriel Brostow", "Carl Doersch" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.05921
[ "https://github.com/google-deepmind/tapnet" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=U2pNwSuQqD
@inproceedings{ wang2024needle, title={Needle In A Multimodal Haystack}, author={Weiyun Wang and Shuibo Zhang and Yiming Ren and Yuchen Duan and Tiantong Li and Shuo Liu and Mengkang Hu and Zhe Chen and Kaipeng Zhang and Lewei Lu and Xizhou Zhu and Ping Luo and Yu Qiao and Jifeng Dai and Wenqi Shao and Wenhai Wang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=U2pNwSuQqD} }
With the rapid advancement of multimodal large language models (MLLMs), their evaluation has become increasingly comprehensive. However, understanding long multimodal content, as a foundational ability for real-world applications, remains underexplored. In this work, we present Needle In A Multimodal Haystack (MM-NIAH), the first benchmark specifically designed to systematically evaluate the capability of existing MLLMs to comprehend long multimodal documents. Our benchmark includes three types of evaluation tasks: multimodal retrieval, counting, and reasoning. In each task, the model is required to answer the questions according to different key information scattered throughout the given multimodal document. Evaluating the leading MLLMs on MM-NIAH, we observe that existing models still have significant room for improvement on these tasks, especially on vision-centric evaluation. We hope this work can provide a platform for further research on long multimodal document comprehension and contribute to the advancement of MLLMs. Code and benchmark are released at https://github.com/OpenGVLab/MM-NIAH.
Needle In A Multimodal Haystack
[ "Weiyun Wang", "Shuibo Zhang", "Yiming Ren", "Yuchen Duan", "Tiantong Li", "Shuo Liu", "Mengkang Hu", "Zhe Chen", "Kaipeng Zhang", "Lewei Lu", "Xizhou Zhu", "Ping Luo", "Yu Qiao", "Jifeng Dai", "Wenqi Shao", "Wenhai Wang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.07230
[ "https://github.com/opengvlab/mm-niah" ]
https://huggingface.co/papers/2406.07230
10
52
1
16
[]
[ "OpenGVLab/MM-NIAH" ]
[]
[]
[ "OpenGVLab/MM-NIAH" ]
[]
1
null
https://openreview.net/forum?id=U2aVNDrZGx
@inproceedings{ wen2024benchmarking, title={Benchmarking Complex Instruction-Following with Multiple Constraints Composition}, author={Bosi Wen and Pei Ke and Xiaotao Gu and Lindong Wu and Hao Huang and Jinfeng Zhou and Wenchuang Li and Binxin Hu and Wendy Gao and Jiaxing Xu and Yiming Liu and Jie Tang and Hongning Wang and Minlie Huang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=U2aVNDrZGx} }
Instruction following is one of the fundamental capabilities of large language models (LLMs). As the ability of LLMs is constantly improving, they have been increasingly applied to deal with complex human instructions in real-world scenarios. Therefore, how to evaluate the ability of LLMs to follow complex instructions has become a critical research problem. Existing benchmarks mainly focus on modeling different types of constraints in human instructions while neglecting the composition of different constraints, which is an indispensable constituent in complex instructions. To this end, we propose ComplexBench, a benchmark for comprehensively evaluating the ability of LLMs to follow complex instructions composed of multiple constraints. We propose a hierarchical taxonomy for complex instructions, including 4 constraint types, 19 constraint dimensions, and 4 composition types, and manually collect a high-quality dataset accordingly. To make the evaluation reliable, we augment LLM-based evaluators with rules to effectively verify whether generated texts can satisfy each constraint and composition. Furthermore, we obtain the final evaluation score based on the dependency structure determined by different composition types. ComplexBench identifies significant deficiencies in existing LLMs when dealing with complex instructions with multiple constraints composition.
Benchmarking Complex Instruction-Following with Multiple Constraints Composition
[ "Bosi Wen", "Pei Ke", "Xiaotao Gu", "Lindong Wu", "Hao Huang", "Jinfeng Zhou", "Wenchuang Li", "Binxin Hu", "Wendy Gao", "Jiaxing Xu", "Yiming Liu", "Jie Tang", "Hongning Wang", "Minlie Huang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.03978
[ "https://github.com/thu-coai/complexbench" ]
https://huggingface.co/papers/2407.03978
0
0
0
14
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=TyIWrwzpgu
@inproceedings{ franzen2024arctique, title={Arctique: An artificial histopathological dataset unifying realism and controllability for uncertainty quantification}, author={Jannik Franzen and Claudia Winklmayr and Vanessa Emanuela Guarino and Christoph Karg and Xiaoyan Yu and Nora Koreuber and Jan Philipp Albrecht and Philip Bischoff and Dagmar Kainmueller}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=TyIWrwzpgu} }
Uncertainty Quantification (UQ) is crucial for reliable image segmentation. Yet, while the field sees continual development of novel methods, a lack of agreed-upon benchmarks limits their systematic comparison and evaluation: Current UQ methods are typically tested either on overly simplistic toy datasets or on complex real-world datasets that do not allow one to discern true uncertainty. To unify both controllability and complexity, we introduce Arctique, a procedurally generated dataset modeled after histopathological colon images. We chose histopathological images for two reasons: 1) their complexity in terms of intricate object structures and highly variable appearance, which yields challenging segmentation problems, and 2) their broad prevalence for medical diagnosis and the corresponding relevance of high-quality UQ. To generate Arctique, we established a Blender-based framework for 3D scene creation with intrinsic noise manipulation. Arctique contains 50,000 rendered images with precise masks as well as noisy label simulations. We show that by independently controlling the uncertainty in both images and labels, we can effectively study the performance of several commonly used UQ methods. Hence, Arctique serves as a critical resource for benchmarking and advancing UQ techniques and other methodologies in complex, multi-object environments, bridging the gap between realism and controllability. All code is publicly available, allowing re-creation and controlled manipulations of our shipped images as well as creation and rendering of new scenes.
Arctique: An artificial histopathological dataset unifying realism and controllability for uncertainty quantification
[ "Jannik Franzen", "Claudia Winklmayr", "Vanessa Emanuela Guarino", "Christoph Karg", "Xiaoyan Yu", "Nora Koreuber", "Jan Philipp Albrecht", "Philip Bischoff", "Dagmar Kainmueller" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2411.07097
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=TuMnKFKPho
@inproceedings{ lee2024vhelm, title={{VHELM}: A Holistic Evaluation of Vision Language Models}, author={Tony Lee and Haoqin Tu and Chi Heem Wong and Wenhao Zheng and Yiyang Zhou and Yifan Mai and Josselin Somerville Roberts and Michihiro Yasunaga and Huaxiu Yao and Cihang Xie and Percy Liang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=TuMnKFKPho} }
Current benchmarks for assessing vision-language models (VLMs) often focus on their perception or problem-solving capabilities and neglect other critical aspects such as fairness, multilinguality, or toxicity. Furthermore, they differ in their evaluation procedures and the scope of the evaluation, making it difficult to compare models. To address these issues, we extend the HELM framework to VLMs to present the Holistic Evaluation of Vision Language Models (VHELM). VHELM aggregates various datasets to cover one or more of the 9 aspects: *visual perception*, *knowledge*, *reasoning*, *bias*, *fairness*, *multilinguality*, *robustness*, *toxicity*, and *safety*. In doing so, we produce a comprehensive, multi-dimensional view of the capabilities of the VLMs across these important factors. In addition, we standardize the inference parameters, methods of prompting, and evaluation metrics to enable fair comparisons across models. Our framework is designed to be lightweight and automatic so that evaluation runs are cheap and fast. Our initial run evaluates 22 VLMs on 21 existing datasets to provide a holistic snapshot of the models. We uncover new key findings, such as the fact that efficiency-focused models (e.g., Claude 3 Haiku or Gemini 1.5 Flash) perform significantly worse than their full models (e.g., Claude 3 Opus or Gemini 1.5 Pro) on the bias benchmark but not when evaluated on the other aspects. For transparency, we release the raw model generations and complete results on our website at https://crfm.stanford.edu/helm/vhelm/v2.0.1. VHELM is intended to be a living benchmark, and we hope to continue adding new datasets and models over time.
VHELM: A Holistic Evaluation of Vision Language Models
[ "Tony Lee", "Haoqin Tu", "Chi Heem Wong", "Wenhao Zheng", "Yiyang Zhou", "Yifan Mai", "Josselin Somerville Roberts", "Michihiro Yasunaga", "Huaxiu Yao", "Cihang Xie", "Percy Liang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.07112
[ "https://github.com/stanford-crfm/helm" ]
https://huggingface.co/papers/2410.07112
0
2
2
11
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=TbslDzPxhF
@inproceedings{ chen2024jobsdf, title={Job-{SDF}: A Multi-Granularity Dataset for Job Skill Demand Forecasting and Benchmarking}, author={Xi Chen and Chuan Qin and Chuyu Fang and Chao Wang and Chen Zhu and Fuzhen Zhuang and Hengshu Zhu and Hui Xiong}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=TbslDzPxhF} }
In a rapidly evolving job market, skill demand forecasting is crucial as it enables policymakers and businesses to anticipate and adapt to changes, ensuring that workforce skills align with market needs, thereby enhancing productivity and competitiveness. Additionally, by identifying emerging skill requirements, it directs individuals towards relevant training and education opportunities, promoting continuous self-learning and development. However, the absence of comprehensive datasets presents a significant challenge, impeding research and the advancement of this field. To bridge this gap, we present Job-SDF, a dataset designed to train and benchmark job-skill demand forecasting models. Based on millions of public job advertisements collected from online recruitment platforms, this dataset encompasses monthly recruitment demand. Our dataset uniquely enables evaluating skill demand forecasting models at various granularities, including occupation, company, and regional levels. We benchmark a range of models on this dataset, evaluating their performance in standard scenarios, in predictions focused on lower value ranges, and in the presence of structural breaks, providing new insights for further research. Our code and dataset are publicly accessible via the https://github.com/Job-SDF/benchmark.
Job-SDF: A Multi-Granularity Dataset for Job Skill Demand Forecasting and Benchmarking
[ "Xi Chen", "Chuan Qin", "Chuyu Fang", "Chao Wang", "Chen Zhu", "Fuzhen Zhuang", "Hengshu Zhu", "Hui Xiong" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.11920
[ "https://github.com/job-sdf/benchmark" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=THMgVAkZwh
@inproceedings{ huang2024vlkeb, title={{VLKEB}: A Large Vision-Language Model Knowledge Editing Benchmark}, author={Han Huang and Haitian Zhong and Tao Yu and Qiang Liu and Shu Wu and Liang Wang and Tieniu Tan}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=THMgVAkZwh} }
Recently, knowledge editing on large language models (LLMs) has received considerable attention. Compared to this, editing Large Vision-Language Models (LVLMs) faces extra challenges from diverse data modalities and complicated model components, and data for LVLM editing are limited. The existing LVLM editing benchmark, which comprises three metrics (Reliability, Locality, and Generality), falls short in the quality of synthesized evaluation images and cannot assess whether models apply edited knowledge in relevant content. Therefore, we employ more reliable data collection methods to construct a new Large $\textbf{V}$ision-$\textbf{L}$anguage Model $\textbf{K}$nowledge $\textbf{E}$diting $\textbf{B}$enchmark, $\textbf{VLKEB}$, and extend the Portability metric for more comprehensive evaluation. Leveraging a multi-modal knowledge graph, our image data are bound with knowledge entities. This can be further used to extract entity-related knowledge, which constitutes the base of editing data. We conduct experiments with different editing methods on five LVLMs and thoroughly analyze how they impact the models. The results reveal strengths and deficiencies of these methods and hopefully provide insights for future research. The code and dataset are available at: https://github.com/VLKEB/VLKEB.
VLKEB: A Large Vision-Language Model Knowledge Editing Benchmark
[ "Han Huang", "Haitian Zhong", "Tao Yu", "Qiang Liu", "Shu Wu", "Liang Wang", "Tieniu Tan" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2403.07350
[ "https://github.com/vlkeb/vlkeb" ]
https://huggingface.co/papers/2403.07350
0
1
0
7
[ "HymanH/VLKEB-models" ]
[ "HymanH/VLKEB-data" ]
[]
[ "HymanH/VLKEB-models" ]
[ "HymanH/VLKEB-data" ]
[]
1
null
https://openreview.net/forum?id=Sp9cj4pNYD
@inproceedings{ wu2024surgicai, title={Surgic{AI}: A Hierarchical Platform for Fine-Grained Surgical Policy Learning and Benchmarking}, author={Jin Wu and Haoying Zhou and Peter Kazanzides and Adnan Munawar and Anqi Liu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=Sp9cj4pNYD} }
Despite advancements in robotic-assisted surgery, automating complex tasks like suturing remains challenging due to the need for adaptability and precision. Learning-based approaches, particularly reinforcement learning (RL) and imitation learning (IL), require realistic simulation environments for efficient data collection. However, current platforms often include only relatively simple, non-dexterous manipulations and lack the flexibility required for effective learning and generalization. We introduce SurgicAI, a novel platform for development and benchmarking that addresses these challenges by providing the flexibility to accommodate both modular subtasks and more importantly task decomposition in RL-based surgical robotics. Compatible with the da Vinci Surgical System, SurgicAI offers a standardized pipeline for collecting and utilizing expert demonstrations. It supports the deployment of multiple RL and IL approaches, and the training of both singular and compositional subtasks in suturing scenarios, featuring high dexterity and modularization. Meanwhile, SurgicAI sets clear metrics and benchmarks for the assessment of learned policies. We implemented and evaluated multiple RL and IL algorithms on SurgicAI. Our detailed benchmark analysis underscores SurgicAI's potential to advance policy learning in surgical robotics. Details: https://github.com/surgical-robotics-ai/SurgicAI
SurgicAI: A Hierarchical Platform for Fine-Grained Surgical Policy Learning and Benchmarking
[ "Jin Wu", "Haoying Zhou", "Peter Kazanzides", "Adnan Munawar", "Anqi Liu" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.13865
[ "https://github.com/surgical-robotics-ai/surgicai" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=ScPgzCZ6Lo
@inproceedings{ sun2024gcbench, title={{GC}-Bench: An Open and Unified Benchmark for Graph Condensation}, author={Qingyun Sun and Ziying Chen and Beining Yang and Cheng Ji and Xingcheng Fu and Sheng Zhou and Hao Peng and Jianxin Li and Philip S. Yu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=ScPgzCZ6Lo} }
Graph condensation (GC) has recently garnered considerable attention due to its ability to reduce large-scale graph datasets while preserving their essential properties. The core concept of GC is to create a smaller, more manageable graph that retains the characteristics of the original graph. Despite the proliferation of graph condensation methods developed in recent years, there is no comprehensive evaluation and in-depth analysis, which creates a great obstacle to understanding the progress in this field. To fill this gap, we develop a comprehensive Graph Condensation Benchmark (GC-Bench) to analyze the performance of graph condensation in different scenarios systematically. Specifically, GC-Bench systematically investigates the characteristics of graph condensation in terms of the following dimensions: effectiveness, transferability, and complexity. We comprehensively evaluate 12 state-of-the-art graph condensation algorithms in node-level and graph-level tasks and analyze their performance in 12 diverse graph datasets. Further, we have developed an easy-to-use library for training and evaluating different GC methods to facilitate reproducible research. The GC-Bench library is available at https://github.com/RingBDStack/GC-Bench.
GC-Bench: An Open and Unified Benchmark for Graph Condensation
[ "Qingyun Sun", "Ziying Chen", "Beining Yang", "Cheng Ji", "Xingcheng Fu", "Sheng Zhou", "Hao Peng", "Jianxin Li", "Philip S. Yu" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.00615
[ "https://github.com/ringbdstack/gc-bench" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=SXYmSTXyHm
@inproceedings{ fawkes2024the, title={The Fragility of Fairness: Causal Sensitivity Analysis for Fair Machine Learning}, author={Jake Fawkes and Nic Fishman and Mel Andrews and Zachary Chase Lipton}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=SXYmSTXyHm} }
Fairness metrics are a core tool in the fair machine learning literature (FairML), used to determine that ML models are, in some sense, “fair.” Real-world data, however, are typically plagued by various measurement biases and other violated assumptions, which can render fairness assessments meaningless. We adapt tools from causal sensitivity analysis to the FairML context, providing a general framework which (1) accommodates effectively any combination of fairness metric and bias that can be posed in the “oblivious setting”; (2) allows researchers to investigate combinations of biases, resulting in non-linear sensitivity; and (3) enables flexible encoding of domain-specific constraints and assumptions. Employing this framework, we analyze the sensitivity of the most common parity metrics under 3 varieties of classifier across 14 canonical fairness datasets. Our analysis reveals the striking fragility of fairness assessments to even minor dataset biases. We show that causal sensitivity analysis provides a powerful and necessary toolkit for gauging the informativeness of parity metric evaluations. Our repository is \href{https://github.com/Jakefawkes/fragile_fair}{available here}.
The Fragility of Fairness: Causal Sensitivity Analysis for Fair Machine Learning
[ "Jake Fawkes", "Nic Fishman", "Mel Andrews", "Zachary Chase Lipton" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.09600
[ "https://github.com/jakefawkes/fragile_fair" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=S9Qrrxpy6z
@inproceedings{ press2024citeme, title={Cite{ME}: Can Language Models Accurately Cite Scientific Claims?}, author={Ori Press and Andreas Hochlehnert and Ameya Prabhu and Vishaal Udandarao and Ofir Press and Matthias Bethge}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=S9Qrrxpy6z} }
Thousands of new scientific papers are published each month. Such information overload complicates researcher efforts to stay current with the state-of-the-art as well as to verify and correctly attribute claims. We pose the following research question: Given a text excerpt referencing a paper, could an LM act as a research assistant to correctly identify the referenced paper? We advance efforts to answer this question by building a benchmark that evaluates the abilities of LMs in citation attribution. Our benchmark, CiteME, consists of text excerpts from recent machine learning papers, each referencing a single other paper. CiteME use reveals a large gap between frontier LMs and human performance, with LMs achieving only 4.2-18.5% accuracy and humans 69.7%. We close this gap by introducing CiteAgent, an autonomous system built on the GPT-4o LM that can also search and read papers, which achieves an accuracy of 35.3% on CiteME. Overall, CiteME serves as a challenging testbed for open-ended claim attribution, driving the research community towards a future where any claim made by an LM can be automatically verified and discarded if found to be incorrect.
CiteME: Can Language Models Accurately Cite Scientific Claims?
[ "Ori Press", "Andreas Hochlehnert", "Ameya Prabhu", "Vishaal Udandarao", "Ofir Press", "Matthias Bethge" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.12861
[ "https://github.com/bethgelab/CiteME" ]
https://huggingface.co/papers/2407.12861
0
0
0
6
[]
[ "bethgelab/CiteME" ]
[]
[]
[ "bethgelab/CiteME" ]
[]
1
null
https://openreview.net/forum?id=RgUcvs6ssu
@inproceedings{ liu2024welqrate, title={WelQrate: Defining the Gold Standard in Small Molecule Drug Discovery Benchmarking}, author={Yunchao Liu and Ha Dong and Xin Wang and Rocco Moretti and Yu Wang and Zhaoqian Su and Jiawei Gu and Bobby Bodenheimer and Charles Weaver and Jens Meiler and Tyler Derr}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=RgUcvs6ssu} }
While deep learning has revolutionized computer-aided drug discovery, the AI community has predominantly focused on model innovation and placed less emphasis on establishing best benchmarking practices. We posit that without a sound model evaluation framework, the AI community's efforts cannot reach their full potential, thereby slowing the progress and transfer of innovation into real-world drug discovery. Thus, in this paper, we seek to establish a new gold standard for small molecule drug discovery benchmarking, *WelQrate*. Specifically, our contributions are threefold: ***WelQrate*** **dataset collection** - we introduce a meticulously curated collection of 9 datasets spanning 5 therapeutic target classes. Our hierarchical curation pipelines, designed by drug discovery experts, go beyond the primary high-throughput screen by leveraging additional confirmatory and counter screens along with rigorous domain-driven preprocessing, such as Pan-Assay Interference Compounds (PAINS) filtering, to ensure the high-quality data in the datasets; ***WelQrate*** **Evaluation Framework** - we propose a standardized model evaluation framework considering high-quality datasets, featurization, 3D conformation generation, evaluation metrics, and data splits, which provides a reliable benchmarking for drug discovery experts conducting real-world virtual screening; **Benchmarking** - we evaluate model performance through various research questions using the *WelQrate* dataset collection, exploring the effects of different models, dataset quality, featurization methods, and data splitting strategies on the results. In summary, we recommend adopting our proposed *WelQrate* as the gold standard in small molecule drug discovery benchmarking. The *WelQrate* dataset collection, along with the curation codes, and experimental scripts are all publicly available at www.WelQrate.org.
WelQrate: Defining the Gold Standard in Small Molecule Drug Discovery Benchmarking
[ "Yunchao Liu", "Ha Dong", "Xin Wang", "Rocco Moretti", "Yu Wang", "Zhaoqian Su", "Jiawei Gu", "Bobby Bodenheimer", "Charles Weaver", "Jens Meiler", "Tyler Derr" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2411.09820
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=RSvhU69sbG
@inproceedings{ wang2024mathpile, title={MathPile: A Billion-Token-Scale Pretraining Corpus for Math}, author={Zengzhi Wang and Xuefeng Li and Rui Xia and Pengfei Liu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=RSvhU69sbG} }
High-quality, large-scale corpora are the cornerstone of building foundation models. In this work, we introduce MathPile, a diverse and high-quality math-centric corpus comprising about 9.5 billion tokens. Throughout its creation, we adhered to the principle of “less is more”, firmly believing in the supremacy of data quality over quantity, even in the pre-training phase. Our meticulous data collection and processing efforts included a complex suite of preprocessing, prefiltering, language identification, cleaning, filtering, and deduplication, ensuring the high quality of our corpus. Furthermore, we performed data contamination detection on downstream benchmark test sets to eliminate duplicates and conducted continual pre-training experiments, boosting the performance on common mathematical reasoning benchmarks. We aim for our MathPile to boost language models’ mathematical reasoning abilities and open-source its different versions and processing scripts to advance the field.
MathPile: A Billion-Token-Scale Pretraining Corpus for Math
[ "Zengzhi Wang", "Xuefeng Li", "Rui Xia", "Pengfei Liu" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2312.17120
[ "https://github.com/gair-nlp/mathpile" ]
https://huggingface.co/papers/2312.17120
2
25
10
3
[]
[ "GAIR/MathPile", "GAIR/MathPile_Commercial" ]
[]
[]
[ "GAIR/MathPile", "GAIR/MathPile_Commercial" ]
[]
1
null
https://openreview.net/forum?id=RQlbMrA5XL
@inproceedings{ zhou2024novobench, title={NovoBench: Benchmarking Deep Learning-based \emph{De Novo} Sequencing Methods in Proteomics}, author={Jingbo Zhou and Shaorong Chen and Jun Xia and Sizhe Liu and Tianze Ling and Wenjie Du and Yue Liu and Jianwei Yin and Stan Z. Li}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=RQlbMrA5XL} }
Tandem mass spectrometry has played a pivotal role in advancing proteomics, enabling the analysis of protein composition in biological tissues. Many deep learning methods have been developed for the \emph{de novo} peptide sequencing task, i.e., predicting the peptide sequence for the observed mass spectrum. However, two key challenges seriously hinder further research on this important task. Firstly, since there is no consensus for the evaluation datasets, the empirical results in different research papers are often not comparable, leading to unfair comparison. Secondly, the current methods are usually limited to amino acid-level or peptide-level precision and recall metrics. In this work, we present the first unified benchmark NovoBench for \emph{de novo} peptide sequencing, which comprises diverse mass spectrum data, integrated models, and comprehensive evaluation metrics. Recent impressive methods, including DeepNovo, PointNovo, Casanovo, InstaNovo, AdaNovo and $\pi$-HelixNovo, are integrated into our framework. In addition to amino acid-level and peptide-level precision and recall, we also evaluate the models' performance in terms of identifying post-translational modifications (PTMs), efficiency, and robustness to peptide length, noise peaks, and missing fragment ratio, which are important influencing factors yet are seldom considered. Leveraging this benchmark, we conduct a large-scale study of current methods and report many insightful findings that open up new possibilities for future development. The benchmark is open-sourced to facilitate future research and application. The code is available at \url{https://github.com/Westlake-OmicsAI/NovoBench}.
NovoBench: Benchmarking Deep Learning-based De Novo Sequencing Methods in Proteomics
[ "Jingbo Zhou", "Shaorong Chen", "Jun Xia", "Sizhe Liu", "Tianze Ling", "Wenjie Du", "Yue Liu", "Jianwei Yin", "Stan Z. Li" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=RJZRhMzZzH
@inproceedings{ zhang2024a, title={A Careful Examination of Large Language Model Performance on Grade School Arithmetic}, author={Hugh Zhang and Jeff Da and Dean Lee and Vaughn Robinson and Catherine Wu and William Song and Tiffany Zhao and Pranav Vishnu Raja and Charlotte Zhuang and Dylan Z Slack and Qin Lyu and Sean M. Hendryx and Russell Kaplan and Michele Lunati and Summer Yue}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=RJZRhMzZzH} }
Large language models (LLMs) have achieved impressive success on many benchmarks for mathematical reasoning. However, there is growing concern that some of this performance actually reflects dataset contamination, where data closely resembling benchmark questions leaks into the training data, instead of true reasoning ability. To investigate this claim rigorously, we commission Grade School Math 1000 (GSM1k). GSM1k is designed to mirror the style and complexity of the established GSM8k benchmark, the gold standard for measuring elementary mathematical reasoning. We ensure that the two benchmarks are comparable across important metrics such as human solve rates, number of steps in solution, answer magnitude, and more. When evaluating leading open- and closed-source LLMs on GSM1k, we observe accuracy drops of up to 8%, with several families of models showing evidence of systematic overfitting across almost all model sizes. Further analysis suggests a positive relationship (Spearman's r^2=0.36) between a model's probability of generating an example from GSM8k and its performance gap between GSM8k and GSM1k, suggesting that some models may have partially memorized GSM8k. Nevertheless, many models, especially those on the frontier, show minimal signs of overfitting, and all models broadly demonstrate generalization to novel math problems guaranteed to not be in their training data.
A Careful Examination of Large Language Model Performance on Grade School Arithmetic
[ "Hugh Zhang", "Jeff Da", "Dean Lee", "Vaughn Robinson", "Catherine Wu", "William Song", "Tiffany Zhao", "Pranav Vishnu Raja", "Charlotte Zhuang", "Dylan Z Slack", "Qin Lyu", "Sean M. Hendryx", "Russell Kaplan", "Michele Lunati", "Summer Yue" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2405.00332
[ "" ]
https://huggingface.co/papers/2405.00332
8
30
2
15
[ "Aleph-Alpha/Pharia-1-LLM-7B-control" ]
[ "malhajar/gsm1k-tr", "orbinaDev/gsm1k-sample" ]
[]
[ "Aleph-Alpha/Pharia-1-LLM-7B-control" ]
[ "malhajar/gsm1k-tr", "orbinaDev/gsm1k-sample" ]
[]
1
null
https://openreview.net/forum?id=RJHQAcbmpZ
@inproceedings{ gonz{\'a}lez-duque2024a, title={A survey and benchmark of high-dimensional Bayesian optimization of discrete sequences}, author={Miguel Gonz{\'a}lez-Duque and Richard Michael and Simon Bartels and Yevgen Zainchkovskyy and S{\o}ren Hauberg and Wouter Boomsma}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=RJHQAcbmpZ} }
Optimizing discrete black-box functions is key in several domains, e.g. protein engineering and drug design. Due to the lack of gradient information and the need for sample efficiency, Bayesian optimization is an ideal candidate for these tasks. Several methods for high-dimensional continuous and categorical Bayesian optimization have been proposed recently. However, our survey of the field reveals highly heterogeneous experimental set-ups across methods and technical barriers for the replicability and application of published algorithms to real-world tasks. To address these issues, we develop a unified framework to test a vast array of high-dimensional Bayesian optimization methods and a collection of standardized black-box functions representing real-world application domains in chemistry and biology. These two components of the benchmark are each supported by flexible, scalable, and easily extendable software libraries (poli and poli-baselines), allowing practitioners to readily incorporate new optimization objectives or discrete optimizers. Project website: https://machinelearninglifescience.github.io/hdbo_benchmark.
A survey and benchmark of high-dimensional Bayesian optimization of discrete sequences
[ "Miguel González-Duque", "Richard Michael", "Simon Bartels", "Yevgen Zainchkovskyy", "Søren Hauberg", "Wouter Boomsma" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.04739
[ "https://github.com/MachineLearningLifeScience/hdbo_benchmark" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=R9gR9MPuD5
@inproceedings{ gupta2024interpbench, title={InterpBench: Semi-Synthetic Transformers for Evaluating Mechanistic Interpretability Techniques}, author={Rohan Gupta and Iv{\'a}n Arcuschin and Thomas Kwa and Adri{\`a} Garriga-Alonso}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=R9gR9MPuD5} }
Mechanistic interpretability methods aim to identify the algorithm a neural network implements, but it is difficult to validate such methods when the true algorithm is unknown. This work presents InterpBench, a collection of semi-synthetic yet realistic transformers with known circuits for evaluating these techniques. We train simple neural networks using a stricter version of Interchange Intervention Training (IIT) which we call Strict IIT (SIIT). Like the original, SIIT trains neural networks by aligning their internal computation with a desired high-level causal model, but it also prevents non-circuit nodes from affecting the model's output. We evaluate SIIT on sparse transformers produced by the Tracr tool and find that SIIT models maintain Tracr's original circuit while being more realistic. SIIT can also train transformers with larger circuits, like Indirect Object Identification (IOI). Finally, we use our benchmark to evaluate existing circuit discovery techniques.
InterpBench: Semi-Synthetic Transformers for Evaluating Mechanistic Interpretability Techniques
[ "Rohan Gupta", "Iván Arcuschin", "Thomas Kwa", "Adrià Garriga-Alonso" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.14494
[ "https://github.com/FlyingPumba/circuits-benchmark" ]
https://huggingface.co/papers/2407.14494
0
1
0
4
[ "cybershiptrooper/InterpBench" ]
[]
[]
[ "cybershiptrooper/InterpBench" ]
[]
[]
1
null
https://openreview.net/forum?id=R6kJtWsTGy
@inproceedings{ liu2024the, title={The Elephant in the Room: Towards A Reliable Time-Series Anomaly Detection Benchmark}, author={Qinghua Liu and John Paparrizos}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=R6kJtWsTGy} }
Time-series anomaly detection is a fundamental task across scientific fields and industries. However, the field has long faced the ``elephant in the room:'' critical issues including flawed datasets, biased evaluation measures, and inconsistent benchmarking practices that have remained largely ignored and unaddressed. We introduce the TSB-AD to systematically tackle these issues in the following three aspects: (i) Dataset Integrity: with 1070 high-quality time series from a diverse collection of 40 datasets (doubling the size of the largest collection and four times the number of existing curated datasets), we provide the first large-scale, heterogeneous, meticulously curated dataset that combines the effort of human perception and model interpretation; (ii) Measure Reliability: by revealing issues and biases in evaluation measures, we identify the most reliable and accurate measure, namely, VUS-PR for anomaly detection in time series to address concerns from the community; and (iii) Comprehensive Benchmarking: with a broad spectrum of 40 detection algorithms, from statistical methods to the latest foundation models, we perform a comprehensive evaluation that includes a thorough hyperparameter tuning and a unified setup for a fair and reproducible comparison. Our findings challenge the conventional wisdom regarding the superiority of advanced neural network architectures, revealing that simpler architectures and statistical methods often yield better performance. The promising performance of neural networks on multivariate cases and foundation models on point anomalies highlights the need for further advancements in these methods. We open-source the benchmark at https://github.com/TheDatumOrg/TSB-AD to promote further research.
The Elephant in the Room: Towards A Reliable Time-Series Anomaly Detection Benchmark
[ "Qinghua Liu", "John Paparrizos" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=R4rNYJ2slJ
@inproceedings{ zhao2024opensatmap, title={OpenSatMap: A Fine-grained High-resolution Satellite Dataset for Large-scale Map Construction}, author={Hongbo Zhao and Lue Fan and Yuntao Chen and Haochen Wang and yuran Yang and Xiaojuan Jin and YIXIN ZHANG and Gaofeng Meng and Zhaoxiang Zhang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=R4rNYJ2slJ} }
In this paper, we propose OpenSatMap, a fine-grained, high-resolution satellite dataset for large-scale map construction. Map construction is one of the foundations of the transportation industry, such as navigation and autonomous driving. Extracting road structures from satellite images is an efficient way to construct large-scale maps. However, existing satellite datasets provide only coarse semantic-level labels with a relatively low resolution (up to level 19), impeding the advancement of this field. In contrast, the proposed OpenSatMap (1) has fine-grained instance-level annotations; (2) consists of high-resolution images (level 20); (3) is currently the largest one of its kind; (4) collects data with high diversity. Moreover, OpenSatMap covers and aligns with the popular nuScenes dataset and Argoverse 2 dataset to potentially advance autonomous driving technologies. By publishing and maintaining the dataset, we provide a high-quality benchmark for satellite-based map construction and downstream tasks like autonomous driving.
OpenSatMap: A Fine-grained High-resolution Satellite Dataset for Large-scale Map Construction
[ "Hongbo Zhao", "Lue Fan", "Yuntao Chen", "Haochen Wang", "yuran Yang", "Xiaojuan Jin", "YIXIN ZHANG", "Gaofeng Meng", "Zhaoxiang Zhang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.23278
[ "" ]
https://huggingface.co/papers/2410.23278
1
1
0
9
[]
[ "z-hb/OpenSatMap" ]
[]
[]
[ "z-hb/OpenSatMap" ]
[]
1
null
https://openreview.net/forum?id=Qz2xmVhn4S
@inproceedings{ cao2024spiderv, title={Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows?}, author={Ruisheng Cao and Fangyu Lei and Haoyuan Wu and Jixuan Chen and Yeqiao Fu and Hongcheng Gao and Xiong Xinzhuang and Hanchong Zhang and Wenjing Hu and Yuchen Mao and Tianbao Xie and Hongshen Xu and Danyang Zhang and Sida Wang and Ruoxi Sun and Pengcheng Yin and Caiming Xiong and Ansong Ni and Qian Liu and Victor Zhong and Lu Chen and Kai Yu and Tao Yu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=Qz2xmVhn4S} }
Data science and engineering workflows often span multiple stages, from warehousing to orchestration, using tools like BigQuery, dbt, and Airbyte. As vision language models (VLMs) advance in multimodal understanding and code generation, VLM-based agents could potentially automate these workflows by generating SQL queries, Python code, and GUI operations. This automation can improve the productivity of experts while democratizing access to large-scale data analysis. In this paper, we introduce Spider2-V, the first multimodal agent benchmark focusing on professional data science and engineering workflows, featuring 494 real-world tasks in authentic computer environments and incorporating 20 enterprise-level professional applications. These tasks, derived from real-world use cases, evaluate the ability of a multimodal agent to perform data-related tasks by writing code and managing the GUI in enterprise data software systems. To balance realistic simulation with evaluation simplicity, we devote significant effort to developing automatic configurations for task setup and carefully crafting evaluation metrics for each task. Furthermore, we supplement multimodal agents with comprehensive documents of these enterprise data software systems. Our empirical evaluation reveals that existing state-of-the-art LLM/VLM-based agents do not reliably automate full data workflows (14.0% success). Even with step-by-step guidance, these agents still underperform in tasks that require fine-grained, knowledge-intensive GUI actions (16.2%) and involve remote cloud-hosted workspaces (10.6%). We hope that Spider2-V paves the way for autonomous multimodal agents to transform the automation of data science and engineering workflow. Our code and data are available at https://spider2-v.github.io.
Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows?
[ "Ruisheng Cao", "Fangyu Lei", "Haoyuan Wu", "Jixuan Chen", "Yeqiao Fu", "Hongcheng Gao", "Xiong Xinzhuang", "Hanchong Zhang", "Wenjing Hu", "Yuchen Mao", "Tianbao Xie", "Hongshen Xu", "Danyang Zhang", "Sida Wang", "Ruoxi Sun", "Pengcheng Yin", "Caiming Xiong", "Ansong Ni", "Qian Liu", "Victor Zhong", "Lu Chen", "Kai Yu", "Tao Yu" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
2407.10956
[ "https://github.com/xlang-ai/spider2-v" ]
https://huggingface.co/papers/2407.10956
5
6
2
23
[]
[ "xlangai/ubuntu_spider2v" ]
[]
[]
[ "xlangai/ubuntu_spider2v" ]
[]
1
null
https://openreview.net/forum?id=QxJHh7Z39R
@inproceedings{ guille-escuret2024expecting, title={Expecting The Unexpected: Towards Broad Out-Of-Distribution Detection}, author={Charles Guille-Escuret and Pierre-Andre Noel and Ioannis Mitliagkas and David Vazquez and Joao Monteiro}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=QxJHh7Z39R} }
Deployed machine learning systems require some mechanism to detect out-of-distribution (OOD) inputs. Existing research mainly focuses on one type of distribution shift: detecting samples from novel classes, absent from the training set. However, real-world systems encounter a broad variety of anomalous inputs, and the OOD literature neglects this diversity. This work categorizes five distinct types of distribution shifts and critically evaluates the performance of recent OOD detection methods on each of them. We publicly release our benchmark under the name BROAD (Benchmarking Resilience Over Anomaly Diversity). We find that while these methods excel in detecting novel classes, their performances are inconsistent across other types of distribution shifts. In other words, they can only reliably detect unexpected inputs that they have been specifically designed to expect. As a first step toward broad OOD detection, we learn a Gaussian mixture generative model for existing detection scores, enabling an ensemble detection approach that is more consistent and comprehensive for broad OOD detection, with improved performances over existing methods. We release code to build BROAD to facilitate a more comprehensive evaluation of novel OOD detectors.
Expecting The Unexpected: Towards Broad Out-Of-Distribution Detection
[ "Charles Guille-Escuret", "Pierre-Andre Noel", "Ioannis Mitliagkas", "David Vazquez", "Joao Monteiro" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2308.11480
[ "https://github.com/servicenow/broad-openood" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=QpF3DFP3Td
@inproceedings{ perron2024archaeoscape, title={Archaeoscape: Bringing Aerial Laser Scanning Archaeology to the Deep Learning Era}, author={Yohann PERRON and Vladyslav Sydorov and Adam P. Wijker and Damian Evans and Christophe Pottier and Loic Landrieu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=QpF3DFP3Td} }
Airborne Laser Scanning (ALS) technology has transformed modern archaeology by unveiling hidden landscapes beneath dense vegetation. However, the lack of expert-annotated, open-access resources has hindered the analysis of ALS data using advanced deep learning techniques. We address this limitation with Archaeoscape (available at https://archaeoscape.ai), a novel large-scale archaeological ALS dataset spanning 888 km² in Cambodia with 31,141 annotated archaeological features from the Angkorian period. Archaeoscape is over four times larger than comparable datasets, and the first ALS archaeology resource with open-access data, annotations, and models. We benchmark several recent segmentation models to demonstrate the benefits of modern vision techniques for this problem and highlight the unique challenges of discovering subtle human-made structures under dense jungle canopies. By making Archaeoscape available in open access, we hope to bridge the gap between traditional archaeology and modern computer vision methods.
Archaeoscape: Bringing Aerial Laser Scanning Archaeology to the Deep Learning Era
[ "Yohann PERRON", "Vladyslav Sydorov", "Adam P. Wijker", "Damian Evans", "Christophe Pottier", "Loic Landrieu" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=QocjHRR31U
@inproceedings{ etxaniz2024bertaqa, title={Berta{QA}: How Much Do Language Models Know About Local Culture?}, author={Julen Etxaniz and Gorka Azkune and Aitor Soroa and Oier Lopez de Lacalle and Mikel Artetxe}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=QocjHRR31U} }
Large Language Models (LLMs) exhibit extensive knowledge about the world, but most evaluations have been limited to global or anglocentric subjects. This raises the question of how well these models perform on topics relevant to other cultures, whose presence on the web is not that prominent. To address this gap, we introduce BertaQA, a multiple-choice trivia dataset that is parallel in English and Basque. The dataset consists of a local subset with questions pertinent to the Basque culture, and a global subset with questions of broader interest. We find that state-of-the-art LLMs struggle with local cultural knowledge, even as they excel on global topics. However, we show that continued pre-training in Basque significantly improves the models' performance on Basque culture, even when queried in English. To our knowledge, this is the first solid evidence of knowledge transfer from a low-resource to a high-resource language. Our analysis sheds light on the complex interplay between language and knowledge, and reveals that some prior findings do not fully hold when reassessed on local topics. Our dataset and evaluation code are available under open licenses at https://github.com/juletx/BertaQA.
BertaQA: How Much Do Language Models Know About Local Culture?
[ "Julen Etxaniz", "Gorka Azkune", "Aitor Soroa", "Oier Lopez de Lacalle", "Mikel Artetxe" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.07302
[ "https://github.com/juletx/bertaqa" ]
https://huggingface.co/papers/2406.07302
1
0
0
5
[]
[ "HiTZ/BertaQA" ]
[]
[]
[ "HiTZ/BertaQA" ]
[]
1
null
https://openreview.net/forum?id=Qdf3ad5MXH
@inproceedings{ fang2024mmbenchvideo, title={{MMB}ench-Video: A Long-Form Multi-Shot Benchmark for Holistic Video Understanding}, author={Xinyu Fang and Kangrui Mao and Haodong Duan and Xiangyu Zhao and Yining Li and Dahua Lin and Kai Chen}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=Qdf3ad5MXH} }
The advent of large vision-language models (LVLMs) has spurred research into their applications in multi-modal contexts, particularly in video understanding. Traditional VideoQA benchmarks, despite providing quantitative metrics, often fail to encompass the full spectrum of video content and inadequately assess models' temporal comprehension. To address these limitations, we introduce MMBench-Video, a quantitative benchmark designed to rigorously evaluate LVLMs' proficiency in video understanding. MMBench-Video incorporates lengthy videos from YouTube and employs free-form questions, mirroring practical use cases. The benchmark is meticulously crafted to probe the models' temporal reasoning skills, with all questions human-annotated according to a carefully constructed ability taxonomy. We employ GPT-4 for automated assessment, demonstrating superior accuracy and robustness over earlier LLM-based evaluations. Utilizing MMBench-Video, we have conducted comprehensive evaluations that include both proprietary and open-source LVLMs for images and videos. MMBench-Video stands as a valuable resource for the research community, facilitating improved evaluation of LVLMs and catalyzing progress in the field of video understanding.
MMBench-Video: A Long-Form Multi-Shot Benchmark for Holistic Video Understanding
[ "Xinyu Fang", "Kangrui Mao", "Haodong Duan", "Xiangyu Zhao", "Yining Li", "Dahua Lin", "Kai Chen" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.14515
[ "https://github.com/open-compass/vlmevalkit" ]
https://huggingface.co/papers/2406.14515
6
32
1
7
[]
[ "opencompass/MMBench-Video" ]
[ "Demo750/XGBoost_Gaze" ]
[]
[ "opencompass/MMBench-Video" ]
[ "Demo750/XGBoost_Gaze" ]
1
null
https://openreview.net/forum?id=QWTCcxMpPA
@inproceedings{ wang2024measuring, title={Measuring Multimodal Mathematical Reasoning with {MATH}-Vision Dataset}, author={Ke Wang and Junting Pan and Weikang Shi and Zimu Lu and Houxing Ren and Aojun Zhou and Mingjie Zhan and Hongsheng Li}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=QWTCcxMpPA} }
Recent advancements in Large Multimodal Models (LMMs) have shown promising results in mathematical reasoning within visual contexts, with models exceeding human-level performance on existing benchmarks such as MathVista. However, we observe significant limitations in the diversity of questions and breadth of subjects covered by these benchmarks. To address this issue, we present the MATH-Vision (MATH-V) dataset, a meticulously curated collection of 3,040 high-quality mathematical problems with visual contexts sourced from real math competitions. Spanning 16 distinct mathematical disciplines and graded across 5 levels of difficulty, our dataset provides a comprehensive and diverse set of challenges for evaluating the mathematical reasoning abilities of LMMs. Through extensive experimentation, we unveil a notable performance gap between current LMMs and human performance on MATH-V, underscoring the imperative for further advancements in LMMs. Moreover, our detailed categorization allows for a thorough error analysis of LMMs, offering valuable insights to guide future research and development. The dataset is released at [MathLLMs/MathVision](https://huggingface.co/datasets/MathLLMs/MathVision)
Measuring Multimodal Mathematical Reasoning with MATH-Vision Dataset
[ "Ke Wang", "Junting Pan", "Weikang Shi", "Zimu Lu", "Houxing Ren", "Aojun Zhou", "Mingjie Zhan", "Hongsheng Li" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2402.14804
[ "" ]
https://huggingface.co/papers/2402.14804
2
2
0
6
[]
[ "MathLLMs/MathVision" ]
[]
[]
[ "MathLLMs/MathVision" ]
[]
1
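The MATH-Vision record above lists the MathLLMs/MathVision dataset. A hedged sketch of loading it and inspecting one example; the `testmini` split name and the presence of question/image fields are assumptions, not confirmed by this record:

```python
# Hypothetical sketch: load MATH-Vision (MathLLMs/MathVision) and look at one problem.
# The "testmini" split name is an assumption; inspect the DatasetDict if it differs.
from datasets import load_dataset

math_v = load_dataset("MathLLMs/MathVision", split="testmini")
print(len(math_v))
print(sorted(math_v[0].keys()))  # discover the actual fields (question, options, image, ...)
```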
null
https://openreview.net/forum?id=QSS5cGmKb1
@inproceedings{ wu2024stark, title={{ST}a{RK}: Benchmarking {LLM} Retrieval on Textual and Relational Knowledge Bases}, author={Shirley Wu and Shiyu Zhao and Michihiro Yasunaga and Kexin Huang and Kaidi Cao and Qian Huang and Vassilis N. Ioannidis and Karthik Subbian and James Zou and Jure Leskovec}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=QSS5cGmKb1} }
Answering real-world complex queries, such as complex product search, often requires accurate retrieval from semi-structured knowledge bases that involve a blend of unstructured (e.g., textual descriptions of products) and structured (e.g., entity relations of products) information. However, many previous works have studied textual and relational retrieval tasks as separate topics. To address this gap, we develop STARK, a large-scale Semi-structured retrieval benchmark on Textual and Relational Knowledge Bases. Our benchmark covers three domains: product search, academic paper search, and queries in precision medicine. We design a novel pipeline to synthesize realistic user queries that integrate diverse relational information and complex textual properties, together with their ground-truth answers (items). We conduct rigorous human evaluation to validate the quality of our synthesized queries. We further enhance the benchmark with high-quality human-generated queries to provide an authentic reference. STARK serves as a comprehensive testbed for evaluating the performance of retrieval systems driven by large language models (LLMs). Our experiments suggest that STARK presents significant challenges to current retrieval and LLM systems, highlighting the need for more capable semi-structured retrieval systems.
STaRK: Benchmarking LLM Retrieval on Textual and Relational Knowledge Bases
[ "Shirley Wu", "Shiyu Zhao", "Michihiro Yasunaga", "Kexin Huang", "Kaidi Cao", "Qian Huang", "Vassilis N. Ioannidis", "Karthik Subbian", "James Zou", "Jure Leskovec" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2404.13207
[ "https://github.com/snap-stanford/stark" ]
https://huggingface.co/papers/2404.13207
0
0
0
10
[]
[ "snap-stanford/stark" ]
[ "snap-stanford/SKB-Explorer", "snap-stanford/stark-leaderboard", "zsyJosh/stark" ]
[]
[ "snap-stanford/stark" ]
[ "snap-stanford/SKB-Explorer", "snap-stanford/stark-leaderboard", "zsyJosh/stark" ]
1
null
https://openreview.net/forum?id=QLO0pXYKVi
@inproceedings{ yuan2024fusu, title={{FUSU}: A Multi-temporal-source Land Use Change Segmentation Dataset for Fine-grained Urban Semantic Understanding}, author={Shuai Yuan and Guancong Lin and Lixian Zhang and Runmin Dong and Jinxiao Zhang and Shuang Chen and Juepeng Zheng and Jie Wang and Haohuan Fu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=QLO0pXYKVi} }
Fine urban change segmentation using multi-temporal remote sensing images is essential for understanding human-environment interactions in urban areas. Although there have been advances in high-quality land cover datasets that reveal the physical features of urban landscapes, the lack of fine-grained land use datasets hinders a deeper understanding of how human activities are distributed across landscapes and the impact of these activities on the environment, thus constraining proper technique development. To address this, we introduce FUSU, the first fine-grained land use change segmentation dataset for Fine-grained Urban Semantic Understanding. FUSU features the most detailed land use classification system to date, with 17 classes and 30 billion pixels of annotations. It includes bi-temporal high-resolution satellite images with 0.2-0.5 m ground sample distance and monthly optical and radar satellite time series, covering 847 km^2 across five urban areas in southern and northern China with different geographical features. The fine-grained land use pixel-wise annotations and high spatial-temporal resolution data provide a robust foundation for developing deep learning models that offer contextual insights on human activities and urbanization. To fully leverage FUSU, we propose a unified time-series architecture for both change detection and segmentation. We benchmark various methods on FUSU for several tasks. Dataset and code are available at: https://github.com/yuanshuai0914/FUSU.
FUSU: A Multi-temporal-source Land Use Change Segmentation Dataset for Fine-grained Urban Semantic Understanding
[ "Shuai Yuan", "Guancong Lin", "Lixian Zhang", "Runmin Dong", "Jinxiao Zhang", "Shuang Chen", "Juepeng Zheng", "Jie Wang", "Haohuan Fu" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2405.19055
[ "https://github.com/yuanshuai0914/fusu" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=QIJQ1qCGqV
@inproceedings{ kim2024text, title={Text to Blind Motion}, author={Hee Jae Kim and Kathakoli Sengupta and Masaki Kuribayashi and Hernisa Kacorri and Eshed Ohn-Bar}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=QIJQ1qCGqV} }
People who are blind perceive the world differently than those who are sighted, which can result in distinct motion characteristics. For instance, when crossing at an intersection, blind individuals may have different patterns of movement, such as veering more from a straight path or using touch-based exploration around curbs and obstacles. These behaviors may appear less predictable to motion models embedded in technologies such as autonomous vehicles. Yet, the ability of 3D motion models to capture such behavior has not been previously studied, as existing datasets for 3D human motion currently lack diversity and are biased toward people who are sighted. In this work, we introduce BlindWays, the first multimodal motion benchmark for pedestrians who are blind. We collect 3D motion data using wearable sensors with 11 blind participants navigating eight different routes in a real-world urban setting. Additionally, we provide rich textual descriptions that capture the distinctive movement characteristics of blind pedestrians and their interactions with both the navigation aid (e.g., a white cane or a guide dog) and the environment. We benchmark state-of-the-art 3D human prediction models, finding poor performance with off-the-shelf and pre-training-based methods for our novel task. To contribute toward safer and more reliable systems that can seamlessly reason over diverse human movements in their environments, our text-and-motion benchmark is available at https://blindways.github.io/.
Text to Blind Motion
[ "Hee Jae Kim", "Kathakoli Sengupta", "Masaki Kuribayashi", "Hernisa Kacorri", "Eshed Ohn-Bar" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=QCY01LvyKm
@inproceedings{ chasmai2024the, title={The iNaturalist Sounds Dataset}, author={Mustafa Chasmai and Alexander Shepard and Subhransu Maji and Grant Van Horn}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=QCY01LvyKm} }
We present the iNaturalist Sounds Dataset (iNatSounds), a collection of 230,000 audio files capturing sounds from over 5,500 species, contributed by more than 27,000 recordists worldwide. The dataset encompasses sounds from birds, mammals, insects, reptiles, and amphibians, with audio and species labels derived from observations submitted to iNaturalist, a global citizen science platform. Each recording in the dataset varies in length and includes a single species annotation. We benchmark multiple backbone architectures, comparing multiclass classification objectives with multilabel objectives. Despite weak labeling, we demonstrate that iNatSounds serves as a useful pretraining resource by benchmarking it on strongly labeled downstream evaluation datasets. The dataset is available as a single, freely accessible archive, promoting accessibility and research in this important domain. We envision models trained on this data powering next-generation public engagement applications, and assisting biologists, ecologists, and land use managers in processing large audio collections, thereby contributing to the understanding of species compositions in diverse soundscapes.
The iNaturalist Sounds Dataset
[ "Mustafa Chasmai", "Alexander Shepard", "Subhransu Maji", "Grant Van Horn" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Q7xKdEMrrZ
@inproceedings{ liu2024asep, title={As{EP}: Benchmarking Deep Learning Methods for Antibody-specific Epitope Prediction}, author={ChuNan Liu and Lilian Denzler and Yihong Chen and Andrew CR Martin and Brooks Paige}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=Q7xKdEMrrZ} }
Epitope identification is vital for antibody design yet challenging due to the inherent variability in antibodies. While many deep learning methods have been developed for general protein binding site prediction tasks, whether they work for epitope prediction remains an understudied research question. The challenge is also heightened by the lack of a consistent evaluation pipeline with sufficient dataset size and epitope diversity. We introduce a filtered antibody-antigen complex structure dataset, AsEP (Antibody-specific Epitope Prediction). AsEP is the largest of its kind and provides clustered epitope groups, allowing the community to develop and test novel epitope prediction methods and evaluate their generalisability. AsEP comes with an easy-to-use interface in Python and pre-built graph representations of each antibody-antigen complex while also supporting customizable embedding methods. Using this new dataset, we benchmark several representative general protein-binding site prediction methods and find that their performances fall short of expectations for epitope prediction. To address this, we propose a novel method, WALLE, which leverages both unstructured modeling from protein language models and structural modeling from graph neural networks. WALLE demonstrates a 3-10X performance improvement over the baseline methods. Our empirical findings suggest that epitope prediction benefits from combining sequential features provided by language models with geometrical information from graph representations. This provides a guideline for future epitope prediction method design. In addition, we reformulate the task as bipartite link prediction, allowing convenient model performance attribution and interpretability. We open source our data and code at https://github.com/biochunan/AsEP-dataset.
AsEP: Benchmarking Deep Learning Methods for Antibody-specific Epitope Prediction
[ "ChuNan Liu", "Lilian Denzler", "Yihong Chen", "Andrew CR Martin", "Brooks Paige" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.18184
[ "https://github.com/biochunan/asep-dataset" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Q7lAqY41HH
@inproceedings{ yang2024crag, title={{CRAG} - Comprehensive {RAG} Benchmark}, author={Xiao Yang and Kai Sun and Hao Xin and Yushi Sun and Nikita Bhalla and Xiangsen Chen and Sajal Choudhary and Rongze Gui and Ziran Jiang and Ziyu JIANG and Lingkun Kong and Brian Moran and Jiaqi Wang and Yifan Ethan Xu and An Yan and Chenyu Yang and Eting Yuan and Hanwen Zha and Nan Tang and Lei Chen and Nicolas SCHEFFER and Yue Liu and Nirav Shah and Rakesh Wanga and Anuj Kumar and Wen-tau Yih and Xin Luna Dong}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=Q7lAqY41HH} }
Retrieval-Augmented Generation (RAG) has recently emerged as a promising solution to alleviate Large Language Models' (LLMs') lack of knowledge. Existing RAG datasets, however, do not adequately represent the diverse and dynamic nature of real-world Question Answering (QA) tasks. To bridge this gap, we introduce the Comprehensive RAG Benchmark (CRAG), a factual question answering benchmark of 4,409 question-answer pairs and mock APIs to simulate web and Knowledge Graph (KG) search. CRAG is designed to encapsulate a diverse array of questions across five domains and eight question categories, reflecting varied entity popularity from popular to long-tail, and temporal dynamisms ranging from years to seconds. Our evaluation on this benchmark highlights the gap to fully trustworthy QA. Whereas most advanced LLMs achieve $\le 34\%$ accuracy on CRAG, adding RAG in a straightforward manner improves the accuracy only to 44%. State-of-the-art industry RAG solutions answer only 63% of questions without any hallucination. CRAG also reveals much lower accuracy in answering questions regarding facts with higher dynamism, lower popularity, or higher complexity, suggesting future research directions. The CRAG benchmark laid the groundwork for a KDD Cup 2024 challenge, attracting thousands of participants and submissions. We commit to maintaining CRAG to serve research communities in advancing RAG solutions and general QA solutions. CRAG is available at https://github.com/facebookresearch/CRAG/.
CRAG - Comprehensive RAG Benchmark
[ "Xiao Yang", "Kai Sun", "Hao Xin", "Yushi Sun", "Nikita Bhalla", "Xiangsen Chen", "Sajal Choudhary", "Rongze Gui", "Ziran Jiang", "Ziyu JIANG", "Lingkun Kong", "Brian Moran", "Jiaqi Wang", "Yifan Ethan Xu", "An Yan", "Chenyu Yang", "Eting Yuan", "Hanwen Zha", "Nan Tang", "Lei Chen", "Nicolas SCHEFFER", "Yue Liu", "Nirav Shah", "Rakesh Wanga", "Anuj Kumar", "Wen-tau Yih", "Xin Luna Dong" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "https://github.com/facebookresearch/crag" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Q2sDuwtutB
@inproceedings{ chen2024textspace, title={Text-space Graph Foundation Models: Comprehensive Benchmarks and New Insights}, author={Zhikai Chen and Haitao Mao and Jingzhe Liu and Yu Song and Bingheng Li and Wei Jin and Bahare Fatemi and Anton Tsitsulin and Bryan Perozzi and Hui Liu and Jiliang Tang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=Q2sDuwtutB} }
Given the ubiquity of graph data and its applications in diverse domains, building a Graph Foundation Model (GFM) that can work well across different graphs and tasks with a unified backbone has recently garnered significant interest. A major obstacle to achieving this goal stems from the fact that graphs from different domains often exhibit diverse node features. Inspired by multi-modal models that align different modalities with natural language, text has recently been adopted to provide a unified feature space for diverse graphs. Despite the great potential of these text-space GFMs, current research in this field is hampered by two problems. First, the absence of a comprehensive benchmark with unified problem settings hinders a clear understanding of the comparative effectiveness and practical value of different text-space GFMs. Second, there is a lack of sufficient datasets to thoroughly explore the methods' full potential and verify their effectiveness across diverse settings. To address these issues, we conduct a comprehensive benchmark providing novel text-space datasets and comprehensive evaluation under unified problem settings. Empirical results provide new insights and inspire future research directions. Our code and data are publicly available from https://github.com/CurryTang/TSGFM.
Text-space Graph Foundation Models: Comprehensive Benchmarks and New Insights
[ "Zhikai Chen", "Haitao Mao", "Jingzhe Liu", "Yu Song", "Bingheng Li", "Wei Jin", "Bahare Fatemi", "Anton Tsitsulin", "Bryan Perozzi", "Hui Liu", "Jiliang Tang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.10727
[ "https://github.com/currytang/tsgfm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=PyTf2jj0SH
@inproceedings{ liu2024convbench, title={ConvBench: A Multi-Turn Conversation Evaluation Benchmark with Hierarchical Ablation Capability for Large Vision-Language Models}, author={Shuo Liu and Kaining Ying and Hao Zhang and Yue Yang and Yuqi Lin and Tianle Zhang and Chuanhao Li and Yu Qiao and Ping Luo and Wenqi Shao and Kaipeng Zhang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=PyTf2jj0SH} }
Multi-turn visual conversation is an important ability of real-world AI assistants. However, a corresponding evaluation benchmark has been missing. This paper presents ConvBench, a multi-turn conversation benchmark with hierarchical capability ablation evaluation for Large Vision-Language Models (LVLMs). ConvBench comprises 577 curated multi-turn conversations, encompassing 215 tasks. These tasks are broad and open-ended, resembling real-world user behaviors. ConvBench progressively examines the LVLMs' perception, reasoning, and creativity capabilities in each conversation and can decouple these capabilities in evaluations, thus enabling reliable error attribution. Besides, considering the diversity of open-ended questions, we introduce an efficient and reliable automatic evaluation framework. Experimental results reveal that ConvBench is a significant challenge for current LVLMs, even for GPT4V, which achieves only a 39.51% score. Besides, we report several insightful findings, for example that the weak perception of LVLMs inhibits their authentic strengths in reasoning and creation. We believe our design of hierarchical capabilities, decoupled capability evaluation, and multi-turn conversation can blaze a new trail in LVLM evaluation. Code and benchmark are released at https://github.com/shirlyliu64/ConvBench.
ConvBench: A Multi-Turn Conversation Evaluation Benchmark with Hierarchical Ablation Capability for Large Vision-Language Models
[ "Shuo Liu", "Kaining Ying", "Hao Zhang", "Yue Yang", "Yuqi Lin", "Tianle Zhang", "Chuanhao Li", "Yu Qiao", "Ping Luo", "Wenqi Shao", "Kaipeng Zhang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=PvVKUFhaNy
@inproceedings{ wang2024helpsteer, title={HelpSteer 2: Open-source dataset for training top-performing reward models}, author={Zhilin Wang and Yi Dong and Olivier Delalleau and Jiaqi Zeng and Gerald Shen and Daniel Egert and Jimmy J. Zhang and Makesh Narsimhan Sreedhar and Oleksii Kuchaiev}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=PvVKUFhaNy} }
High-quality preference datasets are essential for training reward models that can effectively guide large language models (LLMs) in generating high-quality responses aligned with human preferences. As LLMs become stronger and better aligned, permissively licensed preference datasets, such as Open Assistant, HH-RLHF, and HelpSteer need to be updated to remain effective for reward modeling. Methods that distil preference data from proprietary LLMs such as GPT-4 have restrictions on commercial usage imposed by model providers. To improve upon both generated responses and attribute labeling quality, we release HelpSteer2, a permissively licensed preference dataset (CC-BY-4.0). Using a powerful Nemotron-4-340B base model trained on HelpSteer2, we are able to achieve the SOTA score (92.0%) on Reward-Bench's primary dataset, outperforming currently listed open and proprietary models, as of June 12th, 2024. Notably, HelpSteer2 consists of only ten thousand response pairs, an order of magnitude fewer than existing preference datasets (e.g., HH-RLHF), which makes it highly efficient for training reward models. Our extensive experiments demonstrate that reward models trained with HelpSteer2 are effective in aligning LLMs. Additionally, we propose SteerLM 2.0, a model alignment approach that can effectively make use of the rich multi-attribute score predicted by our reward models. HelpSteer2 is available at https://huggingface.co/datasets/nvidia/HelpSteer2 and code is available at https://github.com/NVIDIA/NeMo-Aligner
HelpSteer 2: Open-source dataset for training top-performing reward models
[ "Zhilin Wang", "Yi Dong", "Olivier Delalleau", "Jiaqi Zeng", "Gerald Shen", "Daniel Egert", "Jimmy J. Zhang", "Makesh Narsimhan Sreedhar", "Oleksii Kuchaiev" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
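The HelpSteer2 abstract above gives the dataset location (https://huggingface.co/datasets/nvidia/HelpSteer2). A minimal sketch of loading it and listing its columns; the `train` split name and the attribute columns mentioned in the comment are assumptions rather than facts taken from this record:

```python
# Hypothetical sketch: load HelpSteer2 (nvidia/HelpSteer2) and inspect its schema.
# The "train" split name and attribute columns (e.g., helpfulness) are assumptions.
from datasets import load_dataset

helpsteer2 = load_dataset("nvidia/HelpSteer2", split="train")
print(f"{len(helpsteer2)} annotated rows")
print(helpsteer2.column_names)  # check which preference attributes are actually present
```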
null
https://openreview.net/forum?id=PnjbvbblGv
@inproceedings{ ao2024sdeval, title={{SD}-Eval: A Benchmark Dataset for Spoken Dialogue Understanding Beyond Words}, author={Junyi Ao and Yuancheng Wang and Xiaohai Tian and Dekun Chen and Jun Zhang and Lu Lu and Yuxuan Wang and Haizhou Li and Zhizheng Wu}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=PnjbvbblGv} }
Speech encompasses a wealth of information, including but not limited to content, paralinguistic, and environmental information. This comprehensive nature of speech significantly impacts communication and is crucial for human-computer interaction. Chat-Oriented Large Language Models (LLMs), known for their general-purpose assistance capabilities, have evolved to handle multi-modal inputs, including speech. Although these models can be adept at recognizing and analyzing speech, they often fall short of generating appropriate responses. We argue that this is due to the lack of principles on task definition and model development, which requires open-source datasets and metrics suitable for model evaluation. To bridge the gap, we present SD-Eval, a benchmark dataset aimed at multidimensional evaluation of spoken dialogue understanding and generation. SD-Eval focuses on paralinguistic and environmental information and includes 7,303 utterances, amounting to 8.76 hours of speech data. The data is aggregated from eight public datasets, representing four perspectives: emotion, accent, age, and background sound. To assess the SD-Eval benchmark dataset, we implement three different models and construct a training set following a process similar to that of SD-Eval. The training set contains 1,052.72 hours of speech data and 724.4k utterances. We also conduct a comprehensive evaluation using objective evaluation methods (e.g. BLEU and ROUGE), subjective evaluations and LLM-based metrics for the generated responses. Models conditioned with paralinguistic and environmental information outperform their counterparts in both objective and subjective measures. Moreover, experiments demonstrate that LLM-based metrics show a higher correlation with human evaluation compared to traditional metrics. We open-source SD-Eval at https://github.com/amphionspace/SD-Eval.
SD-Eval: A Benchmark Dataset for Spoken Dialogue Understanding Beyond Words
[ "Junyi Ao", "Yuancheng Wang", "Xiaohai Tian", "Dekun Chen", "Jun Zhang", "Lu Lu", "Yuxuan Wang", "Haizhou Li", "Zhizheng Wu" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "https://github.com/amphionspace/sd-eval" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Pm0UzCehgB
@inproceedings{ sivasubramaniam2024smtexttoquery, title={{SM}3-Text-to-Query: Synthetic Multi-Model Medical Text-to-Query Benchmark}, author={Sithursan Sivasubramaniam and Cedric Osei-Akoto and Yi Zhang and Kurt Stockinger and Jonathan Fuerst}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=Pm0UzCehgB} }
Electronic health records (EHRs) are stored in various database systems with different database models on heterogeneous storage architectures, such as relational databases, document stores, or graph databases. These different database models have a big impact on query complexity and performance. While this has been a known fact in database research, its implications for the growing number of Text-to-Query systems have surprisingly not been investigated so far. In this paper, we present SM3-Text-to-Query, the first multi-model medical Text-to-Query benchmark based on synthetic patient data from Synthea, following the SNOMED-CT taxonomy---a widely used knowledge graph ontology covering medical terminology. SM3-Text-to-Query provides data representations for relational databases (PostgreSQL), document stores (MongoDB), and graph databases (Neo4j and GraphDB (RDF)), allowing the evaluation across four popular query languages, namely SQL, MQL, Cypher, and SPARQL. We systematically and manually develop 408 template questions, which we augment to construct a benchmark of 10K diverse natural language question/query pairs for these four query languages (40K pairs overall). On our dataset, we evaluate several common in-context-learning (ICL) approaches for a set of representative closed and open-source LLMs. Our evaluation sheds light on the trade-offs between database models and query languages for different ICL strategies and LLMs. Last, SM3-Text-to-Query is easily extendable to additional query languages or real, standard-based patient databases.
SM3-Text-to-Query: Synthetic Multi-Model Medical Text-to-Query Benchmark
[ "Sithursan Sivasubramaniam", "Cedric Osei-Akoto", "Yi Zhang", "Kurt Stockinger", "Jonathan Fuerst" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2411.05521
[ "https://github.com/jf87/sm3-text-to-query" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=PcbSZwVVc5
@inproceedings{ wang2024dreamcatcher, title={DreamCatcher: A Wearer-aware Multi-modal Sleep Event Dataset Based on Earables in Non-restrictive Environments}, author={Zeyu Wang and Xiyuxing Zhang and Ruotong Yu and Yuntao Wang and Kenneth Christofferson and Jingru Zhang and Alex Mariakakis and Yuanchun Shi}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=PcbSZwVVc5} }
Poor quality sleep can be characterized by the occurrence of events ranging from body movement to breathing impairment. Widely available earbuds equipped with sensors (also known as earables) can be combined with a sleep event detection algorithm to offer a convenient alternative to laborious clinical tests for individuals suffering from sleep disorders. Although various solutions utilizing such devices have been proposed to detect sleep events, they ignore the fact that individuals often share sleeping spaces with roommates or couples. To address this issue, we introduce DreamCatcher, the first publicly available dataset for wearer-aware sleep event algorithm development on earables. DreamCatcher encompasses eight distinct sleep events, including synchronous dual-channel audio and motion data collected from 12 pairs (24 participants), totaling 210 hours (420 person-hours) with fine-grained labels. We tested multiple benchmark models on three tasks related to sleep event detection, demonstrating the usability and unique challenges of DreamCatcher. We hope that the proposed DreamCatcher can inspire other researchers to further explore efficient wearer-aware human vocal activity sensing on earables. DreamCatcher is publicly available at https://github.com/thuhci/DreamCatcher.
DreamCatcher: A Wearer-aware Multi-modal Sleep Event Dataset Based on Earables in Non-restrictive Environments
[ "Zeyu Wang", "Xiyuxing Zhang", "Ruotong Yu", "Yuntao Wang", "Kenneth Christofferson", "Jingru Zhang", "Alex Mariakakis", "Yuanchun Shi" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=PZbFW8ZrSJ
@inproceedings{ simonetto2024tabularbench, title={TabularBench: Benchmarking Adversarial Robustness for Tabular Deep Learning in Real-world Use-cases}, author={Thibault Simonetto and Salah GHAMIZI and Maxime Cordy}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=PZbFW8ZrSJ} }
While adversarial robustness in computer vision is a mature research field, fewer researchers have tackled evasion attacks against tabular deep learning, and even fewer have investigated robustification mechanisms and reliable defenses. We hypothesize that this lag in research on tabular adversarial attacks is in part due to the lack of standardized benchmarks. To fill this gap, we propose TabularBench, the first comprehensive benchmark of the robustness of tabular deep learning classification models. We evaluated adversarial robustness with CAA, an ensemble of gradient and search attacks which was recently demonstrated to be the most effective attack against tabular models. In addition to our open benchmark https://github.com/serval-uni-lu/tabularbench where we welcome submissions of new models and defenses, we implement 7 robustification mechanisms inspired by state-of-the-art defenses in computer vision and propose the largest benchmark of robust tabular deep learning, with over 200 models across five critical scenarios in finance, healthcare and security. We curated real datasets for each use case, augmented with hundreds of thousands of realistic synthetic inputs, and trained and assessed our models with and without data augmentations. We open-source our library that provides API access to all our pre-trained robust tabular models, and the largest datasets of real and synthetic tabular inputs. Finally, we analyze the impact of various defenses on robustness and provide actionable insights to design new defenses and robustification mechanisms.
TabularBench: Benchmarking Adversarial Robustness for Tabular Deep Learning in Real-world Use-cases
[ "Thibault Simonetto", "Salah GHAMIZI", "Maxime Cordy" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2408.07579
[ "https://github.com/serval-uni-lu/tabularbench" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=PSDXcYjrkO
@inproceedings{ lu2024towards, title={Towards Comprehensive Detection of Chinese Harmful Memes}, author={Junyu Lu and Bo Xu and Xiaokun Zhang and WangHongbo and Haohao Zhu and Dongyu Zhang and Liang Yang and Hongfei Lin}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=PSDXcYjrkO} }
Harmful memes have proliferated on the Chinese Internet, while research on detecting Chinese harmful memes significantly lags behind due to the absence of reliable datasets and effective detectors. To this end, we present a comprehensive study of Chinese harmful meme detection. We introduce ToxiCN MM, the first Chinese harmful meme dataset, which consists of 12,000 samples with fine-grained annotations for meme types. Additionally, we propose a baseline detector, Multimodal Harmful Knowledge Enhancement (MHKE), designed to incorporate contextual information from meme content, thereby enhancing the model's understanding of Chinese memes. In the evaluation phase, we conduct extensive quantitative experiments and qualitative analyses on multiple baselines, including LLMs and our MHKE. Experimental results indicate that detecting Chinese harmful memes is challenging for existing models, while also demonstrating the effectiveness of MHKE.
Towards Comprehensive Detection of Chinese Harmful Memes
[ "Junyu Lu", "Bo Xu", "Xiaokun Zhang", "WangHongbo", "Haohao Zhu", "Dongyu Zhang", "Liang Yang", "Hongfei Lin" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2410.02378
[ "https://github.com/dut-lujunyu/toxicn_mm" ]
https://huggingface.co/papers/2410.02378
0
0
0
8
[]
[ "JunyuLu/ToxiCN_MM" ]
[]
[]
[ "JunyuLu/ToxiCN_MM" ]
[]
1
null
https://openreview.net/forum?id=PFwlw9bnAr
@inproceedings{ yang2024care, title={{CARE}: a Benchmark Suite for the Classification and Retrieval of Enzymes}, author={Jason Yang and Ariane Mora and Shengchao Liu and Bruce James Wittmann and Anima Anandkumar and Frances H. Arnold and Yisong Yue}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=PFwlw9bnAr} }
Enzymes are important proteins that catalyze chemical reactions. In recent years, machine learning methods have emerged to predict enzyme function from sequence; however, there are no standardized benchmarks to evaluate these methods. We introduce CARE, a benchmark and dataset suite for the Classification And Retrieval of Enzymes. CARE centers on two tasks: (1) classification of a protein sequence by its enzyme commission (EC) number and (2) retrieval of an EC number given a chemical reaction. For each task, we design train-test splits to evaluate different kinds of out-of-distribution generalization that are relevant to real use cases. For the classification task, we provide baselines for state-of-the-art methods. Because the retrieval task has not been previously formalized, we propose a method called Contrastive Reaction-EnzymE Pretraining (CREEP) as one of the first baselines for this task and compare it to the recent method, CLIPZyme. CARE is available at https://github.com/jsunn-y/CARE/.
CARE: a Benchmark Suite for the Classification and Retrieval of Enzymes
[ "Jason Yang", "Ariane Mora", "Shengchao Liu", "Bruce James Wittmann", "Anima Anandkumar", "Frances H. Arnold", "Yisong Yue" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.15669
[ "https://github.com/jsunn-y/care" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=PCjK8dqrWW
@inproceedings{ boisvert2024workarena, title={WorkArena++: Towards Compositional Planning and Reasoning-based Common Knowledge Work Tasks}, author={L{\'e}o Boisvert and Megh Thakkar and Maxime Gasse and Massimo Caccia and Thibault Le Sellier de Chezelles and Quentin Cappart and Nicolas Chapados and Alexandre Lacoste and Alexandre Drouin}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=PCjK8dqrWW} }
The ability of large language models (LLMs) to mimic human-like intelligence has led to a surge in LLM-based autonomous agents. Though recent LLMs seem capable of planning and reasoning given user instructions, their effectiveness in applying these capabilities for autonomous task solving remains underexplored. This is especially true in enterprise settings, where automated agents hold the promise of a high impact. To fill this gap, we propose WorkArena++, a novel benchmark consisting of 682 tasks corresponding to realistic workflows routinely performed by knowledge workers. WorkArena++ is designed to evaluate the planning, problem-solving, logical/arithmetic reasoning, retrieval, and contextual understanding abilities of web agents. Our empirical studies across state-of-the-art LLMs and vision-language models (VLMs), as well as human workers, reveal several challenges for such models to serve as useful assistants in the workplace. In addition to the benchmark, we provide a mechanism to effortlessly generate thousands of ground-truth observation/action traces, which can be used for fine-tuning existing models. Overall, we expect this work to serve as a useful resource to help the community progress towards capable autonomous agents. The benchmark can be found at https://github.com/ServiceNow/WorkArena.
WorkArena++: Towards Compositional Planning and Reasoning-based Common Knowledge Work Tasks
[ "Léo Boisvert", "Megh Thakkar", "Maxime Gasse", "Massimo Caccia", "Thibault Le Sellier de Chezelles", "Quentin Cappart", "Nicolas Chapados", "Alexandre Lacoste", "Alexandre Drouin" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2407.05291
[ "https://github.com/servicenow/workarena" ]
https://huggingface.co/papers/2407.05291
1
1
0
9
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=Ogw1sSo9FP
@inproceedings{ li2024tegdb, title={{TEG}-{DB}: A Comprehensive Dataset and Benchmark of Textual-Edge Graphs}, author={Zhuofeng Li and Zixing Gou and Xiangnan Zhang and Zhongyuan Liu and Sirui Li and Yuntong Hu and Chen Ling and Zheng Zhang and Liang Zhao}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=Ogw1sSo9FP} }
Text-Attributed Graphs (TAGs) augment graph structures with natural language descriptions, facilitating detailed depictions of data and their interconnections across various real-world settings. However, existing TAG datasets predominantly feature textual information only at the nodes, with edges typically represented by mere binary or categorical attributes. This lack of rich textual edge annotations significantly limits the exploration of contextual relationships between entities, hindering deeper insights into graph-structured data. To address this gap, we introduce Textual-Edge Graphs Datasets and Benchmark (TEG-DB), a comprehensive and diverse collection of benchmark textual-edge datasets featuring rich textual descriptions on nodes and edges. The TEG-DB datasets are large-scale and encompass a wide range of domains, from citation networks to social networks. In addition, we conduct extensive benchmark experiments on TEG-DB to assess the extent to which current techniques, including pre-trained language models, graph neural networks, and their combinations, can utilize textual node and edge information. Our goal is to elicit advancements in textual-edge graph research, specifically in developing methodologies that exploit rich textual node and edge descriptions to enhance graph analysis and provide deeper insights into complex real-world networks. The entire TEG-DB project is publicly available as an open-source repository on GitHub at https://github.com/Zhuofeng-Li/TEG-Benchmark.
TEG-DB: A Comprehensive Dataset and Benchmark of Textual-Edge Graphs
[ "Zhuofeng Li", "Zixing Gou", "Xiangnan Zhang", "Zhongyuan Liu", "Sirui Li", "Yuntong Hu", "Chen Ling", "Zheng Zhang", "Liang Zhao" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.10310
[ "https://github.com/zhuofeng-li/teg-benchmark" ]
https://huggingface.co/papers/2406.10310
0
0
0
9
[]
[]
[]
[]
[]
[]
1
null
https://openreview.net/forum?id=OfXwix3NRH
@inproceedings{ leong2024shdocs, title={{SHD}ocs: A dataset, benchmark, and method to efficiently generate high-quality, real-world specular highlight data with near-perfect alignment}, author={Jovin Leong and Koa Ming Di and Benjamin Cham Wen Bin and Shaun Heng}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=OfXwix3NRH} }
A frequent problem in vision-based reasoning tasks such as object detection and optical character recognition (OCR) is the persistence of specular highlights. Specular highlights appear as bright spots of glare that occur due to the concentrated reflection of light; these spots manifest as image artifacts which occlude computer vision models and are challenging to reconstruct. Despite this, specular highlight removal receives relatively little attention due to the difficulty of acquiring high-quality, real-world data. We introduce a method to generate specular highlight data with near-perfect alignment and present SHDocs—a dataset of specular highlights on document images created using our method. Through our benchmark, we demonstrate that our dataset enables us to surpass the performance of state-of-the-art specular highlight removal models and downstream OCR tasks. We release our dataset, code, and methods publicly to motivate further exploration of image enhancement for practical computer vision challenges.
SHDocs: A dataset, benchmark, and method to efficiently generate high-quality, real-world specular highlight data with near-perfect alignment
[ "Jovin Leong", "Koa Ming Di", "Benjamin Cham Wen Bin", "Shaun Heng" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=OfOCl3dGcF
@inproceedings{ huang2024conme, title={ConMe: Rethinking Evaluation of Compositional Reasoning for Modern {VLM}s}, author={Irene Huang and Wei Lin and Muhammad Jehanzeb Mirza and Jacob A Hansen and Sivan Doveh and Victor Ion Butoi and Roei Herzig and Assaf Arbelle and Hilde Kuehne and Trevor Darrell and Chuang Gan and Aude Oliva and Rogerio Feris and Leonid Karlinsky}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=OfOCl3dGcF} }
Compositional Reasoning (CR) entails grasping the significance of attributes, relations, and word order. Recent Vision-Language Models (VLMs), comprising a visual encoder and a Large Language Model (LLM) decoder, have demonstrated remarkable proficiency in such reasoning tasks. This prompts a crucial question: have VLMs effectively tackled the CR challenge? We conjecture that existing CR benchmarks may not adequately push the boundaries of modern VLMs due to the reliance on an LLM-only negative text generation pipeline. Consequently, the negatives produced either appear as outliers from the natural language distribution learned by VLMs' LLM decoders or as improbable within the corresponding image context. To address these limitations, we introduce ConMe (an abbreviation for "Confuse Me") -- a compositional reasoning benchmark and a novel data generation pipeline leveraging VLMs to produce 'hard CR Q&A'. Through a new concept of VLMs conversing with each other to collaboratively expose their weaknesses, our pipeline autonomously generates, evaluates, and selects challenging compositional reasoning questions, establishing a robust CR benchmark, which is also subsequently validated manually. Our benchmark provokes a noteworthy decrease in CR performance of up to 33% compared to preceding benchmarks, reinstating the CR challenge even for state-of-the-art VLMs.
ConMe: Rethinking Evaluation of Compositional Reasoning for Modern VLMs
[ "Irene Huang", "Wei Lin", "Muhammad Jehanzeb Mirza", "Jacob A Hansen", "Sivan Doveh", "Victor Ion Butoi", "Roei Herzig", "Assaf Arbelle", "Hilde Kuehne", "Trevor Darrell", "Chuang Gan", "Aude Oliva", "Rogerio Feris", "Leonid Karlinsky" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.08164
[ "https://github.com/jmiemirza/conme" ]
https://huggingface.co/papers/2406.08164
1
0
0
14
[ "conme/ConMe" ]
[]
[]
[ "conme/ConMe" ]
[]
[]
1
null
https://openreview.net/forum?id=OTjTKFk7gb
@inproceedings{ su2024a, title={A Novel Benchmark for Decision-Making in Uncertain and Competitive Games}, author={Kefan Su and Yusen Huo and Zhilin Zhang and Shuai Dou and Chuan Yu and Jian Xu and Zongqing Lu and Bo Zheng}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=OTjTKFk7gb} }
The study of decision-making in large-scale game environments is a crucial domain within artificial intelligence, possessing substantial implications for practical applications. Nevertheless, the lack of comprehensive, realistic game environments and associated datasets has limited progress in this field. To address this and promote research on this important problem, we introduce the Large-Scale Auction (LSA) Benchmark derived from online advertising, a rapidly expanding industry worth $626.8 billion in 2023. The LSA Benchmark consists of an environment and the corresponding dataset. The LSA Environment is augmented with a deep generative model to reduce the gap between the simulation environment and reality while avoiding the risks of sensitive data exposure. The LSA Dataset comprises over 500 million records, totaling 40 GB in size, and contains trajectories of 50 diverse agents competing with each other, for effective offline training. We evaluate different types of existing algorithms in the LSA Environment. We hope the LSA benchmark can promote the development of decision-making in large-scale games.
A Novel Benchmark for Decision-Making in Uncertain and Competitive Games
[ "Kefan Su", "Yusen Huo", "Zhilin Zhang", "Shuai Dou", "Chuan Yu", "Jian Xu", "Zongqing Lu", "Bo Zheng" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=OPx8Dd27zT
@inproceedings{ silcock2024newswire, title={Newswire: A Large-Scale Structured Database of a Century of Historical News}, author={Emily Silcock and Abhishek Arora and Luca D'Amico-Wong and Melissa Dell}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=OPx8Dd27zT} }
In the U.S. historically, local newspapers drew their content largely from newswires like the Associated Press. Historians argue that newswires played a pivotal role in creating a national identity and shared understanding of the world, but there is no comprehensive archive of the content sent over newswires. We reconstruct such an archive by applying a customized deep learning pipeline to hundreds of terabytes of raw image scans from thousands of local newspapers. The resulting dataset contains 2.7 million unique public domain U.S. news wire articles, written between 1878 and 1977. Locations in these articles are georeferenced, topics are tagged using customized neural topic classification, named entities are recognized, and individuals are disambiguated to Wikipedia using a novel entity disambiguation model. To construct the Newswire dataset, we first recognize newspaper layouts and transcribe around 138 million structured article texts from raw image scans. We then use a customized neural bi-encoder model to de-duplicate reproduced articles, in the presence of considerable abridgement and noise, quantifying how widely each article was reproduced. A text classifier is used to ensure that we only include newswire articles, which historically are in the public domain. The structured data that accompany the texts provide rich information about the who (disambiguated individuals), what (topics), and where (georeferencing) of the news that millions of Americans read over the course of a century. We also include Library of Congress metadata information about the newspapers that ran the articles on their front pages. The Newswire dataset is useful both for large language modeling - expanding training data beyond what is available from modern web texts - and for studying a diversity of questions in computational linguistics, social science, and the digital humanities.
Newswire: A Large-Scale Structured Database of a Century of Historical News
[ "Emily Silcock", "Abhishek Arora", "Luca D'Amico-Wong", "Melissa Dell" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2406.09490
[ "https://github.com/dell-research-harvard/newswire" ]
https://huggingface.co/papers/2406.09490
0
0
0
4
[ "dell-research-harvard/wire-classifier", "dell-research-harvard/LinkMentions", "dell-research-harvard/byline-detection", "dell-research-harvard/topic-antitrust", "dell-research-harvard/topic-protests", "dell-research-harvard/topic-politics", "dell-research-harvard/topic-fire", "dell-research-harvard/topic-obits", "dell-research-harvard/topic-sport", "dell-research-harvard/topic-labor_movement", "dell-research-harvard/topic-crime", "dell-research-harvard/topic-govt_regulation", "dell-research-harvard/topic-civil_rights", "dell-research-harvard/topic-weather" ]
[ "dell-research-harvard/newswire" ]
[]
[ "dell-research-harvard/wire-classifier", "dell-research-harvard/LinkMentions", "dell-research-harvard/byline-detection", "dell-research-harvard/topic-antitrust", "dell-research-harvard/topic-protests", "dell-research-harvard/topic-politics", "dell-research-harvard/topic-fire", "dell-research-harvard/topic-obits", "dell-research-harvard/topic-sport", "dell-research-harvard/topic-labor_movement", "dell-research-harvard/topic-crime", "dell-research-harvard/topic-govt_regulation", "dell-research-harvard/topic-civil_rights", "dell-research-harvard/topic-weather" ]
[ "dell-research-harvard/newswire" ]
[]
1
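The Newswire record above lists the dell-research-harvard/newswire dataset. Since the abstract describes millions of articles, a streaming load avoids downloading the full corpus; this sketch assumes the dataset's default configuration supports streaming and exposes a `train` split:

```python
# Hypothetical sketch: stream a few Newswire records (dell-research-harvard/newswire)
# without downloading the whole corpus. Split and field names are assumptions.
from itertools import islice
from datasets import load_dataset

newswire = load_dataset("dell-research-harvard/newswire", split="train", streaming=True)
for record in islice(newswire, 3):
    print(sorted(record.keys()))  # inspect the schema (article text, dates, topics, ...)
```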
null
https://openreview.net/forum?id=OOItbUUQcd
@inproceedings{ werner2024a, title={A Cross-Domain Benchmark for Active Learning}, author={Thorben Werner and Johannes Burchert and Maximilian Stubbemann and Lars Schmidt-Thieme}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=OOItbUUQcd} }
Active Learning (AL) deals with identifying the most informative samples for labeling to reduce data annotation costs for supervised learning tasks. AL research suffers from the fact that lifts reported in the literature generalize poorly and that only a small number of repetitions of experiments are conducted. To overcome these obstacles, we propose CDALBench, the first active learning benchmark which includes tasks in computer vision, natural language processing and tabular learning. Furthermore, by providing an efficient, greedy oracle, CDALBench can be evaluated with 50 runs for each experiment. We show that both the cross-domain character and a large number of repetitions are crucial for a sophisticated evaluation of AL research. Concretely, we show that the superiority of specific methods varies over the different domains, making it important to evaluate Active Learning with a cross-domain benchmark. Additionally, we show that a large number of runs is crucial. When only three runs are conducted, as is often done in the literature, the superiority of specific methods can vary strongly with the specific runs. This effect is so strong that, depending on the seed, even a well-established method's performance can be significantly better or significantly worse than random on the same dataset.
A Cross-Domain Benchmark for Active Learning
[ "Thorben Werner", "Johannes Burchert", "Maximilian Stubbemann", "Lars Schmidt-Thieme" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2408.00426
[ "https://github.com/wernerth94/a-cross-domain-benchmark-for-active-learning" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=OCrxDanhoO
@inproceedings{ junczyk2024bigos, title={{BIGOS} Benchmark for Polish {ASR}: Curated Datasets and Tools for Reproducible Evaluation}, author={Micha{\l} Junczyk}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=OCrxDanhoO} }
Speech datasets available in the public domain are often underutilized because of challenges in accessibility and interoperability. To address this, a system to survey, catalog, and curate existing speech datasets was developed, enabling reproducible evaluation of automatic speech recognition (ASR) systems. The system was applied to curate over 24 datasets and evaluate 25 ASR models, with a specific focus on Polish. This research represents the most extensive comparison to date of commercial and free ASR systems for the Polish language, drawing insights from 600 system-model-test set evaluations across 8 analysis scenarios. Curated datasets and benchmark results are available publicly. The evaluation tools are open-sourced to support reproducibility of the benchmark, encourage community-driven improvements, and facilitate adaptation for other languages.
BIGOS Benchmark for Polish ASR: Curated Datasets and Tools for Reproducible Evaluation
[ "Michał Junczyk" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=NrCPBJSOOc
@inproceedings{ wu2024daco, title={{DACO}: Towards Application-Driven and Comprehensive Data Analysis via Code Generation}, author={Xueqing Wu and Rui Zheng and Jingzhen Sha and Te-Lin Wu and Hanyu Zhou and Tang Mohan and Kai-Wei Chang and Nanyun Peng and Haoran Huang}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=NrCPBJSOOc} }
Data analysis is a crucial analytical process essential for deriving insights from real-world databases. As shown in Figure 1, the need for data analysis typically arises from specific application scenarios, and requires diverse reasoning skills including mathematical reasoning, logical reasoning, and strategic reasoning. Existing work often focuses on simple factual retrieval or arithmetic resolution and is thus insufficient for addressing complex real-world queries. This work aims to propose new resources and benchmarks for this crucial yet challenging and under-explored task. Due to the prohibitively high cost of collecting expert annotations, we use large language models (LLMs) enhanced by code generation to automatically generate high-quality data analysis, which will later be refined by human annotators. We construct the **DACO dataset**, containing (1) 440 databases (of tabular data) collected from real-world scenarios, (2) ~2k automatically generated query-answer pairs that can serve as weak supervision for model training, and (3) a concentrated but high-quality test set with human refined annotations that serves as our main evaluation benchmark. Experiments show that while LLMs like GPT-4 exhibit promising data analysis capabilities, they are still evaluated as less helpful than human-written analysis in 58.1% of cases. Leveraging our weak supervision data, we experiment with various fine-tuning methods, including supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF). Our trained model outperforms existing baselines for table question answering, and RLHF further boosts the helpfulness of generated analysis in 58.5% of cases. Data and code are released at https://github.com/shirley-wu/daco.
DACO: Towards Application-Driven and Comprehensive Data Analysis via Code Generation
[ "Xueqing Wu", "Rui Zheng", "Jingzhen Sha", "Te-Lin Wu", "Hanyu Zhou", "Tang Mohan", "Kai-Wei Chang", "Nanyun Peng", "Haoran Huang" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2403.02528
[ "https://github.com/shirley-wu/daco" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
null
https://openreview.net/forum?id=Na2gnQFkn8
@inproceedings{ tsuruta2024a, title={A {SARS}-CoV-2 Interaction Dataset and {VHH} Sequence Corpus for Antibody Language Models}, author={Hirofumi Tsuruta and Hiroyuki Yamazaki and Ryota Maeda and Ryotaro Tamura and Akihiro Imura}, booktitle={The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2024}, url={https://openreview.net/forum?id=Na2gnQFkn8} }
Antibodies are crucial proteins produced by the immune system to eliminate harmful foreign substances and have become pivotal therapeutic agents for treating human diseases. To accelerate the discovery of antibody therapeutics, there is growing interest in constructing language models using antibody sequences. However, the applicability of pre-trained language models for antibody discovery has not been thoroughly evaluated due to the scarcity of labeled datasets. To overcome these limitations, we introduce AVIDa-SARS-CoV-2, a dataset featuring interactions between antigens and the variable domain of the heavy chain of heavy-chain antibodies (VHHs), obtained from two alpacas immunized with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) spike proteins. AVIDa-SARS-CoV-2 includes binary labels indicating the binding or non-binding of diverse VHH sequences to 12 SARS-CoV-2 mutants, such as the Delta and Omicron variants. Furthermore, we release VHHCorpus-2M, a pre-training dataset for antibody language models, containing over two million VHH sequences. We report benchmark results for predicting SARS-CoV-2-VHH binding using VHHBERT pre-trained on VHHCorpus-2M and existing general protein and antibody-specific pre-trained language models. These results confirm that AVIDa-SARS-CoV-2 provides valuable benchmarks for evaluating the representation capabilities of antibody language models for binding prediction, thereby facilitating the development of AI-driven antibody discovery. The datasets are available at https://datasets.cognanous.com.
A SARS-CoV-2 Interaction Dataset and VHH Sequence Corpus for Antibody Language Models
[ "Hirofumi Tsuruta", "Hiroyuki Yamazaki", "Ryota Maeda", "Ryotaro Tamura", "Akihiro Imura" ]
NeurIPS.cc/2024/Datasets_and_Benchmarks_Track
poster
2405.18749
[ "https://github.com/cognano/AVIDa-SARS-CoV-2" ]
https://huggingface.co/papers/2405.18749
0
0
0
5
[ "COGNANO/VHHBERT" ]
[ "COGNANO/VHHCorpus-2M", "COGNANO/AVIDa-SARS-CoV-2", "SaProtHub/Dateset-AVIDa-SARS-CoV-2-Alpha" ]
[]
[ "COGNANO/VHHBERT" ]
[ "COGNANO/VHHCorpus-2M", "COGNANO/AVIDa-SARS-CoV-2", "SaProtHub/Dateset-AVIDa-SARS-CoV-2-Alpha" ]
[]
1
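The AVIDa-SARS-CoV-2 record above lists the pre-trained COGNANO/VHHBERT model. A hedged sketch of embedding a VHH sequence with it via `transformers`; that the checkpoint ships a compatible tokenizer and works with `AutoModel` is an assumption, and the sequence below is a toy fragment, not real data from the dataset:

```python
# Hypothetical sketch: embed a toy VHH fragment with VHHBERT (COGNANO/VHHBERT).
# Tokenizer availability and AutoModel compatibility are assumptions about the checkpoint.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("COGNANO/VHHBERT")
model = AutoModel.from_pretrained("COGNANO/VHHBERT")

inputs = tokenizer("QVQLVESGGGLVQPGGSLRLSCAAS", return_tensors="pt")  # toy fragment
embedding = model(**inputs).last_hidden_state.mean(dim=1)  # mean-pooled sequence embedding
print(embedding.shape)
```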