Schema (field, type, and observed length or value range from the dataset viewer):

| Field | Type | Observed range |
|---|---|---|
| bibtex_url | null | — |
| proceedings | string | lengths 42–42 |
| bibtext | string | lengths 286–1.35k |
| abstract | string | lengths 558–2.37k |
| title | string | lengths 18–163 |
| authors | sequence | lengths 1–56 |
| id | string | 1 class |
| type | string | 2 classes |
| arxiv_id | string | lengths 0–10 |
| GitHub | sequence | lengths 1–1 |
| paper_page | string | 63 classes |
| n_linked_authors | int64 | -1 to 10 |
| upvotes | int64 | -1 to 45 |
| num_comments | int64 | -1 to 6 |
| n_authors | int64 | -1 to 40 |
| Models | sequence | lengths 0–100 |
| Datasets | sequence | lengths 0–10 |
| Spaces | sequence | lengths 0–100 |
| paper_page_exists_pre_conf | int64 | 0 to 1 |
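A minimal loading sketch for records with the fields above. It assumes the records are exported as JSON Lines with these field names, and that -1 in the integer statistics columns is a sentinel for missing Hugging Face paper-page data; the file name `neurips2023_db_track.jsonl` is hypothetical.

```python
# Minimal sketch, assuming one JSON record per line with the schema fields above.
# The file name below is hypothetical.
import json

INT_STATS = ["n_linked_authors", "upvotes", "num_comments", "n_authors"]

def load_records(path="neurips2023_db_track.jsonl"):
    """Read one record per line and map the -1 sentinel in the stats columns to None."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            rec = json.loads(line)
            for col in INT_STATS:
                if rec.get(col) == -1:  # -1 appears to mark missing paper-page stats
                    rec[col] = None
            records.append(rec)
    return records

if __name__ == "__main__":
    records = load_records()
    # Example query: titles of oral papers that link a non-empty GitHub repository.
    orals = [r["title"] for r in records
             if r.get("type") == "oral" and any(r.get("GitHub") or [])]
    print(f"{len(records)} records, {len(orals)} orals with code:", orals)
```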
null
https://openreview.net/forum?id=OzcPJz7rgg
@inproceedings{ shimada2023starss, title={{STARSS}23: An Audio-Visual Dataset of Spatial Recordings of Real Scenes with Spatiotemporal Annotations of Sound Events}, author={Kazuki Shimada and Archontis Politis and Parthasaarathy Sudarsanam and Daniel Aleksander Krause and Kengo Uchida and Sharath Adavanne and Aapo Hakala and Yuichiro Koyama and Naoya Takahashi and Shusuke Takahashi and Tuomas Virtanen and Yuki Mitsufuji}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=OzcPJz7rgg} }
While direction of arrival (DOA) of sound events is generally estimated from multichannel audio data recorded with a microphone array, sound events usually derive from visually perceptible source objects, e.g., sounds of footsteps come from the feet of a walker. This paper proposes an audio-visual sound event localization and detection (SELD) task, which uses multichannel audio and video information to estimate the temporal activation and DOA of target sound events. Audio-visual SELD systems can detect and localize sound events using signals from a microphone array and audio-visual correspondence. We also introduce an audio-visual dataset, Sony-TAu Realistic Spatial Soundscapes 2023 (STARSS23), which consists of multichannel audio data recorded with a microphone array, video data, and spatiotemporal annotation of sound events. Sound scenes in STARSS23 are recorded following instructions that guide recording participants to ensure adequate activity and occurrences of sound events. STARSS23 also provides human-annotated temporal activation labels and human-confirmed DOA labels, which are based on the tracking results of a motion capture system. Our benchmark results demonstrate the benefits of using visual object positions in audio-visual SELD tasks. The data is available at https://zenodo.org/record/7880637.
STARSS23: An Audio-Visual Dataset of Spatial Recordings of Real Scenes with Spatiotemporal Annotations of Sound Events
[ "Kazuki Shimada", "Archontis Politis", "Parthasaarathy Sudarsanam", "Daniel Aleksander Krause", "Kengo Uchida", "Sharath Adavanne", "Aapo Hakala", "Yuichiro Koyama", "Naoya Takahashi", "Shusuke Takahashi", "Tuomas Virtanen", "Yuki Mitsufuji" ]
Track/Datasets_and_Benchmarks
poster
2306.09126
[ "https://github.com/sony/audio-visual-seld-dcase2023" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=OyTIV57Prb
@inproceedings{ zhu2023capp, title={{CAPP}-130: A Corpus of Chinese Application Privacy Policy Summarization and Interpretation}, author={Pengyun Zhu and Long Wen and Jinfei Liu and Feng Xue and Jian Lou and Zhibo Wang and Kui Ren}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=OyTIV57Prb} }
A privacy policy serves as an online internet protocol crafted by service providers, which details how service providers collect, process, store, manage, and use personal information when users engage with applications. However, these privacy policies are often filled with technobabble and legalese, making them "incomprehensible''. As a result, users often agree to all terms unknowingly, even though some terms may conflict with the law, thereby posing a considerable risk to personal privacy information. One potential solution to alleviate this challenge is to automatically summarize privacy policies using NLP techniques. However, existing techniques primarily focus on extracting key sentences, resulting in comparatively shorter agreements, but failing to address the poor readability caused by incomprehensible technobabble and legalese. Moreover, research on Chinese application privacy policy summarization is currently almost nonexistent, and there is a lack of a high-quality corpus suitable for addressing readability issues. To tackle these challenges, we introduce a fine-grained CAPP-130 corpus and a TCSI-pp framework. CAPP-130 contains 130 Chinese privacy policies from popular applications that have been carefully annotated and interpreted by legal experts, resulting in 52,489 annotations and 20,555 rewritten sentences. TCSI-pp first extracts sentences related to the topic specified by users and then uses a generative model to rewrite the sentences into a comprehensible summarization. Built upon TCSI-pp, we construct a summarization tool, TCSI-pp-zh, by selecting RoBERTa from six classification models for sentence extraction and selecting mT5 from five generative models for sentence rewriting. Experimental results show that TCSI-pp-zh outperforms GPT-4 and other baselines in Chinese application privacy policy summarization, demonstrating exceptional readability and reliability. Our data, annotation guidelines, benchmark models, and source code are publicly available at https://github.com/EnlightenedAI/CAPP-130.
CAPP-130: A Corpus of Chinese Application Privacy Policy Summarization and Interpretation
[ "Pengyun Zhu", "Long Wen", "Jinfei Liu", "Feng Xue", "Jian Lou", "Zhibo Wang", "Kui Ren" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=OXOLiS0ak6
@inproceedings{ chaudhary2023a, title={A Dataset for Analyzing Streaming Media Performance over {HTTP}/3 Browsers}, author={Sapna Chaudhary and Mukulika Maity and Sandip Chakraborty and Naval Kumar Shukla}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=OXOLiS0ak6} }
HTTP/3 is a new application layer protocol supported by most browsers. It uses QUIC as an underlying transport protocol. QUIC provides multiple benefits, like faster connection establishment, reduced latency, and improved connection migration. Hence, most popular browsers like Chrome/Chromium, Microsoft Edge, Apple Safari, and Mozilla Firefox have started supporting it. In this paper, we present an HTTP/3-supported browser dataset collection tool named H3B. It collects the application and network-level logs during YouTube streaming. We consider YouTube, as it is the most popular video streaming application supporting QUIC. Using this tool, we collected a dataset of over 5936 YouTube sessions covering 5464 hours of streaming over 5 different geographical locations and 5 different bandwidth patterns. We believe our tool as well as the dataset could be used in multiple applications, such as better configuration of application/transport protocols based on network conditions, intelligent integration of network and application, predicting YouTube's QoE, etc. We analyze the dataset and observe that during HTTP/3 streaming, not all requests are served by HTTP/3. Instead, whenever the network condition is not favorable, the browser chooses to fall back, and the application requests are transmitted using HTTP/2 over the long-standing transport protocol TCP. We observe that such switching of protocols impacts the performance of video streaming applications.
A Dataset for Analyzing Streaming Media Performance over HTTP/3 Browsers
[ "Sapna Chaudhary", "Mukulika Maity", "Sandip Chakraborty", "Naval Kumar Shukla" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=OMOOO3ls6g
@inproceedings{ wang2023openlanev, title={OpenLane-V2: A Topology Reasoning Benchmark for Unified 3D {HD} Mapping}, author={Huijie Wang and Tianyu Li and Yang Li and Li Chen and Chonghao Sima and Zhenbo Liu and Bangjun Wang and Peijin Jia and Yuting Wang and Shengyin Jiang and Feng Wen and Hang Xu and Ping Luo and Junchi Yan and Wei Zhang and Hongyang Li}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=OMOOO3ls6g} }
Accurately depicting the complex traffic scene is a vital component for autonomous vehicles to execute correct judgments. However, existing benchmarks tend to oversimplify the scene by solely focusing on lane perception tasks. Observing that human drivers rely on both lanes and traffic signals to operate their vehicles safely, we present OpenLane-V2, the first dataset on topology reasoning for traffic scene structure. The objective of the presented dataset is to advance research in understanding the structure of road scenes by examining the relationship between perceived entities, such as traffic elements and lanes. Leveraging existing datasets, OpenLane-V2 consists of 2,000 annotated road scenes that describe traffic elements and their correlation to the lanes. It comprises three primary sub-tasks, including the 3D lane detection inherited from OpenLane, accompanied by corresponding metrics to evaluate the model’s performance. We evaluate various state-of-the-art methods, and present their quantitative and qualitative results on OpenLane-V2 to indicate future avenues for investigating topology reasoning in traffic scenes.
OpenLane-V2: A Topology Reasoning Benchmark for Unified 3D HD Mapping
[ "Huijie Wang", "Tianyu Li", "Yang Li", "Li Chen", "Chonghao Sima", "Zhenbo Liu", "Bangjun Wang", "Peijin Jia", "Yuting Wang", "Shengyin Jiang", "Feng Wen", "Hang Xu", "Ping Luo", "Junchi Yan", "Wei Zhang", "Hongyang Li" ]
Track/Datasets_and_Benchmarks
poster
2304.10440
[ "https://github.com/OpenDriveLab/OpenLane-V2" ]
https://huggingface.co/papers/2304.10440
2
0
0
16
[]
[]
[]
1
null
https://openreview.net/forum?id=OL2JQoO0kq
@inproceedings{ ikezogwo2023quiltm, title={Quilt-1M: One Million Image-Text Pairs for Histopathology}, author={Wisdom Oluchi Ikezogwo and Mehmet Saygin Seyfioglu and Fatemeh Ghezloo and Dylan Stefan Chan Geva and Fatwir Sheikh Mohammed and Pavan Kumar Anand and Ranjay Krishna and Linda Shapiro}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=OL2JQoO0kq} }
Recent accelerations in multi-modal applications have been made possible with the plethora of image and text data available online. However, the scarcity of analogous data in the medical field, specifically in histopathology, has slowed comparable progress. To enable similar representation learning for histopathology, we turn to YouTube, an untapped resource of videos, offering $1,087$ hours of valuable educational histopathology videos from expert clinicians. From YouTube, we curate QUILT: a large-scale vision-language dataset consisting of $802,144$ image and text pairs. QUILT was automatically curated using a mixture of models, including large language models, handcrafted algorithms, human knowledge databases, and automatic speech recognition. In comparison, the most comprehensive datasets curated for histopathology amass only around $200$K samples. We combine QUILT with datasets from other sources, including Twitter, research papers, and the internet in general, to create an even larger dataset: QUILT-1M, with $1$M paired image-text samples, marking it as the largest vision-language histopathology dataset to date. We demonstrate the value of QUILT-1M by fine-tuning a pre-trained CLIP model. Our model outperforms state-of-the-art models on both zero-shot and linear probing tasks for classifying new histopathology images across $13$ diverse patch-level datasets of $8$ different sub-pathologies and cross-modal retrieval tasks.
Quilt-1M: One Million Image-Text Pairs for Histopathology
[ "Wisdom Oluchi Ikezogwo", "Mehmet Saygin Seyfioglu", "Fatemeh Ghezloo", "Dylan Stefan Chan Geva", "Fatwir Sheikh Mohammed", "Pavan Kumar Anand", "Ranjay Krishna", "Linda Shapiro" ]
Track/Datasets_and_Benchmarks
oral
2306.11207
[ "https://github.com/wisdomikezogwo/quilt1m" ]
https://huggingface.co/papers/2306.11207
0
1
0
8
[ "wisdomik/QuiltNet-B-32", "wisdomik/QuiltNet-B-16-PMB", "wisdomik/QuiltNet-B-16", "raylim/QuiltNet-B-16-PMB" ]
[ "wisdomik/QUILT-LLaVA-Instruct-107K", "wisdomik/Quilt_VQA", "jamessyx/PathMMU", "wisdomik/Quilt-LLaVA-Pretrain" ]
[ "rifatramadhani/wisdomik-QuiltNet-B-16", "Dggatsby/wisdomik-QuiltNet-B-32", "saradha12/wisdomik-QuiltNet-B-32" ]
1
null
https://openreview.net/forum?id=OHimIaixXk
@inproceedings{ gushchin2023building, title={Building the Bridge of Schr\"odinger: A Continuous Entropic Optimal Transport Benchmark}, author={Nikita Gushchin and Alexander Kolesov and Petr Mokrov and Polina Karpikova and Andrei Spiridonov and Evgeny Burnaev and Alexander Korotin}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=OHimIaixXk} }
Over the last several years, there has been significant progress in developing neural solvers for the Schrödinger Bridge (SB) problem and applying them to generative modelling. This new research field is justifiably fruitful as it is interconnected with the practically well-performing diffusion models and theoretically grounded entropic optimal transport (EOT). Still, the area lacks non-trivial tests allowing a researcher to understand how well the methods solve SB or its equivalent continuous EOT problem. We fill this gap and propose a novel way to create pairs of probability distributions for which the ground truth OT solution is known by construction. Our methodology is generic and works for a wide range of OT formulations; in particular, it covers EOT, which is equivalent to SB (the main interest of our study). This development allows us to create continuous benchmark distributions with known EOT and SB solutions on high-dimensional spaces such as spaces of images. As an illustration, we use these benchmark pairs to test how well existing neural EOT/SB solvers actually compute the EOT solution. Our code for constructing benchmark pairs under different setups is available at: https://github.com/ngushchin/EntropicOTBenchmark
Building the Bridge of Schrödinger: A Continuous Entropic Optimal Transport Benchmark
[ "Nikita Gushchin", "Alexander Kolesov", "Petr Mokrov", "Polina Karpikova", "Andrei Spiridonov", "Evgeny Burnaev", "Alexander Korotin" ]
Track/Datasets_and_Benchmarks
poster
2306.10161
[ "https://github.com/ngushchin/entropicotbenchmark" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=ODB01Fyr4a
@inproceedings{ kim2023imagine, title={Imagine the Unseen World: A Benchmark for Systematic Generalization in Visual World Models}, author={Yeongbin Kim and Gautam Singh and Junyeong Park and Caglar Gulcehre and Sungjin Ahn}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=ODB01Fyr4a} }
Systematic compositionality, or the ability to adapt to novel situations by creating a mental model of the world using reusable pieces of knowledge, remains a significant challenge in machine learning. While there has been considerable progress in the language domain, efforts towards systematic visual imagination, or envisioning the dynamical implications of a visual observation, are in their infancy. We introduce the Systematic Visual Imagination Benchmark (SVIB), the first benchmark designed to address this problem head-on. SVIB offers a novel framework for a minimal world modeling problem, where models are evaluated based on their ability to generate one-step image-to-image transformations under latent world dynamics. The framework provides benefits such as the possibility of jointly optimizing for systematic perception and imagination, a range of difficulty levels, and the ability to control the fraction of possible factor combinations used during training. We provide a comprehensive evaluation of various baseline models on SVIB, offering insight into the current state-of-the-art in systematic visual imagination. We hope that this benchmark will help advance visual systematic compositionality.
Imagine the Unseen World: A Benchmark for Systematic Generalization in Visual World Models
[ "Yeongbin Kim", "Gautam Singh", "Junyeong Park", "Caglar Gulcehre", "Sungjin Ahn" ]
Track/Datasets_and_Benchmarks
poster
2311.09064
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=OB10WTlwmX
@inproceedings{ zhang2023evaluating, title={Evaluating and Improving Tool-Augmented Computation-Intensive Math Reasoning}, author={Beichen Zhang and Kun Zhou and Xilin Wei and Xin Zhao and Jing Sha and Shijin Wang and Ji-Rong Wen}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=OB10WTlwmX} }
Chain-of-thought prompting (CoT) and tool augmentation have been validated in recent work as effective practices for improving large language models (LLMs) to perform step-by-step reasoning on complex math-related tasks. However, most existing math reasoning datasets may not be able to fully evaluate and analyze the ability of LLMs in manipulating tools and performing reasoning, as they often only require very few invocations of tools or miss annotations for evaluating intermediate reasoning steps, thus supporting only outcome evaluation. To address the issue, we construct **CARP**, a new Chinese dataset consisting of 4,886 computation-intensive algebra problems with formulated annotations on intermediate steps, facilitating the evaluation of the intermediate reasoning process. In CARP, we test four LLMs with CoT prompting, and find that they are all prone to making mistakes in the early steps of the solution, leading to incorrect answers. Based on this finding, we propose a new approach that can facilitate the deliberation on reasoning steps with tool interfaces, namely **DELI**. In DELI, we first initialize a step-by-step solution based on retrieved exemplars, then iterate two deliberation procedures that check and refine the intermediate steps of the generated solution, from both tool manipulation and natural language reasoning perspectives, until the solutions converge or the maximum number of iterations is reached. Experimental results on CARP and six other datasets show that the proposed DELI mostly outperforms competitive baselines, and can further boost the performance of existing CoT methods. Our data and code are available at https://github.com/RUCAIBox/CARP.
Evaluating and Improving Tool-Augmented Computation-Intensive Math Reasoning
[ "Beichen Zhang", "Kun Zhou", "Xilin Wei", "Xin Zhao", "Jing Sha", "Shijin Wang", "Ji-Rong Wen" ]
Track/Datasets_and_Benchmarks
poster
2306.02408
[ "https://github.com/rucaibox/carp" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=NoE8g3LRAM
@inproceedings{ shakya2023benchmarking, title={Benchmarking Encoder-Decoder Architectures for Biplanar X-ray to 3D Bone Shape Reconstruction}, author={Mahesh Shakya and Bishesh Khanal}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=NoE8g3LRAM} }
Various deep learning models have been proposed for 3D bone shape reconstruction from two orthogonal (biplanar) X-ray images. However, it is unclear how these models compare against each other, since they are evaluated on different anatomies, cohorts, and (often privately held) datasets. Moreover, the impact of commonly optimized image-based segmentation metrics, such as the Dice score, on the estimation of clinical parameters relevant in 2D-3D bone shape reconstruction is not well known. To move closer toward clinical translation, we propose a benchmarking framework that evaluates tasks relevant to real-world clinical scenarios, including reconstruction of fractured bones, bones with implants, robustness to population shift, and error in estimating clinical parameters. Our open-source platform provides reference implementations of 8 models (many of whose implementations were not publicly available), APIs to easily collect and preprocess 6 public datasets, and the implementation of automatic clinical parameter and landmark extraction methods. We present an extensive evaluation of 8 2D-3D models on equal footing using 6 public datasets comprising images for four different anatomies. Our results show that attention-based methods that capture global spatial relationships tend to perform better across all anatomies and datasets; performance on clinically relevant subgroups may be overestimated without disaggregated reporting; ribs are substantially more difficult to reconstruct compared to femur, hip, and spine; and an improvement in Dice score does not always bring a corresponding improvement in the automatic estimation of clinically relevant parameters.
Benchmarking Encoder-Decoder Architectures for Biplanar X-ray to 3D Bone Shape Reconstruction
[ "Mahesh Shakya", "Bishesh Khanal" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=MzZcXPeqcU
@inproceedings{ tarasov2023corl, title={{CORL}: Research-oriented Deep Offline Reinforcement Learning Library}, author={Denis Tarasov and Alexander Nikulin and Dmitry Akimov and Vladislav Kurenkov and Sergey Kolesnikov}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=MzZcXPeqcU} }
CORL is an open-source library that provides thoroughly benchmarked single-file implementations of both deep offline and offline-to-online reinforcement learning algorithms. It emphasizes a simple development experience with a straightforward codebase and a modern analysis tracking tool. In CORL, we isolate each method's implementation in a separate single file, making performance-relevant details easier to recognize. Additionally, an experiment tracking feature is available to help log metrics, hyperparameters, dependencies, and more to the cloud. Finally, we have ensured the reliability of the implementations by benchmarking them on commonly employed D4RL datasets, providing a transparent source of results that can be reused for robust evaluation tools such as performance profiles, probability of improvement, or expected online performance.
CORL: Research-oriented Deep Offline Reinforcement Learning Library
[ "Denis Tarasov", "Alexander Nikulin", "Dmitry Akimov", "Vladislav Kurenkov", "Sergey Kolesnikov" ]
Track/Datasets_and_Benchmarks
poster
2210.07105
[ "https://github.com/tinkoff-ai/CORL" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=Mn9oHNdYCE
@inproceedings{ liu2023xesgm, title={{XES}3G5M: A Knowledge Tracing Benchmark Dataset with Auxiliary Information}, author={Zitao Liu and Qiongqiong Liu and Teng Guo and Jiahao Chen and Shuyan Huang and Xiangyu Zhao and Jiliang Tang and Weiqi Luo and Jian Weng}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=Mn9oHNdYCE} }
Knowledge tracing (KT) is a task that predicts students' future performance based on their historical learning interactions. With the rapid development of deep learning techniques, existing KT approaches follow a data-driven paradigm that uses massive problem-solving records to model students' learning processes. However, although the educational contexts contain various factors that may have an influence on student learning outcomes, existing public KT datasets mainly consist of anonymized ID-like features, which may hinder research advances in this field. Therefore, in this work, we present \emph{XES3G5M}, a large-scale dataset with rich auxiliary information about questions and their associated knowledge components (KCs)\footnote{\label{ft:kc}A KC is a generalization of everyday terms like concept, principle, fact, or skill.}. The XES3G5M dataset is collected from a real-world online math learning platform and contains 7,652 questions and 865 KCs, with 5,549,635 interactions from 18,066 students. To the best of our knowledge, the XES3G5M dataset not only has the largest number of KCs in the math domain but also contains the richest contextual information, including tree-structured KC relations, question types, textual contents and analyses, and student response timestamps. Furthermore, we build a comprehensive benchmark on 19 state-of-the-art deep-learning-based knowledge tracing (DLKT) models. Extensive experiments demonstrate the effectiveness of leveraging the auxiliary information in our XES3G5M with DLKT models. We hope the proposed dataset can effectively facilitate KT research.
XES3G5M: A Knowledge Tracing Benchmark Dataset with Auxiliary Information
null
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=MfhJWSp3Ea
@inproceedings{ cobb2023aircraftverse, title={AircraftVerse: A Large-Scale Multimodal Dataset of Aerial Vehicle Designs}, author={Adam D. Cobb and Anirban Roy and Daniel Elenius and Frederick Michael Heim and Brian Swenson and Sydney Whittington and James D Walker and Theodore Bapty and Joseph Hite and Karthik Ramani and Christopher McComb and Susmit Jha}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=MfhJWSp3Ea} }
We present AircraftVerse, a publicly available aerial vehicle design dataset. Aircraft design encompasses different physics domains and, hence, multiple modalities of representation. The evaluation of these designs requires the use of scientific analytical and simulation models, including computer-aided design tools for structural and manufacturing analysis, computational fluid dynamics tools for drag and lift computation, battery models for energy estimation, and simulation models for flight control and dynamics. AircraftVerse contains $27{,}714$ diverse air vehicle designs, the largest corpus of designs with this level of complexity. Each design comprises the following artifacts: a symbolic design tree describing topology, propulsion subsystem, battery subsystem, and other design details; a STandard for the Exchange of Product (STEP) model data; a 3D CAD design using a stereolithography (STL) file format; a 3D point cloud for the shape of the design; and evaluation results from high-fidelity state-of-the-art physics models that characterize performance metrics such as maximum flight distance and hover time. We also present baseline surrogate models that use different modalities of design representation to predict design performance metrics, which we provide as part of our dataset release. Finally, we discuss the potential impact of this dataset on the use of learning in aircraft design and, more generally, in the emerging field of deep learning for scientific design. AircraftVerse is accompanied by a datasheet as suggested in the recent literature, and it is released under the Creative Commons Attribution-ShareAlike (CC BY-SA) license. The dataset with baseline models is hosted at http://doi.org/10.5281/zenodo.6525446, code at https://github.com/SRI-CSL/AircraftVerse, and the dataset description at https://uavdesignverse.onrender.com/.
AircraftVerse: A Large-Scale Multimodal Dataset of Aerial Vehicle Designs
[ "Adam D. Cobb", "Anirban Roy", "Daniel Elenius", "Frederick Michael Heim", "Brian Swenson", "Sydney Whittington", "James D Walker", "Theodore Bapty", "Joseph Hite", "Karthik Ramani", "Christopher McComb", "Susmit Jha" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=McAS4XoZJP
@inproceedings{ gao2023alexa, title={Alexa Arena: A User-Centric Interactive Platform for Embodied {AI}}, author={Qiaozi Gao and Govind Thattai and Suhaila Shakiah and Xiaofeng Gao and Shreyas Pansare and Vasu Sharma and Gaurav S. Sukhatme and Hangjie Shi and Bofei Yang and Desheng Zhang and Lucy Hu and Karthika Arumugam and Shui Hu and Matthew Wen and Dinakar Venkateswar Guthy and Shunan Cadence Chung and Rohan Khanna and Osman Ipek and Leslie Ball and Kate Bland and Heather Rocker and Michael Johnston and Reza Ghanadan and Dilek Hakkani-Tur and Prem Natarajan}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=McAS4XoZJP} }
We introduce Alexa Arena, a user-centric simulation platform to facilitate research in building assistive conversational embodied agents. Alexa Arena features multi-room layouts and an abundance of interactable objects. With user-friendly graphics and control mechanisms, the platform supports the development of gamified robotic tasks readily accessible to general human users, allowing high-efficiency data collection and EAI system evaluation. Along with the platform, we introduce a dialog-enabled task completion benchmark with online human evaluations.
Alexa Arena: A User-Centric Interactive Platform for Embodied AI
[ "Qiaozi Gao", "Govind Thattai", "Suhaila Shakiah", "Xiaofeng Gao", "Shreyas Pansare", "Vasu Sharma", "Gaurav S. Sukhatme", "Hangjie Shi", "Bofei Yang", "Desheng Zhang", "Lucy Hu", "Karthika Arumugam", "Shui Hu", "Matthew Wen", "Dinakar Venkateswar Guthy", "Shunan Cadence Chung", "Rohan Khanna", "Osman Ipek", "Leslie Ball", "Kate Bland", "Heather Rocker", "Michael Johnston", "Reza Ghanadan", "Dilek Hakkani-Tur", "Prem Natarajan" ]
Track/Datasets_and_Benchmarks
poster
2303.01586
[ "https://github.com/amazon-science/alexa-arena" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=MZopld6S22
@inproceedings{ wang2023benchmarking, title={Benchmarking and Analyzing 3D-aware Image Synthesis with a Modularized Codebase}, author={Qiuyu Wang and Zifan Shi and Kecheng Zheng and Yinghao Xu and Sida Peng and Yujun Shen}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=MZopld6S22} }
Despite the rapid advance of 3D-aware image synthesis, existing studies usually adopt a mixture of techniques and tricks, leaving it unclear how each part contributes to the final performance in terms of generality. Following the most popular and effective paradigm in this field, which incorporates a neural radiance field (NeRF) into the generator of a generative adversarial network (GAN), we build a well-structured codebase through modularizing the generation process. Such a design allows researchers to develop and replace each module independently, and hence offers an opportunity to fairly compare various approaches and recognize their contributions from the module perspective. The reproduction of a range of cutting-edge algorithms demonstrates the availability of our modularized codebase. We also perform a variety of in-depth analyses, such as the comparison across different types of point features, the necessity of the tailing upsampler in the generator, and the reliance on the camera pose prior, which deepen our understanding of existing methods and point out further directions for research. Code and models will be made publicly available to facilitate the development and evaluation of this field.
Benchmarking and Analyzing 3D-aware Image Synthesis with a Modularized Codebase
[ "Qiuyu Wang", "Zifan Shi", "Kecheng Zheng", "Yinghao Xu", "Sida Peng", "Yujun Shen" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=MLLp6AHQFs
@inproceedings{ zohar2023lovm, title={{LOVM}: Language-Only Vision Model Selection}, author={Orr Zohar and Shih-Cheng Huang and Kuan-Chieh Wang and Serena Yeung}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=MLLp6AHQFs} }
Pre-trained multi-modal vision-language models (VLMs) are becoming increasingly popular due to their exceptional performance on downstream vision applications, particularly in the few- and zero-shot settings. However, selecting the best-performing VLM for some downstream applications is non-trivial, as it is dataset- and task-dependent. Meanwhile, the exhaustive evaluation of all available VLMs on a novel application is not only time-consuming and computationally demanding but also necessitates the collection of a labeled dataset for evaluation. As the number of open-source VLM variants increases, there is a need for an efficient model selection strategy that does not require access to a curated evaluation dataset. This paper proposes a novel task and benchmark for efficiently evaluating VLMs' zero-shot performance on downstream applications without access to the downstream task dataset. Specifically, we introduce a new task, LOVM: **L**anguage-**O**nly **V**ision **M**odel Selection, where methods are expected to perform both model selection and performance prediction based solely on a text description of the desired downstream application. We then introduce an extensive LOVM benchmark consisting of ground-truth evaluations of 35 pre-trained VLMs and 23 datasets, where methods are expected to rank the pre-trained VLMs and predict their zero-shot performance.
LOVM: Language-Only Vision Model Selection
[ "Orr Zohar", "Shih-Cheng Huang", "Kuan-Chieh Wang", "Serena Yeung" ]
Track/Datasets_and_Benchmarks
poster
2306.08893
[ "https://github.com/orrzohar/lovm" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=MEa0cQeURw
@inproceedings{ said2023neurograph, title={NeuroGraph: Benchmarks for Graph Machine Learning in Brain Connectomics}, author={Anwar Said and Roza G Bayrak and Tyler Derr and Mudassir Shabbir and Daniel Moyer and Catie Chang and Xenofon D. Koutsoukos}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=MEa0cQeURw} }
Machine learning provides a valuable tool for analyzing high-dimensional functional neuroimaging data, and is proving effective in predicting various neurological conditions, psychiatric disorders, and cognitive patterns. In functional magnetic resonance imaging (MRI) research, interactions between brain regions are commonly modeled using graph-based representations. The potency of graph machine learning methods has been established across myriad domains, marking a transformative step in data interpretation and predictive modeling. Yet, despite their promise, the transposition of these techniques to the neuroimaging domain has been challenging due to the expansive number of potential preprocessing pipelines and the large parameter search space for graph-based dataset construction. In this paper, we introduce NeuroGraph, a collection of graph-based neuroimaging datasets, and demonstrate its utility for predicting multiple categories of behavioral and cognitive traits. We delve deeply into the dataset generation search space by crafting 35 datasets that encompass static and dynamic brain connectivity, running in excess of 15 baseline methods for benchmarking. Additionally, we provide generic frameworks for learning on both static and dynamic graphs. Our extensive experiments lead to several key observations. Notably, using correlation vectors as node features, incorporating a larger number of regions of interest, and employing sparser graphs lead to improved performance. To foster further advancements in graph-based, data-driven neuroimaging analysis, we offer a comprehensive open-source Python package that includes the benchmark datasets, baseline implementations, model training, and standard evaluation.
NeuroGraph: Benchmarks for Graph Machine Learning in Brain Connectomics
[ "Anwar Said", "Roza G Bayrak", "Tyler Derr", "Mudassir Shabbir", "Daniel Moyer", "Catie Chang", "Xenofon D. Koutsoukos" ]
Track/Datasets_and_Benchmarks
poster
2306.06202
[ "https://github.com/anwar-said/neurograph" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=Luc1bZLeMY
@inproceedings{ chen2023chammi, title={{CHAMMI}: A benchmark for channel-adaptive models in microscopy imaging}, author={Zitong Chen and Chau Pham and Siqi Wang and Michael Doron and Nikita Moshkov and Bryan A. Plummer and Juan C Caicedo}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=Luc1bZLeMY} }
Most neural networks assume that input images have a fixed number of channels (three for RGB images). However, there are many settings where the number of channels may vary, such as microscopy images where the number of channels changes depending on instruments and experimental goals. Yet, there has not been a systematic attempt to create and evaluate neural networks that are invariant to the number and type of channels. As a result, trained models remain specific to individual studies and are hardly reusable for other microscopy settings. In this paper, we present a benchmark for investigating channel-adaptive models in microscopy imaging, which consists of 1) a dataset of varied-channel single-cell images, and 2) a biologically relevant evaluation framework. In addition, we adapted several existing techniques to create channel-adaptive models and compared their performance on this benchmark to fixed-channel baseline models. We find that channel-adaptive models can generalize better to out-of-domain tasks and can be computationally efficient. We contribute a curated dataset and an evaluation API to facilitate objective comparisons in future research and applications.
CHAMMI: A benchmark for channel-adaptive models in microscopy imaging
[ "Zitong Chen", "Chau Pham", "Siqi Wang", "Michael Doron", "Nikita Moshkov", "Bryan A. Plummer", "Juan C Caicedo" ]
Track/Datasets_and_Benchmarks
poster
2310.19224
[ "https://github.com/broadinstitute/MorphEm" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=LegGqdch92
@inproceedings{ garioud2023flair, title={{FLAIR} : a Country-Scale Land Cover Semantic Segmentation Dataset From Multi-Source Optical Imagery}, author={Anatol Garioud and Nicolas Gonthier and Loic Landrieu and Apolline De Wit and Marion Valette and Marc Poup{\'e}e and Sebastien Giordano and Boris Wattrelos}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=LegGqdch92} }
We introduce the French Land cover from Aerospace ImageRy (FLAIR), an extensive dataset from the French National Institute of Geographical and Forest Information (IGN) that provides a unique and rich resource for large-scale geospatial analysis. FLAIR contains high-resolution aerial imagery with a ground sample distance of 20 cm and over 20 billion individually labeled pixels for precise land-cover classification. The dataset also integrates temporal and spectral data from optical satellite time series. FLAIR thus combines data with varying spatial, spectral, and temporal resolutions across over 817 km² of acquisitions representing the full landscape diversity of France. This diversity makes FLAIR a valuable resource for the development and evaluation of novel methods for large-scale land-cover semantic segmentation and raises significant challenges in terms of computer vision, data fusion, and geospatial analysis. We also provide powerful uni- and multi-sensor baseline models that can be employed to assess algorithms' performance and for downstream applications.
FLAIR : a Country-Scale Land Cover Semantic Segmentation Dataset From Multi-Source Optical Imagery
[ "Anatol Garioud", "Nicolas Gonthier", "Loic Landrieu", "Apolline De Wit", "Marion Valette", "Marc Poupée", "Sebastien Giordano", "Boris Wattrelos" ]
Track/Datasets_and_Benchmarks
poster
[ "https://github.com/ignf/flair-2-ai-challenge" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=LaFKTgrZMG
@inproceedings{ mazumder2023dataperf, title={DataPerf: Benchmarks for Data-Centric {AI} Development}, author={Mark Mazumder and Colby Banbury and Xiaozhe Yao and Bojan Karla{\v{s}} and William A Gaviria Rojas and Sudnya Diamos and Greg Diamos and Lynn He and Alicia Parrish and Hannah Rose Kirk and Jessica Quaye and Charvi Rastogi and Douwe Kiela and David Jurado and David Kanter and Rafael Mosquera and Will Cukierski and Juan Ciro and Lora Aroyo and Bilge Acun and Lingjiao Chen and Mehul Smriti Raje and Max Bartolo and Sabri Eyuboglu and Amirata Ghorbani and Emmett Daniel Goodman and Addison Howard and Oana Inel and Tariq Kane and Christine Kirkpatrick and D. Sculley and Tzu-Sheng Kuo and Jonas Mueller and Tristan Thrush and Joaquin Vanschoren and Margaret Warren and Adina Williams and Serena Yeung and Newsha Ardalani and Praveen Paritosh and Ce Zhang and James Y. Zou and Carole-Jean Wu and Cody Coleman and Andrew Ng and Peter Mattson and Vijay Janapa Reddi}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=LaFKTgrZMG} }
Machine learning research has long focused on models rather than datasets, and prominent datasets are used for common ML tasks without regard to the breadth, difficulty, and faithfulness of the underlying problems. Neglecting the fundamental importance of data has given rise to inaccuracy, bias, and fragility in real-world applications, and research is hindered by saturation across existing dataset benchmarks. In response, we present DataPerf, a community-led benchmark suite for evaluating ML datasets and data-centric algorithms. We aim to foster innovation in data-centric AI through competition, comparability, and reproducibility. We enable the ML community to iterate on datasets, instead of just architectures, and we provide an open, online platform with multiple rounds of challenges to support this iterative development. The first iteration of DataPerf contains five benchmarks covering a wide spectrum of data-centric techniques, tasks, and modalities in vision, speech, acquisition, debugging, and diffusion prompting, and we support hosting new contributed benchmarks from the community. The benchmarks, online evaluation platform, and baseline implementations are open source, and the MLCommons Association will maintain DataPerf to ensure long-term benefits to academia and industry.
DataPerf: Benchmarks for Data-Centric AI Development
[ "Mark Mazumder", "Colby Banbury", "Xiaozhe Yao", "Bojan Karlaš", "William A Gaviria Rojas", "Sudnya Diamos", "Greg Diamos", "Lynn He", "Alicia Parrish", "Hannah Rose Kirk", "Jessica Quaye", "Charvi Rastogi", "Douwe Kiela", "David Jurado", "David Kanter", "Rafael Mosquera", "Will Cukierski", "Juan Ciro", "Lora Aroyo", "Bilge Acun", "Lingjiao Chen", "Mehul Smriti Raje", "Max Bartolo", "Sabri Eyuboglu", "Amirata Ghorbani", "Emmett Daniel Goodman", "Addison Howard", "Oana Inel", "Tariq Kane", "Christine Kirkpatrick", "D. Sculley", "Tzu-Sheng Kuo", "Jonas Mueller", "Tristan Thrush", "Joaquin Vanschoren", "Margaret Warren", "Adina Williams", "Serena Yeung", "Newsha Ardalani", "Praveen Paritosh", "Ce Zhang", "James Y. Zou", "Carole-Jean Wu", "Cody Coleman", "Andrew Ng", "Peter Mattson", "Vijay Janapa Reddi" ]
Track/Datasets_and_Benchmarks
poster
2207.10062
[ "https://github.com/mlcommons/dataperf" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=LE4AN1FGjJ
@inproceedings{ tang2023degraded, title={Degraded Polygons Raise Fundamental Questions of Neural Network Perception}, author={Leonard Tang and Dan Ley}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=LE4AN1FGjJ} }
It is well-known that modern computer vision systems often exhibit behaviors misaligned with those of humans: from adversarial attacks to image corruptions, deep learning vision models suffer in a variety of settings that humans capably handle. In light of these phenomena, here we introduce another, orthogonal perspective studying the human-machine vision gap. We revisit the task of recovering images under degradation, first introduced over 30 years ago in the Recognition-by-Components theory of human vision. Specifically, we study the performance and behavior of neural networks on the seemingly simple task of classifying regular polygons at varying orders of degradation along their perimeters. To this end, we implement the Automated Shape Recoverability Test for rapidly generating large-scale datasets of perimeter-degraded regular polygons, modernizing the historically manual creation of image recoverability experiments. We then investigate the capacity of neural networks to recognize and recover such degraded shapes when initialized with different priors. Ultimately, we find that neural networks' behavior on this simple task conflicts with human behavior, raising a fundamental question about the robustness and learning capabilities of modern computer vision models.
Degraded Polygons Raise Fundamental Questions of Neural Network Perception
[ "Leonard Tang", "Dan Ley" ]
Track/Datasets_and_Benchmarks
poster
2306.04955
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=L9I9FhHfS3
@inproceedings{ schumann2023consensus, title={Consensus and Subjectivity of Skin Tone Annotation for {ML} Fairness}, author={Candice Schumann and Gbolahan Oluwafemi Olanubi and Auriel Wright and Ellis Monk and Courtney Heldreth and Susanna Ricco}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=L9I9FhHfS3} }
Understanding different human attributes and how they affect model behavior may become a standard need for all model creation and usage, from traditional computer vision tasks to the newest multimodal generative AI systems. In computer vision specifically, we have relied on datasets augmented with perceived attribute signals (e.g., gender presentation, skin tone, and age) and benchmarks enabled by these datasets. Typically, labels for these tasks come from human annotators. However, annotating attribute signals, especially skin tone, is a difficult and subjective task. Perceived skin tone is affected by technical factors, like lighting conditions, and social factors that shape an annotator's lived experience. This paper examines the subjectivity of skin tone annotation through a series of annotation experiments using the Monk Skin Tone (MST) scale, a small pool of professional photographers, and a much larger pool of trained crowdsourced annotators. Along with this study, we release the Monk Skin Tone Examples (MST-E) dataset, containing 1515 images and 31 videos spread across the full MST scale. MST-E is designed to help train human annotators to annotate MST effectively. Our study shows that annotators can reliably annotate skin tone in a way that aligns with an expert in the MST scale, even under challenging environmental conditions. We also find evidence that annotators from different geographic regions rely on different mental models of MST categories, resulting in annotations that systematically vary across regions. Given this, we advise practitioners to use a diverse set of annotators and a higher replication count for each image when annotating skin tone for fairness research.
Consensus and Subjectivity of Skin Tone Annotation for ML Fairness
[ "Candice Schumann", "Gbolahan Oluwafemi Olanubi", "Auriel Wright", "Ellis Monk", "Courtney Heldreth", "Susanna Ricco" ]
Track/Datasets_and_Benchmarks
poster
2305.09073
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=Kn6VRkYqYk
@inproceedings{ recasens2023the, title={The Drunkard{\textquoteright}s Odometry: Estimating Camera Motion in Deforming Scenes}, author={David Recasens and Martin R. Oswald and Marc Pollefeys and Javier Civera}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=Kn6VRkYqYk} }
Estimating camera motion in deformable scenes poses a complex and open research challenge. Most existing non-rigid structure-from-motion techniques assume that static scene parts are observed alongside deforming scene parts in order to establish an anchoring reference. However, this assumption does not hold true in certain relevant application cases such as endoscopies. Deformable odometry and SLAM pipelines, which tackle the most challenging scenario of exploratory trajectories, suffer from a lack of robustness and proper quantitative evaluation methodologies. To tackle this issue with a common benchmark, we introduce the Drunkard's Dataset, a challenging collection of synthetic data targeting visual navigation and reconstruction in deformable environments. This dataset is the first large set of exploratory camera trajectories with ground truth inside 3D scenes where every surface exhibits non-rigid deformations over time. Simulations in realistic 3D buildings let us obtain a vast amount of data and ground truth labels, including camera poses, RGB images and depth, optical flow, and normal maps at high resolution and quality. We further present a novel deformable odometry method, dubbed the Drunkard's Odometry, which decomposes optical flow estimates into rigid-body camera motion and non-rigid scene deformations. In order to validate our data, our work contains an evaluation of several baselines as well as a novel tracking error metric which does not require ground truth data. Dataset and code: https://davidrecasens.github.io/TheDrunkard'sOdometry/
The Drunkard’s Odometry: Estimating Camera Motion in Deforming Scenes
[ "David Recasens", "Martin R. Oswald", "Marc Pollefeys", "Javier Civera" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=KZjSvE2mJz
@inproceedings{ bauer2023mlfmf, title={{MLFMF}: Data Sets for Machine Learning for Mathematical Formalization}, author={Andrej Bauer and Matej Petkovi{\'c} and Ljupco Todorovski}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=KZjSvE2mJz} }
We introduce MLFMF, a collection of data sets for benchmarking recommendation systems used to support the formalization of mathematics with proof assistants. These systems help humans identify which previous entries (theorems, constructions, datatypes, and postulates) are relevant in proving a new theorem or carrying out a new construction. Each data set is derived from a library of formalized mathematics written in the proof assistants Agda or Lean. The collection includes the largest Lean 4 library, Mathlib, and some of the largest Agda libraries: the standard library, the library of univalent mathematics Agda-unimath, and the TypeTopology library. Each data set represents the corresponding library in two ways: as a heterogeneous network, and as a list of s-expressions representing the syntax trees of all the entries in the library. The network contains the (modular) structure of the library and the references between entries, while the s-expressions give complete and easily parsed information about every entry. We report baseline results using standard graph and word embeddings, tree ensembles, and instance-based learning algorithms. The MLFMF data sets provide solid benchmarking support for further investigation of the numerous machine learning approaches to formalized mathematics. The methodology used to extract the networks and the s-expressions readily applies to other libraries, and is applicable to other proof assistants. With more than $250\,000$ entries in total, this is currently the largest collection of formalized mathematical knowledge in a machine-learnable format.
MLFMF: Data Sets for Machine Learning for Mathematical Formalization
null
Track/Datasets_and_Benchmarks
poster
2310.16005
[ "https://github.com/ul-fmf/mlfmf-data" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=KRBoWULo2w
@inproceedings{ bordes2023pug, title={{PUG}: Photorealistic and Semantically Controllable Synthetic Data for Representation Learning}, author={Florian Bordes and Shashank Shekhar and Mark Ibrahim and Diane Bouchacourt and Pascal Vincent and Ari S. Morcos}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=KRBoWULo2w} }
Synthetic image datasets offer unmatched advantages for designing and evaluating deep neural networks: they make it possible to (i) render as many data samples as needed, (ii) precisely control each scene and yield granular ground truth labels (and captions), and (iii) precisely control distribution shifts between training and testing to isolate variables of interest for sound experimentation. Despite such promise, the use of synthetic image data is still limited -- and often played down -- mainly due to their lack of realism. Most works therefore rely on datasets of real images, which have often been scraped from public images on the internet, and may have issues with regard to privacy, bias, and copyright, while offering little control over how objects precisely appear. In this work, we present a path to democratize the use of photorealistic synthetic data: we develop a new generation of interactive environments for representation learning research that offer both controllability and realism. We use the Unreal Engine, a powerful game engine well known in the entertainment industry, to produce PUG (Photorealistic Unreal Graphics) environments and datasets for representation learning. Using PUG for evaluation and fine-tuning, we demonstrate the potential of PUG both to enable more rigorous evaluations and to improve model training.
PUG: Photorealistic and Semantically Controllable Synthetic Data for Representation Learning
[ "Florian Bordes", "Shashank Shekhar", "Mark Ibrahim", "Diane Bouchacourt", "Pascal Vincent", "Ari S. Morcos" ]
Track/Datasets_and_Benchmarks
poster
2308.03977
[ "https://github.com/facebookresearch/pug" ]
https://huggingface.co/papers/2308.03977
0
0
0
6
[]
[ "facebook/PUG_SPAR", "facebook/PUG_Animals", "facebook/PUG_ImageNet" ]
[]
1
null
https://openreview.net/forum?id=Jsc7WSCZd4
@inproceedings{ hsieh2023sugarcrepe, title={SugarCrepe: Fixing Hackable Benchmarks for Vision-Language Compositionality}, author={Cheng-Yu Hsieh and Jieyu Zhang and Zixian Ma and Aniruddha Kembhavi and Ranjay Krishna}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=Jsc7WSCZd4} }
In the last year alone, a surge of new benchmarks to measure $\textit{compositional}$ understanding of vision-language models has permeated the machine learning ecosystem. Given an image, these benchmarks probe a model's ability to identify its associated caption amongst a set of compositional distractors. Surprisingly, we find significant biases in $\textit{all}$ these benchmarks, rendering them hackable. This hackability is so dire that blind models with no access to the image outperform state-of-the-art vision-language models. To remedy this rampant vulnerability, we introduce $\textit{SugarCrepe}$, a new benchmark for vision-language compositionality evaluation. We employ large language models, instead of the rule-based templates used in previous benchmarks, to generate fluent and sensical hard negatives, and utilize an adversarial refinement mechanism to maximally reduce biases. We re-evaluate state-of-the-art models and recently proposed compositionality-inducing strategies, and find that their improvements were hugely overestimated, suggesting that more innovation is needed in this important direction. We release $\textit{SugarCrepe}$ and the code for evaluation at: https://github.com/RAIVNLab/sugar-crepe.
SugarCrepe: Fixing Hackable Benchmarks for Vision-Language Compositionality
[ "Cheng-Yu Hsieh", "Jieyu Zhang", "Zixian Ma", "Aniruddha Kembhavi", "Ranjay Krishna" ]
Track/Datasets_and_Benchmarks
poster
2306.14610
[ "https://github.com/raivnlab/sugar-crepe" ]
https://huggingface.co/papers/2306.14610
2
0
0
5
[]
[ "Aman-J/SugarCrepe_pp" ]
[]
1
null
https://openreview.net/forum?id=JqWtIIaS8n
@inproceedings{ zheng2023lithobench, title={LithoBench: Benchmarking {AI} Computational Lithography for Semiconductor Manufacturing}, author={Su Zheng and Haoyu Yang and Binwu Zhu and Bei Yu and Martin D.F. Wong}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=JqWtIIaS8n} }
Computational lithography provides algorithmic and mathematical support for resolution enhancement in optical lithography, which is the critical step in semiconductor manufacturing. The time-consuming lithography simulation and mask optimization processes limit the practical application of inverse lithography technology (ILT), a promising solution to the challenges of advanced-node lithography. Although various machine learning methods for ILT have shown promise for reducing the computational burden, this field lacks a dataset that can train models thoroughly and evaluate their performance comprehensively. To boost the development of AI-driven computational lithography, we present the LithoBench dataset, a collection of circuit layout tiles for deep-learning-based lithography simulation and mask optimization. LithoBench consists of more than 120k tiles that are cropped from real circuit designs or synthesized according to the layout topologies of well-known ILT test cases. The ground truths are generated by a widely used academic lithography model and an advanced ILT method. Based on the data, we provide a framework to design and evaluate deep neural networks (DNNs). The framework is used to benchmark state-of-the-art models on lithography simulation and mask optimization. We hope LithoBench can promote the research and development of computational lithography. LithoBench is available at https://anonymous.4open.science/r/lithobench-APPL.
LithoBench: Benchmarking AI Computational Lithography for Semiconductor Manufacturing
[ "Su Zheng", "Haoyu Yang", "Binwu Zhu", "Bei Yu", "Martin D.F. Wong" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=JVlWseddak
@inproceedings{ mangalam2023egoschema, title={EgoSchema: A Diagnostic Benchmark for Very Long-form Video Language Understanding}, author={Karttikeya Mangalam and Raiymbek Akshulakov and Jitendra Malik}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=JVlWseddak} }
We introduce EgoSchema, a very long-form video question-answering dataset, and benchmark to evaluate long video understanding capabilities of modern vision and language systems. Derived from Ego4D, EgoSchema consists of over 5000 human curated multiple choice question answer pairs, spanning over 250 hours of real video data, covering a very broad range of natural human activity and behavior. For each question, EgoSchema requires the correct answer to be selected between five given options based on a three-minute-long video clip. While some prior works have proposed video datasets with long clip lengths, we posit that merely the length of the video clip does not truly capture the temporal difficulty of the video task that is being considered. To remedy this, we introduce temporal certificate sets, a general notion for capturing the intrinsic temporal understanding length associated with a broad range of video understanding tasks & datasets. Based on this metric, we find EgoSchema to have intrinsic temporal lengths over 5.7x longer than the second closest dataset and 10x to 100x longer than any other video understanding dataset. Further, our evaluation of several current state-of-the-art video and language models shows them to be severely lacking in long-term video understanding capabilities. Even models with several billions of parameters achieve QA accuracy less than 33% (random is 20%) on the EgoSchema multi-choice question answering task, while humans achieve about 76% accuracy. We posit that EgoSchema, with its long intrinsic temporal structures and diverse complexity, would serve as a valuable evaluation probe for developing effective long-term video understanding systems in the future. Data and Zero-shot model evaluation code will all be open-sourced under the Ego4D license at http://egoschema.github.io.
EgoSchema: A Diagnostic Benchmark for Very Long-form Video Language Understanding
null
Track/Datasets_and_Benchmarks
oral
2308.09126
[ "https://github.com/egoschema/egoschema" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=JGVSxwKHbq
@inproceedings{ ramaswamy2023geode, title={Geo{DE}: a Geographically Diverse Evaluation Dataset for Object Recognition}, author={Vikram V. Ramaswamy and Sing Yu Lin and Dora Zhao and Aaron Bryan Adcock and Laurens van der Maaten and Deepti Ghadiyaram and Olga Russakovsky}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=JGVSxwKHbq} }
Current dataset collection methods typically scrape large amounts of data from the web. While this technique is extremely scalable, data collected in this way tends to reinforce stereotypical biases, can contain personally identifiable information, and typically originates from Europe and North America. In this work, we rethink the dataset collection paradigm and introduce GeoDE, a geographically diverse dataset with 61,940 images from 40 classes and 6 world regions, and no personally identifiable information, collected by soliciting images from people across the world. We analyse GeoDE to understand differences in images collected in this manner compared to web-scraping. Despite the smaller size of this dataset, we demonstrate its use as both an evaluation and training dataset, allowing us to highlight shortcomings in current models, as well as demonstrate improved performance even when training on this small dataset. We release the full dataset and code at https://geodiverse-data-collection.cs.princeton.edu/
GeoDE: a Geographically Diverse Evaluation Dataset for Object Recognition
[ "Vikram V. Ramaswamy", "Sing Yu Lin", "Dora Zhao", "Aaron Bryan Adcock", "Laurens van der Maaten", "Deepti Ghadiyaram", "Olga Russakovsky" ]
Track/Datasets_and_Benchmarks
poster
2301.02560
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=IptxZvA3at
@inproceedings{ lacoste2023geobench, title={{GEO}-Bench: Toward Foundation Models for Earth Monitoring}, author={Alexandre Lacoste and Nils Lehmann and Pau Rodriguez and Evan David Sherwin and Hannah Kerner and Bj{\"o}rn L{\"u}tjens and Jeremy Andrew Irvin and David Dao and Hamed Alemohammad and Alexandre Drouin and Mehmet Gunturkun and Gabriel Huang and David Vazquez and Dava Newman and Yoshua Bengio and Stefano Ermon and Xiao Xiang Zhu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=IptxZvA3at} }
Recent progress in self-supervision has shown that pre-training large neural networks on vast amounts of unsupervised data can lead to substantial increases in generalization to downstream tasks. Such models, recently coined foundation models, have been transformational to the field of natural language processing. Variants have also been proposed for image data, but their applicability to remote sensing tasks is limited. To stimulate the development of foundation models for Earth monitoring, we propose a benchmark comprised of six classification and six segmentation tasks, which were carefully curated and adapted to be both relevant to the field and well-suited for model evaluation. We accompany this benchmark with a robust methodology for evaluating models and reporting aggregated results to enable a reliable assessment of progress. Finally, we report results for 20 baselines to gain information about the performance of existing models. We believe that this benchmark will be a driver of progress across a variety of Earth monitoring tasks.
GEO-Bench: Toward Foundation Models for Earth Monitoring
[ "Alexandre Lacoste", "Nils Lehmann", "Pau Rodriguez", "Evan David Sherwin", "Hannah Kerner", "Björn Lütjens", "Jeremy Andrew Irvin", "David Dao", "Hamed Alemohammad", "Alexandre Drouin", "Mehmet Gunturkun", "Gabriel Huang", "David Vazquez", "Dava Newman", "Yoshua Bengio", "Stefano Ermon", "Xiao Xiang Zhu" ]
Track/Datasets_and_Benchmarks
poster
2306.03831
[ "https://github.com/servicenow/geo-bench" ]
-1
-1
-1
-1
[]
[]
[]
0
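To make the idea of "reporting aggregated results" concrete, here is a small illustrative sketch that min-max normalizes each task's metric before averaging per model; this is one common aggregation choice and not necessarily GEO-Bench's exact protocol, and the task and model names are made up.

```python
# Illustrative aggregation across tasks: min-max normalize each task's metric,
# then average per model. A common choice, not necessarily the exact GEO-Bench
# reporting protocol; task and model names below are made up.
import numpy as np

def aggregate(scores):
    """scores: {task_name: {model_name: metric, ...}, ...} with higher = better."""
    models = sorted({m for per_task in scores.values() for m in per_task})
    normalized = {m: [] for m in models}
    for per_task in scores.values():
        lo, hi = min(per_task.values()), max(per_task.values())
        for m, v in per_task.items():
            normalized[m].append((v - lo) / (hi - lo + 1e-12))
    return {m: float(np.mean(vals)) for m, vals in normalized.items()}

print(aggregate({
    "crop-segmentation":    {"resnet50": 0.61, "vit-b": 0.66},
    "cloud-classification": {"resnet50": 0.88, "vit-b": 0.84},
}))
```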
null
https://openreview.net/forum?id=IiRHQ7gvnq
@inproceedings{ bai2023benchmarking, title={Benchmarking Foundation Models with Language-Model-as-an-Examiner}, author={Yushi Bai and Jiahao Ying and Yixin Cao and Xin Lv and Yuze He and Xiaozhi Wang and Jifan Yu and Kaisheng Zeng and Yijia Xiao and Haozhe Lyu and Jiayin Zhang and Juanzi Li and Lei Hou}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=IiRHQ7gvnq} }
Numerous benchmarks have been established to assess the performance of foundation models on open-ended question answering, which serves as a comprehensive test of a model's ability to understand and generate language in a manner similar to humans. Most of these works focus on proposing new datasets; however, we see two main issues within previous benchmarking pipelines, namely testing leakage and evaluation automation. In this paper, we propose a novel benchmarking framework, Language-Model-as-an-Examiner, where the LM serves as a knowledgeable examiner that formulates questions based on its knowledge and evaluates responses in a reference-free manner. Our framework allows for effortless extensibility as various LMs can be adopted as the examiner, and the questions can be constantly updated given more diverse trigger topics. For a more comprehensive and equitable evaluation, we devise three strategies: (1) We instruct the LM examiner to generate questions across a multitude of domains to probe for a broad acquisition, and raise follow-up questions to engage in a more in-depth assessment. (2) Upon evaluation, the examiner combines both scoring and ranking measurements, providing a reliable result as it aligns closely with human annotations. (3) We additionally propose a decentralized Peer-examination method to address the biases in a single examiner. Our data and benchmarking results are available at: http://lmexam.xlore.cn.
Benchmarking Foundation Models with Language-Model-as-an-Examiner
[ "Yushi Bai", "Jiahao Ying", "Yixin Cao", "Xin Lv", "Yuze He", "Xiaozhi Wang", "Jifan Yu", "Kaisheng Zeng", "Yijia Xiao", "Haozhe Lyu", "Jiayin Zhang", "Juanzi Li", "Lei Hou" ]
Track/Datasets_and_Benchmarks
poster
2306.04181
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=Icxwnu9hcO
@inproceedings{ zheng2023nisd, title={{NIS}3D: A Completely Annotated Benchmark for Dense 3D Nuclei Image Segmentation}, author={Wei Zheng and James Cheng Peng and Zeyuan Hou and Boyu Lyu and Mengfan Wang and Xuelong Mi and Shuoxuan Qiao and Yinan Wan and Guoqiang Yu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=Icxwnu9hcO} }
3D segmentation of nuclei images is a fundamental task for many biological studies. Despite the rapid advances of large-volume 3D imaging acquisition methods and the emergence of sophisticated algorithms to segment the nuclei in recent years, a benchmark with all cells completely annotated is still missing, making it hard to accurately assess and further improve the performance of the algorithms. The existing nuclei segmentation benchmarks either worked on 2D only or annotated a small number of 3D cells, perhaps due to the high cost of 3D annotation for large-scale data. To fulfill the critical need, we constructed NIS3D, a 3D, high cell density, large-volume, and completely annotated Nuclei Image Segmentation benchmark, assisted by our newly designed semi-automatic annotation software. NIS3D provides more than 22,000 cells across several of the most commonly used species in this area. Each cell is labeled by three independent annotators, so we can measure the variability of each annotation. A confidence score is computed for each cell, allowing more nuanced testing and performance comparison. A comprehensive review of methods for segmenting dense 3D nuclei was conducted. The benchmark was used to evaluate the performance of several selected state-of-the-art segmentation algorithms. The best current methods are still far from human-level accuracy, corroborating the necessity of generating such a benchmark. The testing results also demonstrated the strengths and weaknesses of each method and pointed out directions for further methodological development. The dataset can be downloaded here: https://github.com/yu-lab-vt/NIS3D.
NIS3D: A Completely Annotated Benchmark for Dense 3D Nuclei Image Segmentation
[ "Wei Zheng", "James Cheng Peng", "Zeyuan Hou", "Boyu Lyu", "Mengfan Wang", "Xuelong Mi", "Shuoxuan Qiao", "Yinan Wan", "Guoqiang Yu" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=HvcLKgtbco
@inproceedings{ bai2023towards, title={Towards a Comprehensive Benchmark for High-Level Synthesis Targeted to {FPGA}s}, author={Yunsheng Bai and Atefeh Sohrabizadeh and Zongyue Qin and Ziniu Hu and Yizhou Sun and Jason Cong}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=HvcLKgtbco} }
High-level synthesis (HLS) aims to raise the abstraction layer in hardware design, enabling the design of domain-specific accelerators (DSAs) like field-programmable gate arrays (FPGAs) using C/C++ instead of hardware description languages (HDLs). Compiler directives in the form of pragmas play a crucial role in modifying the microarchitecture within the HLS framework. However, the space of possible microarchitectures grows exponentially with the number of pragmas. Moreover, the evaluation of each candidate design using the HLS tool consumes significant time, ranging from minutes to hours, leading to a time-consuming optimization process. To accelerate this process, machine learning models have been used to predict design quality in milliseconds. However, existing open-source datasets for training such models are limited in terms of design complexity and available optimizations. In this paper, we present HLSyn, the first benchmark that addresses these limitations. It contains more complex programs with a wider range of optimization pragmas, making it a comprehensive dataset for training and evaluating design quality prediction models. The HLSyn benchmark consists of 42 unique programs/kernels, resulting in over 42,000 labeled designs. We conduct an extensive comparison of state-of-the-art baselines to assess their effectiveness in predicting design quality. As an ongoing project, we anticipate expanding the HLSyn benchmark in terms of both quantity and variety of programs to further support the development of this field.
Towards a Comprehensive Benchmark for High-Level Synthesis Targeted to FPGAs
[ "Yunsheng Bai", "Atefeh Sohrabizadeh", "Zongyue Qin", "Ziniu Hu", "Yizhou Sun", "Jason Cong" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=HuG4eOFLO9
@inproceedings{ dong2023bullyingk, title={Bullying10K: A Large-Scale Neuromorphic Dataset towards Privacy-Preserving Bullying Recognition}, author={Yiting Dong and Yang Li and Dongcheng Zhao and Guobin Shen and Yi Zeng}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=HuG4eOFLO9} }
The prevalence of violence in daily life poses significant threats to individuals' physical and mental well-being. Using surveillance cameras in public spaces has proven effective in proactively deterring and preventing such incidents. However, concerns regarding privacy invasion have emerged due to their widespread deployment. To address this problem, we leverage Dynamic Vision Sensor (DVS) cameras to detect violent incidents while preserving privacy, since they capture pixel brightness variations instead of static imagery. We introduce the Bullying10K dataset, encompassing various actions, complex movements, and occlusions from real-life scenarios. It provides three benchmarks for evaluating different tasks: action recognition, temporal action localization, and pose estimation. With 10,000 event segments, totaling 12 billion events and 255 GB of data, Bullying10K contributes significantly by balancing violence detection and personal privacy preservation. It also poses a new challenge for neuromorphic datasets. It will serve as a valuable resource for training and developing privacy-protecting video systems. Bullying10K opens new possibilities for innovative approaches in these domains.
Bullying10K: A Large-Scale Neuromorphic Dataset towards Privacy-Preserving Bullying Recognition
[ "Yiting Dong", "Yang Li", "Dongcheng Zhao", "Guobin Shen", "Yi Zeng" ]
Track/Datasets_and_Benchmarks
poster
2306.11546
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=Hm1Ih3uLII
@inproceedings{ li2023dvsod, title={{DVSOD}: {RGB}-D Video Salient Object Detection}, author={Jingjing Li and Wei Ji and Size Wang and Wenbo Li and Li Cheng}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=Hm1Ih3uLII} }
Salient object detection (SOD) aims to identify standout elements in a scene, with recent advancements primarily focused on integrating depth data (RGB-D) or temporal data from videos to enhance SOD in complex scenes. However, the joint use of these two types of crucial information remains largely underexplored due to data constraints. To bridge this gap, in this work we introduce the DViSal dataset, fueling further research in the emerging field of RGB-D video salient object detection (DVSOD). Our dataset features 237 diverse RGB-D videos alongside comprehensive annotations, including object and instance-level markings, as well as bounding boxes and scribbles. These resources enable a broad scope for potential research directions. We also conduct benchmarking experiments using various SOD models, affirming the efficacy of multimodal video input for salient object detection. Lastly, we highlight some intriguing findings and promising future research avenues. To foster growth in this field, our dataset and benchmark results are publicly accessible at: https://dvsod.github.io/.
DVSOD: RGB-D Video Salient Object Detection
[ "Jingjing Li", "Wei Ji", "Size Wang", "Wenbo Li", "Li Cheng" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=HhcQ0zeqZp
@inproceedings{ liu2023benchmarking, title={Benchmarking Large Language Models on {CME}xam - A comprehensive Chinese Medical Exam Dataset}, author={Junling Liu and Peilin Zhou and Yining Hua and Dading Chong and Zhongyu Tian and Andrew Liu and Helin Wang and Chenyu You and Zhenhua Guo and Zhu Lei and Michael Lingzhi Li}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=HhcQ0zeqZp} }
Recent advancements in large language models (LLMs) have transformed the field of question answering (QA). However, evaluating LLMs in the medical field is challenging due to the lack of standardized and comprehensive datasets. To address this gap, we introduce CMExam, sourced from the Chinese National Medical Licensing Examination. CMExam consists of 60K+ multiple-choice questions for standardized and objective evaluations, as well as solution explanations for model reasoning evaluation in an open-ended manner. For in-depth analyses of LLMs, we invited medical professionals to label five additional question-wise annotations, including disease groups, clinical departments, medical disciplines, areas of competency, and question difficulty levels. Alongside the dataset, we further conducted thorough experiments with representative LLMs and QA algorithms on CMExam. The results show that GPT-4 had the best accuracy of 61.6% and a weighted F1 score of 0.617. These results highlight a great disparity when compared to human accuracy, which stood at 71.6%. For explanation tasks, while LLMs could generate relevant reasoning and demonstrate improved performance after finetuning, they fall short of a desired standard, indicating ample room for improvement. To the best of our knowledge, CMExam is the first Chinese medical exam dataset to provide comprehensive medical annotations. The experiments and findings of LLM evaluation also provide valuable insights into the challenges and potential solutions in developing Chinese medical QA systems and LLM evaluation pipelines.
Benchmarking Large Language Models on CMExam - A comprehensive Chinese Medical Exam Dataset
[ "Junling Liu", "Peilin Zhou", "Yining Hua", "Dading Chong", "Zhongyu Tian", "Andrew Liu", "Helin Wang", "Chenyu You", "Zhenhua Guo", "Zhu Lei", "Michael Lingzhi Li" ]
Track/Datasets_and_Benchmarks
poster
[ "https://github.com/williamliujl/cmexam" ]
-1
-1
-1
-1
[]
[]
[]
0
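The CMExam results above are reported as accuracy and weighted F1 over multiple-choice predictions; the short sketch below shows how such metrics are typically computed with scikit-learn. The letter-option encoding and the values are illustrative assumptions, not taken from the paper.

```python
# Computing the two headline metrics reported above (accuracy and weighted F1)
# for multiple-choice predictions with scikit-learn; the options and values
# here are illustrative only.
from sklearn.metrics import accuracy_score, f1_score

y_true = ["A", "C", "B", "D", "A", "E"]   # gold options
y_pred = ["A", "C", "D", "D", "B", "E"]   # model predictions

print("accuracy:   ", accuracy_score(y_true, y_pred))
print("weighted F1:", f1_score(y_true, y_pred, average="weighted"))
```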
null
https://openreview.net/forum?id=HfKOIPCvsv
@inproceedings{ kasai2023realtime, title={RealTime {QA}: What's the Answer Right Now?}, author={Jungo Kasai and Keisuke Sakaguchi and yoichi takahashi and Ronan Le Bras and Akari Asai and Xinyan Velocity Yu and Dragomir Radev and Noah A. Smith and Yejin Choi and Kentaro Inui}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=HfKOIPCvsv} }
We introduce RealTime QA, a dynamic question answering (QA) platform that announces questions and evaluates systems on a regular basis (weekly in this version). RealTime QA inquires about the current world, and QA systems need to answer questions about novel events or information. It therefore challenges static, conventional assumptions in open-domain QA datasets and pursues instantaneous applications. We build strong baseline models upon large pretrained language models, including GPT-3 and T5. Our benchmark is an ongoing effort, and this paper presents real-time evaluation results over the past year. Our experimental results show that GPT-3 can often properly update its generation results, based on newly-retrieved documents, highlighting the importance of up-to-date information retrieval. Nonetheless, we find that GPT-3 tends to return outdated answers when retrieved documents do not provide sufficient information to find an answer. This suggests an important avenue for future research: can an open-domain QA system identify such unanswerable cases and communicate with the user or even the retrieval module to modify the retrieval results? We hope that RealTime QA will spur progress in instantaneous applications of question answering and beyond.
RealTime QA: What's the Answer Right Now?
[ "Jungo Kasai", "Keisuke Sakaguchi", "yoichi takahashi", "Ronan Le Bras", "Akari Asai", "Xinyan Velocity Yu", "Dragomir Radev", "Noah A. Smith", "Yejin Choi", "Kentaro Inui" ]
Track/Datasets_and_Benchmarks
poster
2207.13332
[ "https://github.com/realtimeqa/realtimeqa_public" ]
https://huggingface.co/papers/2207.13332
0
1
0
10
[]
[ "monsoon-nlp/relive-qa" ]
[]
1
null
https://openreview.net/forum?id=HYEGXFnPoq
@inproceedings{ patraucean2023perception, title={Perception Test: A Diagnostic Benchmark for Multimodal Video Models}, author={Viorica Patraucean and Lucas Smaira and Ankush Gupta and Adria Recasens Continente and Larisa Markeeva and Dylan Sunil Banarse and Skanda Koppula and Joseph Heyward and Mateusz Malinowski and Yi Yang and Carl Doersch and Tatiana Matejovicova and Yury Sulsky and Antoine Miech and Alexandre Fr{\'e}chette and Hanna Klimczak and Raphael Koster and Junlin Zhang and Stephanie Winkler and Yusuf Aytar and Simon Osindero and Dima Damen and Andrew Zisserman and Joao Carreira}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=HYEGXFnPoq} }
We propose a novel multimodal video benchmark - the Perception Test - to evaluate the perception and reasoning skills of pre-trained multimodal models (e.g. Flamingo, BEiT-3, or GPT-4). Compared to existing benchmarks that focus on computational tasks (e.g. classification, detection or tracking), the Perception Test focuses on skills (Memory, Abstraction, Physics, Semantics) and types of reasoning (descriptive, explanatory, predictive, counterfactual) across video, audio, and text modalities, to provide a comprehensive and efficient evaluation tool. The benchmark probes pre-trained models for their transfer capabilities, in a zero-shot / few-shot or limited finetuning regime. For these purposes, the Perception Test introduces 11.6k real-world videos, 23s average length, designed to show perceptually interesting situations, filmed by around 100 participants worldwide. The videos are densely annotated with six types of labels (multiple-choice and grounded video question-answers, object and point tracks, temporal action and sound segments), enabling both language and non-language evaluations. The fine-tuning and validation splits of the benchmark are publicly available (CC-BY license), in addition to a challenge server with a held-out test split. Human baseline results compared to state-of-the-art video QA models show a substantial gap in performance (91.4% vs 46.2%), suggesting that there is significant room for improvement in multimodal video understanding. Dataset, baselines code, and challenge server are available at https://github.com/deepmind/perception_test
Perception Test: A Diagnostic Benchmark for Multimodal Video Models
[ "Viorica Patraucean", "Lucas Smaira", "Ankush Gupta", "Adria Recasens Continente", "Larisa Markeeva", "Dylan Sunil Banarse", "Skanda Koppula", "Joseph Heyward", "Mateusz Malinowski", "Yi Yang", "Carl Doersch", "Tatiana Matejovicova", "Yury Sulsky", "Antoine Miech", "Alexandre Fréchette", "Hanna Klimczak", "Raphael Koster", "Junlin Zhang", "Stephanie Winkler", "Yusuf Aytar", "Simon Osindero", "Dima Damen", "Andrew Zisserman", "Joao Carreira" ]
Track/Datasets_and_Benchmarks
poster
2305.13786
[ "https://github.com/deepmind/perception_test" ]
https://huggingface.co/papers/2305.13786
2
1
0
24
[]
[]
[]
1
null
https://openreview.net/forum?id=H2Yb28qGLV
@inproceedings{ steshin2023lohi, title={Lo-Hi: Practical {ML} Drug Discovery Benchmark}, author={Simon Steshin}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=H2Yb28qGLV} }
Finding new drugs is getting harder and harder. One of the hopes of drug discovery is to use machine learning models to predict molecular properties. That is why models for molecular property prediction are being developed and tested on benchmarks such as MoleculeNet. However, existing benchmarks are unrealistic and are too different from applying the models in practice. We have created a new practical \emph{Lo-Hi} benchmark consisting of two tasks: Lead Optimization (Lo) and Hit Identification (Hi), corresponding to the real drug discovery process. For the Hi task, we designed a novel molecular splitting algorithm that solves the Balanced Vertex Minimum $k$-Cut problem. We tested state-of-the-art and classic ML models, revealing which works better under practical settings. We analyzed modern benchmarks and showed that they are unrealistic and overoptimistic. Review: https://openreview.net/forum?id=H2Yb28qGLV Lo-Hi benchmark: https://github.com/SteshinSS/lohi_neurips2023 Lo-Hi splitter library: https://github.com/SteshinSS/lohi_splitter
Lo-Hi: Practical ML Drug Discovery Benchmark
[ "Simon Steshin" ]
Track/Datasets_and_Benchmarks
poster
2310.06399
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
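As a rough illustration of the Hi-task constraint (test molecules should not be near-duplicates of training molecules), the sketch below assigns whole Butina similarity clusters to one side of the split using RDKit. It is not the paper's Balanced Vertex Minimum k-Cut solver, and the 0.4 Tanimoto cutoff and 20% test fraction are arbitrary assumptions.

```python
# Rough illustration of the Hi-task idea with RDKit: cluster molecules by
# Tanimoto similarity (Butina) and assign whole clusters to train or test,
# which reduces (but does not strictly eliminate) near-duplicates across the
# split. NOT the paper's Balanced Vertex Minimum k-Cut solver; the 0.4
# similarity cutoff and 20% test fraction are arbitrary assumptions.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from rdkit.ML.Cluster import Butina

def cluster_split(smiles_list, test_fraction=0.2, sim_threshold=0.4):
    fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, 2048)
           for s in smiles_list]
    # Condensed lower-triangle matrix of distances (1 - Tanimoto similarity).
    dists = [1.0 - DataStructs.TanimotoSimilarity(fps[i], fps[j])
             for i in range(1, len(fps)) for j in range(i)]
    clusters = Butina.ClusterData(dists, len(fps), 1.0 - sim_threshold, isDistData=True)
    train, test = [], []
    for cluster in sorted(clusters, key=len):            # fill test with small clusters first
        bucket = test if len(test) < test_fraction * len(smiles_list) else train
        bucket.extend(smiles_list[i] for i in cluster)
    return train, test
```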
null
https://openreview.net/forum?id=GjNvvswoUL
@inproceedings{ aroyo2023dices, title={{DICES} Dataset: Diversity in Conversational {AI} Evaluation for Safety}, author={Lora Aroyo and Alex Taylor and Mark Diaz and Christopher M Homan and Alicia Parrish and Greg Serapio-Garcia and Vinodkumar Prabhakaran and Ding Wang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=GjNvvswoUL} }
Machine learning approaches often require training and evaluation datasets with a clear separation between positive and negative examples. This requirement overly simplifies the natural subjectivity present in many tasks, and obscures the inherent diversity in human perceptions and opinions about many content items. Preserving the variance in content and diversity in human perceptions in datasets is often quite expensive and laborious. This is especially troubling when building safety datasets for conversational AI systems, as safety is socio-culturally situated in this context. To demonstrate this crucial aspect of conversational AI safety, and to facilitate in-depth model performance analyses, we introduce the DICES (Diversity In Conversational AI Evaluation for Safety) dataset that contains fine-grained demographics information about raters, high replication of ratings per item to ensure statistical power for analyses, and encodes rater votes as distributions across different demographics to allow for in-depth explorations of different aggregation strategies. The DICES dataset enables the observation and measurement of variance, ambiguity, and diversity in the context of safety for conversational AI. We further describe a set of metrics that show how rater diversity influences safety perception across different geographic regions, ethnicity groups, age groups, and genders. The goal of the DICES dataset is to be used as a shared resource and benchmark that respects diverse perspectives during safety evaluation of conversational AI systems.
DICES Dataset: Diversity in Conversational AI Evaluation for Safety
null
Track/Datasets_and_Benchmarks
poster
2306.11247
[ "https://github.com/google-research-datasets/dices-dataset" ]
-1
-1
-1
-1
[]
[]
[]
0
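A small pandas sketch of the "rater votes as distributions across demographics" idea follows; the column names (item_id, rater_age_group, safety_rating) are placeholders for illustration, not the released DICES schema.

```python
# Pandas sketch of turning raw rater votes into per-item rating distributions
# broken down by a demographic slice. Column names are placeholders, not the
# released DICES schema.
import pandas as pd

ratings = pd.DataFrame({
    "item_id":         [1, 1, 1, 1, 2, 2, 2, 2],
    "rater_age_group": ["18-34", "18-34", "35-54", "35-54"] * 2,
    "safety_rating":   ["safe", "unsafe", "unsafe", "unsafe",
                        "safe", "safe", "unsafe", "safe"],
})

# For each (item, demographic group), the share of raters choosing each label.
dist = (ratings
        .groupby(["item_id", "rater_age_group"])["safety_rating"]
        .value_counts(normalize=True)
        .unstack(fill_value=0.0))
print(dist)
```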
null
https://openreview.net/forum?id=GSuP99u2kR
@inproceedings{ li2023llavamed, title={{LL}a{VA}-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day}, author={Chunyuan Li and Cliff Wong and Sheng Zhang and Naoto Usuyama and Haotian Liu and Jianwei Yang and Tristan Naumann and Hoifung Poon and Jianfeng Gao}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=GSuP99u2kR} }
Conversational generative AI has demonstrated remarkable promise for empowering biomedical practitioners, but current investigations focus on unimodal text. Multimodal conversational AI has seen rapid progress by leveraging billions of image-text pairs from the public web, but such general-domain vision-language models still lack sophistication in understanding and conversing about biomedical images. In this paper, we propose a cost-efficient approach for training a vision-language conversational assistant that can answer open-ended research questions of biomedical images. The key idea is to leverage a large-scale, broad-coverage biomedical figure-caption dataset extracted from PubMed Central, use GPT-4 to self-instruct open-ended instruction-following data from the captions, and then fine-tune a large general-domain vision-language model using a novel curriculum learning method. Specifically, the model first learns to align biomedical vocabulary using the figure-caption pairs as is, then learns to master open-ended conversational semantics using GPT-4 generated instruction-following data, broadly mimicking how a layperson gradually acquires biomedical knowledge. This enables us to train a Large Language and Vision Assistant for BioMedicine (LLaVA-Med) in less than 15 hours (with eight A100s). LLaVA-Med exhibits excellent multimodal conversational capability and can follow open-ended instruction to assist with inquiries about a biomedical image. On three standard biomedical visual question answering datasets, LLaVA-Med outperforms previous supervised state-of-the-art on certain metrics. To facilitate biomedical multimodal research, we will release our instruction-following data and the LLaVA-Med model.
LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day
[ "Chunyuan Li", "Cliff Wong", "Sheng Zhang", "Naoto Usuyama", "Haotian Liu", "Jianwei Yang", "Tristan Naumann", "Hoifung Poon", "Jianfeng Gao" ]
Track/Datasets_and_Benchmarks
oral
2306.00890
[ "" ]
https://huggingface.co/papers/2306.00890
5
10
1
9
[ "microsoft/llava-med-7b-delta", "microsoft/llava-med-v1.5-mistral-7b", "katielink/llava-med-7b-vqarad-delta", "katielink/llava-med-7b-pathvqa-delta", "katielink/llava-med-7b-slake-delta", "saurabh-straive/llava_100k_finetuned", "Straive/llava-1.5-13b-lora-100k-8-mar", "GDinesh/llava-1-5", "saurabh-straive/llava-1-5", "cifope/llava-med-tesseract-v1.5-mistral-7b", "starriver030515/LLaVA" ]
[]
[ "dwb2023/model_explorer2", "nassarx/microsoft-llava-med-7b-delta", "dwb2023/model_explorer4", "ERICQIU/microsoft-llava-med-7b-delta", "leilaaaaa/newapp", "hossamelden/try1", "Ozhou/ZAK", "Aranya31/ft_LLaVA-Med" ]
1
null
https://openreview.net/forum?id=GF84C0z45H
@inproceedings{ zhu2023genimage, title={GenImage: A Million-Scale Benchmark for Detecting {AI}-Generated Image}, author={Mingjian Zhu and Hanting Chen and Qiangyu YAN and Xudong Huang and Guanyu Lin and Wei Li and Zhijun Tu and Hailin Hu and Jie Hu and Yunhe Wang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=GF84C0z45H} }
The extraordinary ability of generative models to generate photographic images has intensified concerns about the spread of disinformation, thereby leading to the demand for detectors capable of distinguishing between AI-generated fake images and real images. However, the lack of large datasets containing images from the most advanced image generators poses an obstacle to the development of such detectors. In this paper, we introduce the GenImage dataset, which has the following advantages: 1) Plenty of Images, including over one million pairs of AI-generated fake images and collected real images. 2) Rich Image Content, encompassing a broad range of image classes. 3) State-of-the-art Generators, synthesizing images with advanced diffusion models and GANs. The aforementioned advantages allow the detectors trained on GenImage to undergo a thorough evaluation and demonstrate strong applicability to diverse images. We conduct a comprehensive analysis of the dataset and propose two tasks for evaluating the detection method in resembling real-world scenarios. The cross-generator image classification task measures the performance of a detector trained on one generator when tested on the others. The degraded image classification task assesses the capability of the detectors in handling degraded images such as low-resolution, blurred, and compressed images. With the GenImage dataset, researchers can effectively expedite the development and evaluation of superior AI-generated image detectors in comparison to prevailing methodologies.
GenImage: A Million-Scale Benchmark for Detecting AI-Generated Image
[ "Mingjian Zhu", "Hanting Chen", "Qiangyu YAN", "Xudong Huang", "Guanyu Lin", "Wei Li", "Zhijun Tu", "Hailin Hu", "Jie Hu", "Yunhe Wang" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
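A hedged sketch of the cross-generator image classification task described above follows: a real-vs-fake detector is built on images from one generator and evaluated on images from another. The directory names are assumptions and the training loop is omitted; torchvision's ImageFolder expects one subfolder per class.

```python
# Sketch of the cross-generator setup: a binary real-vs-fake detector is built
# on images from one generator and evaluated on another. Directory names are
# assumptions and the training loop is omitted; ImageFolder expects one
# subfolder per class (e.g. "real/" and "fake/").
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("genimage/train_sd14", transform=tfm)       # path assumed
test_set  = datasets.ImageFolder("genimage/test_midjourney", transform=tfm)  # path assumed

model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # two classes: real / fake

def accuracy(model, dataset, batch_size=64):
    model.eval()
    correct = 0
    with torch.no_grad():
        for x, y in DataLoader(dataset, batch_size=batch_size):
            correct += (model(x).argmax(dim=1) == y).sum().item()
    return correct / len(dataset)
```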
null
https://openreview.net/forum?id=GF5l0F19Bt
@inproceedings{ morio2023an, title={An {NLP} Benchmark Dataset for Assessing Corporate Climate Policy Engagement}, author={Gaku Morio and Christopher D Manning}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=GF5l0F19Bt} }
As societal awareness of climate change grows, corporate climate policy engagements are attracting attention. We propose a dataset to estimate corporate climate policy engagement from various PDF-formatted documents. Our dataset comes from LobbyMap (a platform operated by global think tank InfluenceMap) that provides engagement categories and stances on the documents. To convert the LobbyMap data into the structured dataset, we developed a pipeline using text extraction and OCR. Our contributions are: (i) Building an NLP dataset including 10K documents on corporate climate policy engagement. (ii) Analyzing the properties and challenges of the dataset. (iii) Providing experiments for the dataset using pre-trained language models. The results show that while Longformer outperforms baselines and other pre-trained models, there is still room for significant improvement. We hope our work begins to bridge research on NLP and climate change.
An NLP Benchmark Dataset for Assessing Corporate Climate Policy Engagement
[ "Gaku Morio", "Christopher D Manning" ]
Track/Datasets_and_Benchmarks
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=FpK2aQfbyo
@inproceedings{ cui2023milipoint, title={MiliPoint: A Point Cloud Dataset for mmWave Radar}, author={Han Cui and Shu Zhong and Jiacheng Wu and Zichao Shen and Naim Dahnoun and Yiren Zhao}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=FpK2aQfbyo} }
Millimetre-wave (mmWave) radar has emerged as an attractive and cost-effective alternative for human activity sensing compared to traditional camera-based systems. mmWave radars are also non-intrusive, providing better protection for user privacy. However, as a Radio Frequency based technology, mmWave radars rely on capturing reflected signals from objects, making them more prone to noise compared to cameras. This raises an intriguing question for the deep learning community: Can we develop more effective point set-based deep learning methods for such attractive sensors? To answer this question, our work, termed MiliPoint, delves into this idea by providing a large-scale, open dataset for the community to explore how mmWave radars can be utilised for human activity recognition. Moreover, MiliPoint stands out as it is larger in size than existing datasets, has more diverse human actions represented, and encompasses all three key tasks in human activity recognition. We have also established a range of point-based deep neural networks such as DGCNN, PointNet++ and PointTransformer, on MiliPoint, which can serve to set the ground baseline for further development.
MiliPoint: A Point Cloud Dataset for mmWave Radar
[ "Han Cui", "Shu Zhong", "Jiacheng Wu", "Zichao Shen", "Naim Dahnoun", "Yiren Zhao" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=FC0dsvguFi
@inproceedings{ falta2023lungmb, title={Lung250M-4B: A Combined 3D Dataset for {CT}- and Point Cloud-Based Intra-Patient Lung Registration}, author={Fenja Falta and Christoph Gro{\ss}br{\"o}hmer and Alessa Hering and Alexander Bigalke and Mattias P Heinrich}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=FC0dsvguFi} }
A popular benchmark for intra-patient lung registration is provided by the DIR-LAB COPDgene dataset consisting of large-motion in- and expiratory breath-hold CT pairs. This dataset alone, however, does not provide enough samples to properly train state-of-the-art deep learning methods. Other public datasets often also provide only small sample sizes or include primarily small motions between scans that do not translate well to larger deformations. For point-based geometric registration, the PVT1010 dataset provides a large number of vessel point clouds without any correspondences and a labeled test set corresponding to the COPDgene cases. However, the absence of correspondences for supervision complicates training, and a fair comparison with image-based algorithms is infeasible, since CT scans for the training data are not publicly available. We here provide a combined benchmark for image- and point-based registration approaches. We curated a total of 248 public multi-centric in- and expiratory lung CT scans from 124 patients, which show large motion between scans, processed them to ensure sufficient homogeneity between the data and generated vessel point clouds that are well distributed even deeper inside the lungs. For supervised training, we provide vein and artery segmentations of the vessels and multiple thousand image-derived keypoint correspondences for each pair. For validation, we provide multiple scan pairs with manual landmark annotations. Finally, as first baselines on our new benchmark, we evaluate several image and point cloud registration methods on the dataset.
Lung250M-4B: A Combined 3D Dataset for CT- and Point Cloud-Based Intra-Patient Lung Registration
[ "Fenja Falta", "Christoph Großbröhmer", "Alessa Hering", "Alexander Bigalke", "Mattias P Heinrich" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=EPz1DcdPVE
@inproceedings{ charles2023towards, title={Towards Federated Foundation Models: Scalable Dataset Pipelines for Group-Structured Learning}, author={Zachary Charles and Nicole Elyse Mitchell and Krishna Pillutla and Michael Reneer and Zachary Garrett}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=EPz1DcdPVE} }
We introduce Dataset Grouper, a library to create large-scale group-structured (e.g., federated) datasets, enabling federated learning simulation at the scale of foundation models. This library facilitates the creation of group-structured versions of existing datasets based on user-specified partitions, and directly leads to a variety of useful heterogeneous datasets that can be plugged into existing software frameworks. Dataset Grouper offers three key advantages. First, it scales to settings where even a single group's dataset is too large to fit in memory. Second, it provides flexibility, both in choosing the base (non-partitioned) dataset and in defining partitions. Finally, it is framework-agnostic. We empirically demonstrate that Dataset Grouper enables large-scale federated language modeling simulations on datasets that are orders of magnitude larger than in previous work, allowing for federated training of language models with hundreds of millions, and even billions, of parameters. Our experimental results show that algorithms like FedAvg operate more as meta-learning methods than as empirical risk minimization methods at this scale, suggesting their utility in downstream personalization and task-specific adaptation. Dataset Grouper is available at https://github.com/google-research/dataset_grouper.
Towards Federated Foundation Models: Scalable Dataset Pipelines for Group-Structured Learning
[ "Zachary Charles", "Nicole Elyse Mitchell", "Krishna Pillutla", "Michael Reneer", "Zachary Garrett" ]
Track/Datasets_and_Benchmarks
poster
2307.09619
[ "https://github.com/google-research/dataset_grouper" ]
-1
-1
-1
-1
[]
[]
[]
0
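The snippet below is not the Dataset Grouper API; it is only a plain-Python sketch of the underlying idea — streaming examples into groups keyed by a user-specified partition function, without ever materializing a whole group in memory.

```python
# NOT the Dataset Grouper API -- only a plain-Python sketch of the underlying
# idea: stream examples into groups keyed by a user-specified partition
# function, without materializing a whole group in memory.
from collections import defaultdict

def group_stream(examples, group_key):
    """Yield (group_id, example) pairs; a downstream writer can shard by group_id."""
    for example in examples:
        yield group_key(example), example

def examples_per_group(examples, group_key):
    counts = defaultdict(int)
    for group_id, _ in group_stream(examples, group_key):
        counts[group_id] += 1
    return dict(counts)

# e.g. partition a text corpus by author, giving one "client" per author
docs = [{"author": "a", "text": "..."}, {"author": "b", "text": "..."},
        {"author": "a", "text": "..."}]
print(examples_per_group(docs, group_key=lambda ex: ex["author"]))
```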
null
https://openreview.net/forum?id=EIydMrHBHP
@inproceedings{ hu2023ptadisc, title={{PTAD}isc: A Cross-Course Dataset Supporting Personalized Learning in Cold-Start Scenarios}, author={Liya Hu and Zhiang Dong and Jingyuan Chen and Guifeng Wang and Zhihua Wang and Zhou Zhao and Fei Wu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=EIydMrHBHP} }
The focus of our work is on diagnostic tasks in personalized learning, such as cognitive diagnosis and knowledge tracing. The goal of these tasks is to assess students' latent proficiency on knowledge concepts through analyzing their historical learning records. However, existing research has been limited to single-course scenarios; cross-course studies have not been explored due to a lack of dataset. We address this issue by constructing PTADisc, a Diverse, Immense, Student-centered dataset that emphasizes its sufficient Cross-course information for personalized learning. PTADisc includes 74 courses, 1,530,100 students, 4,054 concepts, 225,615 problems, and over 680 million student response logs. Based on PTADisc, we developed a model-agnostic Cross-Course Learner Modeling Framework (CCLMF) which utilizes relationships between students' proficiency across courses to alleviate the difficulty of diagnosing student knowledge state in cold-start scenarios. CCLMF uses a meta network to generate personalized mapping functions between courses. The experimental results on PTADisc verify the effectiveness of CCLMF with an average improvement of 4.2% on AUC. We also report the performance of baseline models for cognitive diagnosis and knowledge tracing over PTADisc, demonstrating that our dataset supports a wide scope of research in personalized learning. Additionally, PTADisc contains valuable programming logs and student-group information that are worth exploring in the future.
PTADisc: A Cross-Course Dataset Supporting Personalized Learning in Cold-Start Scenarios
[ "Liya Hu", "Zhiang Dong", "Jingyuan Chen", "Guifeng Wang", "Zhihua Wang", "Zhou Zhao", "Fei Wu" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=EFl8zjjXeX
@inproceedings{ wei2023ovparts, title={{OV}-{PARTS}: Towards Open-Vocabulary Part Segmentation}, author={Meng Wei and Xiaoyu Yue and Wenwei Zhang and Shu Kong and Xihui Liu and Jiangmiao Pang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=EFl8zjjXeX} }
Segmenting and recognizing diverse object parts is a crucial ability in applications spanning various computer vision and robotic tasks. While significant progress has been made in object-level Open-Vocabulary Semantic Segmentation (OVSS), i.e., segmenting objects with arbitrary text, the corresponding part-level research poses additional challenges. Firstly, part segmentation inherently involves intricate boundaries, while limited annotated data compounds the challenge. Secondly, part segmentation introduces an open granularity challenge due to the diverse and often ambiguous definitions of parts in the open world. Furthermore, the large-scale vision and language models, which play a key role in the open vocabulary setting, struggle to recognize parts as effectively as objects. To comprehensively investigate and tackle these challenges, we propose an Open-Vocabulary Part Segmentation (OV-PARTS) benchmark. OV-PARTS includes refined versions of two publicly available datasets: Pascal-Part-116 and ADE20K-Part-234. And it covers three specific tasks: Generalized Zero-Shot Part Segmentation, Cross-Dataset Part Segmentation, and Few-Shot Part Segmentation, providing insights into analogical reasoning, open granularity and few-shot adapting abilities of models. Moreover, we analyze and adapt two prevailing paradigms of existing object-level OVSS methods for OV-PARTS. Extensive experimental analysis is conducted to inspire future research in leveraging foundational models for OV-PARTS. The code and dataset are available at https://github.com/kellyiss/OV_PARTS.
OV-PARTS: Towards Open-Vocabulary Part Segmentation
[ "Meng Wei", "Xiaoyu Yue", "Wenwei Zhang", "Shu Kong", "Xihui Liu", "Jiangmiao Pang" ]
Track/Datasets_and_Benchmarks
poster
2310.05107
[ "https://github.com/openrobotlab/ov_parts" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=EBYZSRRzSE
@inproceedings{ liu2023video, title={Video Timeline Modeling For News Story Understanding}, author={Meng Liu and Mingda Zhang and Jialu Liu and Hanjun Dai and Ming-Hsuan Yang and Shuiwang Ji and Zheyun Feng and Boqing Gong}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=EBYZSRRzSE} }
In this paper, we present a novel problem, namely video timeline modeling. Our objective is to create a video-associated timeline from a set of videos related to a specific topic, thereby facilitating the content and structure understanding of the story being told. This problem has significant potential in various real-world applications, for instance, news story summarization. To bootstrap research in this area, we curate a realistic benchmark dataset, YouTube-News-Timeline, consisting of over $12$k timelines and $300$k YouTube news videos. Additionally, we propose a set of quantitative metrics to comprehensively evaluate and compare methodologies. With such a testbed, we further develop and benchmark several deep learning approaches to tackling this problem. We anticipate that this exploratory work will pave the way for further research in video timeline modeling. The assets are available via https://github.com/google-research/google-research/tree/master/video_timeline_modeling.
Video Timeline Modeling For News Story Understanding
[ "Meng Liu", "Mingda Zhang", "Jialu Liu", "Hanjun Dai", "Ming-Hsuan Yang", "Shuiwang Ji", "Zheyun Feng", "Boqing Gong" ]
Track/Datasets_and_Benchmarks
oral
2309.13446
[ "https://github.com/google-research/google-research" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=DSYuRMJnaY
@inproceedings{ suarez2023neural, title={Neural {MMO} 2.0: A Massively Multi-task Addition to Massively Multi-agent Learning}, author={Joseph Suarez and David Bloomin and Kyoung Whan Choe and Hao Xiang Li and Ryan Sullivan and Nishaanth Kanna Ravichandran and Daniel Scott and Rose S Shuman and Herbie Bradley and Louis Castricato and Phillip Isola and Kirsty You and Yuhao Jiang and Qimai Li and Jiaxin Chen and Xiaolong Zhu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=DSYuRMJnaY} }
Neural MMO 2.0 is a massively multi-agent and multi-task environment for reinforcement learning research. This version features a novel task-system that broadens the range of training settings and poses a new challenge in generalization: evaluation on and against tasks, maps, and opponents never seen during training. Maps are procedurally generated with 128 agents in the standard setting and 1-1024 supported overall. Version 2.0 is a complete rewrite of its predecessor with three-fold improved performance, effectively addressing simulation bottlenecks in online training. Enhancements to compatibility enable training with standard reinforcement learning frameworks designed for much simpler environments. Neural MMO 2.0 is free and open-source with comprehensive documentation available at neuralmmo.github.io and an active community Discord. To spark initial research on this new platform, we are concurrently running a competition at NeurIPS 2023.
Neural MMO 2.0: A Massively Multi-task Addition to Massively Multi-agent Learning
[ "Joseph Suarez", "David Bloomin", "Kyoung Whan Choe", "Hao Xiang Li", "Ryan Sullivan", "Nishaanth Kanna Ravichandran", "Daniel Scott", "Rose S Shuman", "Herbie Bradley", "Louis Castricato", "Phillip Isola", "Kirsty You", "Yuhao Jiang", "Qimai Li", "Jiaxin Chen", "Xiaolong Zhu" ]
Track/Datasets_and_Benchmarks
poster
2311.03736
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=DIeZu6nqvo
@inproceedings{ tang2023egotracks, title={EgoTracks: A Long-term Egocentric Visual Object Tracking Dataset}, author={Hao Tang and Kevin J Liang and Kristen Grauman and Matt Feiszli and Weiyao Wang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=DIeZu6nqvo} }
Visual object tracking is a key component to many egocentric vision problems. However, the full spectrum of challenges of egocentric tracking faced by an embodied AI is underrepresented in many existing datasets; these tend to focus on relatively short, third-person videos. Egocentric video has several distinguishing characteristics from those commonly found in past datasets: frequent large camera motions and hand interactions with objects commonly lead to occlusions or objects exiting the frame, and object appearance can change rapidly due to widely different points of view, scale, or object states. Embodied tracking is also naturally long-term, and being able to consistently (re-)associate objects to their appearances and disappearances over as long as a lifetime is critical. Previous datasets under-emphasize this re-detection problem, and their "framed" nature has led to adoption of various spatiotemporal priors that we find do not necessarily generalize to egocentric video. We thus introduce EgoTracks, a new dataset for long-term egocentric visual object tracking. Sourced from the Ego4D dataset, this new dataset presents a significant challenge to recent state-of-the-art single-object tracking models, which we find score poorly on traditional tracking metrics for our new dataset, compared to popular benchmarks. We further show improvements that can be made to a STARK tracker to significantly increase its performance on egocentric data, resulting in a baseline model we call EgoSTARK. We publicly release our annotations and benchmark, hoping our dataset leads to further advancements in tracking.
EgoTracks: A Long-term Egocentric Visual Object Tracking Dataset
[ "Hao Tang", "Kevin J Liang", "Kristen Grauman", "Matt Feiszli", "Weiyao Wang" ]
Track/Datasets_and_Benchmarks
poster
2301.03213
[ "https://github.com/EGO4D/episodic-memory/tree/main/EgoTracks" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=DILUIcDmU9
@inproceedings{ zheng2023havid, title={{HA}-ViD: A Human Assembly Video Dataset for Comprehensive Assembly Knowledge Understanding}, author={Hao Zheng and Regina Lee and Yuqian Lu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=DILUIcDmU9} }
Understanding comprehensive assembly knowledge from videos is critical for futuristic ultra-intelligent industry. To enable technological breakthrough, we present HA-ViD – the first human assembly video dataset that features representative industrial assembly scenarios, natural procedural knowledge acquisition process, and consistent human-robot shared annotations. Specifically, HA-ViD captures diverse collaboration patterns of real-world assembly, natural human behaviors and learning progression during assembly, and provides fine-grained action annotations covering subject, action verb, manipulated object, target object, and tool. We provide 3,222 multi-view, multi-modality videos, 1.5M frames, 96K temporal labels and 2M spatial labels. We benchmark four foundational video understanding tasks: action recognition, action segmentation, object detection and multi-object tracking. Importantly, we analyze their performance and the further reasoning steps for comprehending knowledge in assembly progress, process efficiency, task collaboration, skill parameters and human intention. Details of HA-ViD are available at: https://iai-hrc.github.io/ha-vid.
HA-ViD: A Human Assembly Video Dataset for Comprehensive Assembly Knowledge Understanding
[ "Hao Zheng", "Regina Lee", "Yuqian Lu" ]
Track/Datasets_and_Benchmarks
poster
2307.05721
[ "https://github.com/iai-hrc/ha-vid" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=D1MOK2t2t2
@inproceedings{ milani2023bedd, title={{BEDD}: The Mine{RL} {BASALT} Evaluation and Demonstrations Dataset for Training and Benchmarking Agents that Solve Fuzzy Tasks}, author={Stephanie Milani and Anssi Kanervisto and Karolis Ramanauskas and Sander V Schulhoff and Brandon Houghton and Rohin Shah}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=D1MOK2t2t2} }
The MineRL BASALT competition has served to catalyze advances in learning from human feedback through four hard-to-specify tasks in Minecraft, such as create and photograph a waterfall. Given the completion of two years of BASALT competitions, we offer to the community a formalized benchmark through the BASALT Evaluation and Demonstrations Dataset (BEDD), which serves as a resource for algorithm development and performance assessment. BEDD consists of a collection of 26 million image-action pairs from nearly 14,000 videos of human players completing the BASALT tasks in Minecraft. It also includes over 3,000 dense pairwise human evaluations of human and algorithmic agents. These comparisons serve as a fixed, preliminary leaderboard for evaluating newly-developed algorithms. To enable this comparison, we present a streamlined codebase for benchmarking new algorithms against the leaderboard. In addition to presenting these datasets, we conduct a detailed analysis of the data from both datasets to guide algorithm development and evaluation. The released code and data are available at https://github.com/minerllabs/basalt-benchmark.
BEDD: The MineRL BASALT Evaluation and Demonstrations Dataset for Training and Benchmarking Agents that Solve Fuzzy Tasks
[ "Stephanie Milani", "Anssi Kanervisto", "Karolis Ramanauskas", "Sander V Schulhoff", "Brandon Houghton", "Rohin Shah" ]
Track/Datasets_and_Benchmarks
oral
2312.02405
[ "https://github.com/minerllabs/basalt-benchmark" ]
https://huggingface.co/papers/2312.02405
0
1
0
6
[]
[]
[]
1
null
https://openreview.net/forum?id=CsXC6IcdwI
@inproceedings{ wornow2023ehrshot, title={{EHRSHOT}: An {EHR} Benchmark for Few-Shot Evaluation of Foundation Models}, author={Michael Wornow and Rahul Thapa and Ethan Steinberg and Jason Alan Fries and Nigam Shah}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=CsXC6IcdwI} }
While the general machine learning (ML) community has benefited from public datasets, tasks, and models, the progress of ML in healthcare has been hampered by a lack of such shared assets. The success of foundation models creates new challenges for healthcare ML by requiring access to shared pretrained models to validate performance benefits. We help address these challenges through three contributions. First, we publish a new dataset, EHRSHOT, which contains de-identified structured data from the electronic health records (EHRs) of 6,739 patients from Stanford Medicine. Unlike MIMIC-III/IV and other popular EHR datasets, EHRSHOT is longitudinal and not restricted to ICU/ED patients. Second, we publish the weights of CLMBR-T-base, a 141M parameter clinical foundation model pretrained on the structured EHR data of 2.57M patients. We are one of the first to fully release such a model for coded EHR data; in contrast, most prior models released for clinical data (e.g. GatorTron, ClinicalBERT) only work with unstructured text and cannot process the rich, structured data within an EHR. We provide an end-to-end pipeline for the community to validate and build upon its performance. Third, we define 15 few-shot clinical prediction tasks, enabling evaluation of foundation models on benefits such as sample efficiency and task adaptation. Our model, dataset, and code are available here: https://ehrshot.stanford.edu/
EHRSHOT: An EHR Benchmark for Few-Shot Evaluation of Foundation Models
[ "Michael Wornow", "Rahul Thapa", "Ethan Steinberg", "Jason Alan Fries", "Nigam Shah" ]
Track/Datasets_and_Benchmarks
oral
2307.02028
[ "https://github.com/som-shahlab/ehrshot-benchmark" ]
https://huggingface.co/papers/2307.02028
3
3
0
5
[ "StanfordShahLab/clmbr-t-base", "StanfordShahLab/clmbr-t-base-random" ]
[]
[]
1
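The EHRSHOT record above lists two released checkpoints in its Models field (StanfordShahLab/clmbr-t-base and StanfordShahLab/clmbr-t-base-random). The sketch below only shows fetching those published weights with the generic huggingface_hub client; it is not the authors' end-to-end pipeline, and the repo ID is taken directly from the record.

```python
# Minimal sketch: download the CLMBR-T-base weights named in this record's
# Models field. Uses only the standard huggingface_hub API; the downstream
# EHRSHOT benchmarking pipeline itself is not shown here.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="StanfordShahLab/clmbr-t-base")
print("Checkpoint files downloaded to:", local_dir)
```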
null
https://openreview.net/forum?id=CpFFRtxcbz
@inproceedings{ xiang2023caremi, title={{CARE}-{MI}: Chinese Benchmark for Misinformation Evaluation in Maternity and Infant Care}, author={Tong Xiang and Liangzhi Li and Wangyue Li and Mingbai Bai and Lu Wei and Bowen Wang and Noa Garcia}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=CpFFRtxcbz} }
The recent advances in natural language processing (NLP) have led to a new trend of applying large language models (LLMs) to real-world scenarios. While the latest LLMs are astonishingly fluent when interacting with humans, they suffer from the misinformation problem by unintentionally generating factually false statements. This can lead to harmful consequences, especially when produced within sensitive contexts, such as healthcare. Yet few previous works have focused on evaluating misinformation in the long-form (LF) generation of LLMs, especially for knowledge-intensive topics. Moreover, although LLMs have been shown to perform well in different languages, misinformation evaluation has been mostly conducted in English. To this end, we present a benchmark, CARE-MI, for evaluating LLM misinformation in: 1) a sensitive topic, specifically the maternity and infant care domain; and 2) a language other than English, namely Chinese. Most importantly, we provide an innovative paradigm for building LF generation evaluation benchmarks that can be transferred to other knowledge-intensive domains and low-resourced languages. Our proposed benchmark fills the gap between the extensive usage of LLMs and the lack of datasets for assessing the misinformation generated by these models. It contains 1,612 expert-checked questions, accompanied by human-selected references. Using our benchmark, we conduct extensive experiments and find that current Chinese LLMs are far from perfect in the topic of maternity and infant care. In an effort to minimize the reliance on human resources for performance evaluation, we offer off-the-shelf judgment models for automatically assessing the LF output of LLMs given benchmark questions. Moreover, we compare potential solutions for LF generation evaluation and provide insights for building better automated metrics.
CARE-MI: Chinese Benchmark for Misinformation Evaluation in Maternity and Infant Care
null
Track/Datasets_and_Benchmarks
poster
2307.01458
[ "https://github.com/meetyou-ai-lab/care-mi" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=CjVdXey4zT
@inproceedings{ mcelfresh2023when, title={When Do Neural Nets Outperform Boosted Trees on Tabular Data?}, author={Duncan C. McElfresh and Sujay Khandagale and Jonathan Valverde and Vishak Prasad C and Ganesh Ramakrishnan and Micah Goldblum and Colin White}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=CjVdXey4zT} }
Tabular data is one of the most commonly used types of data in machine learning. Despite recent advances in neural nets (NNs) for tabular data, there is still an active discussion on whether or not NNs generally outperform gradient-boosted decision trees (GBDTs) on tabular data, with several recent works arguing either that GBDTs consistently outperform NNs on tabular data, or vice versa. In this work, we take a step back and question the importance of this debate. To this end, we conduct the largest tabular data analysis to date, comparing 19 algorithms across 176 datasets, and we find that the 'NN vs. GBDT' debate is overemphasized: for a surprisingly high number of datasets, either the performance difference between GBDTs and NNs is negligible, or light hyperparameter tuning on a GBDT is more important than choosing between NNs and GBDTs. Next, we analyze dozens of metafeatures to determine what \emph{properties} of a dataset make NNs or GBDTs better-suited to perform well. For example, we find that GBDTs are much better than NNs at handling skewed or heavy-tailed feature distributions and other forms of dataset irregularities. Our insights act as a guide for practitioners to determine which techniques may work best on their dataset. Finally, with the goal of accelerating tabular data research, we release the TabZilla Benchmark Suite: a collection of the 36 'hardest' of the datasets we study. Our benchmark suite, codebase, and all raw results are available at https://github.com/naszilla/tabzilla.
When Do Neural Nets Outperform Boosted Trees on Tabular Data?
[ "Duncan C. McElfresh", "Sujay Khandagale", "Jonathan Valverde", "Vishak Prasad C", "Ganesh Ramakrishnan", "Micah Goldblum", "Colin White" ]
Track/Datasets_and_Benchmarks
poster
2305.02997
[ "https://github.com/naszilla/tabzilla" ]
-1
-1
-1
-1
[]
[]
[]
0
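As a toy illustration of the comparison described in the record above (a lightly tuned GBDT versus an untuned neural net on tabular data), the following sketch uses only scikit-learn on a small built-in dataset. It is not the paper's TabZilla protocol of 19 algorithms across 176 datasets; the dataset, models, and hyperparameter grid are illustrative choices.

```python
# Minimal sketch (not the TabZilla codebase): the kind of GBDT-vs-NN
# comparison the abstract describes, on a small sklearn tabular dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline NN (with scaled inputs) vs. a GBDT given light hyperparameter tuning.
nn = make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0))
gbdt = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    {"learning_rate": [0.05, 0.1], "n_estimators": [100, 300]},
    cv=3,
)

for name, model in [("NN", nn), ("tuned GBDT", gbdt)]:
    model.fit(X_tr, y_tr)
    print(name, "test accuracy:", model.score(X_te, y_te))
```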
null
https://openreview.net/forum?id=CiRHWaRbp0
@inproceedings{ stimberg2023benchmarking, title={Benchmarking Robustness to Adversarial Image Obfuscations}, author={Florian Stimberg and Ayan Chakrabarti and Chun-Ta Lu and Hussein Hazimeh and Otilia Stretcu and Wei Qiao and Yintao Liu and Merve Kaya and Cyrus Rashtchian and Ariel Fuxman and Mehmet Nejat Tek and Sven Gowal}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=CiRHWaRbp0} }
Automated content filtering and moderation is an important tool that allows online platforms to build thriving user communities that facilitate cooperation and prevent abuse. Unfortunately, resourceful actors try to bypass automated filters in a bid to post content that violates platform policies and codes of conduct. To reach this goal, these malicious actors may obfuscate policy-violating images (e.g., overlay harmful images by carefully selected benign images or visual patterns) to prevent machine learning models from reaching the correct decision. In this paper, we invite researchers to tackle this specific issue and present a new image benchmark. This benchmark, based on ImageNet, simulates the type of obfuscations created by malicious actors. It goes beyond ImageNet-C and ImageNet-C-bar by proposing general, drastic, adversarial modifications that preserve the original content intent. It aims to tackle a more common adversarial threat than the one considered by lp-norm bounded adversaries. We evaluate 33 pretrained models on the benchmark and train models with different augmentations, architectures and training methods on subsets of the obfuscations to measure generalization. Our hope is that this benchmark will encourage researchers to test their models and methods and try to find new approaches that are more robust to these obfuscations.
Benchmarking Robustness to Adversarial Image Obfuscations
[ "Florian Stimberg", "Ayan Chakrabarti", "Chun-Ta Lu", "Hussein Hazimeh", "Otilia Stretcu", "Wei Qiao", "Yintao Liu", "Merve Kaya", "Cyrus Rashtchian", "Ariel Fuxman", "Mehmet Nejat Tek", "Sven Gowal" ]
Track/Datasets_and_Benchmarks
poster
2301.12993
[ "https://github.com/deepmind/image_obfuscation_benchmark" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=Cf2c9Pk9yF
@inproceedings{ taesiri2023imagenethard, title={ImageNet-Hard: The Hardest Images Remaining from a Study of the Power of Zoom and Spatial Biases in Image Classification}, author={Mohammad Reza Taesiri and Giang Nguyen and Sarra Habchi and Cor-Paul Bezemer and Anh Nguyen}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=Cf2c9Pk9yF} }
Image classifiers are information-discarding machines, by design. Yet, how these models discard information remains mysterious. We hypothesize that one way for image classifiers to reach high accuracy is to first zoom to the most discriminative region in the image and then extract features from there to predict image labels, discarding the rest of the image. Studying six popular networks ranging from AlexNet to CLIP, we find that proper framing of the input image can lead to the correct classification of 98.91% of ImageNet images. Furthermore, we uncover positional biases in various datasets, especially a strong center bias in two popular datasets: ImageNet-A and ObjectNet. Finally, leveraging our insights into the potential of zooming, we propose a test-time augmentation (TTA) technique that improves classification accuracy by forcing models to explicitly perform zoom-in operations before making predictions. Our method is more interpretable, accurate, and faster than MEMO, a state-of-the-art (SOTA) TTA method. We introduce ImageNet-Hard, a new benchmark that challenges SOTA classifiers including large vision-language models even when optimal zooming is allowed.
ImageNet-Hard: The Hardest Images Remaining from a Study of the Power of Zoom and Spatial Biases in Image Classification
[ "Mohammad Reza Taesiri", "Giang Nguyen", "Sarra Habchi", "Cor-Paul Bezemer", "Anh Nguyen" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
https://huggingface.co/papers/2304.05538
4
2
0
5
[]
[ "taesiri/imagenet-hard", "taesiri/imagenet-hard-4K" ]
[]
1
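The Datasets field of the record above points to taesiri/imagenet-hard on the Hugging Face Hub. Below is a minimal sketch of pulling it with the datasets library and inspecting its structure; split and column names are not assumed here and should be checked against the Hub card before running any classifier on it.

```python
# Minimal sketch: load the ImageNet-Hard benchmark referenced in this record's
# Datasets field and inspect its splits and features. Note that this triggers
# a full download of the dataset files.
from datasets import load_dataset

imagenet_hard = load_dataset("taesiri/imagenet-hard")
print(imagenet_hard)                 # shows the available splits and columns
first_split = next(iter(imagenet_hard.values()))
print(first_split.features)          # e.g. image and label fields
```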
null
https://openreview.net/forum?id=CSJYz1Zovj
@inproceedings{ scepanovic2023medsat, title={MedSat: A Public Health Dataset for England Featuring Medical Prescriptions and Satellite Imagery}, author={Sanja Scepanovic and Ivica Obadic and Sagar Joglekar and Laura GIUSTARINI and Cristiano Nattero and Daniele Quercia and Xiao Xiang Zhu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=CSJYz1Zovj} }
As extreme weather events become more frequent, understanding their impact on human health becomes increasingly crucial. However, the utilization of Earth Observation to effectively analyze the environmental context in relation to health remains limited. This limitation is primarily due to the lack of fine-grained spatial and temporal data in public and population health studies, hindering a comprehensive understanding of health outcomes. Additionally, obtaining appropriate environmental indices across different geographical levels and timeframes poses a challenge. For the years 2019 (pre-COVID) and 2020 (COVID), we collected spatio-temporal indicators for all Lower Layer Super Output Areas in England. These indicators included: i) 111 sociodemographic features linked to health in existing literature, ii) 43 environmental point features (e.g., greenery and air pollution levels), iii) 4 seasonal composite satellite images each with 11 bands, and iv) prescription prevalence associated with five medical conditions (depression, anxiety, diabetes, hypertension, and asthma), opioids and total prescriptions. We combined these indicators into a single MedSat dataset, the availability of which presents an opportunity for the machine learning community to develop new techniques specific to public health. These techniques would address challenges such as handling large and complex data volumes, performing effective feature engineering on environmental and sociodemographic factors, capturing spatial and temporal dependencies in the models, addressing imbalanced data distributions, developing novel computer vision methods for health modeling based on satellite imagery, ensuring model explainability, and achieving generalization beyond the specific geographical region.
MedSat: A Public Health Dataset for England Featuring Medical Prescriptions and Satellite Imagery
[ "Sanja Scepanovic", "Ivica Obadic", "Sagar Joglekar", "Laura GIUSTARINI", "Cristiano Nattero", "Daniele Quercia", "Xiao Xiang Zhu" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=CG0L2PFrb1
@inproceedings{ tirumala2023d, title={D4: Improving {LLM} Pretraining via Document De-Duplication and Diversification}, author={Kushal Tirumala and Daniel Simig and Armen Aghajanyan and Ari S. Morcos}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=CG0L2PFrb1} }
Over recent years, an increasing amount of compute and data has been poured into training large language models (LLMs), usually by doing one-pass learning on as many tokens as possible randomly selected from large-scale web corpora. While training on ever-larger portions of the internet leads to consistent performance improvements, the size of these improvements diminishes with scale, and there has been little work exploring the effect of data selection on pre-training and downstream performance beyond simple de-duplication methods such as MinHash. Here, we show that careful data selection (on top of de-duplicated data) via pre-trained model embeddings can speed up training (20% efficiency gains) and improve average downstream accuracy on 16 NLP tasks (up to 2%) at the 6.7B model scale. Furthermore, we show that repeating data intelligently consistently outperforms baseline training (while repeating random data performs worse than baseline training). Our results indicate that clever data selection can significantly improve LLM pre-training, call into question the common practice of training for a single epoch on as much data as possible, and demonstrate a path to keep improving our models past the limits of randomly sampling web data.
D4: Improving LLM Pretraining via Document De-Duplication and Diversification
null
Track/Datasets_and_Benchmarks
poster
2308.12284
[ "" ]
https://huggingface.co/papers/2308.12284
2
0
0
4
[]
[]
[]
1
null
https://openreview.net/forum?id=C0zw2ERKiQ
@inproceedings{ yang2023revisiting, title={Revisiting the Evaluation of Image Synthesis with {GAN}s}, author={Mengping Yang and Ceyuan Yang and Yichi Zhang and Qingyan Bai and Yujun Shen and Bo Dai}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=C0zw2ERKiQ} }
A good metric, which promises a reliable comparison between solutions, is essential for any well-defined task. Unlike most vision tasks that have per-sample ground-truth, image synthesis tasks target generating unseen data and hence are usually evaluated through a distributional distance between one set of real samples and another set of generated samples. This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models. In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set. Extensive experiments conducted on multiple datasets and settings reveal several important findings. Firstly, a group of models that include both CNN-based and ViT-based architectures serve as reliable and robust feature extractors for measurement evaluation. Secondly, Centered Kernel Alignment (CKA) provides a better comparison across various extractors and hierarchical layers in one model. Finally, CKA is more sample-efficient and enjoys better agreement with human judgment in characterizing the similarity between two internal data correlations. These findings contribute to the development of a new measurement system, which enables a consistent and reliable re-evaluation of current state-of-the-art generative models.
Revisiting the Evaluation of Image Synthesis with GANs
[ "Mengping Yang", "Ceyuan Yang", "Yichi Zhang", "Qingyan Bai", "Yujun Shen", "Bo Dai" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
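The record above recommends Centered Kernel Alignment (CKA) as a measure for comparing feature representations. The snippet below is a generic linear-CKA implementation on synthetic features, included only to make the quantity concrete; it is a textbook formulation, not the paper's measurement system.

```python
# Minimal sketch of linear Centered Kernel Alignment (CKA) between two
# feature matrices computed on the same set of samples.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """X, Y: (n_samples, n_features) feature matrices for the same samples."""
    X = X - X.mean(axis=0, keepdims=True)   # center each feature dimension
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return float(hsic / (norm_x * norm_y))

rng = np.random.default_rng(0)
feats_real = rng.normal(size=(256, 128))                    # e.g. features of real images
feats_fake = feats_real + 0.1 * rng.normal(size=(256, 128)) # slightly perturbed features
print(linear_cka(feats_real, feats_fake))                   # close to 1 for similar representations
```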
null
https://openreview.net/forum?id=BgY17iEnTb
@inproceedings{ tsuruta2023avidahil, title={{AVID}a-h{IL}6: A Large-Scale {VHH} Dataset Produced from an Immunized Alpaca for Predicting Antigen-Antibody Interactions}, author={Hirofumi Tsuruta and Hiroyuki Yamazaki and Ryota Maeda and Ryotaro Tamura and Jennifer N. Wei and Zelda E Mariet and Poomarin Phloyphisut and Hidetoshi Shimokawa and Joseph R. Ledsam and Lucy J Colwell and Akihiro Imura}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=BgY17iEnTb} }
Antibodies have become an important class of therapeutic agents to treat human diseases. To accelerate therapeutic antibody discovery, computational methods, especially machine learning, have attracted considerable interest for predicting specific interactions between antibody candidates and target antigens such as viruses and bacteria. However, the publicly available datasets in existing works have notable limitations, such as small sizes and the lack of non-binding samples and exact amino acid sequences. To overcome these limitations, we have developed AVIDa-hIL6, a large-scale dataset for predicting antigen-antibody interactions in the variable domain of heavy chain of heavy chain antibodies (VHHs), produced from an alpaca immunized with the human interleukin-6 (IL-6) protein, as antigens. By leveraging the simple structure of VHHs, which facilitates identification of full-length amino acid sequences by DNA sequencing technology, AVIDa-hIL6 contains 573,891 antigen-VHH pairs with amino acid sequences. All the antigen-VHH pairs have reliable labels for binding or non-binding, as generated by a novel labeling method. Furthermore, via introduction of artificial mutations, AVIDa-hIL6 contains 30 different mutants in addition to wild-type IL-6 protein. This characteristic provides opportunities to develop machine learning models for predicting changes in antibody binding by antigen mutations. We report experimental benchmark results on AVIDa-hIL6 by using machine learning models. The results indicate that the existing models have potential, but further research is needed to generalize them to predict effective antibodies against unknown mutants. The dataset is available at https://avida-hil6.cognanous.com.
AVIDa-hIL6: A Large-Scale VHH Dataset Produced from an Immunized Alpaca for Predicting Antigen-Antibody Interactions
[ "Hirofumi Tsuruta", "Hiroyuki Yamazaki", "Ryota Maeda", "Ryotaro Tamura", "Jennifer N. Wei", "Zelda E Mariet", "Poomarin Phloyphisut", "Hidetoshi Shimokawa", "Joseph R. Ledsam", "Lucy J Colwell", "Akihiro Imura" ]
Track/Datasets_and_Benchmarks
poster
2306.03329
[ "https://github.com/cognano/avida-hil6" ]
https://huggingface.co/papers/2306.03329
0
0
0
11
[]
[ "alchemab/il6-binding-prediction" ]
[]
1
null
https://openreview.net/forum?id=BR1m3JIoKm
@inproceedings{ kazemi2023boardgameqa, title={Boardgame{QA}: A Dataset for Natural Language Reasoning with Contradictory Information}, author={Mehran Kazemi and Quan Yuan and Deepti Bhatia and Najoung Kim and Xin Xu and Vaiva Imbrasaite and Deepak Ramachandran}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=BR1m3JIoKm} }
Automated reasoning with unstructured natural text is a key requirement for many potential applications of NLP and for developing robust AI systems. Recently, Language Models (LMs) have demonstrated complex reasoning capacities even without any finetuning. However, existing evaluation for automated reasoning assumes access to a consistent and coherent set of information over which models reason. When reasoning in the real-world, the available information is frequently inconsistent or contradictory, and therefore models need to be equipped with a strategy to resolve such conflicts when they arise. One widely-applicable way of resolving conflicts is to impose preferences over information sources (e.g., based on source credibility or information recency) and adopt the source with higher preference. In this paper, we formulate the problem of reasoning with contradictory information guided by preferences over sources as the classical problem of defeasible reasoning, and develop a dataset called BoardgameQA for measuring the reasoning capacity of LMs in this setting. BoardgameQA also incorporates reasoning with implicit background knowledge, to better reflect reasoning problems in downstream applications. We benchmark various LMs on BoardgameQA and the results reveal a significant gap in the reasoning capacity of state-of-the-art LMs on this problem, showing that reasoning with conflicting information does not surface out-of-the-box in LMs. While performance can be improved with finetuning, it nevertheless remains poor.
BoardgameQA: A Dataset for Natural Language Reasoning with Contradictory Information
[ "Mehran Kazemi", "Quan Yuan", "Deepti Bhatia", "Najoung Kim", "Xin Xu", "Vaiva Imbrasaite", "Deepak Ramachandran" ]
Track/Datasets_and_Benchmarks
poster
2306.07934
[ "" ]
https://huggingface.co/papers/2306.07934
1
0
0
7
[]
[ "tasksource/Boardgame-QA" ]
[]
1
null
https://openreview.net/forum?id=BNwsJ4bFsc
@inproceedings{ hall2023visogender, title={VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution}, author={Siobhan Mackenzie Hall and Fernanda Gon{\c{c}}alves Abrantes and Hanwen Zhu and Grace Sodunke and Aleksandar Shtedritski and Hannah Rose Kirk}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=BNwsJ4bFsc} }
We introduce VisoGender, a novel dataset for benchmarking gender bias in vision-language models. We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas, where each image is associated with a caption containing a pronoun relationship of subjects and objects in the scene. VisoGender is balanced by gender representation in professional roles, supporting bias evaluation in two ways: i) resolution bias, where we evaluate the difference between pronoun resolution accuracies for image subjects with gender presentations perceived as masculine versus feminine by human annotators and ii) retrieval bias, where we compare ratios of professionals perceived to have masculine and feminine gender presentations retrieved for a gender-neutral search query. We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes. While the direction and magnitude of gender bias depends on the task and the model being evaluated, captioning models are generally less biased than Vision-Language Encoders.
VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution
[ "Siobhan Mackenzie Hall", "Fernanda Gonçalves Abrantes", "Hanwen Zhu", "Grace Sodunke", "Aleksandar Shtedritski", "Hannah Rose Kirk" ]
Track/Datasets_and_Benchmarks
poster
[ "https://github.com/oxai/visogender" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=AwhpBEqmyo
@inproceedings{ bugliarello2023storybench, title={StoryBench: A Multifaceted Benchmark for Continuous Story Visualization}, author={Emanuele Bugliarello and Hernan Moraldo and Ruben Villegas and Mohammad Babaeizadeh and Mohammad Taghi Saffar and Han Zhang and Dumitru Erhan and Vittorio Ferrari and Pieter-Jan Kindermans and Paul Voigtlaender}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=AwhpBEqmyo} }
Generating video stories from text prompts is a complex task. In addition to having high visual quality, videos need to realistically adhere to a sequence of text prompts whilst being consistent throughout the frames. Creating a benchmark for video generation requires data annotated over time, which contrasts with the single caption used often in video datasets. To fill this gap, we collect comprehensive human annotations on three existing datasets, and introduce StoryBench: a new, challenging multi-task benchmark to reliably evaluate forthcoming text-to-video models. Our benchmark includes three video generation tasks of increasing difficulty: action execution, where the next action must be generated starting from a conditioning video; story continuation, where a sequence of actions must be executed starting from a conditioning video; and story generation, where a video must be generated from only text prompts. We evaluate small yet strong text-to-video baselines, and show the benefits of training on story-like data algorithmically generated from existing video captions. Finally, we establish guidelines for human evaluation of video stories, and reaffirm the need of better automatic metrics for video generation. StoryBench aims at encouraging future research efforts in this exciting new area.
StoryBench: A Multifaceted Benchmark for Continuous Story Visualization
[ "Emanuele Bugliarello", "Hernan Moraldo", "Ruben Villegas", "Mohammad Babaeizadeh", "Mohammad Taghi Saffar", "Han Zhang", "Dumitru Erhan", "Vittorio Ferrari", "Pieter-Jan Kindermans", "Paul Voigtlaender" ]
Track/Datasets_and_Benchmarks
poster
2308.11606
[ "" ]
https://huggingface.co/papers/2308.11606
0
1
0
10
[]
[]
[]
1
null
https://openreview.net/forum?id=AvttCE8n3H
@inproceedings{ silcock2023a, title={A Massive Scale Semantic Similarity Dataset of Historical English}, author={Emily Silcock and Abhishek Arora and Melissa Dell}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=AvttCE8n3H} }
A diversity of tasks use language models trained on semantic similarity data. While there are a variety of datasets that capture semantic similarity, they are either constructed from modern web data or are relatively small datasets created in the past decade by human annotators. This study utilizes a novel source, newly digitized articles from off-copyright, local U.S. newspapers, to assemble a massive-scale semantic similarity dataset spanning 70 years from 1920 to 1989 and containing nearly 400M positive semantic similarity pairs. Historically, around half of articles in U.S. local newspapers came from newswires like the Associated Press. While local papers reproduced articles from the newswire, they wrote their own headlines, which form abstractive summaries of the associated articles. We associate articles and their headlines by exploiting document layouts and language understanding. We then use deep neural methods to detect which articles are from the same underlying source, in the presence of substantial noise and abridgement. The headlines of reproduced articles form positive semantic similarity pairs. The resulting publicly available HEADLINES dataset is significantly larger than most existing semantic similarity datasets and covers a much longer span of time. It will facilitate the application of contrastively trained semantic similarity models to a variety of tasks, including the study of semantic change across space and time.
A Massive Scale Semantic Similarity Dataset of Historical English
[ "Emily Silcock", "Abhishek Arora", "Melissa Dell" ]
Track/Datasets_and_Benchmarks
poster
2306.17810
[ "" ]
https://huggingface.co/papers/2306.17810
0
0
0
2
[]
[ "dell-research-harvard/headlines-semantic-similarity" ]
[]
1
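Because the HEADLINES corpus in the record above contains nearly 400M positive pairs, a streaming read is a reasonable way to inspect it without downloading everything first. The sketch below assumes a "train" split exists on the Hub for the dataset ID listed in the record's Datasets field; column names are not assumed and are revealed by printing a few rows.

```python
# Minimal sketch: stream a few HEADLINES semantic-similarity pairs from the
# Hub repo named in this record's Datasets field, without a full download.
from itertools import islice

from datasets import load_dataset

stream = load_dataset(
    "dell-research-harvard/headlines-semantic-similarity",
    split="train",            # assumption: a "train" split exists on the Hub
    streaming=True,
)
for example in islice(stream, 3):
    print(example)            # one positive pair of historical headlines per row
```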
null
https://openreview.net/forum?id=As4101fOG1
@inproceedings{ blumenstiel2023what, title={What a {MESS}: Multi-Domain Evaluation of Zero-Shot Semantic Segmentation}, author={Benedikt Blumenstiel and Johannes Jakubik and Hilde Kuehne and Michael V{\"o}ssing}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=As4101fOG1} }
While semantic segmentation has seen tremendous improvements in the past, there are still significant labeling efforts necessary and the problem of limited generalization to classes that have not been present during training. To address this problem, zero-shot semantic segmentation makes use of large self-supervised vision-language models, allowing zero-shot transfer to unseen classes. In this work, we build a benchmark for Multi-domain Evaluation of Zero-Shot Semantic Segmentation (MESS), which allows a holistic analysis of performance across a wide range of domain-specific datasets such as medicine, engineering, earth monitoring, biology, and agriculture. To do this, we reviewed 120 datasets, developed a taxonomy, and classified the datasets according to the developed taxonomy. We select a representative subset consisting of 22 datasets and propose it as the MESS benchmark. We evaluate eight recently published models on the proposed MESS benchmark and analyze characteristics for the performance of zero-shot transfer models. The toolkit is available at https://github.com/blumenstiel/MESS.
What a MESS: Multi-Domain Evaluation of Zero-Shot Semantic Segmentation
[ "Benedikt Blumenstiel", "Johannes Jakubik", "Hilde Kuehne", "Michael Vössing" ]
Track/Datasets_and_Benchmarks
poster
2306.15521
[ "https://github.com/blumenstiel/mess" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=ApqgcSnhjh
@inproceedings{ tian2023occd, title={Occ3D: A Large-Scale 3D Occupancy Prediction Benchmark for Autonomous Driving}, author={Xiaoyu Tian and Tao Jiang and Longfei Yun and Yucheng Mao and Huitong Yang and Yue Wang and Yilun Wang and Hang Zhao}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=ApqgcSnhjh} }
Robotic perception requires the modeling of both 3D geometry and semantics. Existing methods typically focus on estimating 3D bounding boxes, neglecting finer geometric details and struggling to handle general, out-of-vocabulary objects. 3D occupancy prediction, which estimates the detailed occupancy states and semantics of a scene, is an emerging task to overcome these limitations. To support 3D occupancy prediction, we develop a label generation pipeline that produces dense, visibility-aware labels for any given scene. This pipeline comprises three stages: voxel densification, occlusion reasoning, and image-guided voxel refinement. We establish two benchmarks, derived from the Waymo Open Dataset and the nuScenes Dataset, namely Occ3D-Waymo and Occ3D-nuScenes benchmarks. Furthermore, we provide an extensive analysis of the proposed dataset with various baseline models. Lastly, we propose a new model, dubbed Coarse-to-Fine Occupancy (CTF-Occ) network, which demonstrates superior performance on the Occ3D benchmarks. The code, data, and benchmarks are released at \url{https://tsinghua-mars-lab.github.io/Occ3D/}.
Occ3D: A Large-Scale 3D Occupancy Prediction Benchmark for Autonomous Driving
[ "Xiaoyu Tian", "Tao Jiang", "Longfei Yun", "Yucheng Mao", "Huitong Yang", "Yue Wang", "Yilun Wang", "Hang Zhao" ]
Track/Datasets_and_Benchmarks
poster
2304.14365
[ "https://github.com/Tsinghua-MARS-Lab/Occ3D" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=AIeeXKsspI
@inproceedings{ zhou2023youtubepd, title={YouTube{PD}: A Multimodal Benchmark for Parkinson{\textquoteright}s Disease Analysis}, author={Andy Zhou and Samuel Li and Pranav Sriram and Xiang Li and Jiahua Dong and Ansh Sharma and Yuanyi Zhong and Shirui Luo and Maria Jaromin and Volodymyr Kindratenko and Joerg Heintz and Christopher Zallek and Yu-Xiong Wang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=AIeeXKsspI} }
The healthcare and AI communities have witnessed a growing interest in the development of AI-assisted systems for automated diagnosis of Parkinson's Disease (PD), one of the most prevalent neurodegenerative disorders. However, the progress in this area has been significantly impeded by the absence of a unified, publicly available benchmark, which prevents comprehensive evaluation of existing PD analysis methods and the development of advanced models. This work overcomes these challenges by introducing YouTubePD -- the *first* publicly available multimodal benchmark designed for PD analysis. We crowd-source existing videos featured with PD from YouTube, exploit multimodal information including *in-the-wild* videos, audios, and facial landmarks across 200+ subject videos, and provide dense and diverse annotations from a clinical expert. Based on our benchmark, we propose three challenging and complementary tasks encompassing *both discriminative and generative* tasks, along with a comprehensive set of corresponding baselines. Experimental evaluation showcases the potential of modern deep learning and computer vision techniques, in particular the generalizability of the models developed on our YouTubePD to real-world clinical settings, while revealing their limitations. We hope that our work paves the way for future research in this direction.
YouTubePD: A Multimodal Benchmark for Parkinson’s Disease Analysis
[ "Andy Zhou", "Samuel Li", "Pranav Sriram", "Xiang Li", "Jiahua Dong", "Ansh Sharma", "Yuanyi Zhong", "Shirui Luo", "Maria Jaromin", "Volodymyr Kindratenko", "Joerg Heintz", "Christopher Zallek", "Yu-Xiong Wang" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=AA2uO0HHmr
@inproceedings{ rozumnyi2023estimating, title={Estimating Generic 3D Room Structures from 2D Annotations}, author={Denys Rozumnyi and Stefan Popov and Kevis-kokitsi Maninis and Matthias Nie{\ss}ner and Vittorio Ferrari}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=AA2uO0HHmr} }
Indoor rooms are among the most common use cases in 3D scene understanding. Current state-of-the-art methods for this task are driven by large annotated datasets. Room layouts are especially important, consisting of structural elements in 3D, such as wall, floor, and ceiling. However, they are difficult to annotate, especially on pure RGB video. We propose a novel method to produce generic 3D room layouts just from 2D segmentation masks, which are easy to annotate for humans. Based on these 2D annotations, we automatically reconstruct 3D plane equations for the structural elements and their spatial extent in the scene, and connect adjacent elements at the appropriate contact edges. We annotate and publicly release 2246 3D room layouts on the RealEstate10k dataset, containing YouTube videos. We demonstrate the high quality of these 3D layout annotations with extensive experiments.
Estimating Generic 3D Room Structures from 2D Annotations
[ "Denys Rozumnyi", "Stefan Popov", "Kevis-kokitsi Maninis", "Matthias Nießner", "Vittorio Ferrari" ]
Track/Datasets_and_Benchmarks
poster
2306.09077
[ "https://github.com/google-research/cad-estate" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=9lOVNw7guQ
@inproceedings{ thai2023lowshot, title={Low-shot Object Learning with Mutual Exclusivity Bias}, author={Ngoc Anh Thai and Ahmad Humayun and Stefan Stojanov and Zixuan Huang and Bikram Boote and James Matthew Rehg}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=9lOVNw7guQ} }
This paper introduces Low-shot Object Learning with Mutual Exclusivity Bias (LSME), the first computational framing of mutual exclusivity bias, a phenomenon commonly observed in infants during word learning. We provide a novel dataset, comprehensive baselines, and a SOTA method to enable the ML community to tackle this challenging learning task. The goal of LSME is to analyze an RGB image of a scene containing multiple objects and correctly associate a previously-unknown object instance with a provided category label. This association is then used to perform low-shot learning to test category generalization. We provide a data generation pipeline for the LSME problem and conduct a thorough analysis of the factors that contribute to its difficulty. Additionally, we evaluate the performance of multiple baselines, including state-of-the-art foundation models. Finally, we present a baseline approach that outperforms state-of-the-art models in terms of low-shot accuracy. Code and data are available at https://github.com/rehg-lab/LSME.
Low-shot Object Learning with Mutual Exclusivity Bias
[ "Ngoc Anh Thai", "Ahmad Humayun", "Stefan Stojanov", "Zixuan Huang", "Bikram Boote", "James Matthew Rehg" ]
Track/Datasets_and_Benchmarks
poster
2312.03533
[ "https://github.com/rehg-lab/lsme" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=9gkrbrFzZj
@inproceedings{ tan2023openstl, title={Open{STL}: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning}, author={Cheng Tan and Siyuan Li and Zhangyang Gao and Wenfei Guan and Zedong Wang and Zicheng Liu and Lirong Wu and Stan Z. Li}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=9gkrbrFzZj} }
Spatio-temporal predictive learning is a learning paradigm that enables models to learn spatial and temporal patterns by predicting future frames from given past frames in an unsupervised manner. Despite remarkable progress in recent years, a lack of systematic understanding persists due to the diverse settings, complex implementation, and difficult reproducibility. Without standardization, comparisons can be unfair and insights inconclusive. To address this dilemma, we propose OpenSTL, a comprehensive benchmark for spatio-temporal predictive learning that categorizes prevalent approaches into recurrent-based and recurrent-free models. OpenSTL provides a modular and extensible framework implementing various state-of-the-art methods. We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectory, human motion, driving scenes, traffic flow, and weather forecasting. Based on our observations, we provide a detailed analysis of how model architecture and dataset properties affect spatio-temporal predictive learning performance. Surprisingly, we find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models. Thus, we further extend the common MetaFormers to boost recurrent-free spatio-temporal predictive learning. We open-source the code and models at https://github.com/chengtan9907/OpenSTL.
OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning
[ "Cheng Tan", "Siyuan Li", "Zhangyang Gao", "Wenfei Guan", "Zedong Wang", "Zicheng Liu", "Lirong Wu", "Stan Z. Li" ]
Track/Datasets_and_Benchmarks
poster
2306.11249
[ "https://github.com/chengtan9907/OpenSTL" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=9gLnjw8DfA
@inproceedings{ kitamoto2023digital, title={Digital Typhoon: Long-term Satellite Image Dataset for the Spatio-Temporal Modeling of Tropical Cyclones}, author={Asanobu Kitamoto and Jared Hwang and Bastien Vuillod and Lucas Gautier and Yingtao Tian and Tarin Clanuwat}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=9gLnjw8DfA} }
This paper presents the official release of the Digital Typhoon dataset, the longest typhoon satellite image dataset for 40+ years aimed at benchmarking machine learning models for long-term spatio-temporal data. To build the dataset, we developed a workflow to create an infrared typhoon-centered image for cropping using Lambert azimuthal equal-area projection referring to the best track data. We also address data quality issues such as inter-satellite calibration to create a homogeneous dataset. To take advantage of the dataset, we organized machine learning tasks by the types and targets of inference, with other tasks for meteorological analysis, societal impact, and climate change. The benchmarking results on the analysis, forecasting, and reanalysis for the intensity suggest that the dataset is challenging for recent deep learning models, due to many choices that affect the performance of various models. This dataset reduces the barrier for machine learning researchers to meet large-scale real-world events called tropical cyclones and develop machine learning models that may contribute to advancing scientific knowledge on tropical cyclones as well as solving societal and sustainability issues such as disaster reduction and climate change. The dataset is publicly available at http://agora.ex.nii.ac.jp/digital-typhoon/dataset/ and https://github.com/kitamoto-lab/digital-typhoon/.
Digital Typhoon: Long-term Satellite Image Dataset for the Spatio-Temporal Modeling of Tropical Cyclones
[ "Asanobu Kitamoto", "Jared Hwang", "Bastien Vuillod", "Lucas Gautier", "Yingtao Tian", "Tarin Clanuwat" ]
Track/Datasets_and_Benchmarks
oral
2311.02665
[ "https://github.com/kitamoto-lab/digital-typhoon" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=9Z1cmO7S7o
@inproceedings{ mathiasen2023generating, title={Generating {QM}1B with Py{SCF}\$\_\{{\textbackslash}text\{{IPU}\}\}\$}, author={Alexander Mathiasen and Hatem Helal and Kerstin Klaeser and Paul Balanca and Josef Dean and Carlo Luschi and Dominique Beaini and Andrew W Fitzgibbon and Dominic Masters}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=9Z1cmO7S7o} }
The emergence of foundation models in Computer Vision and Natural Language Processing have resulted in immense progress on downstream tasks. This progress was enabled by datasets with billions of training examples. Similar benefits are yet to be unlocked for quantum chemistry, where the potential of deep learning is constrained by comparatively small datasets with 100k to 20M training examples. These datasets are limited in size because the labels are computed using the accurate (but computationally demanding) predictions of Density Functional Theory (DFT). Notably, prior DFT datasets were created using CPU supercomputers without leveraging hardware acceleration. In this paper, we take a first step towards utilising hardware accelerators by introducing the data generator PySCF$_{\text{IPU}}$ using Intelligence Processing Units (IPUs). This allows us to create the dataset QM1B with one billion training examples containing 9-11 heavy atoms. We demonstrate that a simple baseline neural network (SchNet 9M) improves its performance by simply increasing the amount of training data without additional inductive biases. To encourage future researchers to use QM1B responsibly, we highlight several limitations of QM1B and emphasise the low resolution of our DFT options, which also serves as motivation for even larger, more accurate datasets.
Generating QM1B with PySCF_IPU
[ "Alexander Mathiasen", "Hatem Helal", "Kerstin Klaeser", "Paul Balanca", "Josef Dean", "Carlo Luschi", "Dominique Beaini", "Andrew W Fitzgibbon", "Dominic Masters" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=9U8bqr8epr
@inproceedings{ corlatescu2023embersim, title={{EMBERS}im: A Large-Scale Databank for Boosting Similarity Search in Malware Analysis}, author={Dragos Georgian Corlatescu and Alexandru Dinu and Mihaela Gaman and Paul Sumedrea}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=9U8bqr8epr} }
In recent years there has been a shift from heuristics based malware detection towards machine learning, which proves to be more robust in the current heavily adversarial threat landscape. While we acknowledge machine learning to be better equipped to mine for patterns in the increasingly high amounts of similar-looking files, we also note a remarkable scarcity of the data available for similarity targeted research. Moreover, we observe that the focus in the few related works falls on quantifying similarity in malware, often overlooking the clean data. This one-sided quantification is especially dangerous in the context of detection bypass. We propose to address the deficiencies in the space of similarity research on binary files, starting from EMBER — one of the largest malware classification datasets. We enhance EMBER with similarity information as well as malware class tags, to enable further research in the similarity space. Our contribution is threefold: (1) we publish EMBERSim, an augmented version of EMBER, that includes similarity informed tags; (2) we enrich EMBERSim with automatically determined malware class tags using the open-source tool AVClass on VirusTotal data and (3) we describe and share the implementation for our class scoring technique and leaf similarity method.
EMBERSim: A Large-Scale Databank for Boosting Similarity Search in Malware Analysis
[ "Dragos Georgian Corlatescu", "Alexandru Dinu", "Mihaela Gaman", "Paul Sumedrea" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=9CKx9SsSSc
@inproceedings{ jiang2023adgym, title={{ADG}ym: Design Choices for Deep Anomaly Detection}, author={Minqi Jiang and Chaochuan Hou and Ao Zheng and Songqiao Han and Hailiang Huang and Qingsong Wen and Xiyang Hu and Yue Zhao}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=9CKx9SsSSc} }
Deep learning (DL) techniques have recently found success in anomaly detection (AD) across various fields such as finance, medical services, and cloud computing. However, most of the current research tends to view deep AD algorithms as a whole, without dissecting the contributions of individual design choices like loss functions and network architectures. This view tends to diminish the value of preliminary steps like data preprocessing, as more attention is given to newly designed loss functions, network architectures, and learning paradigms. In this paper, we aim to bridge this gap by asking two key questions: (i) Which design choices in deep AD methods are crucial for detecting anomalies? (ii) How can we automatically select the optimal design choices for a given AD dataset, instead of relying on generic, pre-existing solutions? To address these questions, we introduce ADGym, a platform specifically crafted for comprehensive evaluation and automatic selection of AD design elements in deep methods. Our extensive experiments reveal that relying solely on existing leading methods is not sufficient. In contrast, models developed using ADGym significantly surpass current state-of-the-art techniques.
ADGym: Design Choices for Deep Anomaly Detection
[ "Minqi Jiang", "Chaochuan Hou", "Ao Zheng", "Songqiao Han", "Hailiang Huang", "Qingsong Wen", "Xiyang Hu", "Yue Zhao" ]
Track/Datasets_and_Benchmarks
poster
2309.15376
[ "https://github.com/minqi824/adgym" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=8gDJXL652A
@inproceedings{ hu2023pairwise, title={Pairwise {GUI} Dataset Construction Between Android Phones and Tablets}, author={Han Hu and Haolan Zhan and Yujin Huang and Di Liu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=8gDJXL652A} }
In the current landscape of pervasive smartphones and tablets, apps frequently exist across both platforms. Although apps share most graphic user interfaces (GUIs) and functionalities across phones and tablets, developers often rebuild from scratch for tablet versions, escalating costs and squandering existing design resources. Researchers are attempting to collect data and employ deep learning in automated GUI development to enhance developers' productivity. There are currently several publicly accessible GUI page datasets for phones, but none for pairwise GUIs between phones and tablets. This poses a significant barrier to the employment of deep learning in automated GUI development. In this paper, we introduce the Papt dataset, a pioneering pairwise GUI dataset tailored for Android phones and tablets, encompassing 10,035 phone-tablet GUI page pairs sourced from 5,593 unique app pairs. We propose novel pairwise GUI collection approaches for constructing this dataset and delineate its advantages over currently prevailing datasets in the field. Through preliminary experiments on this dataset, we analyze the present challenges of utilizing deep learning in automated GUI development.
Pairwise GUI Dataset Construction Between Android Phones and Tablets
[ "Han Hu", "Haolan Zhan", "Yujin Huang", "Di Liu" ]
Track/Datasets_and_Benchmarks
poster
2310.04755
[ "https://github.com/huhangithub/papt" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=8bqjirgxQM
@inproceedings{ gandhi2023understanding, title={Understanding Social Reasoning in Language Models with Language Models}, author={Kanishk Gandhi and Jan-Philipp Fr{\"a}nken and Tobias Gerstenberg and Noah Goodman}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=8bqjirgxQM} }
As Large Language Models (LLMs) become increasingly integrated into our everyday lives, understanding their ability to comprehend human mental states becomes critical for ensuring effective interactions. However, despite the recent attempts to assess the Theory-of-Mind (ToM) reasoning capabilities of LLMs, the degree to which these models can align with human ToM remains a nuanced topic of exploration. This is primarily due to two distinct challenges: (1) the presence of inconsistent results from previous evaluations, and (2) concerns surrounding the validity of existing evaluation methodologies. To address these challenges, we present a novel framework for procedurally generating evaluations with LLMs by populating causal templates. Using our framework, we create a new social reasoning benchmark (BigToM) for LLMs which consists of 25 controls and 5,000 model-written evaluations. We find that human participants rate the quality of our benchmark higher than previous crowd-sourced evaluations and comparable to expert-written evaluations. Using BigToM, we evaluate the social reasoning capabilities of a variety of LLMs and compare model performances with human performance. Our results suggest that GPT4 has ToM capabilities that mirror human inference patterns, though less reliable, while other LLMs struggle.
Understanding Social Reasoning in Language Models with Language Models
[ "Kanishk Gandhi", "Jan-Philipp Fränken", "Tobias Gerstenberg", "Noah Goodman" ]
Track/Datasets_and_Benchmarks
oral
2306.15448
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
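The record above describes procedurally generating evaluation items by populating causal templates. The sketch below shows only the template-filling idea on a single hypothetical false-belief template; the template text, slot names, and fillers are illustrative assumptions and are not items from BigToM, which populates its templates with model-written content.

```python
# Minimal sketch of the "populate a template" idea: a hypothetical
# false-belief template with slots filled to produce a ToM-style test item.
import random

TEMPLATE = (
    "{agent} puts the {object} in the {container_a}. While {agent} is away, "
    "{other} moves the {object} to the {container_b}. Where will {agent} look "
    "for the {object}?"
)

def sample_item(rng: random.Random) -> str:
    # Slot fillers are placeholder examples, not drawn from the benchmark.
    slots = {
        "agent": rng.choice(["Noor", "Sam"]),
        "other": rng.choice(["Ava", "Liam"]),
        "object": rng.choice(["keys", "notebook"]),
        "container_a": "drawer",
        "container_b": "backpack",
    }
    return TEMPLATE.format(**slots)

print(sample_item(random.Random(0)))
```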
null
https://openreview.net/forum?id=8ZRAHNT7E9
@inproceedings{ toshev2023lagrangebench, title={LagrangeBench: A Lagrangian Fluid Mechanics Benchmarking Suite}, author={Artur Toshev and Gianluca Galletti and Fabian Fritz and Stefan Adami and Nikolaus A. Adams}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=8ZRAHNT7E9} }
Machine learning has been successfully applied to grid-based PDE modeling in various scientific applications. However, learned PDE solvers based on Lagrangian particle discretizations, which are the preferred approach to problems with free surfaces or complex physics, remain largely unexplored. We present LagrangeBench, the first benchmarking suite for Lagrangian particle problems, focusing on temporal coarse-graining. In particular, our contribution is: (a) seven new fluid mechanics datasets (four in 2D and three in 3D) generated with the Smoothed Particle Hydrodynamics (SPH) method including the Taylor-Green vortex, lid-driven cavity, reverse Poiseuille flow, and dam break, each of which includes different physics like solid wall interactions or free surface, (b) efficient JAX-based API with various recent training strategies and three neighbor search routines, and (c) JAX implementation of established Graph Neural Networks (GNNs) like GNS and SEGNN with baseline results. Finally, to measure the performance of learned surrogates we go beyond established position errors and introduce physical metrics like kinetic energy MSE and Sinkhorn distance for the particle distribution. Our codebase is available under the URL: [https://github.com/tumaer/lagrangebench](https://github.com/tumaer/lagrangebench).
LagrangeBench: A Lagrangian Fluid Mechanics Benchmarking Suite
[ "Artur Toshev", "Gianluca Galletti", "Fabian Fritz", "Stefan Adami", "Nikolaus A. Adams" ]
Track/Datasets_and_Benchmarks
poster
2309.16342
[ "https://github.com/tumaer/lagrangebench" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=8TMhs2pIfG
@inproceedings{ jampani2023navi, title={{NAVI}: Category-Agnostic Image Collections with High-Quality 3D Shape and Pose Annotations}, author={Varun Jampani and Kevis-kokitsi Maninis and Andreas Engelhardt and Arjun Karpur and Karen Truong and Kyle Sargent and Stefan Popov and Andre Araujo and Ricardo Martin Brualla and Kaushal Patel and Daniel Vlasic and Vittorio Ferrari and Ameesh Makadia and Ce Liu and Yuanzhen Li and Howard Zhou}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=8TMhs2pIfG} }
Recent advances in neural reconstruction enable high-quality 3D object reconstruction from casually captured image collections. Current techniques mostly analyze their progress on relatively simple image collections where SfM techniques can provide ground-truth (GT) camera poses. We note that SfM techniques tend to fail on in-the-wild image collections such as image search results with varying backgrounds and illuminations. To enable systematic research progress on 3D reconstruction from casual image captures, we propose `NAVI': a new dataset of category-agnostic image collections of objects with high-quality 3D scans along with per-image 2D-3D alignments providing near-perfect GT camera parameters. These 2D-3D alignments allow us to extract accurate derivative annotations such as dense pixel correspondences, depth and segmentation maps. We demonstrate the use of NAVI image collections on different problem settings and show that NAVI enables more thorough evaluations that were not possible with existing datasets. We believe NAVI is beneficial for systematic research progress on 3D reconstruction and correspondence estimation.
NAVI: Category-Agnostic Image Collections with High-Quality 3D Shape and Pose Annotations
[ "Varun Jampani", "Kevis-kokitsi Maninis", "Andreas Engelhardt", "Arjun Karpur", "Karen Truong", "Kyle Sargent", "Stefan Popov", "Andre Araujo", "Ricardo Martin Brualla", "Kaushal Patel", "Daniel Vlasic", "Vittorio Ferrari", "Ameesh Makadia", "Ce Liu", "Yuanzhen Li", "Howard Zhou" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=846X3N11bf
@inproceedings{ cui2023probio, title={ProBio: A Protocol-guided Multimodal Dataset for Molecular Biology Lab}, author={Jieming Cui and Ziren Gong and Baoxiong Jia and Siyuan Huang and Zilong Zheng and Jianzhu Ma and Yixin Zhu}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=846X3N11bf} }
The difficulty of replicating research results has posed a significant impediment to the field of molecular biology. The advent of modern intelligent systems has led to notable progress in various domains. Consequently, we investigate intelligent monitoring systems as a means of tackling the reproducibility crisis. Specifically, we first curate a comprehensive multimodal dataset, named ProBio, as an initial step towards this objective. This dataset comprises fine-grained hierarchical annotations intended for studying activity understanding in BioLab. Next, we devise two challenging benchmarks, transparent solution tracking and multimodal action recognition, to emphasize the unique characteristics and difficulties associated with activity understanding in BioLab settings. Finally, we provide a thorough experimental evaluation of contemporary video understanding models, highlighting their limitations in this specialized domain to identify potential avenues for future research. We hope ProBio and its associated benchmarks will bring increased focus from the modern AI community to the realm of molecular biology.
ProBio: A Protocol-guided Multimodal Dataset for Molecular Biology Lab
[ "Jieming Cui", "Ziren Gong", "Baoxiong Jia", "Siyuan Huang", "Zilong Zheng", "Jianzhu Ma", "Yixin Zhu" ]
Track/Datasets_and_Benchmarks
poster
2311.00556
[ "" ]
https://huggingface.co/papers/2311.00556
1
0
0
7
[]
[]
[]
1
null
https://openreview.net/forum?id=7tMgzSvopH
@inproceedings{ augustyniak2023massively, title={Massively Multilingual Corpus of Sentiment Datasets and Multi-faceted Sentiment Classification Benchmark}, author={Lukasz Augustyniak and Szymon Wo{\'z}niak and Marcin Gruza and Piotr Gramacki and Krzysztof Rajda and Miko{\l}aj Morzy and Tomasz Jan Kajdanowicz}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=7tMgzSvopH} }
Despite impressive advancements in multilingual corpora collection and model training, developing large-scale deployments of multilingual models still presents a significant challenge. This is particularly true for language tasks that are culture-dependent. One such example is the area of multilingual sentiment analysis, where affective markers can be subtle and deeply ensconced in culture. This work presents the most extensive open massively multilingual corpus of datasets for training sentiment models. The corpus consists of 79 manually selected datasets from over 350 datasets reported in the scientific literature based on strict quality criteria. The corpus covers 27 languages representing 6 language families. Datasets can be queried using several linguistic and functional features. In addition, we present a multi-faceted sentiment classification benchmark summarizing hundreds of experiments conducted on different base models, training objectives, dataset collections, and fine-tuning strategies.
Massively Multilingual Corpus of Sentiment Datasets and Multi-faceted Sentiment Classification Benchmark
[ "Lukasz Augustyniak", "Szymon Woźniak", "Marcin Gruza", "Piotr Gramacki", "Krzysztof Rajda", "Mikołaj Morzy", "Tomasz Jan Kajdanowicz" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
https://huggingface.co/papers/2306.07902
1
0
0
7
[]
[ "Brand24/mms" ]
[]
1
null
https://openreview.net/forum?id=7kc4gtEk3b
@inproceedings{ liu2023a, title={A Comprehensive Benchmark for Neural Human Radiance Fields}, author={Kenkun Liu and Derong Jin and Ailing Zeng and Xiaoguang Han and Lei Zhang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=7kc4gtEk3b} }
The past two years have witnessed a significant increase in interest in NeRF-based human body rendering. While this surge has propelled considerable advancements, it has also led to an influx of methods and datasets. This explosion complicates experimental settings and makes fair comparisons challenging. In this work, we design and execute thorough studies of unified evaluation settings and metrics to establish a fair and reasonable benchmark for human NeRF models. To reveal the capabilities of existing models, we benchmark them on diverse and challenging scenes. Additionally, we construct a cross-subject benchmark pre-trained on large-scale datasets to assess generalizable methods. Finally, we analyze the components essential for animatability and generalizability, and make HumanNeRF from monocular videos generalizable as an initial baseline. We hope these benchmarks and analyses will serve the community.
A Comprehensive Benchmark for Neural Human Radiance Fields
[ "Kenkun Liu", "Derong Jin", "Ailing Zeng", "Xiaoguang Han", "Lei Zhang" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=7bghy0Gq75
@inproceedings{ wiederhold2023hoh, title={{HOH}: Markerless Multimodal Human-Object-Human Handover Dataset with Large Object Count}, author={Noah Wiederhold and Ava Megyeri and DiMaggio Paris and Sean Banerjee and Natasha Kholgade Banerjee}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=7bghy0Gq75} }
We present the HOH (Human-Object-Human) Handover Dataset, a large object count dataset with 136 objects, to accelerate data-driven research on handover studies, human-robot handover implementation, and artificial intelligence (AI) for handover parameter estimation from 2D and 3D data of two-person interactions. HOH contains multi-view RGB and depth data, skeletons, fused point clouds, grasp type and handedness labels, object, giver hand, and receiver hand 2D and 3D segmentations, giver and receiver comfort ratings, and paired object metadata and aligned 3D models for 2,720 handover interactions spanning 136 objects and 20 giver-receiver pairs—40 with role-reversal—organized from 40 participants. We also show experimental results of neural networks trained using HOH to perform grasp, orientation, and trajectory prediction. As the only fully markerless handover capture dataset, HOH represents natural human-human handover interactions, overcoming challenges of marker-based datasets, which require specific suits for body tracking and lack high-resolution hand tracking. To date, HOH is the largest handover dataset in terms of object count, participant count, pairs with role reversal accounted for, and total interactions captured.
HOH: Markerless Multimodal Human-Object-Human Handover Dataset with Large Object Count
[ "Noah Wiederhold", "Ava Megyeri", "DiMaggio Paris", "Sean Banerjee", "Natasha Kholgade Banerjee" ]
Track/Datasets_and_Benchmarks
poster
2310.00723
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=7VSBaP2OXN
@inproceedings{ gulino2023waymax, title={Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous Driving Research}, author={Cole Gulino and Justin Fu and Wenjie Luo and George Tucker and Eli Bronstein and Yiren Lu and Jean Harb and Xinlei Pan and Yan Wang and Xiangyu Chen and John D Co-Reyes and Rishabh Agarwal and Rebecca Roelofs and Yao Lu and Nico Montali and Paul Mougin and Zoey Zeyu Yang and Brandyn White and Aleksandra Faust and Rowan Thomas McAllister and Dragomir Anguelov and Benjamin Sapp}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=7VSBaP2OXN} }
Simulation is an essential tool to develop and benchmark autonomous vehicle planning software in a safe and cost-effective manner. However, realistic simulation requires accurate modeling of multi-agent interactive behaviors to be trustworthy, behaviors which can be highly nuanced and complex. To address these challenges, we introduce Waymax, a new data-driven simulator for autonomous driving in multi-agent scenes, designed for large-scale simulation and testing. Waymax uses publicly-released, real-world driving data (e.g., the Waymo Open Motion Dataset) to initialize or play back a diverse set of multi-agent simulated scenarios. It runs entirely on hardware accelerators such as TPUs/GPUs and supports in-graph simulation for training, making it suitable for modern large-scale, distributed machine learning workflows. To support online training and evaluation, Waymax includes several learned and hard-coded behavior models that allow for realistic interaction within simulation. To supplement Waymax, we benchmark a suite of popular imitation and reinforcement learning algorithms with ablation studies on different design decisions, where we highlight the effectiveness of routes as guidance for planning agents and the ability of RL to overfit against simulated agents.
Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous Driving Research
[ "Cole Gulino", "Justin Fu", "Wenjie Luo", "George Tucker", "Eli Bronstein", "Yiren Lu", "Jean Harb", "Xinlei Pan", "Yan Wang", "Xiangyu Chen", "John D Co-Reyes", "Rishabh Agarwal", "Rebecca Roelofs", "Yao Lu", "Nico Montali", "Paul Mougin", "Zoey Zeyu Yang", "Brandyn White", "Aleksandra Faust", "Rowan Thomas McAllister", "Dragomir Anguelov", "Benjamin Sapp" ]
Track/Datasets_and_Benchmarks
poster
2310.08710
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=7AjdHnjIHX
@inproceedings{ le2023cococounterfactuals, title={{COCO}-Counterfactuals: Automatically Constructed Counterfactual Examples for Image-Text Pairs}, author={Tiep Le and Vasudev Lal and Phillip Howard}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=7AjdHnjIHX} }
Counterfactual examples have proven to be valuable in the field of natural language processing (NLP) for both evaluating and improving the robustness of language models to spurious correlations in datasets. Despite their demonstrated utility for NLP, multimodal counterfactual examples have been relatively unexplored due to the difficulty of creating paired image-text data with minimal counterfactual changes. To address this challenge, we introduce a scalable framework for automatic generation of counterfactual examples using text-to-image diffusion models. We use our framework to create COCO-Counterfactuals, a multimodal counterfactual dataset of paired image and text captions based on the MS-COCO dataset. We validate the quality of COCO-Counterfactuals through human evaluations and show that existing multimodal models are challenged by our counterfactual image-text pairs. Additionally, we demonstrate the usefulness of COCO-Counterfactuals for improving out-of-domain generalization of multimodal vision-language models via training data augmentation. We make our code and the COCO-Counterfactuals dataset publicly available.
COCO-Counterfactuals: Automatically Constructed Counterfactual Examples for Image-Text Pairs
[ "Tiep Le", "Vasudev Lal", "Phillip Howard" ]
Track/Datasets_and_Benchmarks
poster
2309.14356
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=71uRr9N39A
@inproceedings{ yu2023qh, title={{QH}9: A Quantum Hamiltonian Prediction Benchmark for {QM}9 Molecules}, author={Haiyang Yu and Meng Liu and Youzhi Luo and Alex Strasser and Xiaofeng Qian and Xiaoning Qian and Shuiwang Ji}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=71uRr9N39A} }
Supervised machine learning approaches have been increasingly used in accelerating electronic structure prediction as surrogates of first-principle computational methods, such as density functional theory (DFT). While numerous quantum chemistry datasets focus on chemical properties and atomic forces, the ability to achieve accurate and efficient prediction of the Hamiltonian matrix is highly desired, as it is the most important and fundamental physical quantity that determines the quantum states of physical systems and chemical properties. In this work, we generate a new Quantum Hamiltonian dataset, named QH9, to provide precise Hamiltonian matrices for 999 molecular dynamics trajectories and 130,831 stable molecular geometries, based on the QM9 dataset. By designing benchmark tasks with various molecules, we show that current machine learning models have the capacity to predict Hamiltonian matrices for arbitrary molecules. Both the QH9 dataset and the baseline models are provided to the community through an open-source benchmark, which can be highly valuable for developing machine learning methods and accelerating molecular and materials design for scientific and technological applications. Our benchmark is publicly available at https://github.com/divelab/AIRS/tree/main/OpenDFT/QHBench.
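As a rough illustration of how predicted Hamiltonian matrices might be scored, the sketch below compares element-wise errors and eigenvalue errors on toy symmetric matrices; the metric choices and matrix size are assumptions for illustration, not the exact evaluation protocol of the QH9 benchmark.

```python
import numpy as np

def hamiltonian_errors(h_true, h_pred):
    """Element-wise MAE plus MAE of eigenvalues (a proxy for orbital energies)."""
    elem_mae = np.mean(np.abs(h_true - h_pred))
    eig_true = np.linalg.eigvalsh(h_true)   # symmetric -> real, sorted eigenvalues
    eig_pred = np.linalg.eigvalsh(h_pred)
    eig_mae = np.mean(np.abs(eig_true - eig_pred))
    return elem_mae, eig_mae

rng = np.random.default_rng(0)
a = rng.normal(size=(16, 16))
h_true = 0.5 * (a + a.T)                    # symmetrize to mimic a Hamiltonian
h_pred = h_true + 0.01 * rng.normal(size=(16, 16))
h_pred = 0.5 * (h_pred + h_pred.T)
print(hamiltonian_errors(h_true, h_pred))
```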
QH9: A Quantum Hamiltonian Prediction Benchmark for QM9 Molecules
[ "Haiyang Yu", "Meng Liu", "Youzhi Luo", "Alex Strasser", "Xiaofeng Qian", "Xiaoning Qian", "Shuiwang Ji" ]
Track/Datasets_and_Benchmarks
poster
2306.09549
[ "https://github.com/divelab/AIRS" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=6zcfrSz98y
@inproceedings{ li2023mathcalm, title={$\mathcal{M}^4$: A Unified {XAI} Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities and Models}, author={Xuhong Li and Mengnan Du and Jiamin Chen and Yekun Chai and Himabindu Lakkaraju and Haoyi Xiong}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=6zcfrSz98y} }
While Explainable Artificial Intelligence (XAI) techniques have been widely studied to explain predictions made by deep neural networks, evaluating the faithfulness of explanation results remains challenging due to the heterogeneity of explanations across models and the lack of ground-truth explanations. This paper introduces an XAI benchmark named $\mathcal{M}^4$, which allows evaluating various input feature attribution methods using the same set of faithfulness metrics across multiple data modalities (images and texts) and network structures (ResNets, MobileNets, Transformers). We also propose a taxonomy for these metrics. We first categorize commonly used XAI evaluation metrics into three groups based on the ground truth they require. We then implement classic and state-of-the-art feature attribution methods using InterpretDL and conduct extensive experiments to compare methods, provide holistic evaluations as benchmark baselines, and gain insights. Several observations useful for designing attribution algorithms emerge from these experiments. The implementation of state-of-the-art explanation methods and evaluation metrics in $\mathcal{M}^4$ is publicly available at \url{https://github.com/PaddlePaddle/InterpretDL}.
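One family of ground-truth-free faithfulness checks perturbs inputs according to the attribution ranking and tracks the model score; the deletion-curve sketch below illustrates the idea on a toy linear model. The helper names and the toy model are assumptions, and this is not the InterpretDL implementation.

```python
import numpy as np

def deletion_curve(model, x, attribution, baseline=0.0, steps=10):
    """Remove the most-attributed features first and track the model score."""
    order = np.argsort(-np.abs(attribution))          # most important first
    scores = [model(x)]
    x_work = x.copy()
    chunk = max(1, len(order) // steps)
    for start in range(0, len(order), chunk):
        x_work[order[start:start + chunk]] = baseline
        scores.append(model(x_work))
    return np.array(scores)                           # faithful maps drop quickly

# Toy linear model and its exact attribution (weight * input).
rng = np.random.default_rng(0)
w = rng.normal(size=20)
x = rng.normal(size=20)
model = lambda v: float(w @ v)
attribution = w * x
print(deletion_curve(model, x, attribution))
```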
ℳ^4: A Unified XAI Benchmark for Faithfulness Evaluation of Feature Attribution Methods across Metrics, Modalities and Models
[ "Xuhong Li", "Mengnan Du", "Jiamin Chen", "Yekun Chai", "Himabindu Lakkaraju", "Haoyi Xiong" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=6jOlRwnqbb
@inproceedings{ tanke2023humans, title={Humans in Kitchens: A Dataset for Multi-Person Human Motion Forecasting with Scene Context}, author={Julian Alexander Tanke and Oh-Hun Kwon and Felix Benjamin Mueller and Andreas Doering and Juergen Gall}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=6jOlRwnqbb} }
Forecasting the motion of multiple persons is very challenging. It requires modeling the interactions between humans as well as their interactions with objects and the environment. For example, a person might want to make a coffee, but if the coffee machine is already occupied, the person will have to wait. These complex relations between scene geometry and persons arise constantly in our daily lives, and models that aim to accurately forecast human behavior have to take them into consideration. To facilitate research in this direction, we propose Humans in Kitchens, a large-scale multi-person human motion dataset with annotated 3D human poses, scene geometry, and activities per person and frame. Our dataset consists of over 7.3 hours of recorded data of up to 16 persons at the same time in four kitchen scenes, with more than 4M annotated human poses represented by a parametric 3D body model. In addition, dynamic scene geometry and objects such as chairs or cupboards are annotated per frame. As initial benchmarks, we propose two protocols for short-term and long-term human motion forecasting.
Humans in Kitchens: A Dataset for Multi-Person Human Motion Forecasting with Scene Context
[ "Julian Alexander Tanke", "Oh-Hun Kwon", "Felix Benjamin Mueller", "Andreas Doering", "Juergen Gall" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=6iRH9SITva
@inproceedings{ dannenfelser2023into, title={Into the Single Cell Multiverse: an End-to-End Dataset for Procedural Knowledge Extraction in Biomedical Texts}, author={Ruth Dannenfelser and Jeffrey Zhong and Ran Zhang and Vicky Yao}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=6iRH9SITva} }
Many of the most commonly explored natural language processing (NLP) information extraction tasks can be thought of as evaluations of declarative knowledge, or fact-based information extraction. Procedural knowledge extraction, i.e., breaking down a described process into a series of steps, has received much less attention, perhaps in part due to the lack of structured datasets that capture the knowledge extraction process from end-to-end. To address this unmet need, we present FlaMBé (Flow annotations for Multiverse Biological entities), a collection of expert-curated datasets across a series of complementary tasks that capture procedural knowledge in biomedical texts. This dataset is inspired by the observation that one ubiquitous source of procedural knowledge described as unstructured text is the methodology sections of academic papers. The workflows annotated in FlaMBé are from texts in the burgeoning field of single cell research, a research area that has become notorious for the number of software tools and complexity of workflows used. Additionally, FlaMBé provides, to our knowledge, the largest manually curated named entity recognition (NER) and disambiguation (NED) datasets for tissue/cell type, a fundamental biological entity that is critical for knowledge extraction in the biomedical research domain. Beyond providing a valuable dataset to enable further development of NLP models for procedural knowledge extraction, automating the process of workflow mining also has important implications for advancing reproducibility in biomedical research.
Into the Single Cell Multiverse: an End-to-End Dataset for Procedural Knowledge Extraction in Biomedical Texts
[ "Ruth Dannenfelser", "Jeffrey Zhong", "Ran Zhang", "Vicky Yao" ]
Track/Datasets_and_Benchmarks
oral
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=6hZIfAY9GD
@inproceedings{ yu2023large, title={Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias}, author={Yue Yu and Yuchen Zhuang and Jieyu Zhang and Yu Meng and Alexander Ratner and Ranjay Krishna and Jiaming Shen and Chao Zhang}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=6hZIfAY9GD} }
Large language models (LLMs) have been recently leveraged as training data generators for various natural language processing (NLP) tasks. While previous research has explored different approaches to training models using generated data, they generally rely on simple class-conditional prompts, which may limit the diversity of the generated data and inherit the systematic biases of the LLM. Thus, we investigate training data generation with diversely attributed prompts (e.g., specifying attributes like length and style), which have the potential to yield diverse and attributed generated data. Our investigation focuses on datasets with high cardinality and diverse domains, wherein we demonstrate that attributed prompts outperform simple class-conditional prompts in terms of the resulting model's performance. Additionally, we present a comprehensive empirical study on data generation encompassing vital aspects like bias, diversity, and efficiency, and highlight three key observations: firstly, synthetic datasets generated by simple prompts exhibit significant biases, such as regional bias; secondly, attribute diversity plays a pivotal role in enhancing model performance; lastly, attributed prompts achieve the performance of simple class-conditional prompts while utilizing only 5\% of the ChatGPT querying cost associated with the latter. The data and code are available on {\url{https://github.com/yueyu1030/AttrPrompt}}.
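The contrast between simple class-conditional prompts and diversely attributed prompts can be sketched as follows; the attribute pools, template wording, and task (product reviews) are hypothetical stand-ins rather than the paper's actual prompt set.

```python
import random

# Hypothetical attribute pools; the paper's actual attributes are class-specific.
ATTRIBUTES = {
    "length":   ["short", "medium-length", "long"],
    "style":    ["formal", "casual", "enthusiastic"],
    "subtopic": ["battery life", "shipping speed", "customer support"],
}

def attributed_prompt(label, rng=random):
    """Build a diversely attributed data-generation prompt for one class label."""
    picks = {k: rng.choice(v) for k, v in ATTRIBUTES.items()}
    return (f"Write a {picks['length']} product review in a {picks['style']} tone "
            f"about {picks['subtopic']}. The overall sentiment must be {label}.")

def simple_prompt(label):
    """Class-conditional baseline: the same prompt every time, hence less diversity."""
    return f"Write a product review whose overall sentiment is {label}."

random.seed(0)
for _ in range(3):
    print(attributed_prompt("positive"))
print(simple_prompt("positive"))
```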
Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias
[ "Yue Yu", "Yuchen Zhuang", "Jieyu Zhang", "Yu Meng", "Alexander Ratner", "Ranjay Krishna", "Jiaming Shen", "Chao Zhang" ]
Track/Datasets_and_Benchmarks
poster
2306.15895
[ "https://github.com/yueyu1030/attrprompt" ]
https://huggingface.co/papers/2306.15895
2
0
0
8
[]
[ "yyu/amazon-attrprompt", "yyu/arxiv-attrprompt", "yyu/stackexchange-attrprompt", "yyu/reddit-attrprompt" ]
[]
1
null
https://openreview.net/forum?id=6URyQ9QhYv
@inproceedings{ birhane2023into, title={Into the {LAION}{\textquoteright}s Den: Investigating Hate in Multimodal Datasets}, author={Abeba Birhane and vinay uday prabhu and Sanghyun Han and Vishnu Boddeti and Sasha Luccioni}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=6URyQ9QhYv} }
`Scale the model, scale the data, scale the compute' is the reigning sentiment in the world of generative AI today. While the impact of model scaling has been extensively studied, we are only beginning to scratch the surface of data scaling and its consequences. This is especially critical in the context of vision-language datasets such as LAION. These datasets are continually growing in size and are built based on large-scale internet dumps such as the Common Crawl, which is known to have numerous drawbacks concerning quality, legality, and content. The datasets then serve as the backbone for large generative models, contributing to the operationalization and perpetuation of harmful societal and historical biases and stereotypes. In this paper, we investigate the effect of scaling datasets on hateful content through a comparative audit of two datasets: LAION-400M and LAION-2B. Our results show that hate content increased by nearly **12%** with dataset scale, measured both qualitatively and quantitatively using a metric that we term the Hate Content Rate (HCR). We also found that filtering dataset contents based on Not Safe For Work (NSFW) values calculated based on images alone does not exclude all the harmful content in alt-text. Instead, we found that trace amounts of hateful, targeted, and aggressive text remain even when carrying out conservative filtering. We end with a reflection and a discussion of the significance of our results for dataset curation and usage in the AI community. Code and the meta-data assets curated in this paper are publicly available at https://github.com/vinayprabhu/hate_scaling. Content warning: This paper contains examples of hateful text that might be disturbing, distressing, and/or offensive.
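A Hate Content Rate of the kind described can be sketched as the fraction of samples whose alt-text exceeds a hate-score threshold; the keyword-based scorer below is a toy stand-in, since the actual audit relies on a trained hate-speech classifier and a specific HCR definition.

```python
def hate_content_rate(samples, score_fn, threshold=0.5):
    """Fraction of alt-text samples whose hate score exceeds a threshold."""
    flagged = sum(1 for text in samples if score_fn(text) >= threshold)
    return flagged / max(len(samples), 1)

# Stand-in scorer: a real audit would use a trained hate-speech classifier.
LEXICON = {"hateful", "slur"}
toy_score = lambda text: float(any(tok in LEXICON for tok in text.lower().split()))

corpus_small = ["a photo of a dog", "some hateful caption", "a red car"]
corpus_large = corpus_small + ["another hateful caption", "a cat", "a slur here"]
print(hate_content_rate(corpus_small, toy_score))   # compare rates across scales
print(hate_content_rate(corpus_large, toy_score))
```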
Into the LAION’s Den: Investigating Hate in Multimodal Datasets
[ "Abeba Birhane", "vinay uday prabhu", "Sanghyun Han", "Vishnu Boddeti", "Sasha Luccioni" ]
Track/Datasets_and_Benchmarks
poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=5OjLGiJW3u
@inproceedings{ ellis2023smacv, title={{SMAC}v2: An Improved Benchmark for Cooperative Multi-Agent Reinforcement Learning}, author={Benjamin Ellis and Jonathan Cook and Skander Moalla and Mikayel Samvelyan and Mingfei Sun and Anuj Mahajan and Jakob Nicolaus Foerster and Shimon Whiteson}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=5OjLGiJW3u} }
The availability of challenging benchmarks has played a key role in the recent progress of machine learning. In cooperative multi-agent reinforcement learning, the StarCraft Multi-Agent Challenge (SMAC) has become a popular testbed for centralised training with decentralised execution. However, after years of sustained improvement on SMAC, algorithms now achieve near-perfect performance. In this work, we conduct new analysis demonstrating that SMAC lacks the stochasticity and partial observability to require complex *closed-loop* policies. In particular, we show that an *open-loop* policy conditioned only on the timestep can achieve non-trivial win rates for many SMAC scenarios. To address this limitation, we introduce SMACv2, a new version of the benchmark where scenarios are procedurally generated and require agents to generalise to previously unseen settings (from the same distribution) during evaluation. We also introduce the extended partial observability challenge (EPO), which augments SMACv2 to ensure meaningful partial observability. We show that these changes ensure the benchmark requires the use of *closed-loop* policies. We evaluate state-of-the-art algorithms on SMACv2 and show that it presents significant challenges not present in the original benchmark. Our analysis illustrates that SMACv2 addresses the discovered deficiencies of SMAC and can help benchmark the next generation of MARL methods. Videos of training are available on our [website](https://sites.google.com/view/smacv2).
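The open-loop versus closed-loop distinction driving this analysis can be illustrated in a few lines: an open-loop policy maps only the timestep to an action, while a closed-loop policy conditions on the current observation. The classes below are a self-contained sketch, not SMAC or SMACv2 code.

```python
import numpy as np

class OpenLoopPolicy:
    """Ignores observations entirely; the action depends only on the timestep."""
    def __init__(self, action_schedule):
        self.schedule = action_schedule          # e.g. learned per-timestep actions
    def act(self, obs, t):
        return self.schedule[min(t, len(self.schedule) - 1)]

class ClosedLoopPolicy:
    """Toy observation-dependent policy: reacts to what it currently sees."""
    def __init__(self, n_actions):
        self.n_actions = n_actions
    def act(self, obs, t):
        return int(np.argmax(obs[: self.n_actions]))  # placeholder decision rule

rng = np.random.default_rng(0)
open_loop = OpenLoopPolicy(action_schedule=[2, 2, 1, 0, 3])
closed_loop = ClosedLoopPolicy(n_actions=4)
for t in range(5):
    obs = rng.normal(size=8)                     # stochastic, partially observed state
    print(t, open_loop.act(obs, t), closed_loop.act(obs, t))
```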
SMACv2: An Improved Benchmark for Cooperative Multi-Agent Reinforcement Learning
[ "Benjamin Ellis", "Jonathan Cook", "Skander Moalla", "Mikayel Samvelyan", "Mingfei Sun", "Anuj Mahajan", "Jakob Nicolaus Foerster", "Shimon Whiteson" ]
Track/Datasets_and_Benchmarks
poster
2212.07489
[ "https://github.com/oxwhirl/smacv2" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=5HisVXnx0n
@inproceedings{ park2023glemos, title={{GLEMOS}: Benchmark for Instantaneous Graph Learning Model Selection}, author={Namyong Park and Ryan A. Rossi and Xing Wang and Antoine Simoulin and Nesreen K. Ahmed and Christos Faloutsos}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=5HisVXnx0n} }
The choice of a graph learning (GL) model (i.e., a GL algorithm and its hyperparameter settings) has a significant impact on the performance of downstream tasks. However, selecting the right GL model becomes increasingly difficult and time consuming as more and more GL models are developed. Accordingly, it is of great significance and practical value to equip users of GL with the ability to perform a near-instantaneous selection of an effective GL model without manual intervention. Despite the recent attempts to tackle this important problem, there has been no comprehensive benchmark environment to evaluate the performance of GL model selection methods. To bridge this gap, we present GLEMOS in this work, a comprehensive benchmark for instantaneous GL model selection that makes the following contributions. (i) GLEMOS provides extensive benchmark data for fundamental GL tasks, i.e., link prediction and node classification, including the performances of 366 models on 457 graphs on these tasks. (ii) GLEMOS designs multiple evaluation settings, and assesses how effectively representative model selection techniques perform in these different settings. (iii) GLEMOS is designed to be easily extended with new models, new graphs, and new performance records. (iv) Based on the experimental results, we discuss the limitations of existing approaches and highlight future research directions. To promote research on this significant problem, we make the benchmark data and code publicly available at https://namyongpark.github.io/glemos.
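A minimal instantaneous model-selection baseline picks, for a new graph, the model that performed best on the most similar benchmark graph under some meta-feature fingerprint; the sketch below assumes a hand-picked set of cheap graph statistics and a nearest-neighbor rule, which need not match GLEMOS's actual meta-features or selection methods.

```python
import numpy as np

def meta_features(adj):
    """A few cheap graph statistics used as a fingerprint of the graph."""
    degrees = adj.sum(axis=1)
    n = adj.shape[0]
    return np.array([n, degrees.mean(), degrees.std(), adj.sum() / (n * n)])

def select_model(new_adj, bench_features, perf_matrix, model_names):
    """Pick the model that performed best on the most similar benchmark graph."""
    query = meta_features(new_adj)
    dists = np.linalg.norm(bench_features - query, axis=1)
    nearest = int(np.argmin(dists))
    return model_names[int(np.argmax(perf_matrix[nearest]))]

rng = np.random.default_rng(0)
bench_graphs = [(rng.random((20, 20)) < 0.2).astype(float) for _ in range(5)]
bench_features = np.stack([meta_features(a) for a in bench_graphs])
perf_matrix = rng.random((5, 3))                 # rows: graphs, cols: candidate models
model_names = ["GCN-a", "GraphSAGE-b", "GAT-c"]  # hypothetical model configurations
new_graph = (rng.random((25, 25)) < 0.25).astype(float)
print(select_model(new_graph, bench_features, perf_matrix, model_names))
```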
GLEMOS: Benchmark for Instantaneous Graph Learning Model Selection
[ "Namyong Park", "Ryan A. Rossi", "Xing Wang", "Antoine Simoulin", "Nesreen K. Ahmed", "Christos Faloutsos" ]
Track/Datasets_and_Benchmarks
poster
2404.01578
[ "https://github.com/facebookresearch/glemos" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=5FnttJZQFn
@inproceedings{ montali2023the, title={The Waymo Open Sim Agents Challenge}, author={Nico Montali and John Lambert and Paul Mougin and Alex Kuefler and Nicholas Rhinehart and Michelle Li and Cole Gulino and Tristan Emrich and Zoey Zeyu Yang and Shimon Whiteson and Brandyn White and Dragomir Anguelov}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=5FnttJZQFn} }
Simulation with realistic, interactive agents represents a key task for autonomous vehicle software development. In this work, we introduce the Waymo Open Sim Agents Challenge (WOSAC). WOSAC is the first public challenge to tackle this task and propose corresponding metrics. The goal of the challenge is to stimulate the design of realistic simulators that can be used to evaluate and train a behavior model for autonomous driving. We outline our evaluation methodology, present results for a number of different baseline simulation agent methods, and analyze several submissions to the 2023 competition which ran from March 16, 2023 to May 23, 2023. The WOSAC evaluation server remains open for submissions and we discuss open problems for the task.
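The realism metrics are, roughly, approximate likelihoods of the logged behavior under distributions induced by simulated rollouts; the one-dimensional sketch below fits per-timestep Gaussians to rollouts and scores the log with their negative log-likelihood. This is a strong simplification of the challenge's actual metric suite, and all names and numbers are illustrative assumptions.

```python
import numpy as np

def approximate_nll(logged, rollouts, min_std=0.1):
    """Mean NLL of the logged trajectory under per-timestep Gaussians
    fitted to the simulated rollouts (rollouts shape: [n_rollouts, T])."""
    mu = rollouts.mean(axis=0)
    std = np.maximum(rollouts.std(axis=0), min_std)   # floor avoids degenerate fits
    nll = 0.5 * ((logged - mu) / std) ** 2 + np.log(std) + 0.5 * np.log(2 * np.pi)
    return float(nll.mean())

rng = np.random.default_rng(0)
T = 40
logged_speed = np.linspace(0.0, 10.0, T)              # logged agent speeds over time
good_sim = logged_speed + rng.normal(0.0, 0.3, size=(32, T))
bad_sim = np.full((32, T), 5.0) + rng.normal(0.0, 0.3, size=(32, T))
print(approximate_nll(logged_speed, good_sim))        # lower NLL -> more realistic
print(approximate_nll(logged_speed, bad_sim))
```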
The Waymo Open Sim Agents Challenge
[ "Nico Montali", "John Lambert", "Paul Mougin", "Alex Kuefler", "Nicholas Rhinehart", "Michelle Li", "Cole Gulino", "Tristan Emrich", "Zoey Zeyu Yang", "Shimon Whiteson", "Brandyn White", "Dragomir Anguelov" ]
Track/Datasets_and_Benchmarks
oral
2305.12032
[ "https://github.com/wangwenxi-handsome/joint-multipathpp" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=5Exz7eaBXH
@inproceedings{ tung2023physion, title={Physion++: Evaluating Physical Scene Understanding that Requires Online Inference of Different Physical Properties}, author={Hsiao-Yu Tung and Mingyu Ding and Zhenfang Chen and Daniel Bear and Chuang Gan and Joshua B. Tenenbaum and Daniel LK Yamins and Judith E Fan and Kevin A. Smith}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=5Exz7eaBXH} }
General physical scene understanding requires more than simply localizing and recognizing objects -- it requires knowledge that objects can have different latent properties (e.g., mass or elasticity), and that those properties affect the outcome of physical events. While there has been great progress in physical and video prediction models in recent years, benchmarks to test their performance typically do not require an understanding that objects have individual physical properties, or at best test only those properties that are directly observable (e.g., size or color). This work proposes a novel dataset and benchmark, termed Physion++, that rigorously evaluates visual physical prediction in artificial systems under circumstances where those predictions rely on accurate estimates of the latent physical properties of objects in the scene. Specifically, we test scenarios where accurate prediction relies on estimates of properties such as mass, friction, elasticity, and deformability, and where the values of those properties can only be inferred by observing how objects move and interact with other objects or fluids. We evaluate the performance of a number of state-of-the-art prediction models that span a variety of levels of learning vs. built-in knowledge, and compare that performance to a set of human predictions. We find that models that have been trained using standard regimes and datasets do not spontaneously learn to make inferences about latent properties, but also that models that encode objectness and physical states tend to make better predictions. However, there is still a huge gap between all models and human performance, and all models' predictions correlate poorly with those made by humans, suggesting that no state-of-the-art model is learning to make physical predictions in a human-like way. These results show that current deep learning models that succeed in some settings nevertheless fail to achieve human-level physical prediction in other cases, especially those where latent property inference is required. Project page: https://dingmyu.github.io/physion_v2/
Physion++: Evaluating Physical Scene Understanding that Requires Online Inference of Different Physical Properties
[ "Hsiao-Yu Tung", "Mingyu Ding", "Zhenfang Chen", "Daniel Bear", "Chuang Gan", "Joshua B. Tenenbaum", "Daniel LK Yamins", "Judith E Fan", "Kevin A. Smith" ]
Track/Datasets_and_Benchmarks
poster
2306.15668
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
null
https://openreview.net/forum?id=5ADv5OfQgU
@inproceedings{ ivanovic2023trajdata, title={trajdata: A Unified Interface to Multiple Human Trajectory Datasets}, author={Boris Ivanovic and Guanyu Song and Igor Gilitschenski and Marco Pavone}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=5ADv5OfQgU} }
The field of trajectory forecasting has grown significantly in recent years, partially owing to the release of numerous large-scale, real-world human trajectory datasets for autonomous vehicles (AVs) and pedestrian motion tracking. While such datasets have been a boon for the community, they each use custom and unique data formats and APIs, making it cumbersome for researchers to train and evaluate methods across multiple datasets. To remedy this, we present trajdata: a unified interface to multiple human trajectory datasets. At its core, trajdata provides a simple, uniform, and efficient representation and API for trajectory and map data. As a demonstration of its capabilities, in this work we conduct a comprehensive empirical evaluation of existing trajectory datasets, providing users with a rich understanding of the data underpinning much of current pedestrian and AV motion forecasting research, and proposing suggestions for future datasets from these insights. trajdata is permissively licensed (Apache 2.0) and can be accessed online at https://github.com/NVlabs/trajdata.
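The core idea of a unified trajectory representation can be sketched with a small record type that per-dataset converters would emit, so that downstream code works identically across sources; the dataclass and field names below are illustrative assumptions, not the real trajdata API.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class AgentTrajectory:
    """Minimal unified record that per-dataset converters would emit."""
    dataset: str                      # e.g. "nuScenes", "ETH/UCY"
    agent_id: str
    agent_type: str                   # "vehicle", "pedestrian", ...
    timestamps_s: List[float]
    xy: List[Tuple[float, float]]     # positions in a common metric frame

def average_speed(traj: AgentTrajectory) -> float:
    """Works identically regardless of which source dataset the record came from."""
    dist = sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(traj.xy[:-1], traj.xy[1:])
    )
    duration = traj.timestamps_s[-1] - traj.timestamps_s[0]
    return dist / duration if duration > 0 else 0.0

ped = AgentTrajectory("ETH/UCY", "ped_7", "pedestrian",
                      [0.0, 0.4, 0.8, 1.2], [(0, 0), (0.5, 0), (1.0, 0.1), (1.5, 0.2)])
print(average_speed(ped))
```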
trajdata: A Unified Interface to Multiple Human Trajectory Datasets
[ "Boris Ivanovic", "Guanyu Song", "Igor Gilitschenski", "Marco Pavone" ]
Track/Datasets_and_Benchmarks
poster
2307.13924
[ "https://github.com/nvlabs/trajdata" ]
https://huggingface.co/papers/2307.13924
1
2
0
4
[]
[]
[]
1
null
https://openreview.net/forum?id=4kV7qDi0EB
@inproceedings{ ash2023wcld, title={{WCLD}: Curated Large Dataset of Criminal Cases from Wisconsin Circuit Courts}, author={Elliott Ash and Naman Goel and Nianyun Li and Claudia Marangon and Peiyao Sun}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=4kV7qDi0EB} }
Machine learning-based decision-support tools in criminal justice systems are the subject of intense discussion and academic research. There are important open questions about the utility and fairness of such tools. Academic researchers often rely on a few small datasets that are not sufficient to empirically study various real-world aspects of these questions. In this paper, we contribute WCLD, a curated large dataset of 1.5 million criminal cases from circuit courts in the U.S. state of Wisconsin. We used reliable public data from 1970 to 2020 to curate attributes like prior criminal counts and recidivism outcomes. The dataset contains a large number of samples from five racial groups, in addition to information such as sex and age (at judgment and first offense). Other attributes in this dataset include neighborhood characteristics obtained from census data, detailed offense types, charge severity, case decisions, sentence lengths, and year of filing. We also provide pseudo-identifiers for judge, county, and zipcode. The dataset will not only enable researchers to more rigorously study algorithmic fairness in the context of criminal justice, but also to relate algorithmic challenges to various systemic issues. We also discuss in detail the process of constructing the dataset and provide a datasheet. The WCLD dataset is available at https://clezdata.github.io/wcld/.
WCLD: Curated Large Dataset of Criminal Cases from Wisconsin Circuit Courts
[ "Elliott Ash", "Naman Goel", "Nianyun Li", "Claudia Marangon", "Peiyao Sun" ]
Track/Datasets_and_Benchmarks
poster
2310.18724
[ "" ]
https://huggingface.co/papers/2310.18724
0
1
0
5
[]
[]
[]
1
null
https://openreview.net/forum?id=4dsMX3RnF0
@inproceedings{ sizikova2023knowledgebased, title={Knowledge-based in silico models and dataset for the comparative evaluation of mammography {AI} for a range of breast characteristics, lesion conspicuities and doses}, author={Elena Sizikova and Niloufar Saharkhiz and Diksha Sharma and Miguel Lago and Berkman Sahiner and Jana Gut Delfino and Aldo Badano}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=4dsMX3RnF0} }
To generate evidence regarding the safety and efficacy of artificial intelligence (AI) enabled medical devices, AI models need to be evaluated on a diverse population of patient cases, some of which may not be readily available. We propose an evaluation approach for testing medical imaging AI models that relies on in silico imaging pipelines in which stochastic digital models of human anatomy (in object space) with and without pathology are imaged using a digital replica imaging acquisition system to generate realistic synthetic image datasets. Here, we release M-SYNTH, a dataset of cohorts with four breast fibroglandular density distributions imaged at different exposure levels using Monte Carlo x-ray simulations with the publicly available Virtual Imaging Clinical Trial for Regulatory Evaluation (VICTRE) toolkit. We utilize the synthetic dataset to analyze AI model performance and find that model performance decreases with increasing breast density and increases with higher mass density, as expected. As exposure levels decrease, AI model performance drops with the highest performance achieved at exposure levels lower than the nominal recommended dose for the breast type.
Knowledge-based in silico models and dataset for the comparative evaluation of mammography AI for a range of breast characteristics, lesion conspicuities and doses
[ "Elena Sizikova", "Niloufar Saharkhiz", "Diksha Sharma", "Miguel Lago", "Berkman Sahiner", "Jana Gut Delfino", "Aldo Badano" ]
Track/Datasets_and_Benchmarks
poster
2310.18494
[ "https://github.com/didsr/msynth-release" ]
https://huggingface.co/papers/2310.18494
0
2
0
7
[]
[ "didsr/msynth" ]
[]
1
null
https://openreview.net/forum?id=4d8dO5sAeM
@inproceedings{ chen2023benchmarking, title={Benchmarking Robustness of Adaptation Methods on Pre-trained Vision-Language Models}, author={Shuo Chen and Jindong Gu and Zhen Han and Yunpu Ma and Philip Torr and Volker Tresp}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=4d8dO5sAeM} }
Various adaptation methods, such as LoRA, prompts, and adapters, have been proposed to enhance the performance of pre-trained vision-language models in specific domains. As test samples in real-world applications usually differ from adaptation data, the robustness of these adaptation methods against distribution shifts is essential. In this study, we assess the robustness of 11 widely-used adaptation methods across 4 vision-language datasets under multimodal corruptions. Concretely, we introduce 7 benchmark datasets, including 96 visual and 87 textual corruptions, to investigate the robustness of different adaptation methods, the impact of available adaptation examples, and the influence of trainable parameter size during adaptation. Our analysis reveals that: 1) Adaptation methods are more sensitive to text corruptions than visual corruptions. 2) Full fine-tuning does not consistently provide the highest robustness; instead, adapters can achieve better robustness with comparable clean performance. 3) Contrary to expectations, our findings indicate that increasing the amount of adaptation data and the number of parameters does not guarantee enhanced robustness; instead, it results in even lower robustness. We hope this study will benefit future research in the development of robust multimodal adaptation methods. The benchmark, code, and dataset used in this study can be accessed at https://adarobustness.github.io.
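A common way to summarize such results is a relative-robustness score: the accuracy retained under each corruption divided by clean accuracy, averaged over corruptions. The sketch below uses synthetic predictions, and the exact metric definition may differ from the one used in the benchmark.

```python
import numpy as np

def accuracy(preds, labels):
    return float(np.mean(preds == labels))

def relative_robustness(clean_acc, corrupted_accs):
    """Average retained accuracy across corruptions, relative to clean accuracy."""
    return float(np.mean([acc / clean_acc for acc in corrupted_accs]))

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)
clean_preds = np.where(rng.random(1000) < 0.80, labels, rng.integers(0, 10, 1000))
corrupted_preds = [
    np.where(rng.random(1000) < p, labels, rng.integers(0, 10, 1000))
    for p in (0.70, 0.55, 0.62)                 # three hypothetical corruptions
]
clean_acc = accuracy(clean_preds, labels)
corr_accs = [accuracy(p, labels) for p in corrupted_preds]
print(clean_acc, corr_accs, relative_robustness(clean_acc, corr_accs))
```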
Benchmarking Robustness of Adaptation Methods on Pre-trained Vision-Language Models
[ "Shuo Chen", "Jindong Gu", "Zhen Han", "Yunpu Ma", "Philip Torr", "Volker Tresp" ]
Track/Datasets_and_Benchmarks
poster
2306.02080
[ "" ]
https://huggingface.co/papers/2306.02080
0
0
0
6
[]
[]
[]
1
null
https://openreview.net/forum?id=3z9YV29Ogn
@inproceedings{ kaltenborn2023climateset, title={ClimateSet: A Large-Scale Climate Model Dataset for Machine Learning}, author={Julia Kaltenborn and Charlotte Emilie Elektra Lange and Venkatesh Ramesh and Philippe Brouillard and Yaniv Gurwicz and Chandni Nagda and Jakob Runge and Peer Nowack and David Rolnick}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2023}, url={https://openreview.net/forum?id=3z9YV29Ogn} }
Climate models have been key for assessing the impact of climate change and simulating future climate scenarios. The machine learning (ML) community has taken an increased interest in supporting climate scientists’ efforts on various tasks such as climate model emulation, downscaling, and prediction tasks. Many of those tasks have been addressed on datasets created with single climate models. However, both the climate science and ML communities have suggested that to address those tasks at scale, we need large, consistent, and ML-ready climate model datasets. Here, we introduce ClimateSet, a dataset containing the inputs and outputs of 36 climate models from the Input4MIPs and CMIP6 archives. In addition, we provide a modular dataset pipeline for retrieving and preprocessing additional climate models and scenarios. We showcase the potential of our dataset by using it as a benchmark for ML-based climate model emulation. We gain new insights about the performance and generalization capabilities of the different ML models by analyzing their performance across different climate models. Furthermore, the dataset can be used to train an ML emulator on several climate models instead of just one. Such a “super-emulator” can quickly project new climate change scenarios, complementing existing scenarios already provided to policymakers. We believe ClimateSet will create the basis needed for the ML community to tackle climate-related tasks at scale.
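At its simplest, climate model emulation maps forcing inputs to a climate variable; the sketch below fits a linear "emulator" on synthetic global-mean forcings, whereas the benchmark's emulators operate on gridded spatiotemporal fields with deep models. All variable names and numbers here are toy assumptions.

```python
import numpy as np

# Toy stand-ins: per-year global forcings -> per-year temperature anomaly.
rng = np.random.default_rng(0)
years = 120
co2 = np.cumsum(rng.uniform(5, 12, size=years))        # cumulative emissions (a.u.)
aerosols = rng.uniform(0, 1, size=years)
X = np.column_stack([co2, aerosols, np.ones(years)])
temp = 0.0018 * co2 - 0.3 * aerosols + rng.normal(0, 0.05, size=years)

# Fit a linear "emulator" on the first 100 years, project the remaining years.
coef, *_ = np.linalg.lstsq(X[:100], temp[:100], rcond=None)
pred = X[100:] @ coef
print("held-out RMSE:", float(np.sqrt(np.mean((pred - temp[100:]) ** 2))))
```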
ClimateSet: A Large-Scale Climate Model Dataset for Machine Learning
[ "Julia Kaltenborn", "Charlotte Emilie Elektra Lange", "Venkatesh Ramesh", "Philippe Brouillard", "Yaniv Gurwicz", "Chandni Nagda", "Jakob Runge", "Peer Nowack", "David Rolnick" ]
Track/Datasets_and_Benchmarks
poster
2311.03721
[ "https://github.com/RolnickLab/ClimateSet" ]
https://huggingface.co/papers/2311.03721
0
0
0
9
[]
[]
[]
1