bibtex_url | proceedings | bibtext | abstract | title | authors | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | paper_page_exists_pre_conf |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=zr1e15kczE | @inproceedings{
zhang2023live,
title={Live Graph Lab: Towards Open, Dynamic and Real Transaction Graphs with {NFT}},
author={Zhen Zhang and Bingqiao Luo and Shengliang Lu and Bingsheng He},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=zr1e15kczE}
} | Numerous studies have been conducted to investigate the properties of large-scale temporal graphs. Despite the ubiquity of these graphs in real-world scenarios, it's usually impractical for us to obtain the whole real-time graphs due to privacy concerns and technical limitations. In this paper, we introduce the concept of {\it Live Graph Lab} for temporal graphs, which enables open, dynamic and real transaction graphs from blockchains. Among them, Non-fungible tokens (NFTs) have become one of the most prominent parts of blockchain over the past several years. With more than \$40 billion market capitalization, this decentralized ecosystem produces massive, anonymous and real transaction activities, which naturally forms a complicated transaction network. However, there is limited understanding about the characteristics of this emerging NFT ecosystem from a temporal graph analysis perspective. To mitigate this gap, we instantiate a live graph with NFT transaction network and investigate its dynamics to provide new observations and insights. Specifically, through downloading and parsing the NFT transaction activities, we obtain a temporal graph with more than 4.5 million nodes and 124 million edges. Then, a series of measurements are presented to understand the properties of the NFT ecosystem. Through comparisons with social, citation, and web networks, our analyses give intriguing findings and point out potential directions for future exploration. Finally, we also study machine learning models in this live graph to enrich the current datasets and provide new opportunities for the graph community. The source codes and dataset are available at https://livegraphlab.github.io. | Live Graph Lab: Towards Open, Dynamic and Real Transaction Graphs with NFT | [
"Zhen Zhang",
"Bingqiao Luo",
"Shengliang Lu",
"Bingsheng He"
] | Track/Datasets_and_Benchmarks | poster | 2310.11709 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=zbEYTg2F1U | @inproceedings{
desai2023asl,
title={{ASL} Citizen: A Community-Sourced Dataset for Advancing Isolated Sign Language Recognition},
author={Aashaka Desai and Lauren Berger and Fyodor O Minakov and Vanessa Milan and Chinmay Singh and Kriston L Pumphrey and Richard Ladner and Hal Daum{\'e} III and Alex Xijie Lu and Naomi Caselli and Danielle Bragg},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=zbEYTg2F1U}
} | Sign languages are used as a primary language by approximately 70 million D/deaf people world-wide. However, most communication technologies operate in spoken and written languages, creating inequities in access. To help tackle this problem, we release ASL Citizen, the first crowdsourced Isolated Sign Language Recognition (ISLR) dataset, collected with consent and containing 83,399 videos for 2,731 distinct signs filmed by 52 signers in a variety of environments. We propose that this dataset be used for sign language dictionary retrieval for American Sign Language (ASL), where a user demonstrates a sign to their webcam to retrieve matching signs from a dictionary. We show that training supervised machine learning classifiers with our dataset advances the state-of-the-art on metrics relevant for dictionary retrieval, achieving 63\% accuracy and a recall-at-10 of 91\%, evaluated entirely on videos of users who are not present in the training or validation sets. | ASL Citizen: A Community-Sourced Dataset for Advancing Isolated Sign Language Recognition | [
"Aashaka Desai",
"Lauren Berger",
"Fyodor O Minakov",
"Vanessa Milan",
"Chinmay Singh",
"Kriston L Pumphrey",
"Richard Ladner",
"Hal Daumé III",
"Alex Xijie Lu",
"Naomi Caselli",
"Danielle Bragg"
] | Track/Datasets_and_Benchmarks | poster | 2304.05934 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=zRYSJbcRcV | @inproceedings{
kuang2023stanfordorb,
title={Stanford-{ORB}: A Real-World 3D Object Inverse Rendering Benchmark},
author={Zhengfei Kuang and Yunzhi Zhang and Hong-Xing Yu and Samir Agarwala and Shangzhe Wu and Jiajun Wu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=zRYSJbcRcV}
} | We introduce Stanford-ORB, a new real-world 3D Object inverse Rendering Benchmark. Recent advances in inverse rendering have enabled a wide range of real-world applications in 3D content generation, moving rapidly from research and commercial use cases to consumer devices. While the results continue to improve, there is no real-world benchmark that can quantitatively assess and compare the performance of various inverse rendering methods. Existing real-world datasets typically only consist of the shape and multi-view images of objects, which are not sufficient for evaluating the quality of material recovery and object relighting. Methods capable of recovering material and lighting often resort to synthetic data for quantitative evaluation, which on the other hand does not guarantee generalization to complex real-world environments. We introduce a new dataset of real-world objects captured under a variety of natural scenes with ground-truth 3D scans, multi-view images, and environment lighting. Using this dataset, we establish the first comprehensive real-world evaluation benchmark for object inverse rendering tasks from in-the-wild scenes, and compare the performance of various existing methods. All data, code, and models can be accessed at https://stanfordorb.github.io/ | Stanford-ORB: A Real-World 3D Object Inverse Rendering Benchmark | [
"Zhengfei Kuang",
"Yunzhi Zhang",
"Hong-Xing Yu",
"Samir Agarwala",
"Shangzhe Wu",
"Jiajun Wu"
] | Track/Datasets_and_Benchmarks | poster | 2310.16044 | [
"https://github.com/StanfordORB/Stanford-ORB"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=zQU33Uh3qM | @inproceedings{
yuan2023revisiting,
title={Revisiting Out-of-distribution Robustness in {NLP}: Benchmarks, Analysis, and {LLM}s Evaluations},
author={Lifan Yuan and Yangyi Chen and Ganqu Cui and Hongcheng Gao and FangYuan Zou and Xingyi Cheng and Heng Ji and Zhiyuan Liu and Maosong Sun},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=zQU33Uh3qM}
} | This paper reexamines the research on out-of-distribution (OOD) robustness in the field of NLP. We find that the distribution shift settings in previous studies commonly lack adequate challenges, hindering the accurate evaluation of OOD robustness. To address these issues, we propose a benchmark construction protocol that ensures clear differentiation and challenging distribution shifts. Then we introduce
BOSS, a Benchmark suite for Out-of-distribution robustneSS evaluation covering 5 tasks and 20 datasets. Based on BOSS, we conduct a series of experiments on pretrained language models for analysis and evaluation of OOD robustness. First, for vanilla fine-tuning, we examine the relationship between in-distribution (ID) and OOD performance. We identify three typical types that unveil the inner learning
mechanism, which could potentially facilitate the forecasting of OOD robustness, correlating with the advancements on ID datasets. Then, we evaluate 5 classic methods on BOSS and find that, despite exhibiting some effectiveness in specific cases, they do not offer significant improvement compared to vanilla fine-tuning. Further, we evaluate 5 LLMs with various adaptation paradigms and find that when sufficient ID data is available, fine-tuning domain-specific models outperform LLMs on ID examples significantly. However, in the case of OOD instances, prioritizing LLMs with in-context learning yields better results. We identify that both fine-tuned small models and LLMs face challenges in effectively addressing downstream tasks. The code is public at https://github.com/lifan-yuan/OOD_NLP. | Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evaluations | [
"Lifan Yuan",
"Yangyi Chen",
"Ganqu Cui",
"Hongcheng Gao",
"FangYuan Zou",
"Xingyi Cheng",
"Heng Ji",
"Zhiyuan Liu",
"Maosong Sun"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=zGthDp4yYe | @inproceedings{
liu2023realdad,
title={Real3D-{AD}: A Dataset of Point Cloud Anomaly Detection},
author={Jiaqi Liu and Guoyang Xie and ruitao chen and Xinpeng Li and Jinbao Wang and Yong Liu and Chengjie Wang and Feng Zheng},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=zGthDp4yYe}
} | High-precision point cloud anomaly detection is the gold standard for identifying the defects of advancing machining and precision manufacturing. Despite some methodological advances in this area, the scarcity of datasets and the lack of a systematic benchmark hinder its development. We introduce Real3D-AD, a challenging high-precision point cloud anomaly detection dataset, addressing the limitations in the field. With 1,254 high-resolution 3D items (from forty thousand to millions of points for each item), Real3D-AD is the largest dataset for high-precision 3D industrial anomaly detection to date. Real3D-AD surpasses existing 3D anomaly detection datasets available in terms of point cloud resolution (0.0010mm-0.0015mm), $360^{\circ}$ degree coverage and perfect prototype. Additionally, we present a comprehensive benchmark for Real3D-AD, revealing the absence of baseline methods for high-precision point cloud anomaly detection. To address this, we propose Reg3D-AD, a registration-based 3D anomaly detection method incorporating a novel feature memory bank that preserves local and global representations. Extensive experiments on the Real3D-AD dataset highlight the effectiveness of Reg3D-AD. For reproducibility and accessibility, we provide the Real3D-AD dataset, benchmark source code, and Reg3D-AD on our website: https://github.com/M-3LAB/Real3D-AD. | Real3D-AD: A Dataset of Point Cloud Anomaly Detection | [
"Jiaqi Liu",
"Guoyang Xie",
"ruitao chen",
"Xinpeng Li",
"Jinbao Wang",
"Yong Liu",
"Chengjie Wang",
"Feng Zheng"
] | Track/Datasets_and_Benchmarks | poster | 2309.13226 | [
"https://github.com/m-3lab/real3d-ad"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=zFvvdJblZm | @inproceedings{
shen2023a,
title={A High-Resolution Dataset for Instance Detection with Multi-View Object Capture},
author={QIANQIAN SHEN and Yunhan Zhao and Nahyun Kwon and Jeeeun Kim and Yanan Li and Shu Kong},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=zFvvdJblZm}
} | Instance detection (InsDet) is a long-lasting problem in robotics and computer vision, aiming to detect object instances (predefined by some visual examples) in a cluttered scene. Despite its practical significance, its advancement is overshadowed by Object Detection, which aims to detect objects belonging to some predefined classes. One major reason is that current InsDet datasets are too small in scale by today's standards. For example, the popular InsDet dataset GMU (published in 2016) has only 23 instances, far less than COCO (80 classes), a well-known object detection dataset published in 2014. We are motivated to introduce a new InsDet dataset and protocol. First, we define a realistic setup for InsDet: training data consists of multi-view instance captures, along with diverse scene images allowing synthesizing training images by pasting instance images on them with free box annotations. Second, we release a real-world database, which contains multi-view capture of 100 object instances, and high-resolution (6k$\times$8k) testing images. Third, we extensively study baseline methods for InsDet on our dataset, analyze their performance and suggest future work. Somewhat surprisingly, using the off-the-shelf class-agnostic segmentation model (Segment Anything Model, SAM) and the self-supervised feature representation DINOv2 performs the best, achieving $>$10 AP better than end-to-end trained InsDet models that repurpose object detectors (e.g., FasterRCNN and RetinaNet). | A High-Resolution Dataset for Instance Detection with Multi-View Object Capture | [
"QIANQIAN SHEN",
"Yunhan Zhao",
"Nahyun Kwon",
"Jeeeun Kim",
"Yanan Li",
"Shu Kong"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=ygXSNrIU1p | @inproceedings{
liu2023symmetryinformed,
title={Symmetry-Informed Geometric Representation for Molecules, Proteins, and Crystalline Materials},
author={Shengchao Liu and weitao Du and Yanjing Li and Zhuoxinran Li and Zhiling Zheng and Chenru Duan and Zhi-Ming Ma and Omar M. Yaghi and Anima Anandkumar and Christian Borgs and Jennifer T Chayes and Hongyu Guo and Jian Tang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=ygXSNrIU1p}
} | Artificial intelligence for scientific discovery has recently generated significant interest within the machine learning and scientific communities, particularly in the domains of chemistry, biology, and material discovery. For these scientific problems, molecules serve as the fundamental building blocks, and machine learning has emerged as a highly effective and powerful tool for modeling their geometric structures. Nevertheless, due to the rapidly evolving process of the field and the knowledge gap between science ({\eg}, physics, chemistry, \& biology) and machine learning communities, a benchmarking study on geometrical representation for such data has not been conducted. To address such an issue, in this paper, we first provide a unified view of the current symmetry-informed geometric methods, classifying them into three main categories: invariance, equivariance with spherical frame basis, and equivariance with vector frame basis. Then we propose a platform, coined Geom3D, which enables benchmarking the effectiveness of geometric strategies. Geom3D contains 16 advanced symmetry-informed geometric representation models and 14 geometric pretraining methods over 52 diverse tasks, including small molecules, proteins, and crystalline materials. We hope that Geom3D can, on the one hand, eliminate barriers for machine learning researchers interested in exploring scientific problems; and, on the other hand, provide valuable guidance for researchers in computational chemistry, structural biology, and materials science, aiding in the informed selection of representation techniques for specific applications. The source code is available on \href{https://github.com/chao1224/Geom3D}{the GitHub repository}. | Symmetry-Informed Geometric Representation for Molecules, Proteins, and Crystalline Materials | [
"Shengchao Liu",
"weitao Du",
"Yanjing Li",
"Zhuoxinran Li",
"Zhiling Zheng",
"Chenru Duan",
"Zhi-Ming Ma",
"Omar M. Yaghi",
"Anima Anandkumar",
"Christian Borgs",
"Jennifer T Chayes",
"Hongyu Guo",
"Jian Tang"
] | Track/Datasets_and_Benchmarks | poster | 2306.09375 | [
"https://github.com/chao1224/geom3d"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=yZQDF9f6bR | @inproceedings{
li2023diplomat,
title={Diplomat: A Dialogue Dataset for Situated Prag{MAT}ic Reasoning},
author={Hengli Li and Song-Chun Zhu and Zilong Zheng},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=yZQDF9f6bR}
} | The ability to discern and comprehend pragmatic meanings is a cornerstone of social and emotional intelligence, referred to as pragmatic reasoning. Despite the strides made in the development of Large Language Models (LLMs), such as ChatGPT, these models grapple with capturing the nuanced and ambiguous facets of language, falling short of the aspiration to build human-like conversational agents. In this work, we introduce a novel benchmark, the **DiPlomat**, which delves into the fundamental components of conversational pragmatic reasoning, encompassing situational context reasoning, open-world knowledge acquisition, and unified figurative language understanding. We start by collecting a new human-annotated dialogue dataset, composed of 4,177 multi-turn dialogues and a vocabulary of 48,900 words. Along with the dataset, two tasks are proposed to evaluate machines' pragmatic reasoning capabilities, namely, Pragmatic Reasoning and Identification(PIR) and Conversational Question Answering (CQA). Furthermore, we probe into a zero-shot natural language inference task, where the significance of context in pragmatic reasoning is underscored. Experimental findings illustrate the existing limitations of current prevailing LLMs in the realm of pragmatic reasoning, shedding light on the pressing need for further research to facilitate the emergence of emotional intelligence within human-like conversational agents. | Diplomat: A Dialogue Dataset for Situated PragMATic Reasoning | [
"Hengli Li",
"Song-Chun Zhu",
"Zilong Zheng"
] | Track/Datasets_and_Benchmarks | poster | 2306.09030 | [
""
] | https://huggingface.co/papers/2306.09030 | 1 | 1 | 0 | 3 | [] | [
"henry12348/DiPlomat"
] | [] | 1 |
null | https://openreview.net/forum?id=yXLyhKvK4D | @inproceedings{
zhou2023opengsl,
title={Open{GSL}: A Comprehensive Benchmark for Graph Structure Learning},
author={Zhiyao Zhou and Sheng Zhou and Bochao Mao and Xuanyi Zhou and Jiawei Chen and Qiaoyu Tan and Daochen Zha and Yan Feng and Chun Chen and Can Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=yXLyhKvK4D}
} | Graph Neural Networks (GNNs) have emerged as the *de facto* standard for representation learning on graphs, owing to their ability to effectively integrate graph topology and node attributes. However, the inherent suboptimal nature of node connections, resulting from the complex and contingent formation process of graphs, presents significant challenges in modeling them effectively. To tackle this issue, Graph Structure Learning (GSL), a family of data-centric learning approaches, has garnered substantial attention in recent years. The core concept behind GSL is to jointly optimize the graph structure and the corresponding GNN models. Despite the proposal of numerous GSL methods, the progress in this field remains unclear due to inconsistent experimental protocols, including variations in datasets, data processing techniques, and splitting strategies. In this paper, we introduce OpenGSL, the first comprehensive benchmark for GSL, aimed at addressing this gap. OpenGSL enables a fair comparison among state-of-the-art GSL methods by evaluating them across various popular datasets using uniform data processing and splitting strategies. Through extensive experiments, we observe that existing GSL methods do not consistently outperform vanilla GNN counterparts. We also find that there is no significant correlation between the homophily of the learned structure and task performance, challenging the common belief. Moreover, we observe that the learned graph structure demonstrates a strong generalization ability across different GNN models, despite the high computational and space consumption. We hope that our open-sourced library will facilitate rapid and equitable evaluation and inspire further innovative research in this field. The code of the benchmark can be found in https://github.com/OpenGSL/OpenGSL. | OpenGSL: A Comprehensive Benchmark for Graph Structure Learning | [
"Zhiyao Zhou",
"Sheng Zhou",
"Bochao Mao",
"Xuanyi Zhou",
"Jiawei Chen",
"Qiaoyu Tan",
"Daochen Zha",
"Yan Feng",
"Chun Chen",
"Can Wang"
] | Track/Datasets_and_Benchmarks | poster | 2306.10280 | [
"https://github.com/opengsl/opengsl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=yWpY5I3XyX | @inproceedings{
liu2023fetv,
title={{FETV}: A Benchmark for Fine-Grained Evaluation of Open-Domain Text-to-Video Generation},
author={Yuanxin Liu and Lei Li and Shuhuai Ren and Rundong Gao and Shicheng Li and Sishuo Chen and Xu Sun and Lu Hou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=yWpY5I3XyX}
} | Recently, open-domain text-to-video (T2V) generation models have made remarkable progress. However, the promising results are mainly shown by the qualitative cases of generated videos, while the quantitative evaluation of T2V models still faces two critical problems. Firstly, existing studies lack fine-grained evaluation of T2V models on different categories of text prompts. Although some benchmarks have categorized the prompts, their categorization either only focuses on a single aspect or fails to consider the temporal information in video generation. Secondly, it is unclear whether the automatic evaluation metrics are consistent with human standards. To address these problems, we propose **FETV**, a benchmark for **F**ine-grained **E**valuation of **T**ext-to-**V**ideo generation. FETV is multi-aspect, categorizing the prompts based on three orthogonal aspects: the major content, the attributes to control and the prompt complexity. FETV is also temporal-aware, which introduces several temporal categories tailored for video generation.
Based on FETV, we conduct comprehensive manual evaluations of four representative T2V models, revealing their pros and cons on different categories of prompts from different aspects. We also extend FETV as a testbed to evaluate the reliability of automatic T2V metrics. The multi-aspect categorization of FETV enables fine-grained analysis of the metrics' reliability in different scenarios. We find that existing automatic metrics (e.g., CLIPScore and FVD) correlate poorly with human evaluation. To address this problem, we explore several solutions to improve CLIPScore and FVD, and develop two automatic metrics that exhibit significantly higher correlation with humans than existing metrics. Benchmark page: https://github.com/llyx97/FETV. | FETV: A Benchmark for Fine-Grained Evaluation of Open-Domain Text-to-Video Generation | [
"Yuanxin Liu",
"Lei Li",
"Shuhuai Ren",
"Rundong Gao",
"Shicheng Li",
"Sishuo Chen",
"Xu Sun",
"Lu Hou"
] | Track/Datasets_and_Benchmarks | poster | 2311.01813 | [
"https://github.com/llyx97/fetv"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=yEf8NSqTPu | @inproceedings{
starner2023popsign,
title={PopSign {ASL} v1.0: An Isolated American Sign Language Dataset Collected via Smartphones},
author={Thad Starner and Sean Forbes and Matthew So and David Martin and Rohit Sridhar and Gururaj Deshpande and Sam Sepah and Sahir Shahryar and Khushi Bhardwaj and Tyler Kwok and Daksh Sehgal and Saad Hassan and Bill Neubauer and Sofia Anandi Vempala and Alec Tan and Jocelyn Heath and Unnathi Utpal Kumar and Priyanka Vijayaraghavan Mosur and Tavenner M. Hall and Rajandeep Singh and Christopher Zhang Cui and Glenn Cameron and Sohier Dane and Garrett Tanzer},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=yEf8NSqTPu}
} | PopSign is a smartphone-based bubble-shooter game that helps hearing parents of deaf infants learn sign language. To help parents practice their ability to sign, PopSign is integrating sign language recognition as part of its gameplay. For training the recognizer, we introduce the PopSign ASL v1.0 dataset that collects examples of 250 isolated American Sign Language (ASL) signs using Pixel 4A smartphone selfie cameras in a variety of environments. It is the largest publicly available, isolated sign dataset by number of examples and is the first dataset to focus on one-handed, smartphone signs. We collected over 210,000 examples at 1944x2592 resolution made by 47 consenting Deaf adult signers for whom American Sign Language is their primary language. We manually reviewed 217,866 of these examples, of which 175,022 (approximately 700 per sign) were the sign intended for the educational game. 39,304 examples were recognizable as a sign but were not the desired variant or were a different sign. We provide a training set of 31 signers, a validation set of eight signers, and a test set of eight signers. A baseline LSTM model for the 250-sign vocabulary achieves 82.1% accuracy (81.9% class-weighted F1 score) on the validation set and 84.2% (83.9% class-weighted F1 score) on the test set. Gameplay suggests that accuracy will be sufficient for creating educational games involving sign language recognition. | PopSign ASL v1.0: An Isolated American Sign Language Dataset Collected via Smartphones | [
"Thad Starner",
"Sean Forbes",
"Matthew So",
"David Martin",
"Rohit Sridhar",
"Gururaj Deshpande",
"Sam Sepah",
"Sahir Shahryar",
"Khushi Bhardwaj",
"Tyler Kwok",
"Daksh Sehgal",
"Saad Hassan",
"Bill Neubauer",
"Sofia Anandi Vempala",
"Alec Tan",
"Jocelyn Heath",
"Unnathi Utpal Kumar",
"Priyanka Vijayaraghavan Mosur",
"Tavenner M. Hall",
"Rajandeep Singh",
"Christopher Zhang Cui",
"Glenn Cameron",
"Sohier Dane",
"Garrett Tanzer"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=xzEtNSuDJk | @inproceedings{
liu2023libero,
title={{LIBERO}: Benchmarking Knowledge Transfer for Lifelong Robot Learning},
author={Bo Liu and Yifeng Zhu and Chongkai Gao and Yihao Feng and qiang liu and Yuke Zhu and Peter Stone},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=xzEtNSuDJk}
} | Lifelong learning offers a promising paradigm of building a generalist agent that learns and adapts over its lifespan.
Unlike traditional lifelong learning problems in image and text domains, which primarily involve the transfer of declarative knowledge of entities and concepts, lifelong learning in decision-making (LLDM) also necessitates the transfer of procedural knowledge, such as actions and behaviors. To advance research in LLDM, we introduce LIBERO, a novel benchmark of lifelong learning for robot manipulation. Specifically, LIBERO highlights five key research topics in LLDM: 1) how to efficiently transfer declarative knowledge, procedural knowledge, or the mixture of both; 2) how to design effective policy architectures and 3) effective algorithms for LLDM; 4) the robustness of a lifelong learner with respect to task ordering; and 5) the effect of model pretraining for LLDM. We develop an extendible procedural generation pipeline that can in principle generate infinitely many tasks. For benchmarking purpose, we create four task suites (130 tasks in total) that we use to investigate the above-mentioned research topics. To support sample-efficient learning, we provide high-quality human-teleoperated demonstration data for all tasks. Our extensive experiments present several insightful or even unexpected discoveries: sequential finetuning outperforms existing lifelong learning methods in forward transfer, no single visual encoder architecture excels at all types of knowledge transfer, and naive supervised pretraining can hinder agents' performance in the subsequent LLDM. | LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning | [
"Bo Liu",
"Yifeng Zhu",
"Chongkai Gao",
"Yihao Feng",
"qiang liu",
"Yuke Zhu",
"Peter Stone"
] | Track/Datasets_and_Benchmarks | poster | 2306.03310 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=xo6zDI8gvB | @inproceedings{
hu2023a,
title={A Multi-modal Global Instance Tracking Benchmark ({MGIT}): Better Locating Target in Complex Spatio-temporal and Causal Relationship},
author={Shiyu Hu and Dailing Zhang and Meiqi Wu and Xiaokun Feng and Xuchen Li and Xin Zhao and Kaiqi Huang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=xo6zDI8gvB}
} | Tracking an arbitrary moving target in a video sequence is the foundation for high-level tasks like video understanding. Although existing visual-based trackers have demonstrated good tracking capabilities in short video sequences, they always perform poorly in complex environments, as represented by the recently proposed global instance tracking task, which consists of longer videos with more complicated narrative content.
Recently, several works have introduced natural language into object tracking, desiring to address the limitations of relying only on a single visual modality. However, these selected videos are still short sequences with uncomplicated spatio-temporal and causal relationships, and the provided semantic descriptions are too simple to characterize video content.
To address these issues, we (1) first propose a new multi-modal global instance tracking benchmark named MGIT. It consists of 150 long video sequences with a total of 2.03 million frames, aiming to fully represent the complex spatio-temporal and causal relationships coupled in longer narrative content.
(2) Each video sequence is annotated with three semantic grains (i.e., action, activity, and story) to model the progressive process of human cognition. We expect this multi-granular annotation strategy can provide a favorable environment for multi-modal object tracking research and long video understanding.
(3) Besides, we execute comparative experiments on existing multi-modal object tracking benchmarks, which not only explore the impact of different annotation methods, but also validate that our annotation method is a feasible solution for coupling human understanding into semantic labels.
(4) Additionally, we conduct detailed experimental analyses on MGIT, and hope the explored performance bottlenecks of existing algorithms can support further research in multi-modal object tracking.
The proposed benchmark, experimental results, and toolkit will be released gradually on http://videocube.aitestunion.com/. | A Multi-modal Global Instance Tracking Benchmark (MGIT): Better Locating Target in Complex Spatio-temporal and Causal Relationship | [
"Shiyu Hu",
"Dailing Zhang",
"Meiqi Wu",
"Xiaokun Feng",
"Xuchen Li",
"Xin Zhao",
"Kaiqi Huang"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=xhbIud48JN | @inproceedings{
zhang2023situatedgen,
title={SituatedGen: Incorporating Geographical and Temporal Contexts into Generative Commonsense Reasoning},
author={Yunxiang Zhang and Xiaojun Wan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=xhbIud48JN}
} | Recently, commonsense reasoning in text generation has attracted much attention. Generative commonsense reasoning is the task that requires machines, given a group of keywords, to compose a single coherent sentence with commonsense plausibility. While existing datasets targeting generative commonsense reasoning focus on everyday scenarios, it is unclear how well machines reason under specific geographical and temporal contexts. We formalize this challenging task as SituatedGen, where machines with commonsense should generate a pair of contrastive sentences given a group of keywords including geographical or temporal entities. We introduce a corresponding English dataset consisting of 8,268 contrastive sentence pairs, which are built upon several existing commonsense reasoning benchmarks with minimal manual labor. Experiments show that state-of-the-art generative language models struggle to generate sentences with commonsense plausibility and still lag far behind human performance. Our dataset is publicly available at https://github.com/yunx-z/situated_gen. | SituatedGen: Incorporating Geographical and Temporal Contexts into Generative Commonsense Reasoning | [
"Yunxiang Zhang",
"Xiaojun Wan"
] | Track/Datasets_and_Benchmarks | poster | 2306.12552 | [
"https://github.com/shobrook/communities"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=xewwYquInO | @inproceedings{
weber2023wordscape,
title={WordScape: a Pipeline to extract multilingual, visually rich Documents with Layout Annotations from Web Crawl Data},
author={Maurice Weber and Carlo Siebenschuh and Rory Marshall Butler and Anton Alexandrov and Valdemar Ragnar Thanner and Georgios Tsolakis and Haris Jabbar and Ian Foster and Bo Li and Rick Stevens and Ce Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=xewwYquInO}
} | We introduce WordScape, a novel pipeline for the creation of cross-disciplinary, multilingual corpora comprising millions of pages with annotations for document layout detection. Relating visual and textual items on document pages has gained further significance with the advent of multimodal models. Various approaches proved effective for visual question answering or layout segmentation. However, the interplay of text, tables, and visuals remains challenging for a variety of document understanding tasks. In particular, many models fail to generalize well to diverse domains and new languages due to insufficient availability of training data. WordScape addresses these limitations. Our automatic annotation pipeline parses the Open XML structure of Word documents obtained from the web, jointly providing layout-annotated document images and their textual representations. In turn, WordScape offers unique properties as it (1) leverages the ubiquity of the Word file format on the internet, (2) is readily accessible through the Common Crawl web corpus, (3) is adaptive to domain-specific documents, and (4) offers culturally and linguistically diverse document pages with natural semantic structure and high-quality text. Together with the pipeline, we will additionally release 9.5M urls to word documents which can be processed using WordScape to create a dataset of over 40M pages. Finally, we investigate the quality of text and layout annotations extracted by WordScape, assess the impact on document understanding benchmarks, and demonstrate that manual labeling costs can be substantially reduced. | WordScape: a Pipeline to extract multilingual, visually rich Documents with Layout Annotations from Web Crawl Data | [
"Maurice Weber",
"Carlo Siebenschuh",
"Rory Marshall Butler",
"Anton Alexandrov",
"Valdemar Ragnar Thanner",
"Georgios Tsolakis",
"Haris Jabbar",
"Ian Foster",
"Bo Li",
"Rick Stevens",
"Ce Zhang"
] | Track/Datasets_and_Benchmarks | poster | 2312.10188 | [
"https://github.com/DS3Lab/WordScape"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=xT3i5GS3zU | @inproceedings{
li2023gslb,
title={{GSLB}: The Graph Structure Learning Benchmark},
author={Zhixun Li and Liang Wang and Xin Sun and Yifan Luo and Yanqiao Zhu and Dingshuo Chen and Yingtao Luo and Xiangxin Zhou and Qiang Liu and Shu Wu and Liang Wang and Jeffrey Xu Yu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=xT3i5GS3zU}
} | Graph Structure Learning (GSL) has recently garnered considerable attention due to its ability to optimize both the parameters of Graph Neural Networks (GNNs) and the computation graph structure simultaneously. Despite the proliferation of GSL methods developed in recent years, there is no standard experimental setting or fair comparison for performance evaluation, which creates a great obstacle to understanding the progress in this field. To fill this gap, we systematically analyze the performance of GSL in different scenarios and develop a comprehensive Graph Structure Learning Benchmark (GSLB) curated from 20 diverse graph datasets and 16 distinct GSL algorithms. Specifically, GSLB systematically investigates the characteristics of GSL in terms of three dimensions: effectiveness, robustness, and complexity. We comprehensively evaluate state-of-the-art GSL algorithms in node- and graph-level tasks, and analyze their performance in robust learning and model complexity. Further, to facilitate reproducible research, we have developed an easy-to-use library for training, evaluating, and visualizing different GSL methods. Empirical results of our extensive experiments demonstrate the ability of GSL and reveal its potential benefits on various downstream tasks, offering insights and opportunities for future research. The code of GSLB is available at: https://github.com/GSL-Benchmark/GSLB. | GSLB: The Graph Structure Learning Benchmark | [
"Zhixun Li",
"Liang Wang",
"Xin Sun",
"Yifan Luo",
"Yanqiao Zhu",
"Dingshuo Chen",
"Yingtao Luo",
"Xiangxin Zhou",
"Qiang Liu",
"Shu Wu",
"Liang Wang",
"Jeffrey Xu Yu"
] | Track/Datasets_and_Benchmarks | poster | 2310.05174 | [
"https://github.com/gsl-benchmark/gslb"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=xKYtTmtyI2 | @inproceedings{
narins2023validated,
title={Validated Image Caption Rating Dataset},
author={Lothar Narins and Andrew T Scott and Aakash Gautam and Anagha Kulkarni and Mar Castanon and Benjamin Kao and Shasta Ihorn and Yue-Ting Siu and James M Mason and Alexander Mario Blum and Ilmi Yoon},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=xKYtTmtyI2}
} | We present a new high-quality validated image caption rating (VICR) dataset. How well a caption fits an image can be difficult to assess due to the subjective nature of caption quality. How do we evaluate whether a caption is good? We generated a new dataset to help answer this question by using our new image caption rating system, which consists of a novel robust rating scale and gamified approach to gathering human ratings. We show that our approach is consistent and teachable. 113 participants were involved in generating the dataset, which is composed of 68,217 ratings among 15,646 image-caption pairs. Our new dataset has greater inter-rater agreement than the state of the art, and custom machine learning rating predictors that were trained on our dataset outperform previous metrics. We improve over Flickr8k-Expert in Kendall's $W$ by 12\% and in Fleiss' $\kappa$ by 19\%, and thus provide a new benchmark dataset for image caption rating. | Validated Image Caption Rating Dataset | [
"Lothar Narins",
"Andrew T Scott",
"Aakash Gautam",
"Anagha Kulkarni",
"Mar Castanon",
"Benjamin Kao",
"Shasta Ihorn",
"Yue-Ting Siu",
"James M Mason",
"Alexander Mario Blum",
"Ilmi Yoon"
] | Track/Datasets_and_Benchmarks | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=xJ7YWXQOrg | @inproceedings{
frieder2023mathematical,
title={Mathematical Capabilities of Chat{GPT}},
author={Simon Frieder and Luca Pinchetti and Alexis Chevalier and Ryan-Rhys Griffiths and Tommaso Salvatori and Thomas Lukasiewicz and Philipp Christian Petersen and Julius Berner},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=xJ7YWXQOrg}
} | We investigate the mathematical capabilities of two versions of ChatGPT (released 9-January-2023 and 30-January-2023) and of GPT-4 by testing them on publicly available datasets, as well as hand-crafted ones, using a novel evaluation scheme. In contrast to formal mathematics, where large databases of formal proofs are available (e.g., mathlib, the Lean Mathematical Library), current datasets of natural-language mathematics used to benchmark language models either cover only elementary mathematics or are very small. We address this by publicly releasing two new datasets: GHOSTS and miniGHOSTS. These are the first natural-language datasets curated by working researchers in mathematics that (1) aim to cover graduate-level mathematics, (2) provide a holistic overview of the mathematical capabilities of language models, and (3) distinguish multiple dimensions of mathematical reasoning. These datasets test, by using 1636 human expert evaluations, whether ChatGPT and GPT-4 can be helpful assistants to professional mathematicians by emulating use cases that arise in the daily professional activities of mathematicians. We benchmark the models on a range of fine-grained performance metrics. For advanced mathematics, this is the most detailed evaluation effort to date. We find that ChatGPT and GPT-4 can be used most successfully as mathematical assistants for querying facts, acting as mathematical search engines and knowledge base interfaces. GPT-4 can additionally be used for undergraduate-level mathematics but fails on graduate-level difficulty. Contrary to many positive reports in the media about GPT-4 and ChatGPT's exam-solving abilities (a potential case of selection bias), their overall mathematical performance is well below the level of a graduate student. Hence, if you aim to use ChatGPT to pass a graduate-level math exam, you would be better off copying from your average peer! | Mathematical Capabilities of ChatGPT | [
"Simon Frieder",
"Luca Pinchetti",
"Alexis Chevalier",
"Ryan-Rhys Griffiths",
"Tommaso Salvatori",
"Thomas Lukasiewicz",
"Philipp Christian Petersen",
"Julius Berner"
] | Track/Datasets_and_Benchmarks | poster | 2301.13867 | [
"https://github.com/xyfrieder/science-ghosts"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=wiw5mnja8W | @inproceedings{
berrevoets2023allsim,
title={AllSim: Simulating and Benchmarking Resource Allocation Policies in Multi-User Systems},
author={Jeroen Berrevoets and Daniel Jarrett and Alex James Chan and Mihaela van der Schaar},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=wiw5mnja8W}
} | Numerous real-world systems, ranging from healthcare to energy grids, involve users competing for finite and potentially scarce resources. Designing policies for resource allocation in such real-world systems is challenging for many reasons, including the changing nature of user types and their (possibly urgent) need for resources. Researchers have developed numerous machine learning solutions for determining resource allocation policies in these challenging settings. However, a key limitation has been the absence of good methods and test-beds for benchmarking these policies; almost all resource allocation policies are benchmarked in environments which are either completely synthetic or do not allow _any_ deviation from historical data. In this paper we introduce AllSim, which is a benchmarking environment for realistically simulating the impact and utility of policies for resource allocation in systems in which users compete for such scarce resources. Building such a benchmarking environment is challenging because it needs to successfully take into account _the entire collective_ of potential users and the impact a resource allocation policy has on all the other users in the system. AllSim's benchmarking environment is modular (each component being parameterized individually), learnable (informed by historical data), and customizable (adaptable to changing conditions). These, when interacting with an allocation policy, produce a dataset of simulated outcomes for evaluation and comparison of such policies. We believe AllSim is an essential step towards a more systematic evaluation of policies for scarce resource allocation compared to current approaches for benchmarking such methods. | AllSim: Simulating and Benchmarking Resource Allocation Policies in Multi-User Systems | [
"Jeroen Berrevoets",
"Daniel Jarrett",
"Alex James Chan",
"Mihaela van der Schaar"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=wgDcbBMSfh | @inproceedings{
ding2023crosscodeeval,
title={CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion},
author={Yangruibo Ding and Zijian Wang and Wasi Uddin Ahmad and Hantian Ding and Ming Tan and Nihal Jain and Murali Krishna Ramanathan and Ramesh Nallapati and Parminder Bhatia and Dan Roth and Bing Xiang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=wgDcbBMSfh}
} | Code completion models have made significant progress in recent years, yet current popular evaluation datasets, such as HumanEval and MBPP, predominantly focus on code completion tasks within a single file. This over-simplified setting falls short of representing the real-world software development scenario where repositories span multiple files with numerous cross-file dependencies, and accessing and understanding cross-file context is often required to complete the code correctly.
To fill in this gap, we propose CrossCodeEval, a diverse and multilingual code completion benchmark that necessitates an in-depth cross-file contextual understanding to complete the code accurately. CrossCodeEval is built on a diverse set of real-world, open-sourced, permissively-licensed repositories in four popular programming languages: Python, Java, TypeScript, and C#. To create examples that strictly require cross-file context for accurate completion, we propose a straightforward yet efficient static-analysis-based approach to pinpoint the use of cross-file context within the current file.
Extensive experiments on state-of-the-art code language models like CodeGen and StarCoder demonstrate that CrossCodeEval is extremely challenging when the relevant cross-file context is absent, and we see clear improvements when adding these context into the prompt. However, despite such improvements, the pinnacle of performance remains notably unattained even with the highest-performing model, indicating that CrossCodeEval is also capable of assessing model's capability in leveraging extensive context to make better code completion. Finally, we benchmarked various methods in retrieving cross-file context, and show that CrossCodeEval can also be used to measure the capability of code retrievers. | CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion | [
"Yangruibo Ding",
"Zijian Wang",
"Wasi Uddin Ahmad",
"Hantian Ding",
"Ming Tan",
"Nihal Jain",
"Murali Krishna Ramanathan",
"Ramesh Nallapati",
"Parminder Bhatia",
"Dan Roth",
"Bing Xiang"
] | Track/Datasets_and_Benchmarks | poster | 2310.11248 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=weHBzTLXpH | @inproceedings{
huang2023ticompbench,
title={T2I-CompBench: A Comprehensive Benchmark for Open-world Compositional Text-to-image Generation},
author={Kaiyi Huang and Kaiyue Sun and Enze Xie and Zhenguo Li and Xihui Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=weHBzTLXpH}
} | Despite the stunning ability to generate high-quality images by recent text-to-image models, current approaches often struggle to effectively compose objects with different attributes and relationships into a complex and coherent scene. We propose T2I-CompBench, a comprehensive benchmark for open-world compositional text-to-image generation, consisting of 6,000 compositional text prompts from 3 categories (attribute binding, object relationships, and complex compositions) and 6 sub-categories (color binding, shape binding, texture binding, spatial relationships, non-spatial relationships, and complex compositions). We further propose several evaluation metrics specifically designed to evaluate compositional text-to-image generation and explore the potential and limitations of multimodal LLMs for evaluation. We introduce a new approach, Generative mOdel finetuning with Reward-driven Sample selection (GORS), to boost the compositional text-to-image generation abilities of pretrained text-to-image models. Extensive experiments and evaluations are conducted to benchmark previous methods on T2I-CompBench, and to validate the effectiveness of our proposed evaluation metrics and GORS approach. Project page is available at https://karine-h.github.io/T2I-CompBench/. | T2I-CompBench: A Comprehensive Benchmark for Open-world Compositional Text-to-image Generation | [
"Kaiyi Huang",
"Kaiyue Sun",
"Enze Xie",
"Zhenguo Li",
"Xihui Liu"
] | Track/Datasets_and_Benchmarks | poster | 2307.06350 | [
""
] | https://huggingface.co/papers/2307.06350 | 4 | 6 | 0 | 5 | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=w4zZNC4ZaV | @inproceedings{
wang2023how,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=w4zZNC4ZaV}
} | In this work we explore recent advances in instruction-tuning language models on a range of open instruction-following datasets. Despite recent claims that open models can be on par with state-of-the-art proprietary models, these claims are often accompanied by limited evaluation, making it difficult to compare models across the board and determine the utility of various resources. We provide a large set of instruction-tuned models from 6.7B to 65B parameters in size, trained on 12 instruction datasets ranging from manually curated (e.g., OpenAssistant) to synthetic and distilled (e.g., Alpaca) and systematically evaluate them on their factual knowledge, reasoning, multilinguality, coding, safety, and open-ended instruction following abilities through a collection of automatic, model-based, and human-based metrics. We further introduce Tülu, our best performing instruction-tuned model suite finetuned on a combination of high-quality open resources.
Our experiments show that different instruction-tuning datasets can uncover or enhance specific skills, while no single dataset (or combination) provides the best performance across all evaluations. Interestingly, we find that model and human preference-based evaluations fail to reflect differences in model capabilities exposed by benchmark-based evaluations, suggesting the need for the type of systemic evaluation performed in this work. Our evaluations show that the best model in any given evaluation reaches on average 87% of ChatGPT performance, and 73% of GPT-4 performance, suggesting that further investment in building better base models and instruction-tuning data is required to close the gap. We release our instruction-tuned models, including a fully finetuned 65B Tülu, along with our code, data, and evaluation framework to facilitate future research. | How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources | [
"Yizhong Wang",
"Hamish Ivison",
"Pradeep Dasigi",
"Jack Hessel",
"Tushar Khot",
"Khyathi Chandu",
"David Wadden",
"Kelsey MacMillan",
"Noah A. Smith",
"Iz Beltagy",
"Hannaneh Hajishirzi"
] | Track/Datasets_and_Benchmarks | oral | 2306.04751 | [
"https://github.com/allenai/open-instruct"
] | https://huggingface.co/papers/2306.04751 | 8 | 5 | 0 | 11 | [
"fireballoon/baichuan-vicuna-7b",
"allenai/tulu-65b",
"allenai/tulu-30b",
"TheBloke/baichuan-vicuna-7B-GGML",
"TheBloke/tulu-30B-GPTQ",
"allenai/tulu-7b",
"allenai/tulu-13b",
"TheBloke/tulu-30B-GGML",
"TheBloke/tulu-7B-GPTQ",
"allenai/open-instruct-stanford-alpaca-7b",
"TheBloke/tulu-13B-GPTQ",
"TheBloke/tulu-13B-GGML",
"TheBloke/tulu-30B-fp16",
"TheBloke/tulu-7B-GGML",
"TheBloke/baichuan-vicuna-7B-GPTQ",
"allenai/open-instruct-human-mix-65b",
"TheBloke/tulu-7B-fp16",
"allenai/open-instruct-pythia-6.9b-tulu",
"TheBloke/Tulu-30B-SuperHOT-8K-fp16",
"TheBloke/Tulu-13B-SuperHOT-8K-GGML",
"allenai/open-instruct-sharegpt-65b",
"TheBloke/tulu-13B-fp16",
"allenai/open-instruct-opt-6.7b-tulu",
"TheBloke/Tulu-13B-SuperHOT-8K-fp16",
"TheBloke/Tulu-7B-SuperHOT-8K-fp16",
"allenai/open-instruct-code-alpaca-7b",
"allenai/open-instruct-gpt4-alpaca-7b",
"allenai/open-instruct-unnatural-instructions-7b",
"allenai/open-instruct-human-mix-30b",
"allenai/open-instruct-stanford-alpaca-13b",
"allenai/open-instruct-flan-v2-7b",
"allenai/open-instruct-unnatural-instructions-13b",
"TheBloke/Tulu-13B-SuperHOT-8K-GPTQ",
"TheBloke/Tulu-7B-SuperHOT-8K-GGML",
"TheBloke/Tulu-7B-SuperHOT-8K-GPTQ",
"TheBloke/open-instruct-human-mix-65B-GGUF",
"TheBloke/tulu-7B-GGUF",
"allenai/open-instruct-dolly-7b",
"allenai/open-instruct-human-mix-7b",
"allenai/open-instruct-sni-7b",
"allenai/open-instruct-cot-7b",
"allenai/open-instruct-sharegpt-7b",
"allenai/open-instruct-oasst1-7b",
"allenai/open-instruct-baize-7b",
"allenai/open-instruct-self-instruct-7b",
"allenai/open-instruct-oasst1-13b",
"allenai/open-instruct-baize-13b",
"allenai/open-instruct-self-instruct-13b",
"allenai/open-instruct-code-alpaca-13b",
"allenai/open-instruct-cot-13b",
"allenai/open-instruct-dolly-13b",
"allenai/open-instruct-gpt4-alpaca-13b",
"allenai/open-instruct-sni-13b",
"allenai/open-instruct-flan-v2-13b",
"allenai/open-instruct-human-mix-13b",
"allenai/open-instruct-sharegpt-13b",
"allenai/open-instruct-sharegpt-30b",
"TheBloke/open-instruct-human-mix-65B-GPTQ",
"TheBloke/open-instruct-human-mix-65B-AWQ",
"TheBloke/open-instruct-human-mix-65B-fp16",
"TheBloke/tulu-13B-GGUF",
"TheBloke/tulu-13B-AWQ",
"TheBloke/tulu-30B-AWQ",
"TheBloke/tulu-30B-GGUF",
"TheBloke/tulu-7B-AWQ"
] | [
"xzuyn/tulu-uncensored-alpaca",
"xzuyn/tulu-uncensored",
"allenai/tulu-v1-sft-mixture"
] | [
"open-llm-leaderboard/open_llm_leaderboard",
"Intel/low_bit_open_llm_leaderboard",
"Sharathhebbar24/One-stop-for-Open-source-models",
"BAAI/open_cn_llm_leaderboard",
"gsaivinay/open_llm_leaderboard",
"meval/multilingual-chatbot-arena-leaderboard",
"GTBench/GTBench",
"felixz/open_llm_leaderboard",
"OPTML-Group/UnlearnCanvas-Benchmark",
"Vikhrmodels/small-shlepa-lb",
"li-qing/FIRE",
"b1sheng/kg_llm_leaderboard_test",
"neubla/neubla-llm-evaluation-board",
"rodrigomasini/data_only_open_llm_leaderboard",
"Docfile/open_llm_leaderboard",
"tianleliphoebe/visual-arena",
"Ashmal/MobiLlama",
"Guinnessgshep/TheBloke-Tulu-30B-SuperHOT-8K-fp16",
"smothiki/open_llm_leaderboard",
"pngwn/open_llm_leaderboard",
"pngwn/open_llm_leaderboard_two",
"choco9966/LeaderboardTest",
"0x1668/open_llm_leaderboard",
"pngwn/open_llm_leaderboard-check",
"asir0z/open_llm_leaderboard",
"kbmlcoding/open_llm_leaderboard_free",
"choco9966/open-ko-llm-leaderboard",
"aichampions/open_llm_leaderboard",
"Adeco/open_llm_leaderboard",
"anirudh937/open_llm_leaderboard",
"smothiki/open_llm_leaderboard2",
"K00B404/One-stop-till-you-drop",
"alexshengzhili/calahealthgpt",
"dbasu/multilingual-chatbot-arena-leaderboard",
"Bofeee5675/FIRE",
"evelyn-lo/evelyn",
"yuantao-infini-ai/demo_test",
"zjasper666/bf16_vs_fp8"
] | 1 |
null | https://openreview.net/forum?id=vv3cocNsEK | @inproceedings{
afouras2023htstep,
title={{HT}-Step: Aligning Instructional Articles with How-To Videos},
author={Triantafyllos Afouras and Effrosyni Mavroudi and Tushar Nagarajan and Huiyu Wang and Lorenzo Torresani},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=vv3cocNsEK}
} | We introduce HT-Step, a large-scale dataset containing temporal annotations of instructional article steps in cooking videos. It includes 122k segment-level annotations over 20k narrated videos (approximately 2.3k hours) of the HowTo100M dataset.
Each annotation provides a temporal interval, and a categorical step label from a taxonomy of 4,958 unique steps automatically mined from wikiHow articles which include rich descriptions of each step.
Our dataset significantly surpasses existing labeled step datasets in terms of scale, number of tasks, and richness of natural language step descriptions. Based on these annotations, we introduce a strongly supervised benchmark for aligning instructional articles with how-to videos and present a comprehensive evaluation of baseline methods for this task.
By publicly releasing these annotations and defining rigorous evaluation protocols and metrics,
we hope to significantly accelerate research in the field of procedural activity understanding. | HT-Step: Aligning Instructional Articles with How-To Videos | [
"Triantafyllos Afouras",
"Effrosyni Mavroudi",
"Tushar Nagarajan",
"Huiyu Wang",
"Lorenzo Torresani"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
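The HT-Step record above aligns instructional article steps with temporal segments in videos. To make the evaluation concrete, here is a minimal sketch of temporal intersection-over-union and a recall-style score over step segments; the `(start_sec, end_sec)` tuple format and the toy labels are illustrative assumptions, not the dataset's released schema or official metric.

```python
def temporal_iou(pred, gt):
    """Intersection-over-union of two temporal intervals given as (start, end) in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0


def recall_at_iou(predictions, ground_truth, threshold=0.5):
    """Fraction of annotated step segments matched by some prediction with IoU >= threshold.

    `predictions` and `ground_truth` map a step label to a list of (start, end) intervals.
    """
    hits, total = 0, 0
    for step, gt_segments in ground_truth.items():
        pred_segments = predictions.get(step, [])
        for gt_seg in gt_segments:
            total += 1
            if any(temporal_iou(p, gt_seg) >= threshold for p in pred_segments):
                hits += 1
    return hits / total if total else 0.0


if __name__ == "__main__":
    gt = {"add the pasta": [(12.0, 25.0)], "drain the water": [(80.0, 95.0)]}
    pred = {"add the pasta": [(10.0, 24.0)], "drain the water": [(60.0, 70.0)]}
    print(recall_at_iou(pred, gt, threshold=0.5))  # 0.5: one of two segments matched
```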
null | https://openreview.net/forum?id=viktK3nO5b | @inproceedings{
si2023spokenwoz,
title={Spoken{WOZ}: A Large-Scale Speech-Text Benchmark for Spoken Task-Oriented Dialogue Agents},
author={Shuzheng Si and Wentao Ma and Haoyu Gao and Yuchuan Wu and Ting-En Lin and Yinpei Dai and Hangyu Li and Rui Yan and Fei Huang and Yongbin Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=viktK3nO5b}
} | Task-oriented dialogue (TOD) models have made significant progress in recent years. However, previous studies primarily focus on datasets written by annotators, which has resulted in a gap between academic research and real-world spoken conversation scenarios. While several small-scale spoken TOD datasets are proposed to address robustness issues such as ASR errors, they ignore the unique challenges in spoken conversation. To tackle the limitations, we introduce SpokenWOZ, a large-scale speech-text dataset for spoken TOD, containing 8 domains, 203k turns, 5.7k dialogues and 249 hours of audio from human-to-human spoken conversations. SpokenWOZ further incorporates common spoken characteristics such as word-by-word processing and reasoning in spoken language. Based on these characteristics, we present cross-turn slot and reasoning slot detection as new challenges. We conduct experiments on various baselines, including text-modal models, newly proposed dual-modal models, and LLMs, e.g., ChatGPT. The results show that the current models still have substantial room for improvement in spoken conversation, where the most advanced dialogue state tracker only achieves 25.65% in joint goal accuracy and the SOTA end-to-end model only correctly completes the user request in 52.1% of dialogues. Our dataset, code, and leaderboard are available at https://spokenwoz.github.io/SpokenWOZ-github.io/. | SpokenWOZ: A Large-Scale Speech-Text Benchmark for Spoken Task-Oriented Dialogue Agents | [
"Shuzheng Si",
"Wentao Ma",
"Haoyu Gao",
"Yuchuan Wu",
"Ting-En Lin",
"Yinpei Dai",
"Hangyu Li",
"Rui Yan",
"Fei Huang",
"Yongbin Li"
] | Track/Datasets_and_Benchmarks | poster | 2305.13040 | [
""
] | https://huggingface.co/papers/2305.13040 | 0 | 1 | 0 | 10 | [] | [] | [] | 1 |
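The SpokenWOZ record reports dialogue state tracking in terms of joint goal accuracy (the 25.65% figure in the abstract). As a hedged illustration of that metric, the sketch below computes joint goal accuracy over per-turn belief states represented as `{(domain, slot): value}` dictionaries; this representation is an assumption for illustration, not the benchmark's official evaluation script.

```python
def joint_goal_accuracy(predicted_states, gold_states):
    """Joint goal accuracy: a turn counts as correct only if every (domain, slot) -> value
    pair in the predicted belief state exactly matches the gold belief state."""
    assert len(predicted_states) == len(gold_states)
    correct = sum(1 for pred, gold in zip(predicted_states, gold_states) if pred == gold)
    return correct / len(gold_states) if gold_states else 0.0


if __name__ == "__main__":
    gold = [
        {("hotel", "area"): "north", ("hotel", "stars"): "4"},
        {("hotel", "area"): "north", ("hotel", "stars"): "4", ("taxi", "leaveat"): "18:00"},
    ]
    pred = [
        {("hotel", "area"): "north", ("hotel", "stars"): "4"},
        {("hotel", "area"): "north", ("hotel", "stars"): "5", ("taxi", "leaveat"): "18:00"},
    ]
    print(joint_goal_accuracy(pred, gold))  # 0.5: only the first turn matches exactly
```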
null | https://openreview.net/forum?id=vfzXDRTcF4 | @inproceedings{
sun2023journeydb,
title={Journey{DB}: A Benchmark for Generative Image Understanding},
author={Keqiang Sun and Junting Pan and Yuying Ge and Hao Li and Haodong Duan and Xiaoshi Wu and Renrui Zhang and Aojun Zhou and Zipeng Qin and Yi Wang and Jifeng Dai and Yu Qiao and Limin Wang and Hongsheng Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=vfzXDRTcF4}
} | While recent advancements in vision-language models have had a transformative impact on multi-modal comprehension, the extent to which these models possess the ability to comprehend generated images remains uncertain. Synthetic images, in comparison to real data, encompass a higher level of diversity in terms of both content and style, thereby presenting significant challenges for the models to fully grasp. In light of this challenge, we introduce a comprehensive dataset, referred to as JourneyDB, that caters to the domain of generative images within the context of multi-modal visual understanding. Our meticulously curated dataset comprises 4 million distinct and high-quality generated images, each paired with the corresponding text prompts that were employed in their creation. Furthermore, we additionally introduce an external subset with results of another 22 text-to-image generative models, which makes JourneyDB a comprehensive benchmark for evaluating the comprehension of generated images. On our dataset, we have devised four benchmarks to assess the performance of generated image comprehension in relation to both content and style interpretation. These benchmarks encompass prompt inversion, style retrieval, image captioning, and visual question answering. Lastly, we evaluate the performance of state-of-the-art multi-modal models when applied to the JourneyDB dataset, providing a comprehensive analysis of their strengths and limitations in comprehending generated content. We anticipate that the proposed dataset and benchmarks will facilitate further research in the field of generative content understanding. The dataset is publicly available at https://journeydb.github.io. | JourneyDB: A Benchmark for Generative Image Understanding | null | Track/Datasets_and_Benchmarks | poster | 2307.00716 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=vZf7jrX1el | @inproceedings{
xu2023datadriven,
title={Data-Driven Network Neuroscience: On Data Collection and Benchmark},
author={Jiaxing Xu and Yunhan Yang and David Tse Jung Huang and Sophi Shilpa Gururajapathy and Yiping Ke and Miao Qiao and Alan Wang and Haribalan Kumar and Josh McGeown and Eryn Kwon},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=vZf7jrX1el}
} | This paper presents a comprehensive and quality collection of functional human brain network data for potential research in the intersection of neuroscience, machine learning, and graph analytics.
Anatomical and functional MRI images have been used to understand the functional connectivity of the human brain and are particularly important in identifying underlying neurodegenerative conditions such as Alzheimer's, Parkinson's, and Autism. Recently, the study of the brain in the form of brain networks using machine learning and graph analytics has become increasingly popular, especially to predict the early onset of these conditions. A brain network, represented as a graph, retains rich structural and positional information that traditional examination methods are unable to capture. However, the lack of publicly accessible brain network data prevents researchers from data-driven explorations. One of the main difficulties lies in the complicated domain-specific preprocessing steps and the exhaustive computation required to convert the data from MRI images into brain networks. We bridge this gap by collecting a large amount of MRI images from public databases and a private source, working with domain experts to make sensible design choices, and preprocessing the MRI images to produce a collection of brain network datasets. The datasets originate from 6 different sources, cover 4 brain conditions, and consist of a total of 2,702 subjects.
We test our graph datasets on 12 machine learning models to provide baselines and validate the data quality on a recent graph analysis model. To lower the barrier to entry and promote the research in this interdisciplinary field, we release our brain network data and complete preprocessing details including codes at https://doi.org/10.17608/k6.auckland.21397377 and https://github.com/brainnetuoa/data_driven_network_neuroscience. | Data-Driven Network Neuroscience: On Data Collection and Benchmark | [
"Jiaxing Xu",
"Yunhan Yang",
"David Tse Jung Huang",
"Sophi Shilpa Gururajapathy",
"Yiping Ke",
"Miao Qiao",
"Alan Wang",
"Haribalan Kumar",
"Josh McGeown",
"Eryn Kwon"
] | Track/Datasets_and_Benchmarks | poster | 2211.12421 | [
"https://github.com/brainnetuoa/data_driven_network_neuroscience"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
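The network-neuroscience record above describes converting preprocessed fMRI data into brain networks. A common recipe is to correlate regional time series and keep only the strongest connections; the sketch below shows that thresholding step with NumPy and NetworkX. The region count, random time series, and threshold are illustrative assumptions, not the dataset's actual preprocessing parameters.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Stand-in for regional BOLD time series: (n_regions, n_timepoints).
n_regions, n_timepoints = 90, 200
timeseries = rng.standard_normal((n_regions, n_timepoints))

# Functional connectivity as the Pearson correlation between every pair of regions.
fc = np.corrcoef(timeseries)
np.fill_diagonal(fc, 0.0)  # ignore self-connections

# Keep only edges whose absolute correlation exceeds an (illustrative) threshold.
threshold = 0.15
adjacency = np.where(np.abs(fc) >= threshold, fc, 0.0)

# Build a weighted, undirected brain graph for downstream graph ML / analytics.
graph = nx.from_numpy_array(adjacency)
print(graph.number_of_nodes(), graph.number_of_edges())
```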
null | https://openreview.net/forum?id=vZ9tA3o3hr | @inproceedings{
yeh2023sustaingym,
title={SustainGym: Reinforcement Learning Environments for Sustainable Energy Systems},
author={Christopher Yeh and Victor Li and Rajeev Datta and Julio Arroyo and Nicolas Christianson and Chi Zhang and Yize Chen and Mohammad Mehdi Hosseini and Azarang Golmohammadi and Yuanyuan Shi and Yisong Yue and Adam Wierman},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=vZ9tA3o3hr}
} | The lack of standardized benchmarks for reinforcement learning (RL) in sustainability applications has made it difficult to both track progress on specific domains and identify bottlenecks for researchers to focus their efforts. In this paper, we present SustainGym, a suite of five environments designed to test the performance of RL algorithms on realistic sustainable energy system tasks, ranging from electric vehicle charging to carbon-aware data center job scheduling. The environments test RL algorithms under realistic distribution shifts as well as in multi-agent settings. We show that standard off-the-shelf RL algorithms leave significant room for improving performance and highlight the challenges ahead for introducing RL to real-world sustainability tasks. | SustainGym: Reinforcement Learning Environments for Sustainable Energy Systems | [
"Christopher Yeh",
"Victor Li",
"Rajeev Datta",
"Julio Arroyo",
"Nicolas Christianson",
"Chi Zhang",
"Yize Chen",
"Mohammad Mehdi Hosseini",
"Azarang Golmohammadi",
"Yuanyuan Shi",
"Yisong Yue",
"Adam Wierman"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
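SustainGym exposes sustainability tasks as reinforcement-learning environments. The sketch below shows the generic Gymnasium-style rollout loop an off-the-shelf agent would use; `make_env` is a hypothetical placeholder standing in for constructing one of the suite's environments, since the exact constructors are not given in the record above, and a standard Gymnasium task is used so the example runs.

```python
import gymnasium as gym


def make_env():
    # Hypothetical placeholder: in practice this would construct a SustainGym
    # environment (e.g. an EV-charging or data-center scheduling task). Any
    # Gymnasium-compatible environment works for demonstrating the loop.
    return gym.make("CartPole-v1")


def random_rollout(env, n_episodes=3):
    """Roll out a random policy and report the undiscounted return per episode."""
    returns = []
    for _ in range(n_episodes):
        obs, info = env.reset()
        done, episode_return = False, 0.0
        while not done:
            action = env.action_space.sample()  # replace with a trained policy
            obs, reward, terminated, truncated, info = env.step(action)
            episode_return += reward
            done = terminated or truncated
        returns.append(episode_return)
    return returns


if __name__ == "__main__":
    print(random_rollout(make_env()))
```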
null | https://openreview.net/forum?id=vTrRq6vCQH | @inproceedings{
xie2023pixiu,
title={{PIXIU}: A Comprehensive Benchmark, Instruction Dataset and Large Language Model for Finance},
author={Qianqian Xie and Weiguang Han and Xiao Zhang and Yanzhao Lai and Min Peng and Alejandro Lopez-Lira and Jimin Huang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=vTrRq6vCQH}
} | Although large language models (LLMs) have shown great performance in natural language processing (NLP) in the financial domain, there are no publicly available financially tailored LLMs, instruction tuning datasets, and evaluation benchmarks, which is critical for continually pushing forward the open-source development of financial artificial intelligence (AI). This paper introduces PIXIU, a comprehensive framework including the first financial LLM based on fine-tuning LLaMA with instruction data, the first instruction data with 128K data samples to support the fine-tuning, and an evaluation benchmark with 8 tasks and 15 datasets. We first construct the large-scale multi-task instruction data considering a variety of financial tasks, financial document types, and financial data modalities. We then propose a financial LLM called FinMA by fine-tuning LLaMA with the constructed dataset to be able to follow instructions for various financial tasks. To support the evaluation of financial LLMs, we propose a standardized benchmark that covers a set of critical financial tasks, including six financial NLP tasks and two financial prediction tasks. With this benchmark, we conduct a detailed analysis of FinMA and several existing LLMs, uncovering their strengths and weaknesses in handling critical financial tasks. The model, datasets, benchmark, and experimental results are open-sourced to facilitate future research in financial AI. | PIXIU: A Comprehensive Benchmark, Instruction Dataset and Large Language Model for Finance | [
"Qianqian Xie",
"Weiguang Han",
"Xiao Zhang",
"Yanzhao Lai",
"Min Peng",
"Alejandro Lopez-Lira",
"Jimin Huang"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=v4PMCdSaAT | @inproceedings{
cherepanova2023a,
title={A Performance-Driven Benchmark for Feature Selection in Tabular Deep Learning},
author={Valeriia Cherepanova and Roman Levin and Gowthami Somepalli and Jonas Geiping and C. Bayan Bruss and Andrew Gordon Wilson and Tom Goldstein and Micah Goldblum},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=v4PMCdSaAT}
} | Academic tabular benchmarks often contain small sets of curated features. In contrast, data scientists typically collect as many features as possible into their datasets, and even engineer new features from existing ones. To prevent over-fitting in subsequent downstream modeling, practitioners commonly use automated feature selection methods that identify a reduced subset of informative features. Existing benchmarks for tabular feature selection consider classical downstream models, toy synthetic datasets, or do not evaluate feature selectors on the basis of downstream performance. We construct a challenging feature selection benchmark evaluated on downstream neural networks including transformers, using real datasets and multiple methods for generating extraneous features. We also propose Deep Lasso -- an input-gradient-based analogue of LASSO for neural networks that outperforms classical feature selection methods on challenging problems such as selecting from corrupted or second-order features. | A Performance-Driven Benchmark for Feature Selection in Tabular Deep Learning | [
"Valeriia Cherepanova",
"Roman Levin",
"Gowthami Somepalli",
"Jonas Geiping",
"C. Bayan Bruss",
"Andrew Gordon Wilson",
"Tom Goldstein",
"Micah Goldblum"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
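The feature-selection record above describes Deep Lasso as an input-gradient-based analogue of LASSO. The PyTorch sketch below illustrates the general idea under my reading of that one-line description: penalize, per input feature, the norm of the loss gradient with respect to that feature across a batch, so that uninformative features are pushed toward zero influence. This is a sketch of the idea only, not the authors' exact objective, architecture, or hyperparameters.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy tabular batch: 64 rows, 20 features; only the first 5 features carry signal.
X = torch.randn(64, 20)
y = X[:, :5].sum(dim=1, keepdim=True) + 0.1 * torch.randn(64, 1)

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
mse = nn.MSELoss()
penalty_weight = 0.1

for step in range(200):
    optimizer.zero_grad()
    inputs = X.clone().requires_grad_(True)
    loss = mse(model(inputs), y)

    # Gradient of the loss w.r.t. the inputs, kept in the graph so the penalty
    # is itself differentiable w.r.t. the model parameters.
    (input_grad,) = torch.autograd.grad(loss, inputs, create_graph=True)

    # Group-lasso-style penalty: L2 norm over the batch per feature column,
    # summed across columns so whole features are driven toward zero influence.
    feature_norms = input_grad.norm(dim=0)
    (loss + penalty_weight * feature_norms.sum()).backward()
    optimizer.step()

# Features the trained network still relies on have larger input-gradient norms.
inputs = X.clone().requires_grad_(True)
importance = torch.autograd.grad(mse(model(inputs), y), inputs)[0].norm(dim=0)
print(importance)
```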
null | https://openreview.net/forum?id=ugRnHKMK95 | @inproceedings{
chung2023turbulence,
title={Turbulence in Focus: Benchmarking Scaling Behavior of 3D Volumetric Super-Resolution with {BLASTN}et 2.0 Data},
author={Wai Tong Chung and Bassem Akoush and Pushan Sharma and Alex Tamkin and Ki Sung Jung and Jacqueline Chen and Jack Guo and Davy Brouzet and Mohsen Talei and Bruno Savard and Alexei Y Poludnenko and Matthias Ihme},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=ugRnHKMK95}
} | Analysis of compressible turbulent flows is essential for applications related to propulsion, energy generation, and the environment.
Here, we present BLASTNet 2.0, a 2.2 TB network-of-datasets containing 744 full-domain samples from 34 high-fidelity direct numerical simulations, which addresses the current limited availability of 3D high-fidelity reacting and non-reacting compressible turbulent flow simulation data. With this data, we benchmark a total of 49 variations of five deep learning approaches for 3D super-resolution - which can be applied for improving scientific imaging, simulations, turbulence models, as well as in computer vision applications. We perform neural scaling analysis on these models to examine the performance of different machine learning (ML) approaches, including two scientific ML techniques. We demonstrate that (i) predictive performance can scale with model size and cost, (ii) architecture matters significantly, especially for smaller models, and (iii) the benefits of physics-based losses can persist with increasing model size. The outcomes of this benchmark study are anticipated to offer insights that can aid the design of 3D super-resolution models, especially for turbulence models, while this data is expected to foster ML methods for a broad range of flow physics applications. This data is publicly available with download links and browsing tools consolidated at https://blastnet.github.io. | Turbulence in Focus: Benchmarking Scaling Behavior of 3D Volumetric Super-Resolution with BLASTNet 2.0 Data | [
"Wai Tong Chung",
"Bassem Akoush",
"Pushan Sharma",
"Alex Tamkin",
"Ki Sung Jung",
"Jacqueline Chen",
"Jack Guo",
"Davy Brouzet",
"Mohsen Talei",
"Bruno Savard",
"Alexei Y Poludnenko",
"Matthias Ihme"
] | Track/Datasets_and_Benchmarks | poster | 2309.13457 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
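The BLASTNet 2.0 record reports neural scaling analysis, i.e. how prediction error falls with model size and cost. A common way to summarize such sweeps is to fit a power law, error ≈ a · N^(−b), by linear regression in log-log space; the sketch below does exactly that on made-up (parameter count, error) pairs, which are assumptions for illustration rather than numbers from the benchmark.

```python
import numpy as np

# Hypothetical sweep: (trainable parameters, validation error) for one model family.
params = np.array([1e5, 3e5, 1e6, 3e6, 1e7, 3e7])
errors = np.array([0.42, 0.31, 0.22, 0.17, 0.12, 0.095])

# Fit error ~= a * params**(-b) by least squares on log-transformed data:
# log(error) = log(a) - b * log(params).
slope, intercept = np.polyfit(np.log(params), np.log(errors), deg=1)
a, b = np.exp(intercept), -slope
print(f"fitted power law: error ~ {a:.3g} * N^(-{b:.3f})")

# Extrapolate (cautiously) to a larger model size.
n_big = 1e8
print(f"predicted error at N={n_big:.0e}: {a * n_big ** (-b):.4f}")
```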
null | https://openreview.net/forum?id=uccHPGDlao | @inproceedings{
zheng2023judging,
title={Judging {LLM}-as-a-Judge with {MT}-Bench and Chatbot Arena},
author={Lianmin Zheng and Wei-Lin Chiang and Ying Sheng and Siyuan Zhuang and Zhanghao Wu and Yonghao Zhuang and Zi Lin and Zhuohan Li and Dacheng Li and Eric Xing and Hao Zhang and Joseph E. Gonzalez and Ion Stoica},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=uccHPGDlao}
} | Evaluating large language model (LLM) based chat assistants is challenging due to their broad capabilities and the inadequacy of existing benchmarks in measuring human preferences.
To address this, we explore using strong LLMs as judges to evaluate these models on more open-ended questions.
We examine the usage and limitations of LLM-as-a-judge, including position, verbosity, and self-enhancement biases, as well as limited reasoning ability, and propose solutions to mitigate some of them.
We then verify the agreement between LLM judges and human preferences by introducing two benchmarks: MT-bench, a multi-turn question set; and Chatbot Arena, a crowdsourced battle platform.
Our results reveal that strong LLM judges like GPT-4 can match both controlled and crowdsourced human preferences well, achieving over 80\% agreement, the same level of agreement between humans.
Hence, LLM-as-a-judge is a scalable and explainable way to approximate human preferences, which are otherwise very expensive to obtain.
Additionally, we show our benchmark and traditional benchmarks complement each other by evaluating several variants of LLaMA and Vicuna.
The MT-bench questions, 3K expert votes, and 30K conversations with human preferences are publicly available at https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge. | Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena | [
"Lianmin Zheng",
"Wei-Lin Chiang",
"Ying Sheng",
"Siyuan Zhuang",
"Zhanghao Wu",
"Yonghao Zhuang",
"Zi Lin",
"Zhuohan Li",
"Dacheng Li",
"Eric Xing",
"Hao Zhang",
"Joseph E. Gonzalez",
"Ion Stoica"
] | Track/Datasets_and_Benchmarks | poster | 2306.05685 | [
"https://github.com/lm-sys/fastchat"
] | https://huggingface.co/papers/2306.05685 | 5 | 27 | 2 | 13 | [
"lmsys/vicuna-13b-delta-v0",
"lmsys/vicuna-13b-delta-v1.1",
"lmsys/vicuna-33b-v1.3",
"lmsys/vicuna-7b-v1.5",
"stabilityai/stablelm-zephyr-3b",
"lmsys/vicuna-13b-v1.5-16k",
"lmsys/vicuna-7b-delta-v1.1",
"lmsys/vicuna-13b-v1.5",
"lmsys/vicuna-13b-v1.3",
"lmsys/vicuna-7b-delta-v0",
"lmsys/vicuna-7b-v1.3",
"lmsys/vicuna-13b-v1.1",
"lmsys/vicuna-7b-v1.5-16k",
"lmsys/vicuna-7b-v1.1",
"SeaLLMs/SeaLLM-7B-v2",
"TheBloke/vicuna-13B-v1.5-16K-GGML",
"tenyx/Llama3-TenyxChat-70B",
"TheBloke/vicuna-13B-v1.5-16K-GGUF",
"nvidia/SteerLM-llama2-13B",
"google/shieldgemma-2b",
"TheBloke/vicuna-13B-v1.5-16K-GPTQ",
"TheBloke/Vicuna-33B-1-3-SuperHOT-8K-GPTQ",
"tenyx/TenyxChat-7B-v1",
"TheBloke/vicuna-13B-v1.5-GGML",
"CyberNative/CyberBase-13b",
"TheBloke/vicuna-13b-v1.3.0-GPTQ",
"ghost-x/ghost-8b-beta-1608",
"TheBloke/vicuna-33B-GGML",
"TheBloke/vicuna-7B-v1.3-GGML",
"TheBloke/vicuna-7B-v1.3-GPTQ",
"TheBloke/vicuna-13B-v1.5-GGUF",
"TheBloke/vicuna-33B-GGUF",
"TheBloke/vicuna-13b-v1.3.0-GGML",
"TheBloke/vicuna-7B-v1.5-GGUF",
"TheBloke/vicuna-7B-v1.5-GPTQ",
"TheBloke/vicuna-33B-GPTQ",
"h2oai/h2o-danube3-4b-chat-GGUF",
"ghost-x/ghost-8b-beta",
"TheBloke/vicuna-7B-v1.5-GGML",
"google/shieldgemma-27b",
"h2oai/h2o-danube3-500m-chat-GGUF",
"TheBloke/vicuna-7B-v1.5-16K-GGML",
"tenyx/TenyxChat-8x7B-v1",
"TheBloke/Vicuna-33B-1-3-SuperHOT-8K-GGML",
"TheBloke/vicuna-13B-v1.5-GPTQ",
"TheBloke/TenyxChat-7B-v1-GGUF",
"TheBloke/vicuna-7B-v1.5-16K-GPTQ",
"TheBloke/vicuna-7B-v1.5-16K-GGUF",
"google/shieldgemma-9b",
"SeaLLMs/SeaLLM-7B-v2-gguf",
"Nanbeige/Nanbeige2-8B-Chat",
"jphme/vicuna-13b-v1.3-ger",
"aisingapore/llama3-8b-cpt-sea-lionv2-instruct",
"h2oai/h2o-danube2-1.8b-chat-GGUF",
"TheBloke/Vicuna-13B-1-3-SuperHOT-8K-GPTQ",
"Radiantloom/radiantloom-support-assist-7b",
"TheBloke/Vicuna-33B-1-3-SuperHOT-8K-fp16",
"scb10x/llama-3-typhoon-v1.5x-8b-instruct",
"Nanbeige/Nanbeige2-16B-Chat",
"LoneStriker/SeaLLM-7B-v2-GGUF",
"scb10x/llama-3-typhoon-v1.5x-70b-instruct",
"TheBloke/Vicuna-13B-v1.3-German-GGML",
"TheBloke/vicuna-7B-v1.5-AWQ",
"TheBloke/Vicuna-7B-v1-3-SuperHOT-8K-GGML",
"TheBloke/TenyxChat-7B-v1-AWQ",
"TheBloke/Vicuna-13B-v1.3-German-GPTQ",
"TheBloke/Vicuna-7B-v1-3-SuperHOT-8K-fp16",
"TheBloke/Vicuna-7B-v1-3-SuperHOT-8K-GPTQ",
"VatsaDev/ChatGpt-nano",
"Radiantloom/radiantloom-email-assist-7b",
"localmodels/Vicuna-7B-v1.3-ggml",
"ghost-x/ghost-8b-beta-1608-gguf",
"TheBloke/vicuna-33B-AWQ",
"s3nh/lmsys-vicuna-13b-v1.5-16k-GGML",
"scb10x/llama-3-typhoon-v1.5x-70b-instruct-awq",
"PsiPi/lmsys_vicuna-13b-v1.5-16k-exl2-3.0bpw",
"Radiantloom/radintloom-mistral-7b-fusion",
"mit-han-lab/vicuna-7b-v1.3-4bit-g128-awq",
"TheBloke/vicuna-13B-v1.5-AWQ",
"TheBloke/vicuna-13B-v1.5-16K-AWQ",
"TheBloke/vicuna-7B-v1.5-16K-AWQ",
"QuantFactory/shieldgemma-2b-GGUF",
"QuantFactory/shieldgemma-9b-GGUF",
"s3nh/vicuna-13b-v1.5-GGML",
"Shreyas0706/vicuna-7b-v1.5-GGUF",
"Radiantloom/radiantloom-llama-70b-instruct",
"giraffe176/WestLake_Noromaid_OpenHermes_neural-chatv0.1",
"QuantFactory/ghost-8b-beta-GGUF",
"Vigneshwaran-D/vicuna-13b-v1.5-gguf",
"TheBloke/TenyxChat-7B-v1-GPTQ",
"Radiantloom/radiantloom-mixtral-8x7b-fusion",
"LoneStriker/TenyxChat-8x7B-v1-4.0bpw-h6-exl2",
"TheBloke/Vicuna-13B-1-3-SuperHOT-8K-fp16",
"PsiPi/lmsys_vicuna-13b-v1.5-16k-exl2-5.53bpw",
"mit-han-lab/vicuna-33b-v1.3-4bit-g128-awq",
"sharpbai/vicuna-7b-v1.3",
"mit-han-lab/vicuna-13b-v1.3-4bit-g128-awq",
"wang7776/vicuna-7b-v1.3-attention-sparsity-20",
"sagarsdesai/vicuna-7b-v1.5-awq",
"wang7776/vicuna-7b-v1.3-attention-sparsity-30"
] | [
"gretelai/synthetic_text_to_sql",
"lmsys/chatbot_arena_conversations",
"lmsys/mt_bench_human_judgments",
"gretelai/synthetic_pii_finance_multilingual",
"LGAI-EXAONE/KoMT-Bench",
"HuggingFaceH4/mt_bench_prompts",
"m-a-p/CHC-Bench",
"bofenghuang/mt-bench-french",
"Junrulu/MT-Bench-Plus",
"d-llm/chatbot_arena_conversations"
] | [
"open-llm-leaderboard/open_llm_leaderboard",
"h2oai/h2ogpt-chatbot",
"allenai/WildBench",
"lmsys/mt-bench",
"h2oai/h2ogpt-chatbot2",
"Intel/low_bit_open_llm_leaderboard",
"eduagarcia/open_pt_llm_leaderboard",
"Sharathhebbar24/One-stop-for-Open-source-models",
"HuggingFaceH4/human_eval_llm_leaderboard",
"ZhangYuhan/3DGen-Arena",
"AILab-CVC/SEED-Bench_Leaderboard",
"BAAI/open_cn_llm_leaderboard",
"lixin4ever/VideoLLaMA2",
"lighthouzai/guardrails-arena",
"shi-labs/VCoder",
"allenai/ZebraLogic",
"gsaivinay/open_llm_leaderboard",
"Fucius/OMG-InstantID",
"Fucius/OMG",
"AILab-CVC/EvalCrafter",
"EvanTHU/MotionLLM",
"featherless-ai/try-this-model",
"tenyx/Llama3-TenyxChat-70B",
"SeaLLMs/SeaLLM-Chat",
"lamhieu/ghost-8b-beta-8k",
"Auto-Arena/Leaderboard",
"tenyx/TenyxChat-7B-v1",
"SeaLLMs/SeaExam_leaderboard",
"meval/multilingual-chatbot-arena-leaderboard",
"lamhieu/ghost-8b-beta-128k",
"speakleash/mt-bench-pl",
"Hellisotherpeople/Gadsby",
"GTBench/GTBench",
"LanguageBind/Video-Bench",
"EmbeddedLLM/chat-template-generation",
"zeno-ml/chatbot-report",
"toloka/open-llm-leaderboard",
"AlekseyKorshuk/model-evaluation",
"tenyx/TenyxChat-8x7B-v1",
"lyx97/TempCompass",
"Mahadih534/Open-Source_LLM_ChatBot",
"SeaEval/SeaEval_Leaderboard",
"Justinrune/LLaMA-Factory",
"IS2Lab/S-Eval",
"felixz/open_llm_leaderboard",
"Tonic/TonicsStableLM3B",
"OpenSafetyLab/Salad-Bench-Leaderboard",
"Vikhrmodels/small-shlepa-lb",
"officialhimanshu595/llama-factory",
"OPTML-Group/UnlearnCanvas-Benchmark",
"timdettmers/guanaco-65b-4bit",
"1aurent/cogvlm_captionner",
"li-qing/FIRE",
"bardsai/performance-llm-board",
"Yiqin/ChatVID",
"kenken999/fastapi_django_main_live",
"terapyon/gh-issue-search",
"lamhieu/etherll-ghost-8b-beta-coder",
"open-nlp/Chris-lab",
"huangzhong0406/Llama3-TenyxChat-70B",
"HemaAM/GPT_train_on_LLaMa",
"pe-nlp/mt-bench",
"mikeee/multilingual-dokugpt",
"b1sheng/kg_llm_leaderboard_test",
"tianleliphoebe/visual-arena",
"Ashmal/MobiLlama",
"lambdabrendan/Lambda-LLM-Calculator",
"RinInori/Vicuna_ChatBot",
"PrarthanaTS/tsai-gpt-from-scratch",
"imjunaidafzal/can-it-run-llm",
"RinInori/vicuna_finetuned_6_sentiments",
"yuhuili/EAGLE",
"his0/h2ogpt-chatbot",
"srikanth-nm/ai_seeker",
"rodrigomasini/data_only_open_llm_leaderboard",
"atimughal662/InfoFusion",
"Zulelee/langchain-chatchat",
"neubla/neubla-llm-evaluation-board",
"shimizukawa/python-no-senpai",
"SeaLLMs/SeaLLM-7B-v2.5-simple",
"RaviNaik/ERA-SESSION22",
"Docfile/open_llm_leaderboard",
"hucoa/chatbot-arena",
"ruslanmv/Open-Source-LLM-Chatbot",
"awacke1/MTBench-8Way-MoE",
"nxphi47/MultiPurpose-Chatbot-DEMO",
"gkiran6/MTBenchmarkForChatGPTMetricsScoring",
"ssbagpcm/Llama3-TenyxChat-70B",
"jpachika/MTBenchmarkForChatGPTMetricsScoring",
"Sijuade/GPTNEXTWORD",
"spalli2/01-24-24-MTBenchmarkForChatGPTMetricsScoring",
"anantgupta129/LitGPT-Pythia-160M",
"awacke1/Model-MTBenchmark",
"jclay19/MTBenchmarkForChatGPTMetricsScoring",
"bssvdkc/MTBenchmarkForChatGPTMetricsScoring",
"PeepDaSlan9/B2BMGMT_chatbot-arena-leaderboard",
"alexkueck/ChatBotLI2Klein",
"venkyyuvy/GPT_redpajama",
"coium/google-shieldgemma-2b",
"akashkj/H2OGPT"
] | 1 |
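The MT-Bench / Chatbot Arena record above names position bias as one limitation of LLM-as-a-judge and proposes mitigations. One standard mitigation is to query the judge twice with the answer order swapped and only count consistent verdicts; the sketch below shows that control flow. `call_judge` is a hypothetical stand-in for an actual LLM API call, not part of the released code, and its toy length heuristic exists only so the example runs.

```python
def call_judge(question: str, answer_first: str, answer_second: str) -> str:
    """Hypothetical stand-in for an LLM judging call.

    In practice this would prompt a strong LLM to compare the two answers and
    return one of "first", "second", or "tie".
    """
    # Toy heuristic so the example runs: prefer the longer answer.
    if len(answer_first) > len(answer_second):
        return "first"
    if len(answer_second) > len(answer_first):
        return "second"
    return "tie"


def judge_pair(question: str, answer_a: str, answer_b: str) -> str:
    """Judge (A, B) twice with swapped presentation order to control for position bias."""
    verdict_ab = call_judge(question, answer_a, answer_b)   # A shown first
    verdict_ba = call_judge(question, answer_b, answer_a)   # B shown first

    if verdict_ab == "first" and verdict_ba == "second":
        return "A"
    if verdict_ab == "second" and verdict_ba == "first":
        return "B"
    return "tie"  # inconsistent or tied verdicts are treated as a tie


if __name__ == "__main__":
    print(judge_pair("Explain overfitting.",
                     "Overfitting is when a model memorizes noise.",
                     "It is bad."))  # "A" under the toy heuristic
```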
null | https://openreview.net/forum?id=uXBO47JcJT | @inproceedings{
jin2023amazonm,
title={Amazon-M2: A Multilingual Multi-locale Shopping Session Dataset for Recommendation and Text Generation},
author={Wei Jin and Haitao Mao and Zheng Li and Haoming Jiang and Chen Luo and Hongzhi Wen and Haoyu Han and Hanqing Lu and Zhengyang Wang and Ruirui Li and Zhen Li and Monica Xiao Cheng and Rahul Goutam and Haiyang Zhang and Karthik Subbian and Suhang Wang and Yizhou Sun and Jiliang Tang and Bing Yin and Xianfeng Tang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=uXBO47JcJT}
} | Modeling customer shopping intentions is a crucial task for e-commerce, as it directly impacts user experience and engagement. Thus, accurately understanding customer preferences is essential for providing personalized recommendations. Session-based recommendation, which utilizes customer session data to predict their next interaction, has become increasingly popular.
However, existing session datasets have limitations in terms of item attributes, user diversity, and dataset scale. As a result, they cannot comprehensively capture the spectrum of user behaviors and preferences.
To bridge this gap, we present the Amazon Multilingual Multi-locale Shopping Session Dataset, namely Amazon-M2. It is the first multilingual dataset consisting of millions of user sessions from six different locales, where the major languages of products are English, German, Japanese, French, Italian, and Spanish.
Remarkably, the dataset can help us enhance personalization and understanding of user preferences, which can benefit various existing tasks as well as enable new tasks. To test the potential of the dataset, we introduce three tasks in this work:
(1) next-product recommendation, (2) next-product recommendation with domain shifts, and (3) next-product title generation.
With the above tasks, we benchmark a range of algorithms on our proposed dataset, drawing new insights for further research and practice.
In addition, based on the proposed dataset and tasks, we hosted a competition in the KDD CUP 2023 https://www.aicrowd.com/challenges/amazon-kdd-cup-23-multilingual-recommendation-challenge and have attracted thousands of users and submissions. The winning solutions and the associated workshop can be accessed at our website~https://kddcup23.github.io/. | Amazon-M2: A Multilingual Multi-locale Shopping Session Dataset for Recommendation and Text Generation | [
"Wei Jin",
"Haitao Mao",
"Zheng Li",
"Haoming Jiang",
"Chen Luo",
"Hongzhi Wen",
"Haoyu Han",
"Hanqing Lu",
"Zhengyang Wang",
"Ruirui Li",
"Zhen Li",
"Monica Xiao Cheng",
"Rahul Goutam",
"Haiyang Zhang",
"Karthik Subbian",
"Suhang Wang",
"Yizhou Sun",
"Jiliang Tang",
"Bing Yin",
"Xianfeng Tang"
] | Track/Datasets_and_Benchmarks | poster | 2307.09688 | [
"https://github.com/haitaomao/amazon-m2"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
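Amazon-M2's first task is next-product recommendation from session prefixes. As a hedged illustration of the simplest kind of baseline, the sketch below builds item-to-item transition counts from training sessions and recommends the most frequent successors of a session's last item; the session format (plain lists of item IDs) and the toy items are assumptions for illustration, not the dataset's schema.

```python
from collections import Counter, defaultdict

# Toy training sessions: ordered lists of item IDs (stand-ins for product ASINs).
train_sessions = [
    ["phone", "case", "charger"],
    ["phone", "charger", "cable"],
    ["laptop", "mouse", "sleeve"],
    ["phone", "case", "screen_protector"],
]

# Count how often item b directly follows item a across all sessions.
transitions = defaultdict(Counter)
for session in train_sessions:
    for prev_item, next_item in zip(session, session[1:]):
        transitions[prev_item][next_item] += 1


def recommend_next(session, k=3):
    """Recommend up to k items that most often followed the session's last item."""
    last_item = session[-1]
    return [item for item, _ in transitions[last_item].most_common(k)]


print(recommend_next(["laptop", "phone"]))      # ['case', 'charger']
print(recommend_next(["phone", "case"], k=2))   # ['charger', 'screen_protector']
```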
null | https://openreview.net/forum?id=uJT68uPtC0 | @inproceedings{
shi2023mhisdoc,
title={M5HisDoc: A Large-scale Multi-style Chinese Historical Document Analysis Benchmark},
author={Yongxin Shi and Chongyu Liu and Dezhi Peng and Cheng Jian and Jiarong Huang and Lianwen Jin},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=uJT68uPtC0}
} | Recognizing and organizing text in correct reading order plays a crucial role in historical document analysis and preservation. While existing methods have shown promising performance, they often struggle with challenges such as diverse layouts, low image quality, style variations, and distortions. This is primarily due to the lack of consideration for these issues in the current benchmarks, which hinders the development and evaluation of historical document analysis and recognition (HDAR) methods in complex real-world scenarios. To address this gap, this paper introduces a complex multi-style Chinese historical document analysis benchmark, named M5HisDoc. The M5 indicates five properties of style, i.e., Multiple layouts, Multiple document types, Multiple calligraphy styles, Multiple backgrounds, and Multiple challenges. The M5HisDoc dataset consists of two subsets, M5HisDoc-R (Regular) and M5HisDoc-H (Hard). The M5HisDoc-R subset comprises 4,000 historical document images. To ensure high-quality annotations, we meticulously perform manual annotation and triple-checking. To replicate real-world conditions for historical document analysis applications, we incorporate image rotation, distortion, and resolution reduction into the M5HisDoc-R subset to form a new challenging subset named M5HisDoc-H, which contains the same number of images as M5HisDoc-R. The dataset exhibits diverse styles, significant scale variations, dense texts, and an extensive character set. We conduct benchmarking experiments on five tasks: text line detection, text line recognition, character detection, character recognition, and reading order prediction. We also conduct cross-validation with other benchmarks. Experimental results demonstrate that the M5HisDoc dataset can offer new challenges and great opportunities for future research in this field, thereby providing deep insights into the solution for HDAR. The dataset is available at https://github.com/HCIILAB/M5HisDoc. | M5HisDoc: A Large-scale Multi-style Chinese Historical Document Analysis Benchmark | [
"Yongxin Shi",
"Chongyu Liu",
"Dezhi Peng",
"Cheng Jian",
"Jiarong Huang",
"Lianwen Jin"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=uIppiU2JKP | @inproceedings{
qian2023synthcity,
title={Synthcity: a benchmark framework for diverse use cases of tabular synthetic data},
author={Zhaozhi Qian and Rob Davis and Mihaela van der Schaar},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=uIppiU2JKP}
} | Accessible high-quality data is the bread and butter of machine learning research, and the demand for data has exploded as larger and more advanced ML models are built across different domains. Yet, real data often contain sensitive information, are subject to various biases, and are costly to acquire, which compromise their quality and accessibility. Synthetic data have thus emerged as a complement to, sometimes even a replacement for, real data for ML training. However, the landscape of synthetic data research has been fragmented due to the diverse range of data modalities, such as tabular, time series, and images, and the wide array of use cases, including privacy preservation, fairness considerations, and data augmentation. This fragmentation poses practical challenges when comparing and selecting synthetic data generators in for different problem settings. To this end, we develop Synthcity, an open-source Python library that allows researchers and practitioners to perform one-click benchmarking of synthetic data generators across data modalities and use cases. Beyond benchmarking, Synthcity serves as a centralized toolkit for accessing cutting-edge data generators. In addition, Synthcity’s flexible plug-in style API makes it easy to incorporate additional data generators into the framework. Using examples of tabular data generation and data augmentation, we illustrate the general applicability of Synthcity, and the insight one can obtain. | Synthcity: a benchmark framework for diverse use cases of tabular synthetic data | [
"Zhaozhi Qian",
"Rob Davis",
"Mihaela van der Schaar"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=uIj1jDc8k6 | @inproceedings{
dev2023building,
title={Building Socio-culturally Inclusive Stereotype Resources with Community Engagement},
author={Sunipa Dev and Jaya Goyal and Dinesh Tewari and Shachi Dave and Vinodkumar Prabhakaran},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=uIj1jDc8k6}
} | With rapid development and deployment of generative language models in global settings, there is an urgent need to also scale our measurements of harm, not just in the number and types of harms covered, but also how well they account for local cultural contexts, including marginalized identities and the social biases experienced by them.
Current evaluation paradigms are limited in their abilities to address this, as they are not representative of diverse, locally situated but global, socio-cultural perspectives. It is imperative that our evaluation resources are enhanced and calibrated by including people and experiences from different cultures and societies worldwide, in order to prevent gross underestimations or skews in measurements of harm. In this work, we demonstrate a socio-culturally aware expansion of evaluation resources in the Indian societal context, specifically for the harm of stereotyping. We devise a community engaged effort to build a resource which contains stereotypes for axes of disparity that are uniquely present in India. The resultant resource increases the number of stereotypes known for and in the Indian context by over 1000 stereotypes across many unique identities. We also demonstrate the utility and effectiveness of such expanded resources for evaluations of language models.
CONTENT WARNING: This paper contains examples of stereotypes that may be offensive. | Building Socio-culturally Inclusive Stereotype Resources with Community Engagement | [
"Sunipa Dev",
"Jaya Goyal",
"Dinesh Tewari",
"Shachi Dave",
"Vinodkumar Prabhakaran"
] | Track/Datasets_and_Benchmarks | poster | 2307.10514 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=uHlKNCDAJb | @inproceedings{
li2023scenarionet,
title={ScenarioNet: Open-Source Platform for Large-Scale Traffic Scenario Simulation and Modeling},
author={Quanyi Li and Zhenghao Peng and Lan Feng and Zhizheng Liu and Chenda Duan and Wenjie Mo and Bolei Zhou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=uHlKNCDAJb}
} | Large-scale driving datasets such as Waymo Open Dataset and nuScenes substantially accelerate autonomous driving research, especially for perception tasks such as 3D detection and trajectory forecasting. Since the driving logs in these datasets contain HD maps and detailed object annotations which accurately reflect the real-world complexity of traffic behaviors, we can harvest a massive number of complex traffic scenarios and recreate their digital twins in simulation. Compared to the hand-crafted scenarios often used in existing simulators, data-driven scenarios collected from the real world can facilitate many research opportunities in machine learning and autonomous driving. In this work, we present ScenarioNet, an open-source platform for large-scale traffic scenario modeling and simulation. ScenarioNet defines a unified scenario description format and collects a large-scale repository of real-world traffic scenarios from the heterogeneous data in various driving datasets including Waymo, nuScenes, Lyft L5, and nuPlan datasets. These scenarios can be further replayed and interacted with in multiple views from Bird-Eye-View layout to realistic 3D rendering in MetaDrive simulator. This provides a benchmark for evaluating the safety of autonomous driving stacks in simulation before their real-world deployment. We further demonstrate the strengths of ScenarioNet on large-scale scenario generation, imitation learning, and reinforcement learning in both single-agent and multi-agent settings. Code, demo videos, and website are available at https://github.com/metadriverse/scenarionet | ScenarioNet: Open-Source Platform for Large-Scale Traffic Scenario Simulation and Modeling | [
"Quanyi Li",
"Zhenghao Peng",
"Lan Feng",
"Zhizheng Liu",
"Chenda Duan",
"Wenjie Mo",
"Bolei Zhou"
] | Track/Datasets_and_Benchmarks | poster | 2306.12241 | [
"https://github.com/metadriverse/scenarionet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=u2cXRGm95Y | @inproceedings{
ning2023uukg,
title={{UUKG}: Unified Urban Knowledge Graph Dataset for Urban Spatiotemporal Prediction},
author={Yansong Ning and Hao Liu and Hao Henry Wang and Zhenyu Zeng and Hui Xiong},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=u2cXRGm95Y}
} | Accurate Urban SpatioTemporal Prediction (USTP) is of great importance to the development and operation of the smart city. As an emerging building block, multi-sourced urban data are usually integrated as urban knowledge graphs (UrbanKGs) to provide critical knowledge for urban spatiotemporal prediction models. However, existing UrbanKGs are often tailored for specific downstream prediction tasks and are not publicly available, which limits the potential advancement. This paper presents UUKG, the unified urban knowledge graph dataset for knowledge-enhanced urban spatiotemporal predictions. Specifically, we first construct UrbanKGs consisting of millions of triplets for two metropolises by connecting heterogeneous urban entities such as administrative boroughs, POIs, and road segments.
Moreover, we conduct qualitative and quantitative analysis on constructed UrbanKGs and uncover diverse high-order structural patterns, such as hierarchies and cycles, that can be leveraged to benefit downstream USTP tasks. To validate and facilitate the use of UrbanKGs, we implement and evaluate 15 KG embedding methods on the KG completion task and integrate the learned KG embeddings into 9 spatiotemporal models for five different USTP tasks. The extensive experimental results not only provide benchmarks of knowledge-enhanced USTP models under different task settings but also highlight the potential of state-of-the-art high-order structure-aware UrbanKG embedding methods. We hope the proposed UUKG fosters research on urban knowledge graphs and broad smart city applications. The dataset and source code are available at https://github.com/usail-hkust/UUKG/. | UUKG: Unified Urban Knowledge Graph Dataset for Urban Spatiotemporal Prediction | [
"Yansong Ning",
"Hao Liu",
"Hao Henry Wang",
"Zhenyu Zeng",
"Hui Xiong"
] | Track/Datasets_and_Benchmarks | poster | 2306.11443 | [
"https://github.com/usail-hkust/uukg"
] | https://huggingface.co/papers/2306.11443 | 0 | 0 | 0 | 5 | [] | [] | [] | 1 |
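UUKG benchmarks knowledge-graph embedding methods on millions of (head, relation, tail) triplets. To make the completion task concrete, here is a minimal sketch of TransE-style scoring, one of the classical families such benchmarks include: a triple is plausible when head + relation ≈ tail in embedding space. The entity names, relation names, and embedding dimension are illustrative assumptions, not entries from the released UrbanKGs.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Toy embedding tables for urban entities and relations.
entities = {name: rng.normal(size=dim) for name in ["borough_1", "poi_7", "road_42"]}
relations = {name: rng.normal(size=dim) for name in ["locates_at", "connects_to"]}


def transe_score(head, relation, tail):
    """TransE plausibility: higher (less negative) is more plausible, since a true
    triple should satisfy head + relation ~ tail."""
    return -np.linalg.norm(entities[head] + relations[relation] - entities[tail], ord=1)


# Rank candidate tails for an incomplete triple (borough_1, locates_at, ?).
candidates = ["poi_7", "road_42"]
ranked = sorted(candidates,
                key=lambda t: transe_score("borough_1", "locates_at", t),
                reverse=True)
print(ranked)
```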
null | https://openreview.net/forum?id=tz7XkY6S9Z | @inproceedings{
sul2023mr,
title={Mr. HiSum: A Large-scale Dataset for Video Highlight Detection and Summarization},
author={Jinhwan Sul and Jihoon Han and Joonseok Lee},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=tz7XkY6S9Z}
} | Video highlight detection is a task to automatically select the most engaging moments from a long video. This problem is highly challenging since it aims to learn a general way of finding highlights from a variety of videos in the real world. The task has an innate subjectivity because the definition of a highlight differs across individuals. Therefore, to detect consistent and meaningful highlights, prior benchmark datasets have been labeled by multiple (5-20) raters. Due to the high cost of manual labeling, most existing public benchmarks are extremely small in scale, containing only a few tens or hundreds of videos. This insufficient benchmark scale causes multiple issues such as unstable evaluation or high sensitivity in train-test splits. We present Mr. HiSum, a large-scale dataset for video highlight detection and summarization, containing 31,892 videos and reliable labels aggregated over 50,000+ users per video. We empirically prove the reliability of the labels as frame importance by cross-dataset transfer and user study. | Mr. HiSum: A Large-scale Dataset for Video Highlight Detection and Summarization | null | Track/Datasets_and_Benchmarks | poster | [

""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=tk27oD2cBw | @inproceedings{
suzgun2023the,
title={The Harvard {USPTO} Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications},
author={Mirac Suzgun and Luke Melas-Kyriazi and Suproteem K Sarkar and Scott Kominers and Stuart Shieber},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=tk27oD2cBw}
} | Innovation is a major driver of economic and social development, and information about many kinds of innovation is embedded in semi-structured data from patents and patent applications. Though the impact and novelty of innovations expressed in patent data are difficult to measure through traditional means, machine learning offers a promising set of techniques for evaluating novelty, summarizing contributions, and embedding semantics. In this paper, we introduce the Harvard USPTO Patent Dataset (HUPD), a large-scale, well-structured, and multi-purpose corpus of English-language patent applications filed to the United States Patent and Trademark Office (USPTO) between 2004 and 2018. With more than 4.5 million patent documents, HUPD is two to three times larger than comparable corpora. Unlike other NLP patent datasets, HUPD contains the inventor-submitted versions of patent applications, not the final versions of granted patents, allowing us to study patentability at the time of filing using NLP methods for the first time. It is also novel in its inclusion of rich structured data alongside the text of patent filings: By providing each application’s metadata along with all of its text fields, HUPD enables researchers to perform new sets of NLP tasks that leverage variation in structured covariates. As a case study on the types of research HUPD makes possible, we introduce a new task to the NLP community -- patent acceptance prediction. We additionally show the structured metadata provided in HUPD allows us to conduct explicit studies of concept shifts for this task. We find that performance on patent acceptance prediction decays when models trained in one context are evaluated on different innovation categories and over time. Finally, we demonstrate how HUPD can be used for three additional tasks: Multi-class classification of patent subject areas, language modeling, and abstractive summarization. Put together, our publicly-available dataset aims to advance research extending language and classification models to diverse and dynamic real-world data distributions. | The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications | [
"Mirac Suzgun",
"Luke Melas-Kyriazi",
"Suproteem K Sarkar",
"Scott Kominers",
"Stuart Shieber"
] | Track/Datasets_and_Benchmarks | oral | 2207.04043 | [
"https://github.com/suzgunmirac/hupd"
] | https://huggingface.co/papers/2207.04043 | 1 | 0 | 0 | 5 | [] | [
"HUPD/hupd",
"egm517/hupd_augmented"
] | [] | 1 |
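The HUPD record introduces patent acceptance prediction as a new task. As a hedged sketch of a minimal baseline, the code below trains a TF-IDF plus logistic-regression classifier that maps application abstracts to an accept/reject label; the tiny in-line examples and the exact field names are illustrative assumptions, not the dataset's schema or any baseline reported in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny stand-in corpus: (application abstract, decision) pairs. In practice these
# would come from the released filings and their examiner decisions.
abstracts = [
    "A method for wireless charging of electric vehicles using resonant coils.",
    "A system and method for displaying advertisements on a web page.",
    "A novel catalyst composition for low-temperature ammonia synthesis.",
    "A method of doing business by offering discounts to returning customers.",
]
decisions = ["accepted", "rejected", "accepted", "rejected"]

# TF-IDF features over the abstract text, then a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(abstracts, decisions)

new_abstract = ["A resonant coil arrangement for charging vehicle batteries wirelessly."]
print(model.predict(new_abstract))
```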
null | https://openreview.net/forum?id=tOd8rSjcWz | @inproceedings{
zhu2023multimodal,
title={Multimodal C4: An Open, Billion-scale Corpus of Images Interleaved with Text},
author={Wanrong Zhu and Jack Hessel and Anas Awadalla and Samir Yitzhak Gadre and Jesse Dodge and Alex Fang and Youngjae Yu and Ludwig Schmidt and William Yang Wang and Yejin Choi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=tOd8rSjcWz}
} | In-context vision and language models like Flamingo support arbitrarily interleaved sequences of images and text as input.
This format not only enables few-shot learning via interleaving independent supervised (image, text) examples, but also, more complex prompts involving interaction between images, e.g., ``What do image A and image B have in common?''
To support this interface, pretraining occurs over web corpora that similarly contain interleaved images+text.
To date, however, large-scale data of this form have not been publicly available.
We release Multimodal C4, an augmentation of the popular text-only C4 corpus with images interleaved.
We use a linear assignment algorithm to place images into longer bodies of text using CLIP features, a process that we show outperforms alternatives.
Multimodal C4 spans everyday topics like cooking, travel, technology, etc. A manual inspection of a random sample of documents shows that a vast majority (88\%) of images are topically relevant, and that linear assignment frequently selects individual sentences specifically well-aligned with each image (80\%).
After filtering NSFW images, ads, etc., the resulting corpus consists of 101.2M documents with 571M images interleaved in 43B English tokens. | Multimodal C4: An Open, Billion-scale Corpus of Images Interleaved with Text | [
"Wanrong Zhu",
"Jack Hessel",
"Anas Awadalla",
"Samir Yitzhak Gadre",
"Jesse Dodge",
"Alex Fang",
"Youngjae Yu",
"Ludwig Schmidt",
"William Yang Wang",
"Yejin Choi"
] | Track/Datasets_and_Benchmarks | poster | 2304.06939 | [
"https://github.com/allenai/mmc4"
] | https://huggingface.co/papers/2304.06939 | 2 | 0 | 0 | 10 | [
"openflamingo/OpenFlamingo-9B-vitl-mpt7b",
"openflamingo/OpenFlamingo-3B-vitl-mpt1b",
"openflamingo/OpenFlamingo-3B-vitl-mpt1b-langinstruct",
"openflamingo/OpenFlamingo-4B-vitl-rpj3b",
"openflamingo/OpenFlamingo-4B-vitl-rpj3b-langinstruct"
] | [] | [
"openflamingo/OpenFlamingo",
"lakshayt/MemeGradio"
] | 1 |
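The Multimodal C4 record describes interleaving images into C4 documents by solving a linear assignment problem over CLIP image-sentence similarities. The sketch below shows that assignment step with `scipy.optimize.linear_sum_assignment` on a made-up similarity matrix; the random vectors stand in for real CLIP embeddings, which is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)


def normalize(x):
    """L2-normalize rows so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)


# Stand-ins for CLIP embeddings: 3 candidate images, 5 sentences in a document.
image_emb = normalize(rng.normal(size=(3, 512)))
sentence_emb = normalize(rng.normal(size=(5, 512)))

# Cosine similarity between every image and every sentence.
similarity = image_emb @ sentence_emb.T  # shape (3, 5)

# Assign each image to a distinct sentence so that total similarity is maximized.
image_idx, sentence_idx = linear_sum_assignment(similarity, maximize=True)
for i, j in zip(image_idx, sentence_idx):
    print(f"image {i} -> sentence {j} (similarity {similarity[i, j]:.3f})")
```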
null | https://openreview.net/forum?id=tIW4kbnJIM | @inproceedings{
hu2023nurvid,
title={NurViD: A Large Expert-Level Video Database for Nursing Procedure Activity Understanding},
author={Ming Hu and Lin Wang and Siyuan Yan and Don Ma and Qingli Ren and Peng Xia and Wei Feng and Peibo Duan and Lie Ju and Zongyuan Ge},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=tIW4kbnJIM}
} | The application of deep learning to nursing procedure activity understanding has the potential to greatly enhance the quality and safety of nurse-patient interactions. By utilizing the technique, we can facilitate training and education, improve quality control, and enable operational compliance monitoring. However, the development of automatic recognition systems in this field is currently hindered by the scarcity of appropriately labeled datasets. The existing video datasets pose several limitations: 1) these datasets are small-scale in size to support comprehensive investigations of nursing activity; 2) they primarily focus on single procedures, lacking expert-level annotations for various nursing procedures and action steps; and 3) they lack temporally localized annotations, which prevents the effective localization of targeted actions within longer video sequences. To mitigate these limitations, we propose NurViD, a large video dataset with expert-level annotation for nursing procedure activity understanding. NurViD consists of over 1.5k videos totaling 144 hours, making it approximately four times longer than the existing largest nursing activity datasets. Notably, it encompasses 51 distinct nursing procedures and 177 action steps, providing a much more comprehensive coverage compared to existing datasets that primarily focus on limited procedures. To evaluate the efficacy of current deep learning methods on nursing activity understanding, we establish three benchmarks on NurViD: procedure recognition on untrimmed videos, procedure and action recognition on trimmed videos, and action detection. Our benchmark and code will be available at https://github.com/minghu0830/NurViD-benchmark. | NurViD: A Large Expert-Level Video Database for Nursing Procedure Activity Understanding | [
"Ming Hu",
"Lin Wang",
"Siyuan Yan",
"Don Ma",
"Qingli Ren",
"Peng Xia",
"Wei Feng",
"Peibo Duan",
"Lie Ju",
"Zongyuan Ge"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | https://huggingface.co/papers/2310.13347 | 1 | 0 | 0 | 10 | [] | [] | [] | 1 |
|
null | https://openreview.net/forum?id=s6qtLyR6uJ | @inproceedings{
lange2023neuroevobench,
title={NeuroEvoBench: Benchmarking Evolutionary Optimizers for Deep Learning Applications},
author={Robert Tjarko Lange and Yujin Tang and Yingtao Tian},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=s6qtLyR6uJ}
} | Recently, the Deep Learning community has become interested in evolutionary optimization (EO) as a means to address hard optimization problems, e.g. meta-learning through long inner loop unrolls or optimizing non-differentiable operators. One core reason for this trend has been the recent innovation in hardware acceleration and compatible software -- making distributed population evaluations much easier than before. Unlike for gradient descent-based methods though, there is a lack of hyperparameter understanding and best practices for EO – arguably due to severely less `graduate student descent' and benchmarking being performed for EO methods. Additionally, classical benchmarks from the evolutionary community provide few practical insights for Deep Learning applications. This poses challenges for newcomers to hardware-accelerated EO and hinders significant adoption. Hence, we establish a new benchmark of EO methods (NEB) tailored toward Deep Learning applications and exhaustively evaluate traditional and meta-learned EO. We investigate core scientific questions including resource allocation, fitness shaping, normalization, regularization & scalability of EO. The benchmark is open-sourced at https://github.com/neuroevobench/neuroevobench under Apache-2.0 license. | NeuroEvoBench: Benchmarking Evolutionary Optimizers for Deep Learning Applications | [
"Robert Tjarko Lange",
"Yujin Tang",
"Yingtao Tian"
] | Track/Datasets_and_Benchmarks | poster | [
"https://github.com/neuroevobench/neuroevobench"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
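NeuroEvoBench studies design questions such as fitness shaping and normalization for evolutionary optimizers. The sketch below is a minimal OpenAI-ES-style loop with centered-rank fitness shaping on a toy quadratic objective; it illustrates the ingredients the benchmark varies rather than reproducing any specific optimizer from the suite, and the objective and hyperparameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)


def fitness(theta):
    """Toy objective: maximize the negative squared distance to the point (3, ..., 3)."""
    return -np.sum((theta - 3.0) ** 2)


def centered_ranks(values):
    """Rank-based fitness shaping: map raw fitness to evenly spaced values in [-0.5, 0.5],
    which makes the update invariant to the scale of the raw fitness."""
    ranks = np.argsort(np.argsort(values))
    return ranks / (len(values) - 1) - 0.5


theta = np.zeros(5)                 # mean of the search distribution
sigma, lr, popsize = 0.1, 0.05, 32  # illustrative hyperparameters

for generation in range(300):
    noise = rng.standard_normal((popsize, theta.size))
    candidates = theta + sigma * noise
    raw_fitness = np.array([fitness(c) for c in candidates])
    shaped = centered_ranks(raw_fitness)
    # Stochastic natural-gradient-style update of the distribution mean.
    theta = theta + lr / (popsize * sigma) * noise.T @ shaped

print(theta.round(2))  # should approach [3. 3. 3. 3. 3.]
```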
null | https://openreview.net/forum?id=rXi13M3PKc | @inproceedings{
johnson2023oceanbench,
title={OceanBench: The Sea Surface Height Edition},
author={Juan Emmanuel Johnson and Quentin Febvre and Anastasia Gorbunova and Sammy Metref and Maxime Ballarotta and Julien Le Sommer and Ronan Fablet},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=rXi13M3PKc}
} | The ocean is a crucial component of the Earth's system.
It profoundly influences human activities and plays a critical role in climate regulation.
Our understanding has significantly improved over the last decades with the advent of satellite remote sensing data, allowing us to capture essential sea surface quantities over the globe, e.g., sea surface height (SSH).
Despite their ever-increasing abundance, ocean satellite data presents challenges for information extraction due to their sparsity and irregular sampling, signal complexity, and noise.
Machine learning (ML) techniques have demonstrated their capabilities in dealing with large-scale, complex signals.
Therefore we see an opportunity for these ML models to harness the full extent of the information contained in ocean satellite data.
However, data representation and relevant evaluation metrics can be the defining factors when determining the success of applied ML.
The processing steps from the raw observation data to a ML-ready state and from model outputs to interpretable quantities require domain expertise, which can be a significant barrier to entry for ML researchers.
In addition, imposing fixed processing steps, like committing to specific variables, regions, and geometries, will narrow the scope of ML models and their potential impact on real-world applications.
OceanBench is a unifying framework that provides standardized processing steps that comply with domain-expert standards.
It is designed with a flexible and pedagogical abstraction: it a) provides plug-and-play data and pre-configured pipelines for ML researchers to benchmark their models w.r.t. ML and domain-related baselines and b) provides a transparent and configurable framework for researchers to customize and extend the pipeline for their tasks.
In this work, we demonstrate the OceanBench framework through a first edition dedicated to SSH interpolation challenges.
We provide datasets and ML-ready benchmarking pipelines for the long-standing problem of interpolating observations from simulated ocean satellite data, multi-modal and multi-sensor fusion issues, and transfer-learning to real ocean satellite observations.
The OceanBench framework is available at https://github.com/jejjohnson/oceanbench and the dataset registry is available at https://github.com/quentinf00/oceanbench-data-registry. | OceanBench: The Sea Surface Height Edition | [
"Juan Emmanuel Johnson",
"Quentin Febvre",
"Anastasia Gorbunova",
"Sammy Metref",
"Maxime Ballarotta",
"Julien Le Sommer",
"Ronan Fablet"
] | Track/Datasets_and_Benchmarks | poster | 2309.15599 | [
"https://github.com/jejjohnson/oceanbench"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
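OceanBench's first edition centers on interpolating sea surface height from sparse, irregularly sampled satellite observations. A core evaluation ingredient is scoring a gridded reconstruction only where reference data exist; the sketch below computes such a masked RMSE on synthetic fields, which stand in for real SSH grids and are not part of the benchmark's data or metrics pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" SSH field on a small lat/lon grid and a noisy reconstruction of it.
true_ssh = np.sin(np.linspace(0, 4 * np.pi, 64))[None, :] * np.ones((48, 1))
reconstruction = true_ssh + 0.05 * rng.standard_normal(true_ssh.shape)

# Observation mask: only ~10% of grid points are covered by (pseudo-)altimeter tracks.
mask = rng.random(true_ssh.shape) < 0.10


def masked_rmse(pred, ref, mask):
    """Root-mean-square error restricted to grid points where reference data exist."""
    diff = (pred - ref)[mask]
    return float(np.sqrt(np.mean(diff ** 2)))


print(f"masked RMSE: {masked_rmse(reconstruction, true_ssh, mask):.4f}")  # ~0.05
```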
null | https://openreview.net/forum?id=rR1c6rzXHa | @inproceedings{
qin2023rdsuite,
title={{RD}-Suite: A Benchmark for Ranking Distillation},
author={Zhen Qin and Rolf Jagerman and Rama Kumar Pasumarthi and Honglei Zhuang and He Zhang and Aijun Bai and Kai Hui and Le Yan and Xuanhui Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=rR1c6rzXHa}
} | The distillation of ranking models has become an important topic in both academia and industry. In recent years, several advanced methods have been proposed to tackle this problem, often leveraging ranking information from teacher rankers that is absent in traditional classification settings. To date, there is no well-established consensus on how to evaluate this class of models. Moreover, inconsistent benchmarking on a wide range of tasks and datasets makes it difficult to assess or invigorate advances in this field. This paper first examines representative prior arts on ranking distillation, and raises three questions to be answered around methodology and reproducibility. To that end, we propose a systematic and unified benchmark, Ranking Distillation Suite (RD-Suite), which is a suite of tasks with 4 large real-world datasets, encompassing two major modalities (textual and numeric) and two applications (standard distillation and distillation transfer). RD-Suite consists of benchmark results that challenge some of the common wisdom in the field, and the release of datasets with teacher scores and evaluation scripts for future research. RD-Suite paves the way towards better understanding of ranking distillation, facilitates more research in this direction, and presents new challenges. | RD-Suite: A Benchmark for Ranking Distillation | [
"Zhen Qin",
"Rolf Jagerman",
"Rama Kumar Pasumarthi",
"Honglei Zhuang",
"He Zhang",
"Aijun Bai",
"Kai Hui",
"Le Yan",
"Xuanhui Wang"
] | Track/Datasets_and_Benchmarks | poster | 2306.04455 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=r30thTMcaM | @inproceedings{
{\"o}stling2023the,
title={The Cambridge Law Corpus: A Corpus for Legal {AI} Research},
author={Andreas {\"O}stling and Holli Sargeant and Huiyuan Xie and Ludwig Konrad Bull and Alexander Terenin and Leif Jonsson and M{\r{a}}ns Magnusson and Felix Steffek},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=r30thTMcaM}
} | We introduce the Cambridge Law Corpus (CLC), a dataset for legal AI research. It consists of over 250 000 court cases from the UK. Most cases are from the 21st century, but the corpus includes cases as old as the 16th century. This paper presents the first release of the corpus, containing the raw text and meta-data. Together with the corpus, we provide annotations on case outcomes for 638 cases, done by legal experts. Using our annotated data, we have trained and evaluated case outcome extraction with GPT-3, GPT-4 and RoBERTa models to provide benchmarks. We include an extensive legal and ethical discussion to address the potentially sensitive nature of this material. As a consequence, the corpus will only be released for research purposes under certain restrictions. | The Cambridge Law Corpus: A Dataset for Legal AI Research | [
"Andreas Östling",
"Holli Sargeant",
"Huiyuan Xie",
"Ludwig Konrad Bull",
"Alexander Terenin",
"Leif Jonsson",
"Måns Magnusson",
"Felix Steffek"
] | Track/Datasets_and_Benchmarks | poster | 2309.12269 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=qynH28Y4xE | @inproceedings{
kamani2023wyze,
title={Wyze Rule: Federated Rule Dataset for Rule Recommendation Benchmarking},
author={Mohammad Mahdi Kamani and Yuhang Yao and Hanjia Lyu and Zhongwei Cheng and Lin Chen and Liangju Li and Carlee Joe-Wong and Jiebo Luo},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=qynH28Y4xE}
} | In the rapidly evolving landscape of smart home automation, the potential of IoT devices is vast. In this realm, rules are the main tool utilized for this automation, which are predefined conditions or triggers that establish connections between devices, enabling seamless automation of specific processes. However, one significant challenge researchers face is the lack of comprehensive datasets to explore and advance the field of smart home rule recommendations. These datasets are essential for developing and evaluating intelligent algorithms that can effectively recommend rules for automating processes while preserving the privacy of the users, as it involves personal information about users' daily lives. To bridge this gap, we present the Wyze Rule Dataset, a large-scale dataset designed specifically for smart home rule recommendation research. Wyze Rule encompasses over 1 million rules gathered from a diverse user base of 300,000 individuals from Wyze Labs, offering an extensive and varied collection of real-world data. With a focus on federated learning, our dataset is tailored to address the unique challenges of a cross-device federated learning setting in the recommendation domain, featuring a large-scale number of clients with widely heterogeneous data. To establish a benchmark for comparison and evaluation, we have meticulously implemented multiple baselines in both centralized and federated settings. Researchers can leverage these baselines to gauge the performance and effectiveness of their rule recommendation systems, driving advancements in the domain. The Wyze Rule Dataset is publicly accessible through [HuggingFace](https://huggingface.co/datasets/wyzelabs/RuleRecommendation)'s dataset API. | Wyze Rule: Federated Rule Dataset for Rule Recommendation Benchmarking | [
"Mohammad Mahdi Kamani",
"Yuhang Yao",
"Hanjia Lyu",
"Zhongwei Cheng",
"Lin Chen",
"Liangju Li",
"Carlee Joe-Wong",
"Jiebo Luo"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=qmCxdPkNsa | @inproceedings{
tomilin2023coom,
title={{COOM}: A Game Benchmark for Continual Reinforcement Learning},
author={Tristan Tomilin and Meng Fang and Yudi Zhang and Mykola Pechenizkiy},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=qmCxdPkNsa}
} | The advancement of continual reinforcement learning (RL) has been facing various obstacles, including standardized metrics and evaluation protocols, demanding computational requirements, and a lack of widely accepted standard benchmarks. In response to these challenges, we present COOM ($\textbf{C}$ontinual D$\textbf{OOM}$), a continual RL benchmark tailored for embodied pixel-based RL. COOM presents a meticulously crafted suite of task sequences set within visually distinct 3D environments, serving as a robust evaluation framework to assess crucial aspects of continual RL, such as catastrophic forgetting, knowledge transfer, and sample-efficient learning. Following an in-depth empirical evaluation of popular continual learning (CL) methods, we pinpoint their limitations, provide valuable insight into the benchmark and highlight unique algorithmic challenges. This makes our work the first to benchmark image-based CRL in 3D environments with embodied perception. The primary objective of the COOM benchmark is to offer the research community a valuable and cost-effective challenge. It seeks to deepen our comprehension of the capabilities and limitations of current and forthcoming CL methods in an RL setting. The code and environments are open-sourced and accessible on GitHub. | COOM: A Game Benchmark for Continual Reinforcement Learning | [
"Tristan Tomilin",
"Meng Fang",
"Yudi Zhang",
"Mykola Pechenizkiy"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=qi0Zrm6E5E | @inproceedings{
dreczkowski2023framework,
title={Framework and Benchmarks for Combinatorial and Mixed-variable Bayesian Optimization},
author={Kamil Dreczkowski and Antoine Grosnit and Haitham Bou Ammar},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=qi0Zrm6E5E}
} | This paper introduces a modular framework for Mixed-variable and Combinatorial Bayesian Optimization (MCBO) to address the lack of systematic benchmarking and standardized evaluation in the field. Current MCBO papers often introduce non-diverse or non-standard benchmarks to evaluate their methods, impeding the proper assessment of different MCBO primitives and their combinations. Additionally, papers introducing a solution for a single MCBO primitive often omit benchmarking against baselines that utilize the same methods for the remaining primitives. This omission is primarily due to the significant implementation overhead involved, resulting in a lack of controlled assessments and an inability to showcase the merits of a contribution effectively.
To overcome these challenges, our proposed framework enables an effortless combination of Bayesian Optimization components, and provides a diverse set of synthetic and real-world benchmarking tasks.
Leveraging this flexibility, we implement 47 novel MCBO algorithms and benchmark them against seven existing MCBO solvers and five standard black-box optimization algorithms on ten tasks, conducting over 4000 experiments.
Our findings reveal a superior combination of MCBO primitives outperforming existing approaches and illustrate the significance of model fit and the use of a trust region. We make our MCBO library available under the MIT license at \url{https://github.com/huawei-noah/HEBO/tree/master/MCBO}. | Framework and Benchmarks for Combinatorial and Mixed-variable Bayesian Optimization | [
"Kamil Dreczkowski",
"Antoine Grosnit",
"Haitham Bou Ammar"
] | Track/Datasets_and_Benchmarks | poster | 2306.09803 | [
"https://github.com/huawei-noah/hebo"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=qf4CWnrvZa | @inproceedings{
lehman2023vtac,
title={{VT}aC: A Benchmark Dataset of Ventricular Tachycardia Alarms from {ICU} Monitors},
author={Li-wei H. Lehman and Benjamin E Moody and Harsh Deep and Feng Wu and Hasan Saeed and Lucas McCullum and Diane Perry and Tristan Struja and Qiao Li and Gari Clifford and Roger Mark},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=qf4CWnrvZa}
} | False arrhythmia alarms in intensive care units (ICUs) are a continuing problem despite considerable effort from industrial and academic algorithm developers. Of all life-threatening arrhythmias, ventricular tachycardia (VT) stands out as the most challenging arrhythmia to detect reliably. We introduce a new annotated VT alarm database, VTaC (Ventricular Tachycardia annotated alarms from ICUs) consisting of over 5,000 waveform recordings with VT alarms triggered by bedside monitors in the ICUs. Each VT alarm in the dataset has been labeled by at least two independent human expert annotators. The dataset encompasses data collected from ICUs in three major US hospitals and includes data from three leading bedside monitor manufacturers, providing a diverse and representative collection of alarm waveform data. Each waveform recording comprises at least two electrocardiogram (ECG) leads and one or more pulsatile waveforms, such as photoplethysmogram (PPG or PLETH) and arterial blood pressure (ABP) waveforms. We demonstrate the utility of this new benchmark dataset for the task of false arrhythmia alarm reduction, and present performance of multiple machine learning approaches, including conventional supervised machine learning, deep learning, contrastive learning and generative approaches for the task of VT false alarm reduction. | VTaC: A Benchmark Dataset of Ventricular Tachycardia Alarms from ICU Monitors | [
"Li-wei H. Lehman",
"Benjamin E Moody",
"Harsh Deep",
"Feng Wu",
"Hasan Saeed",
"Lucas McCullum",
"Diane Perry",
"Tristan Struja",
"Qiao Li",
"Gari Clifford",
"Roger Mark"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=qY9LR74O3Z | @inproceedings{
lee2023holistic,
title={Holistic Evaluation of Text-to-Image Models},
author={Tony Lee and Michihiro Yasunaga and Chenlin Meng and Yifan Mai and Joon Sung Park and Agrim Gupta and Yunzhi Zhang and Deepak Narayanan and Hannah Benita Teufel and Marco Bellagente and Minguk Kang and Taesung Park and Jure Leskovec and Jun-Yan Zhu and Li Fei-Fei and Jiajun Wu and Stefano Ermon and Percy Liang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=qY9LR74O3Z}
} | The stunning qualitative improvement of text-to-image models has led to their widespread attention and adoption. However, we lack a comprehensive quantitative understanding of their capabilities and risks. To fill this gap, we introduce a new benchmark, Holistic Evaluation of Text-to-Image Models (HEIM). Whereas previous evaluations focus mostly on image-text alignment and image quality, we identify 12 aspects, including text-image alignment, image quality, aesthetics, originality, reasoning, knowledge, bias, toxicity, fairness, robustness, multilinguality, and efficiency. We curate 62 scenarios encompassing these aspects and evaluate 26 state-of-the-art text-to-image models on this benchmark. Our results reveal that no single model excels in all aspects, with different models demonstrating different strengths. We release the generated images and human evaluation results for full transparency at https://crfm.stanford.edu/heim/latest and the code at https://github.com/stanford-crfm/helm, which is integrated with the HELM codebase | Holistic Evaluation of Text-to-Image Models | [
"Tony Lee",
"Michihiro Yasunaga",
"Chenlin Meng",
"Yifan Mai",
"Joon Sung Park",
"Agrim Gupta",
"Yunzhi Zhang",
"Deepak Narayanan",
"Hannah Benita Teufel",
"Marco Bellagente",
"Minguk Kang",
"Taesung Park",
"Jure Leskovec",
"Jun-Yan Zhu",
"Li Fei-Fei",
"Jiajun Wu",
"Stefano Ermon",
"Percy Liang"
] | Track/Datasets_and_Benchmarks | oral | 2311.04287 | [
"https://github.com/stanford-crfm/helm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=qWsQi9DGJb | @inproceedings{
chaptoukaev2023stressid,
title={Stress{ID}: a Multimodal Dataset for Stress Identification},
author={Hava Chaptoukaev and Valeriya Strizhkova and Michele Panariello and Bianca Dalpaos and Aglind Reka and Valeria Manera and Susanne Thummler and Esma ISMAILOVA and Nicholas Evans and Francois Bremond and Massimiliano Todisco and Maria A Zuluaga and Laura M. Ferrari},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=qWsQi9DGJb}
} | StressID is a new dataset specifically designed for stress identification from unimodal and multimodal data. It contains videos of facial expressions, audio recordings, and physiological signals. The video and audio recordings are acquired using an RGB camera with an integrated microphone. The physiological data is composed of electrocardiography (ECG), electrodermal activity (EDA), and respiration signals that are recorded and monitored using a wearable device. This experimental setup ensures a synchronized and high-quality multimodal data collection. Different stress-inducing stimuli, such as emotional video clips, cognitive tasks including mathematical or comprehension exercises, and public speaking scenarios, are designed to trigger a diverse range of emotional responses. The final dataset consists of recordings from 65 participants who performed 11 tasks, as well as their ratings of perceived relaxation, stress, arousal, and valence levels. StressID is one of the largest datasets for stress identification that features three different sources of data and varied classes of stimuli, representing more than 39 hours of annotated data in total. StressID offers baseline models for stress classification including a cleaning, feature extraction, and classification phase for each modality. Additionally, we provide multimodal predictive models combining video, audio, and physiological inputs. The data and the code for the baselines are available at https://project.inria.fr/stressid/. | StressID: a Multimodal Dataset for Stress Identification | [
"Hava Chaptoukaev",
"Valeriya Strizhkova",
"Michele Panariello",
"Bianca Dalpaos",
"Aglind Reka",
"Valeria Manera",
"Susanne Thummler",
"Esma ISMAILOVA",
"Nicholas Evans",
"Francois Bremond",
"Massimiliano Todisco",
"Maria A Zuluaga",
"Laura M. Ferrari"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=qVXYU3F017 | @inproceedings{
luccioni2023stable,
title={Stable Bias: Evaluating Societal Representations in Diffusion Models},
author={Sasha Luccioni and Christopher Akiki and Margaret Mitchell and Yacine Jernite},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=qVXYU3F017}
} | As machine learning-enabled Text-to-Image (TTI) systems are becoming increasingly prevalent and seeing growing adoption as commercial services, characterizing the social biases they exhibit is a necessary first step to lowering their risk of discriminatory outcomes. This evaluation, however, is made more difficult by the synthetic nature of these systems’ outputs: common definitions of diversity are grounded in social categories of people living in the world, whereas the artificial depictions of fictive humans created by these systems have no inherent gender or ethnicity. To address this need, we propose a new method for exploring the social biases in TTI systems. Our approach relies on characterizing the variation in generated images triggered by enumerating gender and ethnicity markers in the prompts, and comparing it to the variation engendered by spanning different professions. This allows us to (1) identify specific bias trends, (2) provide targeted scores to directly compare models in terms of diversity and representation, and (3) jointly model interdependent social variables to support a multidimensional analysis. We leverage this method to analyze images generated by 3 popular TTI systems (Dall·E 2 , Stable Diffusion v 1.4 and 2) and find that while all of their outputs show correlations with US labor demographics, they also consistently under-represent marginalized identities to different extents. We also release the datasets and low-code interactive bias exploration platforms developed for
this work, as well as the necessary tools to similarly evaluate additional TTI systems. | Stable Bias: Evaluating Societal Representations in Diffusion Models | null | Track/Datasets_and_Benchmarks | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=qG7IkQ7IBO | @inproceedings{
huang2023temporal,
title={Temporal Graph Benchmark for Machine Learning on Temporal Graphs},
author={Shenyang Huang and Farimah Poursafaei and Jacob Danovitch and Matthias Fey and Weihua Hu and Emanuele Rossi and Jure Leskovec and Michael M. Bronstein and Guillaume Rabusseau and Reihaneh Rabbany},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=qG7IkQ7IBO}
} | We present the Temporal Graph Benchmark (TGB), a collection of challenging and diverse benchmark datasets for realistic, reproducible, and robust evaluation of machine learning models on temporal graphs. TGB datasets are of large scale, spanning years in duration, incorporate both node and edge-level prediction tasks and cover a diverse set of domains including social, trade, transaction, and transportation networks. For both tasks, we design evaluation protocols based on realistic use-cases. We extensively benchmark each dataset and find that the performance of common models can vary drastically across datasets. In addition, on dynamic node property prediction tasks, we show that simple methods often achieve superior performance compared to existing temporal graph models. We believe that these findings open up opportunities for future research on temporal graphs. Finally, TGB provides an automated machine learning pipeline for reproducible and accessible temporal graph research, including data loading, experiment setup and performance evaluation. TGB will be maintained and updated on a regular basis and welcomes community feedback. TGB datasets, data loaders, example codes, evaluation setup, and leaderboards are publicly available at https://tgb.complexdatalab.com/. | Temporal Graph Benchmark for Machine Learning on Temporal Graphs | [
"Shenyang Huang",
"Farimah Poursafaei",
"Jacob Danovitch",
"Matthias Fey",
"Weihua Hu",
"Emanuele Rossi",
"Jure Leskovec",
"Michael M. Bronstein",
"Guillaume Rabusseau",
"Reihaneh Rabbany"
] | Track/Datasets_and_Benchmarks | poster | 2307.01026 | [
"https://github.com/shenyanghuang/tgb"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=q9hc7R8N7P | @inproceedings{
gustafson2023exploring,
title={Exploring Why Object Recognition Performance Degrades Across Income Levels and Geographies with Factor Annotations},
author={Laura Gustafson and Megan Richards and Melissa Hall and Caner Hazirbas and Diane Bouchacourt and Mark Ibrahim},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=q9hc7R8N7P}
} | Despite impressive advances in object-recognition, deep learning systems’ performance degrades significantly across geographies and lower income levels---raising pressing concerns of inequity. Addressing such performance gaps remains a challenge, as little is understood about why performance degrades across incomes or geographies.
We take a step in this direction by annotating images from Dollar Street, a popular benchmark of geographically and economically diverse images, labeling each image with factors such as color, shape, and background. These annotations unlock a new granular view into how objects differ across incomes/regions. We then use these object differences to pinpoint model vulnerabilities across incomes and regions.
We study a range of modern vision models, finding that performance disparities are most associated with differences in _texture, occlusion_, and images with _darker lighting_.
We illustrate how insights from our factor labels can surface mitigations to improve models' performance disparities.
As an example, we show that mitigating a model's vulnerability to texture can improve performance on the lower income level.
**We release all the factor annotations along with an interactive dashboard to facilitate research into more equitable vision systems.** | Exploring Why Object Recognition Performance Degrades Across Income Levels and Geographies with Factor Annotations | [
"Laura Gustafson",
"Megan Richards",
"Melissa Hall",
"Caner Hazirbas",
"Diane Bouchacourt",
"Mark Ibrahim"
] | Track/Datasets_and_Benchmarks | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=q4XNX15kSe | @inproceedings{
liu2023rppgtoolbox,
title={r{PPG}-Toolbox: Deep Remote {PPG} Toolbox},
author={Xin Liu and Girish Narayanswamy and Akshay Paruchuri and Xiaoyu Zhang and Jiankai Tang and Yuzhe Zhang and Soumyadip Sengupta and Shwetak Patel and Yuntao Wang and Daniel McDuff},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=q4XNX15kSe}
} | Camera-based physiological measurement is a fast growing field of computer vision. Remote photoplethysmography (rPPG) utilizes imaging devices (e.g., cameras) to measure the peripheral blood volume pulse (BVP) via photoplethysmography, and enables cardiac measurement via webcams and smartphones. However, the task is non-trivial with important pre-processing, modeling and post-processing steps required to obtain state-of-the-art results. Replication of results and benchmarking of new models is critical for scientific progress; however, as with many other applications of deep learning, reliable codebases are not easy to find or use. We present a comprehensive toolbox, rPPG-Toolbox, containing unsupervised and supervised rPPG models with support for public benchmark datasets, data augmentation and systematic evaluation: https://github.com/ubicomplab/rPPG-Toolbox. | rPPG-Toolbox: Deep Remote PPG Toolbox | [
"Xin Liu",
"Girish Narayanswamy",
"Akshay Paruchuri",
"Xiaoyu Zhang",
"Jiankai Tang",
"Yuzhe Zhang",
"Soumyadip Sengupta",
"Shwetak Patel",
"Yuntao Wang",
"Daniel McDuff"
] | Track/Datasets_and_Benchmarks | poster | 2210.00716 | [
"https://github.com/ubicomplab/rppg-toolbox"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=q3FJk2Nvkk | @inproceedings{
leroy2023impmarl,
title={{IMP}-{MARL}: a Suite of Environments for Large-scale Infrastructure Management Planning via {MARL}},
author={Pascal Leroy and Pablo G. Morato and Jonathan Pisane and Athanasios Kolios and Damien Ernst},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=q3FJk2Nvkk}
} | We introduce IMP-MARL, an open-source suite of multi-agent reinforcement learning (MARL) environments for large-scale Infrastructure Management Planning (IMP), offering a platform for benchmarking the scalability of cooperative MARL methods in real-world engineering applications.
In IMP, a multi-component engineering system is subject to a risk of failure due to its components' damage condition.
Specifically, each agent plans inspections and repairs for a specific system component, aiming to minimise maintenance costs while cooperating to minimise system failure risk.
With IMP-MARL, we release several environments including one related to offshore wind structural systems, in an effort to meet today's needs to improve management strategies to support sustainable and reliable energy systems.
Supported by IMP practical engineering environments featuring up to 100 agents, we conduct a benchmark campaign, where the scalability and performance of state-of-the-art cooperative MARL methods are compared against expert-based heuristic policies.
The results reveal that centralised training with decentralised execution methods scale better with the number of agents than fully centralised or decentralised RL approaches, while also outperforming expert-based heuristic policies in most IMP environments.
Based on our findings, we additionally outline remaining cooperation and scalability challenges that future MARL methods should still address.
Through IMP-MARL, we encourage the implementation of new environments and the further development of MARL methods. | IMP-MARL: a Suite of Environments for Large-scale Infrastructure Management Planning via MARL | [
"Pascal Leroy",
"Pablo G. Morato",
"Jonathan Pisane",
"Athanasios Kolios",
"Damien Ernst"
] | Track/Datasets_and_Benchmarks | poster | 2306.11551 | [
"https://github.com/moratodpg/imp_marl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=pyhv4qYCEJ | @inproceedings{
wang2023evaluating,
title={Evaluating Self-Supervised Learning for Molecular Graph Embeddings},
author={Hanchen Wang and Jean Kaddour and Shengchao Liu and Jian Tang and Joan Lasenby and Qi Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=pyhv4qYCEJ}
} | Graph Self-Supervised Learning (GSSL) provides a robust pathway for acquiring embeddings without expert labelling, a capability that carries profound implications for molecular graphs due to the staggering number of potential molecules and the high cost of obtaining labels. However, GSSL methods are designed not for optimisation within a specific domain but rather for transferability across a variety of downstream tasks. This broad applicability complicates their evaluation. Addressing this challenge, we present "Molecular Graph Representation Evaluation" (MOLGRAPHEVAL), generating detailed profiles of molecular graph embeddings with interpretable and diversified attributes. MOLGRAPHEVAL offers a suite of probing tasks grouped into three categories: (i) generic graph, (ii) molecular substructure, and (iii) embedding space properties. By leveraging MOLGRAPHEVAL to benchmark existing GSSL methods against both current downstream datasets and our suite of tasks, we uncover significant inconsistencies between inferences drawn solely from existing datasets and those derived from more nuanced probing. These findings suggest that current evaluation methodologies fail to capture the entirety of the landscape. | Evaluating Self-Supervised Learning for Molecular Graph Embeddings | [
"Hanchen Wang",
"Jean Kaddour",
"Shengchao Liu",
"Jian Tang",
"Joan Lasenby",
"Qi Liu"
] | Track/Datasets_and_Benchmarks | poster | 2206.08005 | [
"https://github.com/hansen7/molgrapheval"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=pvdm4B6JMK | @inproceedings{
feng2023chessgpt,
title={Chess{GPT}: Bridging Policy Learning and Language Modeling},
author={Xidong Feng and Yicheng Luo and Ziyan Wang and Hongrui Tang and Mengyue Yang and Kun Shao and David Henry Mguni and Yali Du and Jun Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=pvdm4B6JMK}
} | When solving decision-making tasks, humans typically depend on information from two key sources: (1) Historical policy data, which provides interaction replay from the environment, and (2) Analytical insights in natural language form, exposing the invaluable thought process or strategic considerations. Despite this, the majority of preceding research focuses on only one source: they either use historical replay exclusively to directly learn policy or value functions, or engage in language model training utilizing a mere language corpus. In this paper, we argue that a powerful autonomous agent should cover both sources. Thus, we propose ChessGPT, a GPT model bridging policy learning and language modeling by integrating data from these two sources in Chess games. Specifically, we build a large-scale game and language dataset related to chess. Leveraging the dataset, we showcase two model examples, ChessCLIP and ChessGPT, integrating policy learning and language modeling. Finally, we propose a full evaluation framework for evaluating a language model's chess ability. Experimental results validate our model and dataset's effectiveness. We open source our code, model, and dataset at https://github.com/waterhorse1/ChessGPT. | ChessGPT: Bridging Policy Learning and Language Modeling | [
"Xidong Feng",
"Yicheng Luo",
"Ziyan Wang",
"Hongrui Tang",
"Mengyue Yang",
"Kun Shao",
"David Henry Mguni",
"Yali Du",
"Jun Wang"
] | Track/Datasets_and_Benchmarks | poster | 2306.09200 | [
"https://github.com/waterhorse1/chessgpt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=pu3sNlrgQr | @inproceedings{
kotar2023are,
title={Are These the Same Apple? Comparing Images Based on Object Intrinsics},
author={Klemen Kotar and Stephen Tian and Hong-Xing Yu and Daniel LK Yamins and Jiajun Wu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=pu3sNlrgQr}
} | The human visual system can effortlessly recognize an object under different extrinsic factors such as lighting, object poses, and background, yet current computer vision systems often struggle with these variations. An important step to understanding and improving artificial vision systems is to measure image similarity purely based on intrinsic object properties that define object identity. This problem has been studied in the computer vision literature as re-identification, though mostly restricted to specific object categories such as people and cars. We propose to extend it to general object categories, exploring an image similarity metric based on object intrinsics. To benchmark such measurements, we collect the Common paired objects Under differenT Extrinsics (CUTE) dataset of 18, 000 images of 180 objects under different extrinsic factors such as lighting, poses, and imaging conditions. While existing methods such as LPIPS and CLIP scores do not measure object intrinsics well, we find that combining deep features learned from contrastive self-supervised learning with foreground filtering is a simple yet effective approach to approximating the similarity. We conduct an extensive survey of pre-trained features and foreground extraction methods to arrive at a strong baseline that best measures intrinsic object-centric image similarity among current methods. Finally, we demonstrate that our approach can aid in downstream applications such as acting as an analog for human subjects and improving generalizable re-identification. Please see our project website at https://s-tian.github.io/projects/cute/ for visualizations of the data and demos of our metric. | Are These the Same Apple? Comparing Images Based on Object Intrinsics | [
"Klemen Kotar",
"Stephen Tian",
"Hong-Xing Yu",
"Daniel LK Yamins",
"Jiajun Wu"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=plAix1NxhU | @inproceedings{
phothilimthana2023tpugraphs,
title={TpuGraphs: A Performance Prediction Dataset on Large Tensor Computational Graphs},
author={Phitchaya Mangpo Phothilimthana and Sami Abu-El-Haija and Kaidi Cao and Bahare Fatemi and Michael Burrows and Charith Mendis and Bryan Perozzi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=plAix1NxhU}
} | Precise hardware performance models play a crucial role in code optimizations. They can assist compilers in making heuristic decisions or aid autotuners in identifying the optimal configuration for a given program. For example, the autotuner for XLA, a machine learning compiler, discovered 10–20\% speedup on state-of-the-art models serving substantial production traffic at Google. Although there exist a few datasets for program performance prediction, they target small sub-programs such as basic blocks or kernels. This paper introduces TpuGraphs, a performance prediction dataset on full tensor programs, represented as computational graphs, running on Tensor Processing Units (TPUs). Each graph in the dataset represents the main computation of a machine learning workload, e.g., a training epoch or an inference step. Each data sample contains a computational graph, a compilation configuration, and the execution time of the graph when compiled with the configuration. The graphs in the dataset are collected from open-source machine learning programs, featuring popular model architectures (e.g., ResNet, EfficientNet, Mask R-CNN, and Transformer). TpuGraphs provides 25x more graphs than the largest graph property prediction dataset (with comparable graph sizes), and 770x larger graphs on average compared to existing performance prediction datasets on machine learning programs. This graph-level prediction task on large graphs introduces new challenges in learning, ranging from scalability, training efficiency, to model quality. | TpuGraphs: A Performance Prediction Dataset on Large Tensor Computational Graphs | [
"Phitchaya Mangpo Phothilimthana",
"Sami Abu-El-Haija",
"Kaidi Cao",
"Bahare Fatemi",
"Michael Burrows",
"Charith Mendis",
"Bryan Perozzi"
] | Track/Datasets_and_Benchmarks | poster | 2308.13490 | [
"https://github.com/google-research-datasets/tpu_graphs"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=pX5xlL1T4C | @inproceedings{
ro{\v{s}}kar2023renku,
title={Renku: a platform for sustainable data science},
author={Rok Ro{\v{s}}kar and Chandrasekhar Ramakrishnan and Michele Volpi and Fernando Perez-Cruz and Lilian Gasser and Firat Ozdemir and Patrick Paitz and Mohammad Alisafaee and Philipp Fischer and Ralf Grubenmann and Eliza Jean Harris and Tasko Olevski and Carl Remlinger and Luis Salamanca and Elisabet Capon Garcia and Lorenzo Cavazzi and Jakub Chrobasik and Darlin Andrea Cordoba Osnas and Alessandro Degano and Jimena Dupre and Wesley Johnson and Eike Kettner and Laura Kinkead and Sean Murphy and Flora Thiebaut and Olivier Verscheure},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=pX5xlL1T4C}
} | Data and code working together is fundamental to machine learning (ML), but the context around datasets and interactions between datasets and code are in general captured only rudimentarily. Context such as how the dataset was prepared and created, what source data were used, what code was used in processing, how the dataset evolved, and where it has been used and reused can provide much insight, but this information is often poorly documented. That is unfortunate since it makes datasets into black-boxes with potentially hidden characteristics that have downstream consequences. We argue that making dataset preparation more accessible and dataset usage easier to record and document would have significant benefits for the ML community: it would allow for greater diversity in datasets by inviting modification to published sources, simplify use of alternative datasets and, in doing so, make results more transparent and robust, while allowing for all contributions to be adequately credited. We present a platform, Renku, designed to support and encourage such sustainable development and use of data, datasets, and code, and we demonstrate its benefits through a few illustrative projects which span the spectrum from dataset creation to dataset consumption and showcasing. | Renku: a platform for sustainable data science | null | Track/Datasets_and_Benchmarks | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=pWkrU6raMt | @inproceedings{
mouatadid2023subseasonalclimateusa,
title={SubseasonalClimate{USA}: A Dataset for Subseasonal Forecasting and Benchmarking},
author={Soukayna Mouatadid and Paulo Orenstein and Genevieve Elaine Flaspohler and Miruna Oprescu and Judah Cohen and Franklyn Wang and Sean Edward Knight and Maria Geogdzhayeva and Samuel James Levang and Ernest Fraenkel and Lester Mackey},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=pWkrU6raMt}
} | Subseasonal forecasting of the weather two to six weeks in advance is critical for resource allocation and advance disaster notice but poses many challenges for the forecasting community. At this forecast horizon, physics-based dynamical models have limited skill, and the targets for prediction depend in a complex manner on both local weather variables and global climate variables. Recently, machine learning methods have shown promise in advancing the state of the art but only at the cost of complex data curation, integrating expert knowledge with aggregation across multiple relevant data sources, file formats, and temporal and spatial resolutions. To streamline this process and accelerate future development, we introduce SubseasonalClimateUSA, a curated dataset for training and benchmarking subseasonal forecasting models in the United States. We use this dataset to benchmark a diverse suite of models, including operational dynamical models, classical meteorological baselines, and ten state-of-the-art machine learning and deep learning-based methods from the literature. Overall, our benchmarks suggest simple and effective ways to extend the accuracy of current operational models. SubseasonalClimateUSA is regularly updated and accessible via the https://github.com/microsoft/subseasonal_data/ Python package. | SubseasonalClimateUSA: A Dataset for Subseasonal Forecasting and Benchmarking | [
"Soukayna Mouatadid",
"Paulo Orenstein",
"Genevieve Elaine Flaspohler",
"Miruna Oprescu",
"Judah Cohen",
"Franklyn Wang",
"Sean Edward Knight",
"Maria Geogdzhayeva",
"Samuel James Levang",
"Ernest Fraenkel",
"Lester Mackey"
] | Track/Datasets_and_Benchmarks | poster | 2109.10399 | [
"https://github.com/microsoft/subseasonal_data"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=pV1xV2RK6I | @inproceedings{
zhuang2023toolqa,
title={Tool{QA}: A Dataset for {LLM} Question Answering with External Tools},
author={Yuchen Zhuang and Yue Yu and Kuan Wang and Haotian Sun and Chao Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=pV1xV2RK6I}
} | Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMs' question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available for the broader scientific community on GitHub. | ToolQA: A Dataset for LLM Question Answering with External Tools | [
"Yuchen Zhuang",
"Yue Yu",
"Kuan Wang",
"Haotian Sun",
"Chao Zhang"
] | Track/Datasets_and_Benchmarks | poster | 2306.13304 | [
"https://github.com/night-chen/toolqa"
] | https://huggingface.co/papers/2306.13304 | 2 | 0 | 0 | 5 | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=pTSNoBTk8E | @inproceedings{
bhamidipaty2023dynadojo,
title={DynaDojo: An Extensible Platform for Benchmarking Scaling in Dynamical System Identification},
author={Logan Mondal Bhamidipaty and Tommy Bruzzese and Caryn Tran and Rami Ratl Mrad and Max Kanwal},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=pTSNoBTk8E}
} | Modeling complex dynamical systems poses significant challenges, with traditional methods struggling to work across a variety of systems and scale to high-dimensional dynamics. In response, we present DynaDojo, a novel benchmarking platform designed for data-driven dynamical system identification. DynaDojo enables comprehensive evaluation of how an algorithm's performance scales across three key dimensions: (1) the number of training samples provided, (2) the complexity of the dynamical system being modeled, and (3) the training samples required to achieve a target error threshold. Furthermore, DynaDojo enables studying out-of-distribution generalization (by providing multiple test conditions for each system) and active learning (by supporting closed-loop control). Through its user-friendly and easily extensible API, DynaDojo accommodates a wide range of user-defined $\texttt{Algorithms}$, $\texttt{Systems}$, and $\texttt{Challenges}$ (scaling metrics). The platform also prioritizes resource-efficient training for running on a cluster. To showcase its utility, in DynaDojo $\texttt{0.9}$, we include implementations of 7 baseline algorithms and 20 dynamical systems, along with many demo notebooks. This work aspires to make DynaDojo a unifying benchmarking platform for system identification, paralleling the role of OpenAI’s Gym in reinforcement learning. | DynaDojo: An Extensible Benchmarking Platform for Scalable Dynamical System Identification | [
"Logan Mondal Bhamidipaty",
"Tommy Bruzzese",
"Caryn Tran",
"Rami Ratl Mrad",
"Max Kanwal"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=pRnrg2bWr0 | @inproceedings{
liu2023openillumination,
title={OpenIllumination: A Multi-Illumination Dataset for Inverse Rendering Evaluation on Real Objects},
author={Isabella Liu and Linghao Chen and Ziyang Fu and Liwen Wu and Haian Jin and Zhong Li and Chin Ming Ryan Wong and Yi Xu and Ravi Ramamoorthi and Zexiang Xu and Hao Su},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=pRnrg2bWr0}
} | We introduce OpenIllumination, a real-world dataset containing over 108K images of 64 objects with diverse materials, captured under 72 camera views and a large number of different illuminations. For each image in the dataset, we provide accurate camera parameters, illumination ground truth, and foreground segmentation masks. Our dataset enables the quantitative evaluation of most inverse rendering and material decomposition methods for real objects. We examine several state-of-the-art inverse rendering methods on our dataset and compare their performances. The dataset and code can be found on the project page: https://oppo-us-research.github.io/OpenIllumination. | OpenIllumination: A Multi-Illumination Dataset for Inverse Rendering Evaluation on Real Objects | [
"Isabella Liu",
"Linghao Chen",
"Ziyang Fu",
"Liwen Wu",
"Haian Jin",
"Zhong Li",
"Chin Ming Ryan Wong",
"Yi Xu",
"Ravi Ramamoorthi",
"Zexiang Xu",
"Hao Su"
] | Track/Datasets_and_Benchmarks | poster | 2309.07921 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=oz4AGs0phP | @inproceedings{
zhu2023synmob,
title={SynMob: Creating High-Fidelity Synthetic {GPS} Trajectory Dataset for Urban Mobility Analysis},
author={Yuanshao Zhu and Yongchao Ye and Ying Wu and Xiangyu Zhao and James Yu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=oz4AGs0phP}
} | Urban mobility analysis has been extensively studied in the past decade using a vast amount of GPS trajectory data, which reveals hidden patterns in movement and human activity within urban landscapes. Despite its significant value, the availability of such datasets often faces limitations due to privacy concerns, proprietary barriers, and quality inconsistencies. To address these challenges, this paper presents a synthetic trajectory dataset with high fidelity, offering a general solution to these data accessibility issues. Specifically, the proposed dataset adopts a diffusion model as its synthesizer, with the primary aim of accurately emulating the spatial-temporal behavior of the original trajectory data. These synthesized data can retain the geo-distribution and statistical properties characteristic of real-world datasets. Through rigorous analysis and case studies, we validate the high similarity and utility between the proposed synthetic trajectory dataset and real-world counterparts. Such validation underscores the practicality of synthetic datasets for urban mobility analysis and advocates for its wider acceptance within the research community. Finally, we publicly release the trajectory synthesizer and datasets, aiming to enhance the quality and availability of synthetic trajectory datasets and encourage continued contributions to this rapidly evolving field. The dataset is released for public online availability https://github.com/Applied-Machine-Learning-Lab/SynMob. | SynMob: Creating High-Fidelity Synthetic GPS Trajectory Dataset for Urban Mobility Analysis | [
"Yuanshao Zhu",
"Yongchao Ye",
"Ying Wu",
"Xiangyu Zhao",
"James Yu"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=oi1MUMk5NF | @inproceedings{
goshvadi2023discs,
title={{DISCS}: A Benchmark for Discrete Sampling},
author={Katayoon Goshvadi and Haoran Sun and Xingchao Liu and Azade Nova and Ruqi Zhang and Will Sussman Grathwohl and Dale Schuurmans and Hanjun Dai},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=oi1MUMk5NF}
} | Sampling in discrete spaces, with critical applications in simulation and optimization, has recently been boosted by significant advances in gradient-based approaches that exploit modern accelerators like GPUs. However, two key challenges are hindering further advancement in research on discrete sampling. First, since there is no consensus on experimental settings and evaluation setups, the empirical results in different research papers are often not comparable. Second, implementing samplers and target distributions often requires a nontrivial amount of effort in terms of calibration and parallelism. To tackle these challenges, we propose DISCS (DISCrete Sampling), a tailored package and benchmark that supports unified and efficient experiment implementation and evaluations for discrete sampling in three types of tasks: sampling from classical graphical models and energy based generative models, and sampling for solving combinatorial optimization. Throughout the comprehensive evaluations in DISCS, we gained new insights into scalability, design principles for proposal distributions, and lessons for adaptive sampling design. DISCS efficiently implements representative discrete samplers in existing research works as baselines and offers a simple interface that researchers can conveniently add new discrete samplers and directly compare their performance with the benchmark result in a calibrated setup. | DISCS: A Benchmark for Discrete Sampling | [
"Katayoon Goshvadi",
"Haoran Sun",
"Xingchao Liu",
"Azade Nova",
"Ruqi Zhang",
"Will Sussman Grathwohl",
"Dale Schuurmans",
"Hanjun Dai"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=oQSfcVTNr1 | @inproceedings{
wang2023soundcam,
title={SoundCam: A Dataset for Finding Humans Using Room Acoustics},
author={Mason Long Wang and Samuel Clarke and Jui-Hsien Wang and Ruohan Gao and Jiajun Wu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=oQSfcVTNr1}
} | A room’s acoustic properties are a product of the room’s geometry, the objects within the room, and their specific positions. A room’s acoustic properties can be characterized by its impulse response (RIR) between a source and listener location, or roughly inferred from recordings of natural signals present in the room. Variations in the positions of objects in a room can effect measurable changes in the room’s acoustic properties, as characterized by the RIR. Existing datasets of RIRs either do not systematically vary positions of objects in an environment, or they consist of only simulated RIRs. We present SoundCam, the largest dataset of unique RIRs from in-the-wild rooms publicly released to date. It includes 5,000 10-channel real-world measurements of room impulse responses and 2,000 10-channel recordings of music in three different rooms, including a controlled acoustic lab, an in-the-wild living room, and a conference room, with different humans in positions throughout each room. We show that these measurements can be used for interesting tasks, such as detecting and identifying humans, and tracking their positions. | SoundCam: A Dataset for Finding Humans Using Room Acoustics | [
"Mason Long Wang",
"Samuel Clarke",
"Jui-Hsien Wang",
"Ruohan Gao",
"Jiajun Wu"
] | Track/Datasets_and_Benchmarks | poster | 2311.03517 | [
""
] | https://huggingface.co/papers/2311.03517 | 2 | 10 | 0 | 5 | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=oIUXpBnyjv | @inproceedings{
niu2023lightzero,
title={LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios},
author={Yazhe Niu and Yuan Pu and Zhenjie Yang and Xueyan Li and Tong Zhou and Jiyuan Ren and Shuai Hu and Hongsheng Li and Yu Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=oIUXpBnyjv}
} | Building agents based on tree-search planning capabilities with learned models has achieved remarkable success in classic decision-making problems, such as Go and Atari.
However, it has been deemed challenging or even infeasible to extend Monte Carlo Tree Search (MCTS) based algorithms to diverse real-world applications, especially when these environments involve complex action spaces and significant simulation costs, or inherent stochasticity.
In this work, we introduce LightZero, the first unified benchmark for deploying MCTS/MuZero in general sequential decision scenarios.
Specifically, we summarize the most critical challenges in designing a general MCTS-style decision-making solver, then decompose the tightly-coupled algorithm and system design of tree-search RL methods into distinct sub-modules.
By incorporating more appropriate exploration and optimization strategies, we can significantly enhance these sub-modules and construct powerful LightZero agents to tackle tasks across a wide range of domains, such as board games, Atari, MuJoCo, MiniGrid and GoBigger.
Detailed benchmark results reveal the significant potential of such methods in building scalable and efficient decision intelligence.
The code is available as part of OpenDILab at https://github.com/opendilab/LightZero. | LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios | [
"Yazhe Niu",
"Yuan Pu",
"Zhenjie Yang",
"Xueyan Li",
"Tong Zhou",
"Jiyuan Ren",
"Shuai Hu",
"Hongsheng Li",
"Yu Liu"
] | Track/Datasets_and_Benchmarks | oral | 2310.08348 | [
"https://github.com/opendilab/LightZero"
] | https://huggingface.co/papers/2310.08348 | 1 | 4 | 0 | 9 | [
"OpenDILabCommunity/PongNoFrameskip-v4-MuZero",
"OpenDILabCommunity/PongNoFrameskip-v4-SampledEfficientZero",
"OpenDILabCommunity/LunarLander-v2-MuZero",
"OpenDILabCommunity/CartPole-v0-GumbelMuZero",
"OpenDILabCommunity/CartPole-v0-EfficientZero",
"OpenDILabCommunity/TicTacToe-play-with-bot-AlphaZero",
"OpenDILabCommunity/TicTacToe-play-with-bot-MuZero",
"OpenDILabCommunity/BreakoutNoFrameskip-v4-MuZero",
"OpenDILabCommunity/MsPacmanNoFrameskip-v4-MuZero",
"OpenDILabCommunity/Pendulum-v1-MuZero",
"OpenDILabCommunity/CartPole-v0-SampledEfficientZero",
"OpenDILabCommunity/PongNoFrameskip-v4-EfficientZero",
"OpenDILabCommunity/CartPole-v0-MuZero",
"OpenDILabCommunity/Pendulum-v1-EfficientZero",
"OpenDILabCommunity/MsPacmanNoFrameskip-v4-EfficientZero",
"OpenDILabCommunity/TicTacToe-play-with-bot-SampledAlphaZero",
"OpenDILabCommunity/TicTacToe-play-with-bot-GumbelMuZero",
"OpenDILabCommunity/LunarLander-v2-EfficientZero",
"OpenDILabCommunity/Pendulum-v1-SampledEfficientZero",
"OpenDILabCommunity/MsPacmanNoFrameskip-v4-SampledEfficientZero"
] | [] | [] | 1 |
null | https://openreview.net/forum?id=n8hpztIuet | @inproceedings{
cai2023smplerx,
title={{SMPL}er-X: Scaling Up Expressive Human Pose and Shape Estimation},
author={Zhongang Cai and Wanqi Yin and Ailing Zeng and CHEN WEI and Qingping SUN and Yanjun Wang and Hui En Pang and Haiyi Mei and Mingyuan Zhang and Lei Zhang and Chen Change Loy and Lei Yang and Ziwei Liu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=n8hpztIuet}
} | Expressive human pose and shape estimation (EHPS) unifies body, hands, and face motion capture with numerous applications. Despite encouraging progress, current state-of-the-art methods still depend largely on a confined set of training datasets. In this work, we investigate scaling up EHPS towards the first generalist foundation model (dubbed SMPLer-X), with up to ViT-Huge as the backbone and training with up to 4.5M instances from diverse data sources. With big data and the large model, SMPLer-X exhibits strong performance across diverse test benchmarks and excellent transferability to even unseen environments. 1) For the data scaling, we perform a systematic investigation on 32 EHPS datasets, including a wide range of scenarios that a model trained on any single dataset cannot handle. More importantly, capitalizing on insights obtained from the extensive benchmarking process, we optimize our training scheme and select datasets that lead to a significant leap in EHPS capabilities. 2) For the model scaling, we take advantage of vision transformers to study the scaling law of model sizes in EHPS. Moreover, our finetuning strategy turns SMPLer-X into specialist models, allowing them to achieve further performance boosts. Notably, our foundation model SMPLer-X consistently delivers state-of-the-art results on seven benchmarks such as AGORA (107.2 mm NMVE), UBody (57.4 mm PVE), EgoBody (63.6 mm PVE), and EHF (62.3 mm PVE without finetuning). | SMPLer-X: Scaling Up Expressive Human Pose and Shape Estimation | [
"Zhongang Cai",
"Wanqi Yin",
"Ailing Zeng",
"CHEN WEI",
"Qingping SUN",
"Yanjun Wang",
"Hui En Pang",
"Haiyi Mei",
"Mingyuan Zhang",
"Lei Zhang",
"Chen Change Loy",
"Lei Yang",
"Ziwei Liu"
] | Track/Datasets_and_Benchmarks | poster | 2309.17448 | [
"https://github.com/caizhongang/SMPLer-X"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=n581purqB4 | @inproceedings{
qu2023abdomenatlask,
title={AbdomenAtlas-8K: Annotating 8,000 {CT} Volumes for Multi-Organ Segmentation in Three Weeks},
author={Chongyu Qu and Tiezheng Zhang and Hualin Qiao and Jie Liu and Yucheng Tang and Alan Yuille and Zongwei Zhou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=n581purqB4}
} | Annotating medical images, particularly for organ segmentation, is laborious and time-consuming. For example, annotating an abdominal organ requires an estimated rate of 30-60 minutes per CT volume based on the expertise of an annotator and the size, visibility, and complexity of the organ. Therefore, publicly available datasets for multi-organ segmentation are often limited in data size and organ diversity. This paper proposes an active learning procedure to expedite the annotation process for organ segmentation and creates the largest multi-organ dataset (by far) with the spleen, liver, kidneys, stomach, gallbladder, pancreas, aorta, and IVC annotated in 8,448 CT volumes, equating to 3.2 million slices. The conventional annotation methods would take an experienced annotator up to 1,600 weeks (or roughly 30.8 years) to complete this task. In contrast, our annotation procedure has accomplished this task in three weeks (based on an 8-hour workday, five days a week) while maintaining a similar or even better annotation quality. This achievement is attributed to three unique properties of our method: (1) label bias reduction using multiple pre-trained segmentation models, (2) effective error detection in the model predictions, and (3) attention guidance for annotators to make corrections on the most salient errors. Furthermore, we summarize the taxonomy of common errors made by AI algorithms and annotators. This allows for continuous improvement of AI and annotations, significantly reducing the annotation costs required to create large-scale datasets for a wider variety of medical imaging tasks. Code and dataset are available at https://github.com/MrGiovanni/AbdomenAtlas | AbdomenAtlas-8K: Annotating 8,000 CT Volumes for Multi-Organ Segmentation in Three Weeks | [
"Chongyu Qu",
"Tiezheng Zhang",
"Hualin Qiao",
"Jie Liu",
"Yucheng Tang",
"Alan Yuille",
"Zongwei Zhou"
] | Track/Datasets_and_Benchmarks | poster | 2305.09666 | [
"https://github.com/mrgiovanni/abdomenatlas"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=n4OwK8cpx2 | @inproceedings{
chen2023reasoner,
title={{REASONER}: An Explainable Recommendation Dataset with Comprehensive Labeling Ground Truths},
author={Xu Chen and Jingsen Zhang and Lei Wang and Quanyu Dai and Zhenhua Dong and Ruiming Tang and Rui Zhang and Li Chen and Xin Zhao and Ji-Rong Wen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=n4OwK8cpx2}
} | Explainable recommendation has attracted much attention from the industry and academic communities. It has shown great potential to improve the recommendation persuasiveness, informativeness and user satisfaction. In the past few years, while a lot of promising explainable recommender models have been proposed, the datasets used to evaluate them still suffer from several limitations, for example, the explanation ground truths are not labeled by the real users, the explanations are mostly single-modal and around only one aspect. To bridge these gaps, in this paper, we build a new explainable recommendation dataset, which, to our knowledge, is the first contribution that provides a large amount of real-user-labeled multi-modal and multi-aspect explanation ground truths. Specifically, we first develop a video recommendation platform, where a series of questions around the recommendation explainability are carefully designed. Then, we recruit about 3000 high-quality labelers with different backgrounds to use the system, and collect their behaviors and feedback on our questions. In this paper, we detail the construction process of our dataset and also provide extensive analysis of its characteristics. In addition, we develop a library, where ten well-known explainable recommender models are implemented in a unified framework. Based on this library, we build several benchmarks for different explainable recommendation tasks. Finally, we present many new opportunities brought by our dataset, which are expected to promote the field of explainable recommendation. Our dataset, library and the related documents have been released at https://reasoner2023.github.io/. | REASONER: An Explainable Recommendation Dataset with Comprehensive Labeling Ground Truths | [
"Xu Chen",
"Jingsen Zhang",
"Lei Wang",
"Quanyu Dai",
"Zhenhua Dong",
"Ruiming Tang",
"Rui Zhang",
"Li Chen",
"Xin Zhao",
"Ji-Rong Wen"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=n2wW7goGky | @inproceedings{
chauhan2023airdelhi,
title={AirDelhi: Fine-Grained Spatio-Temporal Particulate Matter Dataset From Delhi For {ML} based Modeling},
author={Sachin Chauhan and Zeel B Patel and Sayan Ranu and Rijurekha Sen and Nipun Batra},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=n2wW7goGky}
} | Air pollution poses serious health concerns in developing countries, such as India, necessitating large-scale measurement for correlation analysis, policy recommendations, and informed decision-making. However, fine-grained data collection is costly. Specifically, static sensors for pollution measurement cost several thousand dollars per unit, leading to inadequate deployment and coverage. To complement the existing sparse static sensor network, we propose a mobile sensor network utilizing lower-cost PM2.5 sensors mounted on public buses in the Delhi-NCR region of India. Through this exercise, we introduce a novel dataset, AirDelhi, comprising PM2.5 and PM10 measurements. This dataset is made publicly available at https://www.cse.iitd.ac.in/pollutiondata, serving as a valuable resource for machine learning (ML) researchers and environmentalists. We present three key contributions with the release of this dataset. Firstly, through in-depth statistical analysis, we demonstrate that the released dataset significantly differs from existing pollution datasets, highlighting its uniqueness and potential for new insights. Secondly, the dataset quality has been validated against existing expensive sensors. Thirdly, we conduct a benchmarking exercise (https://github.com/sachin-iitd/DelhiPMDatasetBenchmark), evaluating state-of-the-art methods for interpolation, feature imputation, and forecasting on this dataset, which is the largest publicly available PM dataset to date. The results of the benchmarking exercise underscore the substantial disparities in accuracy between the proposed dataset and other publicly available datasets. This finding highlights the complexity and richness of our dataset, emphasizing its value for advancing research in the field of air pollution. | AirDelhi: Fine-Grained Spatio-Temporal Particulate Matter Dataset From Delhi For ML based Modeling | null | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
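For readers who want to work with records like the ones above programmatically, below is a minimal sketch of loading and filtering the dataset with the Hugging Face `datasets` library. The repository ID is a hypothetical placeholder, and the field names (`title`, `type`, `arxiv_id`) are assumptions inferred from the preview rows rather than a confirmed schema.

```python
# Minimal sketch, not part of the original dataset card.
# Assumes the dataset is hosted on the Hugging Face Hub; the repository ID
# below is a hypothetical placeholder -- replace it with the actual path.
from datasets import load_dataset

ds = load_dataset("username/neurips-2023-db-track-papers", split="train")  # hypothetical repo ID

# Each record is a flat dict of paper metadata. The field names used here
# (title, type, arxiv_id) are assumptions based on the preview rows.
for row in ds:
    if row.get("type") == "oral":
        print(row.get("title"), "-", row.get("arxiv_id"))
```

The same records can also be exported to pandas via `ds.to_pandas()` for further filtering, for example by presentation type or by the number of linked models.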