bibtex_url (null) | proceedings (string, length 42) | bibtext (string, length 286 to 1.35k) | abstract (string, length 558 to 2.37k) | title (string, length 18 to 163) | authors (sequence, 1 to 56 items, nullable) | id (string, 1 class) | type (string, 2 classes) | arxiv_id (string, length 0 to 10) | GitHub (sequence, 1 item) | paper_page (string, 63 classes) | n_linked_authors (int64, -1 to 10) | upvotes (int64, -1 to 45) | num_comments (int64, -1 to 6) | n_authors (int64, -1 to 40) | Models (sequence, 0 to 100 items) | Datasets (sequence, 0 to 10 items) | Spaces (sequence, 0 to 100 items) | paper_page_exists_pre_conf (int64, 0 or 1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=hVhoVxVD9D | @inproceedings{
stewart2023ssleol,
title={{SSL}4{EO}-L: Datasets and Foundation Models for Landsat Imagery},
author={Adam J Stewart and Nils Lehmann and Isaac Corley and Yi Wang and Yi-Chia Chang and Nassim Ait Ait Ali Braham and Shradha Sehgal and Caleb Robinson and Arindam Banerjee},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=hVhoVxVD9D}
} | The Landsat program is the longest-running Earth observation program in history, with 50+ years of data acquisition by 8 satellites. The multispectral imagery captured by sensors onboard these satellites is critical for a wide range of scientific fields. Despite the increasing popularity of deep learning and remote sensing, the majority of researchers still use decision trees and random forests for Landsat image analysis due to the prevalence of small labeled datasets and lack of foundation models. In this paper, we introduce SSL4EO-L, the first ever dataset designed for Self-Supervised Learning for Earth Observation for the Landsat family of satellites (including 3 sensors and 2 product levels) and the largest Landsat dataset in history (5M image patches). Additionally, we modernize and re-release the L7 Irish and L8 Biome cloud detection datasets, and introduce the first ML benchmark datasets for Landsats 4–5 TM and Landsat 7 ETM+ SR. Finally, we pre-train the first foundation models for Landsat imagery using SSL4EO-L and evaluate their performance on multiple semantic segmentation tasks. All datasets and model weights are available via the TorchGeo library, making reproducibility and experimentation easy, and enabling scientific advancements in the burgeoning field of remote sensing for a multitude of downstream applications. | SSL4EO-L: Datasets and Foundation Models for Landsat Imagery | [
"Adam J Stewart",
"Nils Lehmann",
"Isaac Corley",
"Yi Wang",
"Yi-Chia Chang",
"Nassim Ait Ait Ali Braham",
"Shradha Sehgal",
"Caleb Robinson",
"Arindam Banerjee"
] | Track/Datasets_and_Benchmarks | poster | 2306.09424 | [
"https://github.com/microsoft/torchgeo"
] | https://huggingface.co/papers/2306.09424 | 1 | 0 | 0 | 9 | [] | [] | [] | 1 |
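The SSL4EO-L row above notes that the datasets and pre-trained weights are distributed through the TorchGeo library. The sketch below is a hedged loading example: the `SSL4EOL` class name and its arguments are assumptions based on recent TorchGeo releases, so verify them against the TorchGeo documentation before use.

```python
# Minimal sketch, assuming TorchGeo (>= 0.5) exposes the SSL4EO-L pre-training
# imagery as `torchgeo.datasets.SSL4EOL`; the class name and arguments are
# assumptions, so check the TorchGeo documentation before relying on this.
from torch.utils.data import DataLoader
from torchgeo.datasets import SSL4EOL  # assumed class name

dataset = SSL4EOL(root="data/ssl4eo-l", download=True)  # fetches Landsat patches
loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=4)

for batch in loader:
    images = batch["image"]  # multispectral patches as a float tensor
    print(images.shape)
    break
```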
null | https://openreview.net/forum?id=hL8uGYjlHU | @inproceedings{
jang2023msodai,
title={M\${\textasciicircum}\{2\}\${SODAI}: Multi-Modal Maritime Object Detection Dataset With {RGB} and Hyperspectral Image Sensors},
author={Jonggyu Jang and Sangwoo Oh and Youjin Kim and Dongmin Seo and Youngchol Choi and Hyun Jong Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=hL8uGYjlHU}
} | Object detection in aerial images is a growing area of research, with maritime object detection being a particularly important task for reliable surveillance, monitoring, and active rescue.
Notwithstanding the astonishing advances in computer vision technologies, detecting ships and floating matter in these images is challenging due to factors such as object distance. Pervasive sea surface effects such as sunlight reflection, wind, and waves make matters worse.
Hyperspectral image (HSI) sensors, providing more than 100 channels in wavelengths of visible and near-infrared, can extract intrinsic information of materials from a few pixels of HSIs.
The advent of HSI sensors motivates us to leverage HSIs to circumvent false positives due to the sea surface effects.
Unfortunately, there are few public HSI datasets due to the high cost and labor involved in collecting them, hindering object detection research based on HSIs.
We have collected and annotated a new dataset called ``Multi-Modal Ship and flOating matter Detection in Aerial Images (M$^{2}$SODAI)'', which includes synchronized image pairs of RGB and HSI data, along with bounding box labels for nearly 6,000 instances per category.
We also propose a new multi-modal extension of the feature pyramid network called DoubleFPN.
Extensive experiments on our benchmark demonstrate that fusion of RGB and HSI data can enhance mAP, especially in the presence of the sea surface effects. | M^2SODAI: Multi-Modal Maritime Object Detection Dataset With RGB and Hyperspectral Image Sensors | [
"Jonggyu Jang",
"Sangwoo Oh",
"Youjin Kim",
"Dongmin Seo",
"Youngchol Choi",
"Hyun Jong Yang"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=hJPATsBb3l | @inproceedings{
zhang2023mexam,
title={M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models},
author={Wenxuan Zhang and Mahani Aljunied and Chang Gao and Yew Ken Chia and Lidong Bing},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=hJPATsBb3l}
} | Despite the existence of various benchmarks for evaluating natural language processing models, we argue that human exams are a more suitable means of evaluating general intelligence for large language models (LLMs), as they inherently demand a much wider range of abilities such as language understanding, domain knowledge, and problem-solving skills. To this end, we introduce M3Exam, a novel benchmark sourced from real and official human exam questions for evaluating LLMs in a multilingual, multimodal, and multilevel context. M3Exam exhibits three unique characteristics: (1) multilingualism, encompassing questions from multiple countries that require strong multilingual proficiency and cultural knowledge; (2) multimodality, accounting for the multimodal nature of many exam questions to test the model's multimodal understanding capability; and (3) multilevel structure, featuring exams from three critical educational periods to comprehensively assess a model's proficiency at different levels. In total, M3Exam contains 12,317 questions in 9 diverse languages with three educational levels, where about 23\% of the questions require processing images for successful solving. We assess the performance of top-performing LLMs on M3Exam and find that current models, including GPT-4, still struggle with multilingual text, particularly in low-resource and non-Latin script languages. Multimodal LLMs also perform poorly with complex multimodal questions. We believe that M3Exam can be a valuable resource for comprehensively evaluating LLMs by examining their multilingual and multimodal abilities and tracking their development. Data and evaluation code is available at \url{https://github.com/DAMO-NLP-SG/M3Exam}. | M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models | [
"Wenxuan Zhang",
"Mahani Aljunied",
"Chang Gao",
"Yew Ken Chia",
"Lidong Bing"
] | Track/Datasets_and_Benchmarks | poster | 2306.05179 | [
"https://github.com/damo-nlp-sg/m3exam"
] | https://huggingface.co/papers/2306.05179 | 2 | 2 | 0 | 5 | [
"SeaLLMs/SeaLLM-7B-v2",
"SeaLLMs/SeaLLM-13B-Chat",
"SeaLLMs/SeaLLM-7B-v2.5",
"SeaLLMs/SeaLLMs-v3-7B-Chat",
"SeaLLMs/SeaLLM-7B-v2-gguf",
"SeaLLMs/SeaLLMs-v3-1.5B-Chat",
"LoneStriker/SeaLLM-7B-v2-GGUF",
"SeaLLMs/SeaLLMs-v3-1.5B",
"SeaLLMs/SeaLLMs-v3-7B",
"SeaLLMs/SeaLMMM-7B-v0.1",
"SorawitChok/SeaLLM-7B-v2.5-AWQ",
"QuantFactory/SeaLLM3-7B-Chat-GGUF",
"lightontech/SeaLLM3-7B-Chat-AWQ",
"LoneStriker/SeaLLM-7B-v2-6.0bpw-h6-exl2",
"LoneStriker/SeaLLM-7B-v2-3.0bpw-h6-exl2",
"LoneStriker/SeaLLM-7B-v2-8.0bpw-h8-exl2",
"LoneStriker/SeaLLM-7B-v2-5.0bpw-h6-exl2",
"LoneStriker/SeaLLM-7B-v2-4.0bpw-h6-exl2",
"LoneStriker/SeaLLM-7B-v2-AWQ",
"QuantFactory/SeaLLM-7B-v2-GGUF",
"QuantFactory/SeaLLM-7B-v2.5-GGUF",
"RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-gguf",
"RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2-8bits",
"RichardErkhov/SeaLLMs_-_SeaLLM-7B-v2.5-gguf",
"NghiemAbe/SeaLLM-7B-v2.5-AWQ",
"SorawitChok/SeaLLM3-7B-Chat-AWQ",
"QuantFactory/SeaLLMs-v3-7B-Chat-GGUF",
"NghiemAbe/SeaLLM-v2.5-Legal-v4",
"NghiemAbe/SeaLLM-v2.5-Legal-v4-AWQ",
"RichardErkhov/SeaLLMs_-_SeaLLMs-v3-7B-gguf",
"RichardErkhov/SeaLLMs_-_SeaLLMs-v3-7B-Chat-gguf",
"mohan11111/SeaLLM-7B-v2-pytorch",
"RichardErkhov/SeaLLMs_-_SeaLLMs-v3-1.5B-gguf"
] | [
"chiayewken/m3exam",
"SEACrowd/m3exam",
"floschne/multimodal-m3exam"
] | [
"eduagarcia/open_pt_llm_leaderboard",
"SeaLLMs/SeaLLM-Chat",
"Auto-Arena/Leaderboard",
"SeaLLMs/SeaExam_leaderboard",
"SeaEval/SeaEval_Leaderboard",
"SeaLLMs/SeaLLM-7B-v2.5-simple",
"nxphi47/MultiPurpose-Chatbot-DEMO",
"nxphi47/test-zero-gpu"
] | 1 |
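The M3Exam row above lists community mirrors of the benchmark on the Hugging Face Hub (e.g. `chiayewken/m3exam`). A hedged inspection sketch follows; the assumption that the mirror is organized into per-language configurations is unverified, so the available configuration names are queried first.

```python
# Hedged sketch: browse one M3Exam mirror from the Hub. The per-language
# configuration layout is an assumption, so the config names are listed
# before loading anything.
from datasets import get_dataset_config_names, load_dataset

configs = get_dataset_config_names("chiayewken/m3exam")
print(configs)

exam = load_dataset("chiayewken/m3exam", configs[0])
print(exam)  # splits and question fields for the first configuration
```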
null | https://openreview.net/forum?id=gO0kS0eE0F | @inproceedings{
ahdritz2023openproteinset,
title={OpenProteinSet: Training data for structural biology at scale},
author={Gustaf Ahdritz and Nazim Bouatta and Sachin Kadyan and Lukas Jarosch and Dan Berenberg and Ian Fisk and Andrew Martin Watkins and Stephen Ra and Richard Bonneau and Mohammed AlQuraishi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=gO0kS0eE0F}
} | Multiple sequence alignments (MSAs) of proteins encode rich biological information and have been workhorses in bioinformatic methods for tasks like protein design and protein structure prediction for decades. Recent breakthroughs like AlphaFold2 that use transformers to attend directly over large quantities of raw MSAs have reaffirmed their importance. Generation of MSAs is highly computationally intensive, however, and no datasets comparable to those used to train AlphaFold2 have been made available to the research community, hindering progress in machine learning for proteins. To remedy this problem, we introduce OpenProteinSet, an open-source corpus of more than 16 million MSAs, associated structural homologs from the Protein Data Bank, and AlphaFold2 protein structure predictions. We have previously demonstrated the utility of OpenProteinSet by successfully retraining AlphaFold2 on it. We expect OpenProteinSet to be broadly useful as training and validation data for 1) diverse tasks focused on protein structure, function, and design and 2) large-scale multimodal machine learning research. | OpenProteinSet: Training data for structural biology at scale | [
"Gustaf Ahdritz",
"Nazim Bouatta",
"Sachin Kadyan",
"Lukas Jarosch",
"Dan Berenberg",
"Ian Fisk",
"Andrew Martin Watkins",
"Stephen Ra",
"Richard Bonneau",
"Mohammed AlQuraishi"
] | Track/Datasets_and_Benchmarks | poster | 2308.05326 | [
"https://github.com/aqlaboratory/openfold"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=gMYsxTin4x | @inproceedings{
bae2023sit,
title={SiT Dataset: Socially Interactive Pedestrian Trajectory Dataset for Social Navigation Robots},
author={Jongwook Bae and Jungho Kim and Junyong Yun and Changwon Kang and Jeongseon Choi and Chanhyeok Kim and Junho Lee and Jungwook Choi and Jun Won Choi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=gMYsxTin4x}
} | To ensure secure and dependable mobility in environments shared by humans and robots, social navigation robots should possess the capability to accurately perceive and predict the trajectories of nearby pedestrians. In this paper, we present a novel dataset of pedestrian trajectories, referred to as Social Interactive Pedestrian Trajectory (SiT) dataset, which can be used to train pedestrian detection, tracking, and trajectory prediction models needed to design social navigation robots. Our dataset includes sequential raw data captured by two 3D LiDARs and five cameras covering a 360-degree view, two inertial measurement units (IMUs), and real-time kinematic positioning (RTK), as well as annotations including 2D & 3D boxes, object classes, and object IDs. Thus far, various human trajectory datasets have been introduced to support the development of pedestrian motion forecasting models. Our SiT dataset differs from these datasets in the following three respects. First, whereas the pedestrian trajectory data in other datasets were obtained from static scenes, our data was collected while the robot navigated in a crowded environment, capturing human-robot interactive scenarios in motion. Second, unlike many autonomous driving datasets where pedestrians are usually at a distance from vehicles and found on pedestrian paths, our dataset offers a distinctive view of navigation robots interacting closely with humans in crowded settings. Third, our dataset has been carefully organized to facilitate the training and evaluation of end-to-end prediction models encompassing 3D detection, 3D multi-object tracking, and trajectory prediction. This design allows for an end-to-end unified modular approach across different tasks. We introduce a comprehensive benchmark for assessing models across all aforementioned tasks and present the performance of multiple baseline models as part of our evaluation. Our dataset provides a strong foundation for future research in pedestrian trajectory prediction, which could expedite the development of safe and agile social navigation robots. The SiT dataset, development kit, and trained models are publicly available at: https://spalaboratory.github.io/SiT/ | SiT Dataset: Socially Interactive Pedestrian Trajectory Dataset for Social Navigation Robots | [
"Jongwook Bae",
"Jungho Kim",
"Junyong Yun",
"Changwon Kang",
"Jeongseon Choi",
"Chanhyeok Kim",
"Junho Lee",
"Jungwook Choi",
"Jun Won Choi"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=gFf0a0ZxJM | @inproceedings{
ge2023openagi,
title={Open{AGI}: When {LLM} Meets Domain Experts},
author={Yingqiang Ge and Wenyue Hua and Kai Mei and jianchao ji and Juntao Tan and Shuyuan Xu and Zelong Li and Yongfeng Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=gFf0a0ZxJM}
} | Human Intelligence (HI) excels at combining basic skills to solve complex tasks. This capability is vital for Artificial Intelligence (AI) and should be embedded in comprehensive AI Agents, enabling them to harness expert models for complex task-solving towards Artificial General Intelligence (AGI). Large Language Models (LLMs) show promising learning and reasoning abilities, and can effectively use external models, tools, plugins, or APIs to tackle complex problems. In this work, we introduce OpenAGI, an open-source AGI research and development platform designed for solving multi-step, real-world tasks. Specifically, OpenAGI uses a dual strategy, integrating standard benchmark tasks for benchmarking and evaluation, and open-ended tasks including more expandable models, tools, plugins, or APIs for creative problem-solving. Tasks are presented as natural language queries to the LLM, which then selects and executes appropriate models. We also propose a Reinforcement Learning from Task Feedback (RLTF) mechanism that uses task results to improve the LLM's task-solving ability, which creates a self-improving AI feedback loop. While we acknowledge that AGI is a broad and multifaceted research challenge with no singularly defined solution path, the integration of LLMs with domain-specific expert models, inspired by mirroring the blend of general and specialized intelligence in humans, offers a promising approach towards AGI. We are open-sourcing the OpenAGI project's code, dataset, benchmarks, evaluation methods, and the UI demo to foster community involvement in AGI advancement: https://github.com/agiresearch/OpenAGI. | OpenAGI: When LLM Meets Domain Experts | null | Track/Datasets_and_Benchmarks | poster | 2304.04370 | [
"https://github.com/agiresearch/openagi"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=g7OX2sOJtn | @inproceedings{
yang2023leandojo,
title={LeanDojo: Theorem Proving with Retrieval-Augmented Language Models},
author={Kaiyu Yang and Aidan M Swope and Alex Gu and Rahul Chalamala and Peiyang Song and Shixing Yu and Saad Godil and Ryan Prenger and Anima Anandkumar},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=g7OX2sOJtn}
} | Large language models (LLMs) have shown promise in proving formal theorems using proof assistants such as Lean. However, existing methods are difficult to reproduce or build on, due to private code, data, and large compute requirements. This has created substantial barriers to research on machine learning methods for theorem proving. This paper removes these barriers by introducing LeanDojo: an open-source Lean playground consisting of toolkits, data, models, and benchmarks. LeanDojo extracts data from Lean and enables interaction with the proof environment programmatically. It contains fine-grained annotations of premises in proofs, providing valuable data for premise selection—a key bottleneck in theorem proving. Using this data, we develop ReProver (Retrieval-Augmented Prover): an LLM-based prover augmented with retrieval for selecting premises from a vast math library. It is inexpensive and needs only one GPU week of training. Our retriever leverages LeanDojo's program analysis capability to identify accessible premises and hard negative examples, which makes retrieval much more effective. Furthermore, we construct a new benchmark consisting of 98,734 theorems and proofs extracted from Lean's math library. It features challenging data split requiring the prover to generalize to theorems relying on novel premises that are never used in training. We use this benchmark for training and evaluation, and experimental results demonstrate the effectiveness of ReProver over non-retrieval baselines and GPT-4. We thus provide the first set of open-source LLM-based theorem provers without any proprietary datasets and release it under a permissive MIT license to facilitate further research. | LeanDojo: Theorem Proving with Retrieval-Augmented Language Models | [
"Kaiyu Yang",
"Aidan M Swope",
"Alex Gu",
"Rahul Chalamala",
"Peiyang Song",
"Shixing Yu",
"Saad Godil",
"Ryan Prenger",
"Anima Anandkumar"
] | Track/Datasets_and_Benchmarks | oral | 2306.15626 | [
"https://github.com/lean-dojo/leandojo"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
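The LeanDojo row above describes programmatic interaction with the Lean proof environment. The sketch below follows the pattern shown in the project's README; the class and method names (`LeanGitRepo`, `Theorem`, `Dojo`, `run_tac`) and the example repository are assumptions and should be checked against the LeanDojo documentation.

```python
# Hedged sketch of interacting with a Lean theorem through LeanDojo.
# All identifiers below (LeanGitRepo, Theorem, Dojo, run_tac) and the example
# repository, file, and theorem name are assumptions modeled on the project
# README; verify them against the LeanDojo documentation before relying on this.
from lean_dojo import Dojo, LeanGitRepo, Theorem

repo = LeanGitRepo("https://github.com/yangky11/lean4-example", "main")  # hypothetical repo/branch
theorem = Theorem(repo, "Lean4Example.lean", "hello_world")              # hypothetical theorem

with Dojo(theorem) as (dojo, initial_state):
    print(initial_state.pp)                      # pretty-printed proof goal (assumed attribute)
    result = dojo.run_tac(initial_state, "rfl")  # try a single tactic
    print(result)
```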
null | https://openreview.net/forum?id=g5v3Ig6WVq | @inproceedings{
shen2023auslandaily,
title={Auslan-Daily: Australian Sign Language Translation for Daily Communication and News},
author={Xin Shen and Shaozu Yuan and Hongwei Sheng and Heming Du and Xin Yu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=g5v3Ig6WVq}
} | Sign language translation (SLT) aims to convert a continuous sign language video clip into a spoken language. Considering different geographic regions generally have their own native sign languages, it is valuable to establish corresponding SLT datasets to support related communication and research. Auslan, as a sign language specific to Australia, still lacks a dedicated large-scale dataset for SLT.
To fill this gap, we curate an Australian Sign Language translation dataset, dubbed Auslan-Daily, which is collected from the Auslan educational TV series and Auslan TV programs. The former involves daily communications among multiple signers in the wild, while the latter comprises sign language videos for up-to-date news, weather forecasts, and documentaries. In particular, Auslan-Daily has two main features: (1) the topics are diverse and signed by multiple signers, and (2) the scenes in our dataset are more complex, e.g., captured in various environments, gesture interference during multi-signers' interactions and various camera positions. With a collection of more than 45 hours of high-quality Auslan video materials, we invite Auslan experts to align different fine-grained visual and language pairs, including video $\leftrightarrow$ fingerspelling, video $\leftrightarrow$ gloss, and video $\leftrightarrow$ sentence. As a result, Auslan-Daily contains multi-grained annotations that can be utilized to accomplish various fundamental sign language tasks, such as signer detection, sign spotting, fingerspelling detection, isolated sign language recognition, sign language translation and alignment. Moreover, we benchmark results with state-of-the-art models for each task in Auslan-Daily. Experiments indicate that Auslan-Daily is a highly challenging SLT dataset, and we hope this dataset will contribute to the development of Auslan and the advancement of sign languages worldwide in a broader context. All datasets and benchmarks are available at Auslan-Daily. | Auslan-Daily: Australian Sign Language Translation for Daily Communication and News | [
"Xin Shen",
"Shaozu Yuan",
"Hongwei Sheng",
"Heming Du",
"Xin Yu"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=g0QovXbFw3 | @inproceedings{
ji2023beavertails,
title={BeaverTails: Towards Improved Safety Alignment of {LLM} via a Human-Preference Dataset},
author={Jiaming Ji and Mickel Liu and Juntao Dai and Xuehai Pan and Chi Zhang and Ce Bian and Boyuan Chen and Ruiyang Sun and Yizhou Wang and Yaodong Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=g0QovXbFw3}
} | In this paper, we introduce the BeaverTails dataset, aimed at fostering research on safety alignment in large language models (LLMs). This dataset uniquely separates annotations of helpfulness and harmlessness for question-answering pairs, thus offering distinct perspectives on these crucial attributes. In total, we have gathered safety meta-labels for 333,963 question-answer (QA) pairs and 361,903 pairs of expert comparison data for both the helpfulness and harmlessness metrics. We further showcase applications of BeaverTails in content moderation and reinforcement learning with human feedback (RLHF), emphasizing its potential for practical safety measures in LLMs. We believe this dataset provides vital resources for the community, contributing towards the safe development and deployment of LLMs. Our project page is available at the following URL: https://sites.google.com/view/pku-beavertails. | BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset | [
"Jiaming Ji",
"Mickel Liu",
"Juntao Dai",
"Xuehai Pan",
"Chi Zhang",
"Ce Bian",
"Boyuan Chen",
"Ruiyang Sun",
"Yizhou Wang",
"Yaodong Yang"
] | Track/Datasets_and_Benchmarks | poster | 2307.04657 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=fvKaLF1ns8 | @inproceedings{
yang2023intercode,
title={InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback},
author={John Yang and Akshara Prabhakar and Karthik R Narasimhan and Shunyu Yao},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=fvKaLF1ns8}
} | Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash, Spider, and MBPP datasets. We demonstrate InterCode’s viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct and Plan & Solve. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | [
"John Yang",
"Akshara Prabhakar",
"Karthik R Narasimhan",
"Shunyu Yao"
] | Track/Datasets_and_Benchmarks | poster | 2306.14898 | [
"https://github.com/princeton-nlp/intercode"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=fr3OT4rosO | @inproceedings{
miyanishi2023cityrefer,
title={CityRefer: Geography-aware 3D Visual Grounding Dataset on City-scale Point Cloud Data},
author={Taiki Miyanishi and Fumiya Kitamori and Shuhei Kurita and Jungdae Lee and Motoaki Kawanabe and Nakamasa Inoue},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=fr3OT4rosO}
} | City-scale 3D point cloud is a promising way to express detailed and complicated outdoor structures. It encompasses both the appearance and geometry features of segmented city components, including cars, streets, and buildings that can be utilized for attractive applications such as user-interactive navigation of autonomous vehicles and drones. However, compared to the extensive text annotations available for images and indoor scenes, the scarcity of text annotations for outdoor scenes poses a significant challenge for achieving these applications. To tackle this problem, we introduce the CityRefer dataset for city-level visual grounding. The dataset consists of 35k natural language descriptions of 3D objects appearing in SensatUrban city scenes and 5k landmarks labels synchronizing with OpenStreetMap. To ensure the quality and accuracy of the dataset, all descriptions and labels in the CityRefer dataset are manually verified. We also have developed a baseline system that can learn encoded language descriptions, 3D object instances, and geographical information about the city's landmarks to perform visual grounding on the CityRefer dataset. To the best of our knowledge, the CityRefer dataset is the largest city-level visual grounding dataset for localizing specific 3D objects. | CityRefer: Geography-aware 3D Visual Grounding Dataset on City-scale Point Cloud Data | [
"Taiki Miyanishi",
"Fumiya Kitamori",
"Shuhei Kurita",
"Jungdae Lee",
"Motoaki Kawanabe",
"Nakamasa Inoue"
] | Track/Datasets_and_Benchmarks | poster | [
"https://github.com/atr-dbi/cityrefer"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=fZq8Tw0jdm | @inproceedings{
dell2023american,
title={American Stories: A Large-Scale Structured Text Dataset of Historical U.S. Newspapers},
author={Melissa Dell and Jacob Carlson and Tom Bryan and Emily Silcock and Abhishek Arora and Zejiang Shen and Luca D'Amico-Wong and Quan Le and Pablo Querubin and Leander Heldring},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=fZq8Tw0jdm}
} | Existing full text datasets of U.S. public domain newspapers do not recognize the often complex layouts of newspaper scans, and as a result the digitized content scrambles texts from articles, headlines, captions, advertisements, and other layout regions. OCR quality can also be low. This study develops a novel, deep learning pipeline for extracting full article texts from newspaper images and applies it to the nearly 20 million scans in Library of Congress's public domain Chronicling America collection. The pipeline includes layout detection, legibility classification, custom OCR, and association of article texts spanning multiple bounding boxes. To achieve high scalability, it is built with efficient architectures designed for mobile phones. The resulting American Stories dataset provides high quality data that could be used for pre-training a large language model to achieve better understanding of historical English and historical world knowledge. The dataset could also be added to the external database of a retrieval-augmented language model to make historical information - ranging from interpretations of political events to minutiae about the lives of people's ancestors - more widely accessible. Furthermore, structured article texts facilitate using transformer-based methods for popular social science applications like topic classification, detection of reproduced content, and news story clustering. Finally, American Stories provides a massive silver quality dataset for innovating multimodal layout analysis models and other multimodal applications. | American Stories: A Large-Scale Structured Text Dataset of Historical U.S. Newspapers | [
"Melissa Dell",
"Jacob Carlson",
"Tom Bryan",
"Emily Silcock",
"Abhishek Arora",
"Zejiang Shen",
"Luca D'Amico-Wong",
"Quan Le",
"Pablo Querubin",
"Leander Heldring"
] | Track/Datasets_and_Benchmarks | poster | 2308.12477 | [
""
] | https://huggingface.co/papers/2308.12477 | 0 | 0 | 0 | 10 | [] | [
"dell-research-harvard/AmericanStories"
] | [] | 1 |
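The American Stories row above points to the `dell-research-harvard/AmericanStories` dataset on the Hugging Face Hub. A hedged loading sketch follows; the `subset_years` configuration name and the `year_list` keyword are assumptions taken from the dataset card and may change.

```python
# Hedged sketch: load a small slice of the AmericanStories dataset. The
# "subset_years" configuration and the `year_list` keyword are assumptions
# based on the dataset card; the dataset uses a custom loading script, hence
# trust_remote_code=True.
from datasets import load_dataset

stories = load_dataset(
    "dell-research-harvard/AmericanStories",
    "subset_years",              # assumed configuration name
    year_list=["1809", "1810"],  # assumed keyword selecting newspaper years
    trust_remote_code=True,
)
print(stories)
```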
null | https://openreview.net/forum?id=fOrm2rGX2r | @inproceedings{
huang2023ceval,
title={C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models},
author={Yuzhen Huang and Yuzhuo Bai and Zhihao Zhu and Junlei Zhang and Jinghan Zhang and Tangjun Su and Junteng Liu and Chuancheng Lv and Yikai Zhang and jiayi lei and Yao Fu and Maosong Sun and Junxian He},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=fOrm2rGX2r}
} | New NLP benchmarks are urgently needed to align with the rapid development of large language models (LLMs). We present C-Eval, the first comprehensive Chinese evaluation suite designed to assess advanced knowledge and reasoning abilities of foundation models in a Chinese context. C-Eval comprises multiple-choice questions across four difficulty levels: middle school, high school, college, and professional. The questions span 52 diverse disciplines, ranging from humanities to science and engineering. C-Eval is accompanied by C-Eval Hard, a subset of very challenging subjects in C-Eval that requires advanced reasoning abilities to solve. We conduct a comprehensive evaluation of the most advanced LLMs on C-Eval, including both English- and Chinese-oriented models. Results indicate that only GPT-4 could achieve an average accuracy of over 60%, suggesting that there is still significant room for improvement for current LLMs. We anticipate C-Eval will help analyze important strengths and shortcomings of foundation models, and foster their development and growth for Chinese users. | C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models | [
"Yuzhen Huang",
"Yuzhuo Bai",
"Zhihao Zhu",
"Junlei Zhang",
"Jinghan Zhang",
"Tangjun Su",
"Junteng Liu",
"Chuancheng Lv",
"Yikai Zhang",
"jiayi lei",
"Yao Fu",
"Maosong Sun",
"Junxian He"
] | Track/Datasets_and_Benchmarks | poster | 2305.08322 | [
"https://github.com/hkust-nlp/ceval"
] | https://huggingface.co/papers/2305.08322 | 5 | 0 | 1 | 14 | [
"Qwen/Qwen-7B-Chat",
"Qwen/Qwen-14B-Chat",
"Qwen/Qwen-72B-Chat",
"Qwen/Qwen-1_8B-Chat",
"Qwen/Qwen-14B-Chat-Int4",
"Qwen/Qwen-7B-Chat-Int4",
"Qwen/Qwen-1_8B",
"Qwen/Qwen-72B-Chat-Int4",
"Qwen/Qwen-1_8B-Chat-Int4",
"TheBloke/Qwen-14B-Chat-GPTQ",
"tangger/Qwen-7B-Chat",
"deeplang-ai/LingoWhale-8B",
"Qwen/Qwen-72B-Chat-Int8",
"TheBloke/Qwen-14B-Chat-AWQ",
"bibimbap/Qwen-7B-Chat",
"Qwen/Qwen-7B-Chat-Int8",
"Xorbits/Qwen-7B-Chat-GGUF",
"TheBloke/Qwen-7B-Chat-AWQ",
"Qwen/Qwen-14B-Chat-Int8",
"openerotica/Qwen-7B-Chat-GPTQ",
"Qwen/Qwen-1_8B-Chat-Int8",
"openerotica/Qwen-7B-GPTQ",
"TheBloke/Qwen-7B-Chat-GPTQ",
"bibimbap/Qwen-7B-Chat-Int4",
"4bit/Qwen-14B-Chat-Int4",
"openerotica/Qwen-7B-Chat-128g-4bit",
"Xorbits/Qwen-14B-Chat-GGUF",
"pipyp/iamqwen7bee",
"bibimbap/Qwen-7B",
"KIST-robot-intelligence/Qwen-14B-Chat-GGUF-Quantization",
"yyjjtt/test-model2",
"pipyp/qwendebug",
"yzsydlc/qwen2",
"reyvan/Qwen-1_8B-8bit",
"Xfgll/RuleGPT-enlogic",
"RichardErkhov/Qwen_-_Qwen-1_8B-Chat-gguf",
"Xfgll/RuleGPT-en-grammar",
"RichardErkhov/Qwen_-_Qwen-1_8B-gguf",
"Xfgll/RuleGPT-grammarcn",
"Xfgll/RuleGPT-en-decompose"
] | [
"ceval/ceval-exam",
"cryptom/ceval-exam",
"erhwenkuo/ceval-exam-zhtw"
] | [
"JohnSmith9982/ChuanhuChatGPT",
"qingxu98/gpt-academic",
"LanguageBind/MoE-LLaVA",
"eduagarcia/open_pt_llm_leaderboard",
"ZhangYuhan/3DGen-Arena",
"gsaivinay/open_llm_leaderboard",
"mikeee/qwen-7b-chat",
"meval/multilingual-chatbot-arena-leaderboard",
"MILVLG/IMPChat",
"EmbeddedLLM/chat-template-generation",
"IS2Lab/S-Eval",
"Tonic/Qwen1_8B-Chat",
"Justinrune/LLaMA-Factory",
"yhavinga/dutch-tokenizer-arena",
"officialhimanshu595/llama-factory",
"JohnSmith9982/ChuanhuChatGPT_Beta",
"li-qing/FIRE",
"ominous94/ChuanhuChatGPT",
"Lajonbot/Chatbot-Share",
"eson/kplug",
"kenken999/fastapi_django_main_live",
"FISHYA/ChuanhuChatGPT",
"justest/GPT-Academic-with-B3n-AI",
"markqiu/prinvest_mate",
"cryptokael/ChuanhuChatGPT",
"calvinchaochao/text_generation",
"Zulelee/langchain-chatchat",
"s3nh/Chatbot-Share",
"tianleliphoebe/visual-arena",
"zjuzjw/gpt-academic",
"llmbb/LLMBB-Agent",
"Ashmal/MobiLlama",
"hzwluoye/gpt-academic",
"Docfile/open_llm_leaderboard",
"qgyd2021/chat_with_llm",
"xun/Qwen-Token-Calc",
"qiao125/ChuanhuChatGPT",
"CaiRou-Huang/gpt-academic-test",
"Yijun-Yang/ReadReview",
"PegaMichael/Taiwan-LLaMa2-Copy",
"Kate0816/ChuanhuChatGPT1121",
"Ayndpa/gpt-academic",
"silk-road/ChatHaruhi-Qwen118k-Extended",
"cming0420/gpt-academic",
"willdas/ChuanhuChatGPT",
"everr/gpt-academicrrrr",
"NLPark/Qwen1_8B-Chat",
"hengkai/gpt-academic",
"cn208138/ChuanhuChatGPT",
"KevinLi0628/gpt-academic",
"TeamTonic/TruEraMultiMed",
"tjtanaa/chat-template-generation",
"pscpeng/ChuanhuChatGPT",
"Cyburger/die",
"Tonic/TruLensQwen1_8B",
"kuxian/gpt-academic",
"chiye/ChuanhuChatGPT",
"hongdaaaaaaaa/gpt-academic",
"thepianist9/LinlyTalk",
"QLWD/gpt-academic",
"ztYU/ChuanhuChatGPT",
"adminstr/gpt-academic",
"thepianist9/Linly",
"DrBadass/gpt-academic",
"mlike/ChuanhuChatGPT",
"JACK-Chen/gpt-academic-private",
"thepianist9/Loonly",
"qinglin96/gpt-academic3.6",
"lihuaaa/ChuanhuChatGPT",
"justseemore/gpt-academic",
"thepianist9/lop",
"darren1231/gpt-academic_2",
"Amadeus111111/ChuanhuChatGPT",
"new-ames/gpt-academic-Joy",
"united-avatars/linly-talker",
"CaiRou-Huang/TwLLM7B-v2.0-base",
"knowfoot/ChuanhuChatGPT",
"behindeu/gpt-academic",
"linxianzhong0128/Linly-Talker",
"DuanSuKa/gpt-academic2",
"Ho2/ChuanhuChatGPT",
"nubifere/vis-llm-ft",
"Chuanming/gpt-academic",
"shuozhang2/Monkey",
"DaY1zz/ChuanhuChatGPT",
"leong001/gpt-academic",
"BuzzHr/gpt-academic002",
"nexzhu/ChuanhuChatGPT",
"Rong233/gpt-academic-for-Jiang",
"Leachim/gpt-academic",
"KKK33697/ChuanhuChatGPT",
"JerryYin777/gpt-academic-hust",
"durukan/gptacademic",
"divilis/newchatgpt",
"yl5545/gpt-academic",
"pallavijaini/NeuralChat-LLAMA-POC",
"zizhongfeiyang/zizhongfeiyang",
"zhou005/gpt-academic",
"bibimbap/Qwen-7B-Chat",
"Meowoo/ChuanhuChatGPT"
] | 1 |
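Because the C-Eval row above lists the `ceval/ceval-exam` dataset on the Hugging Face Hub, a short loading sketch may be useful; the subject shown ("computer_network") is one assumed example of the 52 disciplines, so the available configurations are listed first.

```python
# Hedged sketch: fetch one C-Eval subject. Subject names are dataset
# configurations; "computer_network" is used here as an assumed example, and
# the full list is printed so another subject can be substituted.
from datasets import get_dataset_config_names, load_dataset

subjects = get_dataset_config_names("ceval/ceval-exam")
print(len(subjects), subjects[:5])

ceval = load_dataset("ceval/ceval-exam", "computer_network")
print(ceval["val"][0])  # a multiple-choice question with options A-D and the answer
```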
null | https://openreview.net/forum?id=fKzSz0oyaI | @inproceedings{
schlichtkrull2023averitec,
title={{AV}eriTeC: A Dataset for Real-world Claim Verification with Evidence from the Web},
author={Michael Sejr Schlichtkrull and Zhijiang Guo and Andreas Vlachos},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=fKzSz0oyaI}
} | Existing datasets for automated fact-checking have substantial limitations, such as relying on artificial claims, lacking annotations for evidence and intermediate reasoning, or including evidence published after the claim. In this paper we introduce AVeriTeC, a new dataset of 4,568 real-world claims covering fact-checks by 50 different organizations. Each claim is annotated with question-answer pairs supported by evidence available online, as well as textual justifications explaining how the evidence combines to produce a verdict. Through a multi-round annotation process, we avoid common pitfalls including context dependence, evidence insufficiency, and temporal leakage, and reach a substantial inter-annotator agreement of $\kappa=0.619$ on verdicts. We develop a baseline as well as an evaluation scheme for verifying claims through question-answering against the open web. | AVeriTeC: A Dataset for Real-world Claim Verification with Evidence from the Web | null | Track/Datasets_and_Benchmarks | poster | 2305.13117 | [
"https://github.com/michschli/averitec"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=epUQ40eCzk | @inproceedings{
chen2023twigma,
title={{TWIGMA}: A dataset of {AI}-Generated Images with Metadata From Twitter},
author={Yiqun T. Chen and James Zou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=epUQ40eCzk}
} | Recent progress in generative artificial intelligence (gen-AI) has enabled the generation of photo-realistic and artistically-inspiring photos at a single click, catering to millions of users online. To explore how people use gen-AI models such as DALLE and StableDiffusion, it is critical to understand the themes, contents, and variations present in the AI-generated photos. In this work, we introduce TWIGMA (TWItter Generative-ai images with MetadatA), a comprehensive dataset encompassing over 800,000 gen-AI images collected from Jan 2021 to March 2023 on Twitter, with associated metadata (e.g., tweet text, creation date, number of likes), available at https://zenodo.org/records/8031785. Through a comparative analysis of TWIGMA with natural images and human artwork, we find that gen-AI images possess distinctive characteristics and exhibit, on average, lower variability when compared to their non-gen-AI counterparts. Additionally, we find that the similarity between a gen-AI image and natural images is inversely correlated with the number of likes. Finally, we observe a longitudinal shift in the themes of AI-generated images on Twitter, with users increasingly sharing artistically sophisticated content such as intricate human portraits, whereas their interest in simple subjects such as natural scenes and animals has decreased. Our findings underscore the significance of TWIGMA as a unique data resource for studying AI-generated images. | TWIGMA: A dataset of AI-Generated Images with Metadata From Twitter | [
"Yiqun T. Chen",
"James Zou"
] | Track/Datasets_and_Benchmarks | poster | 2306.08310 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=eM6WLko4Dv | @inproceedings{
yin2023lamm,
title={{LAMM}: Language-Assisted Multi-Modal Instruction-Tuning Dataset, Framework, and Benchmark},
author={Zhenfei Yin and Jiong WANG and Jianjian Cao and Zhelun Shi and Dingning Liu and Mukai Li and Xiaoshui Huang and Zhiyong Wang and Lu Sheng and LEI BAI and Jing Shao and Wanli Ouyang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=eM6WLko4Dv}
} | Large language models have emerged as a promising approach towards achieving general-purpose AI agents. The thriving open-source LLM community has greatly accelerated the development of agents that support human-machine dialogue interaction through natural language processing. However, human interaction with the world extends beyond only text as a modality, and other modalities such as vision are also crucial. Recent works on multi-modal large language models, such as GPT-4V and Bard, have demonstrated their effectiveness in handling visual modalities. However, the transparency of these works is limited and insufficient to support academic research. To the best of our knowledge, we present one of the very first open-source endeavors in the field, LAMM, encompassing a Language-Assisted Multi-Modal instruction tuning dataset, framework, and benchmark. Our aim is to establish LAMM as a growing ecosystem for training and evaluating MLLMs, with a specific focus on facilitating AI agents capable of bridging the gap between ideas and execution, thereby enabling seamless human-AI interaction. Our main contribution is three-fold: 1) We present a comprehensive dataset and benchmark, which cover a wide range of vision tasks for 2D and 3D vision. Extensive experiments validate the effectiveness of our dataset and benchmark. 2) We outline the detailed methodology of constructing multi-modal instruction tuning datasets and benchmarks for MLLMs, enabling rapid scaling and extension of MLLM research to diverse domains, tasks, and modalities. 3) We provide a primary but potential MLLM training framework optimized for modality extension. We also provide baseline models, comprehensive experimental observations, and analysis to accelerate future research. Our baseline model is trained within 24 A100 GPU hours, framework supports training with V100 and RTX3090 is available thanks to the open-source society. Codes and data are now available at https://openlamm.github.io. | LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset, Framework, and Benchmark | [
"Zhenfei Yin",
"Jiong WANG",
"Jianjian Cao",
"Zhelun Shi",
"Dingning Liu",
"Mukai Li",
"Xiaoshui Huang",
"Zhiyong Wang",
"Lu Sheng",
"LEI BAI",
"Jing Shao",
"Wanli Ouyang"
] | Track/Datasets_and_Benchmarks | poster | 2306.06687 | [
"https://github.com/openlamm/lamm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=eJ5nu9qvWz | @inproceedings{
du2023mhub,
title={M\${\textasciicircum}2\$Hub: Unlocking the Potential of Machine Learning for Materials Discovery},
author={Yuanqi Du and Yingheng Wang and Yining Huang and Jianan Canal Li and Yanqiao Zhu and Tian Xie and Chenru Duan and John Gregoire and Carla P Gomes},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=eJ5nu9qvWz}
} | We introduce M$^2$Hub, a toolkit for advancing machine learning in materials discovery. Machine learning has achieved remarkable progress in modeling molecular structures, especially biomolecules for drug discovery. However, the development of machine learning approaches for modeling materials structures lags behind, which is partly due to the lack of an integrated platform that enables access to diverse tasks for materials discovery. To bridge this gap, M$^2$Hub will enable easy access to materials discovery tasks, datasets, machine learning methods, evaluations, and benchmark results that cover the entire workflow. Specifically, the first release of M$^2$Hub focuses on three key stages in materials discovery: virtual screening, inverse design, and molecular simulation, including 9 datasets that cover 6 types of materials with 56 tasks across 8 types of material properties. We further provide 2 synthetic datasets for the purpose of generative tasks on materials. In addition to random data splits, we also provide 3 additional data partitions to reflect real-world materials discovery scenarios. State-of-the-art machine learning methods (including those that are suitable for materials structures but never compared in the literature) are benchmarked on representative tasks. Our codes and library are publicly available at \url{https://github.com/yuanqidu/M2Hub}. | M^2Hub: Unlocking the Potential of Machine Learning for Materials Discovery | [
"Yuanqi Du",
"Yingheng Wang",
"Yining Huang",
"Jianan Canal Li",
"Yanqiao Zhu",
"Tian Xie",
"Chenru Duan",
"John Gregoire",
"Carla P Gomes"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=eEK99egXeB | @inproceedings{
jiang2023opendataval,
title={OpenDataVal: a Unified Benchmark for Data Valuation},
author={Kevin Fu Jiang and Weixin Liang and James Zou and Yongchan Kwon},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=eEK99egXeB}
} | Assessing the quality and impact of individual data points is critical for improving model performance and mitigating undesirable biases within the training dataset. Several data valuation algorithms have been proposed to quantify data quality, however, there lacks a systemic and standardized benchmarking system for data valuation. In this paper, we introduce *OpenDataVal*, an easy-to-use and unified benchmark framework that empowers researchers and practitioners to apply and compare various data valuation algorithms. *OpenDataVal* provides an integrated environment that includes (i) a diverse collection of image, natural language, and tabular datasets, (ii) implementations of eleven different state-of-the-art data valuation algorithms, and (iii) a prediction model API that can import any models in scikit-learn. Furthermore, we propose four downstream machine learning tasks for evaluating the quality of data values. We perform benchmarking analysis using *OpenDataVal*, quantifying and comparing the efficacy of state-of-the-art data valuation approaches. We find that no single algorithm performs uniformly best across all tasks, and an appropriate algorithm should be employed for a user's downstream task. *OpenDataVal* is publicly available at https://opendataval.github.io with comprehensive documentation. Furthermore, we provide a leaderboard where researchers can evaluate the effectiveness of their own data valuation algorithms. | OpenDataVal: a Unified Benchmark for Data Valuation | [
"Kevin Fu Jiang",
"Weixin Liang",
"James Zou",
"Yongchan Kwon"
] | Track/Datasets_and_Benchmarks | poster | 2306.10577 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=e9n4JjkmXZ | @inproceedings{
kirchhof2023url,
title={{URL}: A Representation Learning Benchmark for Transferable Uncertainty Estimates},
author={Michael Kirchhof and B{\'a}lint Mucs{\'a}nyi and Seong Joon Oh and Enkelejda Kasneci},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=e9n4JjkmXZ}
} | Representation learning has significantly driven the field to develop pretrained models that can act as a valuable starting point when transferring to new datasets. With the rising demand for reliable machine learning and uncertainty quantification, there is a need for pretrained models that not only provide embeddings but also transferable uncertainty estimates. To guide the development of such models, we propose the Uncertainty-aware Representation Learning (URL) benchmark. Besides the transferability of the representations, it also measures the zero-shot transferability of the uncertainty estimate using a novel metric. We apply URL to evaluate ten uncertainty quantifiers that are pretrained on ImageNet and transferred to eight downstream datasets. We find that approaches that focus on the uncertainty of the representation itself or estimate the prediction risk directly outperform those that are based on the probabilities of upstream classes. Yet, achieving transferable uncertainty quantification remains an open challenge. Our findings indicate that it is not necessarily in conflict with traditional representation learning goals. Code is available at [https://github.com/mkirchhof/url](https://github.com/mkirchhof/url). | URL: A Representation Learning Benchmark for Transferable Uncertainty Estimates | [
"Michael Kirchhof",
"Bálint Mucsányi",
"Seong Joon Oh",
"Enkelejda Kasneci"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | https://huggingface.co/papers/2307.03810 | 1 | 0 | 0 | 4 | [] | [] | [] | 1 |
|
null | https://openreview.net/forum?id=doV2nhGm1l | @inproceedings{
ng2023hyperskin,
title={Hyper-Skin: A Hyperspectral Dataset for Reconstructing Facial Skin-Spectra from {RGB} Images},
author={Pai Chet Ng and Zhixiang Chi and Yannick Verdie and Juwei Lu and Konstantinos N Plataniotis},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=doV2nhGm1l}
} | We introduce Hyper-Skin, a hyperspectral dataset covering a wide range of wavelengths from the visible (VIS) spectrum (400nm - 700nm) to the near-infrared (NIR) spectrum (700nm - 1000nm), uniquely designed to facilitate research on facial skin-spectra reconstruction.
By reconstructing skin spectra from RGB images, our dataset enables the study of hyperspectral skin analysis, such as melanin and hemoglobin concentrations, directly on the consumer device.
Overcoming limitations of existing datasets, Hyper-Skin consists of diverse facial skin data collected with a pushbroom hyperspectral camera.
With 330 hyperspectral cubes from 51 subjects, the dataset covers the facial skin from different angles and facial poses.
Each hyperspectral cube has dimensions of 1024$\times$1024$\times$448, resulting in millions of spectra vectors per image.
The dataset, carefully curated in adherence to ethical guidelines, includes paired hyperspectral images and synthetic RGB images generated using real camera responses.
We demonstrate the efficacy of our dataset by showcasing skin spectra reconstruction using state-of-the-art models on 31 bands of hyperspectral data resampled in the VIS and NIR spectrum.
This Hyper-Skin dataset would be a valuable resource to the NeurIPS community, encouraging the development of novel algorithms for skin spectral reconstruction while fostering interdisciplinary collaboration in hyperspectral skin analysis related to cosmetology and skin's well-being.
Instructions to request the data and the related benchmarking codes are publicly available at: https://github.com/hyperspectral-skin/Hyper-Skin-2023. | Hyper-Skin: A Hyperspectral Dataset for Reconstructing Facial Skin-Spectra from RGB Images | [
"Pai Chet Ng",
"Zhixiang Chi",
"Yannick Verdie",
"Juwei Lu",
"Konstantinos N Plataniotis"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=dhJ8VbcEtX | @inproceedings{
goswami2023aqua,
title={{AQ}uA: A Benchmarking Tool for Label Quality Assessment},
author={Mononito Goswami and Vedant Sanil and Arjun Choudhry and Arvind Srinivasan and Chalisa Udompanyawit and Artur Dubrawski},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=dhJ8VbcEtX}
} | Machine learning (ML) models are only as good as the data they are trained on. But recent studies have found datasets widely used to train and evaluate ML models, e.g. _ImageNet_, to have pervasive labeling errors. Erroneous labels on the train set hurt ML models' ability to generalize, and they impact evaluation and model selection using the test set. Consequently, learning in the presence of labeling errors is an active area of research, yet this field lacks a comprehensive benchmark to evaluate these methods. Most of these methods are evaluated on a few computer vision datasets with significant variance in the experimental protocols. With such a large pool of methods and inconsistent evaluation, it is also unclear how ML practitioners can choose the right models to assess label quality in their data. To this end, we propose a benchmarking environment _AQuA_ to rigorously evaluate methods that enable machine learning in the presence of label noise. We also introduce a design space to delineate concrete design choices of label error detection models. We hope that our proposed design space and benchmark enable practitioners to choose the right tools to improve their label quality and that our benchmark enables objective and rigorous evaluation of machine learning tools facing mislabeled data. | AQuA: A Benchmarking Tool for Label Quality Assessment | [
"Mononito Goswami",
"Vedant Sanil",
"Arjun Choudhry",
"Arvind Srinivasan",
"Chalisa Udompanyawit",
"Artur Dubrawski"
] | Track/Datasets_and_Benchmarks | poster | 2306.09467 | [
"https://github.com/autonlab/aqua"
] | https://huggingface.co/papers/2306.09467 | 0 | 1 | 0 | 6 | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=dVaWCDMBof | @inproceedings{
gadre2023datacomp,
title={DataComp: In search of the next generation of multimodal datasets},
author={Samir Yitzhak Gadre and Gabriel Ilharco and Alex Fang and Jonathan Hayase and Georgios Smyrnis and Thao Nguyen and Ryan Marten and Mitchell Wortsman and Dhruba Ghosh and Jieyu Zhang and Eyal Orgad and Rahim Entezari and Giannis Daras and Sarah M Pratt and Vivek Ramanujan and Yonatan Bitton and Kalyani Marathe and Stephen Mussmann and Richard Vencu and Mehdi Cherti and Ranjay Krishna and Pang Wei Koh and Olga Saukh and Alexander Ratner and Shuran Song and Hannaneh Hajishirzi and Ali Farhadi and Romain Beaumont and Sewoong Oh and Alex Dimakis and Jenia Jitsev and Yair Carmon and Vaishaal Shankar and Ludwig Schmidt},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=dVaWCDMBof}
} | Multimodal datasets are a critical component in recent breakthroughs such as CLIP, Stable Diffusion and GPT-4, yet their design does not receive the same research attention as model architectures or training algorithms. To address this shortcoming in the machine learning ecosystem, we introduce DataComp, a testbed for dataset experiments centered around a new candidate pool of 12.8 billion image-text pairs from Common Crawl. Participants in our benchmark design new filtering techniques or curate new data sources and then evaluate their new dataset by running our standardized CLIP training code and testing the resulting model on 38 downstream test sets. Our benchmark consists of multiple compute scales spanning four orders of magnitude, which enables the study of scaling trends and makes the benchmark accessible to researchers with varying resources. Our baseline experiments show that the DataComp workflow leads to better training sets. Our best baseline, DataComp-1B, enables training a CLIP ViT-L/14 from scratch to 79.2% zero-shot accuracy on ImageNet, outperforming OpenAI's CLIP ViT-L/14 by 3.7 percentage points while using the same training procedure and compute. We release \datanet and all accompanying code at www.datacomp.ai. | DataComp: In search of the next generation of multimodal datasets | [
"Samir Yitzhak Gadre",
"Gabriel Ilharco",
"Alex Fang",
"Jonathan Hayase",
"Georgios Smyrnis",
"Thao Nguyen",
"Ryan Marten",
"Mitchell Wortsman",
"Dhruba Ghosh",
"Jieyu Zhang",
"Eyal Orgad",
"Rahim Entezari",
"Giannis Daras",
"Sarah M Pratt",
"Vivek Ramanujan",
"Yonatan Bitton",
"Kalyani Marathe",
"Stephen Mussmann",
"Richard Vencu",
"Mehdi Cherti",
"Ranjay Krishna",
"Pang Wei Koh",
"Olga Saukh",
"Alexander Ratner",
"Shuran Song",
"Hannaneh Hajishirzi",
"Ali Farhadi",
"Romain Beaumont",
"Sewoong Oh",
"Alex Dimakis",
"Jenia Jitsev",
"Yair Carmon",
"Vaishaal Shankar",
"Ludwig Schmidt"
] | Track/Datasets_and_Benchmarks | oral | 2304.14108 | [
"https://github.com/mlfoundations/datacomp"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=dOeBYjxSoq | @inproceedings{
zhang2023sgp,
title={{SG}{\texttimes}P : A Sorghum Genotype {\texttimes} Phenotype Prediction Dataset and Benchmark},
author={Zeyu Zhang and Robert Pless and Nadia Shakoor and Austin Carnahan and Abby Stylianou},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=dOeBYjxSoq}
} | Large scale field-phenotyping approaches have the potential to solve important questions about the relationship of plant genotype to plant phenotype. Computational approaches to measuring the phenotype (the observable plant features) are required to address the problem at a large scale, but machine learning approaches to extract phenotypes from sensor data have been hampered by limited access to (a) sufficiently large, organized multi-sensor datasets, (b) field trials that have a large scale and significant number of genotypes, (c) full genetic sequencing of those phenotypes, and (d) datasets sufficiently organized so that algorithm centered researchers can directly address the real biological problems. To address this, we present SGxP, a novel benchmark dataset from a large-scale field trial consisting of the complete genotype of over 300 sorghum varieties, and time sequences of imagery from several field plots growing each variety, taken with RGB and laser 3D scanner imaging. To lower the barrier to entry and facilitate further developments, we provide a set of well organized, multi-sensor imagery and corresponding genomic data. We implement baseline deep learning based phenotyping approaches to create baseline results for individual sensors and multi-sensor fusion for detecting genetic mutations with known impacts. We also provide and support an open-ended challenge by identifying thousands of genetic mutations whose phenotypic impacts are currently unknown. A web interface for machine learning researchers and practitioners to share approaches, visualizations and hypotheses supports engagement with plant biologists to further the understanding of the sorghum genotype x phenotype relationship. The full dataset, leaderboard (including baseline results) and discussion forums can be found at http://sorghumsnpbenchmark.com. | SG×P : A Sorghum Genotype × Phenotype Prediction Dataset and Benchmark | [
"Zeyu Zhang",
"Robert Pless",
"Nadia Shakoor",
"Austin Carnahan",
"Abby Stylianou"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=dK1Rs1o0Ij | @inproceedings{
hansen2023reimagining,
title={Reimagining Synthetic Tabular Data Generation through Data-Centric {AI}: A Comprehensive Benchmark},
author={Lasse Hansen and Nabeel Seedat and Mihaela van der Schaar and Andrija Petrovic},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=dK1Rs1o0Ij}
} | Synthetic data serves as an alternative in training machine learning models, particularly when real-world data is limited or inaccessible. However, ensuring that synthetic data mirrors the complex nuances of real-world data is a challenging task. This paper addresses this issue by exploring the potential of integrating data-centric AI techniques which profile the data to guide the synthetic data generation process. Moreover, we shed light on the often ignored consequences of neglecting these data profiles during synthetic data generation --- despite seemingly high statistical fidelity. Subsequently, we propose a novel framework to evaluate the integration of data profiles to guide the creation of more representative synthetic data. In an empirical study, we evaluate the performance of five state-of-the-art models for tabular data generation on eleven distinct tabular datasets. The findings offer critical insights into the successes and limitations of current synthetic data generation techniques. Finally, we provide practical recommendations for integrating data-centric insights into the synthetic data generation process, with a specific focus on classification performance, model selection, and feature selection. This study aims to reevaluate conventional approaches to synthetic data generation and promote the application of data-centric AI techniques in improving the quality and effectiveness of synthetic data. | Reimagining Synthetic Tabular Data Generation through Data-Centric AI: A Comprehensive Benchmark | [
"Lasse Hansen",
"Nabeel Seedat",
"Mihaela van der Schaar",
"Andrija Petrovic"
] | Track/Datasets_and_Benchmarks | poster | 2310.16981 | [
"https://github.com/vanderschaarlab/data-centric-synthetic-data"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=dJEjgQcbOt | @inproceedings{
zhao2023kuaisim,
title={KuaiSim: A Comprehensive Simulator for Recommender Systems},
author={Kesen Zhao and Shuchang Liu and Qingpeng Cai and Xiangyu Zhao and Ziru Liu and Dong Zheng and Peng Jiang and Kun Gai},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=dJEjgQcbOt}
} | Reinforcement Learning (RL)-based recommender systems (RSs) have garnered considerable attention due to their ability to learn optimal recommendation policies and maximize long-term user rewards.
However, deploying RL models directly in online environments and generating authentic data through A/B tests can pose challenges and require substantial resources.
Simulators offer an alternative approach by providing training and evaluation environments for RS models, reducing reliance on real-world data.
Existing simulators have shown promising results but also have limitations such as simplified user feedback, lacking consistency with real-world data, the challenge of simulator evaluation, and difficulties in migration and expansion across RSs.
To address these challenges, we propose KuaiSim, a comprehensive user environment that provides user feedback with multi-behavior and cross-session responses.
The resulting simulator can support three levels of recommendation problems: the request level list-wise recommendation task, the whole-session level sequential recommendation task, and the cross-session level retention optimization task.
For each task, KuaiSim also provides evaluation protocols and baseline recommendation algorithms that further serve as benchmarks for future research.
We also restructure existing competitive simulators on the Kuairand Dataset and compare them against KuaiSim to further assess their performance and behavioral differences.
Furthermore, to showcase KuaiSim's flexibility in accommodating different datasets, we demonstrate its versatility and robustness when deploying it on the ML-1m dataset. The implementation code is available online to ease reproducibility \footnote{https://github.com/Applied-Machine-Learning-Lab/KuaiSim}. | KuaiSim: A Comprehensive Simulator for Recommender Systems | null | Track/Datasets_and_Benchmarks | poster | 2309.12645 | [
"https://github.com/applied-machine-learning-lab/kuaisim"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=dI4wzAE6uV | @inproceedings{
li2023can,
title={Can {LLM} Already Serve as A Database Interface? A {BI}g Bench for Large-Scale Database Grounded Text-to-{SQL}s},
author={Jinyang Li and Binyuan Hui and GE QU and Jiaxi Yang and Binhua Li and Bowen Li and Bailin Wang and Bowen Qin and Ruiying Geng and Nan Huo and Xuanhe Zhou and Chenhao Ma and Guoliang Li and Kevin Chang and Fei Huang and Reynold Cheng and Yongbin Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=dI4wzAE6uV}
} | Text-to-SQL parsing, which aims at converting natural language instructions into executable SQLs, has gained increasing attention in recent years. In particular, GPT-4 and Claude-2 have shown impressive results in this task. However, most of the prevalent benchmarks, i.e., Spider and WikiSQL, focus on database schema with few rows of database contents, leaving a gap between academic study and real-world applications. To mitigate this gap, we present BIRD, a BIg benchmark for laRge-scale Database grounded in text-to-SQL tasks, containing 12,751 pairs of text-to-SQL data and 95 databases with a total size of 33.4 GB, spanning 37 professional domains. Our emphasis on database values highlights the new challenges of dirty database contents, external knowledge between NL questions and database contents, and SQL efficiency, particularly in the context of massive databases. To solve these problems, text-to-SQL models must feature database value comprehension in addition to semantic parsing. The experimental results demonstrate the significance of database values in generating accurate text-to-SQLs for big databases. Furthermore, even the most popular and effective text-to-SQL models, i.e., GPT-4, only achieve 54.89% in execution accuracy, which is still far from the human result of 92.96%, proving that challenges still stand. We also provide an efficiency analysis to offer insights into generating text-to-efficient-SQLs that are beneficial to industries.
We believe that BIRD will contribute to advancing real-world applications of text-to-SQL research.
The leaderboard and source code are available: https://bird-bench.github.io/. | Can LLM Already Serve as A Database Interface? A BIg Bench for Large-Scale Database Grounded Text-to-SQLs | [
"Jinyang Li",
"Binyuan Hui",
"GE QU",
"Jiaxi Yang",
"Binhua Li",
"Bowen Li",
"Bailin Wang",
"Bowen Qin",
"Ruiying Geng",
"Nan Huo",
"Xuanhe Zhou",
"Chenhao Ma",
"Guoliang Li",
"Kevin Chang",
"Fei Huang",
"Reynold Cheng",
"Yongbin Li"
] | Track/Datasets_and_Benchmarks | oral | 2305.03111 | [
""
] | https://huggingface.co/papers/2305.03111 | 2 | 8 | 0 | 16 | [
"patrickNLP/Graphix-3B"
] | [] | [] | 1 |
null | https://openreview.net/forum?id=cuheT1BAp4 | @inproceedings{
chen2023object,
title={Object Reprojection Error ({ORE}): Camera pose benchmarks from lightweight tracking annotations},
author={Xingyu Chen and Weiyao Wang and Hao Tang and Matt Feiszli},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=cuheT1BAp4}
} | 3D spatial understanding is highly valuable in the context of semantic modeling of environments, agents, and their relationships. Semantic modeling approaches employed on monocular video often ingest outputs from off-the-shelf SLAM/SfM pipelines, which are anecdotally observed to perform poorly or fail completely on some fraction of the videos of interest. These target videos may vary widely in complexity of scenes, activities, camera trajectory, etc. Unfortunately, such semantically-rich video data often comes with no ground-truth 3D information, and in practice it is prohibitively costly or impossible to obtain ground truth reconstructions or camera pose post-hoc.
This paper proposes a novel evaluation protocol, Object Reprojection Error (ORE) to benchmark camera trajectories; ORE computes reprojection error for static objects within the video and requires only lightweight object tracklet annotations. These annotations are easy to gather on new or existing video, enabling ORE to be calculated on essentially arbitrary datasets. We show that ORE maintains high rank correlation with standard metrics based on groundtruth. Leveraging ORE, we source videos and annotations from Ego4D-EgoTracks, resulting in EgoStatic, a large-scale diverse dataset for evaluating camera trajectories in-the-wild. | Object Reprojection Error (ORE): Camera pose benchmarks from lightweight tracking annotations | [
"Xingyu Chen",
"Weiyao Wang",
"Hao Tang",
"Matt Feiszli"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=crbPFR2Hpv | @inproceedings{
smyers2023avoidds,
title={{AVOIDDS}: Aircraft Vision-based Intruder Detection Dataset and Simulator},
author={Elysia Quinn Smyers and Sydney Michelle Katz and Anthony Corso and Mykel Kochenderfer},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=crbPFR2Hpv}
} | Designing robust machine learning systems remains an open problem, and there is a need for benchmark problems that cover both environmental changes and evaluation on a downstream task. In this work, we introduce AVOIDDS, a realistic object detection benchmark for the vision-based aircraft detect-and-avoid problem. We provide a labeled dataset consisting of 72,000 photorealistic images of intruder aircraft with various lighting conditions, weather conditions, relative geometries, and geographic locations. We also provide an interface that evaluates trained models on slices of this dataset to identify changes in performance with respect to changing environmental conditions. Finally, we implement a fully-integrated, closed-loop simulator of the vision-based detect-and-avoid problem to evaluate trained models with respect to the downstream collision avoidance task. This benchmark will enable further research in the design of robust machine learning systems for use in safety-critical applications. The AVOIDDS dataset and code are publicly available at https://purl.stanford.edu/hj293cv5980 and https://github.com/sisl/VisionBasedAircraftDAA, respectively. | AVOIDDS: Aircraft Vision-based Intruder Detection Dataset and Simulator | [
"Elysia Quinn Smyers",
"Sydney Michelle Katz",
"Anthony Corso",
"Mykel Kochenderfer"
] | Track/Datasets_and_Benchmarks | poster | 2306.11203 | [
"https://github.com/sisl/visionbasedaircraftdaa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=cF6rQz8V3V | @inproceedings{
liu2023bitstreamcorrupted,
title={Bitstream-Corrupted Video Recovery: A Novel Benchmark Dataset and Method},
author={Tianyi Liu and Kejun Wu and YI WANG and Wenyang Liu and Kim-Hui Yap and Lap-Pui Chau},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=cF6rQz8V3V}
} | The past decade has witnessed great strides in video recovery by specialist technologies, like video inpainting, completion, and error concealment. However, they typically simulate the missing content by manually designed error masks, thus failing to fill in the realistic video loss in video communication (e.g., telepresence, live streaming, and internet video) and multimedia forensics. To address this, we introduce the bitstream-corrupted video (BSCV) benchmark, the first benchmark dataset with more than 28,000 video clips, which can be used for bitstream-corrupted video recovery in the real world. The BSCV is a collection of 1) a proposed three-parameter corruption model for video bitstream, 2) a large-scale dataset containing rich error patterns, multiple corruption levels, and flexible dataset branches, and 3) a new video recovery framework that serves as a benchmark. We evaluate state-of-the-art video inpainting methods on the BSCV dataset, demonstrating existing approaches' limitations and our framework's advantages in solving the bitstream-corrupted video recovery problem. The benchmark and dataset are released at https://github.com/LIUTIGHE/BSCV-Dataset. | Bitstream-Corrupted Video Recovery: A Novel Benchmark Dataset and Method | [
"Tianyi Liu",
"Kejun Wu",
"YI WANG",
"Wenyang Liu",
"Kim-Hui Yap",
"Lap-Pui Chau"
] | Track/Datasets_and_Benchmarks | poster | 2309.13890 | [
"https://github.com/liutighe/bscv-dataset"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=cAjZ3tMye6 | @inproceedings{
chen2023hyporadise,
title={HyPoradise: An Open Baseline for Generative Speech Recognition with Large Language Models},
author={CHEN CHEN and Yuchen Hu and Chao-Han Huck Yang and Sabato Marco Siniscalchi and Pin-Yu Chen and EngSiong Chng},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=cAjZ3tMye6}
} | Advancements in deep neural networks have allowed automatic speech recognition (ASR) systems to attain human parity on several publicly available clean speech datasets. However, even state-of-the-art ASR systems experience performance degradation when confronted with adverse conditions, as a well-trained acoustic model is sensitive to variations in the speech domain, e.g., background noise. Intuitively, humans address this issue by relying on their linguistic knowledge: the meaning of ambiguous spoken terms is usually inferred from contextual cues, thereby reducing the dependency on the auditory system. Inspired by this observation, we introduce the first open-source benchmark to utilize external large language models (LLMs) for ASR error correction, where N-best decoding hypotheses provide informative elements for true transcription prediction. This approach is a paradigm shift from the traditional language model rescoring strategy that can only select one candidate hypothesis as output transcription. The proposed benchmark contains a novel dataset, "HyPoradise" (HP), encompassing more than 316,000 pairs of N-best hypotheses and corresponding accurate transcriptions across prevalent speech domains. Given this dataset, we examine three types of error correction techniques based on LLMs with varying amounts of labeled hypotheses-transcription pairs, which yields significant word error rate (WER) reductions. Experimental evidence demonstrates that the proposed technique achieves a breakthrough by surpassing the upper bound of traditional re-ranking based methods. More surprisingly, LLMs with reasonable prompt design can even correct tokens that are missing from the N-best list. We make our results publicly accessible for reproducible pipelines with released pre-trained models, thus providing a new paradigm for ASR error correction with LLMs. | HyPoradise: An Open Baseline for Generative Speech Recognition with Large Language Models | [
"CHEN CHEN",
"Yuchen Hu",
"Chao-Han Huck Yang",
"Sabato Marco Siniscalchi",
"Pin-Yu Chen",
"EngSiong Chng"
] | Track/Datasets_and_Benchmarks | poster | 2309.15701 | [
"https://github.com/hypotheses-paradise/hypo2trans"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=c5rqd6PZn6 | @inproceedings{
emami2023buildingsbench,
title={BuildingsBench: A Large-Scale Dataset of 900K Buildings and Benchmark for Short-Term Load Forecasting},
author={Patrick Emami and Abhijeet Sahu and Peter Graf},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=c5rqd6PZn6}
} | Short-term forecasting of residential and commercial building energy consumption is widely used in power systems and continues to grow in importance. Data-driven short-term load forecasting (STLF), although promising, has suffered from a lack of open, large-scale datasets with high building diversity. This has hindered exploring the pretrain-then-fine-tune paradigm for STLF. To help address this, we present BuildingsBench, which consists of: 1) Buildings-900K, a large-scale dataset of 900K simulated buildings representing the U.S. building stock; and 2) an evaluation platform with over 1,900 real residential and commercial buildings from 7 open datasets. BuildingsBench benchmarks two under-explored tasks: zero-shot STLF, where a pretrained model is evaluated on unseen buildings without fine-tuning, and transfer learning, where a pretrained model is fine-tuned on a target building. The main finding of our benchmark analysis is that synthetically pretrained models generalize surprisingly well to real commercial buildings. An exploration of the effect of increasing dataset size and diversity on zero-shot commercial building performance reveals a power-law with diminishing returns. We also show that fine-tuning pretrained models on real commercial and residential buildings improves performance for a majority of target buildings. We hope that BuildingsBench encourages and facilitates future research on generalizable STLF. All datasets and code can be accessed from https://github.com/NREL/BuildingsBench. | BuildingsBench: A Large-Scale Dataset of 900K Buildings and Benchmark for Short-Term Load Forecasting | [
"Patrick Emami",
"Abhijeet Sahu",
"Peter Graf"
] | Track/Datasets_and_Benchmarks | poster | 2307.00142 | [
"https://github.com/nrel/buildingsbench"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=c5DUGninMz | @inproceedings{
qu2023rio,
title={{RIO}: A Benchmark for Reasoning Intention-Oriented Objects in Open Environments},
author={Mengxue Qu and Yu Wu and Wu Liu and Xiaodan Liang and Jingkuan Song and Yao Zhao and Yunchao Wei},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=c5DUGninMz}
} | Intention-oriented object detection aims to detect desired objects based on specific intentions or requirements. For instance, when we desire to "lie down and rest", we instinctively seek out a suitable option such as a "bed" or a "sofa" that can fulfill our needs. Previous work in this area is limited either by the number of intention descriptions or by the affordance vocabulary available for intention objects. These limitations make it challenging to handle intentions in open environments effectively. To facilitate this research, we construct a comprehensive dataset called Reasoning Intention-Oriented Objects (RIO). In particular, RIO is specifically designed to incorporate diverse real-world scenarios and a wide range of object categories. It offers the following key features: 1) intention descriptions in RIO are represented as natural sentences rather than a mere word or verb phrase, making them more practical and meaningful; 2) the intention descriptions are contextually relevant to the scene, enabling a broader range of potential functionalities associated with the objects; 3) the dataset comprises a total of 40,214 images and 130,585 intention-object pairs. With the proposed RIO, we evaluate the ability of some existing models to reason intention-oriented objects in open environments. | RIO: A Benchmark for Reasoning Intention-Oriented Objects in Open Environments | [
"Mengxue Qu",
"Yu Wu",
"Wu Liu",
"Xiaodan Liang",
"Jingkuan Song",
"Yao Zhao",
"Yunchao Wei"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=c3kuX7ltzr | @inproceedings{
wu2023forb,
title={{FORB}: A Flat Object Retrieval Benchmark for Universal Image Embedding},
author={Pengxiang Wu and Siman Wang and Kevin S Dela Rosa and Derek Hao Hu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=c3kuX7ltzr}
} | Image retrieval is a fundamental task in computer vision. Despite recent advances in this field, many techniques have been evaluated on a limited number of domains, with a small number of instance categories. Notably, most existing works only consider domains like 3D landmarks, making it difficult to generalize the conclusions made by these works to other domains, e.g., logos and other 2D flat objects. To bridge this gap, we introduce a new dataset for benchmarking visual search methods on flat images with diverse patterns. Our flat object retrieval benchmark (FORB) supplements the commonly adopted 3D object domain, and more importantly, it serves as a testbed for assessing the image embedding quality on out-of-distribution domains. In this benchmark we investigate the retrieval accuracy of representative methods in terms of candidate ranks, as well as matching score margin, a viewpoint which is largely ignored by many works. Our experiments not only highlight the challenges and rich heterogeneity of FORB, but also reveal the hidden properties of different retrieval strategies. The proposed benchmark is a growing project and we expect it to expand in both the quantity and variety of objects. The dataset and supporting code are available at https://github.com/pxiangwu/FORB/. | FORB: A Flat Object Retrieval Benchmark for Universal Image Embedding | [
"Pengxiang Wu",
"Siman Wang",
"Kevin S Dela Rosa",
"Derek Hao Hu"
] | Track/Datasets_and_Benchmarks | poster | 2309.16249 | [
"https://github.com/pxiangwu/forb"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=bqXduvuW5E | @inproceedings{
gao2023proteininvbench,
title={ProteinInvBench: Benchmarking Protein Inverse Folding on Diverse Tasks, Models, and Metrics},
author={Zhangyang Gao and Cheng Tan and Yijie Zhang and Xingran Chen and Lirong Wu and Stan Z. Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=bqXduvuW5E}
} | Protein inverse folding has attracted increasing attention in recent years. However, we observe that current methods are usually limited to the CATH dataset and the recovery metric. The lack of a unified framework for ensembling and comparing different methods hinders the comprehensive investigation. In this paper, we propose ProteinBench, a new benchmark for protein design, which comprises extended protein design tasks, integrated models, and diverse evaluation metrics. We broaden the application of methods originally designed for single-chain protein design to new scenarios of multi-chain and \textit{de novo} protein design. Recent impressive methods, including GraphTrans, StructGNN, GVP, GCA, AlphaDesign, ProteinMPNN, PiFold and KWDesign are integrated into our framework. In addition to the recovery, we also evaluate the confidence, diversity, sc-TM, efficiency, and robustness to thoroughly revisit current protein design approaches and inspire future work. As a result, we establish the first comprehensive benchmark for protein design, which is publicly available at \url{https://github.com/A4Bio/OpenCPD}. | ProteinInvBench: Benchmarking Protein Inverse Folding on Diverse Tasks, Models, and Metrics | [
"Zhangyang Gao",
"Cheng Tan",
"Yijie Zhang",
"Xingran Chen",
"Lirong Wu",
"Stan Z. Li"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=bmfMNIf1bU | @inproceedings{
zhao2023eveye,
title={{EV}-Eye: Rethinking High-frequency Eye Tracking through the Lenses of Event Cameras},
author={Guangrong Zhao and Yurun Yang and Jingwei Liu and Ning Chen and Yiran Shen and Hongkai Wen and Guohao Lan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=bmfMNIf1bU}
} | In this paper, we present EV-Eye, a first-of-its-kind large-scale multimodal eye tracking dataset aimed at inspiring research on high-frequency eye/gaze tracking. EV-Eye utilizes an emerging bio-inspired event camera to capture independent pixel-level intensity changes induced by eye movements, achieving sub-microsecond latency. Our dataset was curated over a two-week period and collected from 48 participants encompassing diverse genders and age groups. It comprises over 1.5 million near-eye grayscale images and 2.7 billion event samples generated by two DAVIS346 event cameras. Additionally, the dataset contains 675 thousand scene images and 2.7 million gaze references captured by a Tobii Pro Glasses 3 eye tracker for cross-modality validation. Compared with existing event-based high-frequency eye tracking datasets, our dataset is significantly larger in size, and the gaze references involve more natural eye movement patterns, i.e., fixation, saccade and smooth pursuit. Alongside the event data, we also present a hybrid eye tracking method as a benchmark, which leverages both the near-eye grayscale images and event data for robust and high-frequency eye tracking. We show that our method achieves higher accuracy for both pupil and gaze estimation tasks compared to the existing solution. | EV-Eye: Rethinking High-frequency Eye Tracking through the Lenses of Event Cameras | [
"Guangrong Zhao",
"Yurun Yang",
"Jingwei Liu",
"Ning Chen",
"Yiran Shen",
"Hongkai Wen",
"Guohao Lan"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=bjvRVA2ihO | @inproceedings{
mougan2023how,
title={How to Data in Datathons},
author={Carlos Mougan and Richard Plant and Clare Teng and Marya Bazzi and Alvaro Cabrejas-Egea and Ryan Sze-Yin Chan and David Salvador Jasin and martin stoffel and Kirstie Jane Whitaker and JULES MANSER},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=bjvRVA2ihO}
} | The rise of datathons, also known as data or data science hackathons, has provided a platform to collaborate, learn, and innovate quickly. Despite their significant potential benefits, organizations often struggle to effectively work with data due to a lack of clear guidelines and best practices for potential issues that might arise. Drawing on our own experiences and insights from organizing 80+ datathon challenges with 60+ partner organizations since 2016, we provide a guide that serves as a resource for organizers to navigate the data-related complexities of datathons. We apply our proposed framework to 10 case studies. | How to Data in Datathons | [
"Carlos Mougan",
"Richard Plant",
"Clare Teng",
"Marya Bazzi",
"Alvaro Cabrejas-Egea",
"Ryan Sze-Yin Chan",
"David Salvador Jasin",
"martin stoffel",
"Kirstie Jane Whitaker",
"JULES MANSER"
] | Track/Datasets_and_Benchmarks | poster | 2309.09770 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=bdWkFt7M6X | @inproceedings{
moon2023a,
title={A Dataset of Relighted 3D Interacting Hands},
author={Gyeongsik Moon and Shunsuke Saito and Weipeng Xu and Rohan Joshi and Julia Buffalini and Harley Bellan and Nicholas Matthew Rosen and Jesse Richardson and Mallorie Mize and Philippe De Bree and Tomas Simon and Bo Peng and Shubham Garg and Kevyn Alex Anthony McPhail and Takaaki Shiratori},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=bdWkFt7M6X}
} | The two-hand interaction is one of the most challenging signals to analyze due to the self-similarity, complicated articulations, and occlusions of hands. Although several datasets have been proposed for two-hand interaction analysis, none of them achieves both 1) diverse and realistic image appearances and 2) diverse and large-scale groundtruth (GT) 3D poses at the same time. In this work, we propose Re:InterHand, a dataset of relighted 3D interacting hands that achieves both goals. To this end, we employ a state-of-the-art hand relighting network with our accurately tracked two-hand 3D poses. We compare our Re:InterHand with existing 3D interacting hands datasets and show its benefits. Our Re:InterHand is available at https://mks0601.github.io/ReInterHand/ | A Dataset of Relighted 3D Interacting Hands | [
"Gyeongsik Moon",
"Shunsuke Saito",
"Weipeng Xu",
"Rohan Joshi",
"Julia Buffalini",
"Harley Bellan",
"Nicholas Matthew Rosen",
"Jesse Richardson",
"Mallorie Mize",
"Philippe De Bree",
"Tomas Simon",
"Bo Peng",
"Shubham Garg",
"Kevyn Alex Anthony McPhail",
"Takaaki Shiratori"
] | Track/Datasets_and_Benchmarks | poster | 2310.17768 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=bW1uwPV3im | @inproceedings{
gao2023lora,
title={Lo{RA}: A Logical Reasoning Augmented Dataset for Visual Question Answering},
author={Jingying Gao and Qi Wu and Alan Blair and Maurice Pagnucco},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=bW1uwPV3im}
} | The capacity to reason logically is a hallmark of human cognition. Humans excel at integrating multimodal information for logical reasoning, as exemplified by the Visual Question Answering (VQA) task, which is a challenging multimodal task. VQA tasks and large vision-and-language models aim to tackle reasoning problems, but the accuracy, consistency and fabrication of the generated answers are hard to evaluate in the absence of a VQA dataset that can offer formal, comprehensive and systematic complex logical reasoning questions. To address this gap, we present LoRA, a novel Logical Reasoning Augmented VQA dataset that requires formal and complex description logic reasoning based on a food-and-kitchen knowledge base. Our main objective in creating LoRA is to enhance the complex and formal logical reasoning capabilities of VQA models, which are not adequately measured by existing VQA datasets. We devise strong and flexible programs to automatically generate 200,000 diverse description logic reasoning questions based on the SROIQ Description Logic, along with realistic kitchen scenes and ground truth answers. We fine-tune the latest transformer VQA models and evaluate the zero-shot performance of the state-of-the-art large vision-and-language models on LoRA. The results reveal that LoRA presents a unique challenge in logical reasoning, setting a systematic and comprehensive evaluation standard. | LoRA: A Logical Reasoning Augmented Dataset for Visual Question Answering | [
"Jingying Gao",
"Qi Wu",
"Alan Blair",
"Maurice Pagnucco"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=aKnWIrDPiR | @inproceedings{
chen2023multimodal,
title={Multimodal Clinical Benchmark for Emergency Care ({MC}-{BEC}): A Comprehensive Benchmark for Evaluating Foundation Models in Emergency Medicine},
author={Emma Chen and Aman Kansal and Julie Chen and Boyang Tom Jin and Julia Rachel Reisler and David A Kim and Pranav Rajpurkar},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=aKnWIrDPiR}
} | We propose the Multimodal Clinical Benchmark for Emergency Care (MC-BEC), a comprehensive benchmark for evaluating foundation models in Emergency Medicine using a dataset of 100K+ continuously monitored Emergency Department visits from 2020-2022. MC-BEC focuses on clinically relevant prediction tasks at timescales from minutes to days, including predicting patient decompensation, disposition, and emergency department (ED) revisit, and includes a standardized evaluation framework with train-test splits and evaluation metrics. The multimodal dataset includes a wide range of detailed clinical data, including triage information, prior diagnoses and medications, continuously measured vital signs, electrocardiogram and photoplethysmograph waveforms, orders placed and medications administered throughout the visit, free-text reports of imaging studies, and information on ED diagnosis, disposition, and subsequent revisits. We provide performance baselines for each prediction task to enable the evaluation of multimodal, multitask models. We believe that MC-BEC will encourage researchers to develop more effective, generalizable, and accessible foundation models for multimodal clinical data. | Multimodal Clinical Benchmark for Emergency Care (MC-BEC): A Comprehensive Benchmark for Evaluating Foundation Models in Emergency Medicine | [
"Emma Chen",
"Aman Kansal",
"Julie Chen",
"Boyang Tom Jin",
"Julia Rachel Reisler",
"David A Kim",
"Pranav Rajpurkar"
] | Track/Datasets_and_Benchmarks | poster | 2311.04937 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=ZsDB2GzsqG | @inproceedings{
zhang2023magicbrush,
title={MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing},
author={Kai Zhang and Lingbo Mo and Wenhu Chen and Huan Sun and Yu Su},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=ZsDB2GzsqG}
} | Text-guided image editing is widely needed in daily life, ranging from personal use to professional applications such as Photoshop.
However, existing methods are either zero-shot or trained on an automatically synthesized dataset, which contains a high volume of noise.
Thus, they still require lots of manual tuning to produce desirable outcomes in practice.
To address this issue, we introduce MagicBrush, the first large-scale, manually annotated dataset for instruction-guided real image editing that covers diverse scenarios: single-turn, multi-turn, mask-provided, and mask-free editing.
MagicBrush comprises over 10K manually annotated triplets (source image, instruction, target image), which supports training large-scale text-guided image editing models.
We fine-tune InstructPix2Pix on MagicBrush and show that the new model can produce much better images according to human evaluation.
We further conduct extensive experiments to evaluate current image editing baselines from multiple dimensions including quantitative, qualitative, and human evaluations.
The results reveal the challenging nature of our dataset and the gap between current baselines and real-world editing needs. | MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing | [
"Kai Zhang",
"Lingbo Mo",
"Wenhu Chen",
"Huan Sun",
"Yu Su"
] | Track/Datasets_and_Benchmarks | poster | 2306.10012 | [
"https://github.com/osu-nlp-group/magicbrush"
] | https://huggingface.co/papers/2306.10012 | 2 | 35 | 6 | 5 | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=ZrNRBmOzwE | @inproceedings{
marone2023data,
title={Data Portraits: Recording Foundation Model Training Data},
author={Marc Marone and Benjamin Van Durme},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=ZrNRBmOzwE}
} | Foundation models are trained on increasingly immense and opaque datasets. Even while these models are now key in AI system building, it can be difficult to answer the straightforward question: has the model already encountered a given example during training? We therefore propose a widespread adoption of Data Portraits: artifacts that record training data and allow for downstream inspection. First we outline the properties of such an artifact and discuss how existing solutions can be used to increase transparency. We then propose and implement a solution based on data sketching, stressing fast and space efficient querying. Using our tools, we document a popular language modeling corpus (The Pile) and a recently released code modeling dataset (The Stack). We show that our solution enables answering questions about test set leakage and model plagiarism. Our tool is lightweight and fast, costing only 3% of the dataset size in overhead. We release a live interface of our tools at https://dataportraits.org/ and call on dataset and model creators to release Data Portraits as a complement to current documentation practices. | Data Portraits: Recording Foundation Model Training Data | [
"Marc Marone",
"Benjamin Van Durme"
] | Track/Datasets_and_Benchmarks | poster | 2303.03919 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=ZknHnDDxng | @inproceedings{
yang2023vidchaptersm,
title={VidChapters-7M: Video Chapters at Scale},
author={Antoine Yang and Arsha Nagrani and Ivan Laptev and Josef Sivic and Cordelia Schmid},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=ZknHnDDxng}
} | Segmenting untrimmed videos into chapters enables users to quickly navigate to the information of their interest. This important topic has been understudied due to the lack of publicly released datasets. To address this issue, we present VidChapters-7M, a dataset of 817K user-chaptered videos including 7M chapters in total. VidChapters-7M is automatically created from videos online in a scalable manner by scraping user-annotated chapters and hence without any additional manual annotation. We introduce the following three tasks based on this data. First, the video chapter generation task consists of temporally segmenting the video and generating a chapter title for each segment. To further dissect the problem, we also define two variants of this task: video chapter generation given ground-truth boundaries, which requires generating a chapter title given an annotated video segment, and video chapter grounding, which requires temporally localizing a chapter given its annotated title. We benchmark both simple baselines as well as state-of-the-art video-language models on these three tasks. We also show that pretraining on VidChapters-7M transfers well to dense video captioning tasks, largely improving the state of the art on the YouCook2 and ViTT benchmarks. Finally, our experiments reveal that downstream performance scales well with the size of the pretraining dataset. | VidChapters-7M: Video Chapters at Scale | [
"Antoine Yang",
"Arsha Nagrani",
"Ivan Laptev",
"Josef Sivic",
"Cordelia Schmid"
] | Track/Datasets_and_Benchmarks | poster | 2309.13952 | [
""
] | https://huggingface.co/papers/2309.13952 | 1 | 9 | 3 | 5 | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=ZbmS3MU25p | @inproceedings{
kusa2023csmed,
title={{CSM}eD: Bridging the Dataset Gap in Automated Citation Screening for Systematic Literature Reviews},
author={Wojciech Kusa and Oscar E. Mendoza and Matthias Samwald and Petr Knoth and Allan Hanbury},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=ZbmS3MU25p}
} | Systematic literature reviews (SLRs) play an essential role in summarising, synthesising and validating scientific evidence. In recent years, there has been a growing interest in using machine learning techniques to automate the identification of relevant studies for SLRs. However, the lack of standardised evaluation datasets makes comparing the performance of such automated literature screening systems difficult. In this paper, we analyse the citation screening evaluation datasets, revealing that many of the available datasets are either too small, suffer from data leakage or have limited applicability to systems treating automated literature screening as a classification task, as opposed to, for example, a retrieval or question-answering task. To address these challenges, we introduce CSMED, a meta-dataset consolidating nine publicly released collections, providing unified access to 325 SLRs from the fields of medicine and computer science. CSMED serves as a comprehensive resource for training and evaluating the performance of automated citation screening models. Additionally, we introduce CSMED-FT, a new dataset designed explicitly for evaluating the full text publication screening task. To demonstrate the utility of CSMED, we conduct experiments and establish baselines on new datasets. | CSMeD: Bridging the Dataset Gap in Automated Citation Screening for Systematic Literature Reviews | [
"Wojciech Kusa",
"Oscar E. Mendoza",
"Matthias Samwald",
"Petr Knoth",
"Allan Hanbury"
] | Track/Datasets_and_Benchmarks | poster | 2311.12474 | [
"https://github.com/wojciechkusa/systematic-review-datasets"
] | https://huggingface.co/papers/2311.12474 | 0 | 0 | 0 | 5 | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=ZV4tZgclu8 | @inproceedings{
naeini2023large,
title={Large Language Models are Fixated by Red Herrings: Exploring Creative Problem Solving and Einstellung Effect using the Only Connect Wall Dataset},
author={Saeid Alavi Naeini and Raeid Saqur and mozhgan saeidi and John Michael Giorgi and Babak Taati},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=ZV4tZgclu8}
} | The quest for human-imitative AI has been an enduring topic in AI research since its inception. The technical evolution and emerging capabilities of the latest cohort of large language models (LLMs) have reinvigorated the subject beyond academia and into the cultural zeitgeist.
While recent NLP evaluation benchmark tasks test some aspects of human-imitative behaviour (e.g., BIG-bench's `human-like behavior' tasks), few, if any, examine *creative problem solving* abilities. Creative problem solving in humans is a well-studied topic in cognitive neuroscience with standardized tests that predominantly use the ability to associate (heterogeneous) connections among clue words as a metric for creativity. Exposure to misleading stimuli --- distractors dubbed *red herrings* --- impedes human performance in such tasks via the *fixation effect* and Einstellung paradigm. In cognitive neuroscience studies, such fixations are experimentally induced by pre-exposing participants to orthographically similar incorrect words to subsequent word-fragments or clues. The popular British quiz show Only Connect's *Connecting Wall* segment essentially mimics Mednick's Remote Associates Test (RAT) formulation with built-in, deliberate red herrings, which makes it an ideal proxy dataset for exploring and studying the fixation effect and Einstellung paradigm from cognitive neuroscience in LLMs. In addition to presenting the novel Only Connect Wall (OCW) dataset, we also report results from our evaluation of selected pre-trained language models and LLMs (including OpenAI's GPT series) on creative problem solving tasks like grouping clue words by heterogeneous connections, and identifying correct open knowledge domain connections in respective groups. We synthetically generate two additional datasets, OCW-Randomized and OCW-WordNet, to further analyze our red-herrings hypothesis in language models.
The code and link to the dataset is available at [url](https://github.com/TaatiTeam/OCW). | Large Language Models are Fixated by Red Herrings: Exploring Creative Problem Solving and Einstellung Effect using the Only Connect Wall Dataset | [
"Saeid Alavi Naeini",
"Raeid Saqur",
"mozhgan saeidi",
"John Michael Giorgi",
"Babak Taati"
] | Track/Datasets_and_Benchmarks | poster | 2306.11167 | [
"https://github.com/taatiteam/ocw"
] | https://huggingface.co/papers/2306.11167 | 3 | 2 | 0 | 5 | [] | [
"TaatiTeam/OCW",
"TaatiTeam/OCW_main",
"TaatiTeam/OCW_randomized",
"TaatiTeam/OCW_wordnet"
] | [] | 1 |
null | https://openreview.net/forum?id=ZJWQfgXQb6 | @inproceedings{
pyarelal2023the,
title={The To{MCAT} Dataset},
author={Adarsh Pyarelal and Eric Duong and Caleb Jones Shibu and Paulo Soares and Savannah Boyd and Payal Khosla and Valeria Pfeifer and Diheng Zhang and Eric S Andrews and Rick Champlin and Vincent Paul Raymond and Meghavarshini Krishnaswamy and Clayton Morrison and Emily Butler and Kobus Barnard},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=ZJWQfgXQb6}
} | We present a rich, multimodal dataset consisting of data from 40 teams of three humans conducting simulated urban search-and-rescue (SAR) missions in a Minecraft-based testbed, collected for the Theory of Mind-based Cognitive Architecture for Teams (ToMCAT) project. Modalities include two kinds of brain scan data---functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG), as well as skin conductance, heart rate, eye tracking, face images, spoken dialog audio data with automatic speech recognition (ASR) transcriptions, game screenshots, gameplay data, game performance data, demographic data, and self-report questionnaires. Each team undergoes up to six consecutive phases: three behavioral tasks, one mission training session, and two collaborative SAR missions. As time-synchronized multimodal data collected under a variety of circumstances, this dataset will support studying a large variety of research questions on topics including teamwork, coordination, plan recognition, affective computing, physiological linkage, entrainment, and dialog understanding. We provide an initial public release of the de-identified data, along with analyses illustrating the utility of this dataset to both computer scientists and social scientists. | The ToMCAT Dataset | [
"Adarsh Pyarelal",
"Eric Duong",
"Caleb Jones Shibu",
"Paulo Soares",
"Savannah Boyd",
"Payal Khosla",
"Valeria Pfeifer",
"Diheng Zhang",
"Eric S Andrews",
"Rick Champlin",
"Vincent Paul Raymond",
"Meghavarshini Krishnaswamy",
"Clayton Morrison",
"Emily Butler",
"Kobus Barnard"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=ZDnnzsado4 | @inproceedings{
gharaee2023a,
title={A Step Towards Worldwide Biodiversity Assessment: The {BIOSCAN}-1M Insect Dataset},
author={Zahra Gharaee and ZeMing Gong and Nicholas Pellegrino and Iuliia Zarubiieva and Joakim Bruslund Haurum and Scott C Lowe and Jaclyn McKeown and Chris C.Y. Ho and Joschka McLeod and Yi-Yun Catherine Wei and Jireh Agda and Sujeevan Ratnasingham and Dirk Steinke and Angel X Chang and Graham W. Taylor and Paul W. Fieguth},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=ZDnnzsado4}
} | In an effort to catalog insect biodiversity, we propose a new large dataset of hand-labelled insect images, the BIOSCAN-1M Insect Dataset. Each record is taxonomically classified by an expert, and also has associated genetic information including raw nucleotide barcode sequences and assigned barcode index numbers, which are genetic-based proxies for species classification. This paper presents a curated million-image dataset, primarily to train computer-vision models capable of providing image-based taxonomic assessment; however, the dataset also presents compelling characteristics, the study of which would be of interest to the broader machine learning community. Driven by the biological nature inherent to the dataset, a characteristic long-tailed class-imbalance distribution is exhibited. Furthermore, taxonomic labelling is a hierarchical classification scheme, presenting a highly fine-grained classification problem at lower levels. Beyond spurring interest in biodiversity research within the machine learning community, progress on creating an image-based taxonomic classifier will also further the ultimate goal of all BIOSCAN research: to lay the foundation for a comprehensive survey of global biodiversity. This paper introduces the dataset and explores the classification task through the implementation and analysis of a baseline classifier. The code repository of the BIOSCAN-1M-Insect dataset is available at https://github.com/zahrag/BIOSCAN-1M | A Step Towards Worldwide Biodiversity Assessment: The BIOSCAN-1M Insect Dataset | [
"Zahra Gharaee",
"ZeMing Gong",
"Nicholas Pellegrino",
"Iuliia Zarubiieva",
"Joakim Bruslund Haurum",
"Scott C Lowe",
"Jaclyn McKeown",
"Chris C.Y. Ho",
"Joschka McLeod",
"Yi-Yun Catherine Wei",
"Jireh Agda",
"Sujeevan Ratnasingham",
"Dirk Steinke",
"Angel X Chang",
"Graham W. Taylor",
"Paul W. Fieguth"
] | Track/Datasets_and_Benchmarks | poster | [
"https://github.com/zahrag/BIOSCAN-1M"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Ys8RmfF9w1 | @inproceedings{
chen2023uncovering,
title={Uncovering Neural Scaling Laws in Molecular Representation Learning},
author={Dingshuo Chen and Yanqiao Zhu and Jieyu Zhang and Yuanqi Du and Zhixun Li and Qiang Liu and Shu Wu and Liang Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=Ys8RmfF9w1}
} | Molecular Representation Learning (MRL) has emerged as a powerful tool for drug and materials discovery in a variety of tasks such as virtual screening and inverse design. While there has been a surge of interest in advancing model-centric techniques, the influence of both data quantity and quality on molecular representations is not yet clearly understood within this field.
In this paper, we delve into the neural scaling behaviors of MRL from a data-centric viewpoint, examining four key dimensions: (1) data modalities, (2) dataset splitting, (3) the role of pre-training, and (4) model capacity.
Our empirical studies confirm a consistent power-law relationship between data volume and MRL performance across these dimensions. Additionally, through detailed analysis, we identify potential avenues for improving learning efficiency.
To challenge these scaling laws, we adapt seven popular data pruning strategies to molecular data and benchmark their performance. Our findings underline the importance of data-centric MRL and highlight possible directions for future research. | Uncovering Neural Scaling Laws in Molecular Representation Learning | [
"Dingshuo Chen",
"Yanqiao Zhu",
"Jieyu Zhang",
"Yuanqi Du",
"Zhixun Li",
"Qiang Liu",
"Shu Wu",
"Liang Wang"
] | Track/Datasets_and_Benchmarks | poster | 2309.15123 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=YfPKQycBDE | @inproceedings{
mukherjee2023seva,
title={{SEVA}: Leveraging sketches to evaluate alignment between human and machine visual abstraction},
author={Kushin Mukherjee and Holly Huey and Xuanchen Lu and Yael Vinker and Rio Aguina-Kang and Ariel Shamir and Judith E Fan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=YfPKQycBDE}
} | Sketching is a powerful tool for creating abstract images that are sparse but meaningful. Sketch understanding poses fundamental challenges for general-purpose vision algorithms because it requires robustness to the sparsity of sketches relative to natural visual inputs and because it demands tolerance for semantic ambiguity, as sketches can reliably evoke multiple meanings. While current vision algorithms have achieved high performance on a variety of visual tasks, it remains unclear to what extent they understand sketches in a human-like way. Here we introduce $\texttt{SEVA}$, a new benchmark dataset containing approximately 90K human-generated sketches of 128 object concepts produced under different time constraints, and thus systematically varying in sparsity. We evaluated a suite of state-of-the-art vision algorithms on their ability to correctly identify the target concept depicted in these sketches and to generate responses that are strongly aligned with human response patterns on the same sketch recognition task. We found that vision algorithms that better predicted human sketch recognition performance also better approximated human uncertainty about sketch meaning, but there remains a sizable gap between model and human response patterns. To explore the potential of models that emulate human visual abstraction in generative tasks, we conducted further evaluations of a recently developed sketch generation algorithm (Vinker et al., 2022) capable of generating sketches that vary in sparsity. We hope that public release of this dataset and evaluation protocol will catalyze progress towards algorithms with enhanced capacities for human-like visual abstraction. | SEVA: Leveraging sketches to evaluate alignment between human and machine visual abstraction | null | Track/Datasets_and_Benchmarks | poster | 2312.03035 | [
"https://github.com/cogtoolslab/visual_abstractions_benchmarking_public2023"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=YdjWXrdOTh | @inproceedings{
li2023evaluating,
title={Evaluating Graph Neural Networks for Link Prediction: Current Pitfalls and New Benchmarking},
author={Juanhui Li and Harry Shomer and Haitao Mao and Shenglai Zeng and Yao Ma and Neil Shah and Jiliang Tang and Dawei Yin},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=YdjWXrdOTh}
} | Link prediction attempts to predict whether an unseen edge exists based on only a portion of the graph. A flurry of methods has been created in recent years that attempt to make use of graph neural networks (GNNs) for this task. Furthermore, new and diverse datasets have also been created to better evaluate the effectiveness of these new models. However, multiple limitations currently exist that hinder our ability to properly evaluate these new methods. These include, but are not limited to: (1) The underreporting of performance on multiple baselines, (2) A lack of a unified data split and evaluation metric on some datasets, (3) An unrealistic evaluation setting that produces negative samples that are easy to classify. To overcome these challenges, we first conduct a fair comparison across prominent methods and datasets, utilizing the same dataset settings and hyperparameter settings. We then create a new real-world evaluation setting that samples difficult negative samples via multiple heuristics. The new evaluation setting helps promote new challenges and opportunities in link prediction by aligning the evaluation with real-world situations. | Evaluating Graph Neural Networks for Link Prediction: Current Pitfalls and New Benchmarking | [
"Juanhui Li",
"Harry Shomer",
"Haitao Mao",
"Shenglai Zeng",
"Yao Ma",
"Neil Shah",
"Jiliang Tang",
"Dawei Yin"
] | Track/Datasets_and_Benchmarks | poster | 2306.10453 | [
"https://github.com/juanhui28/heart"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=YXogl4uQUO | @inproceedings{
valmeekam2023planbench,
title={PlanBench: An Extensible Benchmark for Evaluating Large Language Models on Planning and Reasoning about Change},
author={Karthik Valmeekam and Matthew Marquez and Alberto Olmo and Sarath Sreedharan and Subbarao Kambhampati},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=YXogl4uQUO}
} | Generating plans of action, and reasoning about change have long been considered a core competence of intelligent agents. It is thus no surprise that evaluating the planning and reasoning capabilities of large language models (LLMs) has become a hot topic of research. Most claims about LLM planning capabilities are however based on common sense tasks–where it becomes hard to tell whether LLMs are planning or merely retrieving from their vast world knowledge. There is a strong need for systematic and extensible planning benchmarks with sufficient diversity to evaluate whether LLMs have innate planning capabilities. Motivated by this, we propose PlanBench, an extensible benchmark suite based on the kinds of domains used in the automated planning community, especially in the International Planning Competition, to test the capabilities of LLMs in planning or reasoning about actions and change. PlanBench provides sufficient diversity in both the task domains and the specific planning capabilities. Our studies also show that on many critical capabilities–including plan generation–LLM performance falls quite short, even with the SOTA models. PlanBench can thus function as a useful marker of progress of LLMs in planning and reasoning. | PlanBench: An Extensible Benchmark for Evaluating Large Language Models on Planning and Reasoning about Change | [
"Karthik Valmeekam",
"Matthew Marquez",
"Alberto Olmo",
"Sarath Sreedharan",
"Subbarao Kambhampati"
] | Track/Datasets_and_Benchmarks | poster | 2206.10498 | [
"https://github.com/karthikv792/llms-planning"
] | https://huggingface.co/papers/2206.10498 | 1 | 0 | 0 | 5 | [] | [
"chiayewken/blocksworld",
"tasksource/planbench"
] | [] | 1 |
null | https://openreview.net/forum?id=YWJ7Yi4OtH | @inproceedings{
oh2023ecgqa,
title={{ECG}-{QA}: A Comprehensive Question Answering Dataset Combined With Electrocardiogram},
author={Jungwoo Oh and Gyubok Lee and Seongsu Bae and Joon-myoung Kwon and Edward Choi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=YWJ7Yi4OtH}
} | Question answering (QA) in the field of healthcare has received much attention due to significant advancements in natural language processing. However, existing healthcare QA datasets primarily focus on medical images, clinical notes, or structured electronic health record tables. This leaves the vast potential of combining electrocardiogram (ECG) data with these systems largely untapped. To address this gap, we present ECG-QA, the first QA dataset specifically designed for ECG analysis. The dataset comprises a total of 70 question templates that cover a wide range of clinically relevant ECG topics, each validated by an ECG expert to ensure their clinical utility. As a result, our dataset includes diverse ECG interpretation questions, including those that require a comparative analysis of two different ECGs. In addition, we have conducted numerous experiments to provide valuable insights for future research directions. We believe that ECG-QA will serve as a valuable resource for the development of intelligent QA systems capable of assisting clinicians in ECG interpretations. | ECG-QA: A Comprehensive Question Answering Dataset Combined With Electrocardiogram | [
"Jungwoo Oh",
"Gyubok Lee",
"Seongsu Bae",
"Joon-myoung Kwon",
"Edward Choi"
] | Track/Datasets_and_Benchmarks | poster | 2306.15681 | [
"https://github.com/jwoo5/ecg-qa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=YJ4ioRbxNb | @inproceedings{
matteucci2023a,
title={A benchmark of categorical encoders for binary classification},
author={Federico Matteucci and Vadim Arzamasov and Klemens B{\"o}hm},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=YJ4ioRbxNb}
} | Categorical encoders transform categorical features into numerical representations that are indispensable for a wide range of machine learning models.
Existing encoder benchmark studies lack generalizability because of their limited choice of (1) encoders, (2) experimental factors, and (3) datasets.
Additionally, inconsistencies arise from the adoption of varying aggregation strategies.
This paper is the most comprehensive benchmark of categorical encoders to date, including an extensive evaluation of 32 configurations of encoders from diverse families, with 36 combinations of experimental factors, and on 50 datasets.
The study shows the profound influence of dataset selection, experimental factors, and aggregation strategies on the benchmark's conclusions, aspects disregarded in previous encoder benchmarks.
Our code is available at \url{https://github.com/DrCohomology/EncoderBenchmarking}. | A benchmark of categorical encoders for binary classification | [
"Federico Matteucci",
"Vadim Arzamasov",
"Klemens Böhm"
] | Track/Datasets_and_Benchmarks | poster | 2307.09191 | [
""
] | https://huggingface.co/papers/2307.09191 | 0 | 0 | 0 | 3 | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=Y4GZ2w74f4 | @inproceedings{
bitton2023visitbench,
title={Vis{IT}-Bench: A Dynamic Benchmark for Evaluating Instruction-Following Vision-and-Language Models},
author={Yonatan Bitton and Hritik Bansal and Jack Hessel and Rulin Shao and Wanrong Zhu and Anas Awadalla and Joshua P Gardner and Rohan Taori and Ludwig Schmidt},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=Y4GZ2w74f4}
} | We introduce VisIT-Bench (Visual InsTruction Benchmark), a benchmark for evaluating instruction-following vision-language models for real-world use. Our starting point is curating 70 "instruction families" that we envision instruction-tuned vision-language models should be able to address. Extending beyond evaluations like VQAv2 and COCO, tasks range from basic recognition to game playing and creative generation. Following curation, our dataset comprises 592 test queries, each with a human-authored instruction-conditioned caption. These descriptions surface instruction-specific factors, e.g., for an instruction asking about the accessibility of a storefront for wheelchair users, the instruction-conditioned caption describes ramps/potential obstacles. These descriptions enable 1) collecting human-verified reference outputs for each instance; and 2) automatic evaluation of candidate multimodal generations using a text-only LLM, aligning with human judgment. We quantify quality gaps between models and references using both human and automatic evaluations; e.g., the top-performing instruction-following model wins against the GPT-4 reference in just 27% of comparisons. VisIT-Bench is dynamic: to participate, practitioners simply submit their model's responses on the project website. Data, code, and the leaderboard are available at https://visit-bench.github.io/. | VisIT-Bench: A Dynamic Benchmark for Evaluating Instruction-Following Vision-and-Language Models | [
"Yonatan Bitton",
"Hritik Bansal",
"Jack Hessel",
"Rulin Shao",
"Wanrong Zhu",
"Anas Awadalla",
"Joshua P Gardner",
"Rohan Taori",
"Ludwig Schmidt"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Y45ZCxslFx | @inproceedings{
kudugunta2023madlad,
title={{MADLAD}-400: A Multilingual And Document-Level Large Audited Dataset},
author={Sneha Kudugunta and Isaac Rayburn Caswell and Biao Zhang and Xavier Garcia and Derrick Xin and Aditya Kusupati and Romi Stella and Ankur Bapna and Orhan Firat},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=Y45ZCxslFx}
} | We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss the limitations revealed by self-auditing MADLAD-400, and the role data auditing had in the dataset creation process. We then train and release a 10.7B-parameter multilingual machine translation model on 250 billion tokens covering over 450 languages using publicly available data, and find that it is competitive with models that are significantly larger, and report the results on different domains. In addition, we train an 8B-parameter language model, and assess the results on few-shot translation. We make the baseline models available to the research community. | MADLAD-400: A Multilingual And Document-Level Large Audited Dataset | [
"Sneha Kudugunta",
"Isaac Rayburn Caswell",
"Biao Zhang",
"Xavier Garcia",
"Derrick Xin",
"Aditya Kusupati",
"Romi Stella",
"Ankur Bapna",
"Orhan Firat"
] | Track/Datasets_and_Benchmarks | poster | 2309.04662 | [
""
] | https://huggingface.co/papers/2309.04662 | 6 | 22 | 3 | 11 | [
"jbochi/madlad400-3b-mt",
"google/madlad400-3b-mt",
"google/madlad400-10b-mt",
"jbochi/madlad400-10b-mt",
"santhosh/madlad400-3b-ct2",
"google/madlad400-7b-mt",
"eryk-mazus/polka-1.1b",
"jbochi/madlad400-7b-mt",
"jbochi/madlad400-8b-lm",
"Hemanth-thunder/Tamil-Mistral-7B-v0.1",
"google/madlad400-7b-mt-bt",
"jbochi/madlad400-7b-mt-bt",
"google/madlad400-8b-lm",
"CXDuncan/madlad400-3b-mt-optimized-quantized-onnx",
"avans06/madlad400-7b-mt-bt-ct2-int8_float16",
"Heng666/madlad400-7b-mt-ct2-int8",
"CXDuncan/madlad400-3b-mt-optimized-onnx",
"RichardErkhov/Hemanth-thunder_-_Tamil-Mistral-7B-v0.1-gguf",
"ikeno-ada/madlad400-3b-mt-bitsandbytes-4bit",
"Heng666/madlad400-3b-mt-ct2",
"Heng666/madlad400-3b-mt-ct2-int8",
"Nextcloud-AI/madlad400-3b-mt-ct2-int8",
"Nextcloud-AI/madlad400-7b-mt-bt-ct2-int8",
"Nextcloud-AI/madlad400-3b-mt-ct2-int8_float32",
"Nextcloud-AI/madlad400-7b-mt-bt-ct2-int8_float32"
] | [
"allenai/MADLAD-400",
"SEACrowd/sea_madlad",
"Symato/madlad-400_vi",
"RWKV/EagleX-WorldContinued"
] | [
"jbochi/madlad400-3b-mt",
"santhosh/madlad400-3b-ct2",
"darylalim/madlad-400-translation",
"davidkim205/ko-translation-leaderbaord",
"radinhas/hf-llm-api",
"utrobinmv/TREX_benchmark_en_ru_zh",
"sejamenath2023/google-madlad400-7b-mt-bt",
"ruslanmv/hf-llm-api-collection",
"DHEIVER/hf-llm-api-pt",
"sepioo/facebook-translation",
"Heng666/madlad400-3b-ct2-int8",
"Heng666/madlad400-7b-ct2-int8",
"szymonrucinski/eryk-mazus-polka-1.1b",
"hgiux/google-madlad400-10b-mt",
"gaokai/google-madlad400-10b-mt",
"fiveliitlec/madlad-400",
"alakxender/dhivehi-english-translation-demo",
"sgutha/google-madlad400-3b-mt",
"StephaneBah/marvin",
"tirtohadi/lulutest",
"Tritkoman/madlad400-3b-ct2",
"gbiamgaurav/Text_Translation",
"NikolasPng/ikeno-ada-madlad400-3b-mt-bitsandbytes-4bit",
"cadem55/jbochi-madlad400-3b-mt",
"rr-sea/jbochi-madlad400-3b-mt",
"sarahai/madlad400",
"ashokrawat2023/hf-llm-api-dup",
"vishwask/madlad400-3b-mt",
"torchsnow/madlad400-3b-mt",
"j4xfu2mm/jbochi-madlad400-3b-mt",
"sjsbdfvv/hf-llm-apidde",
"JaneXZZ/LatinTranslator",
"ogegadavis254/hf-llm-api-collection"
] | 1 |
null | https://openreview.net/forum?id=Xoi31wJ5iI | @inproceedings{
lu2023seeing,
title={Seeing is not always believing: Benchmarking Human and Model Perception of {AI}-Generated Images},
author={Zeyu Lu and Di Huang and LEI BAI and Jingjing Qu and Chengyue Wu and Xihui Liu and Wanli Ouyang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=Xoi31wJ5iI}
} | Photos serve as a way for humans to record what they experience in their daily lives, and they are often regarded as trustworthy sources of information. However, there is a growing concern that the advancement of artificial intelligence (AI) technology may produce fake photos, which can create confusion and diminish trust in photographs. This study aims to comprehensively evaluate agents for distinguishing state-of-the-art AI-generated visual content. Our study benchmarks both human capability and cutting-edge fake image detection AI algorithms, using a newly collected large-scale fake image dataset, Fake2M. In our human perception evaluation, titled HPBench, we discovered that humans struggle significantly to distinguish real photos from AI-generated ones, with a misclassification rate of 38.7\%. Alongside this, we conduct MPBench, an evaluation of model capabilities for AI-generated image detection; the top-performing model in MPBench achieves a 13\% failure rate under the same setting used in the human evaluation.
We hope that our study can raise awareness of the potential risks of AI-generated images and facilitate further research to prevent the spread of false information. More information can refer to https://github.com/Inf-imagine/Sentry. | Seeing is not always believing: Benchmarking Human and Model Perception of AI-Generated Images | [
"Zeyu Lu",
"Di Huang",
"LEI BAI",
"Jingjing Qu",
"Chengyue Wu",
"Xihui Liu",
"Wanli Ouyang"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=XjaWEAyToL | @inproceedings{
wang2023scientific,
title={Scientific Document Retrieval using Multi-level Aspect-based Queries},
author={Jianyou Wang and Kaicheng Wang and Xiaoyue Wang and Prudhviraj Naidu and Leon Bergen and Ramamohan Paturi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=XjaWEAyToL}
} | In scientific research, the ability to effectively retrieve relevant documents based on complex, multifaceted queries is critical. Existing evaluation datasets for this task are limited, primarily due to the high costs and effort required to annotate resources that effectively represent complex queries. To address this, we propose a novel task, $\textbf{S}$cientific $\textbf{Do}$cument $\textbf{R}$etrieval using $\textbf{M}$ulti-level $\textbf{A}$spect-based qu$\textbf{E}$ries (DORIS-MAE), which is designed to handle the complex nature of user queries in scientific research. We developed a benchmark dataset within the field of computer science, consisting of 100 human-authored complex query cases. For each complex query, we assembled a collection of 100 relevant documents and produced annotated relevance scores for ranking them. Recognizing the significant labor of expert annotation, we also introduce Anno-GPT, a scalable framework for evaluating the viability of Large Language Models (LLMs) such as ChatGPT-3.5 for expert-level dataset annotation tasks. The application of Anno-GPT to annotate the DORIS-MAE dataset resulted in a 500x reduction in cost, without compromising quality. Furthermore, due to the multi-tiered structure of these complex queries, our DORIS-MAE dataset can be extended to over 4,000 sub-query test cases without requiring additional annotation. We evaluated 17 recent retrieval methods on DORIS-MAE, observing notable performance drops compared to traditional datasets. This highlights DORIS-MAE's challenges and the need for better approaches to handle complex, multifaceted queries in scientific research. Our dataset and codebase are available at https://github.com/Real-Doris-Mae/Doris-Mae-Dataset . | Scientific Document Retrieval using Multi-level Aspect-based Queries | [
"Jianyou Wang",
"Kaicheng Wang",
"Xiaoyue Wang",
"Prudhviraj Naidu",
"Leon Bergen",
"Ramamohan Paturi"
] | Track/Datasets_and_Benchmarks | poster | [
"https://github.com/real-doris-mae/doris-mae-dataset"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=XZf2bnMBag | @inproceedings{
altman2023realistic,
title={Realistic Synthetic Financial Transactions for Anti-Money Laundering Models},
author={Erik Altman and Jovan Blanu{\v{s}}a and Luc Von Niederh{\"a}usern and Beni Egressy and Andreea Anghel and Kubilay Atasu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=XZf2bnMBag}
} | With the widespread digitization of finance and the increasing popularity of cryptocurrencies, the sophistication of fraud schemes devised by cybercriminals is growing. Money laundering -- the movement of illicit funds to conceal their origins -- can cross bank and national boundaries, producing complex transaction patterns. The UN estimates 2-5\% of global GDP or \$0.8 - \$2.0 trillion dollars are laundered globally each year. Unfortunately, real data to train machine learning models to detect laundering is generally not available, and previous synthetic data generators have had significant shortcomings. A realistic, standardized, publicly-available benchmark is needed for comparing models and for the advancement of the area.
To this end, this paper contributes a synthetic financial transaction dataset generator and a set of synthetically generated AML (Anti-Money Laundering) datasets. We have calibrated this agent-based generator to match real transactions as closely as possible and made the datasets public. We describe the generator in detail and demonstrate how the datasets generated can help compare different machine learning models in terms of their AML abilities. In a key way, using synthetic data in these comparisons can be even better than using real data: the ground truth labels are complete, whilst many laundering transactions in real data are never detected. | Realistic Synthetic Financial Transactions for Anti-Money Laundering Models | [
"Erik Altman",
"Jovan Blanuša",
"Luc Von Niederhäusern",
"Beni Egressy",
"Andreea Anghel",
"Kubilay Atasu"
] | Track/Datasets_and_Benchmarks | poster | 2306.16424 | [
"https://github.com/ibm/multi-gnn"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=XYxNklOMMX | @inproceedings{
gardner2023benchmarking,
title={Benchmarking Distribution Shift in Tabular Data with TableShift},
author={Joshua P Gardner and Zoran Popovi and Ludwig Schmidt},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=XYxNklOMMX}
} | Robustness to distribution shift has become a growing concern for text and image models as they transition from research subjects to deployment in the real world. However, high-quality benchmarks for distribution shift in tabular machine learning tasks are still lacking despite the widespread real-world use of tabular data and differences in the models used for tabular data in comparison to text and images. As a consequence, the robustness of tabular models to distribution shift is poorly understood. To address this issue, we introduce TableShift, a distribution shift benchmark for tabular data. TableShift contains 15 binary classification tasks in total, each with an associated shift, and includes a diverse set of data sources, prediction targets, and distribution shifts. The benchmark covers domains including finance, education, public policy, healthcare, and civic participation, and is accessible using only a few lines of Python code via the TableShift API. We conduct a large-scale study comparing several state-of-the-art tabular data models alongside robust learning and domain generalization methods on the benchmark tasks. Our study demonstrates (1) a linear trend between in-distribution (ID) and out-of-distribution (OOD) accuracy; (2) domain robustness methods can reduce shift gaps but at the cost of reduced ID accuracy; (3) a strong relationship between shift gap (difference between ID and OOD performance) and shifts in the label distribution. The benchmark data, Python package, model implementations, and more information about TableShift are available at https://github.com/mlfoundations/tableshift and https://tableshift.org . | Benchmarking Distribution Shift in Tabular Data with TableShift | null | Track/Datasets_and_Benchmarks | poster | 2312.07577 | [
"https://github.com/mlfoundations/tableshift"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=XOpaPrb0U5 | @inproceedings{
papicchio2023qatch,
title={{QATCH}: Benchmarking {SQL}-centric tasks with Table Representation Learning Models on Your Data},
author={Simone Papicchio and Paolo Papotti and Luca Cagliero},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=XOpaPrb0U5}
} | Table Representation Learning (TRL) models are commonly pre-trained on large open-domain datasets comprising millions of tables and then used to address downstream tasks. Choosing the right TRL model to use on proprietary data can be challenging, as the best results depend on the content domain, schema, and data quality. Our purpose is to support end-users in testing TRL models on proprietary data in two established SQL-centric tasks, i.e., Question Answering (QA) and Semantic Parsing (SP). We present QATCH (Query-Aided TRL Checklist), a toolbox to highlight TRL models’ strengths and weaknesses on relational tables unseen at training time. For an input table, QATCH automatically generates a testing checklist tailored to QA and SP. Checklist generation is driven by a SQL query engine that crafts tests of different complexity. This design facilitates inherent portability, allowing the checks to be used by alternative models. We also introduce a set of cross-task performance metrics evaluating the TRL model’s performance over its output. Finally, we show how QATCH automatically generates tests for proprietary datasets to evaluate various state-of-the-art models including TAPAS, TAPEX, and CHATGPT. | QATCH: Benchmarking SQL-centric tasks with Table Representation Learning Models on Your Data | [
"Simone Papicchio",
"Paolo Papotti",
"Luca Cagliero"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Wz2BJNQlyI | @inproceedings{
lee2023visalign,
title={VisAlign: Dataset for Measuring the Alignment between {AI} and Humans in Visual Perception},
author={Jiyoung Lee and Seungho Kim and Seunghyun Won and Joonseok Lee and Marzyeh Ghassemi and James Thorne and Jaeseok Choi and O-Kil Kwon and Edward Choi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=Wz2BJNQlyI}
} | AI alignment refers to models acting towards human-intended goals, preferences, or ethical principles. Analyzing the similarity between models and humans can be a proxy measure for ensuring AI safety. In this paper, we focus on the models' visual perception alignment with humans, further referred to as AI-human visual alignment. Specifically, we propose a new dataset for measuring AI-human visual alignment in terms of image classification. In order to evaluate AI-human visual alignment, a dataset should encompass samples with various scenarios and have gold human perception labels. Our dataset consists of three groups of samples, namely Must-Act (i.e., Must-Classify), Must-Abstain, and Uncertain, based on the quantity and clarity of visual information in an image and further divided into eight categories. All samples have a gold human perception label; even Uncertain (e.g., severely blurry) sample labels were obtained via crowd-sourcing. The validity of our dataset is verified by sampling theory, statistical theories related to survey design, and experts in the related fields. Using our dataset, we analyze the visual alignment and reliability of five popular visual perception models and seven abstention methods. Our code and data are available at https://github.com/jiyounglee-0523/VisAlign. | VisAlign: Dataset for Measuring the Alignment between AI and Humans in Visual Perception | [
"Jiyoung Lee",
"Seungho Kim",
"Seunghyun Won",
"Joonseok Lee",
"Marzyeh Ghassemi",
"James Thorne",
"Jaeseok Choi",
"O-Kil Kwon",
"Edward Choi"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=WtajAo0JWU | @inproceedings{
lin2023motionx,
title={Motion-X: A Large-scale 3D Expressive Whole-body Human Motion Dataset},
author={Jing Lin and Ailing Zeng and Shunlin Lu and Yuanhao Cai and Ruimao Zhang and Haoqian Wang and Lei Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=WtajAo0JWU}
} | In this paper, we present Motion-X, a large-scale 3D expressive whole-body motion dataset. Existing motion datasets predominantly contain body-only poses, lacking facial expressions, hand gestures, and fine-grained pose descriptions. Moreover, they are primarily collected from limited laboratory scenes with textual descriptions manually labeled, which greatly limits their scalability. To overcome these limitations, we develop a whole-body motion and text annotation pipeline, which can automatically annotate motion from either single- or multi-view videos and provide comprehensive semantic labels for each video and fine-grained whole-body pose descriptions for each frame. This pipeline is of high precision, cost-effective, and scalable for further research. Based on it, we construct Motion-X, which comprises 15.6M precise 3D whole-body pose annotations (i.e., SMPL-X) covering 81.1K motion sequences from massive scenes. Besides, Motion-X provides 15.6M frame-level whole-body pose descriptions and 81.1K sequence-level semantic labels. Comprehensive experiments demonstrate the accuracy of the annotation pipeline and the significant benefit of Motion-X in enhancing expressive, diverse, and natural motion generation, as well as 3D whole-body human mesh recovery. | Motion-X: A Large-scale 3D Expressive Whole-body Human Motion Dataset | null | Track/Datasets_and_Benchmarks | poster | 2307.00818 | [
"https://github.com/idea-research/motion-x"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=WqSPQFxFRC | @inproceedings{
guha2023legalbench,
title={LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models},
author={Neel Guha and Julian Nyarko and Daniel E. Ho and Christopher Re and Adam Chilton and Aditya Narayana and Alex Chohlas-Wood and Austin Peters and Brandon Waldon and Daniel Rockmore and Diego Zambrano and Dmitry Talisman and Enam Hoque and Faiz Surani and Frank Fagan and Galit Sarfaty and Gregory M. Dickinson and Haggai Porat and Jason Hegland and Jessica Wu and Joe Nudell and Joel Niklaus and John J Nay and Jonathan H. Choi and Kevin Tobia and Margaret Hagan and Megan Ma and Michael Livermore and Nikon Rasumov-Rahe and Nils Holzenberger and Noam Kolt and Peter Henderson and Sean Rehaag and Sharad Goel and Shang Gao and Spencer Williams and Sunny Gandhi and Tom Zur and Varun Iyer and Zehua Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=WqSPQFxFRC}
} | The advent of large language models (LLMs) and their adoption by the legal community has given rise to the question: what types of legal reasoning can LLMs perform? To enable greater study of this question, we present LegalBench: a collaboratively constructed legal reasoning benchmark consisting of 162 tasks covering six different types of legal reasoning. LegalBench was built through an interdisciplinary process, in which we collected tasks designed and hand-crafted by legal professionals. Because these subject matter experts took a leading role in construction, tasks either measure legal reasoning capabilities that are practically useful, or measure reasoning skills that lawyers find interesting. To enable cross-disciplinary conversations about LLMs in the law, we additionally show how popular legal frameworks for describing legal reasoning—which distinguish between its many forms—correspond to LegalBench tasks, thus giving lawyers and LLM developers a common vocabulary. This paper describes LegalBench, presents an empirical evaluation of 20 open-source and commercial LLMs, and illustrates the types of research explorations LegalBench enables. | LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models | [
"Neel Guha",
"Julian Nyarko",
"Daniel E. Ho",
"Christopher Re",
"Adam Chilton",
"Aditya Narayana",
"Alex Chohlas-Wood",
"Austin Peters",
"Brandon Waldon",
"Daniel Rockmore",
"Diego Zambrano",
"Dmitry Talisman",
"Enam Hoque",
"Faiz Surani",
"Frank Fagan",
"Galit Sarfaty",
"Gregory M. Dickinson",
"Haggai Porat",
"Jason Hegland",
"Jessica Wu",
"Joe Nudell",
"Joel Niklaus",
"John J Nay",
"Jonathan H. Choi",
"Kevin Tobia",
"Margaret Hagan",
"Megan Ma",
"Michael Livermore",
"Nikon Rasumov-Rahe",
"Nils Holzenberger",
"Noam Kolt",
"Peter Henderson",
"Sean Rehaag",
"Sharad Goel",
"Shang Gao",
"Spencer Williams",
"Sunny Gandhi",
"Tom Zur",
"Varun Iyer",
"Zehua Li"
] | Track/Datasets_and_Benchmarks | poster | 2308.11462 | [
"https://github.com/hazyresearch/legalbench"
] | https://huggingface.co/papers/2308.11462 | 1 | 2 | 0 | 40 | [] | [
"nguha/legalbench"
] | [] | 1 |
null | https://openreview.net/forum?id=Wbr51vK331 | @inproceedings{
ghosh2023geneval,
title={GenEval: An object-focused framework for evaluating text-to-image alignment},
author={Dhruba Ghosh and Hannaneh Hajishirzi and Ludwig Schmidt},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=Wbr51vK331}
} | Recent breakthroughs in diffusion models, multimodal pretraining, and efficient finetuning have led to an explosion of text-to-image generative models.
Given human evaluation is expensive and difficult to scale, automated methods are critical for evaluating the increasingly large number of new models.
However, most current automated evaluation metrics like FID or CLIPScore only offer a distribution-level measure of image quality or image-text alignment, and are unsuited for fine-grained or instance-level analysis.
In this paper, we introduce GenEval, an object-focused framework to evaluate compositional image properties such as object co-occurrence, position, count, and color.
We show that current object detection models can be leveraged to evaluate text-to-image models on a variety of generation tasks with strong human agreement, and that other discriminative vision models can be linked to this pipeline to further verify properties like object color.
We then evaluate several open-source text-to-image models and analyze their relative reasoning capabilities on our benchmark.
We find that recent models demonstrate significant improvement on these tasks, though they are still lacking in complex capabilities such as spatial relations and attribute binding.
Finally, we demonstrate how GenEval might be used to help discover existing failure modes, in order to inform development of the next generation of text-to-image models.
Our code to run the GenEval framework will be made publicly available at https://github.com/djghosh13/geneval. | GenEval: An object-focused framework for evaluating text-to-image alignment | [
"Dhruba Ghosh",
"Hannaneh Hajishirzi",
"Ludwig Schmidt"
] | Track/Datasets_and_Benchmarks | poster | 2310.11513 | [
"https://github.com/djghosh13/geneval"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=WZmlxIuIGR | @inproceedings{
ji2023safety,
title={Safety Gymnasium: A Unified Safe Reinforcement Learning Benchmark},
author={Jiaming Ji and Borong Zhang and Jiayi Zhou and Xuehai Pan and Weidong Huang and Ruiyang Sun and Yiran Geng and Yifan Zhong and Josef Dai and Yaodong Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=WZmlxIuIGR}
} | Artificial intelligence (AI) systems possess significant potential to drive societal progress. However, their deployment often faces obstacles due to substantial safety concerns. Safe reinforcement learning (SafeRL) emerges as a solution to optimize policies while simultaneously adhering to multiple constraints, thereby addressing the challenge of integrating reinforcement learning in safety-critical scenarios. In this paper, we present an environment suite called Safety-Gymnasium, which encompasses safety-critical tasks in both single and multi-agent scenarios, accepting vector and vision-only input. Additionally, we offer a library of algorithms named Safe Policy Optimization (SafePO), comprising 16 state-of-the-art SafeRL algorithms. This comprehensive library can serve as a validation tool for the research community. By introducing this benchmark, we aim to facilitate the evaluation and comparison of safety performance, thus fostering the development of reinforcement learning for safer, more reliable, and responsible real-world applications. The website of this project can be accessed at https://sites.google.com/view/safety-gymnasium. | Safety Gymnasium: A Unified Safe Reinforcement Learning Benchmark | [
"Jiaming Ji",
"Borong Zhang",
"Jiayi Zhou",
"Xuehai Pan",
"Weidong Huang",
"Ruiyang Sun",
"Yiran Geng",
"Yifan Zhong",
"Josef Dai",
"Yaodong Yang"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=W6xb7bkbYA | @inproceedings{
ahn2023udcsit,
title={{UDC}-{SIT}: A Real-World Dataset for Under-Display Cameras},
author={Kyusu Ahn and Byeonghyun Ko and HyunGyu Lee and Chanwoo Park and Jaejin Lee},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=W6xb7bkbYA}
} | Under Display Camera (UDC) is a novel imaging system that mounts a digital camera lens beneath a display panel with the panel covering the camera. However, the display panel causes severe degradation to captured images, such as low transmittance, blur, noise, and flare. The restoration of UDC-degraded images is challenging because of the unique luminance and diverse patterns of flares. Existing UDC dataset studies focus on unrealistic or synthetic UDC degradation rather than real-world UDC images. In this paper, we propose a real-world UDC dataset called UDC-SIT. To obtain the non-degraded and UDC-degraded images for the same scene, we propose an image-capturing system and an image alignment technique that exploits discrete Fourier transform (DFT) to align a pair of captured images. UDC-SIT also includes comprehensive annotations missing from other UDC datasets, such as light source, day/night, indoor/outdoor, and flare components (e.g., shimmers, streaks, and glares). We compare UDC-SIT with four existing representative UDC datasets and present the problems with existing UDC datasets. To show UDC-SIT's effectiveness, we compare UDC-SIT and a representative synthetic UDC dataset using four representative learnable image restoration models. The result indicates that the models trained with the synthetic UDC dataset are impractical because the synthetic UDC dataset does not reflect the actual characteristics of UDC-degraded images. UDC-SIT can enable further exploration in the UDC image restoration area and provide better insights into the problem. UDC-SIT is available at: https://github.com/mcrl/UDC-SIT. | UDC-SIT: A Real-World Dataset for Under-Display Cameras | [
"Kyusu Ahn",
"Byeonghyun Ko",
"HyunGyu Lee",
"Chanwoo Park",
"Jaejin Lee"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=W5If9P1xqO | @inproceedings{
yu2023climsim,
title={ClimSim: A large multi-scale dataset for hybrid physics-{ML} climate emulation},
author={Sungduk Yu and Walter Hannah and Liran Peng and Jerry Lin and Mohamed Aziz Bhouri and Ritwik Gupta and Bj{\"o}rn L{\"u}tjens and Justus Christopher Will and Gunnar Behrens and Julius Busecke and Nora Loose and Charles I Stern and Tom Beucler and Bryce Harrop and Benjamin R Hillman and Andrea Jenney and Savannah Ferretti and Nana Liu and Anima Anandkumar and Noah D Brenowitz and Veronika Eyring and Nicholas Geneva and Pierre Gentine and Stephan Mandt and Jaideep Pathak and Akshay Subramaniam and Carl Vondrick and Rose Yu and Laure Zanna and Tian Zheng and Ryan Abernathey and Fiaz Ahmed and David C Bader and Pierre Baldi and Elizabeth Barnes and Christopher Bretherton and Peter Caldwell and Wayne Chuang and Yilun Han and YU HUANG and Fernando Iglesias-Suarez and Sanket Jantre and Karthik Kashinath and Marat Khairoutdinov and Thorsten Kurth and Nicholas Lutsko and Po-Lun Ma and Griffin Mooers and J. David Neelin and David Randall and Sara Shamekh and Mark A Taylor and Nathan Urban and Janni Yuval and Guang Zhang and Michael Pritchard},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=W5If9P1xqO}
} | Modern climate projections lack adequate spatial and temporal resolution due to computational constraints. A consequence is inaccurate and imprecise predictions of critical processes such as storms. Hybrid methods that combine physics with machine learning (ML) have introduced a new generation of higher fidelity climate simulators that can sidestep Moore's Law by outsourcing compute-hungry, short, high-resolution simulations to ML emulators. However, this hybrid ML-physics simulation approach requires domain-specific treatment and has been inaccessible to ML experts because of lack of training data and relevant, easy-to-use workflows. We present ClimSim, the largest-ever dataset designed for hybrid ML-physics research. It comprises multi-scale climate simulations, developed by a consortium of climate scientists and ML researchers. It consists of 5.7 billion pairs of multivariate input and output vectors that isolate the influence of locally-nested, high-resolution, high-fidelity physics on a host climate simulator's macro-scale physical state.
The dataset is global in coverage, spans multiple years at high sampling frequency, and is designed such that resulting emulators are compatible with downstream coupling into operational climate simulators. We implement a range of deterministic and stochastic regression baselines to highlight the ML challenges and their scoring. The data (https://huggingface.co/datasets/LEAP/ClimSim_high-res) and code (https://leap-stc.github.io/ClimSim) are released openly to support the development of hybrid ML-physics and high-fidelity climate simulations for the benefit of science and society. | ClimSim: A large multi-scale dataset for hybrid physics-ML climate emulation | [
"Sungduk Yu",
"Walter Hannah",
"Liran Peng",
"Jerry Lin",
"Mohamed Aziz Bhouri",
"Ritwik Gupta",
"Björn Lütjens",
"Justus Christopher Will",
"Gunnar Behrens",
"Julius Busecke",
"Nora Loose",
"Charles I Stern",
"Tom Beucler",
"Bryce Harrop",
"Benjamin R Hillman",
"Andrea Jenney",
"Savannah Ferretti",
"Nana Liu",
"Anima Anandkumar",
"Noah D Brenowitz",
"Veronika Eyring",
"Nicholas Geneva",
"Pierre Gentine",
"Stephan Mandt",
"Jaideep Pathak",
"Akshay Subramaniam",
"Carl Vondrick",
"Rose Yu",
"Laure Zanna",
"Tian Zheng",
"Ryan Abernathey",
"Fiaz Ahmed",
"David C Bader",
"Pierre Baldi",
"Elizabeth Barnes",
"Christopher Bretherton",
"Peter Caldwell",
"Wayne Chuang",
"Yilun Han",
"YU HUANG",
"Fernando Iglesias-Suarez",
"Sanket Jantre",
"Karthik Kashinath",
"Marat Khairoutdinov",
"Thorsten Kurth",
"Nicholas Lutsko",
"Po-Lun Ma",
"Griffin Mooers",
"J. David Neelin",
"David Randall",
"Sara Shamekh",
"Mark A Taylor",
"Nathan Urban",
"Janni Yuval",
"Guang Zhang",
"Michael Pritchard"
] | Track/Datasets_and_Benchmarks | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=VtbKj2xlhI | @inproceedings{
tsutsui2023wbcatt,
title={{WBCA}tt: A White Blood Cell Dataset Annotated with Detailed Morphological Attributes},
author={Satoshi Tsutsui and Winnie Pang and Bihan Wen},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=VtbKj2xlhI}
} | The examination of blood samples at a microscopic level plays a fundamental role in clinical diagnostics. For instance, an in-depth study of White Blood Cells (WBCs), a crucial component of our blood, is essential for diagnosing blood-related diseases such as leukemia and anemia. While multiple datasets containing WBC images have been proposed, they mostly focus on cell categorization, often lacking the necessary morphological details to explain such categorizations, despite the importance of explainable artificial intelligence (XAI) in medical domains. This paper seeks to address this limitation by introducing comprehensive annotations for WBC images. Through collaboration with pathologists, a thorough literature review, and manual inspection of microscopic images, we have identified 11 morphological attributes associated with the cell and its components (nucleus, cytoplasm, and granules). We then annotated ten thousand WBC images with these attributes, resulting in 113k labels (11 attributes x 10.3k images). Annotating at this level of detail and scale is unprecedented, offering unique value to AI in pathology. Moreover, we conduct experiments to predict these attributes from cell images, and also demonstrate specific applications that can benefit from our detailed annotations. Overall, our dataset paves the way for interpreting WBC recognition models, further advancing XAI in the fields of pathology and hematology. | WBCAtt: A White Blood Cell Dataset Annotated with Detailed Morphological Attributes | [
"Satoshi Tsutsui",
"Winnie Pang",
"Bihan Wen"
] | Track/Datasets_and_Benchmarks | poster | 2306.13531 | [
"https://github.com/apple2373/wbcatt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=Vn5qZGxGj3 | @inproceedings{
teng2023satbird,
title={SatBird: a Dataset for Bird Species Distribution Modeling using Remote Sensing and Citizen Science Data},
author={M{\'e}lisande Teng and Amna Elmustafa and Benjamin Akera and Yoshua Bengio and Hager Radi and Hugo Larochelle and David Rolnick},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=Vn5qZGxGj3}
} | Biodiversity is declining at an unprecedented rate, impacting ecosystem services necessary to ensure food, water, and human health and well-being. Understanding the distribution of species and their habitats is crucial for conservation policy planning.
However, traditional methods in ecology for species distribution models (SDMs) generally focus either on narrow sets of species or narrow geographical areas and there remain significant knowledge gaps about the distribution of species. A major reason for this is the limited availability of data traditionally used, due to the prohibitive amount of effort and expertise required for traditional field monitoring.
The wide availability of remote sensing data and the growing adoption of citizen science tools to collect species observations data at low cost offer an opportunity for improving biodiversity monitoring and enabling the modelling of complex ecosystems. We introduce a novel task for mapping bird species to their habitats by predicting species encounter rates from satellite images, and present SatBird, a satellite dataset of locations in the USA with labels derived from presence-absence observation data from the citizen science database eBird, considering summer (breeding) and winter seasons. We also provide a dataset in Kenya representing low-data regimes. We additionally provide environmental data and species range maps for each location. We benchmark a set of baselines on our dataset, including SOTA models for remote sensing tasks. SatBird opens up possibilities for scalably modelling properties of ecosystems worldwide. | SatBird: a Dataset for Bird Species Distribution Modeling using Remote Sensing and Citizen Science Data | [
"Mélisande Teng",
"Amna Elmustafa",
"Benjamin Akera",
"Yoshua Bengio",
"Hager Radi",
"Hugo Larochelle",
"David Rolnick"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=VeJgZYhT7H | @inproceedings{
bender2023learning,
title={Learning to Taste: A Multimodal Wine Dataset},
author={Thoranna Bender and Simon Moe S{\o}rensen and Alireza Kashani and Kristjan Eldjarn Hjorleifsson and Grethe Hyldig and S{\o}ren Hauberg and Serge Belongie and Frederik Rahb{\ae}k Warburg},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=VeJgZYhT7H}
} | We present WineSensed, a large multimodal wine dataset for studying the relations between visual perception, language, and flavor. The dataset encompasses 897k images of wine labels and 824k reviews of wines curated from the Vivino platform. It has over 350k unique bottlings, annotated with year, region, rating, alcohol percentage, price, and grape composition. We obtained fine-grained flavor annotations on a subset by conducting a wine-tasting experiment with 256 participants who were asked to rank wines based on their similarity in flavor, resulting in more than 5k pairwise flavor distances. We propose a low-dimensional concept embedding algorithm that combines human experience with automatic machine similarity kernels. We demonstrate that this shared concept embedding space improves upon separate embedding spaces for coarse flavor classification (alcohol percentage, country, grape, price, rating) and representing human perception of flavor. | Learning to Taste: A Multimodal Wine Dataset | [
"Thoranna Bender",
"Simon Moe Sørensen",
"Alireza Kashani",
"Kristjan Eldjarn Hjorleifsson",
"Grethe Hyldig",
"Søren Hauberg",
"Serge Belongie",
"Frederik Rahbæk Warburg"
] | Track/Datasets_and_Benchmarks | poster | 2308.16900 | [
"https://github.com/thoranna/learning_to_taste"
] | https://huggingface.co/papers/2308.16900 | 0 | 0 | 0 | 8 | [] | [
"Dakhoo/L2T-NeurIPS-2023"
] | [] | 1 |
null | https://openreview.net/forum?id=VSJotgbPHF | @inproceedings{
k{\"o}pf2023openassistant,
title={OpenAssistant Conversations - Democratizing Large Language Model Alignment},
author={Andreas K{\"o}pf and Yannic Kilcher and Dimitri von R{\"u}tte and Sotiris Anagnostidis and Zhi Rui Tam and Keith Stevens and Abdullah Barhoum and Duc Minh Nguyen and Oliver Stanley and Rich{\'a}rd Nagyfi and Shahul ES and Sameer Suri and David Alexandrovich Glushkov and Arnav Varma Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Julian Mattick},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=VSJotgbPHF}
} | Aligning large language models (LLMs) with human preferences has proven to drastically improve usability and has driven rapid adoption as demonstrated by ChatGPT.
Alignment techniques such as supervised fine-tuning (\textit{SFT}) and reinforcement learning from human feedback (\textit{RLHF}) greatly reduce the required skill and domain knowledge to effectively harness the capabilities of LLMs, increasing their accessibility and utility across various domains.
However, state-of-the-art alignment techniques like \textit{RLHF} rely on high-quality human feedback data, which is expensive to create and often remains proprietary.
In an effort to democratize research on large-scale alignment, we release OpenAssistant Conversations, a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages in 35 different languages, annotated with 461,292 quality ratings, resulting in over 10,000 complete and fully annotated conversation trees.
The corpus is a product of a worldwide crowd-sourcing effort involving over 13,500 volunteers.
Models trained on OpenAssistant Conversations show consistent improvements on standard benchmarks over respective base models.
We release our code and data under a fully permissive licence. | OpenAssistant Conversations - Democratizing Large Language Model Alignment | [
"Andreas Köpf",
"Yannic Kilcher",
"Dimitri von Rütte",
"Sotiris Anagnostidis",
"Zhi Rui Tam",
"Keith Stevens",
"Abdullah Barhoum",
"Duc Minh Nguyen",
"Oliver Stanley",
"Richárd Nagyfi",
"Shahul ES",
"Sameer Suri",
"David Alexandrovich Glushkov",
"Arnav Varma Dantuluri",
"Andrew Maguire",
"Christoph Schuhmann",
"Huu Nguyen",
"Alexander Julian Mattick"
] | Track/Datasets_and_Benchmarks | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=VIRKdeFJIg | @inproceedings{
nguyen2023improving,
title={Improving multimodal datasets with image captioning},
author={Thao Nguyen and Samir Yitzhak Gadre and Gabriel Ilharco and Sewoong Oh and Ludwig Schmidt},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=VIRKdeFJIg}
} | Massive web datasets play a key role in the success of large vision-language models like CLIP and Flamingo. However, the raw web data is noisy, and existing filtering methods to reduce noise often come at the expense of data diversity. Our work focuses on caption quality as one major source of noise, and studies how generated captions can increase the utility of web-scraped datapoints with nondescript text. Through exploring different mixing strategies for raw and generated captions, we outperform the best filtering method proposed by the DataComp benchmark by 2% on ImageNet and 4% on average across 38 tasks, given a candidate pool of 128M image-text pairs. Our best approach is also 2x better at Flickr and MS-COCO retrieval. We then analyze what makes synthetic captions an effective source of text supervision. In experimenting with different image captioning models, we also demonstrate that the performance of a model on standard image captioning benchmarks (e.g., NoCaps CIDEr) is not a reliable indicator of the utility of the captions it generates for multimodal training. Finally, our experiments with using generated captions at DataComp's large scale (1.28B image-text pairs) offer insights into the limitations of synthetic text, as well as the importance of image curation with increasing training data quantity. The synthetic captions used in our experiments are now available on HuggingFace. | Improving multimodal datasets with image captioning | [
"Thao Nguyen",
"Samir Yitzhak Gadre",
"Gabriel Ilharco",
"Sewoong Oh",
"Ludwig Schmidt"
] | Track/Datasets_and_Benchmarks | poster | 2307.10350 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=VH1vxapUTs | @inproceedings{
kondylatos2023mesogeos,
title={Mesogeos: A multi-purpose dataset for data-driven wildfire modeling in the Mediterranean},
author={Spyros Kondylatos and Ioannis Prapas and Gustau Camps-Valls and Ioannis Papoutsis},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=VH1vxapUTs}
} | We introduce Mesogeos, a large-scale multi-purpose dataset for wildfire modeling in the Mediterranean. Mesogeos integrates variables representing wildfire drivers (meteorology, vegetation, human activity) and historical records of wildfire ignitions and burned areas for 17 years (2006-2022). It is designed as a cloud-friendly spatio-temporal dataset, namely a datacube, harmonizing all variables in a grid of 1km x 1km x 1-day resolution. The datacube structure offers opportunities to assess machine learning (ML) usage in various wildfire modeling tasks. We extract two ML-ready datasets that establish distinct tracks to demonstrate this potential: (1) short-term wildfire danger forecasting and (2) final burned area estimation given the point of ignition. We define appropriate metrics and baselines to evaluate the performance of models in each track. By publishing the datacube, along with the code to create the ML datasets and models, we encourage the community to foster the implementation of additional tracks for mitigating the increasing threat of wildfires in the Mediterranean. | Mesogeos: A multi-purpose dataset for data-driven wildfire modeling in the Mediterranean | [
"Spyros Kondylatos",
"Ioannis Prapas",
"Gustau Camps-Valls",
"Ioannis Papoutsis"
] | Track/Datasets_and_Benchmarks | oral | 2306.05144 | [
"https://github.com/orion-ai-lab/mesogeos"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=UvX8QfhfUx | @inproceedings{
koyamada2023pgx,
title={Pgx: Hardware-Accelerated Parallel Game Simulators for Reinforcement Learning},
author={Sotetsu Koyamada and Shinri Okano and Soichiro Nishimori and Yu Murata and Keigo Habara and Haruka Kita and Shin Ishii},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=UvX8QfhfUx}
} | We propose Pgx, a suite of board game reinforcement learning (RL) environments written in JAX and optimized for GPU/TPU accelerators. By leveraging JAX's auto-vectorization and parallelization over accelerators, Pgx can efficiently scale to thousands of simultaneous simulations over accelerators. In our experiments on a DGX-A100 workstation, we discovered that Pgx can simulate RL environments 10-100x faster than existing implementations available in Python. Pgx includes RL environments commonly used as benchmarks in RL research, such as backgammon, chess, shogi, and Go. Additionally, Pgx offers miniature game sets and baseline models to facilitate rapid research cycles. We demonstrate the efficient training of the Gumbel AlphaZero algorithm with Pgx environments. Overall, Pgx provides high-performance environment simulators for researchers to accelerate their RL experiments. Pgx is available at https://github.com/sotetsuk/pgx. | Pgx: Hardware-Accelerated Parallel Game Simulators for Reinforcement Learning | [
"Sotetsu Koyamada",
"Shinri Okano",
"Soichiro Nishimori",
"Yu Murata",
"Keigo Habara",
"Haruka Kita",
"Shin Ishii"
] | Track/Datasets_and_Benchmarks | poster | 2303.17503 | [
"https://github.com/sotetsuk/pgx"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=UgPAaEugH3 | @inproceedings{
lechner2023gigastep,
title={Gigastep - One Billion Steps per Second Multi-agent Reinforcement Learning},
author={Mathias Lechner and Lianhao Yin and Tim Seyde and Tsun-Hsuan Wang and Wei Xiao and Ramin Hasani and Joshua Rountree and Daniela Rus},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=UgPAaEugH3}
} | Multi-agent reinforcement learning (MARL) research is faced with a trade-off: it either uses complex environments requiring large compute resources, which makes it inaccessible to researchers with limited resources, or relies on simpler dynamics for faster execution, which makes the transferability of the results to more realistic tasks challenging. Motivated by these challenges, we present Gigastep, a fully vectorizable, MARL environment implemented in JAX, capable of executing up to one billion environment steps per second on consumer-grade hardware. Its design allows for comprehensive MARL experimentation, including a complex, high-dimensional space defined by 3D dynamics, stochasticity, and partial observations. Gigastep supports both collaborative and adversarial tasks, continuous and discrete action spaces, and provides RGB image and feature vector observations, allowing the evaluation of a wide range of MARL algorithms.
We validate Gigastep's usability through an extensive set of experiments, underscoring its role in widening participation and promoting inclusivity in the MARL research community. | Gigastep - One Billion Steps per Second Multi-agent Reinforcement Learning | [
"Mathias Lechner",
"Lianhao Yin",
"Tim Seyde",
"Tsun-Hsuan Wang",
"Wei Xiao",
"Ramin Hasani",
"Joshua Rountree",
"Daniela Rus"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=URoZHqAohf | @inproceedings{
notin2023proteingym,
title={ProteinGym: Large-Scale Benchmarks for Protein Fitness Prediction and Design},
author={Pascal Notin and Aaron W Kollasch and Daniel Ritter and Lood Van Niekerk and Steffan Paul and Han Spinner and Nathan J Rollins and Ada Shaw and Rose Orenbuch and Ruben Weitzman and Jonathan Frazer and Mafalda Dias and Dinko Franceschi and Yarin Gal and Debora Susan Marks},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=URoZHqAohf}
} | Predicting the effects of mutations in proteins is critical to many applications, from understanding genetic disease to designing novel proteins that can address our most pressing challenges in climate, agriculture and healthcare. Despite a surge in machine learning-based protein models to tackle these questions, an assessment of their respective benefits is challenging due to the use of distinct, often contrived, experimental datasets, and the variable performance of models across different protein families. Addressing these challenges requires scale. To that end we introduce ProteinGym, a large-scale and holistic set of benchmarks specifically designed for protein fitness prediction and design. It encompasses both a broad collection of over 250 standardized deep mutational scanning assays, spanning millions of mutated sequences, as well as curated clinical datasets providing high-quality expert annotations about mutation effects. We devise a robust evaluation framework that combines metrics for both fitness prediction and design, factors in known limitations of the underlying experimental methods, and covers both zero-shot and supervised settings. We report the performance of a diverse set of over 70 high-performing models from various subfields (e.g., alignment-based, inverse folding) into a unified benchmark suite. We open source the corresponding codebase, datasets, MSAs, structures, model predictions and develop a user-friendly website that facilitates data access and analysis. | ProteinGym: Large-Scale Benchmarks for Protein Fitness Prediction and Design | [
"Pascal Notin",
"Aaron W Kollasch",
"Daniel Ritter",
"Lood Van Niekerk",
"Steffan Paul",
"Han Spinner",
"Nathan J Rollins",
"Ada Shaw",
"Ruben Weitzman",
"Jonathan Frazer",
"Mafalda Dias",
"Dinko Franceschi",
"Rose Orenbuch",
"Yarin Gal",
"Debora Susan Marks"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=UQ8pDKcXTq | @inproceedings{
nippani2023graph,
title={Graph Neural Networks for Road Safety Modeling: Datasets and Evaluations for Accident Analysis},
author={Abhinav Nippani and Dongyue Li and Haotian Ju and Haris Koutsopoulos and Hongyang R. Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=UQ8pDKcXTq}
} | We consider the problem of traffic accident analysis on a road network based on road network connections and traffic volume. Previous works have designed various deep-learning methods using historical records to predict traffic accident occurrences. However, there is a lack of consensus on how accurate existing methods are, and a fundamental issue is the lack of public accident datasets for comprehensive evaluations. This paper constructs a large-scale, unified dataset of traffic accident records from official reports of various states in the US, totaling 9 million records, accompanied by road networks and traffic volume reports. Using this new dataset, we evaluate existing deep-learning methods for predicting the occurrence of accidents on road networks. Our main finding is that graph neural networks such as GraphSAGE can accurately predict the number of accidents on roads with less than 22% mean absolute error (relative to the actual count) and whether an accident will occur or not with over 87% AUROC, averaged over states. We achieve these results by using multitask learning to account for cross-state variabilities (e.g., availability of accident labels) and transfer learning to combine traffic volume with accident prediction. Ablation studies highlight the importance of road graph-structural features, amongst other features. Lastly, we discuss the implications of the analysis and develop a package for easily using our new dataset. | Graph Neural Networks for Road Safety Modeling: Datasets and Evaluations for Accident Analysis | [
"Abhinav Nippani",
"Dongyue Li",
"Haotian Ju",
"Haris Koutsopoulos",
"Hongyang R. Zhang"
] | Track/Datasets_and_Benchmarks | poster | 2311.00164 | [
"https://github.com/virtuosoresearch/ml4roadsafety"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=UErNpveP6R | @inproceedings{
wang2023evaluating,
title={Evaluating Open-{QA} Evaluation},
author={Cunxiang Wang and Sirui Cheng and Qipeng Guo and Yuanhao Yue and Bowen Ding and Zhikun Xu and Yidong Wang and Xiangkun Hu and Zheng Zhang and Yue Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=UErNpveP6R}
} | This study focuses on the evaluation of the Open Question Answering (Open-QA) task, which can directly estimate the factuality of large language models (LLMs). Current automatic evaluation methods have shown limitations, indicating that human evaluation still remains the most reliable approach. We introduce a new task, QA Evaluation (QA-Eval) and the corresponding dataset EVOUNA, designed to assess the accuracy of AI-generated answers in relation to standard answers within Open-QA. Our evaluation of these methods utilizes human-annotated results to measure their performance. Specifically, the work investigates methods that show high correlation with human evaluations, deeming them more reliable. We also discuss the pitfalls of current methods and methods to improve LLM-based evaluators. We believe this new QA-Eval task and corresponding dataset EVOUNA will facilitate the development of more effective automatic evaluation tools and prove valuable for future research in this area. All resources are available at https://github.com/wangcunxiang/QA-Eval and it is under the Apache-2.0 License. | Evaluating Open-QA Evaluation | [
"Cunxiang Wang",
"Sirui Cheng",
"Qipeng Guo",
"Yuanhao Yue",
"Bowen Ding",
"Zhikun Xu",
"Yidong Wang",
"Xiangkun Hu",
"Zheng Zhang",
"Yue Zhang"
] | Track/Datasets_and_Benchmarks | poster | 2305.12421 | [
"https://github.com/wangcunxiang/qa-eval"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=UBbm5embIB | @inproceedings{
zhong2023learning,
title={Learning Human Action Recognition Representations Without Real Humans},
author={Howard Zhong and Samarth Mishra and Donghyun Kim and SouYoung Jin and Rameswar Panda and Hilde Kuehne and Leonid Karlinsky and Venkatesh Saligrama and Aude Oliva and Rogerio Feris},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=UBbm5embIB}
} | Pre-training on massive video datasets has become essential to achieve high action recognition performance on smaller downstream datasets. However, most large-scale video datasets contain images of people and hence are accompanied with issues related to privacy, ethics, and data protection, often preventing them from being publicly shared for reproducible research. Existing work has attempted to alleviate these problems by blurring faces, downsampling videos, or training on synthetic data. On the other hand, analysis on the {\em transferability} of privacy-preserving pre-trained models to downstream tasks has been limited. In this work, we study this problem by first asking the question: can we pre-train models for human action recognition with data that does not include real humans? To this end, we present, for the first time, a benchmark that leverages real-world videos with {\em humans removed} and synthetic data containing virtual humans to pre-train a model. We then evaluate the transferability of the representation learned on this data to a diverse set of downstream action recognition benchmarks. Furthermore, we propose a novel pre-training strategy, called Privacy-Preserving MAE-Align, to effectively combine synthetic data and human-removed real data.
Our approach outperforms previous baselines by up to 5\% and closes the performance gap between human and no-human action recognition representations on downstream tasks, for both linear probing and fine-tuning. Our benchmark, code, and models are available at https://github.com/howardzh01/PPMA. | Learning Human Action Recognition Representations Without Real Humans | [
"Howard Zhong",
"Samarth Mishra",
"Donghyun Kim",
"SouYoung Jin",
"Rameswar Panda",
"Hilde Kuehne",
"Leonid Karlinsky",
"Venkatesh Saligrama",
"Aude Oliva",
"Rogerio Feris"
] | Track/Datasets_and_Benchmarks | poster | 2311.06231 | [
"https://github.com/howardzh01/ppma"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=U5uRXlLwnM | @inproceedings{
tang2023gadbench,
title={{GADB}ench: Revisiting and Benchmarking Supervised Graph Anomaly Detection},
author={Jianheng Tang and Fengrui Hua and Ziqi Gao and Peilin Zhao and Jia Li},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=U5uRXlLwnM}
} | With a long history of traditional Graph Anomaly Detection (GAD) algorithms and recently popular Graph Neural Networks (GNNs), it is still not clear (1) how they perform under a standard comprehensive setting, (2) whether GNNs can outperform traditional algorithms such as tree ensembles, and (3) how about their efficiency on large-scale graphs. In response, we introduce GADBench---a benchmark tool dedicated to supervised anomalous node detection in static graphs. GADBench facilitates a detailed comparison across 29 distinct models on ten real-world GAD datasets, encompassing thousands to millions (~6M) nodes. Our main finding is that tree ensembles with simple neighborhood aggregation can outperform the latest GNNs tailored for the GAD task. We shed light on the current progress of GAD, setting a robust groundwork for subsequent investigations in this domain. GADBench is open-sourced at https://github.com/squareRoot3/GADBench. | GADBench: Revisiting and Benchmarking Supervised Graph Anomaly Detection | null | Track/Datasets_and_Benchmarks | poster | 2306.12251 | [
"https://github.com/squareroot3/gadbench"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=Th33sYMCQd | @inproceedings{
kim2023datasets,
title={Datasets and Benchmarks for Nanophotonic Structure and Parametric Design Simulations},
author={Jungtaek Kim and Mingxuan Li and Oliver Hinder and Paul Leu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=Th33sYMCQd}
} | Nanophotonic structures have versatile applications including solar cells, anti-reflective coatings, electromagnetic interference shielding, optical filters, and light emitting diodes. To design and understand these nanophotonic structures, electrodynamic simulations are essential. These simulations enable us to model electromagnetic fields over time and calculate optical properties. In this work, we introduce frameworks and benchmarks to evaluate nanophotonic structures in the context of parametric structure design problems. The benchmarks are instrumental in assessing the performance of optimization algorithms and identifying an optimal structure based on target optical properties. Moreover, we explore the impact of varying grid sizes in electrodynamic simulations, shedding light on how evaluation fidelity can be strategically leveraged in enhancing structure designs. | Datasets and Benchmarks for Nanophotonic Structure and Parametric Design Simulations | [
"Jungtaek Kim",
"Mingxuan Li",
"Oliver Hinder",
"Paul Leu"
] | Track/Datasets_and_Benchmarks | poster | 2310.19053 | [
"https://github.com/jungtaekkim/nanophotonic-structures"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=T5ArxPU3Oq | @inproceedings{
liu2023classical,
title={Classical Simulation of Quantum Circuits: Parallel Environments and Benchmark},
author={Xiao-Yang Liu and Zeliang Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=T5ArxPU3Oq}
} | Google's quantum supremacy announcement has received broad questions from academia and industry due to the debatable estimate of 10,000 years' running time for the classical simulation task on the Summit supercomputer. Has quantum supremacy already come? Or will it come in one or two decades later? To avoid hasty advertisements of quantum supremacy by tech giants or quantum startups and eliminate the cost of dedicating a team to the classical simulation task, we advocate an open-source approach to maintain a trustable benchmark performance. In this paper, we take a reinforcement learning approach for the classical simulation of quantum circuits and demonstrate its great potential by reporting an estimated simulation time of less than 4 days, a speedup of 5.40x over the state-of-the-art method. Specifically, we formulate the classical simulation task as a tensor network contraction ordering problem using the K-spin Ising model and employ a novel Hamiltonian-based reinforcement learning algorithm. Then, we establish standard criteria to evaluate the performance of classical simulation of quantum circuits. We develop a dozen of massively parallel environments to simulate quantum circuits. We open-source our parallel gym environments and benchmarks. We hope the AI/ML community and quantum physics community will collaborate to maintain reference curves for validating an unequivocal first demonstration of empirical quantum supremacy. | Classical Simulation of Quantum Circuits: Parallel Environments and Benchmark | [
"Xiao-Yang Liu",
"Zeliang Zhang"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=T3FKjN4p8d | @inproceedings{
zhang2023intelligent,
title={Intelligent Knee Sleeves: A Real-time Multimodal Dataset for 3D Lower Body Motion Estimation Using Smart Textile},
author={Wenwen Zhang and Arvin Tashakori and Zenan Jiang and Amir Servati and Harishkumar Narayana and Saeid Soltanian and Rou Yi Yeap and Menghan Ma and Lauren Toy and Peyman Servati},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=T3FKjN4p8d}
} | The kinematics of human movements and locomotion are closely linked to the activation and contractions of muscles. To investigate this, we present a multimodal dataset with benchmarks collected using a novel pair of Intelligent Knee Sleeves (Texavie MarsWear Knee Sleeves) for human pose estimation. Our system utilizes synchronized datasets that comprise time-series data from the Knee Sleeves and the corresponding ground truth labels from visualized motion capture camera system. We employ these to generate 3D human models solely based on the wearable data of individuals performing different activities. We demonstrate the effectiveness of this camera-free system and machine learning algorithms in the assessment of various movements and exercises, including extension to unseen exercises and individuals. The results show an average error of 7.21 degrees across all eight lower body joints when compared to the ground truth, indicating the effectiveness and reliability of the Knee Sleeve system for the prediction of different lower body joints beyond knees. The results enable human pose estimation in a seamless manner without being limited by visual occlusion or the field of view of cameras. Our results show the potential of multimodal wearable sensing in a variety of applications from home fitness to sports, healthcare, and physical rehabilitation focusing on pose and movement estimation. | Intelligent Knee Sleeves: A Real-time Multimodal Dataset for 3D Lower Body Motion Estimation Using Smart Textile | [
"Wenwen Zhang",
"Arvin Tashakori",
"Zenan Jiang",
"Amir Servati",
"Harishkumar Narayana",
"Saeid Soltanian",
"Rou Yi Yeap",
"Menghan Ma",
"Lauren Toy",
"Peyman Servati"
] | Track/Datasets_and_Benchmarks | poster | 2311.12829 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=Sq3CLKJeiz | @inproceedings{
deitke2023objaversexl,
title={Objaverse-{XL}: A Universe of 10M+ 3D Objects},
author={Matt Deitke and Ruoshi Liu and Matthew Wallingford and Huong Ngo and Oscar Michel and Aditya Kusupati and Alan Fan and Christian Laforte and Vikram Voleti and Samir Yitzhak Gadre and Eli VanderBilt and Aniruddha Kembhavi and Carl Vondrick and Georgia Gkioxari and Kiana Ehsani and Ludwig Schmidt and Ali Farhadi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=Sq3CLKJeiz}
} | Natural language processing and 2D vision models have attained remarkable proficiency on many tasks primarily by escalating the scale of training data. However, 3D vision tasks have not seen the same progress, in part due to the challenges of acquiring high-quality 3D data. In this work, we present Objaverse-XL, a dataset of over 10 million 3D objects. Our compilation comprises deduplicated 3D objects from a diverse set of sources, including manually designed objects, photogrammetry scans of landmarks and everyday items, and professional scans of historic and antique artifacts. Representing the largest scale and diversity in the realm of 3D datasets, Objaverse-XL enables significant new possibilities for 3D vision. Our experiments demonstrate the vast improvements enabled with the scale provided by Objaverse-XL. We show that by training Zero123 on novel view synthesis, utilizing over 100 million multi-view rendered images, we achieve strong zero-shot generalization abilities. We hope that releasing Objaverse-XL will enable further innovations in the field of 3D vision at scale. | Objaverse-XL: A Universe of 10M+ 3D Objects | [
"Matt Deitke",
"Ruoshi Liu",
"Matthew Wallingford",
"Huong Ngo",
"Oscar Michel",
"Aditya Kusupati",
"Alan Fan",
"Christian Laforte",
"Vikram Voleti",
"Samir Yitzhak Gadre",
"Eli VanderBilt",
"Aniruddha Kembhavi",
"Carl Vondrick",
"Georgia Gkioxari",
"Kiana Ehsani",
"Ludwig Schmidt",
"Ali Farhadi"
] | Track/Datasets_and_Benchmarks | poster | 2307.05663 | [
""
] | https://huggingface.co/papers/2307.05663 | 1 | 1 | 1 | 17 | [] | [
"allenai/objaverse-xl",
"tiange/Cap3D"
] | [] | 1 |
null | https://openreview.net/forum?id=SS3CK3yx5Z | @inproceedings{
fang2023does,
title={Does progress on ImageNet transfer to real-world datasets?},
author={Alex Fang and Simon Kornblith and Ludwig Schmidt},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=SS3CK3yx5Z}
} | Does progress on ImageNet transfer to real-world datasets? We investigate this question by evaluating ImageNet pre-trained models with varying accuracy (57% - 83%) on six practical image classification datasets. In particular, we study datasets collected with the goal of solving real-world tasks (e.g., classifying images from camera traps or satellites), as opposed to web-scraped benchmarks collected for comparing models. On multiple datasets, models with higher ImageNet accuracy do not consistently yield performance improvements. For certain tasks, interventions such as data augmentation improve performance even when architectures do not. We hope that future benchmarks will include more diverse datasets to encourage a more comprehensive approach to improving learning algorithms. | Does progress on ImageNet transfer to real-world datasets? | [
"Alex Fang",
"Simon Kornblith",
"Ludwig Schmidt"
] | Track/Datasets_and_Benchmarks | poster | 2301.04644 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=SNznC08OOO | @inproceedings{
kong2023robodepth,
title={RoboDepth: Robust Out-of-Distribution Depth Estimation under Corruptions},
author={Lingdong Kong and Shaoyuan Xie and Hanjiang Hu and Lai Xing Ng and Benoit R Cottereau and Wei Tsang Ooi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=SNznC08OOO}
} | Depth estimation from monocular images is pivotal for real-world visual perception systems. While current learning-based depth estimation models train and test on meticulously curated data, they often overlook out-of-distribution (OoD) situations. Yet, in practical settings -- especially safety-critical ones like autonomous driving -- common corruptions can arise. Addressing this oversight, we introduce a comprehensive robustness test suite, RoboDepth, encompassing 18 corruptions spanning three categories: i) weather and lighting conditions; ii) sensor failures and movement; and iii) data processing anomalies. We subsequently benchmark 42 depth estimation models across indoor and outdoor scenes to assess their resilience to these corruptions. Our findings underscore that, in the absence of a dedicated robustness evaluation framework, many leading depth estimation models may be susceptible to typical corruptions. We delve into design considerations for crafting more robust depth estimation models, touching upon pre-training, augmentation, modality, model capacity, and learning paradigms. We anticipate our benchmark will establish a foundational platform for advancing robust OoD depth estimation. | RoboDepth: Robust Out-of-Distribution Depth Estimation under Corruptions | [
"Lingdong Kong",
"Shaoyuan Xie",
"Hanjiang Hu",
"Lai Xing Ng",
"Benoit R Cottereau",
"Wei Tsang Ooi"
] | Track/Datasets_and_Benchmarks | poster | 2310.15171 | [
"https://github.com/ldkong1205/robodepth"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=SKN2hflBIZ | @inproceedings{
lauren{\c{c}}on2023obelics,
title={{OBELICS}: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents},
author={Hugo Lauren{\c{c}}on and Lucile Saulnier and Leo Tronchon and Stas Bekman and Amanpreet Singh and Anton Lozhkov and Thomas Wang and Siddharth Karamcheti and Alexander M Rush and Douwe Kiela and Matthieu Cord and Victor Sanh},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=SKN2hflBIZ}
} | Large multimodal models trained on natural documents, which interleave images and text, outperform models trained on image-text pairs on various multimodal benchmarks. However, the datasets used to train these models have not been released, and the collection process has not been fully specified. We introduce the OBELICS dataset, an open web-scale filtered dataset of interleaved image-text documents comprising 141 million web pages extracted from Common Crawl, 353 million associated images, and 115 billion text tokens. We describe the dataset creation process, present comprehensive filtering rules, and provide an analysis of the dataset's content. To show the viability of OBELICS, we train on the dataset vision and language models of 9 and 80 billion parameters, IDEFICS-9B and IDEFICS, and obtain competitive performance on different multimodal benchmarks. We release our dataset, models and code. | OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents | [
"Hugo Laurençon",
"Lucile Saulnier",
"Leo Tronchon",
"Stas Bekman",
"Amanpreet Singh",
"Anton Lozhkov",
"Thomas Wang",
"Siddharth Karamcheti",
"Alexander M Rush",
"Douwe Kiela",
"Matthieu Cord",
"Victor Sanh"
] | Track/Datasets_and_Benchmarks | poster | 2306.16527 | [
"https://github.com/huggingface/obelics"
] | https://huggingface.co/papers/2306.16527 | 10 | 45 | 4 | 12 | [
"HuggingFaceM4/idefics2-8b",
"HuggingFaceM4/idefics-80b-instruct",
"HuggingFaceM4/Idefics3-8B-Llama3",
"HuggingFaceM4/idefics-9b-instruct",
"HuggingFaceM4/idefics2-8b-chatty",
"HuggingFaceM4/idefics-80b",
"HuggingFaceM4/idefics-9b",
"HuggingFaceM4/idefics2-8b-base",
"Trelis/idefics2-8b-chatty-bf16",
"Reverb/Idefics2-8b-docVQA-finetuned",
"turing-motors/Heron-Idefics2-8B-v0.1",
"areegtarek/idefics-9b-instruct-all",
"huz-relay/idefics2-8b-ocr",
"peterpeter8585/ai2"
] | [
"HuggingFaceM4/OBELICS"
] | [
"HuggingFaceM4/idefics_playground",
"HuggingFaceM4/idefics2_playground",
"HuggingFaceM4/idefics-8b",
"HuggingFaceM4/AI_Meme_Generator",
"huggingface/data-measurements-tool",
"HuggingFaceM4/idefics3",
"thobuiq/GPT-4o",
"HuggingFaceM4/ai_dad_jokes",
"HuggingFaceM4/ai_raven",
"openskyml/pigeon-chat",
"HuggingFaceM4/IDEFICS-bias-eval",
"EPFL-VILAB/ViPer",
"Leyo/AI_Meme_Generator",
"aliabid94/idefics_playground",
"HuggingFaceM4/IDEFICS_Data_Measurement_Tool",
"arad1367/Marketing_Vision_HuggingFaceM4_idefics3",
"awacke1/idefics_and_chatty",
"Saee/vQA-exploration",
"Omnibus/idefics_playground",
"acecalisto3/IDEfix",
"dwb2023/model_explorer2",
"johann22/chat-diffusion",
"ImagineAI-Real/idefics_playground",
"FallnAI/CHATTERBOX",
"AchilleDev/perpetron",
"Statical/STC-ITT",
"pettah/PETTAHAI-Chatgpt4o-Demo",
"Omnibus/idefics_playground_mod",
"Rooni/OpenGPT-4o",
"Cesarcr/GPT-4o",
"m-ric/rate_coolness",
"fardinkai/GPT-4o",
"sherrybabe1978/OpenGPT-4o",
"HuggingFaceH4/idefics2-8b-playground",
"dwb2023/model_explorer4",
"IncinerateZ/chatbot",
"Zaherrr/KG_transform",
"cocktailpeanut/idefics-8b",
"marc-mao/idefics2_playground",
"johann22/idefics-9b-ft-describe-diffusion-mj",
"jkorstad/idefics3",
"dawood/idefics2_playground",
"LuxOAI/NEARVIDIA-GPT-4o",
"smothiki/idefics_playground",
"peterpeter8585/abc",
"Rahulhuggingface/AAnh",
"acecalisto3/IDE-play",
"johann22/idefics_playground",
"ignitariumcloud/idefics2",
"pettah/pettahaiGPT40",
"johann22/chat-diffusion-describe",
"taronsarkisyan/GPT-4o",
"vijaykumar85601/idefics2_playground",
"johann22/idefics-stream",
"Stable-Human/idefics2_playground",
"arptakash/GPT-4o",
"johann22/inference-explorer",
"LuxOAI/LUXX",
"Tamqeen/Chatbot-Llama",
"ysharma/dummy_123",
"awacke1/idefics2_playground-demo",
"NekonekoID/GPT-4o",
"steadilyai/idefics",
"ggilabert/idefics2_playground",
"minhdang/OpenGPT-4o",
"Omnibus/micro-agent-new-test",
"vaikl/Owngpt",
"LuxOAI/OpenGPT-4o",
"Suniilkumaar/AI_Meme_Generator",
"figh8back/fynd-idefics2-bb",
"ThinkAI-Morocco/KYA_idefics2_yalla",
"jbilcke-hf/idefics-server",
"amanavinash/GPT-4o",
"sapan3012/OpenGPT-4o",
"ysharma/dummy_m4",
"jcheng5/multimodal",
"Zafer01/OpenGPT4",
"Ezi/occurrences_test",
"MasterDee/OpenGPH-4o",
"xi0v/Omni4All",
"cyberdan2002/AI_Meme_Generator",
"Mandeep20/GPT-4o",
"alexkueck/TestInferenceAPI",
"mebinjo/OpenGPT-4o",
"sumitmeharwade/visionmodel",
"bala0o8o0/hexoticlabs-OpenGPT-4o",
"tnzly/TAI.o",
"AnViFedotov/OpenGPT-4o",
"iiced/OpenGPT-4o",
"Losthack777/mohamedsalem",
"SalmanFaroz/idefics8b-docvqa",
"Losthack777/OpenGPT-4o",
"Jayanath1987/JBL-OpenGPT-4o",
"Satyam-Singh/OpenAi_GPT_4-o",
"jihadzakki/idefics2_deploy",
"Jayanath1987/OpenGPT-4o",
"Anon0777/chat-app-model-hf",
"anjanprasad112/OpenGPT",
"ka1kuk/fastapi-demo",
"oscarwang2/OPENCHAT"
] | 1 |
null | https://openreview.net/forum?id=SEU9m9NReo | @inproceedings{
montana-brown2023saramis,
title={{SARAMIS}: Simulation Assets for Robotic Assisted and Minimally Invasive Surgery},
author={Nina Montana-Brown and Shaheer U. Saeed and Ahmed Abdulaal and Thomas Dowrick and Yakup Kilic and Sophie Wilkinson and Jack Gao and Meghavi Mashar and Chloe He and Alkisti Stavropoulou and Emma L Thomson and Zachary Baum and Simone Foti and Brian Davidson and Yipeng Hu and Matthew John Clarkson},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=SEU9m9NReo}
} | Minimally-invasive surgery (MIS) and robot-assisted minimally invasive (RAMIS) surgery offer well-documented benefits to patients such as reduced post-operative pain and shorter hospital stays.
However, the automation of MIS and RAMIS through the use of AI has been slow due to difficulties in data acquisition and curation, partially caused by the ethical considerations of training, testing and deploying AI models in medical environments.
We introduce \texttt{SARAMIS}, the first large-scale dataset of anatomically derived 3D rendering assets of the human abdominal anatomy.
Using previously existing, open-source CT datasets of the human anatomy, we derive novel 3D meshes, tetrahedral volumes, textures and diffuse maps for over 104 different anatomical targets in the human body, representing the largest, open-source dataset of 3D rendering assets for synthetic simulation of vision tasks in MIS+RAMIS, increasing the availability of openly available 3D meshes in the literature by three orders of magnitude.
We supplement our dataset with a series of GPU-enabled rendering environments, which can be used to generate datasets for realistic MIS/RAMIS tasks.
Finally, we present an example of the use of \texttt{SARAMIS} assets for an autonomous navigation task in colonoscopy from CT abdomen-pelvis scans for the first time in the literature.
\texttt{SARAMIS} is publicly made available at https://github.com/NMontanaBrown/saramis/, with assets released under a CC-BY-NC-SA license. | SARAMIS: Simulation Assets for Robotic Assisted and Minimally Invasive Surgery | [
"Nina Montana-Brown",
"Shaheer U. Saeed",
"Ahmed Abdulaal",
"Thomas Dowrick",
"Yakup Kilic",
"Sophie Wilkinson",
"Jack Gao",
"Meghavi Mashar",
"Chloe He",
"Alkisti Stavropoulou",
"Emma L Thomson",
"Zachary Baum",
"Simone Foti",
"Brian Davidson",
"Yipeng Hu",
"Matthew John Clarkson"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=SDJ3kYpJFX | @inproceedings{
lanzend{\"o}rfer2023discom,
title={{DISCO}-10M: A Large-Scale Music Dataset},
author={Luca A Lanzend{\"o}rfer and Florian Gr{\"o}tschla and Emil Funke and Roger Wattenhofer},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=SDJ3kYpJFX}
} | Music datasets play a crucial role in advancing research in machine learning for music. However, existing music datasets suffer from limited size, accessibility, and lack of audio resources. To address these shortcomings, we present DISCO-10M, a novel and extensive music dataset that surpasses the largest previously available music dataset by an order of magnitude. To ensure high-quality data, we implement a multi-stage filtering process. This process incorporates similarities based on textual descriptions and audio embeddings. Moreover, we provide precomputed CLAP embeddings alongside DISCO-10M, facilitating direct application on various downstream tasks. These embeddings enable efficient exploration of machine learning applications on the provided data. With DISCO-10M, we aim to democratize and facilitate new research to help advance the development of novel machine learning models for music: https://huggingface.co/DISCOX | DISCO-10M: A Large-Scale Music Dataset | [
"Luca A Lanzendörfer",
"Florian Grötschla",
"Emil Funke",
"Roger Wattenhofer"
] | Track/Datasets_and_Benchmarks | poster | 2306.13512 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=RwNIqaNOgd | @inproceedings{
yuan2023rlvigen,
title={{RL}-ViGen: A Reinforcement Learning Benchmark for Visual Generalization},
author={Zhecheng Yuan and Sizhe Yang and Pu Hua and Can Chang and Kaizhe Hu and Huazhe Xu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=RwNIqaNOgd}
} | Visual Reinforcement Learning (Visual RL), coupled with high-dimensional observations, has consistently confronted the long-standing challenge of out-of-distribution generalization. Despite the focus on algorithms aimed at resolving visual generalization problems, we argue that the devil is in the existing benchmarks as they are restricted to isolated tasks and generalization categories, undermining a comprehensive evaluation of agents' visual generalization capabilities. To bridge this gap, we introduce RL-ViGen: a novel **R**einforcement **L**earning Benchmark for **Vi**sual **Gen**eralization, which contains diverse tasks and a wide spectrum of generalization types, thereby facilitating the derivation of more reliable conclusions. Furthermore, RL-ViGen incorporates the latest generalization visual RL algorithms into a unified framework, under which the experiment results indicate that no single existing algorithm has prevailed universally across tasks. Our aspiration is that RL-ViGen will serve as a catalyst in this area, and lay a foundation for the future creation of universal visual generalization RL agents suitable for real-world scenarios. Access to our code and implemented algorithms is provided at https://gemcollector.github.io/RL-ViGen/. | RL-ViGen: A Reinforcement Learning Benchmark for Visual Generalization | [
"Zhecheng Yuan",
"Sizhe Yang",
"Pu Hua",
"Can Chang",
"Kaizhe Hu",
"Huazhe Xu"
] | Track/Datasets_and_Benchmarks | poster | 2307.10224 | [
"https://github.com/gemcollector/rl-vigen"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=RgdGkPRQ03 | @inproceedings{
gerard2023wildfirespreadts,
title={WildfireSpread{TS}: A dataset of multi-modal time series for wildfire spread prediction},
author={Sebastian Gerard and Yu Zhao and Josephine Sullivan},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=RgdGkPRQ03}
} | We present a multi-temporal, multi-modal remote-sensing dataset for predicting how active wildfires will spread at a resolution of 24 hours. The dataset consists of 13607 images across 607 fire events in the United States from January 2018 to October 2021. For each fire event, the dataset contains a full time series of daily observations, containing detected active fires and variables related to fuel, topography and weather conditions. The dataset is challenging due to: a) its inputs being multi-temporal, b) the high number of 23 multi-modal input channels, c) highly imbalanced labels and d) noisy labels, due to smoke, clouds, and inaccuracies in the active fire detection. The underlying complexity of the physical processes adds to these challenges. Compared to existing public datasets in this area, WildfireSpreadTS allows for multi-temporal modeling of spreading wildfires, due to its time series structure. Furthermore, we provide additional input modalities and a high spatial resolution of 375m for the active fire maps. We publish this dataset to encourage further research on this important task with multi-temporal, noise-resistant or generative methods, uncertainty estimation or advanced optimization techniques that deal with the high-dimensional input space. | WildfireSpreadTS: A dataset of multi-modal time series for wildfire spread prediction | [
"Sebastian Gerard",
"Yu Zhao",
"Josephine Sullivan"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Rep7BB4vDa | @inproceedings{
he2023species,
title={Species196: A One-Million Semi-supervised Dataset for Fine-grained Species Recognition},
author={Wei He and Kai Han and Ying Nie and Chengcheng Wang and Yunhe Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=Rep7BB4vDa}
} | The development of foundation vision models has pushed the general visual recognition to a high level, but cannot well address the fine-grained recognition in specialized domain such as invasive species classification. Identifying and managing invasive species has strong social and ecological value. Currently, most invasive species datasets are limited in scale and cover a narrow range of species, which restricts the development of deep-learning based invasion biometrics systems. To fill the gap of this area, we introduced Species196, a large-scale semi-supervised dataset of 196-category invasive species. It collects over 19K images with expert-level accurate annotations (Species196-L), and 1.2M unlabeled images of invasive species (Species196-U). The dataset provides four experimental settings for benchmarking the existing models and algorithms, namely, supervised learning, semi-supervised learning and self-supervised pretraining. To facilitate future research on these four learning paradigms, we conduct an empirical study of the representative methods on the introduced dataset. The dataset will be made publicly available at https://species-dataset.github.io/. | Species196: A One-Million Semi-supervised Dataset for Fine-grained Species Recognition | [
"Wei He",
"Kai Han",
"Ying Nie",
"Chengcheng Wang",
"Yunhe Wang"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=RZJEkLFlPx | @inproceedings{
nguyen2023climatelearn,
title={ClimateLearn: Benchmarking Machine Learning for Weather and Climate Modeling},
author={Tung Nguyen and Jason Kyle Jewik and Hritik Bansal and Prakhar Sharma and Aditya Grover},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=RZJEkLFlPx}
} | Modeling weather and climate is an essential endeavor to understand the near- and long-term impacts of climate change, as well as to inform technology and policymaking for adaptation and mitigation efforts. In recent years, there has been a surging interest in applying data-driven methods based on machine learning for solving core problems such as weather forecasting and climate downscaling. Despite promising results, much of this progress has been impaired due to the lack of large-scale, open-source efforts for reproducibility, resulting in the use of inconsistent or underspecified datasets, training setups, and evaluations by both domain scientists and artificial intelligence researchers. We introduce ClimateLearn, an open-source PyTorch library that vastly simplifies the training and evaluation of machine learning models for data-driven climate science. ClimateLearn consists of holistic pipelines for dataset processing (e.g., ERA5, CMIP6, PRISM), implementing state-of-the-art deep learning models (e.g., Transformers, ResNets), and quantitative and qualitative evaluation for standard weather and climate modeling tasks. We supplement these functionalities with extensive documentation, contribution guides, and quickstart tutorials to expand access and promote community growth. We have also performed comprehensive forecasting and downscaling experiments to showcase the capabilities and key features of our library. To our knowledge, ClimateLearn is the first large-scale, open-source effort for bridging research in weather and climate modeling with modern machine learning systems. Our library is available publicly at https://github.com/aditya-grover/climate-learn. | ClimateLearn: Benchmarking Machine Learning for Weather and Climate Modeling | [
"Tung Nguyen",
"Jason Kyle Jewik",
"Hritik Bansal",
"Prakhar Sharma",
"Aditya Grover"
] | Track/Datasets_and_Benchmarks | poster | 2307.01909 | [
"https://github.com/aditya-grover/climate-learn"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=RADrFxYqIH | @inproceedings{
mayo2023how,
title={How hard are computer vision datasets? Calibrating dataset difficulty to viewing time},
author={David Mayo and Jesse Cummings and Xinyu Lin and Dan Gutfreund and Boris Katz and Andrei Barbu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=RADrFxYqIH}
} | Humans outperform object recognizers despite the fact that models perform well on current datasets, including those explicitly designed to challenge machines with debiased images or distribution shift. This problem persists, in part, because we have no guidance on the absolute difficulty of an image or dataset making it hard to objectively assess progress toward human-level performance, to cover the range of human abilities, and to increase the challenge posed by a dataset. We develop a dataset difficulty metric MVT, Minimum Viewing Time, that addresses these three problems. Subjects view an image that flashes on screen and then classify the object in the image. Images that require brief flashes to recognize are easy, those which require seconds of viewing are hard. We compute the ImageNet and ObjectNet image difficulty distribution, which we find significantly undersamples hard images. Nearly 90% of current benchmark performance is derived from images that are easy for humans. Rather than hoping that we will make harder datasets, we can for the first time objectively guide dataset difficulty during development. We can also subset recognition performance as a function of difficulty: model performance drops precipitously while human performance remains stable. Difficulty provides a new lens through which to view model performance, one which uncovers new scaling laws: vision-language models stand out as being the most robust and human-like while all other techniques scale poorly. We release tools to automatically compute MVT, along with image sets which are tagged by difficulty. Objective image difficulty has practical applications – one can measure how hard a test set is before deploying a real-world system – and scientific applications such as discovering the neural correlates of image difficulty and enabling new object recognition techniques that eliminate the benchmark-vs- real-world performance gap. | How hard are computer vision datasets? Calibrating dataset difficulty to viewing time | [
"David Mayo",
"Jesse Cummings",
"Xinyu Lin",
"Dan Gutfreund",
"Boris Katz",
"Andrei Barbu"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=Qf8uzIT1OK | @inproceedings{
andrews2023ethical,
title={Ethical Considerations for Responsible Data Curation},
author={Jerone Andrews and Dora Zhao and William Thong and Apostolos Modas and Orestis Papakyriakopoulos and Alice Xiang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=Qf8uzIT1OK}
} | Human-centric computer vision (HCCV) data curation practices often neglect privacy and bias concerns, leading to dataset retractions and unfair models. HCCV datasets constructed through nonconsensual web scraping lack crucial metadata for comprehensive fairness and robustness evaluations. Current remedies are post hoc, lack persuasive justification for adoption, or fail to provide proper contextualization for appropriate application. Our research focuses on proactive, domain-specific recommendations, covering purpose, privacy and consent, and diversity, for curating HCCV evaluation datasets, addressing privacy and bias concerns. We adopt an ante hoc reflective perspective, drawing from current practices, guidelines, dataset withdrawals, and audits, to inform our considerations and recommendations. | Ethical Considerations for Responsible Data Curation | [
"Jerone Andrews",
"Dora Zhao",
"William Thong",
"Apostolos Modas",
"Orestis Papakyriakopoulos",
"Alice Xiang"
] | Track/Datasets_and_Benchmarks | oral | 2302.03629 | [
"https://github.com/sonyresearch/responsible_data_curation"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=QXTjde8evS | @inproceedings{
aversa2023diffinfinite,
title={DiffInfinite: Large Mask-Image Synthesis via Parallel Random Patch Diffusion in Histopathology},
author={Marco Aversa and Gabriel Nobis and Miriam H{\"a}gele and Kai Standvoss and Mihaela Chirica and Roderick Murray-Smith and Ahmed Alaa and Lukas Ruff and Daniela Ivanova and Wojciech Samek and Frederick Klauschen and Bruno Sanguinetti and Luis Oala},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=QXTjde8evS}
} | We present DiffInfinite, a hierarchical diffusion model that generates arbitrarily large histological images while preserving long-range correlation structural information. Our approach first generates synthetic segmentation masks, subsequently used as conditions for the high-fidelity generative diffusion process. The proposed sampling method can be scaled up to any desired image size while only requiring small patches for fast training. Moreover, it can be parallelized more efficiently than previous large-content generation methods while avoiding tiling artifacts. The training leverages classifier-free guidance to augment a small, sparsely annotated dataset with unlabelled data. Our method alleviates unique challenges in histopathological imaging practice: large-scale information, costly manual annotation, and protective data handling. The biological plausibility of DiffInfinite data is evaluated in a survey by ten experienced pathologists as well as a downstream classification and segmentation task. Samples from the model score strongly on anti-copying metrics which is relevant for the protection of patient data. | DiffInfinite: Large Mask-Image Synthesis via Parallel Random Patch Diffusion in Histopathology | [
"Marco Aversa",
"Gabriel Nobis",
"Miriam Hägele",
"Kai Standvoss",
"Mihaela Chirica",
"Roderick Murray-Smith",
"Ahmed Alaa",
"Lukas Ruff",
"Daniela Ivanova",
"Wojciech Samek",
"Frederick Klauschen",
"Bruno Sanguinetti",
"Luis Oala"
] | Track/Datasets_and_Benchmarks | oral | 2306.13384 | [
"https://github.com/marcoaversa/diffinfinite"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=QEDjXv9OyY | @inproceedings{
uthus2023youtubeasl,
title={YouTube-{ASL}: A Large-Scale, Open-Domain American Sign Language-English Parallel Corpus},
author={David Uthus and Garrett Tanzer and Manfred Georg},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=QEDjXv9OyY}
} | Machine learning for sign languages is bottlenecked by data. In this paper, we present YouTube-ASL, a large-scale, open-domain corpus of American Sign Language (ASL) videos and accompanying English captions drawn from YouTube. With ~1000 hours of videos and >2500 unique signers, YouTube-ASL is ~3x as large and has ~10x as many unique signers as the largest prior ASL dataset. We train baseline models for ASL to English translation on YouTube-ASL and evaluate them on How2Sign, where we achieve a new fine-tuned state of the art of 12.397 BLEU and, for the first time, nontrivial zero-shot results. | YouTube-ASL: A Large-Scale, Open-Domain American Sign Language-English Parallel Corpus | [
"David Uthus",
"Garrett Tanzer",
"Manfred Georg"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | https://huggingface.co/papers/2306.15162 | 0 | 0 | 0 | 3 | [] | [
"Sigurdur/icelandic-sign-language"
] | [] | 1 |
|
null | https://openreview.net/forum?id=Pk2x7FPuZ4 | @inproceedings{
bae2023ehrxqa,
title={{EHRXQA}: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images},
author={Seongsu Bae and Daeun Kyung and Jaehee Ryu and Eunbyeol Cho and Gyubok Lee and Sunjun Kweon and Jungwoo Oh and Lei Ji and Eric I-Chao Chang and Tackeun Kim and Edward Choi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=Pk2x7FPuZ4}
} | Electronic Health Records (EHRs), which contain patients' medical histories in various multi-modal formats, often overlook the potential for joint reasoning across imaging and table modalities underexplored in current EHR Question Answering (QA) systems. In this paper, we introduce EHRXQA, a novel multi-modal question answering dataset combining structured EHRs and chest X-ray images. To develop our dataset, we first construct two uni-modal resources: 1) The MIMIC-CXR-VQA dataset, our newly created medical visual question answering (VQA) benchmark, specifically designed to augment the imaging modality in EHR QA, and 2) EHRSQL (MIMIC-IV), a refashioned version of a previously established table-based EHR QA dataset. By integrating these two uni-modal resources, we successfully construct a multi-modal EHR QA dataset that necessitates both uni-modal and cross-modal reasoning. To address the unique challenges of multi-modal questions within EHRs, we propose a NeuralSQL-based strategy equipped with an external VQA API. This pioneering endeavor enhances engagement with multi-modal EHR sources and we believe that our dataset can catalyze advances in real-world medical scenarios such as clinical decision-making and research. EHRXQA is available at https://github.com/baeseongsu/ehrxqa. | EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images | [
"Seongsu Bae",
"Daeun Kyung",
"Jaehee Ryu",
"Eunbyeol Cho",
"Gyubok Lee",
"Sunjun Kweon",
"Jungwoo Oh",
"Lei Ji",
"Eric I-Chao Chang",
"Tackeun Kim",
"Edward Choi"
] | Track/Datasets_and_Benchmarks | poster | 2310.18652 | [
"https://github.com/baeseongsu/mimic-cxr-vqa"
] | https://huggingface.co/papers/2310.18652 | 3 | 1 | 0 | 11 | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=PWLGrvoqiR | @inproceedings{
chaves2023rales,
title={Ra{LE}s: a Benchmark for Radiology Language Evaluations},
author={Juan Manuel Zambrano Chaves and Nandita Bhaskhar and Maayane Attias and Jean-Benoit Delbrouck and Daniel Rubin and Andreas Markus Loening and Curtis Langlotz and Akshay S Chaudhari},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=PWLGrvoqiR}
} | The radiology report is the main form of communication between radiologists and other clinicians. Prior work in natural language processing in radiology reports has shown the value of developing methods tailored for individual tasks such as identifying reports with critical results or disease detection. Meanwhile, English and biomedical natural language understanding benchmarks such as the General Language Understanding and Evaluation as well as Biomedical Language Understanding and Reasoning Benchmark have motivated the development of models that can be easily adapted to address many tasks in those domains. Here, we characterize the radiology report as a distinct domain and introduce RaLEs, the Radiology Language Evaluations, as a benchmark for natural language understanding and generation in radiology. RaLEs is comprised of seven natural language understanding and generation evaluations including the extraction of anatomical and disease entities and their relations, procedure selection, and report summarization. We characterize the performance of models designed for the general, biomedical, clinical and radiology domains across these tasks. We find that advances in the general and biomedical domains do not necessarily translate to radiology, and that improved models from the general domain can perform comparably to smaller clinical-specific models. The limited performance of existing pre-trained models on RaLEs highlights the opportunity to improve domain-specific self-supervised models for natural language processing in radiology. We propose RaLEs as a benchmark to promote and track the development of such domain-specific radiology language models. | RaLEs: a Benchmark for Radiology Language Evaluations | [
"Juan Manuel Zambrano Chaves",
"Nandita Bhaskhar",
"Maayane Attias",
"Jean-Benoit Delbrouck",
"Daniel Rubin",
"Andreas Markus Loening",
"Curtis Langlotz",
"Akshay S Chaudhari"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=PFfmfspm28 | @inproceedings{
chevalier-boisvert2023minigrid,
title={Minigrid \& Miniworld: Modular \& Customizable Reinforcement Learning Environments for Goal-Oriented Tasks},
author={Maxime Chevalier-Boisvert and Bolun Dai and Mark Towers and Rodrigo De Lazcano Perez-Vicente and Lucas Willems and Salem Lahlou and Suman Pal and Pablo Samuel Castro and J K Terry},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=PFfmfspm28}
} | We present the Minigrid and Miniworld libraries which provide a suite of goal-oriented 2D and 3D environments. The libraries were explicitly created with a minimalistic design paradigm to allow users to rapidly develop new environments for a wide range of research-specific needs. As a result, both have received widescale adoption by the RL community, facilitating research in a wide range of areas. In this paper, we outline the design philosophy, environment details, and their world generation API. We also showcase the additional capabilities brought by the unified API between Minigrid and Miniworld through case studies on transfer learning (for both RL agents and humans) between the different observation spaces. The source code of Minigrid and Miniworld can be found at https://github.com/Farama-Foundation/Minigrid and https://github.com/Farama-Foundation/Miniworld along with their documentation at https://minigrid.farama.org/ and https://miniworld.farama.org/. | Minigrid & Miniworld: Modular & Customizable Reinforcement Learning Environments for Goal-Oriented Tasks | [
"Maxime Chevalier-Boisvert",
"Bolun Dai",
"Mark Towers",
"Rodrigo De Lazcano Perez-Vicente",
"Lucas Willems",
"Salem Lahlou",
"Suman Pal",
"Pablo Samuel Castro",
"J K Terry"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=PF0lxayYST | @inproceedings{
liu2023on,
title={On the Need for a Language Describing Distribution Shifts: Illustrations on Tabular Datasets},
author={Jiashuo Liu and Tianyu Wang and Peng Cui and Hongseok Namkoong},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=PF0lxayYST}
} | Different distribution shifts require different algorithmic and operational interventions. Methodological research must be grounded by the specific shifts they address. Although nascent benchmarks provide a promising empirical foundation, they \emph{implicitly} focus on covariate shifts, and the validity of empirical findings depends on the type of shift, e.g., previous observations on algorithmic performance can fail to be valid when the $Y|X$ distribution changes. We conduct a thorough investigation of natural shifts in 5 tabular datasets over 86,000 model configurations, and find that $Y|X$-shifts are most prevalent. To encourage researchers to develop a refined language for distribution shifts, we build ``WhyShift``, an empirical testbed of curated real-world shifts where we characterize the type of shift we benchmark performance over. Since $Y|X$-shifts are prevalent in tabular settings, we \emph{identify covariate regions} that suffer the biggest $Y|X$-shifts and discuss implications for algorithmic and data-based interventions. Our testbed highlights the importance of future research that builds an understanding of why distributions differ. | On the Need for a Language Describing Distribution Shifts: Illustrations on Tabular Datasets | [
"Jiashuo Liu",
"Tianyu Wang",
"Peng Cui",
"Hongseok Namkoong"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |