bibtex_url | proceedings | bibtext | abstract | title | authors | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | paper_page_exists_pre_conf
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | https://openreview.net/forum?id=3sRR2u72oQ | @inproceedings{
huang2023inspect,
title={{INSPECT}: A Multimodal Dataset for Pulmonary Embolism Diagnosis and Prognosis},
author={Shih-Cheng Huang and Zepeng Huo and Ethan Steinberg and Chia-Chun Chiang and Matthew P. Lungren and Curtis Langlotz and Serena Yeung and Nigam Shah and Jason Alan Fries},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=3sRR2u72oQ}
} | Synthesizing information from various data sources plays a crucial role in the practice of modern medicine. Current applications of artificial intelligence in medicine often focus on single-modality data due to a lack of publicly available, multimodal medical datasets. To address this limitation, we introduce INSPECT, which contains de-identified longitudinal records from a large cohort of pulmonary embolism (PE) patients, along with ground truth labels for multiple outcomes. INSPECT contains data from 19,402 patients, including CT images, sections of radiology reports, and structured electronic health record (EHR) data (including demographics, diagnoses, procedures, and vitals). Using our provided dataset, we develop and release a benchmark for evaluating several baseline modeling approaches on a variety of important PE-related tasks. We evaluate image-only, EHR-only, and fused models. Trained models and the de-identified dataset are made available for non-commercial use under a data use agreement. To the best of our knowledge, INSPECT is the largest multimodal dataset for enabling reproducible research on strategies for integrating 3D medical imaging and EHR data. | INSPECT: A Multimodal Dataset for Pulmonary Embolism Diagnosis and Prognosis | [
"Shih-Cheng Huang",
"Zepeng Huo",
"Ethan Steinberg",
"Chia-Chun Chiang",
"Matthew P. Lungren",
"Curtis Langlotz",
"Serena Yeung",
"Nigam Shah",
"Jason Alan Fries"
] | Track/Datasets_and_Benchmarks | poster | 2311.10798 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=3BxYAaovKr | @inproceedings{
song2023egod,
title={Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities},
author={Yale Song and Gene Byrne and Tushar Nagarajan and Huiyu Wang and Miguel Martin and Lorenzo Torresani},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=3BxYAaovKr}
} | Human activities are goal-oriented and hierarchical, comprising primary goals at the top level, sequences of steps and substeps in the middle, and atomic actions at the lowest level. Recognizing human activities thus requires relating atomic actions and steps to their functional objectives (what the actions contribute to) and modeling their sequential and hierarchical dependencies towards achieving the goals. Current activity recognition research has primarily focused on only the lowest levels of this hierarchy, i.e., atomic or low-level actions, often in trimmed videos with annotations spanning only a few seconds. In this work, we introduce Ego4D Goal-Step, a new set of annotations on the recently released Ego4D with a novel hierarchical taxonomy of goal-oriented activity labels. It provides dense annotations for 48K procedural step segments (430 hours) and high-level goal annotations for 2,807 hours of Ego4D videos. Compared to existing procedural video datasets, it is substantially larger in size, contains hierarchical action labels (goals - steps - substeps), and provides goal-oriented auxiliary information including natural language summary description, step completion status, and step-to-goal relevance information. We take a data-driven approach to build our taxonomy, resulting in dense step annotations that do not suffer from poor label-data alignment issues resulting from a taxonomy defined a priori. Through comprehensive evaluations and analyses, we demonstrate how Ego4D Goal-Step supports exploring various questions in procedural activity understanding, including goal inference, step prediction, hierarchical relation learning, and long-term temporal modeling. | Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities | [
"Yale Song",
"Gene Byrne",
"Tushar Nagarajan",
"Huiyu Wang",
"Miguel Martin",
"Lorenzo Torresani"
] | Track/Datasets_and_Benchmarks | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=3BQaMV9jxK | @inproceedings{
feng2023mathbfmathbbefwi,
title={\${\textbackslash}mathbf\{{\textbackslash}mathbb\{E\}{\textasciicircum}\{{FWI}\}\}\$: Multiparameter Benchmark Datasets for Elastic Full Waveform Inversion of Geophysical Properties},
author={Shihang Feng and Hanchen Wang and Chengyuan Deng and Yinan Feng and Yanhua Liu and Min Zhu and Peng Jin and Yinpeng Chen and Youzuo Lin},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=3BQaMV9jxK}
} | Elastic geophysical properties (such as P- and S-wave velocities) are of great importance to various subsurface applications like CO$_2$ sequestration and energy exploration (e.g., hydrogen and geothermal). Elastic full waveform inversion (FWI) is widely applied for characterizing reservoir properties. In this paper, we introduce $\mathbf{\mathbb{E}^{FWI}}$, a comprehensive benchmark dataset that is specifically designed for elastic FWI. $\mathbf{\mathbb{E}^{FWI}}$ encompasses 8 distinct datasets that cover diverse subsurface geologic structures (flat, curve, faults, etc). The benchmark results produced by three different deep learning methods are provided. In contrast to our previously presented dataset (pressure recordings) for acoustic FWI (referred to as OpenFWI), the seismic dataset in $\mathbf{\mathbb{E}^{FWI}}$ has both vertical and horizontal components. Moreover, the velocity maps in $\mathbf{\mathbb{E}^{FWI}}$ incorporate both P- and S-wave velocities. While the multicomponent data and the added S-wave velocity make the data more realistic, more challenges are introduced regarding the convergence and computational cost of the inversion. We conduct comprehensive numerical experiments to explore the relationship between P-wave and S-wave velocities in seismic data. The relation between P- and S-wave velocities provides crucial insights into the subsurface properties such as lithology, porosity, fluid content, etc. We anticipate that $\mathbf{\mathbb{E}^{FWI}}$ will facilitate future research on multiparameter inversions and stimulate endeavors in several critical research topics of carbon-zero and new energy exploration. All datasets, codes and relevant information can be accessed through our website at https://efwi-lanl.github.io/ | 𝔼^𝐅𝐖𝐈: Multiparameter Benchmark Datasets for Elastic Full Waveform Inversion of Geophysical Properties | [
"Shihang Feng",
"Hanchen Wang",
"Chengyuan Deng",
"Yinan Feng",
"Yanhua Liu",
"Min Zhu",
"Peng Jin",
"Yinpeng Chen",
"Youzuo Lin"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=393EoKpJN3 | @inproceedings{
ma2023chimpact,
title={Chimp{ACT}: A Longitudinal Dataset for Understanding Chimpanzee Behaviors},
author={Xiaoxuan Ma and Stephan Paul Kaufhold and Jiajun Su and Wentao Zhu and Jack Terwilliger and Andres Meza and Yixin Zhu and Federico Rossano and Yizhou Wang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=393EoKpJN3}
} | Understanding the behavior of non-human primates is crucial for improving animal welfare, modeling social behavior, and gaining insights into distinctively human and phylogenetically shared behaviors. However, the lack of datasets on non-human primate behavior hinders in-depth exploration of primate social interactions, posing challenges to research on our closest living relatives. To address these limitations, we present ChimpACT, a comprehensive dataset for quantifying the longitudinal behavior and social relations of chimpanzees within a social group. Spanning from 2015 to 2018, ChimpACT features videos of a group of over 20 chimpanzees residing at the Leipzig Zoo, Germany, with a particular focus on documenting the developmental trajectory of one young male, Azibo. ChimpACT is both comprehensive and challenging, consisting of 163 videos with a cumulative 160,500 frames, each richly annotated with detection, identification, pose estimation, and fine-grained spatiotemporal behavior labels. We benchmark representative methods of three tracks on ChimpACT: (i) tracking and identification, (ii) pose estimation, and (iii) spatiotemporal action detection of the chimpanzees. Our experiments reveal that ChimpACT offers ample opportunities for both devising new methods and adapting existing ones to solve fundamental computer vision tasks applied to chimpanzee groups, such as detection, pose estimation, and behavior analysis, ultimately deepening our comprehension of communication and sociality in non-human primates. | ChimpACT: A Longitudinal Dataset for Understanding Chimpanzee Behaviors | [
"Xiaoxuan Ma",
"Stephan Paul Kaufhold",
"Jiajun Su",
"Wentao Zhu",
"Jack Terwilliger",
"Andres Meza",
"Yixin Zhu",
"Federico Rossano",
"Yizhou Wang"
] | Track/Datasets_and_Benchmarks | poster | 2310.16447 | [
"https://github.com/shirleymaxx/chimpact"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=38bZuqQOhC | @inproceedings{
liao2023does,
title={Does Continual Learning Meet Compositionality? New Benchmarks and An Evaluation Framework},
author={Weiduo Liao and Ying Wei and Mingchen Jiang and Qingfu Zhang and Hisao Ishibuchi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=38bZuqQOhC}
} | Compositionality facilitates the comprehension of novel objects using acquired concepts and the maintenance of a knowledge pool. This is particularly crucial for continual learners to prevent catastrophic forgetting and enable compositionally forward transfer of knowledge. However, the existing state-of-the-art benchmarks inadequately evaluate the capability of compositional generalization, leaving an intriguing question unanswered. To comprehensively assess this capability, we introduce two vision benchmarks, namely Compositional GQA (CGQA) and Compositional OBJects365 (COBJ), along with a novel evaluation framework called Compositional Few-Shot Testing (CFST). These benchmarks evaluate the systematicity, productivity, and substitutivity aspects of compositional generalization. Experimental results on five baselines and two modularity-based methods demonstrate that current continual learning techniques do exhibit somewhat favorable compositionality in their learned feature extractors. Nonetheless, further efforts are required in developing modularity-based approaches to enhance compositional generalization. We anticipate that our proposed benchmarks and evaluation protocol will foster research on continual learning and compositionality. | Does Continual Learning Meet Compositionality? New Benchmarks and An Evaluation Framework | [
"Weiduo Liao",
"Ying Wei",
"Mingchen Jiang",
"Qingfu Zhang",
"Hisao Ishibuchi"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=2s7ZZUhEGS | @inproceedings{
yuan2023marble,
title={{MARBLE}: Music Audio Representation Benchmark for Universal Evaluation},
author={Ruibin Yuan and Yinghao Ma and Yizhi LI and Ge Zhang and Xingran Chen and Hanzhi Yin and Le Zhuo and Yiqi Liu and Jiawen Huang and Zeyue Tian and Binyue Deng and Ningzhi Wang and Chenghua Lin and Emmanouil Benetos and Anton Ragni and Norbert Gyenge and Roger Dannenberg and Wenhu Chen and Gus Xia and Wei Xue and Si Liu and Shi Wang and Ruibo Liu and Yike Guo and Jie Fu},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=2s7ZZUhEGS}
} | In the era of extensive intersection between art and Artificial Intelligence (AI), such as image generation and fiction co-creation, AI for music remains relatively nascent, particularly in music understanding. This is evident in the limited work on deep music representations, the scarcity of large-scale datasets, and the absence of a universal and community-driven benchmark. To address this issue, we introduce the Music Audio Representation Benchmark for universaL Evaluation, termed MARBLE. It aims to provide a benchmark for various Music Information Retrieval (MIR) tasks by defining a comprehensive taxonomy with four hierarchy levels, including acoustic, performance, score, and high-level description. We then establish a unified protocol based on 18 tasks on 12 public-available datasets, providing a fair and standard assessment of representations of all open-sourced pre-trained models developed on music recordings as baselines. Besides, MARBLE offers an easy-to-use, extendable, and reproducible suite for the community, with a clear statement on copyright issues on datasets. Results suggest recently proposed large-scale pre-trained musical language models perform the best in most tasks, with room for further improvement. The leaderboard and toolkit repository are published to promote future music AI research. | MARBLE: Music Audio Representation Benchmark for Universal Evaluation | null | Track/Datasets_and_Benchmarks | poster | 2306.10548 | [
"https://github.com/a43992899/marble-benchmark"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=2CJUQe6IoR | @inproceedings{
sanders2023multivent,
title={Multi{VENT}: Multilingual Videos of Events and Aligned Natural Text},
author={Kate Sanders and David Etter and Reno Kriz and Benjamin Van Durme},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=2CJUQe6IoR}
} | Everyday news coverage has shifted from traditional broadcasts towards a wide range of presentation formats such as first-hand, unedited video footage. Datasets that reflect the diverse array of multimodal, multilingual news sources available online could be used to teach models to benefit from this shift, but existing news video datasets focus on traditional news broadcasts produced for English-speaking audiences. We address this limitation by constructing MultiVENT, a dataset of multilingual, event-centric videos grounded in text documents across five target languages. MultiVENT includes both news broadcast videos and non-professional event footage, which we use to analyze the state of online news videos and how they can be leveraged to build robust, factually accurate models. Finally, we provide a model for complex, multilingual video retrieval to serve as a baseline for information retrieval using MultiVENT. | MultiVENT: Multilingual Videos of Events and Aligned Natural Text | null | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=27vPcG4vKV | @inproceedings{
kucera2023proteinshake,
title={ProteinShake: Building datasets and benchmarks for deep learning on protein structures},
author={Tim Kucera and Carlos Oliver and Dexiong Chen and Karsten Borgwardt},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=27vPcG4vKV}
} | We present ProteinShake, a Python software package that simplifies dataset creation and model evaluation for deep learning on protein structures. Users can create custom datasets or load an extensive set of pre-processed datasets from biological data repositories such as the Protein Data Bank (PDB) and AlphaFoldDB. Each dataset is associated with prediction tasks and evaluation functions covering a broad array of biological challenges. A benchmark on these tasks shows that pretraining almost always improves performance, the optimal data modality (graphs, voxel grids, or point clouds) is task-dependent, and models struggle to generalize to new structures. ProteinShake makes protein structure data easily accessible and comparison among models straightforward, providing challenging benchmark settings with real-world implications.
ProteinShake is available at: https://proteinshake.ai | ProteinShake: Building datasets and benchmarks for deep learning on protein structures | [
"Tim Kucera",
"Carlos Oliver",
"Dexiong Chen",
"Karsten Borgwardt"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=22RlsVAOTT | @inproceedings{
pan2023renderme,
title={RenderMe-360: A Large Digital Asset Library and Benchmarks Towards High-fidelity Head Avatars},
author={Dongwei Pan and Long Zhuo and Jingtan Piao and Huiwen Luo and Wei Cheng and Yuxin WANG and Siming Fan and Shengqi Liu and Lei Yang and Bo Dai and Ziwei Liu and Chen Change Loy and Chen Qian and Wayne Wu and Dahua Lin and Kwan-Yee Lin},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=22RlsVAOTT}
} | Synthesizing high-fidelity head avatars is a central problem for computer vision and graphics. While head avatar synthesis algorithms have advanced rapidly, the best ones still face great obstacles in real-world scenarios. One of the vital causes is the inadequate datasets -- 1) current public datasets can only support researchers to explore high-fidelity head avatars in one or two task directions; 2) these datasets usually contain digital head assets with limited data volume, and narrow distribution over different attributes, such as expressions, ages, and accessories. In this paper, we present RenderMe-360, a comprehensive 4D human head dataset to drive advance in head avatar algorithms across different scenarios. It contains massive data assets, with 243+ million complete head frames and over 800k video sequences from 500 different identities captured by multi-view cameras at 30 FPS. It is a large-scale digital library for head avatars with three key attributes: 1) High Fidelity: all subjects are captured in 360 degrees via 60 synchronized, high-resolution 2K cameras. 2) High Diversity: The collected subjects vary from different ages, eras, ethnicities, and cultures, providing abundant materials with distinctive styles in appearance and geometry. Moreover, each subject is asked to perform various dynamic motions, such as expressions and head rotations, which further extend the richness of assets. 3) Rich Annotations: the dataset provides annotations with different granularities: cameras' parameters, background matting, scan, 2D/3D facial landmarks, FLAME fitting, and text description.
Based on the dataset, we build a comprehensive benchmark for head avatar research, with 16 state-of-the-art methods performed on five main tasks: novel view synthesis, novel expression synthesis, hair rendering, hair editing, and talking head generation. Our experiments uncover the strengths and flaws of state-of-the-art methods. RenderMe-360 opens the door for future exploration in modern head avatars. All of the data, code, and models will be publicly available at https://renderme-360.github.io/. | RenderMe-360: A Large Digital Asset Library and Benchmarks Towards High-fidelity Head Avatars | [
"Dongwei Pan",
"Long Zhuo",
"Jingtan Piao",
"Huiwen Luo",
"Wei Cheng",
"Yuxin WANG",
"Siming Fan",
"Shengqi Liu",
"Lei Yang",
"Bo Dai",
"Ziwei Liu",
"Chen Change Loy",
"Chen Qian",
"Wayne Wu",
"Dahua Lin",
"Kwan-Yee Lin"
] | Track/Datasets_and_Benchmarks | poster | 2305.13353 | [
"https://github.com/renderme-360/renderme-360"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=1yOnfDpkVe | @inproceedings{
goldblum2023battle,
title={Battle of the Backbones: A Large-Scale Comparison of Pretrained Models across Computer Vision Tasks},
author={Micah Goldblum and Hossein Souri and Renkun Ni and Manli Shu and Viraj Uday Prabhu and Gowthami Somepalli and Prithvijit Chattopadhyay and Mark Ibrahim and Adrien Bardes and Judy Hoffman and Rama Chellappa and Andrew Gordon Wilson and Tom Goldstein},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=1yOnfDpkVe}
} | Neural network based computer vision systems are typically built on a backbone, a pretrained or randomly initialized feature extractor. Several years ago, the default option was an ImageNet-trained convolutional neural network. However, the recent past has seen the emergence of countless backbones pretrained using various algorithms and datasets. While this abundance of choice has led to performance increases for a range of systems, it is difficult for practitioners to make informed decisions about which backbone to choose. Battle of the Backbones (BoB) makes this choice easier by benchmarking a diverse suite of pretrained models, including vision-language models, those trained via self-supervised learning, and the Stable Diffusion backbone, across a diverse set of computer vision tasks ranging from classification to object detection to OOD generalization and more. Furthermore, BoB sheds light on promising directions for the research community to advance computer vision by illuminating strengths and weakness of existing approaches through a comprehensive analysis conducted on more than 1500 training runs. While vision transformers (ViTs) and self-supervised learning (SSL) are increasingly popular, we find that convolutional neural networks pretrained in a supervised fashion on large training sets still perform best on most tasks among the models we consider. Moreover, in apples-to-apples comparisons on the same architectures and similarly sized pretraining datasets, we find that SSL backbones are highly competitive, indicating that future works should perform SSL pretraining with advanced architectures and larger pretraining datasets. We release the raw results of our experiments along with code that allows researchers to put their own backbones through the gauntlet here: https://github.com/hsouri/Battle-of-the-Backbones. | Battle of the Backbones: A Large-Scale Comparison of Pretrained Models across Computer Vision Tasks | [
"Micah Goldblum",
"Hossein Souri",
"Renkun Ni",
"Manli Shu",
"Viraj Uday Prabhu",
"Gowthami Somepalli",
"Prithvijit Chattopadhyay",
"Mark Ibrahim",
"Adrien Bardes",
"Judy Hoffman",
"Rama Chellappa",
"Andrew Gordon Wilson",
"Tom Goldstein"
] | Track/Datasets_and_Benchmarks | poster | 2310.19909 | [
"https://github.com/hsouri/battle-of-the-backbones"
] | https://huggingface.co/papers/2310.19909 | 8 | 20 | 1 | 13 | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=1uAsASS1th | @inproceedings{
yang2023mmfi,
title={{MM}-Fi: Multi-Modal Non-Intrusive 4D Human Dataset for Versatile Wireless Sensing},
author={Jianfei Yang and He Huang and Yunjiao Zhou and Xinyan Chen and Yuecong Xu and Shenghai Yuan and Han Zou and Chris Xiaoxuan Lu and Lihua Xie},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=1uAsASS1th}
} | 4D human perception plays an essential role in a myriad of applications, such as home automation and metaverse avatar simulation. However, existing solutions which mainly rely on cameras and wearable devices are either privacy intrusive or inconvenient to use. To address these issues, wireless sensing has emerged as a promising alternative, leveraging LiDAR, mmWave radar, and WiFi signals for device-free human sensing. In this paper, we propose MM-Fi, the first multi-modal non-intrusive 4D human dataset with 27 daily or rehabilitation action categories, to bridge the gap between wireless sensing and high-level human perception tasks. MM-Fi consists of over 320k synchronized frames of five modalities from 40 human subjects. Various annotations are provided to support potential sensing tasks, e.g., human pose estimation and action recognition. Extensive experiments have been conducted to compare the sensing capacity of each or several modalities in terms of multiple tasks. We envision that MM-Fi can contribute to wireless sensing research with respect to action recognition, human pose estimation, multi-modal learning, cross-modal supervision, and interdisciplinary healthcare research. | MM-Fi: Multi-Modal Non-Intrusive 4D Human Dataset for Versatile Wireless Sensing | [
"Jianfei Yang",
"He Huang",
"Yunjiao Zhou",
"Xinyan Chen",
"Yuecong Xu",
"Shenghai Yuan",
"Han Zou",
"Chris Xiaoxuan Lu",
"Lihua Xie"
] | Track/Datasets_and_Benchmarks | poster | 2305.10345 | [
"https://github.com/ybhbingo/mmfi_dataset"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=1plAfmP5ms | @inproceedings{
hor2023mvdoppler,
title={{MVD}oppler: Unleashing the Power of Multi-View Doppler for MicroMotion-based Gait Classification},
author={Soheil Hor and Shubo Yang and Jae-Ho Choi and Amin Arbabian},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=1plAfmP5ms}
} | Modern perception systems rely heavily on high-resolution cameras, LiDARs, and advanced deep neural networks, enabling exceptional performance across various applications. However, these optical systems predominantly depend on geometric features and shapes of objects, which can be challenging to capture in long-range perception applications. To overcome this limitation, alternative approaches such as Doppler-based perception using high-resolution radars have been proposed.
Doppler-based systems are capable of measuring micro-motions of targets remotely and with very high precision. When compared to geometric features, the resolution of micro-motion features exhibits significantly greater resilience to the influence of distance. However, the true potential of Doppler-based perception has yet to be fully realized due to several factors. These include the unintuitive nature of Doppler signals, the limited availability of public Doppler datasets, and the current datasets' inability to capture the specific co-factors that are unique to Doppler-based perception, such as the effect of the radar's observation angle and the target's motion trajectory.
This paper introduces a new large multi-view Doppler dataset together with baseline perception models for micro-motion-based gait analysis and classification. The dataset captures the impact of the subject's walking trajectory and radar's observation angle on the classification performance. Additionally, baseline multi-view data fusion techniques are provided to mitigate these effects. This work demonstrates that sub-second micro-motion snapshots can be sufficient for reliable detection of hand movement patterns and even changes in a pedestrian's walking behavior when distracted by their phone. Overall, this research not only showcases the potential of Doppler-based perception, but also offers valuable solutions to tackle its fundamental challenges. | MVDoppler: Unleashing the Power of Multi-View Doppler for MicroMotion-based Gait Classification | [
"Soheil Hor",
"Shubo Yang",
"Jae-Ho Choi",
"Amin Arbabian"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=1ngbR3SZHW | @inproceedings{
guo2023what,
title={What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks},
author={Taicheng Guo and Kehan Guo and Bozhao Nan and Zhenwen Liang and Zhichun Guo and Nitesh V Chawla and Olaf Wiest and Xiangliang Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=1ngbR3SZHW}
} | Large Language Models (LLMs) with strong abilities in natural language processing tasks have emerged and have been applied in various kinds of areas such as science, finance and software engineering. However, the capability of LLMs to advance the field of chemistry remains unclear. In this paper, rather than pursuing state-of-the-art performance, we aim to evaluate capabilities of LLMs in a wide range of tasks across the chemistry domain. We identify three key chemistry-related capabilities including understanding, reasoning and explaining to explore in LLMs and establish a benchmark containing eight chemistry tasks. Our analysis draws on widely recognized datasets facilitating a broad exploration of the capacities of LLMs within the context of practical chemistry. Five LLMs (GPT-4,GPT-3.5, Davinci-003, Llama and Galactica) are evaluated for each chemistry task in zero-shot and few-shot in-context learning settings with carefully selected demonstration examples and specially crafted prompts. Our investigation found that GPT-4 outperformed other models and LLMs exhibit different competitive levels in eight chemistry tasks. In addition to the key findings from the comprehensive benchmark analysis, our work provides insights into the limitation of current LLMs and the impact of in-context learning settings on LLMs’ performance across various chemistry tasks. The code and datasets used in this study are available at https://github.com/ChemFoundationModels/ChemLLMBench. | What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks | [
"Taicheng Guo",
"Kehan Guo",
"Bozhao Nan",
"Zhenwen Liang",
"Zhichun Guo",
"Nitesh V Chawla",
"Olaf Wiest",
"Xiangliang Zhang"
] | Track/Datasets_and_Benchmarks | poster | 2305.18365 | [
"https://github.com/chemfoundationmodels/chemllmbench"
] | https://huggingface.co/papers/2305.18365 | 3 | 3 | 0 | 8 | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=1jrYSOG7DR | @inproceedings{
grover2023revealing,
title={Revealing the unseen: Benchmarking video action recognition under occlusion},
author={Shresth Grover and Vibhav Vineet and Yogesh S Rawat},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=1jrYSOG7DR}
} | In this work, we study the effect of occlusion on video action recognition. To facilitate this study, we propose three benchmark datasets and experiment with seven different video action recognition models. These datasets include two synthetic benchmarks, UCF-101-O and K-400-O, which enabled understanding the effects of fundamental properties of occlusion via controlled experiments. We also propose a real-world occlusion dataset, UCF-101-Y-OCC, which helps in further validating the findings of this study. We find several interesting insights, such as 1) transformers are more robust than CNN counterparts, 2) pretraining makes models robust against occlusions, and 3) augmentation helps, but does not generalize well to real-world occlusions. In addition, we propose a simple transformer-based compositional model, termed CTx-Net, which generalizes well under this distribution shift. We observe that CTx-Net outperforms models which are trained using occlusions as augmentation, performing significantly better under natural occlusions. We believe this benchmark will open up interesting future research in robust video action recognition. | Revealing the unseen: Benchmarking video action recognition under occlusion | [
"Shresth Grover",
"Vibhav Vineet",
"Yogesh S Rawat"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=1agtIRxlCY | @inproceedings{
tschernezki2023epic,
title={{EPIC} Fields: Marrying 3D Geometry and Video Understanding},
author={Vadim Tschernezki and Ahmad Darkhalil and Zhifan Zhu and David Fouhey and Iro Laina and Diane Larlus and Dima Damen and Andrea Vedaldi},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=1agtIRxlCY}
} | Neural rendering is fuelling a unification of learning, 3D geometry and video understanding that has been waiting for more than two decades. Progress, however, is still hampered by a lack of suitable datasets and benchmarks. To address this gap, we introduce EPIC Fields, an augmentation of EPIC-KITCHENS with 3D camera information. Like other datasets for neural rendering, EPIC Fields removes the complex and expensive step of reconstructing cameras using photogrammetry, and allows researchers to focus on modelling problems. We illustrate the challenge of photogrammetry in egocentric videos of dynamic actions and propose innovations to address them. Compared to other neural rendering datasets, EPIC Fields is better tailored to video understanding because it is paired with labelled action segments and the recent VISOR segment annotations. To further motivate the community, we also evaluate two benchmark tasks in neural rendering and segmenting dynamic objects, with strong baselines that showcase what is not possible today. We also highlight the advantage of geometry in semi-supervised video object segmentations on the VISOR annotations. EPIC Fields reconstructs 96\% of videos in EPIC-KITCHENS, registering 19M frames in 99 hours recorded in 45 kitchens, and is available from: http://epic-kitchens.github.io/epic-fields | EPIC Fields: Marrying 3D Geometry and Video Understanding | [
"Vadim Tschernezki",
"Ahmad Darkhalil",
"Zhifan Zhu",
"David Fouhey",
"Iro Laina",
"Diane Larlus",
"Dima Damen",
"Andrea Vedaldi"
] | Track/Datasets_and_Benchmarks | poster | 2306.08731 | [
"https://github.com/epic-kitchens/epic-fields-code"
] | https://huggingface.co/papers/2306.08731 | 0 | 0 | 0 | 8 | [] | [] | [] | 1 |
null | https://openreview.net/forum?id=1ODvxEwsGk | @inproceedings{
sen2023diverse,
title={Diverse Community Data for Benchmarking Data Privacy Algorithms},
author={Aniruddha Sen and Christine Task and Dhruv Kapur and Gary Stanley Howarth and Karan Bhagat},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=1ODvxEwsGk}
} | The Collaborative Research Cycle (CRC) is a National Institute of Standards and Technology (NIST) benchmarking program intended to strengthen understanding of tabular data deidentification technologies. Deidentification algorithms are vulnerable to the same bias and privacy issues that impact other data analytics and machine learning applications, and it can even amplify those issues by contaminating downstream applications. This paper summarizes four CRC contributions: theoretical work on the relationship between diverse populations and challenges for equitable deidentification; public benchmark data focused on diverse populations and challenging features; a comprehensive open source suite of evaluation metrology for deidentified datasets; and an archive of more than 450 deidentified data samples from a broad range of techniques. The initial set of evaluation results demonstrate the value of the CRC tools for investigations in this field. | Diverse Community Data for Benchmarking Data Privacy Algorithms | [
"Aniruddha Sen",
"Christine Task",
"Dhruv Kapur",
"Gary Stanley Howarth",
"Karan Bhagat"
] | Track/Datasets_and_Benchmarks | poster | 2306.13216 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=10R4Fg1aA0 | @inproceedings{
sikarwar2023decoding,
title={Decoding the Enigma: Benchmarking Humans and {AI}s on the Many Facets of Working Memory},
author={Ankur Sikarwar and Mengmi Zhang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=10R4Fg1aA0}
} | Working memory (WM), a fundamental cognitive process facilitating the temporary storage, integration, manipulation, and retrieval of information, plays a vital role in reasoning and decision-making tasks. Robust benchmark datasets that capture the multifaceted nature of WM are crucial for the effective development and evaluation of AI WM models. Here, we introduce a comprehensive Working Memory (WorM) benchmark dataset for this purpose. WorM comprises 10 tasks and a total of 1 million trials, assessing 4 functionalities, 3 domains, and 11 behavioral and neural characteristics of WM. We jointly trained and tested state-of-the-art recurrent neural networks and transformers on all these tasks. We also include human behavioral benchmarks as an upper bound for comparison. Our results suggest that AI models replicate some characteristics of WM in the brain, most notably primacy and recency effects, and neural clusters and correlates specialized for different domains and functionalities of WM. In the experiments, we also reveal some limitations in existing models to approximate human behavior. This dataset serves as a valuable resource for communities in cognitive psychology, neuroscience, and AI, offering a standardized framework to compare and enhance WM models, investigate WM's neural underpinnings, and develop WM models with human-like capabilities. Our source code and data are available at: https://github.com/ZhangLab-DeepNeuroCogLab/WorM | Decoding the Enigma: Benchmarking Humans and AIs on the Many Facets of Working Memory | [
"Ankur Sikarwar",
"Mengmi Zhang"
] | Track/Datasets_and_Benchmarks | poster | 2307.10768 | [
"https://github.com/zhanglab-deepneurocoglab/worm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=0cltUI2Sto | @inproceedings{
modi2023on,
title={On Occlusions in Video Action Detection: Benchmark Datasets And Training Recipes},
author={Rajat Modi and Vibhav Vineet and Yogesh S Rawat},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=0cltUI2Sto}
} | This paper explores the impact of occlusions in video action detection. We facilitate this study by introducing five new benchmark datasets, namely O-UCF and O-JHMDB consisting of synthetically controlled static/dynamic occlusions, OVIS-UCF and OVIS-JHMDB consisting of occlusions with realistic motions, and Real-OUCF for occlusions in realistic-world scenarios. We formally confirm an intuitive expectation: existing models suffer a lot as occlusion severity is increased and exhibit different behaviours when occluders are static vs. when they are moving. We discover several intriguing phenomena emerging in neural nets: 1) transformers can naturally outperform CNN models which might have even used occlusion as a form of data augmentation during training, 2) incorporating symbolic components like capsules into such backbones allows them to bind to occluders never even seen during training, and 3) islands of agreement (similar to the ones hypothesized in Hinton et al.'s GLOM) can emerge in realistic images/videos without instance-level supervision, distillation, or contrastive-based objectives (e.g., video-textual training). Such emergent properties allow us to derive simple yet effective training recipes which lead to robust occlusion models inductively satisfying the first two stages of the binding mechanism (grouping/segregation). Models leveraging these recipes outperform existing video action detectors under occlusion by 32.3% on O-UCF, 32.7% on O-JHMDB, and 2.6% on Real-OUCF in terms of the vMAP metric. The code for this work has been released at https://github.com/rajatmodi62/OccludedActionBenchmark. | On Occlusions in Video Action Detection: Benchmark Datasets And Training Recipes | [
"Rajat Modi",
"Vibhav Vineet",
"Yogesh S Rawat"
] | Track/Datasets_and_Benchmarks | poster | 2410.19553 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=0Wmglu8zak | @inproceedings{
hassan2023bubbleml,
title={Bubble{ML}: A Multiphase Multiphysics Dataset and Benchmarks for Machine Learning},
author={Sheikh Md Shakeel Hassan and Arthur Feeney and Akash Dhruv and Jihoon Kim and Youngjoon Suh and Jaiyoung Ryu and Yoonjin Won and Aparna Chandramowlishwaran},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=0Wmglu8zak}
} | In the field of phase change phenomena, the lack of accessible and diverse datasets suitable for machine learning (ML) training poses a significant challenge. Existing experimental datasets are often restricted, with limited availability and sparse ground truth, impeding our understanding of this complex multiphysics phenomena. To bridge this gap, we present the BubbleML dataset which leverages physics-driven simulations to provide accurate ground truth information for various boiling scenarios, encompassing nucleate pool boiling, flow boiling, and sub-cooled boiling. This extensive dataset covers a wide range of parameters, including varying gravity conditions, flow rates, sub-cooling levels, and wall superheat, comprising 79 simulations. BubbleML is validated against experimental observations and trends, establishing it as an invaluable resource for ML research. Furthermore, we showcase its potential to facilitate the exploration of diverse downstream tasks by introducing two benchmarks: (a) optical flow analysis to capture bubble dynamics, and (b) neural PDE solvers for learning temperature and flow dynamics. The BubbleML dataset and its benchmarks aim to catalyze progress in ML-driven research on multiphysics phase change phenomena, providing robust baselines for the development and comparison of state-of-the-art techniques and models. | BubbleML: A Multiphase Multiphysics Dataset and Benchmarks for Machine Learning | [
"Sheikh Md Shakeel Hassan",
"Arthur Feeney",
"Akash Dhruv",
"Jihoon Kim",
"Youngjoon Suh",
"Jaiyoung Ryu",
"Yoonjin Won",
"Aparna Chandramowlishwaran"
] | Track/Datasets_and_Benchmarks | oral | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=0RSQEh9lRG | @inproceedings{
zhu2023ofcourse,
title={{OFCOURSE}: A Multi-Agent Reinforcement Learning Environment for Order Fulfillment},
author={Yiheng Zhu and Yang Zhan and Xuankun Huang and Yuwei Chen and yujie Chen and Jiangwen Wei and Wei Feng and Yinzhi Zhou and Haoyuan Hu and Jieping Ye},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=0RSQEh9lRG}
} | The dramatic growth of global e-commerce has led to a surge in demand for efficient and cost-effective order fulfillment which can increase customers' service levels and sellers' competitiveness. However, managing order fulfillment is challenging due to a series of interdependent online sequential decision-making problems. To clear this hurdle, rather than solving the problems separately as attempted in some recent researches, this paper proposes a method based on multi-agent reinforcement learning to integratively solve the series of interconnected problems, encompassing order handling, packing and pickup, storage, order consolidation, and last-mile delivery. In particular, we model the integrated problem as a Markov game, wherein a team of agents learns a joint policy via interacting with a simulated environment. Since no simulated environment supporting the complete order fulfillment problem exists, we devise Order Fulfillment COoperative mUlti-agent Reinforcement learning Scalable Environment (OFCOURSE) in the OpenAI Gym style, which allows reproduction and re-utilization to build customized applications. By constructing the fulfillment system in OFCOURSE, we optimize a joint policy that solves the integrated problem, facilitating sequential order-wise operations across all fulfillment units and minimizing the total cost of fulfilling all orders within the promised time. With OFCOURSE, we also demonstrate that the joint policy learned by multi-agent reinforcement learning outperforms the combination of locally optimal policies. The source code of OFCOURSE is available at: https://github.com/GitYiheng/ofcourse. | OFCOURSE: A Multi-Agent Reinforcement Learning Environment for Order Fulfillment | [
"Yiheng Zhu",
"Yang Zhan",
"Xuankun Huang",
"Yuwei Chen",
"yujie Chen",
"Jiangwen Wei",
"Wei Feng",
"Yinzhi Zhou",
"Haoyuan Hu",
"Jieping Ye"
] | Track/Datasets_and_Benchmarks | poster | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
||
null | https://openreview.net/forum?id=0MGvE1Gkgv | @inproceedings{
kurenkov2023katakomba,
title={Katakomba: Tools and Benchmarks for Data-Driven NetHack},
author={Vladislav Kurenkov and Alexander Nikulin and Denis Tarasov and Sergey Kolesnikov},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=0MGvE1Gkgv}
} | NetHack is known as the frontier of reinforcement learning research where learning-based methods still need to catch up to rule-based solutions. One of the promising directions for a breakthrough is using pre-collected datasets similar to recent developments in robotics, recommender systems, and more under the umbrella of offline reinforcement learning (ORL). Recently, a large-scale NetHack dataset was released; while it was a necessary step forward, it has yet to gain wide adoption in the ORL community. In this work, we argue that there are three major obstacles for adoption: tool-wise, implementation-wise, and benchmark-wise. To address them, we develop an open-source library that provides workflow fundamentals familiar to the ORL community: pre-defined D4RL-style tasks, uncluttered baseline implementations, and reliable evaluation tools with accompanying configs and logs synced to the cloud. | Katakomba: Tools and Benchmarks for Data-Driven NetHack | [
"Vladislav Kurenkov",
"Alexander Nikulin",
"Denis Tarasov",
"Sergey Kolesnikov"
] | Track/Datasets_and_Benchmarks | poster | 2306.08772 | [
"https://github.com/corl-team/katakomba"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |
|
null | https://openreview.net/forum?id=0H5fRQcpQ7 | @inproceedings{
kumar2023robohive,
title={RoboHive: A Unified Framework for Robot Learning},
author={Vikash Kumar and Rutav Shah and Gaoyue Zhou and Vincent Moens and Vittorio Caggiano and Abhishek Gupta and Aravind Rajeswaran},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=0H5fRQcpQ7}
} | We present RoboHive, a comprehensive software platform and ecosystem for research in the field of Robot Learning and Embodied Artificial Intelligence. Our platform encompasses a diverse range of pre-existing and novel environments, including dexterous manipulation with the Shadow Hand, whole-arm manipulation tasks with Franka and Fetch robots, quadruped locomotion, among others. Included environments are organized within and cover multiple domains such as hand manipulation, locomotion, multi-task, multi-agent, muscles, etc. In comparison to prior works, RoboHive offers a streamlined and unified task interface taking dependency on only a minimal set of well-maintained packages, features tasks with high physics fidelity and rich visual diversity, and supports common hardware drivers for real-world deployment. The unified interface of RoboHive offers a convenient and accessible abstraction for algorithmic research in imitation, reinforcement, multi-task, and hierarchical learning. Furthermore, RoboHive includes expert demonstrations and baseline results for most environments, providing a standard for benchmarking and comparisons. Details: https://sites.google.com/view/robohive | RoboHive: A Unified Framework for Robot Learning | [
"Vikash Kumar",
"Rutav Shah",
"Gaoyue Zhou",
"Vincent Moens",
"Vittorio Caggiano",
"Abhishek Gupta",
"Aravind Rajeswaran"
] | Track/Datasets_and_Benchmarks | poster | 2310.06828 | [
"https://github.com/vikashplus/robohive"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 |