id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---
2312.05739 | Shu Yin | Shu Yin, Chao Gao, Zhen Wang | GAMC: An Unsupervised Method for Fake News Detection using Graph
Autoencoder with Masking | null | The Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024 | null | null | cs.SI cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | With the rise of social media, the spread of fake news has become a
significant concern, potentially misleading public perceptions and impacting
social stability. Although deep learning methods like CNNs, RNNs, and
Transformer-based models like BERT have enhanced fake news detection, they
primarily focus on content, overlooking social context during news propagation.
Graph-based techniques have incorporated this social context but are limited by
the need for large labeled datasets. Addressing these challenges, this paper
introduces GAMC, an unsupervised fake news detection technique using the Graph
Autoencoder with Masking and Contrastive learning. By leveraging both the
context and content of news propagation as self-supervised signals, our method
negates the requirement for labeled datasets. We augment the original news
propagation graph, encode the augmented graphs with a graph encoder, and employ a graph
decoder for reconstruction. A unique composite loss function, including
reconstruction error and contrast loss, is designed. The method's contributions
are: introducing self-supervised learning to fake news detection, proposing a
graph autoencoder integrating two distinct losses, and validating our
approach's efficacy through real-world dataset experiments.
| [
{
"created": "Sun, 10 Dec 2023 03:34:29 GMT",
"version": "v1"
}
] | 2023-12-12 | [
[
"Yin",
"Shu",
""
],
[
"Gao",
"Chao",
""
],
[
"Wang",
"Zhen",
""
]
] |
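The GAMC abstract above pairs a reconstruction error with a contrastive term over two masked views of the news propagation graph. As an illustration only, here is a minimal NumPy sketch of such a composite loss; the masking ratio, the linear stand-ins for the encoder and decoder, and all names are assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_features(x, ratio, rng):
    """Zero out a random subset of node features (the masking augmentation)."""
    m = rng.random(x.shape[0]) < ratio
    x = x.copy()
    x[m] = 0.0
    return x

def composite_loss(z1, z2, x, x_rec):
    """Reconstruction error plus a simple contrastive term between two views."""
    rec = np.mean((x - x_rec) ** 2)
    # cosine similarity between matched node embeddings of the two views
    sim = np.sum(z1 * z2, axis=1) / (
        np.linalg.norm(z1, axis=1) * np.linalg.norm(z2, axis=1) + 1e-8
    )
    contrast = np.mean(1.0 - sim)
    return rec + contrast

x = rng.normal(size=(6, 4))          # toy node features
w = rng.normal(size=(4, 2)) * 0.1    # linear stand-in for the graph encoder
x1, x2 = mask_features(x, 0.3, rng), mask_features(x, 0.3, rng)
z1, z2 = x1 @ w, x2 @ w              # embeddings of the two augmented views
x_rec = z1 @ w.T                     # linear stand-in for the graph decoder
loss = composite_loss(z1, z2, x, x_rec)
```

In the paper a graph neural network plays the role of `w`, and the augmentation masks features on a real propagation graph; the point here is only the shape of the combined objective.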
2312.05757 | Tianqianjin Lin | Tianqianjin Lin, Kaisong Song, Zhuoren Jiang, Yangyang Kang, Weikang
Yuan, Xurui Li, Changlong Sun, Cui Huang, Xiaozhong Liu | Towards Human-like Perception: Learning Structural Causal Model in
Heterogeneous Graph | 28 pages, 10 figures, 6 tables, accepted by Information Processing &
Management | Information Processing & Management, 60 (2024) 1-21 | 10.1016/j.ipm.2023.103600 | null | cs.LG cs.AI cs.DL cs.SI stat.ME | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Heterogeneous graph neural networks have become popular in various domains.
However, their generalizability and interpretability are limited due to the
discrepancy between their inherent inference flows and human reasoning logic or
underlying causal relationships for the learning problem. This study introduces
a novel solution, HG-SCM (Heterogeneous Graph as Structural Causal Model). It
can mimic the human perception and decision process through two key steps:
constructing intelligible variables based on semantics derived from the graph
schema and automatically learning task-level causal relationships among these
variables by incorporating advanced causal discovery techniques. We compared
HG-SCM to seven state-of-the-art baseline models on three real-world datasets,
under three distinct and ubiquitous out-of-distribution settings. HG-SCM
achieved the highest average performance rank with minimal standard deviation,
substantiating its effectiveness and superiority in terms of both predictive
power and generalizability. Additionally, the visualization and analysis of the
auto-learned causal diagrams for the three tasks aligned well with domain
knowledge and human cognition, demonstrating prominent interpretability.
HG-SCM's human-like nature and its enhanced generalizability and
interpretability make it a promising solution for special scenarios where
transparency and trustworthiness are paramount.
| [
{
"created": "Sun, 10 Dec 2023 04:34:35 GMT",
"version": "v1"
}
] | 2023-12-12 | [
[
"Lin",
"Tianqianjin",
""
],
[
"Song",
"Kaisong",
""
],
[
"Jiang",
"Zhuoren",
""
],
[
"Kang",
"Yangyang",
""
],
[
"Yuan",
"Weikang",
""
],
[
"Li",
"Xurui",
""
],
[
"Sun",
"Changlong",
""
],
[
"Huang",
"Cui",
""
],
[
"Liu",
"Xiaozhong",
""
]
] |
2312.05799 | Zhiqiang Yan | Zhengxue Wang and Zhiqiang Yan and Jian Yang | SGNet: Structure Guided Network via Gradient-Frequency Awareness for
Depth Map Super-Resolution | Accepted to AAAI 2024 | AAAI Conference on Artificial Intelligence, 2024 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Depth super-resolution (DSR) aims to restore high-resolution (HR) depth from
a low-resolution (LR) one, and an RGB image is often used to guide this task.
Recent image-guided DSR approaches mainly focus on the spatial domain to
rebuild depth structure. However, since the structure of LR depth is usually
blurry, considering the spatial domain alone is insufficient to acquire satisfactory
results. In this paper, we propose structure guided network (SGNet), a method
that pays more attention to gradient and frequency domains, both of which have
the inherent ability to capture high-frequency structure. Specifically, we
first introduce the gradient calibration module (GCM), which employs the
accurate gradient prior of RGB to sharpen the LR depth structure. Then we
present the Frequency Awareness Module (FAM) that recursively conducts multiple
spectrum differencing blocks (SDB), each of which propagates the precise
high-frequency components of RGB into the LR depth. Extensive experimental
results on both real and synthetic datasets demonstrate the superiority of our
SGNet, reaching the state-of-the-art. Codes and pre-trained models are
available at https://github.com/yanzq95/SGNet.
| [
{
"created": "Sun, 10 Dec 2023 07:17:06 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Dec 2023 05:57:48 GMT",
"version": "v2"
},
{
"created": "Wed, 13 Dec 2023 10:47:08 GMT",
"version": "v3"
}
] | 2024-02-29 | [
[
"Wang",
"Zhengxue",
""
],
[
"Yan",
"Zhiqiang",
""
],
[
"Yang",
"Jian",
""
]
] |
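The SGNet abstract above uses the gradient of the RGB guidance image as a structure prior for sharpening blurry LR depth. A minimal sketch of such a gradient prior, using a plain Sobel operator on a toy step edge (the kernel choice and the zero-padding are assumptions; the paper's gradient calibration module is learned, not a fixed filter):

```python
import numpy as np

def sobel_grad(img):
    """Horizontal/vertical Sobel responses of a 2-D image (zero-padded)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    p = np.pad(img, 1)
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = p[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)   # responds to vertical edges
            gy[i, j] = np.sum(patch * ky)   # responds to horizontal edges
    return gx, gy

# toy "RGB guidance": a vertical step edge at column 4
guide = np.zeros((8, 8))
guide[:, 4:] = 1.0
gx, gy = sobel_grad(guide)
```

The horizontal response `gx` fires along the step edge while the interior vertical response stays zero, which is the kind of edge-location information a gradient prior hands to the depth branch.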
2312.05856 | Maomao Li | Maomao Li, Yu Li, Tianyu Yang, Yunfei Liu, Dongxu Yue, Zhihui Lin, and
Dong Xu | A Video is Worth 256 Bases: Spatial-Temporal Expectation-Maximization
Inversion for Zero-Shot Video Editing | 14 pages, Project page: https://stem-inv.github.io/page/ | CVPR 2024 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a video inversion approach for zero-shot video editing,
which models the input video with low-rank representation during the inversion
process. The existing video editing methods usually apply the typical 2D DDIM
inversion or naive spatial-temporal DDIM inversion before editing, which
leverages time-varying representation for each frame to derive noisy latent.
Unlike most existing approaches, we propose a Spatial-Temporal
Expectation-Maximization (STEM) inversion, which formulates the dense video
feature in an expectation-maximization manner and iteratively estimates a
more compact basis set to represent the whole video. Each frame applies the
same fixed, global representation for inversion, which better preserves
temporal consistency during reconstruction and editing. Extensive qualitative
and quantitative experiments demonstrate that our STEM inversion can achieve
consistent improvement on two state-of-the-art video editing methods. Project
page: https://stem-inv.github.io/page/.
| [
{
"created": "Sun, 10 Dec 2023 11:20:18 GMT",
"version": "v1"
},
{
"created": "Thu, 23 May 2024 14:15:47 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Jun 2024 08:47:29 GMT",
"version": "v3"
}
] | 2024-06-19 | [
[
"Li",
"Maomao",
""
],
[
"Li",
"Yu",
""
],
[
"Yang",
"Tianyu",
""
],
[
"Liu",
"Yunfei",
""
],
[
"Yue",
"Dongxu",
""
],
[
"Lin",
"Zhihui",
""
],
[
"Xu",
"Dong",
""
]
] |
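The STEM abstract above estimates a compact basis set for the whole video by an expectation-maximization alternation. A toy NumPy sketch of that alternation on random stand-in features (the softmax E-step, the number of bases, and all names are assumptions; the paper applies this to dense video features inside a diffusion inversion loop):

```python
import numpy as np

rng = np.random.default_rng(0)

def em_bases(feats, k=4, iters=10):
    """EM-style alternation: soft-assign features to k bases, then update
    each basis as the responsibility-weighted average of the features."""
    n, d = feats.shape
    bases = rng.normal(size=(k, d))
    for _ in range(iters):
        # E-step: soft assignment of every feature to every basis
        logits = feats @ bases.T                      # (n, k)
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        resp = np.exp(logits)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: bases become responsibility-weighted feature averages
        bases = (resp.T @ feats) / (resp.sum(axis=0)[:, None] + 1e-8)
    return bases, resp

feats = rng.normal(size=(32, 8))      # stand-in for dense video features
bases, resp = em_bases(feats, k=4)
recon = resp @ bases                  # low-rank reconstruction of the features
```

Because every frame is reconstructed from the same small, global basis set, the representation is shared across time, which is the property the abstract credits for temporal consistency.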
2312.05905 | Francisco Nurudin Alvarez Gonzalez | Nurudin Alvarez-Gonzalez, Andreas Kaltenbrunner, Vicen\c{c} G\'omez | Improving Subgraph-GNNs via Edge-Level Ego-Network Encodings | TMLR, graph neural networks, weisfeiler-lehman, expressivity,
higher-order GNNs, 3-WL, 1-WL, edge-level, ego-networks | Nurudin Alvarez-Gonzalez, Andreas Kaltenbrunner, Vicen\c{c} Gomez.
Improving Subgraph-GNNs via Edge-Level Ego-Network Encodings. In Transactions
on Machine Learning Research, 2024 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present a novel edge-level ego-network encoding for learning on graphs
that can boost Message Passing Graph Neural Networks (MP-GNNs) by providing
additional node and edge features or extending message-passing formats. The
proposed encoding is sufficient to distinguish Strongly Regular Graphs, a
family of challenging 3-WL equivalent graphs. We show theoretically that such
encoding is more expressive than node-based sub-graph MP-GNNs. In an empirical
evaluation on four benchmarks with 10 graph datasets, our results match or
improve previous baselines on expressivity, graph classification, graph
regression, and proximity tasks -- while reducing memory usage by 18.1x in
certain real-world settings.
| [
{
"created": "Sun, 10 Dec 2023 15:05:23 GMT",
"version": "v1"
},
{
"created": "Thu, 2 May 2024 12:18:43 GMT",
"version": "v2"
}
] | 2024-05-03 | [
[
"Alvarez-Gonzalez",
"Nurudin",
""
],
[
"Kaltenbrunner",
"Andreas",
""
],
[
"Gómez",
"Vicenç",
""
]
] |
2312.05933 | Shahriar Noroozizadeh | Shahriar Noroozizadeh, Jeremy C. Weiss, George H. Chen | Temporal Supervised Contrastive Learning for Modeling Patient Risk
Progression | Machine Learning for Health (ML4H 2023) | In Machine Learning for Health (ML4H), pages 403-427. PMLR, 2023 | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of predicting how the likelihood of an outcome of
interest for a patient changes over time as we observe more of the patient
data. To solve this problem, we propose a supervised contrastive learning
framework that learns an embedding representation for each time step of a
patient time series. Our framework learns the embedding space to have the
following properties: (1) nearby points in the embedding space have similar
predicted class probabilities, (2) adjacent time steps of the same time series
map to nearby points in the embedding space, and (3) time steps with very
different raw feature vectors map to far apart regions of the embedding space.
To achieve property (3), we employ a nearest neighbor pairing mechanism in the
raw feature space. This mechanism also serves as an alternative to data
augmentation, a key ingredient of contrastive learning which, to our
knowledge, lacks a standard procedure that is adequately realistic for
clinical tabular data. We demonstrate that our approach outperforms state-of-the-art
baselines in predicting mortality of septic patients (MIMIC-III dataset) and
tracking progression of cognitive impairment (ADNI dataset). Our method also
consistently recovers the correct synthetic dataset embedding structure across
experiments, a feat not achieved by baselines. Our ablation experiments show
the pivotal role of our nearest neighbor pairing.
| [
{
"created": "Sun, 10 Dec 2023 16:43:15 GMT",
"version": "v1"
}
] | 2024-04-16 | [
[
"Noroozizadeh",
"Shahriar",
""
],
[
"Weiss",
"Jeremy C.",
""
],
[
"Chen",
"George H.",
""
]
] |
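The abstract above replaces data augmentation with a nearest-neighbor pairing mechanism in raw feature space. A minimal sketch of that pairing step (brute-force Euclidean distances on toy vectors; the distance metric and names are assumptions, and the paper's pipeline additionally constrains pairs by label and time step):

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_pairs(x):
    """Pair each sample with its nearest neighbour in raw feature space
    (used here in place of data augmentation)."""
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)   # pairwise distances
    np.fill_diagonal(d2, np.inf)                          # no self-pairing
    return d2.argmin(axis=1)

x = rng.normal(size=(10, 5))   # raw feature vectors, e.g. one per time step
pairs = nn_pairs(x)            # pairs[i] is the index of i's nearest neighbour
```

Each `(i, pairs[i])` pair then plays the role a sample and its augmented view would play in a standard contrastive loss.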
2312.06117 | Jiaming Liu | Jiaming Liu, Yue Wu, Maoguo Gong, Qiguang Miao, Wenping Ma, Can Qin | M3SOT: Multi-frame, Multi-field, Multi-space 3D Single Object Tracking | 12 pages, 10 figures, 10 tables, AAAI 2024 | AAAI 2024 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | 3D Single Object Tracking (SOT) stands as a forefront task of computer vision,
proving essential for applications like autonomous driving. Sparse and occluded
data in scene point clouds introduce variations in the appearance of tracked
objects, adding complexity to the task. In this research, we unveil M3SOT, a
novel 3D SOT framework, which synergizes multiple input frames (template sets),
multiple receptive fields (continuous contexts), and multiple solution spaces
(distinct tasks) in ONE model. Remarkably, M3SOT pioneers in modeling
temporality, contexts, and tasks directly from point clouds, revisiting a
perspective on the key factors influencing SOT. To this end, we design a
transformer-based network centered on point cloud targets in the search area,
aggregating diverse contextual representations and propagating target cues by
employing historical frames. As M3SOT spans varied processing perspectives,
we have streamlined the network, trimming its depth and optimizing its
structure, to ensure a lightweight and efficient deployment for SOT
applications. We posit that, backed by practical construction, M3SOT sidesteps
the need for complex frameworks and auxiliary components to deliver sterling
results. Extensive experiments on benchmarks such as KITTI, nuScenes, and Waymo
Open Dataset demonstrate that M3SOT achieves state-of-the-art performance at 38
FPS. Our code and models are available at
https://github.com/ywu0912/TeamCode.git.
| [
{
"created": "Mon, 11 Dec 2023 04:49:47 GMT",
"version": "v1"
}
] | 2023-12-12 | [
[
"Liu",
"Jiaming",
""
],
[
"Wu",
"Yue",
""
],
[
"Gong",
"Maoguo",
""
],
[
"Miao",
"Qiguang",
""
],
[
"Ma",
"Wenping",
""
],
[
"Qin",
"Can",
""
]
] |
2312.06169 | Liu Yifan | Yifan Liu, Tiecheng Song, Chengye Xian, Ruiyuan Chen, Yi Zhao, Rui Li
and Tan Guo | Two-Stage Adaptive Network for Semi-Supervised Cross-Domain Crater
Detection under Varying Scenario Distributions | null | Liu, Y.; Song, T.; Xian, C.; Chen, R.; Zhao, Y.; Li, R.; Guo, T.
Two-Stage Adaptive Network for Semi-Supervised Cross-Domain Crater Detection
under Varying Scenario Distributions. Remote Sens. 2024, 16, 2024 | 10.3390/rs16112024 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Crater detection can provide valuable information for humans to explore the
topography and understand the history of extraterrestrial planets. Due to the
significantly varying scenario distributions, existing detection models trained
on known labelled crater datasets are hardly effective when applied to new
unlabelled planets. To address this issue, we propose a two-stage adaptive
network (TAN) for semi-supervised cross-domain crater detection. Our network is
built on the YOLOv5 detector, where a series of strategies are employed to
enhance its cross-domain generalisation ability. In the first stage, we propose
an attention-based scale-adaptive fusion (ASAF) strategy to handle objects with
significant scale variances. Furthermore, we propose a smoothing hard example
mining (SHEM) loss function to address the issue of overfitting on hard
examples. In the second stage, we propose a sort-based pseudo-labelling
fine-tuning (SPF) strategy for semi-supervised learning to mitigate the
distributional differences between source and target domains. For both stages,
we employ weak or strong image augmentation to suit different cross-domain
tasks. Experimental results on benchmark datasets demonstrate that the proposed
network can enhance domain adaptation ability for crater detection under
varying scenario distributions.
| [
{
"created": "Mon, 11 Dec 2023 07:16:49 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Jun 2024 02:13:38 GMT",
"version": "v2"
}
] | 2024-06-12 | [
[
"Liu",
"Yifan",
""
],
[
"Song",
"Tiecheng",
""
],
[
"Xian",
"Chengye",
""
],
[
"Chen",
"Ruiyuan",
""
],
[
"Zhao",
"Yi",
""
],
[
"Li",
"Rui",
""
],
[
"Guo",
"Tan",
""
]
] |
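The crater-detection abstract above uses a sort-based pseudo-labelling strategy in its second stage. As a hedged illustration, the selection rule can be sketched as keeping only the most confident target-domain detections, chosen by sorting scores (the keep fraction and function name are assumptions, not values from the paper):

```python
import numpy as np

def sort_based_pseudo_labels(scores, keep_frac=0.5):
    """Keep only the most confident target-domain detections as pseudo-labels,
    selected by sorting the detector's confidence scores."""
    order = np.argsort(scores)[::-1]           # most confident first
    k = max(1, int(len(scores) * keep_frac))   # how many detections to keep
    return np.sort(order[:k])                  # indices of kept detections

scores = np.array([0.9, 0.2, 0.75, 0.4, 0.85, 0.1])
kept = sort_based_pseudo_labels(scores, keep_frac=0.5)  # -> indices 0, 2, 4
```

The kept detections would then serve as labels for fine-tuning the detector on the unlabelled target planet.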
2312.06219 | Amina Ghoul | Amina Ghoul, Itheri Yahiaoui (URCA), Fawzi Nashashibi | Interpretable Long Term Waypoint-Based Trajectory Prediction Model | arXiv admin note: text overlap with arXiv:2308.04312 | ITSC, Sep 2023, Bilbao, Spain | null | null | cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting the future trajectories of dynamic agents in complex environments
is crucial for a variety of applications, including autonomous driving,
robotics, and human-computer interaction. It is a challenging task as the
behavior of the agent is unknown and intrinsically multimodal. Our key insight
is that an agent's behavior is influenced not only by its past trajectory
and its interaction with the immediate environment but also, to a large
extent, by its long-term waypoint (LTW). In this paper, we study the impact of
adding a long-term goal on the performance of a trajectory prediction
framework. We present an interpretable long-term waypoint-driven prediction
framework (WayDCM). WayDCM first predicts an agent's intermediate goal (IG) by
encoding its interactions with the environment as well as its LTW using a
combination of a Discrete Choice Model (DCM) and a Neural Network model (NN).
Then, our model predicts the corresponding trajectories. This is in contrast
to previous work, which does not consider the ultimate intent of the agent when
predicting its
trajectory. We evaluate and show the effectiveness of our approach on the Waymo
Open dataset.
| [
{
"created": "Mon, 11 Dec 2023 09:10:22 GMT",
"version": "v1"
}
] | 2023-12-12 | [
[
"Ghoul",
"Amina",
"",
"URCA"
],
[
"Yahiaoui",
"Itheri",
"",
"URCA"
],
[
"Nashashibi",
"Fawzi",
""
]
] |
2312.06458 | Ming Kang | Ming Kang, Chee-Ming Ting, Fung Fung Ting, Rapha\"el C.-W. Phan | ASF-YOLO: A Novel YOLO Model with Attentional Scale Sequence Fusion for
Cell Instance Segmentation | null | Image Vis. Comput. 147 (2024) 105057 | 10.1016/j.imavis.2024.105057 | null | cs.CV eess.SP stat.AP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We propose a novel Attentional Scale Sequence Fusion based You Only Look Once
(YOLO) framework (ASF-YOLO) which combines spatial and scale features for
accurate and fast cell instance segmentation. Built on the YOLO segmentation
framework, we employ the Scale Sequence Feature Fusion (SSFF) module to enhance
the multi-scale information extraction capability of the network, and the
Triple Feature Encoder (TFE) module to fuse feature maps of different scales to
increase detailed information. We further introduce a Channel and Position
Attention Mechanism (CPAM) to integrate both the SSFF and TFE modules, which
focus on informative channels and spatial position-related small objects for
improved detection and segmentation performance. Experimental validations on
two cell datasets show remarkable segmentation accuracy and speed of the
proposed ASF-YOLO model. It achieves a box mAP of 0.91, mask mAP of 0.887, and
an inference speed of 47.3 FPS on the 2018 Data Science Bowl dataset,
outperforming the state-of-the-art methods. The source code is available at
https://github.com/mkang315/ASF-YOLO.
| [
{
"created": "Mon, 11 Dec 2023 15:47:12 GMT",
"version": "v1"
},
{
"created": "Fri, 10 May 2024 04:25:48 GMT",
"version": "v2"
}
] | 2024-05-13 | [
[
"Kang",
"Ming",
""
],
[
"Ting",
"Chee-Ming",
""
],
[
"Ting",
"Fung Fung",
""
],
[
"Phan",
"Raphaël C. -W.",
""
]
] |
2312.06486 | Xi Ye | Xi Ye, Guillaume-Alexandre Bilodeau | STDiff: Spatio-temporal Diffusion for Continuous Stochastic Video
Prediction | null | AAAI2024 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting future frames of a video is challenging because it is difficult to
learn the uncertainty of the underlying factors influencing their contents. In
this paper, we propose a novel video prediction model, which has
infinite-dimensional latent variables over the spatio-temporal domain.
Specifically, we first decompose the video motion and content information, then
take a neural stochastic differential equation to predict the temporal motion
information, and finally, an image diffusion model autoregressively generates
the video frame by conditioning on the predicted motion feature and the
previous frame. The better expressiveness and stronger stochasticity learning
capability of our model lead to state-of-the-art video prediction performances.
Moreover, our model achieves temporally continuous prediction, i.e., it can
predict, in an unsupervised way, future video frames at an arbitrarily
high frame rate. Our code is available at
\url{https://github.com/XiYe20/STDiffProject}.
| [
{
"created": "Mon, 11 Dec 2023 16:12:43 GMT",
"version": "v1"
}
] | 2024-02-20 | [
[
"Ye",
"Xi",
""
],
[
"Bilodeau",
"Guillaume-Alexandre",
""
]
] |
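The STDiff abstract above predicts the temporal motion latent with a neural stochastic differential equation. A minimal Euler-Maruyama integration sketch of such an SDE (the toy drift and diffusion functions are assumptions; in the paper both are neural networks, and the image diffusion model that consumes the motion feature is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(z0, drift, diffusion, t1, steps, rng):
    """Integrate dz = f(z) dt + g(z) dW with the Euler-Maruyama scheme,
    a stand-in for the neural SDE that evolves the motion latent."""
    dt = t1 / steps
    z = z0.copy()
    for _ in range(steps):
        dw = rng.normal(scale=np.sqrt(dt), size=z.shape)  # Brownian increment
        z = z + drift(z) * dt + diffusion(z) * dw
    return z

drift = lambda z: -z                        # toy drift pulls the latent to 0
diffusion = lambda z: 0.1 * np.ones_like(z) # toy constant noise scale
z0 = np.ones(4)                             # initial motion latent
z1 = euler_maruyama(z0, drift, diffusion, t1=1.0, steps=100, rng=rng)
```

Because `t1` can be any positive value, the latent can be queried at arbitrary times, which is what enables prediction at an arbitrarily high frame rate.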
2312.06534 | Rebeca D\'iaz-Redondo | Mohamed Soliman Halawa and Rebeca P. D\'iaz-Redondo and Ana
Fern\'andez-Vilas | KPIs-Based Clustering and Visualization of HPC jobs: a Feature Reduction
Approach | 23 pages, 11 figures | IEEE Access, 2021, vol. 9, p. 25522-25543 | 10.1109/ACCESS.2021.3057427 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-Performance Computing (HPC) systems need to be constantly monitored to
ensure their stability. The monitoring systems collect a tremendous amount of
data about different parameters or Key Performance Indicators (KPIs), such as
resource usage, IO waiting time, etc. A proper analysis of this data, usually
stored as time series, can provide insight in choosing the right management
strategies as well as the early detection of issues. In this paper, we
introduce a methodology to cluster HPC jobs according to their KPI indicators.
Our approach reduces the inherent high dimensionality of the collected data by
applying two techniques to the time series: literature-based and variance-based
feature extraction. We also define a procedure to visualize the obtained
clusters by combining the two previous approaches and the Principal Component
Analysis (PCA). Finally, we validated our contributions on a real data set,
concluding that the KPIs related to CPU usage provide the best cohesion and
separation for clustering analysis, and confirming the good results of our
visualization methodology.
| [
{
"created": "Mon, 11 Dec 2023 17:13:54 GMT",
"version": "v1"
}
] | 2023-12-12 | [
[
"Halawa",
"Mohamed Soliman",
""
],
[
"Díaz-Redondo",
"Rebeca P.",
""
],
[
"Fernández-Vilas",
"Ana",
""
]
] |
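The KPI-clustering abstract above combines variance-based feature reduction of the time series with PCA for visualization. A minimal sketch of those two steps on random stand-in data (the "keep the highest-variance time points" rule and the SVD-based PCA are one plausible reading; the exact reduction procedures are defined in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def variance_features(series, k=3):
    """Variance-based reduction: keep the k time points whose values vary
    most across jobs."""
    idx = np.argsort(series.var(axis=0))[::-1][:k]
    return series[:, np.sort(idx)]

def pca(x, n=2):
    """Project centred data onto its first n principal components (via SVD)."""
    xc = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    return xc @ vt[:n].T

series = rng.normal(size=(20, 50))     # 20 jobs x 50-step KPI time series
reduced = variance_features(series, k=5)
coords = pca(reduced, n=2)             # 2-D coordinates for visualization
```

The resulting 2-D coordinates are what one would scatter-plot, coloured by cluster, to inspect cohesion and separation.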
2312.06546 | Rebeca D\'iaz-Redondo | Mohamed S. Halawa and Rebeca P. D\'iaz-Redondo and Ana
Fern\'andez-Vilas | Unsupervised KPIs-Based Clustering of Jobs in HPC Data Centers | 22 pages, 6 figures, journal | Sensors, 2020, vol. 20, no 15, p. 4111 | 10.3390/s20154111 | null | cs.DC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Performance analysis is an essential task in High-Performance Computing (HPC)
systems and it is applied for different purposes such as anomaly detection,
optimal resource allocation, and budget planning. HPC monitoring tasks generate
a huge number of Key Performance Indicators (KPIs) to supervise the status of
the jobs running in these systems. KPIs give data about CPU usage, memory
usage, network (interface) traffic, or other sensors that monitor the hardware.
Analyzing this data, it is possible to obtain insightful information about
running jobs, such as their characteristics, performance, and failures. The
main contribution of this paper is to identify which metrics (KPIs) are the
most appropriate for classifying different types of jobs according to
their behavior in the HPC system. With this aim, we have applied different
clustering techniques (partition and hierarchical clustering algorithms) using
a real dataset from the Galician Computation Center (CESGA). We have concluded
that (i) those metrics (KPIs) related to the Network (interface) traffic
monitoring provide the best cohesion and separation to cluster HPC jobs, and
(ii) hierarchical clustering algorithms are the most suitable for this task.
Our approach was validated using a different real dataset from the same HPC
center.
| [
{
"created": "Mon, 11 Dec 2023 17:31:46 GMT",
"version": "v1"
}
] | 2023-12-12 | [
[
"Halawa",
"Mohamed S.",
""
],
[
"Díaz-Redondo",
"Rebeca P.",
""
],
[
"Fernández-Vilas",
"Ana",
""
]
] |
2312.06579 | Mauricio Resende | Samyukta Sethuraman, Ankur Bansal, Setareh Mardan, Mauricio G.C.
Resende, Timothy L. Jacobs | Amazon Locker Capacity Management | 22 pages, 10 figures | INFORMS J. on Applied Analytics, Published Online: 29 Mar 2024 | 10.1287/inte.2023.0005 | null | math.OC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Amazon Locker is a self-service delivery or pickup location where customers
can pick up packages and drop off returns. A basic first-come-first-served
policy for accepting package delivery requests to lockers results in lockers
becoming full with standard shipping speed (3-5 day shipping) packages, and
leaving no space left for expedited packages which are mostly Next-Day or
Two-Day shipping. This paper proposes a solution to the problem of determining
how much locker capacity to reserve for different ship-option packages. Yield
management is a much researched field with popular applications in the airline,
car rental, and hotel industries. However, Amazon Locker poses a unique
challenge in this field since the number of days a package will wait in a
locker (package dwell time) is, in general, unknown. The proposed solution
combines machine learning techniques to predict locker demand and package dwell
time, and linear programming to maximize throughput in lockers. The decision
variables from this optimization provide optimal capacity reservation values
for different ship options. This resulted in a year-over-year increase of 9% in
Locker throughput worldwide during holiday season of 2018, impacting millions
of customers.
| [
{
"created": "Mon, 11 Dec 2023 18:10:08 GMT",
"version": "v1"
}
] | 2024-05-31 | [
[
"Sethuraman",
"Samyukta",
""
],
[
"Bansal",
"Ankur",
""
],
[
"Mardan",
"Setareh",
""
],
[
"Resende",
"Mauricio G. C.",
""
],
[
"Jacobs",
"Timothy L.",
""
]
] |
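The locker abstract above maximizes throughput subject to capacity reserved per ship option. For a single locker with one shared capacity constraint and per-option demand caps, that linear program reduces to a fractional knapsack, which the sketch below solves greedily; the numbers and names are invented for illustration, and the paper's formulation spans many lockers and uses ML-predicted demand and dwell times.

```python
import numpy as np

def reserve_capacity(values, demand, capacity):
    """Maximize sum(values * r) s.t. sum(r) <= capacity, 0 <= r <= demand.
    With a single shared constraint, filling the highest-value ship option
    first is optimal (fractional knapsack)."""
    r = np.zeros_like(demand, dtype=float)
    left = float(capacity)
    for i in np.argsort(values)[::-1]:   # highest value per slot first
        r[i] = min(demand[i], left)
        left -= r[i]
    return r

# toy numbers: [next-day, two-day, standard]
values = np.array([3.0, 2.0, 1.0])    # throughput value per reserved slot
demand = np.array([10.0, 15.0, 40.0]) # predicted demand (from ML models)
reservation = reserve_capacity(values, demand, capacity=30)
```

With 30 slots, the expedited options are fully served (10 and 15 slots) and only the remaining 5 slots go to standard shipping, which is exactly the behaviour the first-come-first-served policy fails to produce.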
2312.06642 | Yixing Lao | Yixing Lao, Xiaogang Xu, Zhipeng Cai, Xihui Liu, Hengshuang Zhao | CorresNeRF: Image Correspondence Priors for Neural Radiance Fields | null | NeurIPS 2023 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Neural Radiance Fields (NeRFs) have achieved impressive results in novel view
synthesis and surface reconstruction tasks. However, their performance suffers
under challenging scenarios with sparse input views. We present CorresNeRF, a
novel method that leverages image correspondence priors computed by
off-the-shelf methods to supervise NeRF training. We design adaptive processes
for augmentation and filtering to generate dense and high-quality
correspondences. The correspondences are then used to regularize NeRF training
via the correspondence pixel reprojection and depth loss terms. We evaluate our
methods on novel view synthesis and surface reconstruction tasks with
density-based and SDF-based NeRF models on different datasets. Our method
outperforms previous methods in both photometric and geometric metrics. We show
that this simple yet effective technique of using correspondence priors can be
applied as a plug-and-play module across different NeRF variants. The project
page is at https://yxlao.github.io/corres-nerf.
| [
{
"created": "Mon, 11 Dec 2023 18:55:29 GMT",
"version": "v1"
}
] | 2023-12-12 | [
[
"Lao",
"Yixing",
""
],
[
"Xu",
"Xiaogang",
""
],
[
"Cai",
"Zhipeng",
""
],
[
"Liu",
"Xihui",
""
],
[
"Zhao",
"Hengshuang",
""
]
] |
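The CorresNeRF abstract above supervises NeRF training with a correspondence pixel reprojection term. A minimal pinhole-camera sketch of that term (the depth loss term is analogous and omitted; the intrinsics, poses, and names below are invented for illustration):

```python
import numpy as np

def reprojection_loss(p1, p2, depth1, K, R, t):
    """Lift pixels p1 with their rendered depths into 3-D, project them into
    the second view, and penalize the pixel distance to the matched p2."""
    ones = np.ones((p1.shape[0], 1))
    rays = (np.linalg.inv(K) @ np.hstack([p1, ones]).T).T  # unprojected rays
    pts = rays * depth1[:, None]                           # 3-D points, view 1
    cam2 = (R @ pts.T).T + t                               # into view 2 frame
    proj = (K @ cam2.T).T
    proj = proj[:, :2] / proj[:, 2:3]                      # perspective divide
    return np.mean(np.linalg.norm(proj - p2, axis=1))

K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])  # toy intrinsics
R, t = np.eye(3), np.zeros(3)                              # identical views
p1 = np.array([[10.0, 20.0], [40.0, 50.0]])                # matched pixels
loss = reprojection_loss(p1, p1, np.array([2.0, 3.0]), K, R, t)
```

With identical views the loss is zero by construction; during training the gradient of this term with respect to the rendered depths is what nudges the NeRF geometry toward the correspondences.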
2312.06667 | Toni Mancini | Marco Esposito and Toni Mancini and Enrico Tronci | Optimizing Fault-Tolerant Quality-Guaranteed Sensor Deployments for UAV
Localization in Critical Areas via Computational Geometry | null | IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2023 | 10.1109/TSMC.2023.3327432 | null | cs.RO cs.AI cs.CG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The increasing spread of small commercial Unmanned Aerial Vehicles (UAVs,
aka drones) presents serious threats for critical areas such as airports, power
plants, governmental and military facilities. In fact, such UAVs can easily
disturb or jam radio communications, collide with other flying objects, perform
espionage activity, and carry offensive payloads, e.g., weapons or explosives.
A central problem when designing surveillance solutions for the localization of
unauthorized UAVs in critical areas is to decide how many triangulating sensors
to use, and where to deploy them to optimise both coverage and cost
effectiveness.
In this article, we compute deployments of triangulating sensors for UAV
localization, optimizing a given blend of metrics, namely: coverage under
multiple sensing quality levels, cost-effectiveness, fault-tolerance. We focus
on large, complex 3D regions, which exhibit obstacles (e.g., buildings),
varying terrain elevation, different coverage priorities, constraints on
possible sensors placement. Our novel approach relies on computational geometry
and statistical model checking, and enables the effective use of off-the-shelf
AI-based black-box optimizers. Moreover, our method allows us to compute a
closed-form, analytical representation of the region uncovered by a sensor
deployment, which provides the means for rigorous, formal certification of the
quality of the latter.
We show the practical feasibility of our approach by computing optimal sensor
deployments for UAV localization in two large, complex 3D critical regions, the
Rome Leonardo Da Vinci International Airport (FCO) and the Vienna International
Center (VIC), using NOMAD as our state-of-the-art underlying optimization
engine. Results show that we can compute optimal sensor deployments within a
few hours on a standard workstation and within minutes on a small parallel
infrastructure.
| [
{
"created": "Tue, 5 Dec 2023 17:58:22 GMT",
"version": "v1"
}
] | 2023-12-13 | [
[
"Esposito",
"Marco",
""
],
[
"Mancini",
"Toni",
""
],
[
"Tronci",
"Enrico",
""
]
] |
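The sensor-deployment abstract above optimizes coverage with fault tolerance, i.e. points must remain sensed even if a sensor fails. A drastically simplified check of that property (spherical sensing ranges and the k-of-n rule are assumptions; the paper works with complex 3-D regions, obstacles, and statistical model checking rather than a range test):

```python
import numpy as np

def covered(points, sensors, radius, k=2):
    """A point counts as covered if at least k sensors can sense it; k >= 2
    tolerates one sensor failure, and triangulation itself needs several
    sensors per point."""
    d = np.linalg.norm(points[:, None, :] - sensors[None, :, :], axis=-1)
    return (d <= radius).sum(axis=1) >= k

sensors = np.array([[0.0, 0, 0], [10.0, 0, 0], [5.0, 8, 0]])  # 3-D positions
points = np.array([[5.0, 3, 1], [50.0, 50, 5]])               # region samples
ok = covered(points, sensors, radius=9.0, k=2)
```

A black-box optimizer of the kind the paper mentions would move the sensor positions to maximize the fraction of sampled points for which this predicate holds, weighted by coverage priority.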
2312.06697 | Ricardo Gonzalez | Ricardo Gonzalez, Peyman Nejat, Ashirbani Saha, Clinton J.V. Campbell,
Andrew P. Norgan, Cynthia Lokker | Performance of externally validated machine learning models based on
histopathology images for the diagnosis, classification, prognosis, or
treatment outcome prediction in female breast cancer: A systematic review | null | Journal of Pathology Informatics. 2023;15:100348 | 10.1016/j.jpi.2023.100348 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Numerous machine learning (ML) models have been developed for breast cancer
using various types of data. Successful external validation (EV) of ML models
is important evidence of their generalizability. The aim of this systematic
review was to assess the performance of externally validated ML models based on
histopathology images for diagnosis, classification, prognosis, or treatment
outcome prediction in female breast cancer. A systematic search of MEDLINE,
EMBASE, CINAHL, IEEE, MICCAI, and SPIE conferences was performed for studies
published between January 2010 and February 2022. The Prediction Model Risk of
Bias Assessment Tool (PROBAST) was employed, and the results were narratively
described. Of the 2011 non-duplicated citations, 8 journal articles and 2
conference proceedings met inclusion criteria. Three studies externally
validated ML models for diagnosis, 4 for classification, 2 for prognosis, and 1
for both classification and prognosis. Most studies used Convolutional Neural
Networks and one used logistic regression algorithms. For
diagnostic/classification models, the most common performance metrics reported
in the EV were accuracy and area under the curve, which were greater than 87%
and 90%, respectively, using pathologists' annotations as ground truth. The
hazard ratios in the EV of prognostic ML models were between 1.7 (95% CI,
1.2-2.6) and 1.8 (95% CI, 1.3-2.7) to predict distant disease-free survival;
1.91 (95% CI, 1.11-3.29) for recurrence, and between 0.09 (95% CI, 0.01-0.70)
and 0.65 (95% CI, 0.43-0.98) for overall survival, using clinical data as
ground truth. Despite EV being an important step before the clinical
application of an ML model, it has not been performed routinely. The large
variability in the training/validation datasets, methods, performance metrics,
and reported information limited the comparison of the models and the analysis
of their results (...)
| [
{
"created": "Sat, 9 Dec 2023 18:27:56 GMT",
"version": "v1"
}
] | 2023-12-13 | [
[
"Gonzalez",
"Ricardo",
""
],
[
"Nejat",
"Peyman",
""
],
[
"Saha",
"Ashirbani",
""
],
[
"Campbell",
"Clinton J. V.",
""
],
[
"Norgan",
"Andrew P.",
""
],
[
"Lokker",
"Cynthia",
""
]
] |
2312.06705 | Sakshi Ranjan | Sakshi Ranjan, Subhankar Mishra | Perceiving University Student's Opinions from Google App Reviews | Accepted in Concurrency and Computation Practice and Experience | Concurrency and Computation: Practice and Experience, 34(10),
p.e6800 (2022) | 10.1002/cpe.6800 | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Google app market captures the school of thought of users from every corner
of the globe via ratings and text reviews, in a multilinguistic arena. The
potential information from the reviews cannot be extracted manually, due to its
exponential growth. So, sentiment analysis, by machine learning and deep
learning algorithms employing NLP, explicitly uncovers and interprets the
emotions. This study performs the sentiment classification of the app reviews
and identifies the university student's behavior towards the app market via
exploratory analysis. We applied machine learning algorithms using the TP, TF,
and TF IDF text representation scheme and evaluated its performance on Bagging,
an ensemble learning method. We used the GloVe word embedding for the deep
learning paradigms. Our model was trained on Google app reviews and tested on
Student's App Reviews(SAR). The various combinations of these algorithms were
compared amongst each other using F score and accuracy and inferences were
highlighted graphically. SVM, amongst other classifiers, gave a fruitful
accuracy (93.41%) and F score (89%) on the bigram and TF IDF scheme. Bagging
enhanced the performance of LR and NB, with accuracies of 87.88% and 86.69% and
F scores of 86% and 78%, respectively. Overall, LSTM on GloVe embedding
recorded the highest accuracy (95.2%) and F score (88%).
| [
{
"created": "Sun, 10 Dec 2023 12:34:30 GMT",
"version": "v1"
}
] | 2023-12-13 | [
[
"Ranjan",
"Sakshi",
""
],
[
"Mishra",
"Subhankar",
""
]
] |
2312.06709 | Mike Ranzinger | Mike Ranzinger, Greg Heinrich, Jan Kautz, Pavlo Molchanov | AM-RADIO: Agglomerative Vision Foundation Model -- Reduce All Domains
Into One | CVPR 2024 Version 3: CVPR Camera Ready, reconfigured full paper,
table 1 is now more comprehensive Version 2: Added more acknowledgements and
updated table 7 with more recent results. Ensured that the link in the
abstract to our code is working properly Version 3: Fix broken hyperlinks | Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR), 2024, pp. 12490-12500 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | A handful of visual foundation models (VFMs) have recently emerged as the
backbones for numerous downstream tasks. VFMs like CLIP, DINOv2, SAM are
trained with distinct objectives, exhibiting unique characteristics for various
downstream tasks. We find that despite their conceptual differences, these
models can be effectively merged into a unified model through multi-teacher
distillation. We name this approach AM-RADIO (Agglomerative Model -- Reduce All
Domains Into One). This integrative approach not only surpasses the performance
of individual teacher models but also amalgamates their distinctive features,
such as zero-shot vision-language comprehension, detailed pixel-level
understanding, and open vocabulary segmentation capabilities. In pursuit of the
most hardware-efficient backbone, we evaluated numerous architectures in our
multi-teacher distillation pipeline using the same training recipe. This led to
the development of a novel architecture (E-RADIO) that exceeds the performance
of its predecessors and is at least 7x faster than the teacher models. Our
comprehensive benchmarking process covers downstream tasks including ImageNet
classification, ADE20k semantic segmentation, COCO object detection and
LLaVa-1.5 framework.
Code: https://github.com/NVlabs/RADIO
| [
{
"created": "Sun, 10 Dec 2023 17:07:29 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Dec 2023 13:35:49 GMT",
"version": "v2"
},
{
"created": "Mon, 25 Dec 2023 13:41:07 GMT",
"version": "v3"
},
{
"created": "Sun, 14 Apr 2024 13:35:14 GMT",
"version": "v4"
},
{
"created": "Tue, 30 Apr 2024 22:22:03 GMT",
"version": "v5"
}
] | 2024-08-01 | [
[
"Ranzinger",
"Mike",
""
],
[
"Heinrich",
"Greg",
""
],
[
"Kautz",
"Jan",
""
],
[
"Molchanov",
"Pavlo",
""
]
] |
2312.06786 | Ronghao Ni | Ronghao Ni, Zinan Lin, Shuaiqi Wang, Giulia Fanti | Mixture-of-Linear-Experts for Long-term Time Series Forecasting | null | Proceedings of The 27th International Conference on Artificial
Intelligence and Statistics, PMLR 238:4672-4680, 2024 | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Long-term time series forecasting (LTSF) aims to predict future values of a
time series given the past values. The current state-of-the-art (SOTA) on this
problem is attained in some cases by linear-centric models, which primarily
feature a linear mapping layer. However, due to their inherent simplicity, they
are not able to adapt their prediction rules to periodic changes in time series
patterns. To address this challenge, we propose a Mixture-of-Experts-style
augmentation for linear-centric models and propose Mixture-of-Linear-Experts
(MoLE). Instead of training a single model, MoLE trains multiple linear-centric
models (i.e., experts) and a router model that weighs and mixes their outputs.
While the entire framework is trained end-to-end, each expert learns to
specialize in a specific temporal pattern, and the router model learns to
compose the experts adaptively. Experiments show that MoLE reduces forecasting
error of linear-centric models, including DLinear, RLinear, and RMLP, in over
78% of the datasets and settings we evaluated. By using MoLE, existing
linear-centric models can achieve SOTA LTSF results in 68% of the experiments
that PatchTST reports and we compare to, whereas existing single-head
linear-centric models achieve SOTA results in only 25% of cases.
| [
{
"created": "Mon, 11 Dec 2023 19:05:02 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Jan 2024 04:06:22 GMT",
"version": "v2"
},
{
"created": "Wed, 1 May 2024 22:23:58 GMT",
"version": "v3"
}
] | 2024-05-03 | [
[
"Ni",
"Ronghao",
""
],
[
"Lin",
"Zinan",
""
],
[
"Wang",
"Shuaiqi",
""
],
[
"Fanti",
"Giulia",
""
]
] |
2312.06874 | Yifan Zhang | Yifan Zhang, Rui Wu, Sergiu M. Dascalu, Frederick C. Harris Jr | Sparse Transformer with Local and Seasonal Adaptation for Multivariate
Time Series Forecasting | null | Sci Rep 14, 15909 (2024) | 10.1038/s41598-024-66886-1 | null | cs.LG cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Transformers have achieved remarkable performance in multivariate time
series (MTS) forecasting due to their capability to capture long-term
dependencies. However, the canonical attention mechanism has two key
limitations: (1) its quadratic time complexity limits the sequence length, and
(2) it generates future values from the entire historical sequence. To address
this, we propose a Dozer Attention mechanism consisting of three sparse
components: (1) Local: each query exclusively attends to keys within a
localized window of neighboring time steps. (2) Stride: each query attends to
keys at predefined intervals. (3) Vary: queries selectively
attend to keys from a subset of the historical sequence. Notably, the size of
this subset dynamically expands as forecasting horizons extend. Those three
components are designed to capture essential attributes of MTS data, including
locality, seasonality, and global temporal dependencies. Additionally, we
present the Dozerformer Framework, incorporating the Dozer Attention mechanism
for the MTS forecasting task. We evaluated the proposed Dozerformer framework
with recent state-of-the-art methods on nine benchmark datasets and confirmed
its superior performance. The experimental results indicate that excluding a
subset of historical time steps from the time series forecasting process does
not compromise accuracy while significantly improving efficiency. Code is
available at https://github.com/GRYGY1215/Dozerformer.
| [
{
"created": "Mon, 11 Dec 2023 22:49:02 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Jul 2024 20:59:42 GMT",
"version": "v2"
}
] | 2024-07-17 | [
[
"Zhang",
"Yifan",
""
],
[
"Wu",
"Rui",
""
],
[
"Dascalu",
"Sergiu M.",
""
],
[
"Harris",
"Frederick C.",
"Jr"
]
] |
2312.07086 | Mike Perkins | Mike Perkins (1), Leon Furze (2), Jasper Roe (3), Jason MacVaugh (1)
((1) British University Vietnam, (2) Deakin University, (3) James Cook
University Singapore) | The AI Assessment Scale (AIAS): A Framework for Ethical Integration of
Generative AI in Educational Assessment | This version contains a revised title and the approved text as
published | J Univ Teach Learn Pract, 21(06), 06 | 10.53761/q3azde36 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent developments in Generative Artificial Intelligence (GenAI) have
created a paradigm shift in multiple areas of society, and the use of these
technologies is likely to become a defining feature of education in coming
decades. GenAI offers transformative pedagogical opportunities, while
simultaneously posing ethical and academic challenges. Against this backdrop,
we outline a practical, simple, and sufficiently comprehensive tool to allow
for the integration of GenAI tools into educational assessment: the AI
Assessment Scale (AIAS).
The AIAS empowers educators to select the appropriate level of GenAI usage in
assessments based on the learning outcomes they seek to address. The AIAS
offers greater clarity and transparency for students and educators, provides a
fair and equitable policy tool for institutions to work with, and offers a
nuanced approach which embraces the opportunities of GenAI while recognising
that there are instances where such tools may not be pedagogically appropriate
or necessary.
By adopting a practical, flexible approach that can be implemented quickly,
the AIAS can form a much-needed starting point to address the current
uncertainty and anxiety regarding GenAI in education. As a secondary objective,
we engage with the current literature and advocate for a refocused discourse on
GenAI tools in education, one which foregrounds how technologies can help
support and enhance teaching and learning, which contrasts with the current
focus on GenAI as a facilitator of academic misconduct.
| [
{
"created": "Tue, 12 Dec 2023 09:08:36 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Apr 2024 03:15:00 GMT",
"version": "v2"
}
] | 2024-04-25 | [
[
"Perkins",
"Mike",
""
],
[
"Furze",
"Leon",
""
],
[
"Roe",
"Jasper",
""
],
[
"MacVaugh",
"Jason",
""
]
] |
2312.07101 | Fabrice Rossi | Madalina Olteanu (CEREMADE), Fabrice Rossi (CEREMADE), Florian Yger
(MILES, LAMSADE) | Meta-survey on outlier and anomaly detection | null | Neurocomputing, 2023, 555, pp.126634 | 10.1016/j.neucom.2023.126634 | null | cs.AI math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The impact of outliers and anomalies on model estimation and data processing
is of paramount importance, as evidenced by the extensive body of research
spanning various fields over several decades: thousands of research papers have
been published on the subject. As a consequence, numerous reviews, surveys, and
textbooks have sought to summarize the existing literature, encompassing a wide
range of methods from both the statistical and data mining communities. While
these endeavors to organize and summarize the research are invaluable, they
face inherent challenges due to the pervasive nature of outliers and anomalies
in all data-intensive applications, irrespective of the specific application
field or scientific discipline. As a result, the resulting collection of papers
remains voluminous and somewhat heterogeneous. To address the need for
knowledge organization in this domain, this paper implements the first
systematic meta-survey of general surveys and reviews on outlier and anomaly
detection. Employing a classical systematic survey approach, the study collects
nearly 500 papers using two specialized scientific search engines. From this
comprehensive collection, a subset of 56 papers that claim to be general
surveys on outlier detection is selected using a snowball search technique to
enhance field coverage. A meticulous quality assessment phase further refines
the selection to a subset of 25 high-quality general surveys. Using this
curated collection, the paper investigates the evolution of the outlier
detection field over a 20-year period, revealing emerging themes and methods.
Furthermore, an analysis of the surveys sheds light on the survey writing
practices adopted by scholars from different communities who have contributed
to this field. Finally, the paper delves into several topics where consensus
has emerged from the literature. These include taxonomies of outlier types,
challenges posed by high-dimensional data, the importance of anomaly scores,
the impact of learning conditions, difficulties in benchmarking, and the
significance of neural networks. Non-consensual aspects are also discussed,
particularly the distinction between local and global outliers and the
challenges in organizing detection methods into meaningful taxonomies.
| [
{
"created": "Tue, 12 Dec 2023 09:29:22 GMT",
"version": "v1"
}
] | 2023-12-13 | [
[
"Olteanu",
"Madalina",
"",
"CEREMADE"
],
[
"Rossi",
"Fabrice",
"",
"CEREMADE"
],
[
"Yger",
"Florian",
"",
"MILES, LAMSADE"
]
] |
2312.07214 | Max Pascher | Younes Lakhnati, Max Pascher, Jens Gerken | Exploring Large Language Models to Facilitate Variable Autonomy for
Human-Robot Teaming | Frontiers in Robotics and AI, Variable Autonomy for Human-Robot
Teaming | Front. Robot. AI 11:1347538 2024 | 10.3389/frobt.2024.1347538 | null | cs.HC cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | In a rapidly evolving digital landscape, autonomous tools and robots are
becoming commonplace. Recognizing the significance of this development, this
paper explores the integration of Large Language Models (LLMs) like Generative
pre-trained transformer (GPT) into human-robot teaming environments to
facilitate variable autonomy through the means of verbal human-robot
communication. In this paper, we introduce a novel framework for such a
GPT-powered multi-robot testbed environment, based on a Unity Virtual Reality
(VR) setting. This system allows users to interact with robot agents through
natural language, each powered by individual GPT cores. By means of OpenAI's
function calling, we bridge the gap between unstructured natural language input
and structured robot actions. A user study with 12 participants explores the
effectiveness of GPT-4 and, more importantly, user strategies when being given
the opportunity to converse in natural language within a multi-robot
environment. Our findings suggest that users may have preconceived expectations
on how to converse with robots and seldom try to explore the actual language
and cognitive capabilities of their robot collaborators. Still, those users who
did explore were able to benefit from a much more natural flow of
communication and human-like back-and-forth. We provide a set of lessons
learned for future research and technical implementations of similar systems.
| [
{
"created": "Tue, 12 Dec 2023 12:26:48 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Mar 2024 13:33:21 GMT",
"version": "v2"
},
{
"created": "Thu, 21 Mar 2024 11:12:31 GMT",
"version": "v3"
}
] | 2024-03-22 | [
[
"Lakhnati",
"Younes",
""
],
[
"Pascher",
"Max",
""
],
[
"Gerken",
"Jens",
""
]
] |
2312.07428 | Rebeca D\'iaz-Redondo | Alhassan Mabrouk and Rebeca P. D\'iaz Redondo and Mohamed Abd Elaziz
and Mohammed Kayed | Ensemble Federated Learning: an approach for collaborative pneumonia
diagnosis | 15 pages, 9 figures, journal | Applied Soft Computing, 2023, p. 110500 | 10.1016/j.asoc.2023.110500 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated learning is a very convenient approach for scenarios where (i) the
exchange of data implies privacy concerns and/or (ii) a quick reaction is
needed. In smart healthcare systems, both aspects are usually required. In this
paper, we work on the first scenario, where preserving privacy is key and,
consequently, building a unique and massive medical image data set by fusing
different data sets from different medical institutions or research centers
(computation nodes) is not an option. We propose an ensemble federated learning
(EFL) approach that is based on the following characteristics: First, each
computation node works with a different data set (but of the same type). They
work locally and apply an ensemble approach combining eight well-known CNN
models (densenet169, mobilenetv2, xception, inceptionv3, vgg16, resnet50,
densenet121, and resnet152v2) on Chest X-ray images. Second, the best two local
models are used to create a local ensemble model that is shared with a central
node. Third, the ensemble models are aggregated to obtain a global model, which
is shared with the computation nodes to continue with a new iteration. This
procedure continues until there are no changes in the best local models. We
have performed different experiments to compare our approach with centralized
ones (with or without an ensemble approach). The results conclude
that our proposal outperforms these ones in Chest X-ray images (achieving an
accuracy of 96.63\%) and offers very competitive results compared to other
proposals in the literature.
| [
{
"created": "Tue, 12 Dec 2023 16:53:18 GMT",
"version": "v1"
}
] | 2023-12-13 | [
[
"Mabrouk",
"Alhassan",
""
],
[
"Redondo",
"Rebeca P. Díaz",
""
],
[
"Elaziz",
"Mohamed Abd",
""
],
[
"Kayed",
"Mohammed",
""
]
] |
2312.07437 | Rebeca D\'iaz-Redondo | Alhassan Mabrouk and Abdelghani Dahou and Mohamed Abd Elaziz and
Rebeca P. D\'iaz Redondo and Mohammed Kayed | Medical Image Classification Using Transfer Learning and Chaos Game
Optimization on the Internet of Medical Things | 22 pages, 12 figures, journal | Computational Intelligence and Neuroscience, 2022, vol. 2022 | 10.1155/2022/9112634 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Internet of Medical Things (IoMT) has dramatically benefited medical
professionals, whose services patients and physicians can access from all regions.
Although the automatic detection and prediction of diseases such as melanoma
and leukemia is still being researched and studied in IoMT, existing approaches
are not able to achieve a high degree of efficiency. Thus, with a new approach
that provides better results, patients would access the adequate treatments
earlier and the death rate would be reduced. Therefore, this paper introduces
an IoMT proposal for medical images classification that may be used anywhere,
i.e., it is a ubiquitous approach. It was designed in two stages: first, we
employ a Transfer Learning (TL)-based method for feature extraction, which is
carried out using MobileNetV3; second, we use the Chaos Game Optimization (CGO)
for feature selection, with the aim of excluding unnecessary features and
improving the performance, which is key in IoMT. Our methodology was evaluated
using ISIC-2016, PH2, and Blood-Cell datasets. The experimental results
indicated that the proposed approach obtained an accuracy of 88.39% on
ISIC-2016, 97.52% on PH2, and 88.79% on Blood-cell. Moreover, our approach had
successful performances for the metrics employed compared to other existing
methods.
| [
{
"created": "Tue, 12 Dec 2023 17:04:26 GMT",
"version": "v1"
}
] | 2023-12-13 | [
[
"Mabrouk",
"Alhassan",
""
],
[
"Dahou",
"Abdelghani",
""
],
[
"Elaziz",
"Mohamed Abd",
""
],
[
"Redondo",
"Rebeca P. Díaz",
""
],
[
"Kayed",
"Mohammed",
""
]
] |
2312.07482 | Rebeca D\'iaz-Redondo | Manar Mohamed Hafez, Rebeca P. D\'iaz Redondo, Ana Fern\'andez-Vilas,
H\'ector Olivera Paz\'o | Classification of retail products: From probabilistic ranking to neural
networks | 17 pages, 8 figures, journal | Applied Sciences, 2021, vol. 11, no 9, p. 4117 | 10.3390/app11094117 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Food retailing is now on an accelerated path to a successful penetration into
the digital market by new ways of value creation at all stages of the consumer
decision process. One of the most important imperatives in this path is the
availability of quality data to feed all the processes in digital transformation.
But the quality of data is not so obvious if we consider the variety of
products and suppliers in the grocery market. Within this context of digital
transformation of the grocery industry, \textit{Midiadia} is a Spanish data
provider company that works on converting data from the retailers' products into
knowledge with attributes and insights from the product labels, that is,
maintaining quality data in a dynamic market with a high dispersion of
products. Currently, they manually categorize products (groceries) according to
the information extracted directly (text processing) from the product labelling
and packaging. This paper introduces a solution to automatically categorize the
constantly changing product catalogue into a 3-level food taxonomy. Our
proposal studies three different approaches: a score-based ranking method,
traditional machine learning algorithms, and deep neural networks. Thus, we
provide four different classifiers that support a more efficient and less
error-prone maintenance of groceries catalogues, the main asset of the company.
Finally, we have compared the performance of these three alternatives,
concluding that traditional machine learning algorithms perform better, but
closely followed by the score-based approach.
| [
{
"created": "Tue, 12 Dec 2023 18:11:15 GMT",
"version": "v1"
}
] | 2023-12-13 | [
[
"Hafez",
"Manar Mohamed",
""
],
[
"Redondo",
"Rebeca P. Díaz",
""
],
[
"Fernández-Vilas",
"Ana",
""
],
[
"Pazó",
"Héctor Olivera",
""
]
] |
2312.07553 | Joon Hyun Jeong | Joonhyun Jeong | Hijacking Context in Large Multi-modal Models | Technical Report. Preprint | ICLR 2024 Workshop on Reliable and Responsible Foundation Models | null | null | cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Recently, Large Multi-modal Models (LMMs) have demonstrated their ability to
understand the visual contents of images given the instructions regarding the
images. Built upon the Large Language Models (LLMs), LMMs also inherit their
abilities and characteristics such as in-context learning where a coherent
sequence of images and texts are given as the input prompt. However, we
identify a new limitation of off-the-shelf LMMs where a small fraction of
incoherent images or text descriptions mislead LMMs to only generate biased
output about the hijacked context, not the originally intended context. To
address this, we propose a pre-filtering method that removes irrelevant
contexts via GPT-4V, based on its robustness towards distribution shift within
the contexts. We further investigate whether replacing the hijacked visual and
textual contexts with the correlated ones via GPT-4V and text-to-image models
can help yield coherent responses.
| [
{
"created": "Thu, 7 Dec 2023 11:23:29 GMT",
"version": "v1"
},
{
"created": "Mon, 13 May 2024 10:42:05 GMT",
"version": "v2"
}
] | 2024-05-14 | [
[
"Jeong",
"Joonhyun",
""
]
] |
2312.07743 | Thomas Randall | Thomas Randall, Tyler Allen and Rong Ge | FULL-W2V: Fully Exploiting Data Reuse for W2V on GPU-Accelerated Systems | 12 pages, 7 figures, 7 tables, the definitive version of this work is
published in the Proceedings of the ACM International Conference on
Supercomputing 2021, available at https://doi.org/10.1145/3447818.3460373 | Proceedings of the ACM International Conference on Supercomputing
(2021) 455-466 | 10.1145/3447818.3460373 | null | cs.LG cs.CL cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Word2Vec remains one of the highly-impactful innovations in the field of
Natural Language Processing (NLP) that represents latent grammatical and
syntactical information in human text with dense vectors in a low dimension.
Word2Vec has high computational cost due to the algorithm's inherent
sequentiality, intensive memory accesses, and the large vocabularies it
represents. While prior studies have investigated technologies to explore
parallelism and improve memory system performance, they struggle to effectively
gain throughput on powerful GPUs.
We identify memory data access and latency as the primary bottleneck in prior
works on GPUs, which prevents highly optimized kernels from attaining the
architecture's peak performance. We present a novel algorithm, FULL-W2V, which
maximally exploits the opportunities for data reuse in the W2V algorithm and
leverages GPU architecture and resources to reduce access to low memory levels
and improve temporal locality. FULL-W2V is capable of reducing accesses to GPU
global memory significantly, e.g., by more than 89\%, compared to prior
state-of-the-art GPU implementations, resulting in significant performance
improvement that scales across successive hardware generations. Our prototype
implementation achieves 2.97X speedup when ported from Nvidia Pascal P100 to
Volta V100 cards, and outperforms the state-of-the-art by 5.72X on V100 cards
with the same embedding quality. In-depth analysis indicates that the reduction
of memory accesses through register and shared memory caching and
high-throughput shared memory reduction leads to a significantly improved
arithmetic intensity. FULL-W2V can potentially benefit many applications in NLP
and other domains.
| [
{
"created": "Tue, 12 Dec 2023 21:22:07 GMT",
"version": "v1"
}
] | 2023-12-14 | [
[
"Randall",
"Thomas",
""
],
[
"Allen",
"Tyler",
""
],
[
"Ge",
"Rong",
""
]
] |
2312.07860 | Yoshiro Kitamura | Yoshiro Kitamura, Yuanzhong Li, Wataru Ito, Hiroshi Ishikawa | Data-Dependent Higher-Order Clique Selection for Artery-Vein
Segmentation by Energy Minimization | null | International Journal of Computer Vision 117, 142-158(2016) | 10.1007/s11263-015-0856-3 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We propose a novel segmentation method based on energy minimization of
higher-order potentials. We introduce higher-order terms into the energy to
incorporate prior knowledge on the shape of the segments. The terms encourage
certain sets of pixels to be entirely in one segment or the other. The sets can
for instance be smooth curves in order to help delineate pulmonary vessels,
which are known to run in almost straight lines. The higher-order terms can be
converted to submodular first-order terms by adding auxiliary variables, which
can then be globally minimized using graph cuts. We also determine the weight
of these terms, or the degree of the aforementioned encouragement, in a
principled way by learning from training data with the ground truth. We
demonstrate the effectiveness of the method in a real-world application in
fully-automatic pulmonary artery-vein segmentation in CT images.
| [
{
"created": "Wed, 13 Dec 2023 02:57:30 GMT",
"version": "v1"
}
] | 2023-12-14 | [
[
"Kitamura",
"Yoshiro",
""
],
[
"Li",
"Yuanzhong",
""
],
[
"Ito",
"Wataru",
""
],
[
"Ishikawa",
"Hiroshi",
""
]
] |
2312.07885 | Mohammadhossein Amouei | Mohammadhossein Amouei, Mohsen Rezvani, Mansoor Fateh | RAT: Reinforcement-Learning-Driven and Adaptive Testing for
Vulnerability Discovery in Web Application Firewalls | null | IEEE Transactions on Dependable and Secure Computing ( Volume: 19,
Issue: 5, 01 Sept.-Oct. 2022) | 10.1109/TDSC.2021.3095417 | null | cs.CR cs.AI cs.IR | http://creativecommons.org/licenses/by/4.0/ | Due to the increasing sophistication of web attacks, Web Application
Firewalls (WAFs) have to be tested and updated regularly to resist the
relentless flow of web attacks. In practice, using a brute-force attack to
discover vulnerabilities is infeasible due to the wide variety of attack
patterns. Thus, various black-box testing techniques have been proposed in the
literature. However, these techniques suffer from low efficiency. This paper
presents Reinforcement-Learning-Driven and Adaptive Testing (RAT), an automated
black-box testing strategy to discover injection vulnerabilities in WAFs. In
particular, we focus on SQL injection and Cross-site Scripting, which have been
among the top ten vulnerabilities over the past decade. More specifically, RAT
clusters similar attack samples together. It then utilizes a reinforcement
learning technique combined with a novel adaptive search algorithm to discover
almost all bypassing attack patterns efficiently. We compare RAT with three
state-of-the-art methods considering their objectives. The experiments show
that RAT performs 33.53% and 63.16% on average better than its counterparts in
discovering the most possible bypassing payloads and reducing the number of
attempts before finding the first bypassing payload when testing
well-configured WAFs, respectively.
| [
{
"created": "Wed, 13 Dec 2023 04:07:29 GMT",
"version": "v1"
}
] | 2023-12-14 | [
[
"Amouei",
"Mohammadhossein",
""
],
[
"Rezvani",
"Mohsen",
""
],
[
"Fateh",
"Mansoor",
""
]
] |
2312.07937 | Wenqian Zhang | Wenqian Zhang, Molin Huang, Yuxuan Zhou, Juze Zhang, Jingyi Yu, Jingya
Wang, Lan Xu | BOTH2Hands: Inferring 3D Hands from Both Text Prompts and Body Dynamics | Accepted to CVPR 2024 | Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2024,
pp. 2393-2404. | 10.1109/CVPR52733.2024.00232 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The recently emerging text-to-motion advances have inspired numerous attempts
for convenient and interactive human motion generation. Yet, existing methods
are largely limited to generating body motions only without considering the
rich two-hand motions, let alone handling various conditions like body dynamics
or texts. To break the data bottleneck, we propose BOTH57M, a novel multi-modal
dataset for two-hand motion generation. Our dataset includes accurate motion
tracking for the human body and hands and provides pair-wised finger-level hand
annotations and body descriptions. We further provide a strong baseline method,
BOTH2Hands, for the novel task: generating vivid two-hand motions from both
implicit body dynamics and explicit text prompts. We first warm up two parallel
body-to-hand and text-to-hand diffusion models and then utilize the
cross-attention transformer for motion blending. Extensive experiments and
cross-validations demonstrate the effectiveness of our approach and dataset for
generating convincing two-hand motions from the hybrid body-and-textual
conditions. Our dataset and code will be disseminated to the community for
future research.
| [
{
"created": "Wed, 13 Dec 2023 07:30:19 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Dec 2023 09:39:58 GMT",
"version": "v2"
},
{
"created": "Wed, 20 Dec 2023 04:50:14 GMT",
"version": "v3"
},
{
"created": "Tue, 9 Apr 2024 07:31:25 GMT",
"version": "v4"
},
{
"created": "Wed, 10 Apr 2024 13:35:51 GMT",
"version": "v5"
}
] | 2024-09-27 | [
[
"Zhang",
"Wenqian",
""
],
[
"Huang",
"Molin",
""
],
[
"Zhou",
"Yuxuan",
""
],
[
"Zhang",
"Juze",
""
],
[
"Yu",
"Jingyi",
""
],
[
"Wang",
"Jingya",
""
],
[
"Xu",
"Lan",
""
]
] |
2312.07965 | Rebeca D\'iaz-Redondo | Alhassan Mabrouk, Rebeca P. D\'iaz Redondo, Abdelghani Dahou, Mohamed
Abd Elaziz, Mohammed Kayed | Pneumonia Detection on chest X-ray images Using Ensemble of Deep
Convolutional Neural Networks | 14 pages, 4 figures, journal | Applied Sciences, 2022, vol. 12, no 13, p. 6448 | 10.3390/app12136448 | null | eess.IV cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pneumonia is a life-threatening lung infection resulting from several
different viral infections. Identifying and treating pneumonia on chest X-ray
images can be difficult due to its similarity to other pulmonary diseases.
Thus, the existing methods for predicting pneumonia cannot attain substantial
levels of accuracy. Therefore, this paper presents a computer-aided
classification of pneumonia, coined as Ensemble Learning (EL), to simplify the
diagnosis process on chest X-ray images. Our proposal is based on Convolutional
Neural Network (CNN) models, which are pre-trained CNN models that have been
recently employed to enhance the performance of many medical tasks instead of
training CNN models from scratch. We propose to use three well-known models
(DenseNet169, MobileNetV2 and Vision Transformer) pre-trained on the
ImageNet database. Then, these models are trained on the chest X-ray data set
using fine-tuning. Finally, the results are obtained by combining the extracted
features from these three models during the experimental phase. The proposed EL
approach outperforms other existing state-of-the-art methods, and it obtains an
accuracy of 93.91% and an F1-Score of 93.88% in the testing phase.
| [
{
"created": "Wed, 13 Dec 2023 08:28:21 GMT",
"version": "v1"
}
] | 2023-12-14 | [
[
"Mabrouk",
"Alhassan",
""
],
[
"Redondo",
"Rebeca P. Díaz",
""
],
[
"Dahou",
"Abdelghani",
""
],
[
"Elaziz",
"Mohamed Abd",
""
],
[
"Kayed",
"Mohammed",
""
]
] |
2312.07966 | Nicolas Sabouret | Mathieu Schumann, Quentin Reynaud, Fran\c{c}ois Semp\'e (OASIS),
Julien Guibourdenche (RIFT, UNIGE), Jean-Baptiste Ly (CPU), Nicolas Sabouret
(CPU, CPU, CPU) | A multi-sourced data and agent-based approach for complementing Time Use
Surveys in the context of residential human activity and load curve
simulation | null | Building Simulation Conference, Sep 2023, Shangai, China | null | null | cs.AI cs.HC cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To address the major issues associated with using Time-Use Survey (TUS) for
simulating residential load curves, we present the SMACH approach, which
combines qualitative and quantitative data with agent-based simulation. Our
model consists of autonomous agents assigned with daily tasks. The agents try
to accomplish their assigned tasks to the best of their abilities. Quantitative
data are used to generate task assignments. Qualitative studies allow us to
define how agents select, based on plausible cognitive principles, the tasks to
accomplish depending on the context. Our results show a better representation
of weekdays and weekends, a more flexible association of tasks with appliances,
and an improved simulation of load curves compared to real data. Highlights
$\bullet$ Discussion about Time-Use Surveys (TUS) limits and the use of TUS in
activity and energy simulation $\bullet$ Presentation of complementary data
both qualitative and quantitative used to complement TUS data $\bullet$
Proposition of an agent-based approach that balances these limitations
| [
{
"created": "Wed, 13 Dec 2023 08:28:55 GMT",
"version": "v1"
}
] | 2023-12-14 | [
[
"Schumann",
"Mathieu",
"",
"OASIS"
],
[
"Reynaud",
"Quentin",
"",
"OASIS"
],
[
"Sempé",
"François",
"",
"OASIS"
],
[
"Guibourdenche",
"Julien",
"",
"RIFT, UNIGE"
],
[
"Ly",
"Jean-Baptiste",
"",
"CPU"
],
[
"Sabouret",
"Nicolas",
"",
"CPU, CPU, CPU"
]
] |
2312.08021 | Laurent Bou\'e | Nitin Agarwal, Ashish Kumar, Kiran R, Manish Gupta, Laurent Bou\'e | Improving search relevance of Azure Cognitive Search by Bayesian
optimization | null | Microsoft Journal of Applied Research, Volume 20, 2024 | null | null | cs.IR cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Azure Cognitive Search (ACS) has emerged as a major contender in "Search as a
Service" cloud products in recent years. However, one of the major challenges
for ACS users is to improve the relevance of the search results for their
specific use cases. In this paper, we propose a novel method to find the optimal
ACS configuration that maximizes search relevance for a specific use case
(product search, document search, etc.). The proposed solution improves key online
marketplace metrics such as click through rates (CTR) by formulating the search
relevance problem as hyperparameter tuning. We have observed significant
improvements in real-world search call to action (CTA) rate in multiple
marketplaces by introducing optimized weights generated from the proposed
approach.
| [
{
"created": "Wed, 13 Dec 2023 09:49:53 GMT",
"version": "v1"
}
] | 2023-12-14 | [
[
"Agarwal",
"Nitin",
""
],
[
"Kumar",
"Ashish",
""
],
[
"R",
"Kiran",
""
],
[
"Gupta",
"Manish",
""
],
[
"Boué",
"Laurent",
""
]
] |
2312.08078 | Wenting Chen | Wenting Chen, Linlin Shen, Jingyang Lin, Jiebo Luo, Xiang Li, Yixuan
Yuan | Fine-Grained Image-Text Alignment in Medical Imaging Enables Explainable
Cyclic Image-Report Generation | Accepted by ACL 2024 | https://aclanthology.org/2024.acl-long.514/ | null | 2024.acl-long.514 | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To address these issues, we propose a novel Adaptive patch-word Matching
(AdaMatch) model to correlate chest X-ray (CXR) image regions with words in
medical reports and apply it to CXR-report generation to provide explainability
for the generation process. AdaMatch exploits the fine-grained relation between
adaptive patches and words to provide explanations of specific image regions
with corresponding words. To capture the abnormal regions of varying sizes and
positions, we introduce the Adaptive Patch extraction (AdaPatch) module to
acquire the adaptive patches for these regions. In order to provide
explicit explainability for CXR-report generation task, we propose an
AdaMatch-based bidirectional large language model for Cyclic CXR-report
generation (AdaMatch-Cyclic). It employs the AdaMatch to obtain the keywords
for CXR images and `keypatches' for medical reports as hints to guide
CXR-report generation. Extensive experiments on two publicly available CXR
datasets prove the effectiveness of our method and its superior performance to
existing methods.
| [
{
"created": "Wed, 13 Dec 2023 11:47:28 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Dec 2023 02:31:44 GMT",
"version": "v2"
},
{
"created": "Fri, 15 Dec 2023 13:22:51 GMT",
"version": "v3"
},
{
"created": "Wed, 27 Dec 2023 07:21:12 GMT",
"version": "v4"
},
{
"created": "Tue, 4 Jun 2024 12:27:38 GMT",
"version": "v5"
}
] | 2024-08-20 | [
[
"Chen",
"Wenting",
""
],
[
"Shen",
"Linlin",
""
],
[
"Lin",
"Jingyang",
""
],
[
"Luo",
"Jiebo",
""
],
[
"Li",
"Xiang",
""
],
[
"Yuan",
"Yixuan",
""
]
] |
2312.08092 | Rebeca D\'iaz-Redondo | Rebeca P. D\'iaz-Redondo, Carlos Garcia-Rubio, Ana Fern\'andez Vilas,
Celeste Campo, Alicia Rodriguez-Carrion | A hybrid analysis of LBSN data to early detect anomalies in crowd
dynamics | null | Future Generation Computer Systems, 2020, vol. 109, p. 83-94 | 10.1016/j.future.2020.03.038 | null | cs.SI cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Undoubtedly, Location-based Social Networks (LBSNs) provide an interesting
source of geo-located data that we have previously used to obtain patterns of
the dynamics of crowds throughout urban areas. According to our previous
results, activity in LBSNs reflects the real activity in the city. Therefore,
unexpected behaviors in social media activity are trustworthy evidence of
unexpected changes in activity in the city. In this paper, we introduce a
hybrid solution to early detect these changes based on applying a combination
of two approaches, the use of entropy analysis and clustering techniques, on
the data gathered from LBSNs. In particular, we have performed our experiments
over a data set collected from Instagram for seven months in New York City,
obtaining promising results.
| [
{
"created": "Wed, 13 Dec 2023 12:17:16 GMT",
"version": "v1"
}
] | 2023-12-14 | [
[
"Díaz-Redondo",
"Rebeca P.",
""
],
[
"Garcia-Rubio",
"Carlos",
""
],
[
"Vilas",
"Ana Fernández",
""
],
[
"Campo",
"Celeste",
""
],
[
"Rodriguez-Carrion",
"Alicia",
""
]
] |
2312.08234 | Yujun Chen | Yujun Chen, Xin Tan, Zhizhong Zhang, Yanyun Qu, Yuan Xie | Beyond the Label Itself: Latent Labels Enhance Semi-supervised Point
Cloud Panoptic Segmentation | 12 pages, 8 figures, 11 tables | CVPR 2024 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As the exorbitant expense of labeling autopilot datasets and the growing
trend of utilizing unlabeled data, semi-supervised segmentation on point clouds
becomes increasingly imperative. Intuitively, finding out more ``unspoken
words'' (i.e., latent instance information) beyond the label itself should be
helpful to improve performance. In this paper, we discover two types of latent
labels behind the displayed label embedded in LiDAR and image data. First, in
the LiDAR Branch, we propose a novel augmentation, Cylinder-Mix, which is able
to augment more yet reliable samples for training. Second, in the Image Branch,
we propose the Instance Position-scale Learning (IPSL) Module to learn and fuse
the information of instance position and scale, which is from a 2D pre-trained
detector and a type of latent label obtained from 3D to 2D projection. Finally,
the two latent labels are embedded into the multi-modal panoptic segmentation
network. The ablation of the IPSL module demonstrates its robust adaptability,
and the experiments evaluated on SemanticKITTI and nuScenes demonstrate that
our model outperforms the state-of-the-art method, LaserMix.
| [
{
"created": "Wed, 13 Dec 2023 15:56:24 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Feb 2024 12:19:08 GMT",
"version": "v2"
}
] | 2024-02-13 | [
[
"Chen",
"Yujun",
""
],
[
"Tan",
"Xin",
""
],
[
"Zhang",
"Zhizhong",
""
],
[
"Qu",
"Yanyun",
""
],
[
"Xie",
"Yuan",
""
]
] |
2312.08255 | Mikhail Kulyabin | Mikhail Kulyabin, Aleksei Zhdanov, Anastasia Nikiforova, Andrey
Stepichev, Anna Kuznetsova, Mikhail Ronkin, Vasilii Borisov, Alexander
Bogachev, Sergey Korotkich, Paul A Constable, and Andreas Maier | OCTDL: Optical Coherence Tomography Dataset for Image-Based Deep
Learning Methods | null | Scientific Data 11.1 (2024): 365 | 10.1038/s41597-024-03182-7 | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Optical coherence tomography (OCT) is a non-invasive imaging technique with
extensive clinical applications in ophthalmology. OCT enables the visualization
of the retinal layers, playing a vital role in the early detection and
monitoring of retinal diseases. OCT uses the principle of light wave
interference to create detailed images of the retinal microstructures, making
it a valuable tool for diagnosing ocular conditions. This work presents an
open-access OCT dataset (OCTDL) comprising over 2000 OCT images labeled
according to disease group and retinal pathology. The dataset consists of OCT
records of patients with Age-related Macular Degeneration (AMD), Diabetic
Macular Edema (DME), Epiretinal Membrane (ERM), Retinal Artery Occlusion (RAO),
Retinal Vein Occlusion (RVO), and Vitreomacular Interface Disease (VID). The
images were acquired with an Optovue Avanti RTVue XR using raster scanning
protocols with dynamic scan length and image resolution. Each retinal b-scan
was acquired by centering on the fovea and interpreted and cataloged by an
experienced retinal specialist. In this work, we applied Deep Learning
classification techniques to this new open-access dataset.
| [
{
"created": "Wed, 13 Dec 2023 16:18:40 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Mar 2024 09:49:01 GMT",
"version": "v2"
},
{
"created": "Sun, 31 Mar 2024 09:33:50 GMT",
"version": "v3"
},
{
"created": "Tue, 1 Oct 2024 19:59:21 GMT",
"version": "v4"
}
] | 2024-10-03 | [
[
"Kulyabin",
"Mikhail",
""
],
[
"Zhdanov",
"Aleksei",
""
],
[
"Nikiforova",
"Anastasia",
""
],
[
"Stepichev",
"Andrey",
""
],
[
"Kuznetsova",
"Anna",
""
],
[
"Ronkin",
"Mikhail",
""
],
[
"Borisov",
"Vasilii",
""
],
[
"Bogachev",
"Alexander",
""
],
[
"Korotkich",
"Sergey",
""
],
[
"Constable",
"Paul A",
""
],
[
"Maier",
"Andreas",
""
]
] |
2312.08369 | Cassidy Laidlaw | Cassidy Laidlaw and Banghua Zhu and Stuart Russell and Anca Dragan | The Effective Horizon Explains Deep RL Performance in Stochastic
Environments | null | ICLR 2024 (Spotlight) | null | null | stat.ML cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning (RL) theory has largely focused on proving minimax
sample complexity bounds. These require strategic exploration algorithms that
use relatively limited function classes for representing the policy or value
function. Our goal is to explain why deep RL algorithms often perform well in
practice, despite using random exploration and much more expressive function
classes like neural networks. Our work arrives at an explanation by showing
that many stochastic MDPs can be solved by performing only a few steps of value
iteration on the random policy's Q function and then acting greedily. When this
is true, we find that it is possible to separate the exploration and learning
components of RL, making it much easier to analyze. We introduce a new RL
algorithm, SQIRL, that iteratively learns a near-optimal policy by exploring
randomly to collect rollouts and then performing a limited number of steps of
fitted-Q iteration over those rollouts. Any regression algorithm that satisfies
basic in-distribution generalization properties can be used in SQIRL to
efficiently solve common MDPs. This can explain why deep RL works, since it is
empirically established that neural networks generalize well in-distribution.
Furthermore, SQIRL explains why random exploration works well in practice. We
leverage SQIRL to derive instance-dependent sample complexity bounds for RL
that are exponential only in an "effective horizon" of lookahead and on the
complexity of the class used for function approximation. Empirically, we also
find that SQIRL performance strongly correlates with PPO and DQN performance in
a variety of stochastic environments, supporting that our theoretical analysis
is predictive of practical performance. Our code and data are available at
https://github.com/cassidylaidlaw/effective-horizon.
| [
{
"created": "Wed, 13 Dec 2023 18:58:56 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Apr 2024 18:26:36 GMT",
"version": "v2"
}
] | 2024-04-16 | [
[
"Laidlaw",
"Cassidy",
""
],
[
"Zhu",
"Banghua",
""
],
[
"Russell",
"Stuart",
""
],
[
"Dragan",
"Anca",
""
]
] |
2312.08393 | Rebeca D\'iaz-Redondo | Manar Mohamed Hafez, Rebeca P. D\'iaz Redondo, Ana Fern\'andez-Vilas,
H\'ector Olivera Paz\'o | Multi-criteria recommendation systems to foster online grocery | 30 pages, 8 images, journal | Sensors, 2021, vol. 21, no 11, p. 3747 | 10.3390/s21113747 | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the exponential increase in information, it has become imperative to
design mechanisms that allow users to access what matters to them as quickly as
possible. With the development of information technology, the intelligent
recommendation system ($RS$) has emerged as the solution. Various types of data
can be collected on items of interest to users and presented as
recommendations. $RS$ also play a very important role in e-commerce. The
purpose of recommending a product is to designate the most appropriate
designation for a specific product. The major challenges when recommending
products are insufficient information about the products and the categories to
which they belong. In this paper, we transform the product data using two
methods of document representation: bag-of-words (BOW) and the neural
network-based document combination known as vector-based (Doc2Vec). We propose
three-criteria recommendation systems (product, package, and health) for each
document representation method to foster online grocery, which depends on
product characteristics such as (composition, packaging, nutrition table,
allergen, etc.). For our evaluation, we conducted a user and expert survey.
Finally, we have compared the performance of these three criteria for each
document representation method, discovering that the neural network-based
(Doc2Vec) performs better and completely alters the results.
| [
{
"created": "Tue, 12 Dec 2023 17:40:16 GMT",
"version": "v1"
}
] | 2023-12-15 | [
[
"Hafez",
"Manar Mohamed",
""
],
[
"Redondo",
"Rebeca P. Díaz",
""
],
[
"Fernández-Vilas",
"Ana",
""
],
[
"Pazó",
"Héctor Olivera",
""
]
] |
2312.08459 | Shivangi Aneja Ms | Shivangi Aneja, Justus Thies, Angela Dai, Matthias Nie{\ss}ner | FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head
Models | Paper Video: https://youtu.be/7Jf0kawrA3Q Project Page:
https://shivangi-aneja.github.io/projects/facetalk/ | CVPR 2024 | null | null | cs.CV cs.AI cs.GR cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce FaceTalk, a novel generative approach designed for synthesizing
high-fidelity 3D motion sequences of talking human heads from input audio
signal. To capture the expressive, detailed nature of human heads, including
hair, ears, and finer-scale eye movements, we propose to couple speech signal
with the latent space of neural parametric head models to create high-fidelity,
temporally coherent motion sequences. We propose a new latent diffusion model
for this task, operating in the expression space of neural parametric head
models, to synthesize audio-driven realistic head sequences. In the absence of
a dataset with corresponding NPHM expressions to audio, we optimize for these
correspondences to produce a dataset of temporally-optimized NPHM expressions
fit to audio-video recordings of people talking. To the best of our knowledge,
this is the first work to propose a generative approach for realistic and
high-quality motion synthesis of volumetric human heads, representing a
significant advancement in the field of audio-driven 3D animation. Notably, our
approach stands out in its ability to generate plausible motion sequences that
can produce high-fidelity head animation coupled with the NPHM shape space. Our
experimental results substantiate the effectiveness of FaceTalk, consistently
achieving superior and visually natural motion, encompassing diverse facial
expressions and styles, outperforming existing methods by 75% in perceptual
user study evaluation.
| [
{
"created": "Wed, 13 Dec 2023 19:01:07 GMT",
"version": "v1"
},
{
"created": "Sun, 17 Mar 2024 23:45:01 GMT",
"version": "v2"
}
] | 2024-03-19 | [
[
"Aneja",
"Shivangi",
""
],
[
"Thies",
"Justus",
""
],
[
"Dai",
"Angela",
""
],
[
"Nießner",
"Matthias",
""
]
] |
2312.08624 | Manuel Rebol | Manuel Rebol, Krzysztof Pietroszek, Claudia Ranniger, Colton Hood,
Adam Rutenberg, Neal Sikka, David Li, Christian G\"utl | Mixed Reality Communication for Medical Procedures: Teaching the
Placement of a Central Venous Catheter | null | 2022 IEEE International Symposium on Mixed and Augmented Reality
(ISMAR) | 10.1109/ISMAR55827.2022.00050 | null | cs.CV cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Medical procedures are an essential part of healthcare delivery, and the
acquisition of procedural skills is a critical component of medical education.
Unfortunately, procedural skill is not evenly distributed among medical
providers. Skills may vary within departments or institutions, and across
geographic regions, depending on the provider's training and ongoing
experience. We present a mixed reality real-time communication system to
increase access to procedural skill training and to improve remote emergency
assistance. Our system allows a remote expert to guide a local operator through
a medical procedure. RGBD cameras capture a volumetric view of the local scene
including the patient, the operator, and the medical equipment. The volumetric
capture is augmented onto the remote expert's view to allow the expert to
spatially guide the local operator using visual and verbal instructions. We
evaluated our mixed reality communication system in a study in which experts
teach the ultrasound-guided placement of a central venous catheter (CVC) to
students in a simulation setting. The study compares state-of-the-art video
communication against our system. The results indicate that our system enhances
and offers new possibilities for visual communication compared to video
teleconference-based training.
| [
{
"created": "Thu, 14 Dec 2023 03:11:20 GMT",
"version": "v1"
}
] | 2023-12-15 | [
[
"Rebol",
"Manuel",
""
],
[
"Pietroszek",
"Krzysztof",
""
],
[
"Ranniger",
"Claudia",
""
],
[
"Hood",
"Colton",
""
],
[
"Rutenberg",
"Adam",
""
],
[
"Sikka",
"Neal",
""
],
[
"Li",
"David",
""
],
[
"Gütl",
"Christian",
""
]
] |
2312.08672 | Haifeng Li | Silu He, Qinyao Luo, Xinsha Fu, Ling Zhao, Ronghua Du, Haifeng Li | CAT: A Causally Graph Attention Network for Trimming Heterophilic Graph | 25 pages, 18 figures, 5 tables | Information Science 2024 | 10.1016/j.ins.2024.120916 | null | cs.LG cs.AI cs.SI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Local Attention-guided Message Passing Mechanism (LAMP) adopted in Graph
Attention Networks (GATs) is designed to adaptively learn the importance of
neighboring nodes for better local aggregation on the graph, which can bring
the representations of similar neighbors closer effectively, thus showing
stronger discrimination ability. However, existing GATs suffer from a
significant discrimination ability decline in heterophilic graphs because the
high proportion of dissimilar neighbors can weaken the self-attention of the
central node, jointly resulting in the deviation of the central node from
similar nodes in the representation space. This kind of effect generated by
neighboring nodes is called the Distraction Effect (DE) in this paper. To
estimate and weaken the DE of neighboring nodes, we propose a Causally graph
Attention network for Trimming heterophilic graph (CAT). To estimate the DE,
since the DE is generated through two paths (grabbing the attention assigned to
neighbors and reducing the self-attention of the central node), we use Total
Effect to model DE, which is a kind of causal estimand and can be estimated
from intervened data; To weaken the DE, we identify the neighbors with the
highest DE (we call them Distraction Neighbors) and remove them. We adopt three
representative GATs as the base model within the proposed CAT framework and
conduct experiments on seven heterophilic datasets in three different sizes.
Comparative experiments show that CAT can improve the node classification
accuracy of all base GAT models. Ablation experiments and visualization further
validate the enhancement of discrimination ability brought by CAT. The source
code is available at https://github.com/GeoX-Lab/CAT.
| [
{
"created": "Thu, 14 Dec 2023 06:08:59 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Dec 2023 05:56:36 GMT",
"version": "v2"
},
{
"created": "Mon, 17 Jun 2024 13:22:15 GMT",
"version": "v3"
}
] | 2024-06-18 | [
[
"He",
"Silu",
""
],
[
"Luo",
"Qinyao",
""
],
[
"Fu",
"Xinsha",
""
],
[
"Zhao",
"Ling",
""
],
[
"Du",
"Ronghua",
""
],
[
"Li",
"Haifeng",
""
]
] |
2312.08995 | Markus Reiter-Haas | Markus Reiter-Haas, Beate Kl\"osch, Markus Hadler, Elisabeth Lex | FrameFinder: Explorative Multi-Perspective Framing Extraction from News
Headlines | Accepted for publication at CHIIR'24 | Proceedings of the 2024 ACM SIGIR Conference on Human Information
Interaction and Retrieval | 10.1145/3627508.3638308 | null | cs.IR cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Revealing the framing of news articles is an important yet neglected task in
information seeking and retrieval. In the present work, we present FrameFinder,
an open tool for extracting and analyzing frames in textual data. FrameFinder
visually represents the frames of text from three perspectives, i.e., (i) frame
labels, (ii) frame dimensions, and (iii) frame structure. By analyzing the
well-established gun violence frame corpus, we demonstrate the merits of our
proposed solution to support social science research and call for subsequent
integration into information interactions.
| [
{
"created": "Thu, 14 Dec 2023 14:41:37 GMT",
"version": "v1"
}
] | 2023-12-25 | [
[
"Reiter-Haas",
"Markus",
""
],
[
"Klösch",
"Beate",
""
],
[
"Hadler",
"Markus",
""
],
[
"Lex",
"Elisabeth",
""
]
] |
2312.09037 | Michael Jungo | Michael Jungo, Lars V\"ogtlin, Atefeh Fakhari, Nathan Wegmann, Rolf
Ingold, Andreas Fischer, Anna Scius-Bertrand | Impact of Ground Truth Quality on Handwriting Recognition | SOICT 2023 | SOICT 2023: The 12th International Symposium on Information and
Communication Technology | 10.1145/3628797.3628976 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Handwriting recognition is a key technology for accessing the content of old
manuscripts, helping to preserve cultural heritage. Deep learning shows an
impressive performance in solving this task. However, to achieve its full
potential, it requires a large amount of labeled data, which is difficult to
obtain for ancient languages and scripts. Often, a trade-off has to be made
between ground truth quantity and quality, as is the case for the recently
introduced Bullinger database. It contains an impressive amount of over a
hundred thousand labeled text line images of mostly premodern German and Latin
texts that were obtained by automatically aligning existing page-level
transcriptions with text line images. However, the alignment process introduces
systematic errors, such as wrongly hyphenated words. In this paper, we
investigate the impact of such errors on training and evaluation and suggest
means to detect and correct typical alignment errors.
| [
{
"created": "Thu, 14 Dec 2023 15:36:41 GMT",
"version": "v1"
}
] | 2023-12-15 | [
[
"Jungo",
"Michael",
""
],
[
"Vögtlin",
"Lars",
""
],
[
"Fakhari",
"Atefeh",
""
],
[
"Wegmann",
"Nathan",
""
],
[
"Ingold",
"Rolf",
""
],
[
"Fischer",
"Andreas",
""
],
[
"Scius-Bertrand",
"Anna",
""
]
] |
2312.09038 | Jinghong Li | Jinghong Li, Wen Gu, Koichi Ota, Shinobu Hasegawa | Object Recognition from Scientific Document based on Compartment
Refinement Framework | The extension of this paper has been published in SN Computer
Science. arXiv admin note: text overlap with arXiv:2305.17401 | SN COMPUT. SCI. 5, 816 (2024) | 10.1007/s42979-024-03130-7 | null | cs.CV cs.DL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid development of the internet in the past decade, it has become
increasingly important to extract valuable information from vast resources
efficiently, which is crucial for establishing a comprehensive digital
ecosystem, particularly in the context of research surveys and comprehension.
The foundation of these tasks focuses on accurate extraction and deep mining of
data from scientific documents, which are essential for building a robust data
infrastructure. However, parsing raw data or extracting data from complex
scientific documents have been ongoing challenges. Current data extraction
methods for scientific documents typically use rule-based (RB) or machine
learning (ML) approaches. However, using rule-based methods can incur high
coding costs for articles with intricate typesetting. Conversely, relying
solely on machine learning methods necessitates annotation work for complex
content types within the scientific document, which can be costly.
Additionally, few studies have thoroughly defined and explored the hierarchical
layout within scientific documents. The lack of a comprehensive definition of
the internal structure and elements of the documents indirectly impacts the
accuracy of text classification and object recognition tasks. From the
perspective of analyzing the standard layout and typesetting used in the
specified publication, we propose a new document layout analysis framework
called CTBR (Compartment & Text Blocks Refinement). Firstly, we define
scientific documents into hierarchical divisions: base domain, compartment, and
text blocks. Next, we conduct an in-depth exploration and classification of the
meanings of text blocks. Finally, we utilize the results of text block
classification to implement object recognition within scientific documents
based on rule-based compartment segmentation.
| [
{
"created": "Thu, 14 Dec 2023 15:36:49 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Dec 2023 05:25:49 GMT",
"version": "v2"
},
{
"created": "Thu, 4 Jul 2024 13:51:31 GMT",
"version": "v3"
},
{
"created": "Fri, 23 Aug 2024 13:37:56 GMT",
"version": "v4"
}
] | 2024-08-26 | [
[
"Li",
"Jinghong",
""
],
[
"Gu",
"Wen",
""
],
[
"Ota",
"Koichi",
""
],
[
"Hasegawa",
"Shinobu",
""
]
] |
2312.09207 | Benno Weck | Benno Weck, Holger Kirchhoff, Peter Grosche and Xavier Serra | WikiMuTe: A web-sourced dataset of semantic descriptions for music audio | Submitted to 30th International Conference on MultiMedia Modeling
(MMM2024). This preprint has not undergone peer review or any post-submission
improvements or corrections | The Version of Record of this contribution is published in
MultiMedia Modeling. MMM 2024. Lecture Notes in Computer Science, vol 14565.
Springer, Cham | 10.1007/978-3-031-56435-2_4 | null | cs.CL cs.IR cs.LG cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-modal deep learning techniques for matching free-form text with music
have shown promising results in the field of Music Information Retrieval (MIR).
Prior work is often based on large proprietary data while publicly available
datasets are few and small in size. In this study, we present WikiMuTe, a new
and open dataset containing rich semantic descriptions of music. The data is
sourced from Wikipedia's rich catalogue of articles covering musical works.
Using a dedicated text-mining pipeline, we extract both long and short-form
descriptions covering a wide range of topics related to music content such as
genre, style, mood, instrumentation, and tempo. To show the use of this data,
we train a model that jointly learns text and audio representations and
performs cross-modal retrieval. The model is evaluated on two tasks: tag-based
music retrieval and music auto-tagging. The results show that while our
approach achieves state-of-the-art performance on multiple tasks, we still
observe a difference in performance depending on the data used for training.
| [
{
"created": "Thu, 14 Dec 2023 18:38:02 GMT",
"version": "v1"
}
] | 2024-04-18 | [
[
"Weck",
"Benno",
""
],
[
"Kirchhoff",
"Holger",
""
],
[
"Grosche",
"Peter",
""
],
[
"Serra",
"Xavier",
""
]
] |
2312.09304 | Harris Papadopoulos | Lysimachos Maltoudoglou, Andreas Paisios, Ladislav Lenc, Ji\v{r}\'i
Mart\'inek, Pavel Kr\'al, Harris Papadopoulos | Well-calibrated Confidence Measures for Multi-label Text Classification
with a Large Number of Labels | null | Pattern Recognition, Volume 122, February 2022 | 10.1016/j.patcog.2021.108271 | null | cs.LG cs.CL stat.ML | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We extend our previous work on Inductive Conformal Prediction (ICP) for
multi-label text classification and present a novel approach for addressing the
computational inefficiency of the Label Powerset (LP) ICP, arising when
dealing with a high number of unique labels. We present experimental results
using the original and the proposed efficient LP-ICP on two English and one
Czech language data-sets. Specifically, we apply the LP-ICP on three deep
Artificial Neural Network (ANN) classifiers of two types: one based on
contextualised (bert) and two on non-contextualised (word2vec) word-embeddings.
In the LP-ICP setting we assign nonconformity scores to label-sets from which
the corresponding p-values and prediction-sets are determined. Our approach
deals with the increased computational burden of LP by eliminating from
consideration a significant number of label-sets that will surely have p-values
below the specified significance level. This dramatically reduces the
computational complexity of the approach while fully respecting the standard CP
guarantees. Our experimental results show that the contextualised-based
classifier surpasses the non-contextualised-based ones and obtains
state-of-the-art performance for all data-sets examined. The good performance
of the underlying classifiers is carried on to their ICP counterparts without
any significant accuracy loss, but with the added benefits of ICP, i.e. the
confidence information encapsulated in the prediction sets. We experimentally
demonstrate that the resulting prediction sets can be tight enough to be
practically useful even though the set of all possible label-sets contains more
than $10^{16}$ combinations. Additionally, the empirical error rates of the
obtained prediction-sets confirm that our outputs are well-calibrated.
| [
{
"created": "Thu, 14 Dec 2023 19:17:42 GMT",
"version": "v1"
}
] | 2023-12-18 | [
[
"Maltoudoglou",
"Lysimachos",
""
],
[
"Paisios",
"Andreas",
""
],
[
"Lenc",
"Ladislav",
""
],
[
"Martínek",
"Jiří",
""
],
[
"Král",
"Pavel",
""
],
[
"Papadopoulos",
"Harris",
""
]
] |
2312.09366 | Sahal Shaji Mullappilly | Sahal Shaji Mullappilly, Abdelrahman Shaker, Omkar Thawakar, Hisham
Cholakkal, Rao Muhammad Anwer, Salman Khan, Fahad Shahbaz Khan | Arabic Mini-ClimateGPT : A Climate Change and Sustainability Tailored
Arabic LLM | Accepted to EMNLP 2023 (Findings) | Findings of the Association for Computational Linguistics: EMNLP
2023, pages 14126-14136 | 10.18653/v1/2023.findings-emnlp.941 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Climate change is one of the most significant challenges we face together as
a society. Creating awareness and educating policy makers on the wide-ranging
impact of climate change is an essential step towards a sustainable future.
Recently, Large Language Models (LLMs) like ChatGPT and Bard have shown
impressive conversational abilities and excel in a wide variety of NLP tasks.
While these models are closed-source, alternative open-source LLMs such as
Stanford Alpaca and Vicuna have recently shown promising results. However, these
open-source models are not specifically tailored for climate-related,
domain-specific information and also struggle to generate meaningful responses
in other languages such as Arabic. To this end, we propose a lightweight Arabic
Mini-ClimateGPT that is built on an open-source LLM and is specifically
fine-tuned on Clima500-Instruct, a curated Arabic instruction-tuning dataset
with over 500k conversational-style instructions about climate change and
sustainability. Further, our model also utilizes a vector embedding based
retrieval mechanism during inference. We validate our proposed model through
quantitative and qualitative evaluations on climate-related queries. Our model
surpasses the baseline LLM in 88.3% of cases during ChatGPT-based evaluation.
Furthermore, our human expert evaluation reveals an 81.6% preference for our
model's responses over multiple popular open-source models. Our open-source
demos, code-base and models are available here
https://github.com/mbzuai-oryx/ClimateGPT.
| [
{
"created": "Thu, 14 Dec 2023 22:04:07 GMT",
"version": "v1"
}
] | 2023-12-18 | [
[
"Mullappilly",
"Sahal Shaji",
""
],
[
"Shaker",
"Abdelrahman",
""
],
[
"Thawakar",
"Omkar",
""
],
[
"Cholakkal",
"Hisham",
""
],
[
"Anwer",
"Rao Muhammad",
""
],
[
"Khan",
"Salman",
""
],
[
"Khan",
"Fahad Shahbaz",
""
]
] |
2312.09525 | Yazhou Yao | Gensheng Pei, Fumin Shen, Yazhou Yao, Tao Chen, Xian-Sheng Hua, and
Heng-Tao Shen | Hierarchical Graph Pattern Understanding for Zero-Shot VOS | accepted by IEEE Transactions on Image Processing | IEEE Transactions on Image Processing 2023 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The optical flow guidance strategy is ideal for obtaining motion information
of objects in the video. It is widely utilized in video segmentation tasks.
However, existing optical flow-based methods have a significant dependency on
optical flow, which results in poor performance when the optical flow
estimation fails for a particular scene. The temporal consistency provided by
the optical flow could be effectively supplemented by modeling in a structural
form. This paper proposes a new hierarchical graph neural network (GNN)
architecture, dubbed hierarchical graph pattern understanding (HGPU), for
zero-shot video object segmentation (ZS-VOS). Inspired by the strong ability of
GNNs in capturing structural relations, HGPU innovatively leverages motion cues
(i.e., optical flow) to enhance the high-order representations from the
neighbors of target frames. Specifically, a hierarchical graph pattern encoder
with message aggregation is introduced to acquire different levels of motion
and appearance features in a sequential manner. Furthermore, a decoder is
designed for hierarchically parsing and understanding the transformed
multi-modal contexts to achieve more accurate and robust results. HGPU achieves
state-of-the-art performance on four publicly available benchmarks (DAVIS-16,
YouTube-Objects, Long-Videos and DAVIS-17). Code and pre-trained model can be
found at \url{https://github.com/NUST-Machine-Intelligence-Laboratory/HGPU}.
| [
{
"created": "Fri, 15 Dec 2023 04:13:21 GMT",
"version": "v1"
}
] | 2023-12-18 | [
[
"Pei",
"Gensheng",
""
],
[
"Shen",
"Fumin",
""
],
[
"Yao",
"Yazhou",
""
],
[
"Chen",
"Tao",
""
],
[
"Hua",
"Xian-Sheng",
""
],
[
"Shen",
"Heng-Tao",
""
]
] |
2312.09536 | Maria Antoniak | Maria Antoniak, Anjalie Field, Jimin Mun, Melanie Walsh, Lauren F.
Klein, Maarten Sap | Riveter: Measuring Power and Social Dynamics Between Entities | null | Proceedings of the 61st Annual Meeting of the Association for
Computational Linguistics, Volume 3: System Demonstrations, 2023, pages
377-388 | 10.18653/v1/2023.acl-demo.36 | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Riveter provides a complete easy-to-use pipeline for analyzing verb
connotations associated with entities in text corpora. We prepopulate the
package with connotation frames of sentiment, power, and agency, which have
demonstrated usefulness for capturing social phenomena, such as gender bias, in
a broad range of corpora. For decades, lexical frameworks have been
foundational tools in computational social science, digital humanities, and
natural language processing, facilitating multifaceted analysis of text
corpora. But working with verb-centric lexica specifically requires natural
language processing skills, reducing their accessibility to other researchers.
By organizing the language processing pipeline, providing complete lexicon
scores and visualizations for all entities in a corpus, and providing
functionality for users to target specific research questions, Riveter greatly
improves the accessibility of verb lexica and can facilitate a broad range of
future research.
| [
{
"created": "Fri, 15 Dec 2023 05:03:24 GMT",
"version": "v1"
}
] | 2023-12-18 | [
[
"Antoniak",
"Maria",
""
],
[
"Field",
"Anjalie",
""
],
[
"Mun",
"Jimin",
""
],
[
"Walsh",
"Melanie",
""
],
[
"Klein",
"Lauren F.",
""
],
[
"Sap",
"Maarten",
""
]
] |
2312.09584 | Byeongkeun Kang | David Kim, Sinhae Cha, Byeongkeun Kang | Multiscale Vision Transformer With Deep Clustering-Guided Refinement for
Weakly Supervised Object Localization | 5 pages | IEEE International Conference on Visual Communications and Image
Processing, 2023 | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work addresses the task of weakly-supervised object localization. The
goal is to learn object localization using only image-level class labels, which
are much easier to obtain compared to bounding box annotations. This task is
important because it reduces the need for labor-intensive ground-truth
annotations. However, methods for object localization trained using weak
supervision often suffer from limited accuracy in localization. To address this
challenge and enhance localization accuracy, we propose a multiscale object
localization transformer (MOLT). It comprises multiple object localization
transformers that extract patch embeddings across various scales. Moreover, we
introduce a deep clustering-guided refinement method that further enhances
localization accuracy by utilizing separately extracted image segments. These
segments are obtained by clustering pixels using convolutional neural networks.
Finally, we demonstrate the effectiveness of our proposed method by conducting
experiments on the publicly available ILSVRC-2012 dataset.
| [
{
"created": "Fri, 15 Dec 2023 07:46:44 GMT",
"version": "v1"
}
] | 2023-12-18 | [
[
"Kim",
"David",
""
],
[
"Cha",
"Sinhae",
""
],
[
"Kang",
"Byeongkeun",
""
]
] |
2312.09639 | Yao Zhao | Yao Zhao, Haipeng Zhang, Shiwei Lyu, Ruiying Jiang, Jinjie Gu, Guannan
Zhang | Multiple Instance Learning for Uplift Modeling | short paper of CIKM22(full version) | Proceedings of the 31st ACM International Conference on
Information and Knowledge Management (2022) 4727-4731 | 10.1145/3511808.3557655 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Uplift modeling is widely used in performance marketing to estimate effects
of promotion campaigns (e.g., increase of customer retention rate). Since it is
impossible to observe outcomes of a recipient in treatment (e.g., receiving a
certain promotion) and control (e.g., without promotion) groups simultaneously
(i.e., counter-factual), uplift models are mainly trained on instances of
treatment and control groups separately to form two models respectively, and
uplifts are predicted by the difference of predictions from these two models
(i.e., two-model method). When responses are noisy and the treatment effect is
fractional, induced individual uplift predictions will be inaccurate, resulting
in targeting undesirable customers. Though it is impossible to obtain the ideal
ground-truth individual uplifts, known as Individual Treatment Effects (ITEs),
alternatively, an average uplift of a group of users, called Average Treatment
Effect (ATE), can be observed from experimental deliveries. Building on this, similar
to Multiple Instance Learning (MIL) in which each training sample is a bag of
instances, our framework sums up individual user uplift predictions for each
bag of users as its bag-wise ATE prediction, and regularizes it to its ATE
label, thus learning more accurate individual uplifts. Additionally, to amplify
the fractional treatment effect, bags are composed of instances with adjacent
individual uplift predictions, instead of random instances. Experiments
conducted on two datasets show the effectiveness and universality of the
proposed framework.
| [
{
"created": "Fri, 15 Dec 2023 09:28:40 GMT",
"version": "v1"
}
] | 2023-12-18 | [
[
"Zhao",
"Yao",
""
],
[
"Zhang",
"Haipeng",
""
],
[
"Lyu",
"Shiwei",
""
],
[
"Jiang",
"Ruiying",
""
],
[
"Gu",
"Jinjie",
""
],
[
"Zhang",
"Guannan",
""
]
] |
2312.09821 | Varun Ojha | Chandresh Pravin, Ivan Martino, Giuseppe Nicosia, Varun Ojha | Fragility, Robustness and Antifragility in Deep Learning | null | Artificial Intelligence 2023 | 10.1016/j.artint.2023.104060 | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | We propose a systematic analysis of deep neural networks (DNNs) based on a
signal processing technique for network parameter removal, in the form of
synaptic filters that identify the fragility, robustness and antifragility
characteristics of DNN parameters. Our proposed analysis investigates if the
DNN performance is impacted negatively, invariantly, or positively on both
clean and adversarially perturbed test datasets when the DNN undergoes synaptic
filtering. We define three \textit{filtering scores} for quantifying the
fragility, robustness and antifragility characteristics of DNN parameters based
on the performances for (i) clean dataset, (ii) adversarial dataset, and (iii)
the difference in performances of clean and adversarial datasets. We validate
the proposed systematic analysis on ResNet-18, ResNet-50, SqueezeNet-v1.1 and
ShuffleNet V2 x1.0 network architectures for MNIST, CIFAR10 and Tiny ImageNet
datasets. The filtering scores, for a given network architecture, identify
network parameters that are invariant in characteristics across different
datasets over learning epochs. Conversely, for a given dataset, the filtering
scores identify the parameters that are invariant in characteristics across
different network architectures. We show that our synaptic filtering method
improves the test accuracy of ResNet and ShuffleNet models on adversarial
datasets when only the robust and antifragile parameters are selectively
retrained at any given epoch, thus demonstrating applications of the proposed
strategy in improving model robustness.
| [
{
"created": "Fri, 15 Dec 2023 14:20:16 GMT",
"version": "v1"
},
{
"created": "Sat, 23 Dec 2023 11:53:41 GMT",
"version": "v2"
}
] | 2023-12-27 | [
[
"Pravin",
"Chandresh",
""
],
[
"Martino",
"Ivan",
""
],
[
"Nicosia",
"Giuseppe",
""
],
[
"Ojha",
"Varun",
""
]
] |
2312.09890 | Vivi Nastase | Vivi Nastase and Paola Merlo | Grammatical information in BERT sentence embeddings as two-dimensional
arrays | Published in RepL4NLP 2023 | Proceedings of the 8th Workshop on Representation Learning for NLP
(RepL4NLP 2023) | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Sentence embeddings induced with various transformer architectures encode
much semantic and syntactic information in a distributed manner in a
one-dimensional array. We investigate whether specific grammatical information
can be accessed in these distributed representations. Using data from a task
developed to test rule-like generalizations, our experiments on detecting
subject-verb agreement yield several promising results. First, we show that
while the usual sentence representations encoded as one-dimensional arrays do
not easily support extraction of rule-like regularities, a two-dimensional
reshaping of these vectors allows various learning architectures to access such
information. Next, we show that various architectures can detect patterns in
these two-dimensional reshaped sentence embeddings and successfully learn a
model based on smaller amounts of simpler training data, which performs well on
more complex test data. This indicates that current sentence embeddings contain
information that is regularly distributed, and which can be captured when the
embeddings are reshaped into higher dimensional arrays. Our results cast light
on representations produced by language models and help move towards developing
few-shot learning approaches.
| [
{
"created": "Fri, 15 Dec 2023 15:41:52 GMT",
"version": "v1"
}
] | 2023-12-18 | [
[
"Nastase",
"Vivi",
""
],
[
"Merlo",
"Paola",
""
]
] |
2312.09950 | Cedric Derstroff | Cedric Derstroff, Mattia Cerrato, Jannis Brugger, Jan Peters and
Stefan Kramer | Peer Learning: Learning Complex Policies in Groups from Scratch via
Action Recommendations | 9 pages, 7 figures, AAAI-24 | AAAI, vol. 38, no. 10, pp. 11766-11774, Mar. 2024 | 10.1609/aaai.v38i10.29061 | null | cs.LG cs.AI cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Peer learning is a novel high-level reinforcement learning framework for
agents learning in groups. While standard reinforcement learning trains an
individual agent in trial-and-error fashion, all on its own, peer learning
addresses a related setting in which a group of agents, i.e., peers, learns to
master a task simultaneously together from scratch. Peers are allowed to
communicate only about their own states and actions recommended by others:
"What would you do in my situation?". Our motivation is to study the learning
behavior of these agents. We formalize the teacher selection process in the
action advice setting as a multi-armed bandit problem and therefore highlight
the need for exploration. Eventually, we analyze the learning behavior of the
peers and observe their ability to rank the agents' performance within the
study group and understand which agents give reliable advice. Further, we
compare peer learning with single agent learning and a state-of-the-art action
advice baseline. We show that peer learning is able to outperform single-agent
learning and the baseline in several challenging discrete and continuous OpenAI
Gym domains. Doing so, we also show that within such a framework complex
policies from action recommendations beyond discrete action spaces can evolve.
| [
{
"created": "Fri, 15 Dec 2023 17:01:35 GMT",
"version": "v1"
},
{
"created": "Mon, 6 May 2024 09:03:54 GMT",
"version": "v2"
}
] | 2024-05-07 | [
[
"Derstroff",
"Cedric",
""
],
[
"Cerrato",
"Mattia",
""
],
[
"Brugger",
"Jannis",
""
],
[
"Peters",
"Jan",
""
],
[
"Kramer",
"Stefan",
""
]
] |
2312.10008 | Paul Maria Scheikl | Paul Maria Scheikl, Nicolas Schreiber, Christoph Haas, Niklas
Freymuth, Gerhard Neumann, Rudolf Lioutikov, and Franziska Mathis-Ullrich | Movement Primitive Diffusion: Learning Gentle Robotic Manipulation of
Deformable Objects | null | IEEE Robotics and Automation Letters 9 (2024) 5338-5345 | 10.1109/LRA.2024.3382529 | null | cs.RO cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Policy learning in robot-assisted surgery (RAS) lacks data-efficient and
versatile methods that exhibit the desired motion quality for delicate surgical
interventions. To this end, we introduce Movement Primitive Diffusion (MPD), a
novel method for imitation learning (IL) in RAS that focuses on gentle
manipulation of deformable objects. The approach combines the versatility of
diffusion-based imitation learning (DIL) with the high-quality motion
generation capabilities of Probabilistic Dynamic Movement Primitives (ProDMPs).
This combination enables MPD to achieve gentle manipulation of deformable
objects, while maintaining data efficiency critical for RAS applications where
demonstration data is scarce. We evaluate MPD across various simulated and real
world robotic tasks on both state and image observations. MPD outperforms
state-of-the-art DIL methods in success rate, motion quality, and data
efficiency.
Project page: https://scheiklp.github.io/movement-primitive-diffusion/
| [
{
"created": "Fri, 15 Dec 2023 18:24:28 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Jun 2024 08:11:00 GMT",
"version": "v2"
}
] | 2024-06-11 | [
[
"Scheikl",
"Paul Maria",
""
],
[
"Schreiber",
"Nicolas",
""
],
[
"Haas",
"Christoph",
""
],
[
"Freymuth",
"Niklas",
""
],
[
"Neumann",
"Gerhard",
""
],
[
"Lioutikov",
"Rudolf",
""
],
[
"Mathis-Ullrich",
"Franziska",
""
]
] |
2312.10047 | Zhengbing Hu | Serhiy Balovsyak, Oleksandr Derevyanchuk, Hanna Kravchenko, Yuriy
Ushenko, Zhengbing Hu | Clustering Students According to their Academic Achievement Using Fuzzy
Logic | 13 pages,9 figures,ijmecs | International Journal of Modern Education and Computer
Science(IJMECS), Vol.15, No.6, pp. 31-43, 2023 | 10.5815/ijmecs.2023.06.03 | null | cs.CY cs.AI | http://creativecommons.org/licenses/by/4.0/ | The software for clustering students according to their educational
achievements using fuzzy logic was developed in Python using the Google Colab
cloud service. In the process of analyzing educational data, the problems of
Data Mining are solved, since only some characteristics of the educational
process are obtained from a large sample of data. Data clustering was performed
using the classic K-Means method, which is characterized by simplicity and high
speed. Cluster analysis was performed in the space of two features using the
machine learning library scikit-learn (Python). The obtained clusters are
described by fuzzy triangular membership functions, which allowed to correctly
determine the membership of each student to a certain cluster. Creation of
fuzzy membership functions is done using the scikit-fuzzy library. The
development of fuzzy functions of objects belonging to clusters is also useful
for educational purposes, as it allows a better understanding of the principles
of using fuzzy logic. As a result of processing test educational data using the
developed software, correct results were obtained. It is shown that the use of
fuzzy membership functions makes it possible to correctly determine the
belonging of students to certain clusters, even if such clusters are not
clearly separated. Due to this, it is possible to more accurately determine the
recommended level of difficulty of tasks for each student, depending on their
previous evaluations.
| [
{
"created": "Fri, 1 Dec 2023 23:02:34 GMT",
"version": "v1"
}
] | 2023-12-19 | [
[
"Balovsyak",
"Serhiy",
""
],
[
"Derevyanchuk",
"Oleksandr",
""
],
[
"Kravchenko",
"Hanna",
""
],
[
"Ushenko",
"Yuriy",
""
],
[
"Hu",
"Zhengbing",
""
]
] |
2312.10116 | Wei Tan | Wei Tan, Lan Du, Wray Buntine | Bayesian Estimate of Mean Proper Scores for Diversity-Enhanced Active
Learning | 16 pages, TPAMI. arXiv admin note: text overlap with arXiv:2110.14171 | TPAMI, 2023 | 10.1109/TPAMI.2023.3343359 | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | The effectiveness of active learning largely depends on the sampling
efficiency of the acquisition function. Expected Loss Reduction (ELR) focuses
on a Bayesian estimate of the reduction in classification error, and more
general costs fit in the same framework. We propose Bayesian Estimate of Mean
Proper Scores (BEMPS) to estimate the increase in strictly proper scores such
as log probability or negative mean square error within this framework. We also
prove convergence results for this general class of costs. To facilitate better
experimentation with the new acquisition functions, we develop a complementary
batch AL algorithm that encourages diversity in the vector of expected changes
in scores for unlabeled data. To allow high-performance classifiers, we combine
deep ensembles and dynamic validation set construction on pretrained models,
and further speed up the ensemble process with the idea of Monte Carlo Dropout.
Extensive experiments on both texts and images show that the use of mean square
error and log probability with BEMPS yields robust acquisition functions and
well-calibrated classifiers, and consistently outperforms the others tested.
The advantages of BEMPS over the others are further supported by a set of
qualitative analyses, where we visualise their sampling behaviour using data
maps and t-SNE plots.
| [
{
"created": "Fri, 15 Dec 2023 11:02:17 GMT",
"version": "v1"
}
] | 2023-12-19 | [
[
"Tan",
"Wei",
""
],
[
"Du",
"Lan",
""
],
[
"Buntine",
"Wray",
""
]
] |
2312.10136 | Zhi Zhang | Zhi Zhang, Qizhe Zhang, Zijun Gao, Renrui Zhang, Ekaterina Shutova,
Shiji Zhou, Shanghang Zhang | Gradient-based Parameter Selection for Efficient Fine-Tuning | null | CVPR2024 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | With the growing size of pre-trained models, full fine-tuning and storing all
the parameters for various downstream tasks is costly and infeasible. In this
paper, we propose a new parameter-efficient fine-tuning method, Gradient-based
Parameter Selection (GPS), demonstrating that only tuning a few selected
parameters from the pre-trained model while keeping the remainder of the model
frozen can generate similar or better performance compared with the full model
fine-tuning method. Different from the existing popular and state-of-the-art
parameter-efficient fine-tuning approaches, our method does not introduce any
additional parameters and computational costs during both the training and
inference stages. Another advantage is the model-agnostic and non-destructive
property, which eliminates the need for any other design specific to a
particular model. Compared with the full fine-tuning, GPS achieves 3.33%
(91.78% vs. 88.45%, FGVC) and 9.61% (73.1% vs. 65.57%, VTAB) improvement of the
accuracy with tuning only 0.36% parameters of the pre-trained model on average
over 24 image classification tasks; it also demonstrates a significant
improvement of 17% and 16.8% in mDice and mIoU, respectively, on medical image
segmentation task. Moreover, GPS achieves state-of-the-art performance compared
with existing PEFT methods.
| [
{
"created": "Fri, 15 Dec 2023 18:59:05 GMT",
"version": "v1"
},
{
"created": "Sat, 4 May 2024 23:24:37 GMT",
"version": "v2"
},
{
"created": "Tue, 11 Jun 2024 22:45:49 GMT",
"version": "v3"
}
] | 2024-06-13 | [
[
"Zhang",
"Zhi",
""
],
[
"Zhang",
"Qizhe",
""
],
[
"Gao",
"Zijun",
""
],
[
"Zhang",
"Renrui",
""
],
[
"Shutova",
"Ekaterina",
""
],
[
"Zhou",
"Shiji",
""
],
[
"Zhang",
"Shanghang",
""
]
] |
2312.10170 | Wei Li | Wei Li, Fu-Lin Hsu, Will Bishop, Folawiyo Campbell-Ajala, Max Lin,
Oriana Riva | UINav: A Practical Approach to Train On-Device Automation Agents | null | NAACL 2024 Industry Track | null | null | cs.HC cs.AI | http://creativecommons.org/licenses/by/4.0/ | Automation systems that can autonomously drive application user interfaces to
complete user tasks are of great benefit, especially when users are
situationally or permanently impaired. Prior automation systems do not produce
generalizable models while AI-based automation agents work reliably only in
simple, hand-crafted applications or incur high computation costs. We propose
UINav, a demonstration-based approach to train automation agents that fit
mobile devices, yet achieving high success rates with modest numbers of
demonstrations. To reduce the demonstration overhead, UINav uses a referee
model that provides users with immediate feedback on tasks where the agent
fails, and automatically augments human demonstrations to increase diversity in
training data. Our evaluation shows that with only 10 demonstrations UINav can
achieve 70% accuracy, and that with enough demonstrations it can surpass 90%
accuracy.
| [
{
"created": "Fri, 15 Dec 2023 19:37:39 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Apr 2024 17:25:57 GMT",
"version": "v2"
},
{
"created": "Thu, 4 Apr 2024 13:51:56 GMT",
"version": "v3"
},
{
"created": "Fri, 28 Jun 2024 11:25:41 GMT",
"version": "v4"
}
] | 2024-07-01 | [
[
"Li",
"Wei",
""
],
[
"Hsu",
"Fu-Lin",
""
],
[
"Bishop",
"Will",
""
],
[
"Campbell-Ajala",
"Folawiyo",
""
],
[
"Lin",
"Max",
""
],
[
"Riva",
"Oriana",
""
]
] |
2312.10237 | Paul K. Mandal | Paul K. Mandal | A Distributed Privacy Preserving Model for the Detection of Alzheimer's
Disease | 15 pages, 7 figures, 2 tables | Neural Comput & Applic (2024) | 10.1007/s00521-024-10419-4 | null | cs.LG cs.AI cs.CV cs.DC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In the era of rapidly advancing medical technologies, the segmentation of
medical data has become inevitable, necessitating the development of privacy
preserving machine learning algorithms that can train on distributed data.
Consolidating sensitive medical data is not always an option particularly due
to the stringent privacy regulations imposed by the Health Insurance
Portability and Accountability Act (HIPAA). In this paper, I introduce a HIPAA
compliant framework that can train from distributed data. I then propose a
multimodal vertical federated model for Alzheimer's Disease (AD) detection, a
serious neurodegenerative condition that can cause dementia, severely impairing
brain function and hindering simple tasks, especially without preventative
care. This vertical federated learning (VFL) model offers a distributed
architecture that enables collaborative learning across diverse sources of
medical data while respecting privacy constraints imposed by HIPAA. The VFL
architecture proposed herein offers a novel distributed architecture, enabling
collaborative learning across diverse sources of medical data while respecting
statutory privacy constraints. By leveraging multiple modalities of data, the
robustness and accuracy of AD detection can be enhanced. This model not only
contributes to the advancement of federated learning techniques but also holds
promise for overcoming the hurdles posed by data segmentation in medical
research.
| [
{
"created": "Fri, 15 Dec 2023 22:09:04 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Dec 2023 15:44:40 GMT",
"version": "v2"
},
{
"created": "Thu, 15 Aug 2024 17:10:19 GMT",
"version": "v3"
},
{
"created": "Sat, 24 Aug 2024 18:04:57 GMT",
"version": "v4"
},
{
"created": "Thu, 26 Sep 2024 21:24:00 GMT",
"version": "v5"
}
] | 2024-09-30 | [
[
"Mandal",
"Paul K.",
""
]
] |
2312.10246 | Benjamin Planche | Yuchun Liu, Benjamin Planche, Meng Zheng, Zhongpai Gao, Pierre
Sibut-Bourde, Fan Yang, Terrence Chen, Ziyan Wu | Implicit Modeling of Non-rigid Objects with Cross-Category Signals | Accepted at AAAI 2024. Paper + supplementary material | Proceedings of the AAAI Conference on Artificial Intelligence,
38(1), 2024 | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep implicit functions (DIFs) have emerged as a potent and articulate means
of representing 3D shapes. However, methods modeling object categories or
non-rigid entities have mainly focused on single-object scenarios. In this
work, we propose MODIF, a multi-object deep implicit function that jointly
learns the deformation fields and instance-specific latent codes for multiple
objects at once. Our emphasis is on non-rigid, non-interpenetrating entities
such as organs. To effectively capture the interrelation between these entities
and ensure precise, collision-free representations, our approach facilitates
signaling between category-specific fields to adequately rectify shapes. We
also introduce novel inter-object supervision: an attraction-repulsion loss is
formulated to refine contact regions between objects. Our approach is
demonstrated on various medical benchmarks, involving modeling different groups
of intricate anatomical entities. Experimental results illustrate that our
model can proficiently learn the shape representation of each organ and their
relations to others, to the point that shapes missing from unseen instances can
be consistently recovered by our method. Finally, MODIF can also propagate
semantic information throughout the population via accurate point
correspondences.
| [
{
"created": "Fri, 15 Dec 2023 22:34:17 GMT",
"version": "v1"
}
] | 2023-12-19 | [
[
"Liu",
"Yuchun",
""
],
[
"Planche",
"Benjamin",
""
],
[
"Zheng",
"Meng",
""
],
[
"Gao",
"Zhongpai",
""
],
[
"Sibut-Bourde",
"Pierre",
""
],
[
"Yang",
"Fan",
""
],
[
"Chen",
"Terrence",
""
],
[
"Wu",
"Ziyan",
""
]
] |
2312.10361 | Hai Siong Tan | H. S. Tan, Kuancheng Wang and Rafe Mcbeth | Exploring UMAP in hybrid models of entropy-based and representativeness
sampling for active learning in biomedical segmentation | 25 pages, 6 figures | Computers in Biology and Medicine, vol. 176, June 2024, 108605 | 10.1016/j.compbiomed.2024.108605 | null | cs.CV physics.med-ph | http://creativecommons.org/licenses/by/4.0/ | In this work, we study various hybrid models of entropy-based and
representativeness sampling techniques in the context of active learning in
medical segmentation, in particular examining the role of UMAP (Uniform
Manifold Approximation and Projection) as a technique for capturing
representativeness. Although UMAP has been shown viable as a general purpose
dimension reduction method in diverse areas, its role in deep learning-based
medical segmentation has not yet been extensively explored. Using the cardiac and
prostate datasets in the Medical Segmentation Decathlon for validation, we
found that a novel hybrid Entropy-UMAP sampling technique
achieved a statistically significant Dice score advantage over the random
baseline ($3.2 \%$ for cardiac, $4.5 \%$ for prostate), and attained the
highest Dice coefficient among the spectrum of 10 distinct active learning
methodologies we examined. This provides preliminary evidence that there is an
interesting synergy between entropy-based and UMAP methods when the former
precedes the latter in a hybrid model of active learning.
| [
{
"created": "Sat, 16 Dec 2023 07:40:09 GMT",
"version": "v1"
},
{
"created": "Mon, 27 May 2024 08:09:54 GMT",
"version": "v2"
}
] | 2024-05-28 | [
[
"Tan",
"H. S.",
""
],
[
"Wang",
"Kuancheng",
""
],
[
"Mcbeth",
"Rafe",
""
]
] |
2312.10385 | Huy Hoang | Huy Hoang and Tien Mai and Pradeep Varakantham | Imitate the Good and Avoid the Bad: An Incremental Approach to Safe
Reinforcement Learning | null | AAAI 2024 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | A popular framework for enforcing safe actions in Reinforcement Learning (RL)
is Constrained RL, where trajectory based constraints on expected cost (or
other cost measures) are employed to enforce safety and, more importantly, these
constraints are enforced while maximizing expected reward. Most recent
approaches for solving Constrained RL convert the trajectory based cost
constraint into a surrogate problem that can be solved using minor
modifications to RL methods. A key drawback with such approaches is an over- or
underestimation of the cost constraint at each state. Therefore, we provide an
approach that does not modify the trajectory based cost constraint and instead
imitates ``good'' trajectories and avoids ``bad'' trajectories generated from
incrementally improving policies. We employ an oracle that utilizes a reward
threshold (which is varied with learning) and the overall cost constraint to
label trajectories as ``good'' or ``bad''. A key advantage of our approach is
that we are able to work from any starting policy or set of trajectories and
improve on it. In an exhaustive set of experiments, we demonstrate that our
approach is able to outperform top benchmark approaches for solving Constrained
RL problems, with respect to expected cost, CVaR cost, or even unknown cost
constraints.
| [
{
"created": "Sat, 16 Dec 2023 08:48:46 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Dec 2023 07:55:04 GMT",
"version": "v2"
},
{
"created": "Wed, 13 Mar 2024 14:48:36 GMT",
"version": "v3"
},
{
"created": "Thu, 8 Aug 2024 03:44:21 GMT",
"version": "v4"
}
] | 2024-08-09 | [
[
"Hoang",
"Huy",
""
],
[
"Mai",
"Tien",
""
],
[
"Varakantham",
"Pradeep",
""
]
] |
2312.10560 | Luis Balderas Ruiz | Luis Balderas, Miguel Lastra and Jos\'e M. Ben\'itez | Optimizing Dense Feed-Forward Neural Networks | null | Neural Networks, Volume 171, 2024, Pages 229-241, ISSN 0893-6080, | 10.1016/j.neunet.2023.12.015 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Deep learning models have been widely used during the last decade due to
their outstanding learning and abstraction capacities. However, one of the main
challenges any scientist has to face using deep learning models is to establish
the network's architecture. Due to this difficulty, data scientists usually
build overly complex models and, as a result, most of them are computationally
intensive and impose a large memory footprint, generating huge costs,
contributing to climate change and hindering their use in computational-limited
devices. In this paper, we propose a novel feed-forward neural network
constructing method based on pruning and transfer learning. Its performance has
been thoroughly assessed in classification and regression problems. Without any
accuracy loss, our approach can compress the number of parameters by more than
70%. Furthermore, when the pruning parameter is chosen carefully, most of the
refined models outperform the original ones. We also evaluate the transfer
learning level by comparing the refined model against the original one, training
from scratch a neural network with the same hyperparameters as the optimized
model. The
results obtained show that our constructing method not only helps in the design
of more efficient models but also more effective ones.
| [
{
"created": "Sat, 16 Dec 2023 23:23:16 GMT",
"version": "v1"
}
] | 2024-10-01 | [
[
"Balderas",
"Luis",
""
],
[
"Lastra",
"Miguel",
""
],
[
"Benítez",
"José M.",
""
]
] |
2312.10663 | Jos\'e L. Risco-Mart\'in | Patricia Arroba, Jos\'e L. Risco-Mart\'in, Jos\'e M. Moya and Jos\'e
L. Ayala | Heuristics and Metaheuristics for Dynamic Management of Computing and
Cooling Energy in Cloud Data Centers | null | Software: Practice and Experience, 48(10), 2018 | 10.1002/spe.2603 | null | cs.DC cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Data centers handle impressive high figures in terms of energy consumption,
and the growing popularity of Cloud applications is intensifying their
computational demand. Moreover, the cooling needed to keep the servers within
reliable thermal operating conditions also has an impact on the thermal
distribution of the data room, thus affecting the servers' power leakage.
Optimizing the energy consumption of these infrastructures is a major challenge
to place data centers on a more scalable scenario. Thus, understanding the
relationship between power, temperature, consolidation and performance is
crucial to enable an energy-efficient management at the data center level. In
this research, we propose novel power and thermal-aware strategies and models
to provide joint cooling and computing optimizations from a local perspective
based on the global energy consumption of metaheuristic-based optimizations.
Our results show that the combined awareness from both metaheuristic and best
fit decreasing algorithms allow us to describe the global energy into faster
and lighter optimization strategies that may be used during runtime. This
approach allows us to improve the energy efficiency of the data center,
considering both computing and cooling infrastructures, by up to 21.74\%
while maintaining quality of service.
| [
{
"created": "Sun, 17 Dec 2023 09:40:36 GMT",
"version": "v1"
}
] | 2023-12-19 | [
[
"Arroba",
"Patricia",
""
],
[
"Risco-Martín",
"José L.",
""
],
[
"Moya",
"José M.",
""
],
[
"Ayala",
"José L.",
""
]
] |
2312.10741 | Yu Zhang | Yu Zhang, Rongjie Huang, Ruiqi Li, JinZheng He, Yan Xia, Feiyang Chen,
Xinyu Duan, Baoxing Huai, Zhou Zhao | StyleSinger: Style Transfer for Out-of-Domain Singing Voice Synthesis | Accepted by AAAI 2024 | Proceedings of the AAAI Conference on Artificial Intelligence,
38(17), 19597-19605. (2024) | 10.1609/aaai.v38i17.29932 | null | eess.AS cs.CL cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Style transfer for out-of-domain (OOD) singing voice synthesis (SVS) focuses
on generating high-quality singing voices with unseen styles (such as timbre,
emotion, pronunciation, and articulation skills) derived from reference singing
voice samples. However, the endeavor to model the intricate nuances of singing
voice styles is an arduous task, as singing voices possess a remarkable degree
of expressiveness. Moreover, existing SVS methods encounter a decline in the
quality of synthesized singing voices in OOD scenarios, as they rest upon the
assumption that the target vocal attributes are discernible during the training
phase. To overcome these challenges, we propose StyleSinger, the first singing
voice synthesis model for zero-shot style transfer of out-of-domain reference
singing voice samples. StyleSinger incorporates two critical approaches for
enhanced effectiveness: 1) the Residual Style Adaptor (RSA) which employs a
residual quantization module to capture diverse style characteristics in
singing voices, and 2) the Uncertainty Modeling Layer Normalization (UMLN) to
perturb the style attributes within the content representation during the
training phase and thus improve the model generalization. Our extensive
evaluations in zero-shot style transfer undeniably establish that StyleSinger
outperforms baseline models in both audio quality and similarity to the
reference singing voice samples. Access to singing voice samples can be found
at https://stylesinger.github.io/.
| [
{
"created": "Sun, 17 Dec 2023 15:26:16 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Jan 2024 12:59:20 GMT",
"version": "v2"
},
{
"created": "Thu, 12 Sep 2024 05:36:06 GMT",
"version": "v3"
}
] | 2024-09-24 | [
[
"Zhang",
"Yu",
""
],
[
"Huang",
"Rongjie",
""
],
[
"Li",
"Ruiqi",
""
],
[
"He",
"JinZheng",
""
],
[
"Xia",
"Yan",
""
],
[
"Chen",
"Feiyang",
""
],
[
"Duan",
"Xinyu",
""
],
[
"Huai",
"Baoxing",
""
],
[
"Zhao",
"Zhou",
""
]
] |
2312.10937 | David Hason Rudd | David Hason Rudd, Huan Huo, Guandong Xu | An Extended Variational Mode Decomposition Algorithm Developed Speech
Emotion Recognition Performance | 12 pages | Advances in Knowledge Discovery and Data Mining. PAKDD 2023.
Lecture Notes in Computer Science(), vol 13937. Springer, Cham | 10.1007/978-3-031-33380-4_17 | null | cs.SD cs.AI cs.HC cs.LG cs.MM eess.AS | http://creativecommons.org/licenses/by/4.0/ | Emotion recognition (ER) from speech signals is a robust approach since it
cannot be imitated like facial expressions or text-based sentiment analysis.
The valuable information underlying the emotions is significant for human-computer
interaction, enabling intelligent machines to interact with sensitivity in the
real world. Previous ER studies through speech signal processing have focused
exclusively on associations between different signal mode decomposition methods
and hidden informative features. However, improper decomposition parameter
selections lead to informative signal component losses due to mode duplicating
and mixing. In contrast, the current study proposes VGG-optiVMD, an empowered
variational mode decomposition algorithm, to distinguish meaningful speech
features and automatically select the number of decomposed modes and optimum
balancing parameter for the data fidelity constraint by assessing their effects
on the VGG16 flattening output layer. Various feature vectors were employed to
train the VGG16 network on different databases and assess VGG-optiVMD
reproducibility and reliability. One, two, and three-dimensional feature
vectors were constructed by concatenating Mel-frequency cepstral coefficients,
Chromagram, Mel spectrograms, Tonnetz diagrams, and spectral centroids. Results
confirmed a synergistic relationship between the fine-tuning of the signal
sample rate and decomposition parameters with classification accuracy,
achieving state-of-the-art 96.09% accuracy in predicting seven emotions on the
Berlin EMO-DB database.
| [
{
"created": "Mon, 18 Dec 2023 05:24:03 GMT",
"version": "v1"
}
] | 2023-12-19 | [
[
"Rudd",
"David Hason",
""
],
[
"Huo",
"Huan",
""
],
[
"Xu",
"Guandong",
""
]
] |
2312.10949 | David Hason Rudd | David Hason Rudd, Huan Huo, Guandong Xu | Leveraged Mel spectrograms using Harmonic and Percussive Components in
Speech Emotion Recognition | 12 pages | Advances in Knowledge Discovery and Data Mining. PAKDD 2022.
Lecture Notes in Computer Science(), vol 13281. Springer, Cham | 10.1007/978-3-031-05936-0_31 | null | cs.SD cs.CV cs.HC cs.LG cs.MM eess.AS | http://creativecommons.org/licenses/by/4.0/ | Speech Emotion Recognition (SER) affective technology enables the intelligent
embedded devices to interact with sensitivity. Similarly, call centre employees
recognise customers' emotions from their pitch, energy, and tone of voice so as
to modify their speech for a high-quality interaction with customers. This work
explores, for the first time, the effects of the harmonic and percussive
components of Mel spectrograms in SER. We attempt to leverage the Mel
spectrogram by decomposing distinguishable acoustic features for exploitation
in our proposed architecture, which includes a novel feature map generator
algorithm, a CNN-based network feature extractor and a multi-layer perceptron
(MLP) classifier. This study specifically focuses on effective data
augmentation techniques for building an enriched hybrid-based feature map. This
process results in a function that outputs a 2D image so that it can be used as
input data for a pre-trained CNN-VGG16 feature extractor. Furthermore, we also
investigate other acoustic features such as MFCCs, chromagram, spectral
contrast, and the tonnetz to assess our proposed framework. A test accuracy of
92.79% on the Berlin EMO-DB database is achieved. Our result is higher than
those of previous works using CNN-VGG16.
| [
{
"created": "Mon, 18 Dec 2023 05:55:46 GMT",
"version": "v1"
}
] | 2023-12-19 | [
[
"Rudd",
"David Hason",
""
],
[
"Huo",
"Huan",
""
],
[
"Xu",
"Guandong",
""
]
] |
2312.10983 | Jinxiang Lai | Jinxiang Lai, Wenlong Wu, Bin-Bin Gao, Jun Liu, Jiawei Zhan, Congchong
Nie, Yi Zeng, Chengjie Wang | MatchDet: A Collaborative Framework for Image Matching and Object
Detection | null | AAAI 2024 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Image matching and object detection are two fundamental and challenging
tasks, while many related applications consider them two individual tasks (i.e.
task-individual). In this paper, a collaborative framework called MatchDet
(i.e. task-collaborative) is proposed for image matching and object detection
to obtain mutual improvements. To achieve the collaborative learning of the two
tasks, we propose three novel modules, including a Weighted Spatial Attention
Module (WSAM) for Detector, and Weighted Attention Module (WAM) and Box Filter
for Matcher. Specifically, the WSAM highlights the foreground regions of target
image to benefit the subsequent detector, the WAM enhances the connection
between the foreground regions of pair images to ensure high-quality matches,
and Box Filter mitigates the impact of false matches. We evaluate the
approaches on a new benchmark with two datasets called Warp-COCO and
miniScanNet. Experimental results show our approaches are effective and achieve
competitive improvements.
| [
{
"created": "Mon, 18 Dec 2023 07:11:45 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Jan 2024 04:36:43 GMT",
"version": "v2"
},
{
"created": "Wed, 17 Jul 2024 04:03:31 GMT",
"version": "v3"
}
] | 2024-07-18 | [
[
"Lai",
"Jinxiang",
""
],
[
"Wu",
"Wenlong",
""
],
[
"Gao",
"Bin-Bin",
""
],
[
"Liu",
"Jun",
""
],
[
"Zhan",
"Jiawei",
""
],
[
"Nie",
"Congchong",
""
],
[
"Zeng",
"Yi",
""
],
[
"Wang",
"Chengjie",
""
]
] |
2312.11051 | Pengpeng Liang | Shihao Feng, Pengpeng Liang, Jin Gao, Erkang Cheng | Multi-Correlation Siamese Transformer Network with Dense Connection for
3D Single Object Tracking | Preprint version for IEEE Robotics and Automation Letters (RAL) | IEEE Robotics and Automation Letters (RAL), vol. 8, no. 12, pp.
8066-8073, 2023 | 10.1109/LRA.2023.3325715 | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Point cloud-based 3D object tracking is an important task in autonomous
driving. Though great advances regarding Siamese-based 3D tracking have been
made recently, it remains challenging to learn the correlation between the
template and search branches effectively with the sparse LIDAR point cloud
data. Instead of performing correlation of the two branches at just one point
in the network, in this paper, we present a multi-correlation Siamese
Transformer network that has multiple stages and carries out feature
correlation at the end of each stage based on sparse pillars. More
specifically, in each stage, self-attention is first applied to each branch
separately to capture the non-local context information. Then, cross-attention
is used to inject the template information into the search area. This strategy
allows the feature learning of the search area to be aware of the template
while keeping the individual characteristics of the template intact. To enable
the network to easily preserve the information learned at different stages and
ease the optimization, for the search area, we densely connect the initial
input sparse pillars and the output of each stage to all subsequent stages and
the target localization network, which converts pillars to bird's eye view
(BEV) feature maps and predicts the state of the target with a small densely
connected convolution network. Deep supervision is added to each stage to
further boost the performance as well. The proposed algorithm is evaluated on
the popular KITTI, nuScenes, and Waymo datasets, and the experimental results
show that our method achieves promising performance compared with the
state-of-the-art. Ablation study that shows the effectiveness of each component
is provided as well. Code is available at
https://github.com/liangp/MCSTN-3DSOT.
| [
{
"created": "Mon, 18 Dec 2023 09:33:49 GMT",
"version": "v1"
}
] | 2023-12-19 | [
[
"Feng",
"Shihao",
""
],
[
"Liang",
"Pengpeng",
""
],
[
"Gao",
"Jin",
""
],
[
"Cheng",
"Erkang",
""
]
] |
2312.11076 | Rebeca D\'iaz-Redondo | H\'ector Cerezo-Costas, Ana Fern\'andez Vilas, Manuela
Mart\'in-Vicente, Rebeca P. D\'iaz-Redondo | Discovering Geo-dependent Stories by Combining Density-based Clustering
and Thread-based Aggregation techniques | 11 pages, 12 figures, journal | Expert Systems with Applications, 2018, vol. 95, p. 32-42 | 10.1016/j.eswa.2017.11.019 | null | cs.SI cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Citizens are actively interacting with their surroundings, especially through
social media. Not only do shared posts give important information about what is
happening (from the users' perspective), but also the metadata linked to these
posts offer relevant data, such as the GPS-location in Location-based Social
Networks (LBSNs). In this paper we introduce a global analysis of the
geo-tagged posts in social media which supports (i) the detection of unexpected
behavior in the city and (ii) the analysis of the posts to infer what is
happening. The former is obtained by applying density-based clustering
techniques, whereas the latter is consequence of applying natural language
processing. We have applied our methodology to a dataset obtained from
Instagram activity in New York City for seven months obtaining promising
results. The developed algorithms require very low resources, being able to
analyze millions of data-points in commodity hardware in less than one hour
without applying complex parallelization techniques. Furthermore, the solution
can be easily adapted to other geo-tagged data sources without extra effort.
| [
{
"created": "Mon, 18 Dec 2023 10:17:12 GMT",
"version": "v1"
}
] | 2023-12-19 | [
[
"Cerezo-Costas",
"Héctor",
""
],
[
"Vilas",
"Ana Fernández",
""
],
[
"Martín-Vicente",
"Manuela",
""
],
[
"Díaz-Redondo",
"Rebeca P.",
""
]
] |
2312.11344 | Christoph Tillmann | Christoph Tillmann, Aashka Trivedi, Sara Rosenthal, Santosh Borse,
Rong Zhang, Avirup Sil, Bishwaranjan Bhattacharjee | Muted: Multilingual Targeted Offensive Speech Identification and
Visualization | null | EMNLP 2023 Demo Track | null | null | cs.CL cs.AI cs.HC | http://creativecommons.org/licenses/by-sa/4.0/ | Offensive language such as hate, abuse, and profanity (HAP) occurs in various
content on the web. While previous work has mostly dealt with sentence level
annotations, there have been a few recent attempts to identify offensive spans
as well. We build upon this work and introduce Muted, a system to identify
multilingual HAP content by displaying offensive arguments and their targets
using heat maps to indicate their intensity. Muted can leverage any
transformer-based HAP-classification model and its attention mechanism
out-of-the-box to identify toxic spans, without further fine-tuning. In
addition, we use the spaCy library to identify the specific targets and
arguments for the words predicted by the attention heatmaps. We present the
model's performance on identifying offensive spans and their targets in
existing datasets and present new annotations on German text. Finally, we
demonstrate our proposed visualization tool on multilingual inputs.
| [
{
"created": "Mon, 18 Dec 2023 16:50:27 GMT",
"version": "v1"
}
] | 2023-12-19 | [
[
"Tillmann",
"Christoph",
""
],
[
"Trivedi",
"Aashka",
""
],
[
"Rosenthal",
"Sara",
""
],
[
"Borse",
"Santosh",
""
],
[
"Zhang",
"Rong",
""
],
[
"Sil",
"Avirup",
""
],
[
"Bhattacharjee",
"Bishwaranjan",
""
]
] |
2312.11375 | Rebeca D\'iaz-Redondo | Francisco Troncoso-Pastoriza, Pablo Egu\'ia-Oller, Rebeca P.
D\'iaz-Redondo, Enrique Granada-\'Alvarez | Use of BIM Data as Input and Output for Improved Detection of Lighting
Elements in Buildings | null | Automation in Construction, 2019, vol. 106, p. 102852 | 10.1016/j.autcon.2019.102852 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a complete method for the automatic detection,
identification and localization of lighting elements in buildings, leveraging
the available building information modeling (BIM) data of a building and
feeding the BIM model with the new collected information, which is key for
energy-saving strategies. The detection system is heavily improved from our
previous work, with the following two main contributions: (i) a new refinement
algorithm to provide a better detection rate and identification performance
with comparable computational resources and (ii) a new plane estimation,
filtering and projection step to leverage the BIM information earlier for lamps
that are both hanging and embedded. The two modifications are thoroughly tested
in five different case studies, yielding better results in terms of detection,
identification and localization.
| [
{
"created": "Mon, 18 Dec 2023 17:38:49 GMT",
"version": "v1"
}
] | 2023-12-19 | [
[
"Troncoso-Pastoriza",
"Francisco",
""
],
[
"Eguía-Oller",
"Pablo",
""
],
[
"Díaz-Redondo",
"Rebeca P.",
""
],
[
"Granada-Álvarez",
"Enrique",
""
]
] |
2312.11380 | Rebeca D\'iaz-Redondo | Francisco Troncoso-Pastoriza, Pablo Egu\'ia-Oller, Rebeca P.
D\'iaz-Redondo, Enrique Granada-\'Alvarez, Aitor Erkoreka | Orientation-Constrained System for Lamp Detection in Buildings Based on
Computer Vision | null | Sensors, 2019, vol. 19, no 7, p. 1516 | 10.3390/s19071516 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computer vision is used in this work to detect lighting elements in buildings
with the goal of improving the accuracy of previous methods to provide a
precise inventory of the location and state of lamps. Using the framework
developed in our previous works, we introduce two new modifications to enhance
the system: first, a constraint on the orientation of the detected poses in the
optimization methods for both the initial and the refined estimates based on
the geometric information of the building information modelling (BIM) model;
second, an additional reprojection error filtering step to discard the
erroneous poses introduced with the orientation restrictions, keeping the
identification and localization errors low while greatly increasing the number
of detections. These enhancements are tested in five different case studies
with more than 30,000 images, with results showing improvements in the number
of detections, the percentage of correct model and state identifications, and
the distance between detections and reference positions.
| [
{
"created": "Mon, 18 Dec 2023 17:43:55 GMT",
"version": "v1"
}
] | 2023-12-19 | [
[
"Troncoso-Pastoriza",
"Francisco",
""
],
[
"Eguía-Oller",
"Pablo",
""
],
[
"Díaz-Redondo",
"Rebeca P.",
""
],
[
"Granada-Álvarez",
"Enrique",
""
],
[
"Erkoreka",
"Aitor",
""
]
] |
2312.11436 | Nikhil Parthasarathy | Nikhil Parthasarathy, Olivier J. H\'enaff, Eero P. Simoncelli | Layerwise complexity-matched learning yields an improved model of
cortical area V2 | 31 pages, 13 figures | Transactions on Machine Learning Research, Jun 2024 | null | null | q-bio.NC cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Human ability to recognize complex visual patterns arises through
transformations performed by successive areas in the ventral visual cortex.
Deep neural networks trained end-to-end for object recognition approach human
capabilities, and offer the best descriptions to date of neural responses in
the late stages of the hierarchy. But these networks provide a poor account of
the early stages, compared to traditional hand-engineered models, or models
optimized for coding efficiency or prediction. Moreover, the gradient
backpropagation used in end-to-end learning is generally considered to be
biologically implausible. Here, we overcome both of these limitations by
developing a bottom-up self-supervised training methodology that operates
independently on successive layers. Specifically, we maximize feature
similarity between pairs of locally-deformed natural image patches, while
decorrelating features across patches sampled from other images. Crucially, the
deformation amplitudes are adjusted proportionally to receptive field sizes in
each layer, thus matching the task complexity to the capacity at each stage of
processing. In comparison with architecture-matched versions of previous
models, we demonstrate that our layerwise complexity-matched learning (LCL)
formulation produces a two-stage model (LCL-V2) that is better aligned with
selectivity properties and neural activity in primate area V2. We demonstrate
that the complexity-matched learning paradigm is responsible for much of the
emergence of the improved biological alignment. Finally, when the two-stage
model is used as a fixed front-end for a deep network trained to perform object
recognition, the resultant model (LCL-V2Net) is significantly better than
standard end-to-end self-supervised, supervised, and adversarially-trained
models in terms of generalization to out-of-distribution tasks and alignment
with human behavior.
| [
{
"created": "Mon, 18 Dec 2023 18:37:02 GMT",
"version": "v1"
},
{
"created": "Sun, 3 Mar 2024 16:31:58 GMT",
"version": "v2"
},
{
"created": "Thu, 18 Jul 2024 23:41:24 GMT",
"version": "v3"
}
] | 2024-07-22 | [
[
"Parthasarathy",
"Nikhil",
""
],
[
"Hénaff",
"Olivier J.",
""
],
[
"Simoncelli",
"Eero P.",
""
]
] |
2312.11554 | Yu Wang | Yu Wang, Zexue He, Zhankui He, Hao Xu, Julian McAuley | Deciphering Compatibility Relationships with Textual Descriptions via
Extraction and Explanation | null | AAAI 2024 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Understanding and accurately explaining compatibility relationships between
fashion items is a challenging problem in the burgeoning domain of AI-driven
outfit recommendations. Present models, while making strides in this area,
still occasionally fall short, offering explanations that can be elementary and
repetitive. This work aims to address these shortcomings by introducing the
Pair Fashion Explanation (PFE) dataset, a unique resource that has been curated
to illuminate these compatibility relationships. Furthermore, we propose an
innovative two-stage pipeline model that leverages this dataset. This
fine-tuning allows the model to generate explanations that convey the
compatibility relationships between items. Our experiments showcase the model's
potential in crafting descriptions that are knowledgeable, aligned with
ground-truth matching correlations, and that produce understandable and
informative descriptions, as assessed by both automatic metrics and human
evaluation. Our code and data are released at
https://github.com/wangyu-ustc/PairFashionExplanation
| [
{
"created": "Sun, 17 Dec 2023 05:45:49 GMT",
"version": "v1"
}
] | 2023-12-20 | [
[
"Wang",
"Yu",
""
],
[
"He",
"Zexue",
""
],
[
"He",
"Zhankui",
""
],
[
"Xu",
"Hao",
""
],
[
"McAuley",
"Julian",
""
]
] |
2312.11753 | Juho Kim | Juho Kim | Recording and Describing Poker Hands | 8 pages, 2 figures, accepted to the 2024 IEEE Conference on Games | 2024 IEEE Conference on Games (CoG), Milan, Italy, 2024, pp. 1-8. | 10.1109/CoG60054.2024.10645611 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper introduces the Poker Hand History (PHH) file format, designed to
standardize the recording of poker hands across different game variants.
Despite poker's widespread popularity in the mainstream culture as a mind sport
and its prominence in the field of artificial intelligence (AI) research as a
benchmark for imperfect information AI agents, it lacks a consistent format
that humans can use to document poker hands across different variants that can
also easily be parsed by machines. To address this gap in the literature, we
propose the PHH format which provides a concise human-readable machine-friendly
representation of hand history that comprehensively captures various details of
the hand, ranging from initial game parameters and actions to contextual
parameters including but not limited to the venue, players, and time control
information. In the supplementary, we provide 10,088 hands covering 11
different variants in the PHH format. The full specification is available on
https://github.com/uoftcprg/phh-std
| [
{
"created": "Mon, 18 Dec 2023 23:39:01 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Jan 2024 06:49:19 GMT",
"version": "v2"
},
{
"created": "Thu, 4 Apr 2024 08:06:03 GMT",
"version": "v3"
},
{
"created": "Fri, 10 May 2024 20:22:28 GMT",
"version": "v4"
},
{
"created": "Thu, 29 Aug 2024 18:13:37 GMT",
"version": "v5"
}
] | 2024-09-02 | [
[
"Kim",
"Juho",
""
]
] |
2312.11952 | Collin Leiber | Collin Leiber and Dominik Mautz and Claudia Plant and Christian B\"ohm | Automatic Parameter Selection for Non-Redundant Clustering | null | Proceedings of the 2022 SIAM International Conference on Data
Mining (SDM) (pp. 226-234). Society for Industrial and Applied Mathematics | 10.1137/1.9781611977172.26 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-dimensional datasets often contain multiple meaningful clusterings in
different subspaces. For example, objects can be clustered either by color,
weight, or size, revealing different interpretations of the given dataset. A
variety of approaches are able to identify such non-redundant clusterings.
However, most of these methods require the user to specify the expected number
of subspaces and clusters for each subspace. Stating these values is a
non-trivial problem and usually requires detailed knowledge of the input
dataset. In this paper, we propose a framework that utilizes the Minimum
Description Length Principle (MDL) to detect the number of subspaces and
clusters per subspace automatically. We describe an efficient procedure that
greedily searches the parameter space by splitting and merging subspaces and
clusters within subspaces. Additionally, an encoding strategy is introduced
that allows us to detect outliers in each subspace. Extensive experiments show
that our approach is highly competitive to state-of-the-art methods.
| [
{
"created": "Tue, 19 Dec 2023 08:53:00 GMT",
"version": "v1"
}
] | 2023-12-20 | [
[
"Leiber",
"Collin",
""
],
[
"Mautz",
"Dominik",
""
],
[
"Plant",
"Claudia",
""
],
[
"Böhm",
"Christian",
""
]
] |
2312.12006 | Md.Rafiul Biswas Mr. | Md. Rafiul Biswas, Ashhadul Islam, Zubair Shah, Wajdi Zaghouani, Samir
Brahim Belhaouari | Can ChatGPT be Your Personal Medical Assistant? | 5 pages, 7 figures, two tables, Accepted on The International
Symposium on Foundation and Large Language Models (FLLM2023) | The International Symposium on Foundation and Large Language
Models (FLLM2023) https://fllm-conference.org/2023/ | null | null | cs.CL cs.SI | http://creativecommons.org/licenses/by/4.0/ | The advanced large language model (LLM) ChatGPT has shown its potential in
different domains and remains unbeaten due to its characteristics compared to
other LLMs. This study aims to evaluate the potential of using a fine-tuned
ChatGPT model as a personal medical assistant in the Arabic language. To do so,
this study uses publicly available online question-and-answer datasets in the
Arabic language. There are almost 430K questions and answers for 20
disease-specific categories. The GPT-3.5-turbo model was fine-tuned with a portion
of this dataset. The performance of this fine-tuned model was evaluated through
automated and human evaluation. The automated evaluations include perplexity,
coherence, similarity, and token count. Native Arabic speakers with medical
knowledge evaluated the generated text by calculating relevance, accuracy,
precision, logic, and originality. The overall result shows that ChatGPT has a
bright future in medical assistance.
| [
{
"created": "Tue, 19 Dec 2023 09:54:27 GMT",
"version": "v1"
}
] | 2023-12-20 | [
[
"Biswas",
"Md. Rafiul",
""
],
[
"Islam",
"Ashhadul",
""
],
[
"Shah",
"Zubair",
""
],
[
"Zaghouani",
"Wajdi",
""
],
[
"Belhaouari",
"Samir Brahim",
""
]
] |
2312.12050 | Collin Leiber | Lena G. M. Bauer and Collin Leiber and Christian B\"ohm and Claudia
Plant | Extension of the Dip-test Repertoire -- Efficient and Differentiable
p-value Calculation for Clustering | null | Proceedings of the 2023 SIAM International Conference on Data
Mining (SDM) (pp. 109-117). Society for Industrial and Applied Mathematics | 10.1137/1.9781611977653.ch13 | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the last decade, the Dip-test of unimodality has gained increasing
interest in the data mining community as it is a parameter-free statistical
test that reliably rates the modality in one-dimensional samples. It returns a
so called Dip-value and a corresponding probability for the sample's
unimodality (Dip-p-value). These two values share a sigmoidal relationship.
However, the specific transformation is dependent on the sample size. Many
Dip-based clustering algorithms use bootstrapped look-up tables translating
Dip- to Dip-p-values for a certain limited amount of sample sizes. We propose a
specifically designed sigmoid function as a substitute for these
state-of-the-art look-up tables. This accelerates computation and provides an
approximation of the Dip- to Dip-p-value transformation for every single sample
size. Further, it is differentiable and can therefore easily be integrated in
learning schemes using gradient descent. We showcase this by exploiting our
function in a novel subspace clustering algorithm called Dip'n'Sub. We
highlight in extensive experiments the various benefits of our proposal.
| [
{
"created": "Tue, 19 Dec 2023 11:14:37 GMT",
"version": "v1"
}
] | 2023-12-20 | [
[
"Bauer",
"Lena G. M.",
""
],
[
"Leiber",
"Collin",
""
],
[
"Böhm",
"Christian",
""
],
[
"Plant",
"Claudia",
""
]
] |
2312.12115 | Gwladys Kelodjou | Gwladys Kelodjou, Laurence Roz\'e, V\'eronique Masson, Luis
Gal\'arraga, Romaric Gaudel, Maurice Tchuente, Alexandre Termier | Shaping Up SHAP: Enhancing Stability through Layer-Wise Neighbor
Selection | null | AAAI Conference on Artificial Intelligence, 2024 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Machine learning techniques, such as deep learning and ensemble methods, are
widely used in various domains due to their ability to handle complex
real-world tasks. However, their black-box nature has raised multiple concerns
about the fairness, trustworthiness, and transparency of computer-assisted
decision-making. This has led to the emergence of local post-hoc explainability
methods, which offer explanations for individual decisions made by black-box
algorithms. Among these methods, Kernel SHAP is widely used due to its
model-agnostic nature and its well-founded theoretical framework. Despite these
strengths, Kernel SHAP suffers from high instability: different executions of
the method with the same inputs can lead to significantly different
explanations, which diminishes the relevance of the explanations. The
contribution of this paper is two-fold. On the one hand, we show that Kernel
SHAP's instability is caused by its stochastic neighbor selection procedure,
which we adapt to achieve full stability without compromising explanation
fidelity. On the other hand, we show that by restricting the neighbors
generation to perturbations of size 1 -- which we call the coalitions of Layer
1 -- we obtain a novel feature-attribution method that is fully stable,
computationally efficient, and still meaningful.
| [
{
"created": "Tue, 19 Dec 2023 12:46:22 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Jun 2024 17:35:02 GMT",
"version": "v2"
}
] | 2024-06-18 | [
[
"Kelodjou",
"Gwladys",
""
],
[
"Rozé",
"Laurence",
""
],
[
"Masson",
"Véronique",
""
],
[
"Galárraga",
"Luis",
""
],
[
"Gaudel",
"Romaric",
""
],
[
"Tchuente",
"Maurice",
""
],
[
"Termier",
"Alexandre",
""
]
] |
2312.12142 | Zhenhua Yang | Zhenhua Yang, Dezhi Peng, Yuxin Kong, Yuyi Zhang, Cong Yao, Lianwen
Jin | FontDiffuser: One-Shot Font Generation via Denoising Diffusion with
Multi-Scale Content Aggregation and Style Contrastive Learning | Accepted to AAAI 2024; Github Page:
https://github.com/yeungchenwa/FontDiffuser | 38th AAAI Conference on Artificial Intelligence (AAAI2024),
Vancouver, BC, Canada, 2024 | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic font generation is an imitation task, which aims to create a font
library that mimics the style of reference images while preserving the content
from source images. Although existing font generation methods have achieved
satisfactory performance, they still struggle with complex characters and large
style variations. To address these issues, we propose FontDiffuser, a
diffusion-based image-to-image one-shot font generation method, which
innovatively models the font imitation task as a noise-to-denoise paradigm. In
our method, we introduce a Multi-scale Content Aggregation (MCA) block, which
effectively combines global and local content cues across different scales,
leading to enhanced preservation of intricate strokes of complex characters.
Moreover, to better manage the large variations in style transfer, we propose a
Style Contrastive Refinement (SCR) module, which is a novel structure for style
representation learning. It utilizes a style extractor to disentangle styles
from images, subsequently supervising the diffusion model via a meticulously
designed style contrastive loss. Extensive experiments demonstrate
FontDiffuser's state-of-the-art performance in generating diverse characters
and styles. It consistently excels on complex characters and large style
changes compared to previous methods. The code is available at
https://github.com/yeungchenwa/FontDiffuser.
| [
{
"created": "Tue, 19 Dec 2023 13:23:20 GMT",
"version": "v1"
}
] | 2023-12-20 | [
[
"Yang",
"Zhenhua",
""
],
[
"Peng",
"Dezhi",
""
],
[
"Kong",
"Yuxin",
""
],
[
"Zhang",
"Yuyi",
""
],
[
"Yao",
"Cong",
""
],
[
"Jin",
"Lianwen",
""
]
] |
2312.12439 | Shi-Hai Sun | Tingqin Lai, Xiaolin Liang, Yi Zhu, Xinyi Wu, Lianye Liao, Xuelin
Yuan, Ping Su and Shihai Sun | Single-pixel 3D imaging based on fusion temporal data of single photon
detector and millimeter-wave radar | Accepted by Chinese Optics Letters, and comments are welcome | Chinese Optics Letters, Vol.2, No.2, 2024 | 10.3788/COL202422.022701 | null | cs.CV physics.optics | http://creativecommons.org/licenses/by/4.0/ | Recently, there has been increased attention towards 3D imaging using
single-pixel single-photon detection (also known as temporal data) due to its
potential advantages in terms of cost and power efficiency. However, to
eliminate the symmetry blur in the reconstructed images, a fixed background is
required. This paper proposes a fusion-data-based 3D imaging method that
utilizes a single-pixel single-photon detector and a millimeter-wave radar to
capture temporal histograms of a scene from multiple perspectives.
Subsequently, the 3D information can be reconstructed from the one-dimensional
fusion temporal data by using Artificial Neural Network (ANN). Both the
simulation and experimental results demonstrate that our fusion method
effectively eliminates symmetry blur and improves the quality of the
reconstructed images.
| [
{
"created": "Fri, 20 Oct 2023 13:03:48 GMT",
"version": "v1"
}
] | 2024-02-28 | [
[
"Lai",
"Tingqin",
""
],
[
"Liang",
"Xiaolin",
""
],
[
"Zhu",
"Yi",
""
],
[
"Wu",
"Xinyi",
""
],
[
"Liao",
"Lianye",
""
],
[
"Yuan",
"Xuelin",
""
],
[
"Su",
"Ping",
""
],
[
"Sun",
"Shihai",
""
]
] |
2312.12606 | Li Ding | Li Ding, Lee Spector | Optimizing Neural Networks with Gradient Lexicase Selection | ICLR 2022 | International Conference on Learning Representations (2022) | null | null | cs.LG cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One potential drawback of using aggregated performance measurement in machine
learning is that models may learn to accept higher errors on some training
cases as compromises for lower errors on others, with the lower errors actually
being instances of overfitting. This can lead to both stagnation at local
optima and poor generalization. Lexicase selection is an uncompromising method
developed in evolutionary computation, which selects models on the basis of
sequences of individual training case errors instead of using aggregated
metrics such as loss and accuracy. In this paper, we investigate how lexicase
selection, in its general form, can be integrated into the context of deep
learning to enhance generalization. We propose Gradient Lexicase Selection, an
optimization framework that combines gradient descent and lexicase selection in
an evolutionary fashion. Our experimental results demonstrate that the proposed
method improves the generalization performance of various widely-used deep
neural network architectures across three image classification benchmarks.
Additionally, qualitative analysis suggests that our method assists networks in
learning more diverse representations. Our source code is available on GitHub:
https://github.com/ld-ing/gradient-lexicase.
| [
{
"created": "Tue, 19 Dec 2023 21:21:25 GMT",
"version": "v1"
}
] | 2023-12-21 | [
[
"Ding",
"Li",
""
],
[
"Spector",
"Lee",
""
]
] |
2312.12773 | Carol Anderson | Carol Anderson and Phil Crone (Ancestry.com) | Segmenting Messy Text: Detecting Boundaries in Text Derived from
Historical Newspaper Images | 8 pages, 4 figures | 2020 25th International Conference on Pattern Recognition (ICPR),
Milan, Italy, 2021, pp. 5543-5550 | 10.1109/ICPR48806.2021.9413279 | null | cs.CV cs.CL cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Text segmentation, the task of dividing a document into sections, is often a
prerequisite for performing additional natural language processing tasks.
Existing text segmentation methods have typically been developed and tested
using clean, narrative-style text with segments containing distinct topics.
Here we consider a challenging text segmentation task: dividing newspaper
marriage announcement lists into units of one announcement each. In many cases
the information is not structured into sentences, and adjacent segments are not
topically distinct from each other. In addition, the text of the announcements,
which is derived from images of historical newspapers via optical character
recognition, contains many typographical errors. As a result, these
announcements are not amenable to segmentation with existing techniques. We
present a novel deep learning-based model for segmenting such text and show
that it significantly outperforms an existing state-of-the-art method on our
task.
| [
{
"created": "Wed, 20 Dec 2023 05:17:06 GMT",
"version": "v1"
}
] | 2023-12-21 | [
[
"Anderson",
"Carol",
"",
"Ancestry.com"
],
[
"Crone",
"Phil",
"",
"Ancestry.com"
]
] |
2312.12881 | Julian Sienkiewicz | Stanis{\l}aw Gizi\'nski, Paulina Kaczy\'nska, Hubert Ruczy\'nski,
Emilia Wi\'snios, Bartosz Pieli\'nski, Przemys{\l}aw Biecek, Julian
Sienkiewicz | Big Tech influence over AI research revisited: memetic analysis of
attribution of ideas to affiliation | null | Journal of Informetrics 18(4), 101572 (2024) | 10.1016/j.joi.2024.101572 | null | physics.soc-ph cs.CL cs.SI | http://creativecommons.org/licenses/by/4.0/ | There exists a growing discourse around the domination of Big Tech on the
landscape of artificial intelligence (AI) research, yet our comprehension of
this phenomenon remains cursory. This paper aims to broaden and deepen our
understanding of Big Tech's reach and power within AI research. It highlights
the dominance not merely in terms of sheer publication volume but rather in the
propagation of new ideas or memes. Current studies often oversimplify the
concept of influence to the share of affiliations in academic papers, typically
sourced from limited databases such as arXiv or specific academic conferences.
The main goal of this paper is to unravel the specific nuances of such
influence, determining which AI ideas are predominantly driven by Big Tech
entities. By employing network and memetic analysis on AI-oriented paper
abstracts and their citation network, we are able to grasp a deeper insight
into this phenomenon. By utilizing two databases: OpenAlex and S2ORC, we are
able to perform such analysis on a much bigger scale than previous attempts.
Our findings suggest that while Big Tech-affiliated papers are
disproportionately more cited in some areas, the most cited papers are those
affiliated with both Big Tech and Academia. Focusing on the most contagious
memes, their attribution to specific affiliation groups (Big Tech, Academia,
mixed affiliation) seems equally distributed between those three groups. This
suggests that the notion of Big Tech domination over AI research is
oversimplified in the discourse.
| [
{
"created": "Wed, 20 Dec 2023 09:45:44 GMT",
"version": "v1"
},
{
"created": "Sat, 24 Aug 2024 09:11:13 GMT",
"version": "v2"
}
] | 2024-08-27 | [
[
"Giziński",
"Stanisław",
""
],
[
"Kaczyńska",
"Paulina",
""
],
[
"Ruczyński",
"Hubert",
""
],
[
"Wiśnios",
"Emilia",
""
],
[
"Pieliński",
"Bartosz",
""
],
[
"Biecek",
"Przemysław",
""
],
[
"Sienkiewicz",
"Julian",
""
]
] |
2312.12882 | Junkang Wu | Junkang Wu, Jiawei Chen, Jiancan Wu, Wentao Shi, Jizhi Zhang, Xiang
Wang | BSL: Understanding and Improving Softmax Loss for Recommendation | null | ICDE2024 | null | null | cs.LG cs.AI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Loss functions steer the optimization direction of recommendation models and
are critical to model performance, but have received relatively little
attention in recent recommendation research. Among various losses, we find
Softmax loss (SL) stands out for not only achieving remarkable accuracy but
also better robustness and fairness. Nevertheless, the current literature lacks
a comprehensive explanation for the efficacy of SL. Toward addressing this
research gap, we conduct theoretical analyses on SL and uncover three insights:
1) Optimizing SL is equivalent to performing Distributionally Robust
Optimization (DRO) on the negative data, thereby learning against perturbations
on the negative distribution and yielding robustness to noisy negatives. 2)
Compared with other loss functions, SL implicitly penalizes the prediction
variance, resulting in a smaller gap between predicted values and thus
producing fairer results. Building on these insights, we further propose a
novel loss function Bilateral SoftMax Loss (BSL) that extends the advantage of
SL to both positive and negative sides. BSL augments SL by applying the same
Log-Expectation-Exp structure to positive examples as is used for negatives,
making the model robust to the noisy positives as well. Remarkably, BSL is
simple and easy-to-implement -- requiring just one additional line of code
compared to SL. Experiments on four real-world datasets and three
representative backbones demonstrate the effectiveness of our proposal. The
code is available at https://github.com/junkangwu/BSL
| [
{
"created": "Wed, 20 Dec 2023 09:46:42 GMT",
"version": "v1"
}
] | 2023-12-21 | [
[
"Wu",
"Junkang",
""
],
[
"Chen",
"Jiawei",
""
],
[
"Wu",
"Jiancan",
""
],
[
"Shi",
"Wentao",
""
],
[
"Zhang",
"Jizhi",
""
],
[
"Wang",
"Xiang",
""
]
] |
2312.12908 | Pau Torras | Pau Torras and Sanket Biswas and Alicia Forn\'es | A Unified Representation Framework for the Evaluation of Optical Music
Recognition Systems | 18 pages, 4 figures, 3 tables, submitted (under review) for the
International Journal in Document Analysis and Recognition | International Journal on Document Analysis and Recognition
(IJDAR), Volume 27, 2024, pp. 379-393 | 10.1007/s10032-024-00485-8 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Modern-day Optical Music Recognition (OMR) is a fairly fragmented field. Most
OMR approaches use datasets that are independent and incompatible between each
other, making it difficult to both combine them and compare recognition systems
built upon them. In this paper we identify the need of a common music
representation language and propose the Music Tree Notation (MTN) format, with
the idea to construct a common endpoint for OMR research that allows
coordination, reuse of technology and fair evaluation of community efforts.
This format represents music as a set of primitives that group together into
higher-abstraction nodes, a compromise between the expression of fully
graph-based and sequential notation formats. We have also developed a specific
set of OMR metrics and a typeset score dataset as a proof of concept of this
idea.
| [
{
"created": "Wed, 20 Dec 2023 10:45:22 GMT",
"version": "v1"
},
{
"created": "Fri, 6 Sep 2024 13:25:56 GMT",
"version": "v2"
}
] | 2024-09-09 | [
[
"Torras",
"Pau",
""
],
[
"Biswas",
"Sanket",
""
],
[
"Fornés",
"Alicia",
""
]
] |
2312.13216 | Octave Mariotti | Octave Mariotti, Oisin Mac Aodha, Hakan Bilen | Improving Semantic Correspondence with Viewpoint-Guided Spherical Maps | null | Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR), 2024, pp. 19521-19530 | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Recent progress in self-supervised representation learning has resulted in
models that are capable of extracting image features that are not only
effective at encoding image level, but also pixel-level, semantics. These
features have been shown to be effective for dense visual semantic
correspondence estimation, even outperforming fully-supervised methods.
Nevertheless, current self-supervised approaches still fail in the presence of
challenging image characteristics such as symmetries and repeated parts. To
address these limitations, we propose a new approach for semantic
correspondence estimation that supplements discriminative self-supervised
features with 3D understanding via a weak geometric spherical prior. Compared
to more involved 3D pipelines, our model only requires weak viewpoint
information, and the simplicity of our spherical representation enables us to
inject informative geometric priors into the model during training. We propose
a new evaluation metric that better accounts for repeated part and
symmetry-induced mistakes. We present results on the challenging SPair-71k
dataset, where we show that our approach is capable of distinguishing between
symmetric views and repeated parts across many object categories, and also
demonstrate that we can generalize to unseen classes on
the AwA dataset.
| [
{
"created": "Wed, 20 Dec 2023 17:35:24 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Jul 2024 16:07:13 GMT",
"version": "v2"
}
] | 2024-07-08 | [
[
"Mariotti",
"Octave",
""
],
[
"Mac Aodha",
"Oisin",
""
],
[
"Bilen",
"Hakan",
""
]
] |
2312.13423 | Yavuz Selim Kartal | Yavuz Selim Kartal, Muhammad Ahsan Shahid, Sotaro Takeshita, Tornike
Tsereteli, Andrea Zielinski, Benjamin Zapilko, Philipp Mayr | VADIS -- a VAriable Detection, Interlinking and Summarization system | It is 4 pages and 2 figures. This paper has recently been accepted by
ECIR 2024 Demo Track and this version is the camera-ready version of the
paper | ECIR 2024 proceedings | 10.1007/978-3-031-56069-9_22 | null | cs.DL cs.CL cs.IR | http://creativecommons.org/licenses/by/4.0/ | The VADIS system addresses the demand of providing enhanced information
access in the domain of the social sciences. This is achieved by allowing users
to search and use survey variables in context of their underlying research data
and scholarly publications which have been interlinked with each other.
| [
{
"created": "Wed, 20 Dec 2023 21:02:09 GMT",
"version": "v1"
}
] | 2024-04-11 | [
[
"Kartal",
"Yavuz Selim",
""
],
[
"Shahid",
"Muhammad Ahsan",
""
],
[
"Takeshita",
"Sotaro",
""
],
[
"Tsereteli",
"Tornike",
""
],
[
"Zielinski",
"Andrea",
""
],
[
"Zapilko",
"Benjamin",
""
],
[
"Mayr",
"Philipp",
""
]
] |
2312.13437 | Alexander Braylan | Alexander Braylan, Madalyn Marabella, Omar Alonso, Matthew Lease | A General Model for Aggregating Annotations Across Simple, Complex, and
Multi-Object Annotation Tasks | null | Journal of Artificial Intelligence Research 2023, 78, 901-973 | 10.1613/jair.1.14388 | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | Human annotations are vital to supervised learning, yet annotators often
disagree on the correct label, especially as annotation tasks increase in
complexity. A strategy to improve label quality is to ask multiple annotators
to label the same item and aggregate their labels. Many aggregation models have
been proposed for categorical or numerical annotation tasks, but far less work
has considered more complex annotation tasks involving open-ended,
multivariate, or structured responses. While a variety of bespoke models have
been proposed for specific tasks, our work is the first to introduce
aggregation methods that generalize across many diverse complex tasks,
including sequence labeling, translation, syntactic parsing, ranking, bounding
boxes, and keypoints. This generality is achieved by devising a task-agnostic
method to model distances between labels rather than the labels themselves.
This article extends our prior work with investigation of three new research
questions. First, how do complex annotation properties impact aggregation
accuracy? Second, how should a task owner navigate the many modeling choices to
maximize aggregation accuracy? Finally, what diagnoses can verify that
aggregation models are specified correctly for the given data? To understand
how various factors impact accuracy and to inform model selection, we conduct
simulation studies and experiments on real, complex datasets. Regarding
testing, we introduce unit tests for aggregation models and present a suite of
such tests to ensure that a given model is not mis-specified and exhibits
expected behavior.
Beyond investigating these research questions above, we discuss the
foundational concept of annotation complexity, present a new aggregation model
as a bridge between traditional models and our own, and contribute a new
semi-supervised learning method for complex label aggregation that outperforms
prior work.
| [
{
"created": "Wed, 20 Dec 2023 21:28:35 GMT",
"version": "v1"
}
] | 2023-12-22 | [
[
"Braylan",
"Alexander",
""
],
[
"Marabella",
"Madalyn",
""
],
[
"Alonso",
"Omar",
""
],
[
"Lease",
"Matthew",
""
]
] |
2312.13471 | Xingxing Zuo | Jens Naumann, Binbin Xu, Stefan Leutenegger, Xingxing Zuo | NeRF-VO: Real-Time Sparse Visual Odometry with Neural Radiance Fields | Project page: https://xingxingzuo.github.io/nerfvo/ | IEEE Robotics and Automation Letters (RA-L), 2024 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We introduce a novel monocular visual odometry (VO) system, NeRF-VO, that
integrates learning-based sparse visual odometry for low-latency camera
tracking and a neural radiance scene representation for fine-detailed dense
reconstruction and novel view synthesis. Our system initializes camera poses
using sparse visual odometry and obtains view-dependent dense geometry priors
from a monocular prediction network. We harmonize the scale of poses and dense
geometry, treating them as supervisory cues to train a neural implicit scene
representation. NeRF-VO demonstrates exceptional performance in both
photometric and geometric fidelity of the scene representation by jointly
optimizing a sliding window of keyframed poses and the underlying dense
geometry, which is accomplished through training the radiance field with volume
rendering. We surpass SOTA methods in pose estimation accuracy, novel view
synthesis fidelity, and dense reconstruction quality across a variety of
synthetic and real-world datasets while achieving a higher camera tracking
frequency and consuming less GPU memory.
| [
{
"created": "Wed, 20 Dec 2023 22:42:17 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Jul 2024 05:58:33 GMT",
"version": "v2"
}
] | 2024-07-17 | [
[
"Naumann",
"Jens",
""
],
[
"Xu",
"Binbin",
""
],
[
"Leutenegger",
"Stefan",
""
],
[
"Zuo",
"Xingxing",
""
]
] |
2312.13820 | Marina Ljubenovic | Marina Ljubenovic, Alessia Artesani, Stefano Bonetti, Arianna
Traviglia | Super-resolution of THz time-domain images based on low-rank
representation | This work was presented at the Sixth International Workshop on Mobile
Terahertz Systems (IWMTS) | 2023 Sixth International Workshop on Mobile Terahertz Systems
(IWMTS), Bonn, Germany, 2023, pp. 1-5 | 10.1109/IWMTS58186.2023.10207785 | null | physics.optics cs.CV | http://creativecommons.org/licenses/by/4.0/ | Terahertz time-domain spectroscopy (THz-TDS) employs sub-picosecond pulses to
probe dielectric properties of materials giving as a result a 3-dimensional
hyperspectral data cube. The spatial resolution of THz images is primarily
limited by two sources: a non-zero THz beam waist and the acquisition step
size. Acquisition with a small step size allows for the visualisation of
smaller details in images at the expense of acquisition time, but the
frequency-dependent point-spread function remains the biggest bottleneck for
THz imaging. This work presents a super-resolution approach to restore THz
time-domain images acquired with medium-to-big step sizes. The results show the
optimized and robust performance for different frequency bands (from 0.5 to 3.5
THz) obtaining higher resolution and additionally removing effects of blur at
lower frequencies and noise at higher frequencies.
| [
{
"created": "Thu, 21 Dec 2023 13:11:57 GMT",
"version": "v1"
}
] | 2023-12-22 | [
[
"Ljubenovic",
"Marina",
""
],
[
"Artesani",
"Alessia",
""
],
[
"Bonetti",
"Stefano",
""
],
[
"Traviglia",
"Arianna",
""
]
] |
2312.13841 | Alexander K\"ohler | Alexander K\"ohler, Michael Breu{\ss} | Towards Efficient Time Stepping for Numerical Shape Correspondence | 12 pages, 4 figures | SSVM2021 (2021) 165-176 | 10.1007/978-3-030-75549-2_14 | null | math.NA cs.CV cs.NA | http://creativecommons.org/licenses/by/4.0/ | The computation of correspondences between shapes is a principal task in
shape analysis. To this end, methods based on partial differential equations
(PDEs) have been established, encompassing e.g. the classic heat kernel
signature as well as numerical solution schemes for geometric PDEs. In this
work we focus on the latter approach.
We consider here several time stepping schemes. The goal of this
investigation is to assess, if one may identify a useful property of methods
for time integration for the shape analysis context. Thereby we investigate the
dependence on time step size, since the class of implicit schemes that are
useful candidates in this context should ideally yield an invariant behaviour
with respect to this parameter.
To this end we study integration of heat and wave equation on a manifold. In
order to facilitate this study, we propose an efficient, unified model order
reduction framework for these models. We show that specific $l_0$ stable
schemes are favourable for numerical shape analysis. We give an experimental
evaluation of the methods at hand of classical TOSCA data sets.
| [
{
"created": "Thu, 21 Dec 2023 13:40:03 GMT",
"version": "v1"
}
] | 2023-12-22 | [
[
"Köhler",
"Alexander",
""
],
[
"Breuß",
"Michael",
""
]
] |
2312.13906 | Benjamin Alt | Benjamin Alt, Minh Dang Nguyen, Andreas Hermann, Darko Katic, Rainer
J\"akel, R\"udiger Dillmann, Eric Sax | EfficientPPS: Part-aware Panoptic Segmentation of Transparent Objects
for Robotic Manipulation | 8 pages, 8 figures, presented at the 56th International Symposium on
Robotics (ISR Europe) | ISR Europe 2023 | null | null | cs.RO cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | The use of autonomous robots for assistance tasks in hospitals has the
potential to free up qualified staff and improve patient care. However, the
ubiquity of deformable and transparent objects in hospital settings poses
significant challenges to vision-based perception systems. We present
EfficientPPS, a neural architecture for part-aware panoptic segmentation that
provides robots with semantically rich visual information for grasping and
manipulation tasks. We also present an unsupervised data collection and
labelling method to reduce the need for human involvement in the training
process. EfficientPPS is evaluated on a dataset containing real-world hospital
objects and demonstrated to be robust and efficient in grasping transparent
transfusion bags with a collaborative robot arm.
| [
{
"created": "Thu, 21 Dec 2023 14:51:23 GMT",
"version": "v1"
}
] | 2023-12-22 | [
[
"Alt",
"Benjamin",
""
],
[
"Nguyen",
"Minh Dang",
""
],
[
"Hermann",
"Andreas",
""
],
[
"Katic",
"Darko",
""
],
[
"Jäkel",
"Rainer",
""
],
[
"Dillmann",
"Rüdiger",
""
],
[
"Sax",
"Eric",
""
]
] |
2312.13944 | Tomasz Danel | Tomasz Danel, Jan {\L}\k{e}ski, Sabina Podlewska, Igor T. Podolak | Docking-based generative approaches in the search for new drug
candidates | null | Drug Discovery Today 28.2 (2023) | 10.1016/j.drudis.2022.103439 | null | q-bio.BM cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Despite the great popularity of virtual screening of existing compound
libraries, the search for new potential drug candidates also takes advantage of
generative protocols, where new compound suggestions are enumerated using
various algorithms. To increase the activity potency of generative approaches,
they have recently been coupled with molecular docking, a leading methodology
of structure-based drug design. In this review, we summarize progress since
docking-based generative models emerged. We propose a new taxonomy for these
methods and discuss their importance for the field of computer-aided drug
design. In addition, we discuss the most promising directions for further
development of generative protocols coupled with docking.
| [
{
"created": "Wed, 22 Nov 2023 11:37:09 GMT",
"version": "v1"
}
] | 2023-12-22 | [
[
"Danel",
"Tomasz",
""
],
[
"Łęski",
"Jan",
""
],
[
"Podlewska",
"Sabina",
""
],
[
"Podolak",
"Igor T.",
""
]
] |
2312.14157 | Vladislav Golyanik | Christen Millerdurai and Diogo Luvizon and Viktor Rudnev and Andr\'e
Jonas and Jiayi Wang and Christian Theobalt and Vladislav Golyanik | 3D Pose Estimation of Two Interacting Hands from a Monocular Event
Camera | 17 pages, 12 figures, 7 tables; project page:
https://4dqv.mpi-inf.mpg.de/Ev2Hands/ | International Conference on 3D Vision (3DV) 2024 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D hand tracking from a monocular video is a very challenging problem due to
hand interactions, occlusions, left-right hand ambiguity, and fast motion. Most
existing methods rely on RGB inputs, which have severe limitations under
low-light conditions and suffer from motion blur. In contrast, event cameras
capture local brightness changes instead of full image frames and do not suffer
from the described effects. Unfortunately, existing image-based techniques
cannot be directly applied to events due to significant differences in the data
modalities. In response to these challenges, this paper introduces the first
framework for 3D tracking of two fast-moving and interacting hands from a
single monocular event camera. Our approach tackles the left-right hand
ambiguity with a novel semi-supervised feature-wise attention mechanism and
integrates an intersection loss to fix hand collisions. To facilitate advances
in this research domain, we release a new synthetic large-scale dataset of two
interacting hands, Ev2Hands-S, and a new real benchmark with real event streams
and ground-truth 3D annotations, Ev2Hands-R. Our approach outperforms existing
methods in terms of the 3D reconstruction accuracy and generalises to real data
under severe light conditions.
| [
{
"created": "Thu, 21 Dec 2023 18:59:57 GMT",
"version": "v1"
}
] | 2023-12-22 | [
[
"Millerdurai",
"Christen",
""
],
[
"Luvizon",
"Diogo",
""
],
[
"Rudnev",
"Viktor",
""
],
[
"Jonas",
"André",
""
],
[
"Wang",
"Jiayi",
""
],
[
"Theobalt",
"Christian",
""
],
[
"Golyanik",
"Vladislav",
""
]
] |