id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---
2402.18919 | Fahimeh Hosseini Noohdani | Fahimeh Hosseini Noohdani, Parsa Hosseini, Aryan Yazdan Parast,
Hamidreza Yaghoubi Araghi, Mahdieh Soleymani Baghshah | Decompose-and-Compose: A Compositional Approach to Mitigating Spurious
Correlation | CVPR 2024, 17 pages | Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR), 2024 | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | While standard Empirical Risk Minimization (ERM) training has proven effective
for image classification on in-distribution data, it fails to perform well on
out-of-distribution samples. One of the main sources of distribution shift for
image classification is the compositional nature of images. Specifically, in
addition to the main object or component(s) determining the label, some other
image components usually exist, which may lead to the shift of input
distribution between train and test environments. More importantly, these
components may have spurious correlations with the label. To address this
issue, we propose Decompose-and-Compose (DaC), which improves robustness to
correlation shift by a compositional approach based on combining elements of
images. We observe that models trained with ERM usually attend strongly to
either the causal components or the components having a high spurious
correlation with the label, especially on datapoints where the model is
highly confident; which of these dominates depends on the strength of the
spurious correlation and on how easily the causal or non-causal components
support classification. Following this, we first identify the causal components of
images using class activation maps of models trained with ERM. Afterward, we
intervene on images by combining them and retraining the model on the augmented
data, including the counterfactual ones. Besides its high interpretability,
this work offers a group-balancing method that intervenes on images without
requiring group labels or information about the spurious features during
training. Under correlation shift, the method achieves better overall
worst-group accuracy than previous methods given the same amount of
supervision on group labels.
| [
{
"created": "Thu, 29 Feb 2024 07:24:24 GMT",
"version": "v1"
},
{
"created": "Sat, 2 Mar 2024 14:57:12 GMT",
"version": "v2"
},
{
"created": "Sun, 21 Jul 2024 12:22:05 GMT",
"version": "v3"
}
] | 2024-07-23 | [
[
"Noohdani",
"Fahimeh Hosseini",
""
],
[
"Hosseini",
"Parsa",
""
],
[
"Parast",
"Aryan Yazdan",
""
],
[
"Araghi",
"Hamidreza Yaghoubi",
""
],
[
"Baghshah",
"Mahdieh Soleymani",
""
]
] |
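To make the intervention step above concrete, here is a minimal sketch of combining images guided by a class activation map (CAM). All function and variable names are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def class_activation_map(features, fc_weight, label):
    # features: (C, H, W) final conv features; fc_weight: (num_classes, C).
    cam = torch.einsum("c,chw->hw", fc_weight[label], features)
    return F.relu(cam) / (cam.max() + 1e-8)  # normalize to [0, 1]

def compose_counterfactual(img_a, feats_a, label_a, img_b, fc_weight, thresh=0.6):
    # Paste the high-CAM (assumed causal) region of img_a onto img_b's
    # background; the composed image keeps label_a during retraining.
    cam = class_activation_map(feats_a, fc_weight, label_a)
    mask = (cam > thresh).float()[None, None]
    mask = F.interpolate(mask, size=img_a.shape[-2:], mode="nearest")[0]
    return mask * img_a + (1 - mask) * img_b
```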
2402.18958 | Boxuan Zhang | Boxuan Zhang, Zengmao Wang and Bo Du | Boosting Semi-Supervised Object Detection in Remote Sensing Images With
Active Teaching | null | in IEEE Geoscience and Remote Sensing Letters, vol. 21, pp. 1-5,
2024 | 10.1109/LGRS.2024.3357098 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The lack of object-level annotations poses a significant challenge for object
detection in remote sensing images (RSIs). To address this issue, active
learning (AL) and semi-supervised learning (SSL) techniques have been proposed
to enhance the quality and quantity of annotations. AL focuses on selecting the
most informative samples for annotation, while SSL leverages the knowledge from
unlabeled samples. In this letter, we propose a novel AL method to boost
semi-supervised object detection (SSOD) for remote sensing images with a
teacher-student network, called SSOD-AT. The proposed method incorporates an
RoI comparison module (RoICM) to generate high-confidence pseudo-labels for
regions of interest (RoIs). Meanwhile, the RoICM is utilized to identify the
top-K uncertain images. To reduce redundancy in the top-K uncertain images for
human labeling, a diversity criterion is introduced based on object-level
prototypes of different categories using both labeled and pseudo-labeled
images. Extensive experiments on DOTA and DIOR, two popular datasets,
demonstrate that our proposed method outperforms state-of-the-art methods for
object detection in RSIs. Compared with the best-performing SOTA methods, the
proposed method achieves a 1 percent improvement in most cases over the whole
active learning process.
| [
{
"created": "Thu, 29 Feb 2024 08:52:38 GMT",
"version": "v1"
}
] | 2024-03-01 | [
[
"Zhang",
"Boxuan",
""
],
[
"Wang",
"Zengmao",
""
],
[
"Du",
"Bo",
""
]
] |
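A minimal sketch of the active-selection idea described above (top-K uncertainty plus a diversity filter). The entropy criterion and the distance threshold are stand-ins for the paper's RoICM and prototype-based criterion.

```python
import numpy as np

def select_for_labeling(roi_scores_per_image, image_features, k=10, budget=5):
    # roi_scores_per_image: list of (num_rois, num_classes) softmax arrays.
    # Uncertainty: mean predictive entropy over an image's RoIs.
    def entropy(p):
        return -(p * np.log(p + 1e-12)).sum(axis=-1).mean()
    uncertainty = np.array([entropy(s) for s in roi_scores_per_image])
    topk = np.argsort(-uncertainty)[:k]
    # Diversity: greedily keep images far from those already chosen in
    # feature space, reducing redundancy in the top-K uncertain set.
    chosen = []
    for idx in topk:
        if all(np.linalg.norm(image_features[idx] - image_features[j]) > 1.0
               for j in chosen):
            chosen.append(idx)
        if len(chosen) == budget:
            break
    return chosen
```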
2402.19197 | Kennard Yanting Chan | Kennard Yanting Chan, Fayao Liu, Guosheng Lin, Chuan Sheng Foo, Weisi
Lin | Fine Structure-Aware Sampling: A New Sampling Training Scheme for
Pixel-Aligned Implicit Models in Single-View Human Reconstruction | Accepted in Proceedings of the AAAI Conference on Artificial
Intelligence, 2024 (AAAI 2024) | Proceedings of the AAAI Conference on Artificial Intelligence,
2024, pp. 964-971 | 10.1609/aaai.v38i2.27856 | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Pixel-aligned implicit models, such as PIFu, PIFuHD, and ICON, are used for
single-view clothed human reconstruction. These models need to be trained using
a sampling training scheme. Existing sampling training schemes either fail to
capture thin surfaces (e.g. ears, fingers) or cause noisy artefacts in
reconstructed meshes. To address these problems, we introduce Fine
Structure-Aware Sampling (FSS), a new sampling training scheme to train
pixel-aligned implicit models for single-view human reconstruction. FSS
resolves the aforementioned problems by proactively adapting to the thickness
and complexity of surfaces. In addition, unlike existing sampling training
schemes, FSS shows how normals of sample points can be capitalized on in the
training process to improve results. Lastly, to further improve the training
process, FSS proposes a mesh thickness loss signal for pixel-aligned implicit
models. It becomes computationally feasible to introduce this loss once a
slight reworking of the pixel-aligned implicit function framework is carried
out. Our results show that our methods significantly outperform SOTA methods
qualitatively and quantitatively. Our code is publicly available at
https://github.com/kcyt/FSS.
| [
{
"created": "Thu, 29 Feb 2024 14:26:46 GMT",
"version": "v1"
}
] | 2024-07-02 | [
[
"Chan",
"Kennard Yanting",
""
],
[
"Liu",
"Fayao",
""
],
[
"Lin",
"Guosheng",
""
],
[
"Foo",
"Chuan Sheng",
""
],
[
"Lin",
"Weisi",
""
]
] |
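A hedged reconstruction, from the abstract alone, of what thickness-adaptive sampling for a pixel-aligned implicit model can look like; the inputs and the scaling factor are assumptions.

```python
import numpy as np

def sample_training_points(surface_pts, local_thickness, n=4096):
    # surface_pts: (M, 3) points on the clothed-human mesh surface;
    # local_thickness: (M,) per-point thickness estimates. Perturb surface
    # samples with noise scaled to local thickness so thin parts (ears,
    # fingers) receive proportionally tighter samples rather than using a
    # single global variance.
    idx = np.random.randint(0, len(surface_pts), size=n)
    sigma = 0.5 * local_thickness[idx, None]          # adaptive std-dev
    offsets = np.random.randn(n, 3) * sigma
    return surface_pts[idx] + offsets
```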
2402.19265 | Daniele Meli | Daniele Meli, Alberto Castellini, Alessandro Farinelli | Learning Logic Specifications for Policy Guidance in POMDPs: an
Inductive Logic Programming Approach | null | Journal of Artificial Intelligence Research, volume 79 (2024), pp.
725-776 | 10.1613/jair.1.15826 | null | cs.AI cs.LG cs.LO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Partially Observable Markov Decision Processes (POMDPs) are a powerful
framework for planning under uncertainty, allowing state uncertainty to be
modeled as a belief probability distribution. Approximate solvers based on
Monte Carlo sampling have shown great success in relaxing the computational
demand and performing online planning. However, scaling to complex realistic
domains with many
actions and long planning horizons is still a major challenge, and a key point
to achieve good performance is guiding the action-selection process with
domain-dependent policy heuristics which are tailored for the specific
application domain. We propose to learn high-quality heuristics from POMDP
traces of executions generated by any solver. We convert the belief-action
pairs to a logical semantics, and exploit data- and time-efficient Inductive
Logic Programming (ILP) to generate interpretable belief-based policy
specifications, which are then used as online heuristics. We thoroughly
evaluate our methodology on two notoriously challenging POMDP problems,
involving large action spaces and long planning horizons, namely, rocksample
and pocman. Considering different state-of-the-art online POMDP solvers,
including POMCP, DESPOT and AdaOPS, we show that learned heuristics expressed
in Answer Set Programming (ASP) yield performance superior to neural networks
and similar to optimal handcrafted task-specific heuristics within lower
computational time. Moreover, they generalize well to more challenging
scenarios not experienced in the training phase (e.g., increasing the number
of rocks and the grid size in rocksample, or increasing the map size and the
aggressiveness of ghosts in pocman).
| [
{
"created": "Thu, 29 Feb 2024 15:36:01 GMT",
"version": "v1"
}
] | 2024-03-01 | [
[
"Meli",
"Daniele",
""
],
[
"Castellini",
"Alberto",
""
],
[
"Farinelli",
"Alessandro",
""
]
] |
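To illustrate the "belief-action pairs to logical semantics" step on rocksample, here is a toy conversion; the predicate names and the threshold are invented for this sketch, not taken from the paper.

```python
def belief_to_facts(belief, action, good_thresh=0.8):
    # belief: dict rock_id -> P(rock is good); action: e.g. "sample(1)".
    # Discretize probabilities into predicates an ILP system can
    # generalize over when inducing belief-based policy rules.
    facts = []
    for rock, p in belief.items():
        if p >= good_thresh:
            facts.append(f"probably_good({rock}).")
        elif p <= 1 - good_thresh:
            facts.append(f"probably_bad({rock}).")
    facts.append(f"chosen({action}).")
    return facts

# belief_to_facts({1: 0.9, 2: 0.1}, "sample(1)")
# -> ['probably_good(1).', 'probably_bad(2).', 'chosen(sample(1)).']
```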
2402.19348 | Xingchen Zou | Xingchen Zou, Yibo Yan, Xixuan Hao, Yuehong Hu, Haomin Wen, Erdong
Liu, Junbo Zhang, Yong Li, Tianrui Li, Yu Zheng, Yuxuan Liang | Deep Learning for Cross-Domain Data Fusion in Urban Computing: Taxonomy,
Advances, and Outlook | null | Information Fusion 113 (2025) 102606 | 10.1016/j.inffus.2024.102606 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As cities continue to burgeon, Urban Computing emerges as a pivotal
discipline for sustainable development by harnessing the power of cross-domain
data fusion from diverse sources (e.g., geographical, traffic, social media,
and environmental data) and modalities (e.g., spatio-temporal, visual, and
textual modalities). Recently, we have witnessed a rising trend of utilizing
various deep-learning methods to facilitate cross-domain data fusion in smart
cities. To this end, we propose the first survey that systematically reviews
the latest advancements in deep learning-based data fusion methods tailored for
urban computing. Specifically, we first delve into the data perspective to
comprehend the role of each modality and data source. Secondly, we classify the
methodology into four primary categories: feature-based, alignment-based,
contrast-based, and generation-based fusion methods. Thirdly, we further
categorize multi-modal urban applications into seven types: urban planning,
transportation, economy, public safety, society, environment, and energy.
Compared with previous surveys, we focus more on the synergy of deep learning
methods with urban computing applications. Furthermore, we shed light on the
interplay between Large Language Models (LLMs) and urban computing, postulating
future research directions that could revolutionize the field. We firmly
believe that the taxonomy, progress, and prospects delineated in our survey
stand poised to significantly enrich the research community. The summary of the
comprehensive and up-to-date paper list can be found at
https://github.com/yoshall/Awesome-Multimodal-Urban-Computing.
| [
{
"created": "Thu, 29 Feb 2024 16:56:23 GMT",
"version": "v1"
},
{
"created": "Sun, 16 Jun 2024 10:16:00 GMT",
"version": "v2"
}
] | 2024-08-09 | [
[
"Zou",
"Xingchen",
""
],
[
"Yan",
"Yibo",
""
],
[
"Hao",
"Xixuan",
""
],
[
"Hu",
"Yuehong",
""
],
[
"Wen",
"Haomin",
""
],
[
"Liu",
"Erdong",
""
],
[
"Zhang",
"Junbo",
""
],
[
"Li",
"Yong",
""
],
[
"Li",
"Tianrui",
""
],
[
"Zheng",
"Yu",
""
],
[
"Liang",
"Yuxuan",
""
]
] |
2402.19431 | Zexiong Ma | Zexiong Ma, Shengnan An, Bing Xie, Zeqi Lin | Compositional API Recommendation for Library-Oriented Code Generation | null | 32nd IEEE/ACM International Conference on Program Comprehension
(ICPC 2024), Apr 2024, Lisboa, Portugal | 10.1145/3643916.3644403 | null | cs.SE cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have achieved exceptional performance in code
generation. However, the performance remains unsatisfactory in generating
library-oriented code, especially for the libraries not present in the training
data of LLMs. Previous work utilizes API recommendation technology to help LLMs
use libraries: it retrieves APIs related to the user requirements, then
leverages them as context to prompt LLMs. However, development requirements
can be coarse-grained, requiring a combination of multiple fine-grained APIs.
This granularity inconsistency makes API recommendation a challenging task. To
address this, we propose CAPIR (Compositional API Recommendation), which adopts
a "divide-and-conquer" strategy to recommend APIs for coarse-grained
requirements. Specifically, CAPIR employs an LLM-based Decomposer to break down
a coarse-grained task description into several detailed subtasks. Then, CAPIR
applies an embedding-based Retriever to identify relevant APIs corresponding to
each subtask. Moreover, CAPIR leverages an LLM-based Reranker to filter out
redundant APIs and provides the final recommendation. To facilitate the
evaluation of API recommendation methods on coarse-grained requirements, we
present two challenging benchmarks, RAPID (Recommend APIs based on
Documentation) and LOCG (Library-Oriented Code Generation). Experimental
results on these benchmarks demonstrate the effectiveness of CAPIR in
comparison to existing baselines. Specifically, on RAPID's Torchdata-AR
dataset, compared to the state-of-the-art API recommendation approach, CAPIR
improves recall@5 from 18.7% to 43.2% and precision@5 from 15.5% to 37.1%. On
LOCG's Torchdata-Code dataset, compared to code generation without API
recommendation, CAPIR improves pass@100 from 16.0% to 28.0%.
| [
{
"created": "Thu, 29 Feb 2024 18:27:27 GMT",
"version": "v1"
}
] | 2024-03-01 | [
[
"Ma",
"Zexiong",
""
],
[
"An",
"Shengnan",
""
],
[
"Xie",
"Bing",
""
],
[
"Lin",
"Zeqi",
""
]
] |
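A compact sketch of the decompose-retrieve-rerank pipeline described above. `llm_decompose`, `llm_rerank`, and `embed` are assumed callables (prompted LLMs and an embedding model), not a real CAPIR API.

```python
import numpy as np

def capir_style_recommend(requirement, api_docs, embed, llm_decompose,
                          llm_rerank, top_k=5):
    subtasks = llm_decompose(requirement)            # coarse -> fine subtasks
    doc_vecs = np.stack([embed(d) for d in api_docs])
    candidates = set()
    for sub in subtasks:                             # per-subtask retrieval
        q = embed(sub)
        sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1)
                               * np.linalg.norm(q))
        candidates.update(np.argsort(-sims)[:top_k].tolist())
    # LLM-based reranking filters redundant APIs and orders the final list.
    return llm_rerank(requirement, [api_docs[i] for i in candidates])[:top_k]
```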
2403.00014 | Le Cheng | Le Cheng, Peican Zhu, Keke Tang, Chao Gao, Zhen Wang | GIN-SD: Source Detection in Graphs with Incomplete Nodes via Positional
Encoding and Attentive Fusion | The paper is accepted by AAAI24 | Proceedings of the AAAI Conference on Artificial Intelligence 2024 | 10.1609/aaai.v38i1.27755 | Vol. 38, No. 1, 55-63 | cs.SI cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Source detection in graphs has demonstrated robust efficacy in the domain of
rumor source identification. Although recent solutions have enhanced
performance by leveraging deep neural networks, they often require complete
user data. In this paper, we address a more challenging task, rumor source
detection with incomplete user data, and propose a novel framework, i.e.,
Source Detection in Graphs with Incomplete Nodes via Positional Encoding and
Attentive Fusion (GIN-SD), to tackle this challenge. Specifically, our approach
utilizes a positional embedding module to distinguish nodes that are incomplete
and employs a self-attention mechanism to focus on nodes with greater
information transmission capacity. To mitigate the prediction bias caused by
the significant disparity between the numbers of source and non-source nodes,
we also introduce a class-balancing mechanism. Extensive experiments validate
the effectiveness of GIN-SD and its superiority to state-of-the-art methods.
| [
{
"created": "Tue, 27 Feb 2024 09:35:54 GMT",
"version": "v1"
}
] | 2024-05-31 | [
[
"Cheng",
"Le",
""
],
[
"Zhu",
"Peican",
""
],
[
"Tang",
"Keke",
""
],
[
"Gao",
"Chao",
""
],
[
"Wang",
"Zhen",
""
]
] |
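The class-balancing mechanism mentioned above could be realized as a frequency-weighted loss; this is a standard re-weighting shown for illustration, not necessarily GIN-SD's exact formulation.

```python
import torch
import torch.nn.functional as F

def class_balanced_source_loss(logits, labels):
    # logits/labels: (num_nodes,) with labels 1 for source nodes, 0 otherwise.
    # Weight each class inversely to its frequency to offset the heavy
    # source / non-source imbalance.
    pos = labels.sum().clamp(min=1)
    neg = (labels.numel() - pos).clamp(min=1)
    weight = torch.where(labels > 0,
                         labels.numel() / (2 * pos),
                         labels.numel() / (2 * neg))
    return F.binary_cross_entropy_with_logits(logits, labels.float(),
                                              weight=weight)
```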
2403.00071 | Suyuchen Wang | Suyuchen Wang, Ivan Kobyzev, Peng Lu, Mehdi Rezagholizadeh, Bang Liu | Resonance RoPE: Improving Context Length Generalization of Large
Language Models | 13 pages, 4 figures, accepted at ACL 2024 Findings | https://aclanthology.org/2024.findings-acl.32 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper addresses the challenge of train-short-test-long (TSTL) scenarios
in Large Language Models (LLMs) equipped with Rotary Position Embedding (RoPE),
where models pre-trained on shorter sequences face difficulty with
out-of-distribution (OOD) token positions in longer sequences. We introduce
Resonance RoPE, a novel approach designed to narrow the generalization gap in
TSTL scenarios by refining the interpolation of RoPE features for OOD
positions, significantly improving the model performance without additional
online computational costs. Furthermore, we present PosGen, a new synthetic
benchmark specifically designed for fine-grained behavior analysis in TSTL
scenarios, aiming to isolate the constantly increasing difficulty of token
generation on long contexts from the challenges of recognizing new token
positions. Our experiments on synthetic tasks show that after applying
Resonance RoPE, Transformers recognize OOD positions better and more robustly.
Our extensive LLM experiments also show superior performance after applying
Resonance RoPE to the current state-of-the-art RoPE scaling method, YaRN, on
both upstream language modeling tasks and a variety of downstream long-text
applications.
| [
{
"created": "Thu, 29 Feb 2024 19:02:03 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Jun 2024 13:30:34 GMT",
"version": "v2"
}
] | 2024-09-05 | [
[
"Wang",
"Suyuchen",
""
],
[
"Kobyzev",
"Ivan",
""
],
[
"Lu",
"Peng",
""
],
[
"Rezagholizadeh",
"Mehdi",
""
],
[
"Liu",
"Bang",
""
]
] |
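A sketch of the idea, assuming the published Resonance RoPE recipe of snapping each RoPE feature's wavelength to the nearest integer so its phase repeats exactly and OOD positions reuse in-distribution phases; treat it as an illustration, not the reference implementation.

```python
import torch

def resonance_rope_angles(dim, max_pos, base=10000.0):
    # Standard RoPE uses theta_i = base**(-2i/dim); here each feature's
    # wavelength 2*pi/theta_i is rounded to the nearest integer, giving
    # "resonant" frequencies whose phases repeat exactly.
    i = torch.arange(0, dim, 2, dtype=torch.float32)
    theta = base ** (-i / dim)
    wavelength = torch.round(2 * torch.pi / theta).clamp(min=1)
    theta = 2 * torch.pi / wavelength
    pos = torch.arange(max_pos, dtype=torch.float32)
    return torch.outer(pos, theta)                   # (max_pos, dim/2) angles
```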
2403.00175 | Safouane El Ghazouali | Safouane El Ghazouali, Youssef Mhirit, Ali Oukhrid, Umberto
Michelucci, Hichem Nouira | FusionVision: A comprehensive approach of 3D object reconstruction and
segmentation from RGB-D cameras using YOLO and fast segment anything | 14 pages, 9 figures, 1 table | Sensors 2024 | 10.3390/s24092889 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In the realm of computer vision, the integration of advanced techniques into
the processing of RGB-D camera inputs poses a significant challenge, given the
inherent complexities arising from diverse environmental conditions and varying
object appearances. Therefore, this paper introduces FusionVision, an
exhaustive pipeline adapted for the robust 3D segmentation of objects in RGB-D
imagery. Traditional computer vision systems face limitations in simultaneously
capturing precise object boundaries and achieving high-precision object
detection on depth maps, as they are mainly designed for RGB cameras. To address
this challenge, FusionVision adopts an integrated approach by merging
state-of-the-art object detection techniques with advanced instance
segmentation methods. The integration of these components enables a holistic
interpretation of RGB-D data, unifying the information obtained from both the
color (RGB) and depth (D) channels and facilitating the extraction of
comprehensive and accurate object information. The proposed
FusionVision pipeline employs YOLO for identifying objects within the RGB image
domain. Subsequently, FastSAM, an innovative semantic segmentation model, is
applied to delineate object boundaries, yielding refined segmentation masks.
The synergy between these components and their integration into 3D scene
understanding ensures a cohesive fusion of object detection and segmentation,
enhancing overall precision in 3D object segmentation. The code and pre-trained
models are publicly available at https://github.com/safouaneelg/FusionVision/.
| [
{
"created": "Thu, 29 Feb 2024 22:59:27 GMT",
"version": "v1"
},
{
"created": "Wed, 1 May 2024 12:34:53 GMT",
"version": "v2"
}
] | 2024-05-02 | [
[
"Ghazouali",
"Safouane El",
""
],
[
"Mhirit",
"Youssef",
""
],
[
"Oukhrid",
"Ali",
""
],
[
"Michelucci",
"Umberto",
""
],
[
"Nouira",
"Hichem",
""
]
] |
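A hedged sketch of the detection-to-segmentation-to-3D flow described above, assuming an ultralytics-style API for YOLO and FastSAM and a hypothetical `deproject` helper for the camera intrinsics; the authors' actual code is linked in the abstract.

```python
from ultralytics import YOLO, FastSAM
import numpy as np

def fuse(rgb, depth, deproject):
    # Detect objects in the RGB frame, prompt FastSAM with the boxes,
    # then lift each mask to a 3D point set using the aligned depth map.
    boxes = YOLO("yolov8n.pt")(rgb)[0].boxes.xyxy.cpu().numpy()
    seg = FastSAM("FastSAM-s.pt")(rgb, bboxes=boxes.tolist())  # box prompts
    points_per_object = []
    for mask in seg[0].masks.data.cpu().numpy():
        v, u = np.nonzero(mask > 0.5)
        # deproject(u, v, z) -> (N, 3) camera-frame points (assumed helper).
        points_per_object.append(deproject(u, v, depth[v, u]))
    return points_per_object
```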
2403.00372 | Zhiying Leng | Zhiying Leng, Tolga Birdal, Xiaohui Liang and Federico Tombari | HyperSDFusion: Bridging Hierarchical Structures in Language and Geometry
for Enhanced 3D Text2Shape Generation | null | IEEE/CVF conference on computer vision and pattern recognition
2024 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D shape generation from text is a fundamental task in 3D representation
learning. The text-shape pairs exhibit a hierarchical structure, where a
general text like "chair" covers all 3D shapes of the chair, while more
detailed prompts refer to more specific shapes. Furthermore, both text and 3D
shapes are inherently hierarchical structures. However, existing Text2Shape
methods, such as SDFusion, do not exploit this hierarchy. In this work, we propose
HyperSDFusion, a dual-branch diffusion model that generates 3D shapes from a
given text. Since hyperbolic space is suitable for handling hierarchical data,
we propose to learn the hierarchical representations of text and 3D shapes in
hyperbolic space. First, we introduce a hyperbolic text-image encoder to learn
the sequential and multi-modal hierarchical features of text in hyperbolic
space. In addition, we design a hyperbolic text-graph convolution module to
learn the hierarchical features of text in hyperbolic space. In order to fully
utilize these text features, we introduce a dual-branch structure to embed text
features in the 3D feature space. Finally, to endow the generated 3D shapes with a
hierarchical structure, we devise a hyperbolic hierarchical loss. Our method is
the first to explore the hyperbolic hierarchical representation for
text-to-shape generation. Experiments on the existing text-to-shape paired
dataset, Text2Shape, achieve state-of-the-art results. We release our
implementation at HyperSDFusion.github.io.
| [
{
"created": "Fri, 1 Mar 2024 08:57:28 GMT",
"version": "v1"
},
{
"created": "Sun, 28 Apr 2024 18:45:32 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Apr 2024 05:32:01 GMT",
"version": "v3"
}
] | 2024-05-01 | [
[
"Leng",
"Zhiying",
""
],
[
"Birdal",
"Tolga",
""
],
[
"Liang",
"Xiaohui",
""
],
[
"Tombari",
"Federico",
""
]
] |
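For intuition on why hyperbolic space suits hierarchical data, here is the standard Poincare-ball distance: it grows rapidly toward the boundary, mirroring the exponential growth of trees. This is background math, not HyperSDFusion's encoder.

```python
import torch

def poincare_distance(u, v, eps=1e-5):
    # d(u, v) = arcosh(1 + 2*||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))
    # for points u, v inside the unit Poincare ball.
    uu = (u * u).sum(-1).clamp(max=1 - eps)
    vv = (v * v).sum(-1).clamp(max=1 - eps)
    duv = ((u - v) ** 2).sum(-1)
    x = 1 + 2 * duv / ((1 - uu) * (1 - vv))
    return torch.acosh(x.clamp(min=1 + eps))
```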
2403.00396 | Athanasios Tragakis | Athanasios Tragakis, Qianying Liu, Chaitanya Kaul, Swalpa Kumar Roy,
Hang Dai, Fani Deligianni, Roderick Murray-Smith, Daniele Faccio | GLFNET: Global-Local (frequency) Filter Networks for efficient medical
image segmentation | null | 2024 IEEE International Symposium on Biomedical Imaging (ISBI) | 10.1109/ISBI56570.2024.10635344 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | We propose a novel transformer-style architecture called Global-Local Filter
Network (GLFNet) for medical image segmentation and demonstrate its
state-of-the-art performance. We replace the self-attention mechanism with a
combination of global-local filter blocks to optimize model efficiency. The
global filters extract features from the whole feature map, whereas the local
filters are adaptively created as 4x4 patches of the same feature map and add
restricted-scale information. In particular, the feature extraction takes
place in the frequency domain rather than the commonly used spatial (image)
domain to facilitate faster computations. The fusion of information from both
spatial and frequency spaces creates an efficient model with regard to
complexity, required data and performance. We test GLFNet on three benchmark
datasets achieving state-of-the-art performance on all of them while being
almost twice as efficient in terms of GFLOP operations.
| [
{
"created": "Fri, 1 Mar 2024 09:35:03 GMT",
"version": "v1"
}
] | 2024-09-02 | [
[
"Tragakis",
"Athanasios",
""
],
[
"Liu",
"Qianying",
""
],
[
"Kaul",
"Chaitanya",
""
],
[
"Roy",
"Swalpa Kumar",
""
],
[
"Dai",
"Hang",
""
],
[
"Deligianni",
"Fani",
""
],
[
"Murray-Smith",
"Roderick",
""
],
[
"Faccio",
"Daniele",
""
]
] |
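A sketch of the global (frequency) filter half of such a block, following the well-known FFT -> learnable complex filter -> inverse FFT recipe; whether GLFNet parameterizes its filters exactly this way is an assumption, and the local 4x4-patch branch is omitted.

```python
import torch
import torch.nn as nn

class GlobalFrequencyFilter(nn.Module):
    def __init__(self, channels, h, w):
        # h, w must match the spatial size of the incoming feature map.
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(channels, h, w // 2 + 1, 2) * 0.02)

    def forward(self, x):                      # x: (B, C, H, W)
        freq = torch.fft.rfft2(x, norm="ortho")          # to frequency domain
        freq = freq * torch.view_as_complex(self.weight) # learnable filter
        return torch.fft.irfft2(freq, s=x.shape[-2:], norm="ortho")
```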
2403.00402 | Utako Yamamoto | Utako Yamamoto, Hirohiko Imai, Kei Sano, Masayuki Ohzeki, Tetsuya
Matsuda and Toshiyuki Tanaka | Spatio-temporal reconstruction of substance dynamics using compressed
sensing in multi-spectral magnetic resonance spectroscopic imaging | null | Expert Systems with Applications, Vol. 232 (2023) p. 120744 | 10.1016/j.eswa.2023.120744 | null | eess.SP cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The objective of our study is to observe dynamics of multiple substances in
vivo with high temporal resolution from multi-spectral magnetic resonance
spectroscopic imaging (MRSI) data. The multi-spectral MRSI can effectively
separate spectral peaks of multiple substances and is useful to measure spatial
distributions of substances. However, it is difficult to measure time-varying
substance distributions directly by ordinary full sampling because the
measurement requires a significantly long time. In this study, we propose a
novel method to reconstruct the spatio-temporal distributions of substances
from randomly undersampled multi-spectral MRSI data on the basis of compressed
sensing (CS) and the partially separable function model with base spectra of
substances. In our method, we have employed spatio-temporal sparsity and
temporal smoothness of the substance distributions as prior knowledge to
perform CS. The effectiveness of our method has been evaluated using phantom
data sets of glass tubes filled with glucose or lactate solution in increasing
amounts over time and animal data sets of a tumor-bearing mouse to observe the
metabolic dynamics involved in the Warburg effect in vivo. The reconstructed
results are consistent with the expected behaviors, showing that our method can
reconstruct the spatio-temporal distribution of substances with a temporal
resolution of four seconds, which is an extremely short time scale compared with
that of full sampling. Since this method utilizes only prior knowledge
naturally assumed for the spatio-temporal distributions of substances and is
independent of the number of the spectral and spatial dimensions or the
acquisition sequence of MRSI, it is expected to contribute to revealing the
underlying substance dynamics in MRSI data already acquired or to be acquired
in the future.
| [
{
"created": "Fri, 1 Mar 2024 09:46:41 GMT",
"version": "v1"
}
] | 2024-03-04 | [
[
"Yamamoto",
"Utako",
""
],
[
"Imai",
"Hirohiko",
""
],
[
"Sano",
"Kei",
""
],
[
"Ohzeki",
"Masayuki",
""
],
[
"Matsuda",
"Tetsuya",
""
],
[
"Tanaka",
"Toshiyuki",
""
]
] |
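A toy proximal-gradient loop showing the two stated priors (spatio-temporal sparsity and temporal smoothness) on top of data fidelity; `A`/`At` are assumed forward/adjoint measurement operators, and the paper's partially separable model with substance base spectra is richer than this.

```python
import numpy as np

def cs_reconstruct(y, A, At, lam_sparse=0.05, lam_smooth=0.5,
                   steps=200, lr=0.5):
    x = At(y)                                  # adjoint initialization
    for _ in range(steps):
        grad = At(A(x) - y)                    # data-fidelity gradient
        # gradient of the temporal smoothness penalty sum_t ||x_t - x_{t-1}||^2
        # (circular boundary used here purely for brevity; axis 0 = time)
        grad += lam_smooth * (2 * x - np.roll(x, 1, axis=0)
                              - np.roll(x, -1, axis=0))
        x = x - lr * grad
        x = np.sign(x) * np.maximum(np.abs(x) - lr * lam_sparse, 0.0)  # l1 prox
    return x
```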
2403.00642 | Xianghong Fang | Xianghong Fang, Jian Li, Qiang Sun, Benyou Wang | Rethinking The Uniformity Metric in Self-Supervised Learning | null | ICLR 2024 | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Uniformity plays an important role in evaluating learned representations,
providing insights into self-supervised learning. In our quest for effective
uniformity metrics, we pinpoint four principled properties that such metrics
should possess. Namely, an effective uniformity metric should remain invariant
to instance permutations and sample replications while accurately capturing
feature redundancy and dimensional collapse. Surprisingly, we find that the
uniformity metric proposed by Wang and Isola (2020) fails to satisfy
the majority of these properties. Specifically, their metric is sensitive to
sample replications and cannot account for feature redundancy and dimensional
collapse correctly. To overcome these limitations, we introduce a new
uniformity metric based on the Wasserstein distance, which satisfies all the
aforementioned properties. Integrating this new metric in existing
self-supervised learning methods effectively mitigates dimensional collapse and
consistently improves their performance on downstream tasks involving CIFAR-10
and CIFAR-100 datasets. Code is available at
https://github.com/statsle/WassersteinSSL.
| [
{
"created": "Fri, 1 Mar 2024 16:22:05 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Apr 2024 08:24:11 GMT",
"version": "v2"
}
] | 2024-04-29 | [
[
"Fang",
"Xianghong",
""
],
[
"Li",
"Jian",
""
],
[
"Sun",
"Qiang",
""
],
[
"Wang",
"Benyou",
""
]
] |
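One way to instantiate a Wasserstein-based uniformity metric in the spirit of the abstract: fit a Gaussian to the normalized features and use the closed-form 2-Wasserstein distance to N(0, I/d). Whether this matches the authors' exact estimator is an assumption.

```python
import numpy as np

def wasserstein_uniformity(feats):
    # Fit N(mu, Sigma) to l2-normalized features and compare to N(0, I/d),
    # whose samples concentrate near the unit sphere. Closed form:
    # W2^2 = ||mu||^2 + tr(Sigma) + 1 - (2/sqrt(d)) * tr(Sigma^{1/2})
    z = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    d = z.shape[1]
    mu = z.mean(axis=0)
    sigma = np.cov(z, rowvar=False)
    eigvals = np.clip(np.linalg.eigvalsh(sigma), 0, None)
    return float(mu @ mu + eigvals.sum() + 1
                 - 2 / np.sqrt(d) * np.sqrt(eigvals).sum())
```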
2403.00724 | Hoda Eldardiry | Jiaying Gong and Hoda Eldardiry | Few-Shot Relation Extraction with Hybrid Visual Evidence | 16 pages, 5 figures | LREC-COLING 2024 | null | null | cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | The goal of few-shot relation extraction is to predict relations between named
entities in a sentence when only a few labeled instances are available for
training. Existing few-shot relation extraction methods focus on uni-modal
information such as text only. This reduces performance when there are no clear
contexts between the named entities described in the text. We propose a multi-modal
few-shot relation extraction model (MFS-HVE) that leverages both textual and
visual semantic information to learn a multi-modal representation jointly. The
MFS-HVE includes semantic feature extractors and multi-modal fusion components.
The MFS-HVE semantic feature extractors are developed to extract both textual
and visual features. The visual features include global image features and
local object features within the image. The MFS-HVE multi-modal fusion unit
integrates information from various modalities using image-guided attention,
object-guided attention, and hybrid feature attention to fully capture the
semantic interaction between visual regions of images and relevant texts.
Extensive experiments conducted on two public datasets demonstrate that
semantic visual information significantly improves the performance of few-shot
relation prediction.
| [
{
"created": "Fri, 1 Mar 2024 18:20:11 GMT",
"version": "v1"
}
] | 2024-03-04 | [
[
"Gong",
"Jiaying",
""
],
[
"Eldardiry",
"Hoda",
""
]
] |
2403.00772 | Muslim Chochlov | Ziyuan Ma, Conor Ryan, Jim Buckley, and Muslim Chochlov | Do Weibo platform experts perform better at predicting stock market? | null | 2021, 22nd Engineering Applications of Neural Networks Conference
(EANN 2021) | 10.1007/978-3-030-80568-5_40 | null | q-fin.ST cs.AI cs.LG cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sentiment analysis can be used for stock market prediction. However, existing
research has not studied the impact of a user's financial background on
sentiment-based forecasting of the stock market using artificial neural
networks. In this work, a novel combination of neural networks is used for the
assessment of sentiment-based stock market prediction, based on the financial
background of the population that generated the sentiment. The state-of-the-art
language processing model Bidirectional Encoder Representations from
Transformers (BERT) is used to classify the sentiment and a Long-Short Term
Memory (LSTM) model is used for time-series based stock market prediction. For
evaluation, the Weibo social networking platform is used as a sentiment data
collection source. Weibo users (and their comments respectively) are divided
into Authorized Financial Advisor (AFA) and Unauthorized Financial Advisor
(UFA) groups according to their background information, as collected by Weibo.
The Hong Kong Hang Seng index is used to extract historical stock market change
data. The results indicate that stock market prediction learned from the AFA
group users is 39.67% more precise than that learned from the UFA group users
and shows the highest accuracy (87%) when compared to existing approaches.
| [
{
"created": "Mon, 12 Feb 2024 10:04:54 GMT",
"version": "v1"
}
] | 2024-03-05 | [
[
"Ma",
"Ziyuan",
""
],
[
"Ryan",
"Conor",
""
],
[
"Buckley",
"Jim",
""
],
[
"Chochlov",
"Muslim",
""
]
] |
2403.00781 | Zhongqi Yang | Zhongqi Yang, Elahe Khatibi, Nitish Nagesh, Mahyar Abbasian, Iman
Azimi, Ramesh Jain, Amir M. Rahmani | ChatDiet: Empowering Personalized Nutrition-Oriented Food Recommender
Chatbots through an LLM-Augmented Framework | Published on Smart Health | Smart Health 32 (2024): 100465 | 10.1016/j.smhl.2024.100465 | null | cs.IR cs.AI cs.LG cs.MM | http://creativecommons.org/licenses/by/4.0/ | The profound impact of food on health necessitates advanced
nutrition-oriented food recommendation services. Conventional methods often
lack the crucial elements of personalization, explainability, and
interactivity. While Large Language Models (LLMs) bring interpretability and
explainability, their standalone use falls short of achieving true
personalization. In this paper, we introduce ChatDiet, a novel LLM-powered
framework designed specifically for personalized nutrition-oriented food
recommendation chatbots. ChatDiet integrates personal and population models,
complemented by an orchestrator, to seamlessly retrieve and process pertinent
information. The personal model leverages causal discovery and inference
techniques to assess personalized nutritional effects for a specific user,
whereas the population model provides generalized information on food
nutritional content. The orchestrator retrieves, synergizes and delivers the
output of both models to the LLM, providing tailored food recommendations
designed to support targeted health outcomes. The result is a dynamic delivery
of personalized and explainable food recommendations, tailored to individual
user preferences. Our evaluation of ChatDiet includes a compelling case study,
where we establish a causal personal model to estimate individual nutrition
effects. Our assessments, including a food recommendation test showcasing a
92% effectiveness rate, coupled with illustrative dialogue examples,
underscore ChatDiet's strengths in explainability, personalization, and
interactivity.
| [
{
"created": "Sun, 18 Feb 2024 06:07:17 GMT",
"version": "v1"
},
{
"created": "Sat, 16 Mar 2024 17:31:11 GMT",
"version": "v2"
},
{
"created": "Wed, 25 Sep 2024 06:31:09 GMT",
"version": "v3"
}
] | 2024-09-26 | [
[
"Yang",
"Zhongqi",
""
],
[
"Khatibi",
"Elahe",
""
],
[
"Nagesh",
"Nitish",
""
],
[
"Abbasian",
"Mahyar",
""
],
[
"Azimi",
"Iman",
""
],
[
"Jain",
"Ramesh",
""
],
[
"Rahmani",
"Amir M.",
""
]
] |
2403.00815 | Yue Yu | Ran Xu, Wenqi Shi, Yue Yu, Yuchen Zhuang, Bowen Jin, May D. Wang,
Joyce C. Ho, Carl Yang | RAM-EHR: Retrieval Augmentation Meets Clinical Predictions on Electronic
Health Records | ACL 2024 (Oral) | ACL 2024 | null | null | cs.CL cs.AI cs.IR q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present RAM-EHR, a Retrieval AugMentation pipeline to improve clinical
predictions on Electronic Health Records (EHRs). RAM-EHR first collects
multiple knowledge sources, converts them into text format, and uses dense
retrieval to obtain information related to medical concepts. This strategy
addresses the difficulties associated with complex names for the concepts.
RAM-EHR then augments the local EHR predictive model co-trained with
consistency regularization to capture complementary information from patient
visits and summarized knowledge. Experiments on two EHR datasets show the
efficacy of RAM-EHR over previous knowledge-enhanced baselines (3.4% gain in
AUROC and 7.2% gain in AUPR), emphasizing the effectiveness of the summarized
knowledge from RAM-EHR for clinical prediction tasks. The code will be
published at https://github.com/ritaranx/RAM-EHR.
| [
{
"created": "Sun, 25 Feb 2024 23:10:20 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Jun 2024 05:11:19 GMT",
"version": "v2"
},
{
"created": "Fri, 26 Jul 2024 23:24:39 GMT",
"version": "v3"
}
] | 2024-07-30 | [
[
"Xu",
"Ran",
""
],
[
"Shi",
"Wenqi",
""
],
[
"Yu",
"Yue",
""
],
[
"Zhuang",
"Yuchen",
""
],
[
"Jin",
"Bowen",
""
],
[
"Wang",
"May D.",
""
],
[
"Ho",
"Joyce C.",
""
],
[
"Yang",
"Carl",
""
]
] |
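A minimal sketch of co-training with consistency regularization as described above: both branches fit the labels while a symmetric KL term aligns their predictions. The weighting is an assumption.

```python
import torch
import torch.nn.functional as F

def cotrain_loss(logits_local, logits_aug, labels, alpha=1.0):
    # logits_local: local EHR model; logits_aug: retrieval-augmented model.
    ce = (F.cross_entropy(logits_local, labels)
          + F.cross_entropy(logits_aug, labels))
    p = F.log_softmax(logits_local, dim=-1)
    q = F.log_softmax(logits_aug, dim=-1)
    # symmetric KL pulls the two predictive distributions together
    consistency = 0.5 * (
        F.kl_div(p, q, log_target=True, reduction="batchmean")
        + F.kl_div(q, p, log_target=True, reduction="batchmean"))
    return ce + alpha * consistency
```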
2403.00898 | Gabriele Iommazzo | Gabriele Iommazzo, Claudia D'Ambrosio, Antonio Frangioni, Leo Liberti | The Algorithm Configuration Problem | null | In: Pardalos, P.M., Prokopyev, O.A. (eds) Encyclopedia of
Optimization. Springer, Cham. (2023) | 10.1007/978-3-030-54621-2_749-1 | null | cs.AI cs.LG math.OC | http://creativecommons.org/licenses/by/4.0/ | The field of algorithmic optimization has significantly advanced with the
development of methods for the automatic configuration of algorithmic
parameters. This article delves into the Algorithm Configuration Problem,
focused on optimizing parametrized algorithms for solving specific instances of
decision/optimization problems. We present a comprehensive framework that not
only formalizes the Algorithm Configuration Problem, but also outlines
different approaches for its resolution, leveraging machine learning models and
heuristic strategies. The article categorizes existing methodologies into
per-instance and per-problem approaches, distinguishing between offline and
online strategies for model construction and deployment. By synthesizing these
approaches, we aim to provide a clear pathway for both understanding and
addressing the complexities inherent in algorithm configuration.
| [
{
"created": "Fri, 1 Mar 2024 17:29:34 GMT",
"version": "v1"
}
] | 2024-03-05 | [
[
"Iommazzo",
"Gabriele",
""
],
[
"D'Ambrosio",
"Claudia",
""
],
[
"Frangioni",
"Antonio",
""
],
[
"Liberti",
"Leo",
""
]
] |
2403.00980 | Saugat Aryal | Saugat Aryal, Mark T. Keane | Even-Ifs From If-Onlys: Are the Best Semi-Factual Explanations Found
Using Counterfactuals As Guides? | 16 pages, 5 figures | 32nd International Conference on Case-Based Reasoning (ICCBR)
2024, Merida, Mexico | 10.1007/978-3-031-63646-2_3 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recently, counterfactuals using "if-only" explanations have become very
popular in eXplainable AI (XAI), as they describe which changes to
feature-inputs of a black-box AI system result in changes to a (usually
negative) decision-outcome. Even more recently, semi-factuals using "even-if"
explanations have gained more attention. They elucidate the feature-input
changes that do not change the decision-outcome of the AI system, with a
potential to suggest more beneficial recourses. Some semi-factual methods use
counterfactuals to the query-instance to guide semi-factual production
(so-called counterfactual-guided methods), whereas others do not (so-called
counterfactual-free methods). In this work, we perform comprehensive tests of 8
semi-factual methods on 7 datasets using 5 key metrics, to determine whether
counterfactual guidance is necessary to find the best semi-factuals. The
results of these tests suggest not, but rather that computing other aspects of
the decision space leads to better semi-factual XAI.
| [
{
"created": "Fri, 1 Mar 2024 21:04:48 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Apr 2024 15:36:15 GMT",
"version": "v2"
}
] | 2024-06-28 | [
[
"Aryal",
"Saugat",
""
],
[
"Keane",
"Mark T.",
""
]
] |
2403.01087 | Rudrabha Mukhopadhyay | Sindhu Hegde, Rudrabha Mukhopadhyay, C.V. Jawahar, Vinay Namboodiri | Towards Accurate Lip-to-Speech Synthesis in-the-Wild | 8 pages of content, 1 page of references and 4 figures | In Proceedings of the 31st ACM International Conference on
Multimedia, 2023 | 10.1145/3581783.3611787 | null | cs.MM cs.CV cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | In this paper, we introduce a novel approach to address the task of
synthesizing speech from silent videos of any in-the-wild speaker solely based
on lip movements. The traditional approach of directly generating speech from
lip videos faces the challenge of not being able to learn a robust language
model from speech alone, resulting in unsatisfactory outcomes. To overcome this
issue, we propose incorporating noisy text supervision using a state-of-the-art
lip-to-text network that instills language information into our model. The
noisy text is generated using a pre-trained lip-to-text model, enabling our
approach to work without text annotations during inference. We design a visual
text-to-speech network that utilizes the visual stream to generate accurate
speech, which is in-sync with the silent input video. We perform extensive
experiments and ablation studies, demonstrating our approach's superiority over
the current state-of-the-art methods on various benchmark datasets. Further, we
demonstrate an essential practical application of our method in assistive
technology by generating speech for an ALS patient who has lost their voice but
can make mouth movements. Our demo video, code, and additional details can be
found at
http://cvit.iiit.ac.in/research/projects/cvit-projects/ms-l2s-itw.
| [
{
"created": "Sat, 2 Mar 2024 04:07:24 GMT",
"version": "v1"
}
] | 2024-03-05 | [
[
"Hegde",
"Sindhu",
""
],
[
"Mukhopadhyay",
"Rudrabha",
""
],
[
"Jawahar",
"C. V.",
""
],
[
"Namboodiri",
"Vinay",
""
]
] |
2403.01196 | Seamus Lankford | Séamus Lankford, Haithem Afli and Andy Way | Machine Translation in the Covid domain: an English-Irish case study for
LoResMT 2021 | null | Proceedings of the 4th Workshop on Technologies for MT of Low
Resource Languages (LoResMT2021) | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Translation models for the specific domain of translating Covid data from
English to Irish were developed for the LoResMT 2021 shared task. Domain
adaptation techniques, using a Covid-adapted generic 55k corpus from the
Directorate General of Translation, were applied. Fine-tuning, mixed
fine-tuning and combined dataset approaches were compared with models trained
on an extended in-domain dataset. As part of this study, an English-Irish
dataset of Covid related data, from the Health and Education domains, was
developed. The highest-performing model used a Transformer architecture trained
with an extended in-domain Covid dataset. In the context of this study, we have
demonstrated that extending an 8k in-domain baseline dataset by just 5k lines
improved the BLEU score by 27 points.
| [
{
"created": "Sat, 2 Mar 2024 12:29:28 GMT",
"version": "v1"
}
] | 2024-03-05 | [
[
"Lankford",
"Séamus",
""
],
[
"Afli",
"Haithem",
""
],
[
"Way",
"Andy",
""
]
] |
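For context on the reported 27-point BLEU gain, this is how a corpus-level BLEU score is typically computed with sacreBLEU; the paper does not name its scorer here, so treat the toolkit choice as illustrative.

```python
import sacrebleu

def bleu(hypotheses, references):
    # hypotheses: list of system translations; references: list of reference
    # translations, one per hypothesis. Returns a score in [0, 100].
    return sacrebleu.corpus_bleu(hypotheses, [references]).score
```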
2403.01255 | Hamza Kheddar | Hamza Kheddar, Mustapha Hemis, Yassine Himeur | Automatic Speech Recognition using Advanced Deep Learning Approaches: A
survey | null | Information Fusion, Elsevier, 2024 | 10.1016/j.inffus.2024.102422 | null | cs.SD cs.AI eess.AS eess.SP | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in deep learning (DL) have posed a significant challenge
for automatic speech recognition (ASR). ASR relies on extensive training
datasets, including confidential ones, and demands substantial computational
and storage resources. Enabling adaptive systems improves ASR performance in
dynamic environments. DL techniques assume training and testing data originate
from the same domain, which is not always true. Advanced DL techniques like
deep transfer learning (DTL), federated learning (FL), and reinforcement
learning (RL) address these issues. DTL allows high-performance models using
small yet related datasets, FL enables training on confidential data without
dataset possession, and RL optimizes decision-making in dynamic environments,
reducing computation costs. This survey offers a comprehensive review of DTL,
FL, and RL-based ASR frameworks, aiming to provide insights into the latest
developments and aid researchers and professionals in understanding the current
challenges. Additionally, transformers, which are advanced DL techniques
heavily used in proposed ASR frameworks, are considered in this survey for
their ability to capture extensive dependencies in the input ASR sequence. The
paper starts by presenting the background of DTL, FL, RL, and Transformers and
then adopts a well-designed taxonomy to outline the state-of-the-art
approaches. Subsequently, a critical analysis is conducted to identify the
strengths and weaknesses of each framework. Additionally, a comparative study
is presented to highlight the existing challenges, paving the way for future
research opportunities.
| [
{
"created": "Sat, 2 Mar 2024 16:25:42 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Apr 2024 17:29:29 GMT",
"version": "v2"
}
] | 2024-04-19 | [
[
"Kheddar",
"Hamza",
""
],
[
"Hemis",
"Mustapha",
""
],
[
"Himeur",
"Yassine",
""
]
] |
2403.01263 | Katia Genovese | Katia Genovese | Single-image camera calibration with model-free distortion correction | Accepted manuscript | Optics and Lasers in Engineering, Volume 181, October 2024, 108348 | 10.1016/j.optlaseng.2024.108348 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Camera calibration is a process of paramount importance in computer vision
applications that require accurate quantitative measurements. The popular
method developed by Zhang relies on the use of a large number of images of a
planar grid of fiducial points captured in multiple poses. Although flexible
and easy to implement, Zhang's method has some limitations. The simultaneous
optimization of the entire parameter set, including the coefficients of a
predefined distortion model, may result in poor distortion correction at the
image boundaries or in miscalculation of the intrinsic parameters, even with a
reasonably small reprojection error. Indeed, applications involving image
stitching (e.g. multi-camera systems) require accurate mapping of distortion up
to the outermost regions of the image. Moreover, intrinsic parameters affect
the accuracy of camera pose estimation, which is fundamental for applications
such as vision servoing in robot navigation and automated assembly. This paper
proposes a method for estimating the complete set of calibration parameters
from a single image of a planar speckle pattern covering the entire sensor. The
correspondence between image points and physical points on the calibration
target is obtained using Digital Image Correlation. The effective focal length
and the extrinsic parameters are calculated separately after a prior evaluation
of the principal point. At the end of the procedure, a dense and uniform
model-free distortion map is obtained over the entire image. Synthetic data
with different noise levels were used to test the feasibility of the proposed
method and to compare its metrological performance with Zhang's method.
Real-world tests demonstrate the potential of the developed method to reveal
aspects of the image formation that are hidden by averaging over multiple
images.
| [
{
"created": "Sat, 2 Mar 2024 16:51:35 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Jun 2024 17:49:37 GMT",
"version": "v2"
}
] | 2024-06-25 | [
[
"Genovese",
"Katia",
""
]
] |
2403.01407 | Dipesh Gyawali | Dipesh Gyawali, Jian Zhang, BB Karki | Region-Transformer: Self-Attention Region Based Class-Agnostic Point
Cloud Segmentation | 8 pages, 5 figures, 3 tables | 19th International Joint Conference on Computer Vision, Imaging
and Computer Graphics Theory and Applications - Volume 4 VISAPP: VISAPP,
341-348, 2024 , Rome, Italy | 10.5220/0012424500003660 | null | cs.CV cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | Point cloud segmentation, which helps us understand the environment of
specific structures and objects, can be performed in class-specific and
class-agnostic ways. We propose a novel region-based transformer model called
Region-Transformer for performing class-agnostic point cloud segmentation. The
model utilizes a region-growth approach and self-attention mechanism to
iteratively expand or contract a region by adding or removing points. It is
trained on simulated point clouds with instance labels only, avoiding semantic
labels. Attention-based networks have succeeded in many previous methods of
performing point cloud segmentation. However, a region-growth approach with
attention-based networks has not yet been explored for its potential performance gain. To
our knowledge, we are the first to use a self-attention mechanism in a
region-growth approach. With the introduction of self-attention to
region-growth that can utilize local contextual information of neighborhood
points, our experiments demonstrate that the Region-Transformer model
outperforms previous class-agnostic and class-specific methods on indoor
datasets regarding clustering metrics. The model generalizes well to
large-scale scenes. Key advantages include capturing long-range dependencies
through self-attention, avoiding the need for semantic labels during training,
and applicability to a variable number of objects. The Region-Transformer model
represents a promising approach for flexible point cloud segmentation with
applications in robotics, digital twinning, and autonomous vehicles.
| [
{
"created": "Sun, 3 Mar 2024 06:13:43 GMT",
"version": "v1"
}
] | 2024-03-07 | [
[
"Gyawali",
"Dipesh",
""
],
[
"Zhang",
"Jian",
""
],
[
"Karki",
"BB",
""
]
] |
2403.01510 | Qinglin Liu | Qinglin Liu, Shengping Zhang, Quanling Meng, Bineng Zhong, Peiqiang
Liu, Hongxun Yao | End-to-End Human Instance Matting | null | IEEE T-CSVT 2023 | 10.1109/TCSVT.2023.3306400 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Human instance matting aims to estimate an alpha matte for each human
instance in an image, which is extremely challenging and has rarely been
studied so far. Despite some efforts to use instance segmentation to generate a
trimap for each instance and apply trimap-based matting methods, the resulting
alpha mattes are often inaccurate due to inaccurate segmentation. In addition,
this approach is computationally inefficient due to multiple executions of the
matting method. To address these problems, this paper proposes a novel
End-to-End Human Instance Matting (E2E-HIM) framework for simultaneous multiple
instance matting in a more efficient manner. Specifically, a general perception
network first extracts image features and decodes instance contexts into latent
codes. Then, a united guidance network exploits spatial attention and semantics
embedding to generate united semantics guidance, which encodes the locations
and semantic correspondences of all instances. Finally, an instance matting
network decodes the image features and united semantics guidance to predict all
instance-level alpha mattes. In addition, we construct a large-scale human
instance matting dataset (HIM-100K) comprising over 100,000 human images with
instance alpha matte labels. Experiments on HIM-100K demonstrate the proposed
E2E-HIM outperforms the existing methods on human instance matting with 50%
lower errors and 5X faster speed (6 instances in a 640X640 image). Experiments
on the PPM-100, RWP-636, and P3M datasets demonstrate that E2E-HIM also
achieves competitive performance on traditional human matting.
| [
{
"created": "Sun, 3 Mar 2024 13:17:10 GMT",
"version": "v1"
}
] | 2024-03-05 | [
[
"Liu",
"Qinglin",
""
],
[
"Zhang",
"Shengping",
""
],
[
"Meng",
"Quanling",
""
],
[
"Zhong",
"Bineng",
""
],
[
"Liu",
"Peiqiang",
""
],
[
"Yao",
"Hongxun",
""
]
] |
2403.01606 | Yuxiang Huang | Yuxiang Huang, John Zelek | A Unified Model Selection Technique for Spectral Clustering Based Motion
Segmentation | for the published version, see
https://openjournals.uwaterloo.ca/index.php/vsl/article/view/5870/5922 | Journal of Computational Vision and Imaging Systems 9 (2023) 68-71 | 10.15353/jcvis.v9i1.10018 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Motion segmentation is a fundamental problem in computer vision and is
crucial in various applications such as robotics, autonomous driving and action
recognition. Recently, spectral clustering based methods have shown impressive
results on motion segmentation in dynamic environments. These methods perform
spectral clustering on motion affinity matrices to cluster objects or point
trajectories in the scene into different motion groups. However, existing
methods often need the number of motions present in the scene to be known,
which significantly reduces their practicality. In this paper, we propose a
unified model selection technique to automatically infer the number of motion
groups for spectral clustering based motion segmentation methods by combining
different existing model selection techniques together. We evaluate our method
on the KT3DMoSeg dataset and achieve competitive results compared to the
baseline where the number of clusters is given as ground truth information.
| [
{
"created": "Sun, 3 Mar 2024 20:16:14 GMT",
"version": "v1"
},
{
"created": "Mon, 6 May 2024 22:19:22 GMT",
"version": "v2"
}
] | 2024-05-08 | [
[
"Huang",
"Yuxiang",
""
],
[
"Zelek",
"John",
""
]
] |
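One classical ingredient such a unified technique can combine is the eigengap heuristic on the normalized Laplacian of the motion affinity matrix; the paper fuses several criteria, and this shows only the eigengap piece.

```python
import numpy as np

def eigengap_num_clusters(affinity, max_k=10):
    # Build the symmetric normalized Laplacian and pick the number of
    # clusters at the largest gap among its smallest eigenvalues.
    d = affinity.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    lap = np.eye(len(affinity)) - d_inv_sqrt @ affinity @ d_inv_sqrt
    eigvals = np.sort(np.linalg.eigvalsh(lap))[:max_k + 1]
    gaps = np.diff(eigvals)
    return int(np.argmax(gaps)) + 1   # inferred number of motion groups
```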
2403.01861 | Jaehoon Jang | Jaehoon Jang, Inha Lee, Minje Kim, Kyungdon Joo | AiSDF: Structure-aware Neural Signed Distance Fields in Indoor Scenes | 8 pages, 6 figures, Accepted to IEEE RA-L (First two authors
contributed equally) | IEEE Robotics and Automation Letters (RA-L), vol. 9, no. 5, pp.
4106-4113, 2024 | 10.1109/LRA.2024.3375117 | null | cs.RO cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The indoor scenes we live in are visually homogeneous or textureless, while
they inherently have structural forms and provide enough structural priors for
3D scene reconstruction. Motivated by this fact, we propose a structure-aware
online signed distance fields (SDF) reconstruction framework in indoor scenes,
especially under the Atlanta world (AW) assumption. Thus, we dub this
incremental SDF reconstruction for AW as AiSDF. Within the online framework, we
infer the underlying Atlanta structure of a given scene and then estimate
planar surfel regions supporting the Atlanta structure. This Atlanta-aware
surfel representation provides an explicit planar map for a given scene. In
addition, based on these Atlanta planar surfel regions, we adaptively sample
and constrain the structural regularity in the SDF reconstruction, which
enables us to improve the reconstruction quality by maintaining a high-level
structure while enhancing the details of a given scene. We evaluate the
proposed AiSDF on the ScanNet and ReplicaCAD datasets, where we demonstrate
that the proposed framework is capable of reconstructing fine details of
objects implicitly, as well as structures explicitly in room-scale scenes.
| [
{
"created": "Mon, 4 Mar 2024 09:18:13 GMT",
"version": "v1"
}
] | 2024-03-26 | [
[
"Jang",
"Jaehoon",
""
],
[
"Lee",
"Inha",
""
],
[
"Kim",
"Minje",
""
],
[
"Joo",
"Kyungdon",
""
]
] |
2403.01868 | Maxime Noizet | Benjamin Missaoui (Heudiasyc), Maxime Noizet (Heudiasyc), Philippe Xu
(Heudiasyc) | Map-aided annotation for pole base detection | null | 35th IEEE Intelligent Vehicles Symposium (IV 2023), Jun 2023,
Anchorage, AK, United States | 10.1109/IV55152.2023.10186774 | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For autonomous navigation, high definition maps are a widely used source of
information. Pole-like features encoded in HD maps such as traffic signs,
traffic lights or street lights can be used as landmarks for localization. For
this purpose, they first need to be detected by the vehicle using its embedded
sensors. While geometric models can be used to process 3D point clouds
retrieved by lidar sensors, modern image-based approaches rely on deep neural
network and therefore heavily depend on annotated training data. In this paper,
a 2D HD map is used to automatically annotate pole-like features in images. In
the absence of height information, the map features are represented as pole
bases at the ground level. We show how an additional lidar sensor can be used
to filter out occluded features and refine the ground projection. We also
demonstrate how an object detector can be trained to detect a pole base. To
evaluate our methodology, it is first validated with data manually annotated
from semantic segmentation and then compared to our own automatically generated
annotated data recorded in the city of Compiègne, France. Erratum: In the
original version [1], an error occurred in the accuracy evaluation of the
different models studied and the evaluation method applied on the detection
results was not clearly defined. In this revision, we offer a rectification to
this segment, presenting updated results, especially in terms of Mean Absolute
Errors (MAE).
| [
{
"created": "Mon, 4 Mar 2024 09:23:11 GMT",
"version": "v1"
}
] | 2024-03-05 | [
[
"Missaoui",
"Benjamin",
"",
"Heudiasyc"
],
[
"Noizet",
"Maxime",
"",
"Heudiasyc"
],
[
"Xu",
"Philippe",
"",
"Heudiasyc"
]
] |
2403.01985 | Seamus Lankford | Séamus Lankford, Haithem Afli and Andy Way | Transformers for Low-Resource Languages: Is Féidir Linn! | 13 pages | Proceedings of Machine Translation Summit XVIII: Research Track
2021 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | The Transformer model is the state-of-the-art in Machine Translation.
However, in general, neural translation models often underperform on language
pairs with insufficient training data. As a consequence, relatively few
experiments have been carried out using this architecture on low-resource
language pairs. In this study, hyperparameter optimization of Transformer
models in translating the low-resource English-Irish language pair is
evaluated. We demonstrate that choosing appropriate parameters leads to
considerable performance improvements. Most importantly, the correct choice of
subword model is shown to be the biggest driver of translation performance.
SentencePiece models using both unigram and BPE approaches were appraised.
Variations on model architectures included modifying the number of layers,
testing various regularisation techniques and evaluating the optimal number of
heads for attention. A generic 55k DGT corpus and an in-domain 88k public admin
corpus were used for evaluation. A Transformer-optimized model demonstrated a
BLEU score improvement of 7.8 points when compared with a baseline RNN model.
Improvements were observed across a range of metrics, including TER, indicating
a substantially reduced post-editing effort for Transformer-optimized models
with 16k BPE subword models. Benchmarked against Google Translate, our
translation engines demonstrated significant improvements. The question of
whether or not Transformers can be used effectively in a low-resource setting
of English-Irish translation has been addressed. Is f\'eidir linn - yes we can.
| [
{
"created": "Mon, 4 Mar 2024 12:29:59 GMT",
"version": "v1"
}
] | 2024-10-08 | [
[
"Lankford",
"Séamus",
""
],
[
"Afli",
"Haithem",
""
],
[
"Way",
"Andy",
""
]
] |
2403.02053 | Zhipeng Ma | Zhipeng Ma, Bo N{\o}rregaard J{\o}rgensen, Zheng Ma | A Scoping Review of Energy-Efficient Driving Behaviors and Applied
State-of-the-Art AI Methods | null | Energies 2024, 17, 500 | 10.3390/en17020500 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The transportation sector remains a major contributor to greenhouse gas
emissions. The understanding of energy-efficient driving behaviors and
utilization of energy-efficient driving strategies are essential to reduce
vehicles' fuel consumption. However, there is no comprehensive investigation
into energy-efficient driving behaviors and strategies. Furthermore, many
state-of-the-art AI models have been applied for the analysis of eco-friendly
driving styles, but no overview is available. To fill the gap, this paper
conducts a thorough literature review on ecological driving behaviors and
styles and analyzes the driving factors influencing energy consumption and
state-of-the-art methodologies. With a thorough scoping review process, the
methodological and related data are compared. The results show that the factors
that impact driving behaviors can be summarized into eleven features including
speed, acceleration, deceleration, pedal, and so on. This paper finds that
supervised/unsupervised learning algorithms and reinforcement learning
frameworks have been popularly used to model the vehicle's energy consumption
with multi-dimensional data. Furthermore, the literature shows that the driving
data are collected from either simulators or real-world experiments, and the
real-world data are mainly stored and transmitted by meters, controller area
networks, onboard data services, smartphones, and additional sensors installed
in the vehicle. Based on driving behavior factors, driver characteristics, and
safety rules, this paper recommends nine energy-efficient driving styles
including four guidelines for the drivers' selection and adjustment of the
vehicle parameters, three recommendations for the energy-efficient driving
styles in different driving scenarios, and two subjective suggestions for
different types of drivers and employers.
| [
{
"created": "Mon, 4 Mar 2024 13:57:34 GMT",
"version": "v1"
}
] | 2024-03-05 | [
[
"Ma",
"Zhipeng",
""
],
[
"Jørgensen",
"Bo Nørregaard",
""
],
[
"Ma",
"Zheng",
""
]
] |
2403.02069 | Aisha Lawal Shuaibu | Aisha L. Shuaibu, Ivor J. A. Simpson | HyperPredict: Estimating Hyperparameter Effects for Instance-Specific
Regularization in Deformable Image Registration | Accepted for publication at the Journal of Machine Learning for
Biomedical Imaging (MELBA) https://melba-journal.org/2024:005 | Machine.Learning.for.Biomedical.Imaging. 2 (2024) | 10.59275/j.melba.2024-d434 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Methods for medical image registration infer geometric transformations that
align pairs/groups of images by maximising an image similarity metric. This
problem is ill-posed, as several solutions may have equivalent likelihoods;
moreover, optimising purely for image similarity can yield implausible
transformations. For these reasons, regularization terms are essential to obtain
meaningful
registration results. However, this requires the introduction of at least one
hyperparameter often termed $\lambda$, that serves as a tradeoff between loss
terms. In some situations, the quality of the estimated transformation greatly
depends on hyperparameter choice, and different choices may be required
depending on the characteristics of the data. Analyzing the effect of these
hyperparameters requires labelled data, which is not commonly available at
test-time. In this paper, we propose a method for evaluating the influence of
hyperparameters and subsequently selecting an optimal value for given image
pairs. Our approach which we call HyperPredict, implements a Multi-Layer
Perceptron that learns the effect of selecting particular hyperparameters for
registering an image pair by predicting the resulting segmentation overlap and
measure of deformation smoothness. This approach enables us to select optimal
hyperparameters at test time without requiring labelled data, removing the need
for a one-size-fits-all cross-validation approach. Furthermore, the criterion
used to define the optimal hyperparameter is flexible post-training, allowing us
to efficiently choose specific properties. We evaluate our proposed method on
the OASIS brain MR dataset using a recent deep learning approach (cLapIRN) and
an algorithmic method (Niftyreg). Our results demonstrate good performance in
predicting the effects of regularization hyperparameters and highlight the
benefits of our image-pair specific approach to hyperparameter selection.
| [
{
"created": "Mon, 4 Mar 2024 14:17:30 GMT",
"version": "v1"
},
{
"created": "Sat, 16 Mar 2024 13:52:03 GMT",
"version": "v2"
}
] | 2024-03-19 | [
[
"Shuaibu",
"Aisha L.",
""
],
[
"Simpson",
"Ivor J. A.",
""
]
] |
2403.02078 | Qiao Wang | Qiao Wang, Ralph Rose, Naho Orita, Ayaka Sugawara | Automated Generation of Multiple-Choice Cloze Questions for Assessing
English Vocabulary Using GPT-turbo 3.5 | null | Mika H\"am\"al\"ainen, Emily \"Ohman, Flammie Pirinen, Khalid
Alnajjar, So Miyagawa, Yuri Bizzoni, Niko Partanen, and Jack Rueter. 2023.
Proc. of the Joint 3rd International Conference on NLP4DH and 8th IWCLUL.
ACL, Tokyo, Japan, edition | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | A common way of assessing language learners' mastery of vocabulary is via
multiple-choice cloze (i.e., fill-in-the-blank) questions. But the creation of
test items can be laborious for individual teachers or in large-scale language
programs. In this paper, we evaluate a new method for automatically generating
these types of questions using large language models (LLM). The VocaTT
(vocabulary teaching and training) engine is written in Python and comprises
three basic steps: pre-processing target word lists, generating sentences and
candidate word options using GPT, and finally selecting suitable word options.
To test the efficiency of this system, 60 questions were generated targeting
academic words. The generated items were reviewed by expert reviewers who
judged the well-formedness of the sentences and word options, adding comments
to items judged not well-formed. Results showed a 75% rate of well-formedness
for sentences and a 66.85% rate for suitable word options. This is a marked
improvement over the generator used earlier in our research which did not take
advantage of GPT's capabilities. Post-hoc qualitative analysis reveals several
points for improvement in future work including cross-referencing
part-of-speech tagging, better sentence validation, and improving GPT prompts.
| [
{
"created": "Mon, 4 Mar 2024 14:24:47 GMT",
"version": "v1"
}
] | 2024-03-05 | [
[
"Wang",
"Qiao",
""
],
[
"Rose",
"Ralph",
""
],
[
"Orita",
"Naho",
""
],
[
"Sugawara",
"Ayaka",
""
]
] |
2403.02112 | Hugo Bohy | Hugo Bohy, Kevin El Haddad and Thierry Dutoit | A New Perspective on Smiling and Laughter Detection: Intensity Levels
Matter | null | In 2022 10th International Conference on Affective Computing and
Intelligent Interaction (ACII) (pp. 1-8). IEEE | 10.1109/ACII55700.2022.9953896 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Smiles and laughs detection systems have attracted a lot of attention in the
past decade contributing to the improvement of human-agent interaction systems.
But very few have treated these expressions as distinct, although no prior work
has clearly established whether or not they belong to the same category. In this
work, we
present a deep learning-based multimodal smile and laugh classification system,
considering them as two different entities. We compare the use of audio and
vision-based models as well as a fusion approach. We show that, as expected,
the fusion leads to a better generalization on unseen data. We also present an
in-depth analysis of the behavior of these models on the smiles and laughs
intensity levels. The analyses on the intensity levels show that the
relationship between smiles and laughs might not be as simple as a binary one,
or even as grouping them into a single category, and so a more complex approach
should be taken when dealing with them. We also tackle the problem of limited
resources by showing that transfer learning allows the models to improve the
detection of confusing intensity levels.
| [
{
"created": "Mon, 4 Mar 2024 15:15:57 GMT",
"version": "v1"
}
] | 2024-03-05 | [
[
"Bohy",
"Hugo",
""
],
[
"Haddad",
"Kevin El",
""
],
[
"Dutoit",
"Thierry",
""
]
] |
2403.02227 | Yongzhao Wang | Ariyan Bighashdel, Yongzhao Wang, Stephen McAleer, Rahul Savani, Frans
A. Oliehoek | Policy Space Response Oracles: A Survey | Ariyan Bighashdel and Yongzhao Wang contributed equally | The 33rd International Joint Conference on Artificial
Intelligence, 2024 | null | null | cs.GT cs.AI cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Game theory provides a mathematical way to study the interaction between
multiple decision makers. However, classical game-theoretic analysis is limited
in scalability due to the large number of strategies, precluding direct
application to more complex scenarios. This survey provides a comprehensive
overview of a framework for large games, known as Policy Space Response Oracles
(PSRO), which holds promise to improve scalability by focusing attention on
sufficient subsets of strategies. We first motivate PSRO and provide historical
context. We then focus on the strategy exploration problem for PSRO: the
challenge of assembling effective subsets of strategies that still represent
the original game well with minimum computational cost. We survey current
research directions for enhancing the efficiency of PSRO, and explore the
applications of PSRO across various domains. We conclude by discussing open
questions and future research.
| [
{
"created": "Mon, 4 Mar 2024 17:15:09 GMT",
"version": "v1"
},
{
"created": "Mon, 27 May 2024 16:49:18 GMT",
"version": "v2"
}
] | 2024-05-28 | [
[
"Bighashdel",
"Ariyan",
""
],
[
"Wang",
"Yongzhao",
""
],
[
"McAleer",
"Stephen",
""
],
[
"Savani",
"Rahul",
""
],
[
"Oliehoek",
"Frans A.",
""
]
] |
2403.02232 | Zhenglin Li | Zhenglin Li, Haibei Zhu, Houze Liu, Jintong Song, Qishuo Cheng | Comprehensive evaluation of Mal-API-2019 dataset by machine learning in
malware detection | null | International Journal of Computer Science and Information
Technology, 2024, 2(1), 1-9 | 10.62051/ijcsit.v2n1.01 | null | cs.CR cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study conducts a thorough examination of malware detection using machine
learning techniques, focusing on the evaluation of various classification
models using the Mal-API-2019 dataset. The aim is to advance cybersecurity
capabilities by identifying and mitigating threats more effectively. Both
ensemble and non-ensemble machine learning methods, such as Random Forest,
XGBoost, K-Nearest Neighbors (KNN), and Neural Networks, are explored. Special
emphasis is placed on the importance of data pre-processing techniques,
particularly TF-IDF representation and Principal Component Analysis, in
improving model performance. Results indicate that ensemble methods,
particularly Random Forest and XGBoost, exhibit superior accuracy, precision,
and recall compared to others, highlighting their effectiveness in malware
detection. The paper also discusses limitations and potential future
directions, emphasizing the need for continuous adaptation to address the
evolving nature of malware. This research contributes to ongoing discussions in
cybersecurity and provides practical insights for developing more robust
malware detection systems in the digital era.
| [
{
"created": "Mon, 4 Mar 2024 17:22:43 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Mar 2024 21:33:18 GMT",
"version": "v2"
}
] | 2024-03-27 | [
[
"Li",
"Zhenglin",
""
],
[
"Zhu",
"Haibei",
""
],
[
"Liu",
"Houze",
""
],
[
"Song",
"Jintong",
""
],
[
"Cheng",
"Qishuo",
""
]
] |
2403.02241 | Damien Teney | Damien Teney, Armand Nicolicioiu, Valentin Hartmann, Ehsan Abbasnejad | Neural Redshift: Random Networks are not Random Functions | null | IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
2024 | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Our understanding of the generalization capabilities of neural networks (NNs)
is still incomplete. Prevailing explanations are based on implicit biases of
gradient descent (GD) but they cannot account for the capabilities of models
from gradient-free methods nor the simplicity bias recently observed in
untrained networks. This paper seeks other sources of generalization in NNs.
Findings. To understand the inductive biases provided by architectures
independently from GD, we examine untrained, random-weight networks. Even
simple MLPs show strong inductive biases: uniform sampling in weight space
yields a very biased distribution of functions in terms of complexity. But
contrary to common wisdom, NNs do not have an inherent "simplicity bias". This
property depends on components such as ReLUs, residual connections, and layer
normalizations. Alternative architectures can be built with a bias for any
level of complexity. Transformers also inherit all these properties from their
building blocks.
Implications. We provide a fresh explanation for the success of deep learning
independent from gradient-based training. It points at promising avenues for
controlling the solutions implemented by trained models.
| [
{
"created": "Mon, 4 Mar 2024 17:33:20 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Mar 2024 11:43:24 GMT",
"version": "v2"
}
] | 2024-03-06 | [
[
"Teney",
"Damien",
""
],
[
"Nicolicioiu",
"Armand",
""
],
[
"Hartmann",
"Valentin",
""
],
[
"Abbasnejad",
"Ehsan",
""
]
] |
2403.02243 | Cameron R. Wolfe | Cameron R. Wolfe and Anastasios Kyrillidis | Better Schedules for Low Precision Training of Deep Neural Networks | 20 pages, 8 figures, 1 table, ACML 2023 | Machine Learning (2024): 1-19 | 10.1007/s10994-023-06480-0 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Low precision training can significantly reduce the computational overhead of
training deep neural networks (DNNs). Though many such techniques exist, cyclic
precision training (CPT), which dynamically adjusts precision throughout
training according to a cyclic schedule, achieves particularly impressive
improvements in training efficiency, while actually improving DNN performance.
Existing CPT implementations take common learning rate schedules (e.g.,
cyclical cosine schedules) and use them for low precision training without
adequate comparisons to alternative scheduling options. We define a diverse
suite of CPT schedules and analyze their performance across a variety of DNN
training regimes, some of which are unexplored in the low precision training
literature (e.g., node classification with graph neural networks). From these
experiments, we discover alternative CPT schedules that offer further
improvements in training efficiency and model performance, as well as derive a
set of best practices for choosing CPT schedules. Going further, we find that a
correlation exists between model performance and training cost, and that
changing the underlying CPT schedule can control the tradeoff between these two
variables. To explain the direct correlation between model performance and
training cost, we draw a connection between quantized training and critical
learning periods, suggesting that aggressive quantization is a form of learning
impairment that can permanently damage model performance.
| [
{
"created": "Mon, 4 Mar 2024 17:33:39 GMT",
"version": "v1"
}
] | 2024-03-05 | [
[
"Wolfe",
"Cameron R.",
""
],
[
"Kyrillidis",
"Anastasios",
""
]
] |
2403.02311 | Yidong Zhao | Yidong Zhao, Joao Tourais, Iain Pierce, Christian Nitsche, Thomas A.
Treibel, Sebastian Weing\"artner, Artur M. Schweidtmann, Qian Tao | Bayesian Uncertainty Estimation by Hamiltonian Monte Carlo: Applications
to Cardiac MRI Segmentation | Accepted for publication at the Journal of Machine Learning for
Biomedical Imaging (MELBA) https://melba-journal.org/2024:011 | Machine.Learning.for.Biomedical.Imaging. 2 (2024) | 10.59275/j.melba.2024-88fa | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep learning (DL)-based methods have achieved state-of-the-art performance
for many medical image segmentation tasks. Nevertheless, recent studies show
that deep neural networks (DNNs) can be miscalibrated and overconfident,
leading to "silent failures" that are risky for clinical applications. Bayesian
DL provides an intuitive approach to DL failure detection, based on posterior
probability estimation. However, the posterior is intractable for large medical
image segmentation DNNs. To tackle this challenge, we propose a Bayesian
learning framework using Hamiltonian Monte Carlo (HMC), tempered by cold
posterior (CP) to accommodate medical data augmentation, named HMC-CP. For HMC
computation, we further propose a cyclical annealing strategy, capturing both
local and global geometries of the posterior distribution, enabling highly
efficient Bayesian DNN training with the same computational budget as training
a single DNN. The resulting Bayesian DNN outputs an ensemble segmentation along
with the segmentation uncertainty. We evaluate the proposed HMC-CP extensively
on cardiac magnetic resonance image (MRI) segmentation, using in-domain
steady-state free precession (SSFP) cine images as well as out-of-domain
datasets of quantitative T1 and T2 mapping. Our results show that the proposed
method improves both segmentation accuracy and uncertainty estimation for in-
and out-of-domain data, compared with well-established baseline methods such as
Monte Carlo Dropout and Deep Ensembles. Additionally, we establish a conceptual
link between HMC and the commonly known stochastic gradient descent (SGD) and
provide general insight into the uncertainty of DL. This uncertainty is
implicitly encoded in the training dynamics but often overlooked. With reliable
uncertainty estimation, our method provides a promising direction toward
trustworthy DL in clinical applications.
| [
{
"created": "Mon, 4 Mar 2024 18:47:56 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Jun 2024 11:14:21 GMT",
"version": "v2"
},
{
"created": "Thu, 27 Jun 2024 08:21:51 GMT",
"version": "v3"
}
] | 2024-06-28 | [
[
"Zhao",
"Yidong",
""
],
[
"Tourais",
"Joao",
""
],
[
"Pierce",
"Iain",
""
],
[
"Nitsche",
"Christian",
""
],
[
"Treibel",
"Thomas A.",
""
],
[
"Weingärtner",
"Sebastian",
""
],
[
"Schweidtmann",
"Artur M.",
""
],
[
"Tao",
"Qian",
""
]
] |
2403.02366 | Seamus Lankford | S\'eamus Lankford, Haithem Afli and Andy Way | Human Evaluation of English--Irish Transformer-Based NMT | arXiv admin note: text overlap with arXiv:2403.01985 | Information 2022, 13(7), 309 | 10.3390/info13070309 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this study, a human evaluation is carried out on how hyperparameter
settings impact the quality of Transformer-based Neural Machine Translation
(NMT) for the low-resourced English--Irish pair. SentencePiece models using
both Byte Pair Encoding (BPE) and unigram approaches were appraised. Variations
in model architectures included modifying the number of layers, evaluating the
optimal number of heads for attention and testing various regularisation
techniques. The greatest performance improvement was recorded for a
Transformer-optimized model with a 16k BPE subword model. Compared with a
baseline Recurrent Neural Network (RNN) model, a Transformer-optimized model
demonstrated a BLEU score improvement of 7.8 points. When benchmarked against
Google Translate, our translation engines demonstrated significant
improvements. Furthermore, a quantitative fine-grained manual evaluation was
conducted which compared the performance of machine translation systems. Using
the Multidimensional Quality Metrics (MQM) error taxonomy, a human evaluation
of the error types generated by an RNN-based system and a Transformer-based
system was explored. Our findings show the best-performing Transformer system
significantly reduces both accuracy and fluency errors when compared with an
RNN-based model.
| [
{
"created": "Mon, 4 Mar 2024 11:45:46 GMT",
"version": "v1"
}
] | 2024-03-06 | [
[
"Lankford",
"Séamus",
""
],
[
"Afli",
"Haithem",
""
],
[
"Way",
"Andy",
""
]
] |
2403.02367 | Seamus Lankford | S\'eamus Lankford, Haithem Afli and Andy Way | adaptNMT: an open-source, language-agnostic development environment for
Neural Machine Translation | null | Language Resources and Evaluation 57, 1671-1696, (2023) | 10.1007/s10579-023-09671-2 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | adaptNMT streamlines all processes involved in the development and deployment
of RNN and Transformer neural translation models. As an open-source
application, it is designed for both technical and non-technical users who work
in the field of machine translation. Built upon the widely-adopted OpenNMT
ecosystem, the application is particularly useful for new entrants to the field
since the setup of the development environment and creation of train,
validation and test splits is greatly simplified. Graphing, embedded within the
application, illustrates the progress of model training, and SentencePiece is
used for creating subword segmentation models. Hyperparameter customization is
facilitated through an intuitive user interface, and a single-click model
development approach has been implemented. Models developed by adaptNMT can be
evaluated using a range of metrics, and deployed as a translation service
within the application. To support eco-friendly research in the NLP space, a
green report also flags the power consumption and kgCO$_{2}$ emissions
generated during model development. The application is freely available.
| [
{
"created": "Mon, 4 Mar 2024 12:10:17 GMT",
"version": "v1"
}
] | 2024-03-06 | [
[
"Lankford",
"Séamus",
""
],
[
"Afli",
"Haithem",
""
],
[
"Way",
"Andy",
""
]
] |
2403.02368 | Zhipeng Ma | Zhipeng Ma, Bo N{\o}rregaard J{\o}rgensen, Zheng Grace Ma | A Novel Hybrid Feature Importance and Feature Interaction Detection
Framework for Predictive Optimization in Industry 4.0 Applications | null | IECON 2023- 49th Annual Conference of the IEEE Industrial
Electronics Society | 10.1109/IECON51785.2023.10312491 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Advanced machine learning algorithms are increasingly utilized to provide
data-based prediction and decision-making support in Industry 4.0. However, the
prediction accuracy achieved by the existing models is insufficient to warrant
practical implementation in real-world applications. This is because not all
features present in real-world datasets possess a direct relevance to the
predictive analysis being conducted. Consequently, the careful incorporation of
select features has the potential to yield a substantial positive impact on the
outcome. To address the research gap, this paper proposes a novel hybrid
framework that combines the feature importance detector - local interpretable
model-agnostic explanations (LIME) and the feature interaction detector -
neural interaction detection (NID), to improve prediction accuracy. By applying
the proposed framework, unnecessary features can be eliminated, and
interactions are encoded to generate a more conducive dataset for predictive
purposes. Subsequently, the proposed model is deployed to refine the prediction
of electricity consumption in foundry processing. The experimental outcomes
show an improvement of up to 9.56% in the R2 score and a reduction of up to
24.05% in the root mean square error.
| [
{
"created": "Mon, 4 Mar 2024 13:22:53 GMT",
"version": "v1"
}
] | 2024-03-06 | [
[
"Ma",
"Zhipeng",
""
],
[
"Jørgensen",
"Bo Nørregaard",
""
],
[
"Ma",
"Zheng Grace",
""
]
] |
2403.02370 | Seamus Lankford | S\'eamus Lankford, Haithem Afli and Andy Way | adaptMLLM: Fine-Tuning Multilingual Language Models on Low-Resource
Languages with Integrated LLM Playgrounds | null | Information 2023, 14(12), 638 | 10.3390/info14120638 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | The advent of Multilingual Language Models (MLLMs) and Large Language Models
has spawned innovation in many areas of natural language processing. Despite
the exciting potential of this technology, its impact on developing
high-quality Machine Translation (MT) outputs for low-resource languages
remains relatively under-explored. Furthermore, an open-source application,
dedicated to both fine-tuning MLLMs and managing the complete MT workflow for
low-resource languages, remains unavailable. We aim to address these
imbalances through the development of adaptMLLM, which streamlines all
processes involved in the fine-tuning of MLLMs for MT. This open-source
application is tailored for developers, translators, and users who are engaged
in MT. An intuitive interface allows for easy customisation of hyperparameters,
and the application offers a range of metrics for model evaluation and the
capability to deploy models as a translation service directly within the
application. As a multilingual tool, we used adaptMLLM to fine-tune models for
two low-resource language pairs: English to Irish (EN$\leftrightarrow$GA) and
English to Marathi (EN$\leftrightarrow$MR). Compared with baselines from the
LoResMT2021 Shared Task, the adaptMLLM system demonstrated significant
improvements. In the EN$\rightarrow$GA direction, an improvement of 5.2 BLEU
points was observed and an increase of 40.5 BLEU points was recorded in the
GA$\rightarrow$EN direction. Significant improvements in the translation
performance of the EN$\leftrightarrow$MR pair were also observed notably in the
MR$\rightarrow$EN direction with an increase of 21.3 BLEU points. Finally, a
fine-grained human evaluation of the MLLM output on the EN$\rightarrow$GA pair
was conducted using the Multidimensional Quality Metrics and Scalar Quality
Metrics error taxonomies. The application and models are freely available.
| [
{
"created": "Mon, 4 Mar 2024 14:49:18 GMT",
"version": "v1"
}
] | 2024-03-06 | [
[
"Lankford",
"Séamus",
""
],
[
"Afli",
"Haithem",
""
],
[
"Way",
"Andy",
""
]
] |
2403.02451 | Adil Soubki | Adil Soubki, John Murzaku, Arash Yousefi Jordehi, Peter Zeng,
Magdalena Markowska, Seyed Abolghasem Mirroshandel, Owen Rambow | Views Are My Own, but Also Yours: Benchmarking Theory of Mind Using
Common Ground | null | ACL 2024 Findings | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evaluating the theory of mind (ToM) capabilities of language models (LMs) has
recently received a great deal of attention. However, many existing benchmarks
rely on synthetic data, which risks misaligning the resulting experiments with
human behavior. We introduce the first ToM dataset based on naturally occurring
spoken dialogs, Common-ToM, and show that LMs struggle to demonstrate ToM. We
then show that integrating a simple, explicit representation of beliefs
improves LM performance on Common-ToM.
| [
{
"created": "Mon, 4 Mar 2024 20:07:17 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Jun 2024 00:30:01 GMT",
"version": "v2"
}
] | 2024-06-11 | [
[
"Soubki",
"Adil",
""
],
[
"Murzaku",
"John",
""
],
[
"Jordehi",
"Arash Yousefi",
""
],
[
"Zeng",
"Peter",
""
],
[
"Markowska",
"Magdalena",
""
],
[
"Mirroshandel",
"Seyed Abolghasem",
""
],
[
"Rambow",
"Owen",
""
]
] |
2403.02772 | Ali Abedi | Mark Karlov, Ali Abedi, Shehroz S. Khan | Rehabilitation Exercise Quality Assessment through Supervised
Contrastive Learning with Hard and Soft Negatives | 23 pages, 4 figures, 5 tables | Medical & Biological Engineering & Computing Journal, 2024 | 10.1007/s11517-024-03177-x | null | cs.LG cs.AI cs.CV cs.CY | http://creativecommons.org/licenses/by/4.0/ | Exercise-based rehabilitation programs have proven to be effective in
enhancing the quality of life and reducing mortality and rehospitalization
rates. AI-driven virtual rehabilitation, which allows patients to independently
complete exercises at home, utilizes AI algorithms to analyze exercise data,
providing feedback to patients and updating clinicians on their progress. These
programs commonly prescribe a variety of exercise types, leading to a distinct
challenge in rehabilitation exercise assessment datasets: while abundant in
overall training samples, these datasets often have a limited number of samples
for each individual exercise type. This disparity hampers the ability of
existing approaches to train generalizable models with such a small sample size
per exercise type. Addressing this issue, this paper introduces a novel
supervised contrastive learning framework with hard and soft negative samples
that effectively utilizes the entire dataset to train a single model applicable
to all exercise types. This model, with a Spatial-Temporal Graph Convolutional
Network (ST-GCN) architecture, demonstrated enhanced generalizability across
exercises and a decrease in overall complexity. Through extensive experiments
on three publicly available rehabilitation exercise assessment datasets,
UI-PRMD, IRDS, and KIMORE, our method has proven to surpass existing methods,
setting a new benchmark in rehabilitation exercise quality assessment.
| [
{
"created": "Tue, 5 Mar 2024 08:38:25 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Aug 2024 15:54:49 GMT",
"version": "v2"
}
] | 2024-08-12 | [
[
"Karlov",
"Mark",
""
],
[
"Abedi",
"Ali",
""
],
[
"Khan",
"Shehroz S.",
""
]
] |
2403.02782 | Kumaranage Ravindu Nagasinghe | Kumaranage Ravindu Yasas Nagasinghe, Honglu Zhou, Malitha
Gunawardhana, Martin Renqiang Min, Daniel Harari, Muhammad Haris Khan | Why Not Use Your Textbook? Knowledge-Enhanced Procedure Planning of
Instructional Videos | 8 pages, 6 figures, (supplementary material: 9 pages, 5 figures),
accepted to CVPR 2024 | Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR) 2024 , Pages 18816-18826 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this paper, we explore the capability of an agent to construct a logical
sequence of action steps, thereby assembling a strategic procedural plan. This
plan is crucial for navigating from an initial visual observation to a target
visual outcome, as depicted in real-life instructional videos. Existing works
have attained partial success by extensively leveraging various sources of
information available in the datasets, such as heavy intermediate visual
observations, procedural names, or natural language step-by-step instructions,
for features or supervision signals. However, the task remains formidable due
to the implicit causal constraints in the sequencing of steps and the
variability inherent in multiple feasible plans. To tackle these intricacies
that previous efforts have overlooked, we propose to enhance the capabilities
of the agent by infusing it with procedural knowledge. This knowledge, sourced
from training procedure plans and structured as a directed weighted graph,
equips the agent to better navigate the complexities of step sequencing and its
potential variations. We coin our approach KEPP, a novel Knowledge-Enhanced
Procedure Planning system, which harnesses a probabilistic procedural knowledge
graph extracted from training data, effectively acting as a comprehensive
textbook for the training domain. Experimental evaluations across three
widely-used datasets under settings of varying complexity reveal that KEPP
attains superior, state-of-the-art results while requiring only minimal
supervision.
| [
{
"created": "Tue, 5 Mar 2024 08:55:51 GMT",
"version": "v1"
},
{
"created": "Sat, 15 Jun 2024 17:55:58 GMT",
"version": "v2"
}
] | 2024-06-18 | [
[
"Nagasinghe",
"Kumaranage Ravindu Yasas",
""
],
[
"Zhou",
"Honglu",
""
],
[
"Gunawardhana",
"Malitha",
""
],
[
"Min",
"Martin Renqiang",
""
],
[
"Harari",
"Daniel",
""
],
[
"Khan",
"Muhammad Haris",
""
]
] |
2403.02783 | Sebastien Verel | S\'ebastien Verel (LISIC), Sarah Thomson, Omar Rifki (LISIC) | Where the Really Hard Quadratic Assignment Problems Are: the QAP-SAT
instances | null | Evolutionary Computation in Combinatorial Optimization Conference
(evoCOP), Apr 2024, Aberystwyth, United Kingdom | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Quadratic Assignment Problem (QAP) is one of the major domains in the
field of evolutionary computation, and more widely in combinatorial
optimization. This paper studies the phase transition of the QAP, which can be
described as a dramatic change in the problem's computational complexity and
satisfiability, within a narrow range of the problem parameters. To approach
this phenomenon, we introduce a new QAP-SAT design of the initial problem based
on submodularity to capture its difficulty with new features. This
decomposition is studied experimentally using branch-and-bound and tabu search
solvers. A phase transition parameter is then proposed. The critical parameter
of phase transition satisfaction and that of the solving effort are shown to be
highly correlated for tabu search, thus allowing the prediction of difficult
instances.
| [
{
"created": "Tue, 5 Mar 2024 08:56:30 GMT",
"version": "v1"
}
] | 2024-03-06 | [
[
"Verel",
"Sébastien",
"",
"LISIC"
],
[
"Thomson",
"Sarah",
"",
"LISIC"
],
[
"Rifki",
"Omar",
"",
"LISIC"
]
] |
2403.02889 | Yakir Yehuda | Yakir Yehuda, Itzik Malkiel, Oren Barkan, Jonathan Weill, Royi Ronen
and Noam Koenigstein | InterrogateLLM: Zero-Resource Hallucination Detection in LLM-Generated
Answers | null | https://aclanthology.org/2024.acl-long.506/ | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the many advances of Large Language Models (LLMs) and their
unprecedented rapid evolution, their impact and integration into every facet of
our daily lives is limited due to various reasons. One critical factor
hindering their widespread adoption is the occurrence of hallucinations, where
LLMs invent answers that sound realistic, yet drift away from factual truth. In
this paper, we present a novel method for detecting hallucinations in large
language models, which tackles a critical issue in the adoption of these models
in various real-world scenarios. Through extensive evaluations across multiple
datasets and LLMs, including Llama-2, we study the hallucination levels of
various recent LLMs and demonstrate the effectiveness of our method to
automatically detect them. Notably, we observe up to 87% hallucinations for
Llama-2 in a specific experiment, where our method achieves a Balanced Accuracy
of 81%, all without relying on external knowledge.
| [
{
"created": "Tue, 5 Mar 2024 11:50:01 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Mar 2024 09:53:17 GMT",
"version": "v2"
},
{
"created": "Mon, 19 Aug 2024 07:53:17 GMT",
"version": "v3"
}
] | 2024-08-20 | [
[
"Yehuda",
"Yakir",
""
],
[
"Malkiel",
"Itzik",
""
],
[
"Barkan",
"Oren",
""
],
[
"Weill",
"Jonathan",
""
],
[
"Ronen",
"Royi",
""
],
[
"Koenigstein",
"Noam",
""
]
] |
2403.02892 | Byeongkeun Kang | Duy Tran Thanh and Yeejin Lee and Byeongkeun Kang | Enhancing Long-Term Person Re-Identification Using Global, Local Body
Part, and Head Streams | 16 pages | Neurocomputing, 2024 | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work addresses the task of long-term person re-identification.
Typically, person re-identification assumes that people do not change their
clothes, which limits its applications to short-term scenarios. To overcome
this limitation, we investigate long-term person re-identification, which
considers both clothes-changing and clothes-consistent scenarios. In this
paper, we propose a novel framework that effectively learns and utilizes both
global and local information. The proposed framework consists of three streams:
global, local body part, and head streams. The global and head streams encode
identity-relevant information from an entire image and a cropped image of the
head region, respectively. Both streams encode the most distinct, less
distinct, and average features using the combinations of adversarial erasing,
max pooling, and average pooling. The local body part stream extracts
identity-related information for each body part, allowing it to be compared
with the same body part from another image. Since body part annotations are not
available in re-identification datasets, pseudo-labels are generated using
clustering. These labels are then utilized to train a body part segmentation
head in the local body part stream. The proposed framework is trained by
backpropagating the weighted summation of the identity classification loss, the
pair-based loss, and the pseudo body part segmentation loss. To demonstrate the
effectiveness of the proposed method, we conducted experiments on three
publicly available datasets (Celeb-reID, PRCC, and VC-Clothes). The
experimental results demonstrate that the proposed method outperforms the
previous state-of-the-art method.
| [
{
"created": "Tue, 5 Mar 2024 11:57:10 GMT",
"version": "v1"
}
] | 2024-03-06 | [
[
"Thanh",
"Duy Tran",
""
],
[
"Lee",
"Yeejin",
""
],
[
"Kang",
"Byeongkeun",
""
]
] |
2403.02938 | Kazuki Kawamura | Kazuki Kawamura and Jun Rekimoto | AIx Speed: Playback Speed Optimization Using Listening Comprehension of
Speech Recognition Models | null | AHs '23: Proceedings of the Augmented Humans International
Conference 2023 | 10.1145/3582700.3582722 | null | cs.CL cs.HC cs.LG cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since humans can listen to audio and watch videos at faster speeds than
actually observed, we often listen to or watch these pieces of content at
higher playback speeds to increase the time efficiency of content
comprehension. To further utilize this capability, systems that automatically
adjust the playback speed according to the user's condition and the type of
content to assist in more efficient comprehension of time-series content have
been developed. However, there is still room for these systems to further
extend human speed-listening ability by generating speech with playback speed
optimized for even finer time units and providing it to humans. In this study,
we determine whether humans can hear the optimized speech and propose a system
that automatically adjusts playback speed at units as small as phonemes while
ensuring speech intelligibility. The system uses the speech recognizer score as
a proxy for how well a human can hear a certain unit of speech and maximizes
the speech playback speed to the extent that a human can hear. This method can
be used to produce fast but intelligible speech. In the evaluation experiment,
we compared the speech played back at a constant fast speed and the flexibly
speed-up speech generated by the proposed method in a blind test and confirmed
that the proposed method produced speech that was easier to listen to.
| [
{
"created": "Tue, 5 Mar 2024 13:08:52 GMT",
"version": "v1"
}
] | 2024-03-06 | [
[
"Kawamura",
"Kazuki",
""
],
[
"Rekimoto",
"Jun",
""
]
] |
2403.02955 | Raz Lapid | Ben Pinhasov, Raz Lapid, Rony Ohayon, Moshe Sipper and Yehudit
Aperstein | XAI-Based Detection of Adversarial Attacks on Deepfake Detectors | Accepted at TMLR 2024 | Transactions on Machine Learning Research, 2024 | null | null | cs.CR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel methodology for identifying adversarial attacks on
deepfake detectors using eXplainable Artificial Intelligence (XAI). In an era
characterized by digital advancement, deepfakes have emerged as a potent tool,
creating a demand for efficient detection systems. However, these systems are
frequently targeted by adversarial attacks that inhibit their performance. We
address this gap, developing a defensible deepfake detector by leveraging the
power of XAI. The proposed methodology uses XAI to generate interpretability
maps for a given method, providing explicit visualizations of decision-making
factors within the AI models. We subsequently employ a pretrained feature
extractor that processes both the input image and its corresponding XAI image.
The feature embeddings extracted from this process are then used for training a
simple yet effective classifier. Our approach contributes not only to the
detection of deepfakes but also enhances the understanding of possible
adversarial attacks, pinpointing potential vulnerabilities. Furthermore, this
approach does not change the performance of the deepfake detector. The paper
demonstrates promising results suggesting a potential pathway for future
deepfake detection mechanisms. We believe this study will serve as a valuable
contribution to the community, sparking much-needed discourse on safeguarding
deepfake detectors.
| [
{
"created": "Tue, 5 Mar 2024 13:25:30 GMT",
"version": "v1"
},
{
"created": "Sun, 18 Aug 2024 12:22:06 GMT",
"version": "v2"
}
] | 2024-08-20 | [
[
"Pinhasov",
"Ben",
""
],
[
"Lapid",
"Raz",
""
],
[
"Ohayon",
"Rony",
""
],
[
"Sipper",
"Moshe",
""
],
[
"Aperstein",
"Yehudit",
""
]
] |
2403.02991 | Jianjian Cao | Jianjian Cao and Peng Ye and Shengze Li and Chong Yu and Yansong Tang
and Jiwen Lu and Tao Chen | MADTP: Multimodal Alignment-Guided Dynamic Token Pruning for
Accelerating Vision-Language Transformer | 19 pages, 9 figures, Published in CVPR2024 | In Proc. IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), 2024 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Vision-Language Transformers (VLTs) have shown great success recently, but
are meanwhile accompanied by heavy computation costs, where a major reason can
be attributed to the large number of visual and language tokens. Existing token
pruning research for compressing VLTs mainly follows a single-modality-based
scheme yet ignores the critical role of aligning different modalities for
guiding the token pruning process, causing the important tokens for one
modality to be falsely pruned in another modality branch. Meanwhile, existing
VLT pruning works also lack the flexibility to dynamically compress each layer
based on different input samples. To this end, we propose a novel framework
named Multimodal Alignment-Guided Dynamic Token Pruning (MADTP) for
accelerating various VLTs. Specifically, we first introduce a well-designed
Multi-modality Alignment Guidance (MAG) module that can align features of the
same semantic concept from different modalities, to ensure the pruned tokens
are less important for all modalities. We further design a novel Dynamic Token
Pruning (DTP) module, which can adaptively adjust the token compression ratio
in each layer based on different input instances. Extensive experiments on
various benchmarks demonstrate that MADTP significantly reduces the
computational complexity of kinds of multimodal models while preserving
competitive performance. Notably, when applied to the BLIP model in the NLVR2
dataset, MADTP can reduce the GFLOPs by 80% with less than 4% performance
degradation.
| [
{
"created": "Tue, 5 Mar 2024 14:13:50 GMT",
"version": "v1"
}
] | 2024-03-06 | [
[
"Cao",
"Jianjian",
""
],
[
"Ye",
"Peng",
""
],
[
"Li",
"Shengze",
""
],
[
"Yu",
"Chong",
""
],
[
"Tang",
"Yansong",
""
],
[
"Lu",
"Jiwen",
""
],
[
"Chen",
"Tao",
""
]
] |
2403.03400 | Yong Li | Yong Li, Shiguang Shan | Contrastive Learning of Person-independent Representations for Facial
Action Unit Detection | null | Published in Transactions on Image Processing 2023 | 10.1109/TIP.2023.3279978 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Facial action unit (AU) detection, which aims to classify the AUs present in a
facial image, has long suffered from insufficient AU annotations. In this
paper, we aim to mitigate this data scarcity issue by learning AU
representations from a large number of unlabelled facial videos in a
contrastive learning paradigm. We formulate the self-supervised AU
representation learning signals in two-fold: (1) AU representation should be
frame-wisely discriminative within a short video clip; (2) Facial frames
sampled from different identities but show analogous facial AUs should have
consistent AU representations. As to achieve these goals, we propose to
contrastively learn the AU representation within a video clip and devise a
cross-identity reconstruction mechanism to learn the person-independent
representations. Specifically, we adopt a margin-based temporal contrastive
learning paradigm to perceive the temporal AU coherence and evolution
characteristics within a clip that consists of consecutive input facial frames.
Moreover, the cross-identity reconstruction mechanism facilitates pushing faces
that come from different identities but show analogous AUs close together in
the latent embedding space. Experimental results on three public AU datasets
demonstrate
that the learned AU representation is discriminative for AU detection. Our
method outperforms other contrastive learning methods and significantly closes
the performance gap between the self-supervised and supervised AU detection
approaches.
| [
{
"created": "Wed, 6 Mar 2024 01:49:28 GMT",
"version": "v1"
}
] | 2024-03-07 | [
[
"Li",
"Yong",
""
],
[
"Shan",
"Shiguang",
""
]
] |
2403.03409 | Biswadeep Chakraborty | Biswadeep Chakraborty, Beomseok Kang, Harshit Kumar and Saibal
Mukhopadhyay | Sparse Spiking Neural Network: Exploiting Heterogeneity in Timescales
for Pruning Recurrent SNN | Published as a conference paper at ICLR 2024 | ICLR 2024 | null | null | cs.NE cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recurrent Spiking Neural Networks (RSNNs) have emerged as a computationally
efficient and brain-inspired learning model. The design of sparse RSNNs with
fewer neurons and synapses helps reduce the computational complexity of RSNNs.
Traditionally, sparse SNNs are obtained by first training a dense and complex
SNN for a target task, and, then, pruning neurons with low activity
(activity-based pruning) while maintaining task performance. In contrast, this
paper presents a task-agnostic methodology for designing sparse RSNNs by
pruning a large randomly initialized model. We introduce a novel Lyapunov Noise
Pruning (LNP) algorithm that uses graph sparsification methods and utilizes
Lyapunov exponents to design a stable sparse RSNN from a randomly initialized
RSNN. We show that the LNP can leverage diversity in neuronal timescales to
design a sparse Heterogeneous RSNN (HRSNN). Further, we show that the same
sparse HRSNN model can be trained for different tasks, such as image
classification and temporal prediction. We experimentally show that, in spite
of being task-agnostic, LNP increases computational efficiency (fewer neurons
and synapses) and prediction performance of RSNNs compared to traditional
activity-based pruning of trained dense models.
| [
{
"created": "Wed, 6 Mar 2024 02:36:15 GMT",
"version": "v1"
}
] | 2024-03-07 | [
[
"Chakraborty",
"Biswadeep",
""
],
[
"Kang",
"Beomseok",
""
],
[
"Kumar",
"Harshit",
""
],
[
"Mukhopadhyay",
"Saibal",
""
]
] |
2403.03448 | Yu Guo | Rina Su, Yu Guo, Caiying Wu, Qiyu Jin, Tieyong Zeng | Kernel Correlation-Dissimilarity for Multiple Kernel k-Means Clustering | 36 pages. This paper was accepted by Pattern Recognition on January
31, 2024 | Pattern Recognition, 2024, 150:110307 | 10.1016/j.patcog.2024.110307 | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The main objective of the Multiple Kernel k-Means (MKKM) algorithm is to
extract non-linear information and achieve optimal clustering by optimizing
base kernel matrices. Current methods enhance information diversity and reduce
redundancy by exploiting interdependencies among multiple kernels based on
correlations or dissimilarities. Nevertheless, relying solely on a single
metric, such as correlation or dissimilarity, to define kernel relationships
introduces bias and incomplete characterization. Consequently, this limitation
hinders efficient information extraction, ultimately compromising clustering
performance. To tackle this challenge, we introduce a novel method that
systematically integrates both kernel correlation and dissimilarity. Our
approach comprehensively captures kernel relationships, facilitating more
efficient classification information extraction and improving clustering
performance. By emphasizing the coherence between kernel correlation and
dissimilarity, our method offers a more objective and transparent strategy for
extracting non-linear information and significantly improving clustering
precision, supported by theoretical rationale. We assess the performance of our
algorithm on 13 challenging benchmark datasets, demonstrating its superiority
over contemporary state-of-the-art MKKM techniques.
| [
{
"created": "Wed, 6 Mar 2024 04:24:43 GMT",
"version": "v1"
}
] | 2024-03-07 | [
[
"Su",
"Rina",
""
],
[
"Guo",
"Yu",
""
],
[
"Wu",
"Caiying",
""
],
[
"Jin",
"Qiyu",
""
],
[
"Zeng",
"Tieyong",
""
]
] |
2403.03456 | Bingxuan Zhang | Xiangquan Gui, Binxuan Zhang, Li Li, Yi Yang | DLP-GAN: learning to draw modern Chinese landscape photos with
generative adversarial network | Corrected typos | Neural Computing and Applications, 2023: 1-18 | 10.1007/s00521-023-09345-8 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chinese landscape painting has a unique and artistic style, and its drawing
technique is highly abstract in both the use of color and the realistic
representation of objects. Previous methods focus on transferring from modern
photos to ancient ink paintings. However, little attention has been paid to
translating landscape paintings into modern photos. To solve such problems, in
this paper, we (1) propose DLP-GAN (Draw Modern Chinese Landscape Photos with
Generative Adversarial Network), an unsupervised cross-domain image translation
framework with a novel asymmetric cycle mapping, and (2) introduce a generator
based on a dense-fusion module to match different translation directions.
Moreover, a dual-consistency loss is proposed to balance the realism and
abstraction of model painting. In this way, our model can draw landscape photos
and sketches in the modern sense. Finally, based on our collection of modern
landscape and sketch datasets, we compare the images generated by our model
with other benchmarks. Extensive experiments including user studies show that
our model outperforms state-of-the-art methods.
| [
{
"created": "Wed, 6 Mar 2024 04:46:03 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Mar 2024 05:49:05 GMT",
"version": "v2"
}
] | 2024-03-08 | [
[
"Gui",
"Xiangquan",
""
],
[
"Zhang",
"Binxuan",
""
],
[
"Li",
"Li",
""
],
[
"Yang",
"Yi",
""
]
] |
2403.03488 | Yu Guo | Yu Guo, Axel Davy, Gabriele Facciolo, Jean-Michel Morel, Qiyu Jin | Fast, nonlocal and neural: a lightweight high quality solution to image
denoising | 5 pages. This paper was accepted by IEEE Signal Processing Letters on
July 1, 2021 | IEEE Signal Processing Letters, 2021, 28:1515-1519 | 10.1109/LSP.2021.3099963 | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the widespread application of convolutional neural networks (CNNs), the
traditional model-based denoising algorithms are now outperformed. However,
CNNs face two problems. First, they are computationally demanding, which makes
their deployment especially difficult for mobile terminals. Second,
experimental evidence shows that CNNs often over-smooth regular textures
present in images, in contrast to traditional non-local models. In this letter,
we propose a solution to both issues by combining a nonlocal algorithm with a
lightweight residual CNN. This solution gives full latitude to the advantages
of both models. We apply this framework to two GPU implementations of classic
nonlocal algorithms (NLM and BM3D) and observe a substantial gain in both
cases, performing better than the state-of-the-art with low computational
requirements. Our solution is between 10 and 20 times faster than CNNs with
equivalent performance and attains higher PSNR. In addition, the final method
shows a notable gain on images containing complex textures such as those of the
MIT Moire dataset.
| [
{
"created": "Wed, 6 Mar 2024 06:12:56 GMT",
"version": "v1"
}
] | 2024-03-07 | [
[
"Guo",
"Yu",
""
],
[
"Davy",
"Axel",
""
],
[
"Facciolo",
"Gabriele",
""
],
[
"Morel",
"Jean-Michel",
""
],
[
"Jin",
"Qiyu",
""
]
] |
2403.03575 | Seamus Lankford | S\'eamus Lankford, Haithem Afli, \'Orla N\'i Loinsigh, Andy Way | gaHealth: An English-Irish Bilingual Corpus of Health Data | arXiv admin note: text overlap with arXiv:2403.02367 | In Proceedings of the Thirteenth Language Resources and Evaluation
Conference, pages 6753-6758, Marseille, France. European Language Resources
Association, 2022 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Machine Translation is a mature technology for many high-resource language
pairs. However, in the context of low-resource languages, there is a paucity of
parallel datasets available for developing translation models.
Furthermore, the development of datasets for low-resource languages often
focuses on simply creating the largest possible dataset for generic
translation. The benefits and development of smaller in-domain datasets can
easily be overlooked. To assess the merits of using in-domain data, a dataset
for the specific domain of health was developed for the low-resource English to
Irish language pair. Our study outlines the process used in developing the
corpus and empirically demonstrates the benefits of using an in-domain dataset
for the health domain. In the context of translating health-related data,
models developed using the gaHealth corpus demonstrated a maximum BLEU score
improvement of 22.2 points (40%) when compared with top performing models from
the LoResMT2021 Shared Task. Furthermore, we define linguistic guidelines for
developing gaHealth, the first bilingual corpus of health data for the Irish
language, which we hope will be of use to other creators of low-resource data
sets. gaHealth is now freely available online and is ready to be explored for
further research.
| [
{
"created": "Wed, 6 Mar 2024 09:36:36 GMT",
"version": "v1"
}
] | 2024-03-07 | [
[
"Lankford",
"Séamus",
""
],
[
"Afli",
"Haithem",
""
],
[
"Loinsigh",
"Órla Ní",
""
],
[
"Way",
"Andy",
""
]
] |
2403.03581 | Jos\'e Alberto Ben\'itez-Andrades Ph.D. | Sergio Rubio-Mart\'in, Mar\'ia Teresa Garc\'ia-Ord\'as, Mart\'in
Bay\'on-Guti\'errez, Natalia Prieto-Fern\'andez and Jos\'e Alberto
Ben\'itez-Andrades | Enhancing ASD detection accuracy: a combined approach of machine
learning and deep learning models with natural language processing | null | Health Inf Sci Syst 12, 20 (2024) | 10.1007/s13755-024-00281-y | null | cs.CL cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Purpose: Our study explored the use of artificial intelligence (AI) to
diagnose autism spectrum disorder (ASD). It focused on machine learning (ML)
and deep learning (DL) to detect ASD from text inputs on social media,
addressing challenges in traditional ASD diagnosis.
Methods: We used natural language processing (NLP), ML, and DL models
(including decision trees, XGB, KNN, RNN, LSTM, Bi-LSTM, BERT, and BERTweet) to
analyze 404,627 tweets, classifying them based on ASD or non-ASD authors. A
subset of 90,000 tweets was used for model training and testing.
Results: Our AI models showed high accuracy, with an 88% success rate in
identifying texts from individuals with ASD.
Conclusion: The study demonstrates AI's potential in improving ASD diagnosis,
especially in children, highlighting the importance of early detection.
| [
{
"created": "Wed, 6 Mar 2024 09:57:42 GMT",
"version": "v1"
}
] | 2024-03-07 | [
[
"Rubio-Martín",
"Sergio",
""
],
[
"García-Ordás",
"María Teresa",
""
],
[
"Bayón-Gutiérrez",
"Martín",
""
],
[
"Prieto-Fernández",
"Natalia",
""
],
[
"Benítez-Andrades",
"José Alberto",
""
]
] |
2403.03582 | Seamus Lankford | S\'eamus Lankford, Haithem Afli and Andy Way | Design of an Open-Source Architecture for Neural Machine Translation | arXiv admin note: substantial text overlap with arXiv:2403.02367 | In Proceedings of the 1st Workshop on Open Community-Driven
Machine Translation, pages 15-20, Tampere, Finland. European Association for
Machine Translation, 2023 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | adaptNMT is an open-source application that offers a streamlined approach to
the development and deployment of Recurrent Neural Networks and Transformer
models. This application is built upon the widely-adopted OpenNMT ecosystem,
and is particularly useful for new entrants to the field, as it simplifies the
setup of the development environment and creation of train, validation, and
test splits. The application offers a graphing feature that illustrates the
progress of model training, and employs SentencePiece for creating subword
segmentation models. Furthermore, the application provides an intuitive user
interface that facilitates hyperparameter customization. Notably, a
single-click model development approach has been implemented, and models
developed by adaptNMT can be evaluated using a range of metrics. To encourage
eco-friendly research, adaptNMT incorporates a green report that flags the
power consumption and kgCO${_2}$ emissions generated during model development.
The application is freely available.
| [
{
"created": "Wed, 6 Mar 2024 09:57:52 GMT",
"version": "v1"
}
] | 2024-03-07 | [
[
"Lankford",
"Séamus",
""
],
[
"Afli",
"Haithem",
""
],
[
"Way",
"Andy",
""
]
] |
2403.03781 | Seamus Lankford | S\'eamus Lankford and Diarmuid Grimes | Neural Architecture Search using Particle Swarm and Ant Colony
Optimization | null | Proceedings of The 28th Irish Conference on Artificial
Intelligence and Cognitive Science. 2771. CEUR-WS, 2020 | null | null | cs.NE cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Neural network models have a number of hyperparameters that must be chosen
along with their architecture. This can be a heavy burden for a novice user,
who must choose which architecture to use and what values to assign to its
parameters. In most
cases, default hyperparameters and architectures are used. Significant
improvements to model accuracy can be achieved through the evaluation of
multiple architectures. A process known as Neural Architecture Search (NAS) may
be applied to automatically evaluate a large number of such architectures. A
system integrating open source tools for Neural Architecture Search (OpenNAS),
in the classification of images, has been developed as part of this research.
OpenNAS takes any dataset of grayscale or RGB images and generates
Convolutional Neural Network (CNN) architectures based on a range of
metaheuristics using either an AutoKeras, a transfer learning or a Swarm
Intelligence (SI) approach. Particle Swarm Optimization (PSO) and Ant Colony
Optimization (ACO) are used as the SI algorithms. Furthermore, models developed
through such metaheuristics may be combined using stacking ensembles. In the
context of this paper, we focus on training and optimizing CNNs using the Swarm
Intelligence (SI) components of OpenNAS. Two major types of SI algorithms,
namely PSO and ACO, are compared to see which is more effective in generating
higher model accuracies. It is shown, with our experimental design, that the
PSO algorithm performs better than ACO. The performance improvement of PSO is
most notable with a more complex dataset. As a baseline, the performance of
fine-tuned pre-trained models is also evaluated.
| [
{
"created": "Wed, 6 Mar 2024 15:23:26 GMT",
"version": "v1"
}
] | 2024-03-07 | [
[
"Lankford",
"Séamus",
""
],
[
"Grimes",
"Diarmuid",
""
]
] |
2403.04121 | Subbarao Kambhampati | Subbarao Kambhampati | Can Large Language Models Reason and Plan? | arXiv admin note: text overlap with arXiv:2402.01817 (v2 add creative
commons attribution to Figure 2 graphic) | Annals of The New York Academy of Sciences; March 2024 | 10.1111/nyas.15125 | null | cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | While humans sometimes do show the capability of correcting their own
erroneous guesses with self-critiquing, there seems to be no basis for that
assumption in the case of LLMs.
| [
{
"created": "Thu, 7 Mar 2024 00:36:32 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Mar 2024 19:51:14 GMT",
"version": "v2"
}
] | 2024-03-12 | [
[
"Kambhampati",
"Subbarao",
""
]
] |
2403.04146 | Hong Lin | Hong Lin, Lidan Shou, Ke Chen, Gang Chen, Sai Wu | FL-GUARD: A Holistic Framework for Run-Time Detection and Recovery of
Negative Federated Learning | null | Data Science and Engineering (2024) | 10.1007/s41019-024-00243-0 | null | cs.LG cs.AI cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated learning (FL) is a promising approach for learning a model from
data distributed on massive clients without exposing data privacy. It works
effectively in the ideal federation where clients share homogeneous data
distribution and learning behavior. However, FL may fail to function
appropriately when the federation is not ideal, amid an unhealthy state called
Negative Federated Learning (NFL), in which most clients gain no benefit from
participating in FL. Many studies have tried to address NFL. However, their
solutions either (1) preemptively prevent NFL throughout the entire learning
life-cycle or (2) tackle NFL in the aftermath of numerous learning rounds.
Thus, they either (1) indiscriminately incur extra costs even if FL can perform
well without such costs or (2) waste numerous learning rounds. Additionally,
none of the previous work takes into account the clients who may be
unwilling/unable to follow the proposed NFL solutions when using those
solutions to upgrade an FL system in use. This paper introduces FL-GUARD, a
holistic framework that can be employed on any FL system for tackling NFL in a
run-time paradigm. That is, to dynamically detect NFL at the early stage (tens
of rounds) of learning and then to activate recovery measures when necessary.
Specifically, we devise a cost-effective NFL detection mechanism, which relies
on an estimation of performance gain on clients. Only when NFL is detected, we
activate the NFL recovery process, in which each client learns in parallel an
adapted model when training the global model. Extensive experiment results
confirm the effectiveness of FL-GUARD in detecting NFL and recovering from NFL
to a healthy learning state. We also show that FL-GUARD is compatible with
previous NFL solutions and robust against clients unwilling/unable to take any
recovery measures.
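For illustration, here is a minimal sketch (not the authors' code; hypothetical helper names) of the run-time detection idea described above: estimate each client's performance gain from the global model, and declare NFL when most clients gain nothing.

```python
# Illustrative sketch of run-time NFL detection based on estimated
# per-client performance gain (hypothetical thresholds, not FL-GUARD's).
import numpy as np

def nfl_detected(gains, threshold=0.0, quorum=0.5):
    """Flag Negative Federated Learning when most clients gain no benefit.

    gains: estimated accuracy gains of the global model over each client's
    local baseline; NFL is declared when the fraction of clients with
    gain <= threshold exceeds the quorum.
    """
    gains = np.asarray(gains)
    return np.mean(gains <= threshold) > quorum

# Example: 8 of 10 clients see no benefit in the early rounds -> recover.
round_gains = [0.02, -0.01, 0.0, -0.03, 0.01, -0.02, 0.0, -0.01, 0.0, -0.04]
if nfl_detected(round_gains):
    print("NFL detected: activate per-client model adaptation")
```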
| [
{
"created": "Thu, 7 Mar 2024 01:52:05 GMT",
"version": "v1"
}
] | 2024-03-08 | [
[
"Lin",
"Hong",
""
],
[
"Shou",
"Lidan",
""
],
[
"Chen",
"Ke",
""
],
[
"Chen",
"Gang",
""
],
[
"Wu",
"Sai",
""
]
] |
2403.04261 | Hui Zong | Hui Zong, Rongrong Wu, Jiaxue Cha, Weizhe Feng, Erman Wu, Jiakun Li,
Aibin Shao, Liang Tao, Zuofeng Li, Buzhou Tang, Bairong Shen | Advancing Chinese biomedical text mining with community challenges | null | Journal of Biomedical Informatics. 2024;157:104716. | 10.1016/j.jbi.2024.104716 | null | cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Objective: This study aims to review the recent advances in community
challenges for biomedical text mining in China. Methods: We collected
information of evaluation tasks released in community challenges of biomedical
text mining, including task description, dataset description, data source, task
type and related links. A systematic summary and comparative analysis were
conducted on various biomedical natural language processing tasks, such as
named entity recognition, entity normalization, attribute extraction, relation
extraction, event extraction, text classification, text similarity, knowledge
graph construction, question answering, text generation, and large language
model evaluation. Results: We identified 39 evaluation tasks from 6 community
challenges that spanned from 2017 to 2023. Our analysis revealed the diverse
range of evaluation task types and data sources in biomedical text mining. We
explored the potential clinical applications of these community challenge tasks
from a translational biomedical informatics perspective. We compared them with their
English counterparts, and discussed the contributions, limitations, lessons and
guidelines of these community challenges, while highlighting future directions
in the era of large language models. Conclusion: Community challenge evaluation
competitions have played a crucial role in promoting technology innovation and
fostering interdisciplinary collaboration in the field of biomedical text
mining. These challenges provide valuable platforms for researchers to develop
state-of-the-art solutions.
| [
{
"created": "Thu, 7 Mar 2024 06:52:51 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Aug 2024 02:47:43 GMT",
"version": "v2"
}
] | 2024-09-02 | [
[
"Zong",
"Hui",
""
],
[
"Wu",
"Rongrong",
""
],
[
"Cha",
"Jiaxue",
""
],
[
"Feng",
"Weizhe",
""
],
[
"Wu",
"Erman",
""
],
[
"Li",
"Jiakun",
""
],
[
"Shao",
"Aibin",
""
],
[
"Tao",
"Liang",
""
],
[
"Li",
"Zuofeng",
""
],
[
"Tang",
"Buzhou",
""
],
[
"Shen",
"Bairong",
""
]
] |
2403.04292 | Knud Thomsen | Knud Thomsen | A challenge in A(G)I, cybernetics revived in the Ouroboros Model as one
algorithm for all thinking | 26 pages, 11 figures | Artificial Intelligence and Autonomous Systems Volume 1 Issue 1,
2024 | 10.55092/aias20240001 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | A topical challenge for algorithms in general and for automatic image
categorization and generation in particular is presented in the form of a
drawing for AI to understand. In a second vein, AI is challenged to produce
something similar from a verbal description. The aim of the paper is to highlight
strengths and deficiencies of current Artificial Intelligence approaches while
coarsely sketching a way forward. A general lack of encompassing
symbol-embedding and (not only) -grounding in some bodily basis is made
responsible for current deficiencies. A concomitant dearth of hierarchical
organization of concepts follows suite. As a remedy for these shortcomings, it
is proposed to take a wide step back and to newly incorporate aspects of
cybernetics and analog control processes. It is claimed that a promising
overarching perspective is provided by the Ouroboros Model with a valid and
versatile algorithmic backbone for general cognition at all accessible levels
of abstraction and capabilities. Reality, rules, truth, and Free Will are all
useful abstractions according to the Ouroboros Model. Logic deduction as well
as intuitive guesses are claimed to be produced on the basis of one
compartmentalized memory for schemata and a pattern-matching, i.e., monitoring
process termed consumption analysis. The latter directs attention on short
(attention proper) and also on long time scales (emotional biases). In this
cybernetic approach, discrepancies between expectations and actual activations
(e.g., sensory percepts) drive the general process of cognition and at the same
time steer the storage of new and adapted memory entries. Dedicated structures
in the human brain work in concert according to this scheme.
| [
{
"created": "Thu, 7 Mar 2024 07:39:54 GMT",
"version": "v1"
}
] | 2024-03-08 | [
[
"Thomsen",
"Knud",
""
]
] |
2403.04380 | Peter Eisert | Wolfgang Paier and Paul Hinzer and Anna Hilsmann and Peter Eisert | Video-Driven Animation of Neural Head Avatars | null | Proc. International Workshop on Vision, Modeling, and
Visualization (VMV), Braunschweig, Germany, Sep. 2023 | 10.2312/vmv.20231237 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We present a new approach for video-driven animation of high-quality neural
3D head models, addressing the challenge of person-independent animation from
video input. Typically, high-quality generative models are learned for specific
individuals from multi-view video footage, resulting in person-specific latent
representations that drive the generation process. In order to achieve
person-independent animation from video input, we introduce an LSTM-based
animation network capable of translating person-independent expression features
into personalized animation parameters of person-specific 3D head models. Our
approach combines the advantages of personalized head models (high quality and
realism) with the convenience of video-driven animation employing multi-person
facial performance capture. We demonstrate the effectiveness of our approach on
synthesized animations with high quality based on different source videos as
well as an ablation study.
| [
{
"created": "Thu, 7 Mar 2024 10:13:48 GMT",
"version": "v1"
}
] | 2024-03-08 | [
[
"Paier",
"Wolfgang",
""
],
[
"Hinzer",
"Paul",
""
],
[
"Hilsmann",
"Anna",
""
],
[
"Eisert",
"Peter",
""
]
] |
2403.04442 | Pierre-Alexandre Murena | Ali Khoshvishkaie, Petrus Mikkola, Pierre-Alexandre Murena, Samuel
Kaski | Cooperative Bayesian Optimization for Imperfect Agents | null | Joint European Conference on Machine Learning and Knowledge
Discovery in Databases, ECML PKDD 2023 | 10.1007/978-3-031-43412-9_28 | null | cs.LG cs.AI cs.MA | http://creativecommons.org/licenses/by/4.0/ | We introduce a cooperative Bayesian optimization problem for optimizing
black-box functions of two variables where two agents choose together at which
points to query the function but have only control over one variable each. This
setting is inspired by human-AI teamwork, where an AI-assistant helps its human
user solve a problem, in this simplest case, collaborative optimization. We
formulate the solution as sequential decision-making, where the agent we
control models the user as a computationally rational agent with prior
knowledge about the function. We show that strategic planning of the queries
enables better identification of the global maximum of the function as long as
the user avoids excessive exploration. This planning is made possible by using
Bayes Adaptive Monte Carlo planning and by endowing the agent with a user model
that accounts for conservative belief updates and exploratory sampling of the
points to query.
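A toy sketch of this cooperative query structure (hypothetical objective; for brevity the agent plans on the true function, whereas the paper plans on a learned surrogate with Bayes Adaptive Monte Carlo planning and a conservative user model):

```python
# Agent controls x, a modeled user controls y; the agent picks x by
# simulating the user's greedy, noisy-belief response.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x, y: -(x - 0.3) ** 2 - (y - 0.7) ** 2   # black-box to maximize
xs = ys = np.linspace(0.0, 1.0, 21)

def user_pick_y(x):
    # Computationally rational user: greedy w.r.t. noisy beliefs about f.
    beliefs = [f(x, y) + 0.05 * rng.standard_normal() for y in ys]
    return ys[int(np.argmax(beliefs))]

best = -np.inf
for _ in range(10):
    x = max(xs, key=lambda xc: f(xc, user_pick_y(xc)))  # agent's planning step
    best = max(best, f(x, user_pick_y(x)))              # joint query of (x, y)
print(f"best value found: {best:.4f}")
```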
| [
{
"created": "Thu, 7 Mar 2024 12:16:51 GMT",
"version": "v1"
}
] | 2024-03-08 | [
[
"Khoshvishkaie",
"Ali",
""
],
[
"Mikkola",
"Petrus",
""
],
[
"Murena",
"Pierre-Alexandre",
""
],
[
"Kaski",
"Samuel",
""
]
] |
2403.04451 | Nico Manzonelli | Nico Manzonelli, Wanrong Zhang, Salil Vadhan | Membership Inference Attacks and Privacy in Topic Modeling | 13 pages + appendices and references. 9 figures | Transactions on Machine Learning Research (2024) | null | null | cs.CR cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Recent research shows that large language models are susceptible to privacy
attacks that infer aspects of the training data. However, it is unclear if
simpler generative models, like topic models, share similar vulnerabilities. In
this work, we propose an attack against topic models that can confidently
identify members of the training data in Latent Dirichlet Allocation. Our
results suggest that the privacy risks associated with generative modeling are
not restricted to large neural models. Additionally, to mitigate these
vulnerabilities, we explore differentially private (DP) topic modeling. We
propose a framework for private topic modeling that incorporates DP vocabulary
selection as a pre-processing step, and show that it improves privacy while
having limited effects on practical utility.
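As a hedged illustration of the general shape such attacks take (hypothetical scores, not the paper's attack against LDA), a score-threshold membership test compares a document's model score against a calibration threshold:

```python
# Generic score-threshold membership inference sketch: higher model score
# on a document suggests it was seen during training.
import numpy as np

def is_member(doc_score, threshold):
    return doc_score > threshold

member_scores = np.array([-40.1, -38.7, -41.5])     # hypothetical LDA scores
nonmember_scores = np.array([-55.2, -60.3, -52.8])
threshold = (member_scores.mean() + nonmember_scores.mean()) / 2
print([is_member(s, threshold) for s in member_scores])   # [True, True, True]
```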
| [
{
"created": "Thu, 7 Mar 2024 12:43:42 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Sep 2024 13:57:56 GMT",
"version": "v2"
}
] | 2024-09-24 | [
[
"Manzonelli",
"Nico",
""
],
[
"Zhang",
"Wanrong",
""
],
[
"Vadhan",
"Salil",
""
]
] |
2403.04547 | Ibrahim Alabdulmohsin | Ibrahim Alabdulmohsin, Xiao Wang, Andreas Steiner, Priya Goyal,
Alexander D'Amour, Xiaohua Zhai | CLIP the Bias: How Useful is Balancing Data in Multimodal Learning? | 32 pages, 20 figures, 7 tables | ICLR 2024 | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the effectiveness of data-balancing for mitigating biases in
contrastive language-image pretraining (CLIP), identifying areas of strength
and limitation. First, we reaffirm prior conclusions that CLIP models can
inadvertently absorb societal stereotypes. To counter this, we present a novel
algorithm, called Multi-Modal Moment Matching (M4), designed to reduce both
representation and association biases (i.e. in first- and second-order
statistics) in multimodal data. We use M4 to conduct an in-depth analysis
taking into account various factors, such as the model, representation, and
data size. Our study also explores the dynamic nature of how CLIP learns and
unlearns biases. In particular, we find that fine-tuning is effective in
countering representation biases, though its impact diminishes for association
biases. Also, data balancing has a mixed impact on quality: it tends to improve
classification but can hurt retrieval. Interestingly, data and architectural
improvements seem to mitigate the negative impact of data balancing on
performance; e.g. applying M4 to SigLIP-B/16 with data quality filters improves
COCO image-to-text retrieval @5 from 86% (without data balancing) to 87% and
ImageNet 0-shot classification from 77% to 77.5%! Finally, we conclude with
recommendations for improving the efficacy of data balancing in multimodal
systems.
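To make the first-order (representation) side of such balancing concrete, here is a minimal sketch, assuming a single discrete attribute per example (a simplification of what a multimodal moment-matching step could do, not the M4 algorithm itself):

```python
# Per-example reweighting so every group contributes equal total mass,
# i.e., first-order statistics are balanced.
import numpy as np

def balance_weights(group_ids):
    groups, counts = np.unique(group_ids, return_counts=True)
    target = 1.0 / len(groups)
    w = {g: target / (c / len(group_ids)) for g, c in zip(groups, counts)}
    return np.array([w[g] for g in group_ids])

groups = np.array(["a", "a", "a", "b"])      # 75% / 25% representation
print(balance_weights(groups))               # "b" examples upweighted 3x
```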
| [
{
"created": "Thu, 7 Mar 2024 14:43:17 GMT",
"version": "v1"
}
] | 2024-03-08 | [
[
"Alabdulmohsin",
"Ibrahim",
""
],
[
"Wang",
"Xiao",
""
],
[
"Steiner",
"Andreas",
""
],
[
"Goyal",
"Priya",
""
],
[
"D'Amour",
"Alexander",
""
],
[
"Zhai",
"Xiaohua",
""
]
] |
2403.04650 | Bilal Faye | Bilal Faye, Hanane Azzag, Mustapha Lebbah, Djamel Bouchaffra | Lightweight Cross-Modal Representation Learning | null | ESANN 2024 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Low-cost cross-modal representation learning is crucial for deriving semantic
representations across diverse modalities such as text, audio, images, and
video. Traditional approaches typically depend on large specialized models
trained from scratch, requiring extensive datasets and resulting in high
resource and time costs. To overcome these challenges, we introduce a novel
approach named Lightweight Cross-Modal Representation Learning (LightCRL). This
method uses a single neural network titled Deep Fusion Encoder (DFE), which
projects data from multiple modalities into a shared latent representation
space. This reduces the overall parameter count while still delivering robust
performance comparable to more complex systems.
| [
{
"created": "Thu, 7 Mar 2024 16:50:25 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Mar 2024 14:29:41 GMT",
"version": "v2"
},
{
"created": "Sat, 7 Sep 2024 07:24:36 GMT",
"version": "v3"
}
] | 2024-09-10 | [
[
"Faye",
"Bilal",
""
],
[
"Azzag",
"Hanane",
""
],
[
"Lebbah",
"Mustapha",
""
],
[
"Bouchaffra",
"Djamel",
""
]
] |
2403.04667 | Berenice Fernandez Nieto Miss | Maria T. Baldassarre, Danilo Caivano, Berenice Fernandez Nieto,
Domenico Gigante, and Azzurra Ragone | The Social Impact of Generative AI: An Analysis on ChatGPT | Presented at GoodIT2023 - ACM Conference on Information Technology
for Social Good | Proceedings of the 2023 ACM Conference on Information Technology
for Social Good (GoodIT '23) | 10.1145/3582515.3609555 | null | cs.AI cs.CY cs.ET | http://creativecommons.org/licenses/by/4.0/ | In recent months, the social impact of Artificial Intelligence (AI) has
gained considerable public interest, driven by the emergence of Generative AI
models, ChatGPT in particular. The rapid development of these models has
sparked heated discussions regarding their benefits, limitations, and
associated risks. Generative models hold immense promise across multiple
domains, such as healthcare, finance, and education, to name a few, presenting
diverse practical applications. Nevertheless, concerns about potential adverse
effects have elicited divergent perspectives, ranging from privacy risks to
escalating social inequality. This paper adopts a methodology to delve into the
societal implications of Generative AI tools, focusing primarily on the case of
ChatGPT. It evaluates the potential impact on several social sectors and
illustrates the findings of a comprehensive literature review of both positive
and negative effects, emerging trends, and areas of opportunity of Generative
AI models. This analysis aims to facilitate an in-depth discussion by providing
insights that can inspire policy, regulation, and responsible development
practices to foster a human-centered AI.
| [
{
"created": "Thu, 7 Mar 2024 17:14:22 GMT",
"version": "v1"
}
] | 2024-05-10 | [
[
"Baldassarre",
"Maria T.",
""
],
[
"Caivano",
"Danilo",
""
],
[
"Nieto",
"Berenice Fernandez",
""
],
[
"Gigante",
"Domenico",
""
],
[
"Ragone",
"Azzurra",
""
]
] |
2403.04701 | Muhammad Huzaifa | Hashmat Shadab Malik, Muhammad Huzaifa, Muzammal Naseer, Salman Khan,
Fahad Shahbaz Khan | ObjectCompose: Evaluating Resilience of Vision-Based Models on
Object-to-Background Compositional Changes | null | Asian Conference on Computer Vision - 2024 | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Given the large-scale multi-modal training of recent vision-based models and
their generalization capabilities, understanding the extent of their robustness
is critical for their real-world deployment. In this work, we evaluate the
resilience of current vision-based models against diverse object-to-background
context variations. The majority of robustness evaluation methods have
introduced synthetic datasets to induce changes to object characteristics
(viewpoints, scale, color) or utilized image transformation techniques
(adversarial changes, common corruptions) on real images to simulate shifts in
distributions. Recent works have explored leveraging large language models and
diffusion models to generate changes in the background. However, these methods
either lack in offering control over the changes to be made or distort the
object semantics, making them unsuitable for the task. Our method, on the other
hand, can induce diverse object-to-background changes while preserving the
original semantics and appearance of the object. To achieve this goal, we
harness the generative capabilities of text-to-image, image-to-text, and
image-to-segment models to automatically generate a broad spectrum of
object-to-background changes. We induce both natural and adversarial background
changes by either modifying the textual prompts or optimizing the latents and
textual embedding of text-to-image models. We produce various versions of
standard vision datasets (ImageNet, COCO), incorporating either diverse and
realistic backgrounds into the images or introducing color, texture, and
adversarial changes in the background. We conduct extensive experiments to
analyze the robustness of vision-based models against object-to-background
context variations across diverse tasks. Code
https://github.com/Muhammad-Huzaifaa/ObjectCompose.
| [
{
"created": "Thu, 7 Mar 2024 17:48:48 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Mar 2024 11:43:21 GMT",
"version": "v2"
},
{
"created": "Tue, 26 Mar 2024 11:26:17 GMT",
"version": "v3"
},
{
"created": "Tue, 8 Oct 2024 20:10:02 GMT",
"version": "v4"
}
] | 2024-10-10 | [
[
"Malik",
"Hashmat Shadab",
""
],
[
"Huzaifa",
"Muhammad",
""
],
[
"Naseer",
"Muzammal",
""
],
[
"Khan",
"Salman",
""
],
[
"Khan",
"Fahad Shahbaz",
""
]
] |
2403.04775 | Michael Rawson | Ahmed Bhayat, Johannes Schoisswohl, Michael Rawson | Superposition with Delayed Unification | 16 pages, 0 figures, 1 table | International Conference on Automated Deduction (CADE) 2023. LNAI
volume 14132, 2023, pp. 23-40 | 10.1007/978-3-031-38499-8_2 | null | cs.LO cs.AI | http://creativecommons.org/licenses/by/4.0/ | Classically, in saturation-based proof systems, unification has been
considered atomic. However, it is also possible to move unification to the
calculus level, turning the steps of the unification algorithm into inferences.
For calculi that rely on unification procedures returning large or even
infinite sets of unifiers, integrating unification into the calculus is an
attractive method of dovetailing unification and inference. This applies, for
example, to AC-superposition and higher-order superposition. We show that
first-order superposition remains complete when moving unification rules to the
calculus level. We discuss some of the benefits this has even for standard
first-order superposition and provide an experimental evaluation.
| [
{
"created": "Thu, 29 Feb 2024 11:35:49 GMT",
"version": "v1"
}
] | 2024-03-11 | [
[
"Bhayat",
"Ahmed",
""
],
[
"Schoisswohl",
"Johannes",
""
],
[
"Rawson",
"Michael",
""
]
] |
2403.04793 | Zhipeng Ma | Zhipeng Ma, Marco Kemmerling, Daniel Buschmann, Chrismarie Enslin,
Daniel L\"utticke, Robert H. Schmitt | A Data-Driven Two-Phase Multi-Split Causal Ensemble Model for Time
Series | null | Symmetry 2023, 15(5), 982 | 10.3390/sym15050982 | null | cs.LG cs.AI stat.ME | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Causal inference is a fundamental research topic for discovering the
cause-effect relationships in many disciplines. However, not all algorithms are
equally well-suited for a given dataset. For instance, some approaches may only
be able to identify linear relationships, while others are applicable for
non-linearities. Algorithms further vary in their sensitivity to noise and
their ability to infer causal information from coupled vs. non-coupled time
series. Therefore, different algorithms often generate different causal
relationships for the same input. To achieve a more robust causal inference
result, this publication proposes a novel data-driven two-phase multi-split
causal ensemble model to combine the strengths of different causality base
algorithms. In comparison to existing approaches, the proposed ensemble method
reduces the influence of noise through a data partitioning scheme in the first
phase. To achieve this, the data are initially divided into several partitions
and the base algorithms are applied to each partition. Subsequently, Gaussian
mixture models are used to identify the causal relationships derived from the
different partitions that are likely to be valid. In the second phase, the
identified relationships from each base algorithm are then merged based on
three combination rules. The proposed ensemble approach is evaluated using
multiple metrics, among them a newly developed evaluation index for causal
ensemble approaches. We perform experiments using three synthetic datasets with
different volumes and complexity, which are specifically designed to test
causality detection methods under different circumstances while knowing the
ground truth causal relationships. In these experiments, our causality ensemble
outperforms each of its base algorithms. In practical applications, the use of
the proposed method could hence lead to more robust and reliable causality
results.
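As a hedged sketch of the second-phase merge (the paper uses Gaussian mixture filtering and three combination rules; this shows only a simple vote rule with hypothetical link sets):

```python
# Combine causal links proposed by several base algorithms by voting.
from collections import Counter

def merge_links(links_per_algorithm, min_votes=2):
    votes = Counter(link for links in links_per_algorithm
                    for link in set(links))
    return {link for link, v in votes.items() if v >= min_votes}

algo_a = [("x", "y"), ("y", "z")]
algo_b = [("x", "y")]
algo_c = [("x", "y"), ("z", "x")]
print(merge_links([algo_a, algo_b, algo_c]))   # {('x', 'y')}
```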
| [
{
"created": "Mon, 4 Mar 2024 14:20:41 GMT",
"version": "v1"
}
] | 2024-03-11 | [
[
"Ma",
"Zhipeng",
""
],
[
"Kemmerling",
"Marco",
""
],
[
"Buschmann",
"Daniel",
""
],
[
"Enslin",
"Chrismarie",
""
],
[
"Lütticke",
"Daniel",
""
],
[
"Schmitt",
"Robert H.",
""
]
] |
2403.04965 | Lezhong Wang | Lezhong Wang, Jeppe Revall Frisvad, Mark Bo Jensen, Siavash Arjomand
Bigdeli | StereoDiffusion: Training-Free Stereo Image Generation Using Latent
Diffusion Models | Updated to CVPR 2024 GCV accepted version | Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition, 2024, pp. 7416-7425 | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The demand for stereo images increases as manufacturers launch more XR
devices. To meet this demand, we introduce StereoDiffusion, a method that,
unlike traditional inpainting pipelines, is training-free, remarkably
straightforward to use, and seamlessly integrates into the original Stable
Diffusion model. Our method modifies the latent variable to provide an
end-to-end, lightweight capability for fast generation of stereo image pairs,
without the need for fine-tuning model weights or any post-processing of
images. Using the original input to generate a left image and estimate a
disparity map for it, we generate the latent vector for the right image through
Stereo Pixel Shift operations, complemented by Symmetric Pixel Shift Masking
Denoise and Self-Attention Layers Modification methods to align the right-side
image with the left-side image. Moreover, our proposed method maintains a high
standard of image quality throughout the stereo generation process, achieving
state-of-the-art scores in various quantitative evaluations.
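An illustrative pixel-level version of the stereo shift (hypothetical, not the authors' code; in the paper the shift operates on diffusion latents rather than pixels):

```python
# Move each pixel of the left view horizontally by its disparity to form
# a right-view estimate; unfilled positions remain as holes for masking.
import numpy as np

def stereo_pixel_shift(left, disparity):
    h, w = left.shape[:2]
    right = np.zeros_like(left)
    cols = np.arange(w)
    for y in range(h):
        tgt = cols - disparity[y].astype(int)   # shift left by disparity
        valid = (tgt >= 0) & (tgt < w)
        right[y, tgt[valid]] = left[y, cols[valid]]
    return right

left = np.random.rand(4, 8, 3)
disp = np.full((4, 8), 2.0)
print(stereo_pixel_shift(left, disp).shape)     # (4, 8, 3)
```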
| [
{
"created": "Fri, 8 Mar 2024 00:30:25 GMT",
"version": "v1"
},
{
"created": "Sun, 2 Jun 2024 14:31:09 GMT",
"version": "v2"
}
] | 2024-07-08 | [
[
"Wang",
"Lezhong",
""
],
[
"Frisvad",
"Jeppe Revall",
""
],
[
"Jensen",
"Mark Bo",
""
],
[
"Bigdeli",
"Siavash Arjomand",
""
]
] |
2403.05112 | Tanvi Verma | Tanvi Verma, Linh Le Dinh, Nicholas Tan, Xinxing Xu, Chingyu Cheng,
Yong Liu | RLPeri: Accelerating Visual Perimetry Test with Reinforcement Learning
and Convolutional Feature Extraction | Published at AAAI-24 | The 38th Annual AAAI Conference on Artificial Intelligence, 2024 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Visual perimetry is an important eye examination that helps detect vision
problems caused by ocular or neurological conditions. During the test, a
patient's gaze is fixed at a specific location while light stimuli of varying
intensities are presented in central and peripheral vision. Based on the
patient's responses to the stimuli, the visual field mapping and sensitivity
are determined. However, maintaining high levels of concentration throughout
the test can be challenging for patients, leading to increased examination
times and decreased accuracy.
In this work, we present RLPeri, a reinforcement learning-based approach to
optimize visual perimetry testing. By determining the optimal sequence of
locations and initial stimulus values, we aim to reduce the examination time
without compromising accuracy. Additionally, we incorporate reward shaping
techniques to further improve the testing performance. To monitor the patient's
responses over time during testing, we represent the test's state as a pair of
3D matrices. We apply two different convolutional kernels to extract spatial
features across locations as well as features across different stimulus values
for each location. Through experiments, we demonstrate that our approach
results in a 10-20% reduction in examination time while maintaining the
accuracy as compared to state-of-the-art methods. With the presented approach,
we aim to make visual perimetry testing more efficient and patient-friendly,
while still providing accurate results.
| [
{
"created": "Fri, 8 Mar 2024 07:19:43 GMT",
"version": "v1"
}
] | 2024-03-11 | [
[
"Verma",
"Tanvi",
""
],
[
"Dinh",
"Linh Le",
""
],
[
"Tan",
"Nicholas",
""
],
[
"Xu",
"Xinxing",
""
],
[
"Cheng",
"Chingyu",
""
],
[
"Liu",
"Yong",
""
]
] |
2403.05547 | Julius Sch\"oning | Julius Sch\"oning and Tim Wawer and Kai-Michael Griese | AI for non-programmers: Applied AI in the lectures for students without
programming skills | 10 pages, 6 figures, Translated from the German of "KI f\"ur
Nicht-Programmierer*innen: Angewandte KI im H\"orsaal f\"ur Studierende ohne
Programmierkenntnisse", available at
https://nbn-resolving.org/urn:nbn:de:bsz:959-opus-52866 | Voneinander Lehren lernen (5) (2024) | null | null | cs.CY cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Applications such as ChatGPT and WOMBO Dream make it easy to inspire students
without programming knowledge to use artificial intelligence (AI). Therefore,
given the increasing importance of AI in all disciplines, innovative strategies
are needed to teach AI to students without programming knowledge so that AI
can be integrated into their study modules as a future skill. This work
presents a didactic planning script for applied AI. The didactic planning
script is based on the AI application pipeline and links AI concepts with
study-relevant topics. These linkages open up a new solution space and promote
students' interest in and understanding of the potentials and risks of AI. An
example lecture series for master students in energy management shows how AI
can be seamlessly integrated into discipline-specific lectures. To this end,
the planning script for applied AI is adapted to fit the study programs' topic.
This specific teaching scenario enables students to solve a discipline-specific
task step by step using the AI application pipeline. Thus, the application of
the didactic planning script for applied AI shows the practical implementation
of the theoretical concepts of AI. In addition, a checklist is presented that
can be used to assess whether AI can be used in the discipline-specific
lecture. AI as a future skill must be learned by students based on use cases
that are relevant to the course of studies. For this reason, AI education
should fit seamlessly into various curricula, even if the students do not have
a programming background due to their field of study.
| [
{
"created": "Tue, 6 Feb 2024 17:26:24 GMT",
"version": "v1"
}
] | 2024-03-12 | [
[
"Schöning",
"Julius",
""
],
[
"Wawer",
"Tim",
""
],
[
"Griese",
"Kai-Michael",
""
]
] |
2403.05550 | Rosana Montes | Rosana Montes, Ana M. Sanchez, Pedro Villar, Francisco Herrera | Teranga Go!: Carpooling Collaborative Consumption Community with
multi-criteria hesitant fuzzy linguistic term set opinions to build
confidence and trust | project at https://github.com/rosanamontes/teranga.go. arXiv admin
note: substantial text overlap with arXiv:2402.01775 | Applied Soft Computing 67, 2018, Pages 941-952 | 10.1016/j.asoc.2017.05.039 | null | cs.CY cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Classic Delphi and Fuzzy Delphi methods are used to test content validity of
a data collection tools such as questionnaires. Fuzzy Delphi takes the opinion
issued by judges from a linguistic perspective, reducing ambiguity in opinions
by using fuzzy numbers. We propose an extension named 2-Tuple Fuzzy Linguistic
Delphi method to deal with scenarios in which judges show different expertise
degrees by using fuzzy multigranular semantics of the linguistic terms and to
obtain intermediate and final results expressed by 2-tuple linguistic values.
The key idea of our proposal is to validate the full questionnaire by means of
the evaluation of its parts, defining the validity of each item as a Decision
Making problem. Taking the opinion of experts, we measure the degree of
consensus, the degree of consistency, and the linguistic score of each item, in
order to detect those items that affect, positively or negatively, the quality
of the instrument. Considering the real need to evaluate a b-learning
educational experience with a consensual questionnaire, we present a Decision
Making model for questionnaire validation that solves it. Additionally, we
contribute to this consensus-reaching problem by developing an online tool
under the GPL v3 license. The software visualizes the collective valuations for
each iteration and helps determine which parts of the questionnaire should
be modified to reach a consensual solution.
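For intuition, a minimal consensus measure per questionnaire item (a hypothetical simplification of the 2-tuple linguistic computations in the paper):

```python
# Consensus as 1 minus the normalized mean pairwise distance of ratings.
import numpy as np

def consensus_degree(ratings, scale_max=4):
    r = np.asarray(ratings, dtype=float)
    diffs = np.abs(r[:, None] - r[None, :]) / scale_max
    n = len(r)
    return 1.0 - diffs.sum() / (n * (n - 1))

print(consensus_degree([3, 3, 4, 3]))   # high agreement -> keep the item
print(consensus_degree([0, 4, 1, 4]))   # low agreement  -> revise the item
```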
| [
{
"created": "Wed, 7 Feb 2024 15:50:54 GMT",
"version": "v1"
}
] | 2024-03-12 | [
[
"Montes",
"Rosana",
""
],
[
"Sanchez",
"Ana M.",
""
],
[
"Villar",
"Pedro",
""
],
[
"Herrera",
"Francisco",
""
]
] |
2403.05552 | Cristobal Romero | W. Chango, R. Cerezo, and C. Romero | Multi-source and multimodal data fusion for predicting academic
performance in blended learning university courses | null | Computers & Electrical Engineering, 89, 106908 (2021) | 10.1016/j.compeleceng.2020.106908 | null | cs.CY cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we applied data fusion approaches for predicting the final
academic performance of university students using multiple-source, multimodal
data from blended learning environments. We collected and preprocessed data
about first-year university students from different sources: theory classes,
practical sessions, on-line Moodle sessions, and a final exam. Our objective
was to discover which data fusion approach produced the best results using our
data. We carried out experiments by applying four different data fusion
approaches and six classification algorithms. The results showed that the best
predictions were produced using ensembles and selecting the best attributes
approach with discretized data. The best prediction models showed us that the
level of attention in theory classes, scores in Moodle quizzes, and the level
of activity in Moodle forums were the best set of attributes for predicting
students' final performance in our courses.
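A hedged sketch of this kind of pipeline in scikit-learn (toy data and hypothetical attribute layout, not the study's data): discretize the fused attributes, select the best ones, and classify with an ensemble.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer

X = np.random.rand(60, 8)                  # fused multi-source attributes
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # toy target: final performance
clf = make_pipeline(
    KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="quantile"),
    SelectKBest(mutual_info_classif, k=4),
    RandomForestClassifier(n_estimators=100, random_state=0),
)
print(clf.fit(X, y).score(X, y))
```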
| [
{
"created": "Thu, 8 Feb 2024 21:29:41 GMT",
"version": "v1"
}
] | 2024-03-12 | [
[
"Chango",
"W.",
""
],
[
"Cerezo",
"R.",
""
],
[
"Romero",
"C.",
""
]
] |
2403.05561 | Michael Guerzhoy | Claire S. Lee, Noelle Lim, and Michael Guerzhoy | Detecting a Proxy for Potential Comorbid ADHD in People Reporting
Anxiety Symptoms from Social Media Data | Forthcoming in Proc. of the Workshop on Computational Linguistics and
Clinical Psychology (CLPsych) at EACL 2024 | Proceedings of the 9th Workshop on Computational Linguistics and
Clinical Psychology (CLPsych 2024) | null | null | cs.CY cs.CL | http://creativecommons.org/licenses/by/4.0/ | We present a novel task that can elucidate the connection between anxiety and
ADHD; use Transformers to make progress toward solving a task that is not
solvable by keyword-based classifiers; and discuss a method for visualization
of our classifier illuminating the connection between anxiety and ADHD
presentations.
Up to approximately 50% of adults with ADHD may also have an anxiety disorder
and approximately 30% of adults with anxiety may also have ADHD. Patients
presenting with anxiety may be treated for anxiety without ADHD ever being
considered, possibly affecting treatment. We show how data that bears on ADHD
that is comorbid with anxiety can be obtained from social media data, and show
that Transformers can be used to detect a proxy for possible comorbid ADHD in
people with anxiety symptoms.
We collected data from anxiety and ADHD online forums (subreddits). We
identified posters who first started posting in the Anxiety subreddit and later
started posting in the ADHD subreddit as well. We use this subset of the
posters as a proxy for people who presented with anxiety symptoms and then
became aware that they might have ADHD. We fine-tune a Transformer
architecture-based classifier to classify people who started posting in the
Anxiety subreddit and then started posting in the ADHD subreddit vs. people who
posted in the Anxiety subreddit without later posting in the ADHD subreddit. We
show that a Transformer architecture is capable of achieving reasonable results
(76% correct for RoBERTa vs. under 60% correct for the best keyword-based
model, both with 50% base rate).
| [
{
"created": "Sat, 17 Feb 2024 10:32:43 GMT",
"version": "v1"
}
] | 2024-08-06 | [
[
"Lee",
"Claire S.",
""
],
[
"Lim",
"Noelle",
""
],
[
"Guerzhoy",
"Michael",
""
]
] |
2403.05595 | Farhad Nazari | Farhad Nazari, Navid Mohajer, Darius Nahavandi, and Abbas Khosravi | Comparison of gait phase detection using traditional machine learning
and deep learning techniques | Copyright \c{opyright} This is the accepted version of an article
published in the proceedings of the 2022 IEEE International Conference on
Systems, Man, and Cybernetics (SMC) | 2022 IEEE International Conference on Systems, Man, and
Cybernetics (SMC) | 10.1109/SMC53654.2022.9945397 | null | eess.SP cs.CV cs.HC cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Human walking is a complex activity with a high level of cooperation and
interaction between different systems in the body. Accurate detection of the
phases of the gait in real time is crucial to control lower-limb assistive
devices like exoskeletons and prostheses. There are several ways to detect the
walking gait phase, ranging from cameras and depth sensors to the sensors
attached to the device itself or the human body. Electromyography (EMG) is one
of the input methods that has captured considerable attention due to its
precision and the time delay between neuromuscular activity and muscle
movement. This study
proposes a few Machine Learning (ML) based models on lower-limb EMG data for
human walking. The proposed models are based on Gaussian Naive Bayes (NB),
Decision Tree (DT), Random Forest (RF), Linear Discriminant Analysis (LDA) and
Deep Convolutional Neural Networks (DCNN). The traditional ML models are
trained on hand-crafted features or their reduced components using Principal
Component Analysis (PCA). On the contrary, the DCNN model utilises
convolutional layers to extract features from raw data. The results show up to
75% average accuracy for traditional ML models and 79% for Deep Learning (DL)
model. The highest achieved accuracy in 50 trials of the training DL model is
89.5%.
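A minimal sketch of the traditional-ML branch described above (scikit-learn; toy data and hypothetical feature names, not the study's EMG recordings): hand-crafted features reduced with PCA and classified with LDA.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

X = np.random.randn(200, 24)       # e.g., RMS/MAV features per EMG channel
y = np.random.randint(0, 4, 200)   # four gait phases
model = make_pipeline(PCA(n_components=8), LinearDiscriminantAnalysis())
print(model.fit(X[:150], y[:150]).score(X[150:], y[150:]))
```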
| [
{
"created": "Thu, 7 Mar 2024 10:05:09 GMT",
"version": "v1"
}
] | 2024-03-12 | [
[
"Nazari",
"Farhad",
""
],
[
"Mohajer",
"Navid",
""
],
[
"Nahavandi",
"Darius",
""
],
[
"Khosravi",
"Abbas",
""
]
] |
2403.05602 | Gilchan Park | Gilchan Park, Sean McCorkle, Carlos Soto, Ian Blaby, Shinjae Yoo | Extracting Protein-Protein Interactions (PPIs) from Biomedical
Literature using Attention-based Relational Context Information | 10 pages, 3 figures, 7 tables, 2022 IEEE International Conference on
Big Data (Big Data) | In 2022 IEEE Big Data, pp. 2052-2061 (2022) | 10.1109/BigData55660.2022.10021099 | null | q-bio.BM cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Because protein-protein interactions (PPIs) are crucial to understand living
systems, harvesting these data is essential to probe disease development and
discern gene/protein functions and biological processes. Some curated datasets
contain PPI data derived from the literature and other sources (e.g., IntAct,
BioGrid, DIP, and HPRD). However, they are far from exhaustive, and their
maintenance is a labor-intensive process. On the other hand, machine learning
methods to automate PPI knowledge extraction from the scientific literature
have been limited by a shortage of appropriate annotated data. This work
presents a unified, multi-source PPI corpus with vetted interaction
definitions augmented by binary interaction type labels and a Transformer-based
deep learning method that exploits entities' relational context information for
relation representation to improve relation classification performance. The
model's performance is evaluated on four widely studied biomedical relation
extraction datasets, as well as this work's target PPI datasets, to observe the
effectiveness of the representation to relation extraction tasks in various
data. Results show the model outperforms prior state-of-the-art models. The
code and data are available at:
https://github.com/BNLNLP/PPI-Relation-Extraction
| [
{
"created": "Fri, 8 Mar 2024 01:43:21 GMT",
"version": "v1"
}
] | 2024-03-12 | [
[
"Park",
"Gilchan",
""
],
[
"McCorkle",
"Sean",
""
],
[
"Soto",
"Carlos",
""
],
[
"Blaby",
"Ian",
""
],
[
"Yoo",
"Shinjae",
""
]
] |
2403.05715 | Aditya Dave | Aditya Dave, Heeseung Bang, Andreas A. Malikopoulos | A Framework for Effective AI Recommendations in Cyber-Physical-Human
Systems | null | IEEE Control Systems Letters (L-CSS), Vol 8, 2024 | 10.1109/LCSYS.2024.3410145 | null | eess.SY cs.AI cs.HC cs.LG cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many cyber-physical-human systems (CPHS) involve a human decision-maker who
may receive recommendations from an artificial intelligence (AI) platform while
holding the ultimate responsibility of making decisions. In such CPHS
applications, the human decision-maker may depart from an optimal recommended
decision and instead implement a different one for various reasons. In this
letter, we develop a rigorous framework to overcome this challenge. In our
framework, we consider that humans may deviate from AI recommendations as they
perceive and interpret the system's state in a different way than the AI
platform. We establish the structural properties of optimal recommendation
strategies and develop an approximate human model (AHM) used by the AI. We
provide theoretical bounds on the optimality gap that arises from an AHM and
illustrate the efficacy of our results in a numerical example.
| [
{
"created": "Fri, 8 Mar 2024 23:02:20 GMT",
"version": "v1"
}
] | 2024-07-18 | [
[
"Dave",
"Aditya",
""
],
[
"Bang",
"Heeseung",
""
],
[
"Malikopoulos",
"Andreas A.",
""
]
] |
2403.05770 | Bingqian Lin | Bingqian Lin, Yanxin Long, Yi Zhu, Fengda Zhu, Xiaodan Liang, Qixiang
Ye, Liang Lin | Towards Deviation-Robust Agent Navigation via Perturbation-Aware
Contrastive Learning | Accepted by TPAMI 2023 | IEEE Transactions on Pattern Analysis and Machine Intelligence
(TPAMI,2023) | 10.1109/TPAMI.2023.3273594 | null | cs.CV cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-and-language navigation (VLN) asks an agent to follow a given language
instruction to navigate through a real 3D environment. Despite significant
advances, conventional VLN agents are trained typically under disturbance-free
environments and may easily fail in real-world scenarios, since they are
unaware of how to deal with various possible disturbances, such as sudden
obstacles or human interruptions, which widely exist and may usually cause an
unexpected route deviation. In this paper, we present a model-agnostic training
paradigm, called Progressive Perturbation-aware Contrastive Learning (PROPER)
to enhance the generalization ability of existing VLN agents, by requiring them
to learn towards deviation-robust navigation. Specifically, a simple yet
effective path perturbation scheme is introduced to implement the route
deviation, with which the agent is required to still navigate successfully
following the original instruction. Since directly forcing the agent to learn
perturbed trajectories may lead to inefficient training, a progressively
perturbed trajectory augmentation strategy is designed, where the agent can
self-adaptively learn to navigate under perturbation with the improvement of
its navigation performance for each specific trajectory. To encourage the
agent to better capture the difference brought by perturbation, a
perturbation-aware contrastive learning mechanism is further developed by
contrasting perturbation-free trajectory encodings and perturbation-based
counterparts. Extensive experiments on R2R show that PROPER can benefit
multiple VLN baselines in perturbation-free scenarios. We further collect the
perturbed path data to construct an introspection subset based on the R2R,
called Path-Perturbed R2R (PP-R2R). The results on PP-R2R show unsatisfying
robustness of popular VLN agents and the capability of PROPER in improving the
navigation robustness.
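For intuition, a minimal sketch of a perturbation-aware contrastive term of the kind described (an InfoNCE-style loss with hypothetical encodings, not the paper's exact objective):

```python
# Pull together encodings of a trajectory and its perturbed counterpart.
import numpy as np

def contrastive_loss(clean, perturbed, tau=0.1):
    z1 = clean / np.linalg.norm(clean, axis=1, keepdims=True)
    z2 = perturbed / np.linalg.norm(perturbed, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                      # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(np.diag(p)).mean()             # i-th pair on the diagonal

clean = np.random.randn(4, 32)
print(contrastive_loss(clean, clean + 0.1 * np.random.randn(4, 32)))
```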
| [
{
"created": "Sat, 9 Mar 2024 02:34:13 GMT",
"version": "v1"
}
] | 2024-03-12 | [
[
"Lin",
"Bingqian",
""
],
[
"Long",
"Yanxin",
""
],
[
"Zhu",
"Yi",
""
],
[
"Zhu",
"Fengda",
""
],
[
"Liang",
"Xiaodan",
""
],
[
"Ye",
"Qixiang",
""
],
[
"Lin",
"Liang",
""
]
] |
2403.05802 | Jie Liu | Jie Liu, Zhongyuan Zhao, Zijian Ding, Benjamin Brock, Hongbo Rong,
Zhiru Zhang | UniSparse: An Intermediate Language for General Sparse Format
Customization | to be published in OOPSLA'24 | Proc. ACM Program. Lang. 8, OOPSLA1, Article 99 (April 2024), 29
pages | 10.1145/3649816 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The ongoing trend of hardware specialization has led to a growing use of
custom data formats when processing sparse workloads, which are typically
memory-bound. These formats facilitate optimized software/hardware
implementations by utilizing sparsity pattern- or target-aware data structures
and layouts to enhance memory access latency and bandwidth utilization.
However, existing sparse tensor programming models and compilers offer little
or no support for productively customizing the sparse formats. Additionally,
because these frameworks represent formats using a limited set of per-dimension
attributes, they lack the flexibility to accommodate numerous new variations of
custom sparse data structures and layouts. To overcome this deficiency, we
propose UniSparse, an intermediate language that provides a unified abstraction
for representing and customizing sparse formats. Unlike the existing
attribute-based frameworks, UniSparse decouples the logical representation of
the sparse tensor (i.e., the data structure) from its low-level memory layout,
enabling the customization of both. As a result, a rich set of format
customizations can be succinctly expressed in a small set of well-defined
query, mutation, and layout primitives. We also develop a compiler leveraging
the MLIR infrastructure, which supports adaptive customization of formats, and
automatic code generation of format conversion and compute operations for
heterogeneous architectures. We demonstrate the efficacy of our approach
through experiments running commonly-used sparse linear algebra operations with
specialized formats on multiple different hardware targets, including an Intel
CPU, an NVIDIA GPU, an AMD Xilinx FPGA, and a simulated processing-in-memory
(PIM) device.
| [
{
"created": "Sat, 9 Mar 2024 05:38:45 GMT",
"version": "v1"
}
] | 2024-03-12 | [
[
"Liu",
"Jie",
""
],
[
"Zhao",
"Zhongyuan",
""
],
[
"Ding",
"Zijian",
""
],
[
"Brock",
"Benjamin",
""
],
[
"Rong",
"Hongbo",
""
],
[
"Zhang",
"Zhiru",
""
]
] |
2403.05856 | Boshen Xu | Boshen Xu, Sipeng Zheng, Qin Jin | POV: Prompt-Oriented View-Agnostic Learning for Egocentric Hand-Object
Interaction in the Multi-View World | Accepted by ACM MM 2023. Project page: https://xuboshen.github.io/ | Proceedings of the 31st ACM International Conference on Multimedia
(2023). Association for Computing Machinery, New York, NY, USA, 2807-2816 | 10.1145/3581783.3612484 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We humans are good at translating third-person observations of hand-object
interactions (HOI) into an egocentric view. However, current methods struggle
to replicate this ability of view adaptation from third-person to first-person.
Although some approaches attempt to learn view-agnostic representation from
large-scale video datasets, they ignore the relationships among multiple
third-person views. To this end, we propose a Prompt-Oriented View-agnostic
learning (POV) framework in this paper, which enables this view adaptation with
few egocentric videos. Specifically, we introduce interactive masking prompts
at the frame level to capture fine-grained action information, and view-aware
prompts at the token level to learn view-agnostic representation. To verify our
method, we establish two benchmarks for transferring from multiple third-person
views to the egocentric view. Our extensive experiments on these benchmarks
demonstrate the efficiency and effectiveness of our POV framework and prompt
tuning techniques in terms of view adaptation and view generalization. Our code
is available at \url{https://github.com/xuboshen/pov_acmmm2023}.
| [
{
"created": "Sat, 9 Mar 2024 09:54:44 GMT",
"version": "v1"
}
] | 2024-03-12 | [
[
"Xu",
"Boshen",
""
],
[
"Zheng",
"Sipeng",
""
],
[
"Jin",
"Qin",
""
]
] |
2403.06014 | Jeonghwan Park | Jeonghwan Park, Paul Miller, Niall McLaughlin | Hard-label based Small Query Black-box Adversarial Attack | 11 pages, 3 figures | IEEE/CVF Winter Conference on Applications of Computer Vision,
2024 | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | We consider the hard label based black box adversarial attack setting which
solely observes predicted classes from the target model. Most of the attack
methods in this setting suffer from an impractical number of queries required to
achieve a successful attack. One approach to tackle this drawback is utilising
the adversarial transferability between white box surrogate models and black
box target model. However, the majority of the methods adopting this approach
are soft label based to take the full advantage of zeroth order optimisation.
Unlike mainstream methods, we propose a new practical setting of hard label
based attack with an optimisation process guided by a pretrained surrogate
model. Experiments show the proposed method significantly improves the query
efficiency of the hard label based black-box attack across various target model
architectures. We find the proposed method achieves an approximately 5 times
higher attack success rate compared to the benchmarks, especially at small
query budgets such as 100 and 250.
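A hedged sketch of the surrogate-guided hard-label loop described above (hypothetical callables; the real method's optimisation is more involved): the white-box surrogate's gradient proposes directions, while the target is queried only for its predicted class, consuming the budget.

```python
import numpy as np

def attack(x, label, target_predict, surrogate_grad, budget=100, step=0.01):
    adv, queries = x.copy(), 0
    while queries < budget:
        adv = adv + step * np.sign(surrogate_grad(adv, label))  # guide step
        queries += 1
        if target_predict(adv) != label:        # hard-label feedback only
            return adv, queries
    return None, queries

target_predict = lambda z: int(z.sum() > 1.0)     # toy target model
surrogate_grad = lambda z, y: np.ones_like(z)     # toy surrogate gradient
adv, q = attack(np.zeros(10), 0, target_predict, surrogate_grad)
print(adv is not None, q)                         # True 11
```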
| [
{
"created": "Sat, 9 Mar 2024 21:26:22 GMT",
"version": "v1"
}
] | 2024-03-12 | [
[
"Park",
"Jeonghwan",
""
],
[
"Miller",
"Paul",
""
],
[
"McLaughlin",
"Niall",
""
]
] |
2403.06349 | Omnia Alwazzan | Omnia Alwazzan (1 and 2), Abbas Khan (1 and 2), Ioannis Patras (1 and
2), Gregory Slabaugh (1 and 2) ((1) School of Electronic Engineering and
Computer Science, Queen Mary University of London, UK, (2) Queen Mary Digital
Environment Research Institute (DERI), London, UK) | MOAB: Multi-Modal Outer Arithmetic Block For Fusion Of Histopathological
Images And Genetic Data For Brain Tumor Grading | null | pages={1--5},year={2023},organization={IEEE} | 10.1109/ISBI53787.2023.10230698 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Brain tumors are an abnormal growth of cells in the brain. They can be
classified into distinct grades based on their growth. Often grading is
performed based on a histological image and is one of the most significant
predictors of a patients prognosis, the higher the grade, the more aggressive
the tumor. Correct diagnosis of a tumor grade remains challenging. Though
histopathological grading has been shown to be prognostic, results are subject
to interobserver variability, even among experienced pathologists. Recently,
the World Health Organization reported that advances in molecular genetics have
led to improvements in tumor classification. This paper seeks to integrate
histological images and genetic data for improved computer-aided diagnosis. We
propose a novel Multi-modal Outer Arithmetic Block (MOAB) based on arithmetic
operations to combine latent representations of the different modalities for
predicting the tumor grade (Grade II, III and IV). Extensive
experiments evaluate the effectiveness of our approach. By applying MOAB to The
Cancer Genome Atlas (TCGA) glioma dataset, we show that it can improve
separation between similar classes (Grade II and III) and outperform
prior state-of-the-art grade classification techniques.
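To illustrate outer-arithmetic fusion of two modality embeddings (a hypothetical simplification, not the MOAB architecture): combine the image and genetic latents with outer product, sum, and difference, then flatten for a classifier head.

```python
import numpy as np

def outer_arithmetic_fuse(h_img, h_gene):
    ops = [np.outer(h_img, h_gene),          # outer product
           np.add.outer(h_img, h_gene),      # outer sum
           np.subtract.outer(h_img, h_gene)] # outer difference
    return np.concatenate([o.ravel() for o in ops])

print(outer_arithmetic_fuse(np.random.rand(8), np.random.rand(8)).shape)
# (192,) -> fed to a grade classification head
```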
| [
{
"created": "Mon, 11 Mar 2024 00:33:28 GMT",
"version": "v1"
}
] | 2024-03-12 | [
[
"Alwazzan",
"Omnia",
"",
"1 and 2"
],
[
"Khan",
"Abbas",
"",
"1 and 2"
],
[
"Patras",
"Ioannis",
"",
"1 and\n 2"
],
[
"Slabaugh",
"Gregory",
"",
"1 and 2"
]
] |
2403.06514 | Maria Lymperaiou | Angeliki Dimitriou, Maria Lymperaiou, Giorgos Filandrianos,
Konstantinos Thomas, Giorgos Stamou | Structure Your Data: Towards Semantic Graph Counterfactuals | null | ICML 2024 | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Counterfactual explanations (CEs) based on concepts are explanations that
consider alternative scenarios to understand which high-level semantic features
contributed to particular model predictions. In this work, we propose CEs based
on the semantic graphs accompanying input data to achieve more descriptive,
accurate, and human-aligned explanations. Building upon state-of-the-art (SoTA)
conceptual attempts, we adopt a model-agnostic edit-based approach and
introduce leveraging GNNs for efficient Graph Edit Distance (GED) computation.
With a focus on the visual domain, we represent images as scene graphs and
obtain their GNN embeddings to bypass solving the NP-hard graph similarity
problem for all input pairs, an integral part of the CE computation process. We
apply our method to benchmark and real-world datasets with varying difficulty
and availability of semantic annotations. Testing on diverse classifiers, we
find that our CEs outperform previous SoTA explanation models based on
semantics, including both white and black-box as well as conceptual and
pixel-level approaches. Their superiority is demonstrated quantitatively and
qualitatively, as validated by human subjects, highlighting the significance of
leveraging semantic edges in the presence of intricate relationships. Our
model-agnostic graph-based approach is widely applicable and easily extensible,
producing actionable explanations across different contexts.
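To make the GED-bypass idea concrete, here is a minimal sketch, assuming each scene graph has already been encoded into a vector by a GNN: the counterfactual is retrieved as the nearest embedding whose predicted class differs from the query's. All names (embeddings, preds) are placeholders, not the authors' code.

```python
# Counterfactual retrieval via GNN embeddings as a surrogate for Graph Edit Distance.
import numpy as np

def nearest_counterfactual(embeddings: np.ndarray,
                           preds: np.ndarray,
                           query_idx: int) -> int:
    """Index of the closest graph whose predicted class differs from the query's."""
    query = embeddings[query_idx]
    dists = np.linalg.norm(embeddings - query, axis=1)
    dists[preds == preds[query_idx]] = np.inf  # exclude same-class graphs (and self)
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 32))      # 100 scene graphs, 32-dim GNN embeddings
preds = rng.integers(0, 3, size=100)  # classifier prediction for each graph
print(nearest_counterfactual(emb, preds, query_idx=0))
```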
| [
{
"created": "Mon, 11 Mar 2024 08:40:37 GMT",
"version": "v1"
},
{
"created": "Sat, 20 Jul 2024 05:23:05 GMT",
"version": "v2"
}
] | 2024-07-23 | [
[
"Dimitriou",
"Angeliki",
""
],
[
"Lymperaiou",
"Maria",
""
],
[
"Filandrianos",
"Giorgos",
""
],
[
"Thomas",
"Konstantinos",
""
],
[
"Stamou",
"Giorgos",
""
]
] |
2403.06570 | Can Cui | Can Cui (MULTISPEECH), Imran Ahamad Sheikh, Mostafa Sadeghi
(MULTISPEECH), Emmanuel Vincent (MULTISPEECH) | Improving Speaker Assignment in Speaker-Attributed ASR for Real Meeting
Applications | Submitted to Odyssey 2024 | The Speaker and Language Recognition Workshop Odyssey 2024, Jun
2024, Quebec, Canada | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Past studies on end-to-end meeting transcription have focused on model
architecture and have mostly been evaluated on simulated meeting data. We
present a novel study aiming to optimize the use of a Speaker-Attributed ASR
(SA-ASR) system in real-life scenarios, such as the AMI meeting corpus, for
improved speaker assignment of speech segments. First, we propose a pipeline
tailored to real-life applications involving Voice Activity Detection (VAD),
Speaker Diarization (SD), and SA-ASR. Second, we advocate using VAD output
segments to fine-tune the SA-ASR model, considering that it is also applied to
VAD segments at test time, and show that this results in a relative reduction
in Speaker Error Rate (SER) of up to 28%. Finally, we explore strategies to enhance
the extraction of the speaker embedding templates used as inputs by the SA-ASR
system. We show that extracting them from SD output rather than annotated
speaker segments results in a relative SER reduction of up to 20%.
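A schematic sketch of the proposed pipeline order (VAD, then SD, then SA-ASR applied to VAD segments) is given below; every component function is a placeholder standing in for a real model, so this shows only the control flow, not the systems used in the paper.

```python
# Control-flow sketch of a VAD -> diarization -> SA-ASR meeting pipeline.
from typing import List, Tuple

def run_vad(audio) -> List[Tuple[float, float]]:
    """Placeholder: return (start, end) speech segments in seconds."""
    return [(0.0, 4.2), (5.1, 9.8)]

def run_diarization(audio, segments):
    """Placeholder: cluster segments by speaker and extract, from the SD
    output, the speaker embedding templates the SA-ASR model will consume."""
    return {"spk0": [0.1, 0.3], "spk1": [0.9, 0.7]}

def run_sa_asr(audio, segment, templates):
    """Placeholder: jointly transcribe a segment and attribute it to a speaker."""
    return "hello everyone", "spk0"

def transcribe_meeting(audio):
    segments = run_vad(audio)
    templates = run_diarization(audio, segments)
    # SA-ASR is applied to VAD segments, matching the fine-tuning condition.
    return [run_sa_asr(audio, seg, templates) for seg in segments]

print(transcribe_meeting(audio=None))
```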
| [
{
"created": "Mon, 11 Mar 2024 10:11:29 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Sep 2024 07:46:09 GMT",
"version": "v2"
}
] | 2024-09-06 | [
[
"Cui",
"Can",
"",
"MULTISPEECH"
],
[
"Sheikh",
"Imran Ahamad",
"",
"MULTISPEECH"
],
[
"Sadeghi",
"Mostafa",
"",
"MULTISPEECH"
],
[
"Vincent",
"Emmanuel",
"",
"MULTISPEECH"
]
] |
2403.06804 | Souhaib Attaiki | Souhaib Attaiki, Maks Ovsjanikov | Shape Non-rigid Kinematics (SNK): A Zero-Shot Method for Non-Rigid Shape
Matching via Unsupervised Functional Map Regularized Reconstruction | NeurIPS 2023, 10 pages, 9 figures | 2023 Advances in Neural Information Processing Systems (NeurIPS) | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present Shape Non-rigid Kinematics (SNK), a novel zero-shot method for
non-rigid shape matching that eliminates the need for extensive training or
ground truth data. SNK operates on a single pair of shapes, and employs a
reconstruction-based strategy using an encoder-decoder architecture, which
deforms the source shape to closely match the target shape. During the process,
an unsupervised functional map is predicted and converted into a point-to-point
map, serving as a supervisory mechanism for the reconstruction. To aid in
training, we have designed a new decoder architecture that generates smooth,
realistic deformations. SNK demonstrates competitive results on traditional
benchmarks, simplifying the shape-matching process without compromising
accuracy. Our code can be found online: https://github.com/pvnieo/SNK
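The conversion from a predicted functional map to a point-to-point map is typically done by nearest-neighbour search in the spectral domain; a minimal sketch follows, assuming standard eigenbasis shapes and map orientation, which may differ from SNK's exact implementation.

```python
# Standard functional-map -> point-to-point conversion via spectral nearest neighbours.
import numpy as np
from scipy.spatial import cKDTree

def fmap_to_p2p(C: np.ndarray, phi_src: np.ndarray, phi_tgt: np.ndarray) -> np.ndarray:
    """C: (k, k) functional map between spectral bases.
    phi_src: (n_src, k) source eigenbasis; phi_tgt: (n_tgt, k) target eigenbasis.
    Returns, for each target vertex, the index of its matched source vertex."""
    tree = cKDTree(phi_src @ C.T)   # rows of the transported source basis
    _, p2p = tree.query(phi_tgt)    # nearest transported row per target vertex
    return p2p

k = 20
phi_src = np.random.randn(1000, k)
phi_tgt = np.random.randn(1200, k)
C = np.eye(k)                       # identity map as a toy example
print(fmap_to_p2p(C, phi_src, phi_tgt)[:5])
```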
| [
{
"created": "Mon, 11 Mar 2024 15:23:11 GMT",
"version": "v1"
}
] | 2024-03-12 | [
[
"Attaiki",
"Souhaib",
""
],
[
"Ovsjanikov",
"Maks",
""
]
] |
2403.06813 | Georgios Leontidis | Mohammad Alkhalefi, Georgios Leontidis, and Mingjun Zhong | LeOCLR: Leveraging Original Images for Contrastive Learning of Visual
Representations | 15 pages, 5 figures, 9 tables - accepted at TMLR 10/2024 | TMLR; 2024 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Contrastive instance discrimination methods outperform supervised learning in
downstream tasks such as image classification and object detection. However,
these methods rely heavily on data augmentation during representation learning,
which can lead to suboptimal results if not implemented carefully. A common
augmentation technique in contrastive learning is random cropping followed by
resizing. This can degrade the quality of representation learning when the two
random crops contain distinct semantic content. To tackle this issue, we
introduce LeOCLR (Leveraging Original Images for Contrastive Learning of Visual
Representations), a framework that employs a novel instance discrimination
approach and an adapted loss function. This method prevents the loss of
important semantic features caused by mapping different object parts during
representation learning. Our experiments demonstrate that LeOCLR consistently
improves representation learning across various datasets, outperforming
baseline models. For instance, LeOCLR surpasses MoCo-v2 by 5.1% on ImageNet-1K
in linear evaluation and outperforms several other methods on transfer learning
and object detection tasks.
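A minimal sketch of a contrastive objective in the spirit of LeOCLR follows: each random crop is pulled toward the representation of its uncropped source image, so semantic content shared with the original is never discarded. The plain InfoNCE form and the temperature value are assumptions, not the paper's exact adapted loss.

```python
# InfoNCE-style loss with the original (uncropped) image as the positive anchor.
import torch
import torch.nn.functional as F

def original_anchored_infonce(z_crops: torch.Tensor,
                              z_orig: torch.Tensor,
                              tau: float = 0.2) -> torch.Tensor:
    """z_crops: (B, d) crop embeddings; z_orig: (B, d) embeddings of the
    corresponding original images (same row index = same source image)."""
    z_crops = F.normalize(z_crops, dim=1)
    z_orig = F.normalize(z_orig, dim=1)
    logits = z_crops @ z_orig.t() / tau      # (B, B) similarity matrix
    targets = torch.arange(z_crops.size(0))  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

loss = original_anchored_infonce(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```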
| [
{
"created": "Mon, 11 Mar 2024 15:33:32 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Jul 2024 18:55:51 GMT",
"version": "v2"
},
{
"created": "Tue, 15 Oct 2024 15:52:15 GMT",
"version": "v3"
}
] | 2024-10-16 | [
[
"Alkhalefi",
"Mohammad",
""
],
[
"Leontidis",
"Georgios",
""
],
[
"Zhong",
"Mingjun",
""
]
] |
2403.07078 | Alhassan Mumuni | Fuseinin Mumuni and Alhassan Mumuni | Improving deep learning with prior knowledge and cognitive models: A
survey on enhancing explainability, adversarial robustness and zero-shot
learning | null | Cognitive Systems Research, 84 (2024) | 10.1016/j.cogsys.2023.101188 | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | We review current and emerging knowledge-informed and brain-inspired
cognitive systems for realizing adversarial defenses, eXplainable Artificial
Intelligence (XAI), and zero-shot or few-shot learning. Data-driven deep
learning models have achieved remarkable performance and demonstrated
capabilities surpassing human experts in many applications. Yet, their
inability to exploit domain knowledge leads to serious performance limitations
in practical applications. In particular, deep learning systems are exposed to
adversarial attacks, which can trick them into making glaringly incorrect
decisions. Moreover, complex data-driven models typically lack interpretability
or explainability, i.e., their decisions cannot be understood by human
subjects. Furthermore, models are usually trained on standard datasets with a
closed-world assumption. Hence, they struggle to generalize to unseen cases
during inference in practical open-world environments, thus, raising the zero-
or few-shot generalization problem. Although many conventional solutions exist,
explicit domain knowledge, brain-inspired neural network and cognitive
architectures offer powerful new dimensions towards alleviating these problems.
Prior knowledge is represented in appropriate forms and incorporated in deep
learning frameworks to improve performance. Brain-inspired cognition methods
use computational models that mimic the human mind to enhance intelligent
behavior in artificial agents and autonomous robots. Ultimately, these models
achieve better explainability, higher adversarial robustness and data-efficient
learning, and can, in turn, provide insights for cognitive science and
neuroscience-that is, to deepen human understanding on how the brain works in
general, and how it handles these problems.
| [
{
"created": "Mon, 11 Mar 2024 18:11:00 GMT",
"version": "v1"
}
] | 2024-03-13 | [
[
"Mumuni",
"Fuseinin",
""
],
[
"Mumuni",
"Alhassan",
""
]
] |
2403.07087 | Serkan Sava\c{s} Assoc. Prof. Dr. | Mustafa Abbas Hussein Hussein, Serkan Sava\c{s} | LSTM-Based Text Generation: A Study on Historical Datasets | null | 16th International Istanbul Scientific Research Congress on Life,
Engineering, Architecture, and Mathematical Sciences Proceedings Book, Pages:
42-49, 2024 | 10.5281/zenodo.10776102 | ISBN: 978-625-6879-50-8 | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper presents an exploration of Long Short-Term Memory (LSTM) networks
in the realm of text generation, focusing on the utilization of historical
datasets for Shakespeare and Nietzsche. LSTMs, known for their effectiveness in
handling sequential data, are applied here to model complex language patterns
and structures inherent in historical texts. The study demonstrates that
LSTM-based models, when trained on historical datasets, can not only generate
text that is linguistically rich and contextually relevant but also provide
insights into the evolution of language patterns over time. The findings
present models that are highly accurate and efficient in predicting text from
works of Nietzsche, with low loss values and a training time of 100 iterations.
On the Nietzsche texts, the model attains an accuracy of 0.9521 and a loss of
0.2518, indicating its effectiveness. On the works of Shakespeare, it attains
an accuracy of 0.9125, indicating a low error rate, with the same training time
of 100 iterations. This efficiency demonstrates the effectiveness of the model
design and training methodology, especially when handling complex literary
texts. This research contributes to the field of natural language processing by
showcasing the versatility of LSTM networks in text generation and offering a
pathway for future explorations in historical linguistics and beyond.
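For readers unfamiliar with the setup, a minimal character-level LSTM text generator of the kind the study describes is sketched below; the toy corpus, vocabulary, and hyperparameters are illustrative, with only the 100 training iterations taken from the reported setting.

```python
# Minimal character-level LSTM language model (illustrative corpus and sizes).
import torch
import torch.nn as nn

class CharLSTM(nn.Module):
    def __init__(self, vocab_size: int, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 64)
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.out(h), state

text = "thus spoke zarathustra"          # stand-in for the historical corpus
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
ids = torch.tensor([[stoi[c] for c in text]])

model = CharLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):                     # 100 iterations, as reported
    logits, _ = model(ids[:, :-1])       # predict each next character
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, len(chars)), ids[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
print(round(loss.item(), 4))
```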
| [
{
"created": "Mon, 11 Mar 2024 18:25:01 GMT",
"version": "v1"
}
] | 2024-03-13 | [
[
"Hussein",
"Mustafa Abbas Hussein",
""
],
[
"Savaş",
"Serkan",
""
]
] |
2403.07092 | Shadab Ahamed | Shadab Ahamed, Natalia Dubljevic, Ingrid Bloise, Claire Gowdy, Patrick
Martineau, Don Wilson, Carlos F. Uribe, Arman Rahmim, and Fereshteh
Yousefirizi | A cascaded deep network for automated tumor detection and segmentation
in clinical PET imaging of diffuse large B-cell lymphoma | 8 pages, 3 figures, 3 tables | Proc. SPIE 12032, Medical Imaging 2022: Image Processing, 120323M
(4 April 2022) | 10.1117/12.2612684 | null | eess.IV cs.CV cs.LG physics.med-ph | http://creativecommons.org/licenses/by/4.0/ | Accurate detection and segmentation of diffuse large B-cell lymphoma (DLBCL)
from PET images have important implications for the estimation of total metabolic
tumor volume, radiomics analysis, surgical intervention and radiotherapy.
Manual segmentation of tumors in whole-body PET images is time-consuming,
labor-intensive and operator-dependent. In this work, we develop and validate a
fast and efficient three-step cascaded deep learning model for automated
detection and segmentation of DLBCL tumors from PET images. As compared to a
single end-to-end network for segmentation of tumors in whole-body PET images,
our three-step model is more effective (improves 3D Dice score from 58.9% to
78.1%) since each of its specialized modules, namely the slice classifier, the
tumor detector and the tumor segmentor, can be trained independently to a high
degree of skill to carry out a specific task, rather than a single network with
suboptimal performance on overall segmentation.
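The control flow of the three-step cascade can be sketched as follows; the three stage models are placeholders (any trained slice classifier, detector, and segmentor could be plugged in), so this illustrates the pipeline structure rather than the actual networks.

```python
# Three-step cascade: slice classifier -> tumor detector -> tumor segmentor.
import numpy as np

def classify_slice(pet_slice) -> bool:
    return pet_slice.max() > 0.5            # placeholder tumor-slice test

def detect_tumors(pet_slice):
    return [(10, 10, 30, 30)]               # placeholder (x0, y0, x1, y1) boxes

def segment_tumor(patch) -> np.ndarray:
    return (patch > 0.5).astype(np.uint8)   # placeholder voxel mask

def cascade(volume: np.ndarray) -> np.ndarray:
    mask = np.zeros_like(volume, dtype=np.uint8)
    for z, pet_slice in enumerate(volume):          # step 1: keep tumor slices
        if not classify_slice(pet_slice):
            continue
        for (x0, y0, x1, y1) in detect_tumors(pet_slice):  # step 2: localize
            patch = pet_slice[y0:y1, x0:x1]
            mask[z, y0:y1, x0:x1] = segment_tumor(patch)   # step 3: segment
    return mask

print(cascade(np.random.rand(4, 64, 64)).sum())
```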
| [
{
"created": "Mon, 11 Mar 2024 18:36:55 GMT",
"version": "v1"
}
] | 2024-03-13 | [
[
"Ahamed",
"Shadab",
""
],
[
"Dubljevic",
"Natalia",
""
],
[
"Bloise",
"Ingrid",
""
],
[
"Gowdy",
"Claire",
""
],
[
"Martineau",
"Patrick",
""
],
[
"Wilson",
"Don",
""
],
[
"Uribe",
"Carlos F.",
""
],
[
"Rahmim",
"Arman",
""
],
[
"Yousefirizi",
"Fereshteh",
""
]
] |
2403.07105 | Shadab Ahamed | Shadab Ahamed, Yixi Xu, Ingrid Bloise, Joo H. O, Carlos F. Uribe,
Rahul Dodhia, Juan L. Ferres, and Arman Rahmim | A slice classification neural network for automated classification of
axial PET/CT slices from a multi-centric lymphoma dataset | 10 pages, 6 figures, 2 tables | Proc. SPIE 12464, Medical Imaging 2023: Image Processing, 124641Q
(3 April 2023) | 10.1117/12.2652947 | null | eess.IV cs.CV cs.LG physics.med-ph | http://creativecommons.org/licenses/by/4.0/ | Automated slice classification is clinically relevant since it can be
incorporated into medical image segmentation workflows as a preprocessing step
that would flag slices with a higher probability of containing tumors, thereby
directing physicians' attention to the important slices. In this work, we train
a ResNet-18 network to classify axial slices of lymphoma PET/CT images
(collected from two institutions) depending on whether the slice intercepted a
tumor (positive slice) in the 3D image or not (negative
slice). Various instances of the network were trained on 2D axial datasets
created in different ways: (i) slice-level split and (ii) patient-level split;
inputs of different types were used: (i) only PET slices and (ii) concatenated
PET and CT slices; and different training strategies were employed: (i)
center-aware (CAW) and (ii) center-agnostic (CAG). Model performances were
compared using the area under the receiver operating characteristic curve
(AUROC) and the area under the precision-recall curve (AUPRC), and various
binary classification metrics. We observe and describe a performance
overestimation in the case of slice-level split as compared to the
patient-level split training. The model trained using patient-level split data
with the network input containing only PET slices in the CAG training regime
was the best performing/generalizing model on a majority of metrics. Our models
were additionally more closely compared using the sensitivity metric on the
positive slices from their respective test sets.
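Adapting ResNet-18 to two-channel (concatenated PET and CT) slice input is a standard modification; a minimal PyTorch sketch is below, with all data and training details illustrative rather than taken from the study.

```python
# ResNet-18 binary slice classifier with concatenated PET+CT input channels.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None)
# Swap the stem to accept 2 channels (PET + CT) instead of RGB.
model.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)  # positive vs. negative slice

slices = torch.randn(4, 2, 224, 224)           # batch of PET+CT axial slices
labels = torch.tensor([0, 1, 1, 0])            # 1 = slice intercepts a tumor
loss = nn.functional.cross_entropy(model(slices), labels)
loss.backward()
print(loss.item())
```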
| [
{
"created": "Mon, 11 Mar 2024 18:57:45 GMT",
"version": "v1"
}
] | 2024-03-13 | [
[
"Ahamed",
"Shadab",
""
],
[
"Xu",
"Yixi",
""
],
[
"Bloise",
"Ingrid",
""
],
[
"O",
"Joo H.",
""
],
[
"Uribe",
"Carlos F.",
""
],
[
"Dodhia",
"Rahul",
""
],
[
"Ferres",
"Juan L.",
""
],
[
"Rahmim",
"Arman",
""
]
] |
2403.07118 | Ameeta Agrawal | Atharva Phatak, Vijay K. Mago, Ameeta Agrawal, Aravind Inbasekaran,
Philippe J. Giabbanelli | Narrating Causal Graphs with Large Language Models | HICSS '24 | Proceedings of the 57th Hawaii International Conference on System
Sciences 2024 | null | https://hdl.handle.net/10125/107290 | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The use of generative AI to create text descriptions from graphs has mostly
focused on knowledge graphs, which connect concepts using facts. In this work
we explore the capability of large pretrained language models to generate text
from causal graphs, where salient concepts are represented as nodes and
causality is represented via directed, typed edges. The causal reasoning
encoded in these graphs can support applications as diverse as healthcare or
marketing. Using two publicly available causal graph datasets, we empirically
investigate the performance of four GPT-3 models under various settings. Our
results indicate that while causal text descriptions improve with training
data, compared to fact-based graphs, they are harder to generate under
zero-shot settings. Results further suggest that users of generative AI can
deploy future applications faster since similar performances are obtained when
training a model with only a few examples as compared to fine-tuning via a
large curated dataset.
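A minimal sketch of the general graph-to-text recipe follows: the typed, directed edges are serialized into a prompt for a pretrained language model. The edge-typing scheme and prompt wording are assumptions about the approach, not the paper's exact prompt format.

```python
# Serialize a causal graph (typed, directed edges) into a generation prompt.
edges = [
    ("poor sleep", "increases", "stress"),
    ("stress", "reduces", "productivity"),
]

def graph_to_prompt(edges) -> str:
    facts = "; ".join(f"{src} {rel} {dst}" for src, rel, dst in edges)
    return (f"Causal graph: {facts}. "
            "Write a short paragraph narrating these causal relationships.")

print(graph_to_prompt(edges))
# The resulting prompt would then be sent to a GPT-3-style completion endpoint.
```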
| [
{
"created": "Mon, 11 Mar 2024 19:19:59 GMT",
"version": "v1"
}
] | 2024-04-09 | [
[
"Phatak",
"Atharva",
""
],
[
"Mago",
"Vijay K.",
""
],
[
"Agrawal",
"Ameeta",
""
],
[
"Inbasekaran",
"Aravind",
""
],
[
"Giabbanelli",
"Philippe J.",
""
]
] |
2403.07193 | Jes\'us Peral | Antonio Ferr\'andez, Roc\'io Lavigne-Cerv\'an, Jes\'us Peral, Ignasi
Navarro-Soria, \'Angel Lloret, David Gil, Carmen Rocamora | CuentosIE: can a chatbot about "tales with a message" help to teach
emotional intelligence? | 26 pages | PeerJ Computer Science, Volume 10, February 2024, ID e1866 | 10.7717/peerj-cs.1866 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this article, we present CuentosIE (TalesEI: chatbot of tales with a
message to develop Emotional Intelligence), an educational chatbot on emotions
that also provides teachers and psychologists with a tool to monitor their
students/patients through indicators and data compiled by CuentosIE. The use of
"tales with a message" is justified by their simplicity and easy understanding,
thanks to their moral or associated metaphors. The main contributions of
CuentosIE are the selection, collection, and classification of a set of highly
specialized tales, as well as the provision of tools (searching, reading
comprehension, chatting, recommending, and classifying) that are useful for
both educating users about emotions and monitoring their emotional development.
The preliminary evaluation of the tool has obtained encouraging results, which
provides an affirmative answer to the question posed in the title of the
article.
| [
{
"created": "Mon, 11 Mar 2024 22:27:16 GMT",
"version": "v1"
}
] | 2024-03-13 | [
[
"Ferrández",
"Antonio",
""
],
[
"Lavigne-Cerván",
"Rocío",
""
],
[
"Peral",
"Jesús",
""
],
[
"Navarro-Soria",
"Ignasi",
""
],
[
"Lloret",
"Ángel",
""
],
[
"Gil",
"David",
""
],
[
"Rocamora",
"Carmen",
""
]
] |
2403.07194 | Cristobal Romero | W. Chango, R. Cerezo, M. Sanchez-Santillan, R. Azevedo, and C. Romero | Improving prediction of students' performance in intelligent tutoring
systems using attribute selection and ensembles of different multimodal data
sources | null | Journal of Computing in Higher Education,2021, 33, 614-634 | 10.1007/s12528-021-09298-8 | null | cs.CY cs.AI cs.HC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this study was to predict university students' learning
performance using different sources of data from an Intelligent Tutoring
System. We collected and preprocessed data from 40 students from different
multimodal sources: learning strategies from system logs, emotions from facial
video recordings, interaction zones from eye tracking, and test performance
from final knowledge evaluation. Our objective was to test whether the
prediction could be improved by using attribute selection and classification
ensembles. We carried out three experiments by applying six classification
algorithms to numerical and discretized preprocessed multimodal data. The
results show that the best predictions were produced using ensembles combined
with selection of the best attributes, applied to numerical data.
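The attribute-selection-plus-ensemble recipe can be sketched with scikit-learn as follows; the number of selected features and the base classifiers are illustrative choices, not necessarily the six algorithms used in the study.

```python
# Attribute selection followed by a classification ensemble, scikit-learn style.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the 40-student multimodal dataset.
X, y = make_classification(n_samples=40, n_features=20, random_state=0)

ensemble = VotingClassifier([
    ("lr", LogisticRegression(max_iter=1000)),
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("rf", RandomForestClassifier(random_state=0)),
])
pipe = make_pipeline(SelectKBest(f_classif, k=8), ensemble)  # keep 8 attributes
print(cross_val_score(pipe, X, y, cv=5).mean())
```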
| [
{
"created": "Sat, 10 Feb 2024 09:31:39 GMT",
"version": "v1"
}
] | 2024-03-13 | [
[
"Chango",
"W.",
""
],
[
"Cerezo",
"R.",
""
],
[
"Sanchez-Santillan",
"M.",
""
],
[
"Azevedo",
"R.",
""
],
[
"Romero",
"C.",
""
]
] |
2403.07286 | Hsin-Ju Lin | Hsin-Ju Lin, Tsu-Chun Chung, Ching-Chun Hsiao, Pin-Yu Chen, Wei-Chen
Chiu, and Ching-Chun Huang | MENTOR: Multilingual tExt detectioN TOward leaRning by analogy | 8 pages, 4 figures, published to IROS 2023 | 2023 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), Detroit, MI, USA, 2023, pp. 3248-3255 | 10.1109/IROS55552.2023.10342419 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Text detection is frequently used in vision-based mobile robots when they
need to interpret texts in their surroundings to perform a given task. For
instance, delivery robots in multilingual cities need to be capable of doing
multilingual text detection so that the robots can read traffic signs and road
markings. Moreover, the target languages change from region to region, implying
the need to efficiently re-train the models to recognize the novel/new
languages. However, collecting and labeling training data for novel languages
are cumbersome, and the efforts to re-train an existing/trained text detector
are considerable. Even worse, such a routine would repeat whenever a novel
language appears. This motivates us to propose a new problem setting for
tackling the aforementioned challenges in a more efficient way: "We ask for a
generalizable multilingual text detection framework to detect and identify both
seen and unseen language regions inside scene images without the requirement of
collecting supervised training data for unseen languages as well as model
re-training". To this end, we propose "MENTOR", the first work to realize a
learning strategy between zero-shot learning and few-shot learning for
multilingual scene text detection.
| [
{
"created": "Tue, 12 Mar 2024 03:35:17 GMT",
"version": "v1"
}
] | 2024-03-13 | [
[
"Lin",
"Hsin-Ju",
""
],
[
"Chung",
"Tsu-Chun",
""
],
[
"Hsiao",
"Ching-Chun",
""
],
[
"Chen",
"Pin-Yu",
""
],
[
"Chiu",
"Wei-Chen",
""
],
[
"Huang",
"Ching-Chun",
""
]
] |
2403.07363 | Yingtao Ren | Yingtao Ren, Xiaomin Zhu, Kaiyuan Bai, Runtong Zhang | A New Random Forest Ensemble of Intuitionistic Fuzzy Decision Trees | null | IEEE Transactions on Fuzzy Systems 31.5 (2023): 1729-1741 | 10.1109/TFUZZ.2022.3215725 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classification is essential to the applications in the field of data mining,
artificial intelligence, and fault detection. There is a strong need to
develop accurate, suitable, and efficient classification methods and
algorithms with broad applicability. Random forest is a general algorithm that
is often used for classification under complex conditions. Although it has been
widely adopted, its combination with diverse fuzzy theories is still worth
exploring. In this paper, we propose the intuitionistic fuzzy random forest
(IFRF), a new random forest ensemble of intuitionistic fuzzy decision trees
(IFDT). The trees in the forest use intuitionistic fuzzy information gain to
select features and consider hesitation in information transmission. The
proposed method enjoys the power of the randomness from bootstrapped sampling
and feature selection, the flexibility of fuzzy logic and fuzzy sets, and the
robustness of multiple classifier systems. Extensive experiments demonstrate
that the IFRF has competitive and superior performance compared to other
state-of-the-art fuzzy and ensemble algorithms. IFDT is more suitable for
ensemble learning with outstanding classification accuracy. This study is the
first to propose a random forest ensemble based on the intuitionistic fuzzy
theory.
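The intuitionistic-fuzzy quantities underlying IFDT can be illustrated briefly: each sample carries a membership degree mu, a non-membership degree nu, and a hesitation degree pi = 1 - mu - nu. The sketch below shows only this algebra and one common intuitionistic fuzzy entropy measure; how IFRF derives mu and nu from data and folds hesitation into its information gain is specific to the paper.

```python
# Intuitionistic fuzzy basics: membership, non-membership, hesitation.
import numpy as np

mu = np.array([0.7, 0.5, 0.2])   # membership degrees of three samples
nu = np.array([0.2, 0.3, 0.6])   # non-membership degrees (mu + nu <= 1)
pi = 1.0 - mu - nu               # hesitation: the "undecided" mass

def if_entropy(mu, nu, pi):
    """One common intuitionistic fuzzy entropy; maximal when mu == nu."""
    return np.mean((np.minimum(mu, nu) + pi) / (np.maximum(mu, nu) + pi))

print(pi, if_entropy(mu, nu, pi))
```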
| [
{
"created": "Tue, 12 Mar 2024 06:52:24 GMT",
"version": "v1"
},
{
"created": "Sun, 17 Mar 2024 11:08:15 GMT",
"version": "v2"
}
] | 2024-03-19 | [
[
"Ren",
"Yingtao",
""
],
[
"Zhu",
"Xiaomin",
""
],
[
"Bai",
"Kaiyuan",
""
],
[
"Zhang",
"Runtong",
""
]
] |