Column schema:

column | type | values
---|---|---
id | string | length 10 (fixed)
submitter | string | length 3-52
authors | string | length 6-7.24k
title | string | length 12-217
comments | string (nullable) | length 1-446
journal-ref | string | length 4-297
doi | string (nullable) | length 12-118
report-no | string | 237 classes
categories | string | length 5-71
license | string | 6 classes
abstract | string | length 90-3.26k
versions | list | length 1-17
update_date | string | 969 classes
authors_parsed | sequence | length 1-451
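Each record below follows this schema. As a minimal, illustrative sketch (not part of the dataset itself), the Python snippet shows how such records could be parsed if they were exported as a JSON Lines file; the file name `arxiv_metadata.jsonl` is a placeholder.

```python
import json

# Sketch: iterate over arXiv metadata records that follow the schema above.
# "arxiv_metadata.jsonl" is a placeholder path, one JSON record per line.
with open("arxiv_metadata.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        arxiv_id = record["id"]                    # e.g. "2403.15019"
        title = " ".join(record["title"].split())  # collapse hard-wrapped whitespace
        categories = record["categories"].split()  # e.g. ["cs.CV"]
        # "versions" is a list of {"created", "version"} dicts (v1 first in the rows shown here).
        latest_version = record["versions"][-1]["version"]
        # "authors_parsed" holds [last, first, suffix] triples.
        authors = [" ".join(part for part in (first, last, suffix) if part)
                   for last, first, suffix in record["authors_parsed"]]
        print(arxiv_id, latest_version, title, "; ".join(authors), sep=" | ")
```

The Hugging Face `datasets` library could load the same file with `load_dataset("json", data_files="arxiv_metadata.jsonl")`, which preserves the nested `versions` and `authors_parsed` fields.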
id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2403.15019 | Jiahao Lu | Jiahao Lu and Jiacheng Deng and Tianzhu Zhang | BSNet: Box-Supervised Simulation-assisted Mean Teacher for 3D Instance
Segmentation | null | CVPR 2024 | null | Accepted by CVPR 2024 | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D instance segmentation (3DIS) is a crucial task, but point-level
annotations are tedious in fully supervised settings. Thus, using bounding
boxes (bboxes) as annotations has shown great potential. The current mainstream
approach is a two-step process, involving the generation of pseudo-labels from
box annotations and the training of a 3DIS network with the pseudo-labels.
However, due to the presence of intersections among bboxes, not every point has
a determined instance label, especially in overlapping areas. To generate
higher quality pseudo-labels and achieve more precise weakly supervised 3DIS
results, we propose the Box-Supervised Simulation-assisted Mean Teacher for 3D
Instance Segmentation (BSNet), which devises a novel pseudo-labeler called
Simulation-assisted Transformer. The labeler consists of two main components.
The first is Simulation-assisted Mean Teacher, which introduces Mean Teacher
for the first time in this task and constructs simulated samples to assist the
labeler in acquiring prior knowledge about overlapping areas. To better model
local-global structure, we also propose Local-Global Aware Attention as the
decoder for teacher and student labelers. Extensive experiments conducted on
the ScanNetV2 and S3DIS datasets verify the superiority of our designs. Code is
available at
\href{https://github.com/peoplelu/BSNet}{https://github.com/peoplelu/BSNet}.
| [
{
"created": "Fri, 22 Mar 2024 08:05:30 GMT",
"version": "v1"
}
] | 2024-03-26 | [
[
"Lu",
"Jiahao",
""
],
[
"Deng",
"Jiacheng",
""
],
[
"Zhang",
"Tianzhu",
""
]
] |
2403.15408 | Sergio Gonz\'alez V\'azquez | Sergio Gonz\'alez, Abel Ko-Chun Yi, Wan-Ting Hsieh, Wei-Chao Chen,
Chun-Li Wang, Victor Chien-Chia Wu, Shang-Hung Chang | Multi-modal Heart Failure Risk Estimation based on Short ECG and Sampled
Long-Term HRV | null | S. Gonz\'alez, A. K.-C. Yi, W.-T. Hsieh, W.-C. Chen, C.-L. Wang,
V. C.-C. Wu, S.-H. Chang, Multi-modal heart failure risk estimation based on
short ECG and sampled long-term HRV, Information Fusion 107 (2024) 102337 | 10.1016/j.inffus.2024.102337 | null | eess.SP cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Cardiovascular diseases, including Heart Failure (HF), remain a leading
global cause of mortality, often evading early detection. In this context,
accessible and effective risk assessment is indispensable. Traditional
approaches rely on resource-intensive diagnostic tests, typically administered
after the onset of symptoms. The widespread availability of electrocardiogram
(ECG) technology and the power of Machine Learning are emerging as viable
alternatives within smart healthcare. In this paper, we propose several
multi-modal approaches that combine 30-second ECG recordings and approximate
long-term Heart Rate Variability (HRV) data to estimate the risk of HF
hospitalization. We introduce two survival models: an XGBoost model with
Accelerated Failure Time (AFT) incorporating comprehensive ECG features and a
ResNet model that learns from the raw ECG. We extend these with our novel
long-term HRVs extracted from the combination of ultra-short-term beat-to-beat
measurements taken over the day. To capture their temporal dynamics, we propose
a survival model comprising ResNet and Transformer architectures (TFM-ResNet).
Our experiments demonstrate high model performance for HF risk assessment with
a concordance index of 0.8537 compared to 14 survival models and competitive
discrimination power on various external ECG datasets. After transferability
tests with Apple Watch data, our approach implemented in the myHeartScore App
offers cost-effective and highly accessible HF risk assessment, contributing to
its prevention and management.
| [
{
"created": "Fri, 1 Mar 2024 01:16:27 GMT",
"version": "v1"
}
] | 2024-03-26 | [
[
"González",
"Sergio",
""
],
[
"Yi",
"Abel Ko-Chun",
""
],
[
"Hsieh",
"Wan-Ting",
""
],
[
"Chen",
"Wei-Chao",
""
],
[
"Wang",
"Chun-Li",
""
],
[
"Wu",
"Victor Chien-Chia",
""
],
[
"Chang",
"Shang-Hung",
""
]
] |
2403.15442 | Hamza Kheddar | Billel Essaid, Hamza Kheddar, Noureddine Batel, Muhammad
E. H. Chowdhury, Abderrahmane Lakas | Artificial Intelligence for Cochlear Implants: Review of Strategies,
Challenges, and Perspectives | null | IEEE Access, 2024 | 10.1109/ACCESS.2024.3429524 | null | eess.AS cs.AI cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Automatic speech recognition (ASR) plays a pivotal role in our daily lives,
offering utility not only for interacting with machines but also for
facilitating communication for individuals with partial or profound hearing
impairments. The process involves receiving the speech signal in analog form,
followed by various signal processing algorithms to make it compatible with
devices of limited capacities, such as cochlear implants (CIs). Unfortunately,
these implants, equipped with a finite number of electrodes, often result in
speech distortion during synthesis. Despite efforts by researchers to enhance
received speech quality using various state-of-the-art (SOTA) signal processing
techniques, challenges persist, especially in scenarios involving multiple
sources of speech, environmental noise, and other adverse conditions. The
advent of new artificial intelligence (AI) methods has ushered in cutting-edge
strategies to address the limitations and difficulties associated with
traditional signal processing techniques dedicated to CIs. This review aims to
comprehensively cover advancements in CI-based ASR and speech enhancement,
among other related aspects. The primary objective is to provide a thorough
overview of metrics and datasets, exploring the capabilities of AI algorithms
in this biomedical field, and summarizing and commenting on the best results
obtained. Additionally, the review will delve into potential applications and
suggest future directions to bridge existing research gaps in this domain.
| [
{
"created": "Sun, 17 Mar 2024 11:28:23 GMT",
"version": "v1"
},
{
"created": "Sun, 21 Jul 2024 21:33:33 GMT",
"version": "v2"
}
] | 2024-07-23 | [
[
"Essaid",
"Billel",
""
],
[
"Kheddar",
"Hamza",
""
],
[
"Batel",
"Noureddine",
""
],
[
"Chowdhury",
"Muhammad E. H.",
""
],
[
"Lakas",
"Abderrahmane",
""
]
] |
2403.15458 | Daniel Fesalbon | Daniel Fesalbon, Arvin De La Cruz, Marvin Mallari, and Nelson Rodelas | Fine-Tuning Pre-trained Language Models to Detect In-Game Trash Talks | null | IJFMR Volume 6, Issue 2, March-April 2024 | 10.36948/ijfmr.2024.v06i02.14927 | null | cs.CL cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Common problems in playing online mobile and computer games were related to
toxic behavior and abusive communication among players. Based on different
reports and studies, the study also discusses the impact of online hate speech
and toxicity on players' in-game performance and overall well-being. This study
investigates the capability of pre-trained language models to classify or
detect trash talk or toxic in-game messages. The study employs and evaluates the
performance of pre-trained BERT and GPT language models in detecting toxicity
within in-game chats. Using publicly available APIs, in-game chat data from
DOTA 2 game matches were collected, processed, reviewed, and labeled as
non-toxic, mild (toxicity), and toxic. The study was able to collect around two
thousand in-game chats to train and test BERT (Base-uncased), BERT
(Large-uncased), and GPT-3 models. Based on the three models' state-of-the-art
performance, this study concludes pre-trained language models' promising
potential for addressing online hate speech and in-game insulting trash talk.
| [
{
"created": "Tue, 19 Mar 2024 11:36:53 GMT",
"version": "v1"
}
] | 2024-03-28 | [
[
"Fesalbon",
"Daniel",
""
],
[
"De La Cruz",
"Arvin",
""
],
[
"Mallari",
"Marvin",
""
],
[
"Rodelas",
"Nelson",
""
]
] |
2403.15491 | Javier Conde | Javier Conde, Miguel Gonz\'alez, Nina Melero, Raquel Ferrando, Gonzalo
Mart\'inez, Elena Merino-G\'omez, Jos\'e Alberto Hern\'andez and Pedro
Reviriego | Open Conversational LLMs do not know most Spanish words | Procesamiento del Lenguaje Natural, 73, 95-108 | Procesamiento del Lenguaje Natural, n. 73, 2024.
http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6603 | 10.26342/2024-73-7 | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The growing interest in Large Language Models (LLMs) and in particular in
conversational models with which users can interact has led to the development
of a large number of open-source chat LLMs. These models are evaluated on a
wide range of benchmarks to assess their capabilities in answering questions or
solving problems on almost any possible topic or to test their ability to
reason or interpret texts. In contrast, the knowledge that these models have of
different languages, for example, the words that they can recognize and use,
has received much less attention. In this paper, we
evaluate the knowledge that open-source chat LLMs have of Spanish words by
testing a sample of words in a reference dictionary. The results show that
open-source chat LLMs produce incorrect meanings for an important fraction of
the words and are not able to use most of the words correctly to write
sentences with context. These results show how Spanish is left behind in the
open-source LLM race and highlight the need to push for linguistic fairness in
conversational LLMs ensuring that they provide similar performance across
languages.
| [
{
"created": "Thu, 21 Mar 2024 15:41:02 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Sep 2024 13:25:01 GMT",
"version": "v2"
}
] | 2024-09-25 | [
[
"Conde",
"Javier",
""
],
[
"González",
"Miguel",
""
],
[
"Melero",
"Nina",
""
],
[
"Ferrando",
"Raquel",
""
],
[
"Martínez",
"Gonzalo",
""
],
[
"Merino-Gómez",
"Elena",
""
],
[
"Hernández",
"José Alberto",
""
],
[
"Reviriego",
"Pedro",
""
]
] |
2403.15523 | Jordy Thielen | H. A. Scheppink, S. Ahmadi, P. Desain, M. Tangermann, J. Thielen | Towards auditory attention decoding with noise-tagging: A pilot study | 6 pages, 2 figures, 9th Graz Brain-Computer Interface Conference 2024 | 9th Graz Brain-Computer Interface Conference (2024) 337-342 | 10.3217/978-3-99161-014-4-059 | null | q-bio.NC cs.AI cs.LG cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Auditory attention decoding (AAD) aims to extract from brain activity the
attended speaker amidst candidate speakers, offering promising applications for
neuro-steered hearing devices and brain-computer interfacing. This pilot study
makes a first step towards AAD using the noise-tagging stimulus protocol, which
evokes reliable code-modulated evoked potentials, but is minimally explored in
the auditory modality. Participants were sequentially presented with two Dutch
speech stimuli that were amplitude-modulated with a unique binary pseudo-random
noise-code, effectively tagging these with additional decodable information. We
compared the decoding of unmodulated audio against audio modulated with various
modulation depths, and a conventional AAD method against a standard method to
decode noise-codes. Our pilot study revealed higher performances for the
conventional method with 70 to 100 percent modulation depths compared to
unmodulated audio. The noise-code decoder did not further improve these
results. These fundamental insights highlight the potential of integrating
noise-codes in speech to enhance auditory speaker detection when multiple
speakers are presented simultaneously.
| [
{
"created": "Fri, 22 Mar 2024 13:35:34 GMT",
"version": "v1"
},
{
"created": "Fri, 17 May 2024 14:44:24 GMT",
"version": "v2"
}
] | 2024-10-15 | [
[
"Scheppink",
"H. A.",
""
],
[
"Ahmadi",
"S.",
""
],
[
"Desain",
"P.",
""
],
[
"Tangermann",
"M.",
""
],
[
"Thielen",
"J.",
""
]
] |
2403.15699 | Huaiwen Zhang | Huaiwen Zhang, Yu Chen, Ming Wang and Shi Feng | FEEL: A Framework for Evaluating Emotional Support Capability with Large
Language Models | Accepted to ICIC 2024 | Advanced Intelligent Computing Technology and Applications. ICIC
2024. Lecture Notes in Computer Science | 10.1007/978-981-97-5618-6_9 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Emotional Support Conversation (ESC) is a typical dialogue that can
effectively assist the user in mitigating emotional pressures. However, owing
to the inherent subjectivity involved in analyzing emotions, current
non-artificial methodologies face challenges in effectively appraising the
emotional support capability. These metrics exhibit a low correlation with
human judgments. Concurrently, manual evaluation methods will cause extremely
high costs. To solve these problems, we propose a novel model FEEL (Framework
for Evaluating Emotional Support Capability with Large Language Models),
employing Large Language Models (LLMs) as evaluators to assess emotional
support capabilities. The model meticulously considers various evaluative
aspects of ESC to apply a more comprehensive and accurate evaluation method for
ESC. Additionally, it employs a probability distribution approach for a more
stable result and integrates an ensemble learning strategy, leveraging multiple
LLMs with assigned weights to enhance evaluation accuracy. To appraise the
performance of FEEL, we conduct extensive experiments on existing ESC model
dialogues. Experimental results demonstrate our model exhibits a substantial
enhancement in alignment with human evaluations compared to the baselines. Our
source code is available at https://github.com/Ansisy/FEEL.
| [
{
"created": "Sat, 23 Mar 2024 03:32:26 GMT",
"version": "v1"
},
{
"created": "Thu, 16 May 2024 02:15:38 GMT",
"version": "v2"
},
{
"created": "Sun, 21 Jul 2024 13:27:02 GMT",
"version": "v3"
}
] | 2024-08-05 | [
[
"Zhang",
"Huaiwen",
""
],
[
"Chen",
"Yu",
""
],
[
"Wang",
"Ming",
""
],
[
"Feng",
"Shi",
""
]
] |
2403.15712 | Chensheng Peng | Chensheng Peng, Zhaoyu Zeng, Jinling Gao, Jundong Zhou, Masayoshi
Tomizuka, Xinbing Wang, Chenghu Zhou, Nanyang Ye | PNAS-MOT: Multi-Modal Object Tracking with Pareto Neural Architecture
Search | IEEE Robotics and Automation Letters 2024. Code is available at
https://github.com/PholyPeng/PNAS-MOT | IEEE Robotics and Automation Letters, 2024 | 10.1109/LRA.2024.3379865 | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | Multiple object tracking is a critical task in autonomous driving. Existing
works primarily focus on the heuristic design of neural networks to obtain high
accuracy. As tracking accuracy improves, however, neural networks become
increasingly complex, posing challenges for their practical application in real
driving scenarios due to the high level of latency. In this paper, we explore
the use of the neural architecture search (NAS) methods to search for efficient
architectures for tracking, aiming for low real-time latency while maintaining
relatively high accuracy. Another challenge for object tracking is the
unreliability of a single sensor, therefore, we propose a multi-modal framework
to improve the robustness. Experiments demonstrate that our algorithm can run
on edge devices within lower latency constraints, thus greatly reducing the
computational requirements for multi-modal object tracking while keeping lower
latency.
| [
{
"created": "Sat, 23 Mar 2024 04:18:49 GMT",
"version": "v1"
}
] | 2024-03-26 | [
[
"Peng",
"Chensheng",
""
],
[
"Zeng",
"Zhaoyu",
""
],
[
"Gao",
"Jinling",
""
],
[
"Zhou",
"Jundong",
""
],
[
"Tomizuka",
"Masayoshi",
""
],
[
"Wang",
"Xinbing",
""
],
[
"Zhou",
"Chenghu",
""
],
[
"Ye",
"Nanyang",
""
]
] |
2403.15857 | Hassan Sartaj | Hassan Sartaj, Asmar Muqeet, Muhammad Zohaib Iqbal, Muhammad Uzair
Khan | Automated System-level Testing of Unmanned Aerial Systems | Published in Automated Software Engineering | Autom Softw Eng 31, 64 (2024) | 10.1007/s10515-024-00462-9 | null | cs.SE cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | Unmanned aerial systems (UAS) rely on various avionics systems that are
safety-critical and mission-critical. A major requirement of international
safety standards is to perform rigorous system-level testing of avionics
software systems. The current industrial practice is to manually create test
scenarios, manually/automatically execute these scenarios using simulators, and
manually evaluate outcomes. The test scenarios typically consist of setting
certain flight or environment conditions and testing the system under test in
these settings. The state-of-the-art approaches for this purpose also require
manual test scenario development and evaluation. In this paper, we propose a
novel approach to automate the system-level testing of the UAS. The proposed
approach (AITester) utilizes model-based testing and artificial intelligence
(AI) techniques to automatically generate, execute, and evaluate various test
scenarios. The test scenarios are generated on the fly, i.e., during test
execution based on the environmental context at runtime. The approach is
supported by a toolset. We empirically evaluate the proposed approach on two
core components of UAS, an autopilot system of an unmanned aerial vehicle (UAV)
and cockpit display systems (CDS) of the ground control station (GCS). The
results show that the AITester effectively generates test scenarios causing
deviations from the expected behavior of the UAV autopilot and reveals
potential flaws in the GCS-CDS.
| [
{
"created": "Sat, 23 Mar 2024 14:47:26 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Aug 2024 11:36:14 GMT",
"version": "v2"
}
] | 2024-08-05 | [
[
"Sartaj",
"Hassan",
""
],
[
"Muqeet",
"Asmar",
""
],
[
"Iqbal",
"Muhammad Zohaib",
""
],
[
"Khan",
"Muhammad Uzair",
""
]
] |
2403.15977 | Timur Ibrayev | Timur Ibrayev, Amitangshu Mukherjee, Sai Aparna Aketi, and Kaushik Roy | Towards Two-Stream Foveation-based Active Vision Learning | Accepted version of the article, 18 pages, 14 figures | IEEE Transactions on Cognitive and Developmental Systems, 2024 | 10.1109/TCDS.2024.3390597 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural network (DNN) based machine perception frameworks process the
entire input in a one-shot manner to provide answers to both "what object is
being observed" and "where it is located". In contrast, the "two-stream
hypothesis" from neuroscience explains the neural processing in the human
visual cortex as an active vision system that utilizes two separate regions of
the brain to answer the what and the where questions. In this work, we propose
a machine learning framework inspired by the "two-stream hypothesis" and
explore the potential benefits that it offers. Specifically, the proposed
framework models the following mechanisms: 1) ventral (what) stream focusing on
the input regions perceived by the fovea part of an eye (foveation), 2) dorsal
(where) stream providing visual guidance, and 3) iterative processing of the
two streams to calibrate visual focus and process the sequence of focused image
patches. The training of the proposed framework is accomplished by label-based
DNN training for the ventral stream model and reinforcement learning for the
dorsal stream model. We show that the two-stream foveation-based learning is
applicable to the challenging task of weakly-supervised object localization
(WSOL), where the training data is limited to the object class or its
attributes. The framework is capable of both predicting the properties of an
object and successfully localizing it by predicting its bounding box. We also
show that, due to the independent nature of the two streams, the dorsal model
can be applied on its own to unseen images to localize objects from different
datasets.
| [
{
"created": "Sun, 24 Mar 2024 01:20:08 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Apr 2024 21:08:05 GMT",
"version": "v2"
},
{
"created": "Sat, 20 Apr 2024 20:19:11 GMT",
"version": "v3"
}
] | 2024-04-23 | [
[
"Ibrayev",
"Timur",
""
],
[
"Mukherjee",
"Amitangshu",
""
],
[
"Aketi",
"Sai Aparna",
""
],
[
"Roy",
"Kaushik",
""
]
] |
2403.16020 | Tanvir Mahmud | Tanvir Mahmud, Burhaneddin Yaman, Chun-Hao Liu, Diana Marculescu | PaPr: Training-Free One-Step Patch Pruning with Lightweight ConvNets for
Faster Inference | Accepted in ECCV 2024. Code: https://github.com/tanvir-utexas/PaPr | European Conference on Computer Vision (ECCV) 2024 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | As deep neural networks evolve from convolutional neural networks (ConvNets)
to advanced vision transformers (ViTs), there is an increased need to eliminate
redundant data for faster processing without compromising accuracy. Previous
methods are often architecture-specific or necessitate re-training, restricting
their applicability with frequent model updates. To solve this, we first
introduce a novel property of lightweight ConvNets: their ability to identify
key discriminative patch regions in images, irrespective of the model's final
accuracy or size. We demonstrate that fully-connected layers are the primary
bottleneck for ConvNets performance, and their suppression with simple weight
recalibration markedly enhances discriminative patch localization performance.
Using this insight, we introduce PaPr, a method for substantially pruning
redundant patches with minimal accuracy loss using lightweight ConvNets across
a variety of deep learning architectures, including ViTs, ConvNets, and hybrid
transformers, without any re-training. Moreover, the simple early-stage
one-step patch pruning with PaPr enhances existing patch reduction methods.
Through extensive testing on diverse architectures, PaPr achieves significantly
higher accuracy over state-of-the-art patch reduction methods with similar FLOP
count reduction. More specifically, PaPr reduces about 70% of redundant patches
in videos with less than 0.8% drop in accuracy, and up to 3.7x FLOPs reduction,
which is a 15% more reduction with 2.5% higher accuracy. Code is released at
https://github.com/tanvir-utexas/PaPr.
| [
{
"created": "Sun, 24 Mar 2024 05:50:00 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Jul 2024 07:21:18 GMT",
"version": "v2"
}
] | 2024-07-04 | [
[
"Mahmud",
"Tanvir",
""
],
[
"Yaman",
"Burhaneddin",
""
],
[
"Liu",
"Chun-Hao",
""
],
[
"Marculescu",
"Diana",
""
]
] |
2403.16071 | Linzhi Wu | Linzhi Wu, Xingyu Zhang, Yakun Zhang, Changyan Zheng, Tiejun Liu,
Liang Xie, Ye Yan and Erwei Yin | Landmark-Guided Cross-Speaker Lip Reading with Mutual Information
Regularization | To appear in LREC-COLING 2024 | The 2024 Joint International Conference on Computational
Linguistics, Language Resources and Evaluation (LREC-COLING 2024) | null | null | cs.AI cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lip reading, the process of interpreting silent speech from visual lip
movements, has gained rising attention for its wide range of realistic
applications. Deep learning approaches greatly improve current lip reading
systems. However, lip reading in cross-speaker scenarios, where the speaker
identity changes, poses a challenging problem due to inter-speaker variability.
A well-trained lip reading system may perform poorly when handling a brand new
speaker. To learn a speaker-robust lip reading model, a key insight is to
reduce visual variations across speakers, avoiding the model overfitting to
specific speakers. In this work, in view of both input visual clues and latent
representations based on a hybrid CTC/attention architecture, we propose to
exploit the lip landmark-guided fine-grained visual clues instead of
frequently-used mouth-cropped images as input features, diminishing
speaker-specific appearance characteristics. Furthermore, a max-min mutual
information regularization approach is proposed to capture speaker-insensitive
latent representations. Experimental evaluations on public lip reading datasets
demonstrate the effectiveness of the proposed approach under the intra-speaker
and inter-speaker conditions.
| [
{
"created": "Sun, 24 Mar 2024 09:18:21 GMT",
"version": "v1"
},
{
"created": "Thu, 2 May 2024 08:53:35 GMT",
"version": "v2"
}
] | 2024-05-03 | [
[
"Wu",
"Linzhi",
""
],
[
"Zhang",
"Xingyu",
""
],
[
"Zhang",
"Yakun",
""
],
[
"Zheng",
"Changyan",
""
],
[
"Liu",
"Tiejun",
""
],
[
"Xie",
"Liang",
""
],
[
"Yan",
"Ye",
""
],
[
"Yin",
"Erwei",
""
]
] |
2403.16081 | Mutlu Cukurova PhD | Mutlu Cukurova | The Interplay of Learning, Analytics, and Artificial Intelligence in
Education: A Vision for Hybrid Intelligence | 20 pages, 7 figures, this paper is based on the keynote talk given by
the author at the ACM International Conference on Learning Analytics &
Knowledge (LAK) 2024 in Kyoto, Japan.
https://www.solaresearch.org/events/lak/lak24/keynotes/ | British Journal of Educational Technology 2024 | null | null | cs.CY cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper presents a multi-dimensional view of AI's role in learning and
education, emphasizing the intricate interplay between AI, analytics, and the
learning processes. Here, I challenge the prevalent narrow conceptualisation of
AI as tools, as exemplified in generative AI tools, and argue for the
importance of alternative conceptualisations of AI for achieving human-AI
hybrid intelligence. I highlight the differences between human intelligence and
artificial information processing, the importance of hybrid human-AI systems to
extend human cognition, and posit that AI can also serve as an instrument for
understanding human learning. Early learning sciences and AI in Education
research (AIED), which saw AI as an analogy for human intelligence, have
diverged from this perspective, prompting a need to rekindle this connection.
The paper presents three unique conceptualisations of AI: the externalization
of human cognition, the internalization of AI models to influence human mental
models, and the extension of human cognition via tightly coupled human-AI
hybrid intelligence systems. Examples from current research and practice are
examined as instances of the three conceptualisations in education,
highlighting the potential value and limitations of each conceptualisation for
education, as well as the perils of overemphasis on externalising human
cognition. The paper concludes with advocacy for a broader approach to AIED
that goes beyond considerations on the design and development of AI, but also
includes educating people about AI and innovating educational systems to remain
relevant in an AI-ubiquitous world.
| [
{
"created": "Sun, 24 Mar 2024 10:07:46 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Apr 2024 06:14:57 GMT",
"version": "v2"
},
{
"created": "Fri, 14 Jun 2024 08:05:18 GMT",
"version": "v3"
},
{
"created": "Mon, 8 Jul 2024 13:38:27 GMT",
"version": "v4"
}
] | 2024-07-09 | [
[
"Cukurova",
"Mutlu",
""
]
] |
2403.16158 | SungJoo Byun | Sungjoo Byun, Jiseung Hong, Sumin Park, Dongjun Jang, Jean Seo,
Minseok Kim, Chaeyoung Oh, Hyopil Shin | Korean Bio-Medical Corpus (KBMC) for Medical Named Entity Recognition | null | LREC-COLING 2024 | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Named Entity Recognition (NER) plays a pivotal role in medical Natural
Language Processing (NLP). Yet, there has not been an open-source medical NER
dataset specifically for the Korean language. To address this, we utilized
ChatGPT to assist in constructing the KBMC (Korean Bio-Medical Corpus), which
we are now presenting to the public. With the KBMC dataset, we noticed an
impressive 20% increase in medical NER performance compared to models trained
on general Korean NER datasets. This research underscores the significant
benefits and importance of using specialized tools and datasets, like ChatGPT,
to enhance language processing in specialized fields such as healthcare.
| [
{
"created": "Sun, 24 Mar 2024 13:51:05 GMT",
"version": "v1"
}
] | 2024-03-26 | [
[
"Byun",
"Sungjoo",
""
],
[
"Hong",
"Jiseung",
""
],
[
"Park",
"Sumin",
""
],
[
"Jang",
"Dongjun",
""
],
[
"Seo",
"Jean",
""
],
[
"Kim",
"Minseok",
""
],
[
"Oh",
"Chaeyoung",
""
],
[
"Shin",
"Hyopil",
""
]
] |
2403.16198 | Junqiao Fan | Junqiao Fan, Jianfei Yang, Yuecong Xu, Lihua Xie | Diffusion Model is a Good Pose Estimator from 3D RF-Vision | null | ECCV 2024 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Human pose estimation (HPE) from Radio Frequency vision (RF-vision) performs
human sensing using RF signals that penetrate obstacles without revealing
privacy (e.g., facial information). Recently, mmWave radar has emerged as a
promising RF-vision sensor, providing radar point clouds by processing RF
signals. However, the mmWave radar has a limited resolution with severe noise,
leading to inaccurate and inconsistent human pose estimation. This work
proposes mmDiff, a novel diffusion-based pose estimator tailored for noisy
radar data. Our approach aims to provide reliable guidance as conditions to
diffusion models. Two key challenges are addressed by mmDiff: (1)
miss-detection of parts of human bodies, which is addressed by a module that
isolates feature extraction from different body parts, and (2) signal
inconsistency due to environmental interference, which is tackled by
incorporating prior knowledge of body structure and motion. Several modules are
designed to achieve these goals, whose features work as the conditions for the
subsequent diffusion model, eliminating the miss-detection and instability of
HPE based on RF-vision. Extensive experiments demonstrate that mmDiff
outperforms existing methods significantly, achieving state-of-the-art
performances on public datasets.
| [
{
"created": "Sun, 24 Mar 2024 15:39:52 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Jul 2024 03:27:30 GMT",
"version": "v2"
}
] | 2024-07-23 | [
[
"Fan",
"Junqiao",
""
],
[
"Yang",
"Jianfei",
""
],
[
"Xu",
"Yuecong",
""
],
[
"Xie",
"Lihua",
""
]
] |
2403.16347 | Minaoar Tanzil | Minaoar Hossain Tanzil, Junaed Younus Khan, Gias Uddin | ChatGPT Incorrectness Detection in Software Reviews | null | IEEE/ACM 46th International Conference on Software Engineering
(ICSE 2024) | 10.1145/3597503.3639194 | null | cs.SE cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We conducted a survey of 135 software engineering (SE) practitioners to
understand how they use Generative AI-based chatbots like ChatGPT for SE tasks.
We find that they want to use ChatGPT for SE tasks like software library
selection but often worry about the truthfulness of ChatGPT responses. We
developed a suite of techniques and a tool called CID (ChatGPT Incorrectness
Detector) to automatically test and detect the incorrectness in ChatGPT
responses. CID is based on the iterative prompting to ChatGPT by asking it
contextually similar but textually divergent questions (using an approach that
utilizes metamorphic relationships in texts). The underlying principle in CID
is that for a given question, a response that is different from other responses
(across multiple incarnations of the question) is likely an incorrect response.
In a benchmark study of library selection, we show that CID can detect
incorrect responses from ChatGPT with an F1-score of 0.74 - 0.75.
| [
{
"created": "Mon, 25 Mar 2024 00:50:27 GMT",
"version": "v1"
}
] | 2024-03-26 | [
[
"Tanzil",
"Minaoar Hossain",
""
],
[
"Khan",
"Junaed Younus",
""
],
[
"Uddin",
"Gias",
""
]
] |
2403.16384 | Jintong Hu | Jintong Hu, Hui Che, Zishuo Li, Wenming Yang | Residual Dense Swin Transformer for Continuous Depth-Independent
Ultrasound Imaging | Accepted by ICASSP2024, https://ieeexplore.ieee.org/document/10447712 | ICASSP 2024 - 2024 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP) | 10.1109/ICASSP48485.2024.10447712 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Ultrasound imaging is crucial for evaluating organ morphology and function,
yet depth adjustment can degrade image quality and field-of-view, presenting a
depth-dependent dilemma. Traditional interpolation-based zoom-in techniques
often sacrifice detail and introduce artifacts. Motivated by the potential of
arbitrary-scale super-resolution to naturally address these inherent
challenges, we present the Residual Dense Swin Transformer Network (RDSTN),
designed to capture the non-local characteristics and long-range dependencies
intrinsic to ultrasound images. It comprises a linear embedding module for
feature enhancement, an encoder with shifted-window attention for modeling
non-locality, and an MLP decoder for continuous detail reconstruction. This
strategy streamlines balancing image quality and field-of-view, which offers
superior textures over traditional methods. Experimentally, RDSTN outperforms
existing approaches while requiring fewer parameters. In conclusion, RDSTN
shows promising potential for ultrasound image enhancement by overcoming the
limitations of conventional interpolation-based methods and achieving
depth-independent imaging.
| [
{
"created": "Mon, 25 Mar 2024 03:01:53 GMT",
"version": "v1"
}
] | 2024-03-27 | [
[
"Hu",
"Jintong",
""
],
[
"Che",
"Hui",
""
],
[
"Li",
"Zishuo",
""
],
[
"Yang",
"Wenming",
""
]
] |
2403.16418 | Thiago Alves Rocha | Ant\^onio Carlos Souza Ferreira J\'unior, Thiago Alves Rocha | An Incremental MaxSAT-based Model to Learn Interpretable and Balanced
Classification Rules | 16 pages, 5 tables, submitted to BRACIS 2023 (Brazilian Conference on
Intelligent Systems), accepted version published in Intelligent Systems,
LNCS, vol 14195 | Intelligent Systems (2023), LNCS, vol 14195 (pp. 227-242),
Springer Nature | 10.1007/978-3-031-45368-7_15 | null | cs.LG cs.AI cs.LO | http://creativecommons.org/licenses/by/4.0/ | The increasing advancements in the field of machine learning have led to the
development of numerous applications that effectively address a wide range of
problems with accurate predictions. However, in certain cases, accuracy alone
may not be sufficient. Many real-world problems also demand explanations and
interpretability behind the predictions. Classification rules are among the
most popular interpretable models. This work aims to propose an incremental
model for learning interpretable and balanced rules based on MaxSAT, called
IMLIB. This new model was based on two other approaches, one based on SAT and
the other on MaxSAT. The one based on SAT limits the size of each generated
rule, making it possible to balance them. We suggest that such a set of rules
seem more natural to be understood compared to a mixture of large and small
rules. The approach based on MaxSAT, called IMLI, presents a technique to
increase performance that involves learning a set of rules by incrementally
applying the model in a dataset. Finally, IMLIB and IMLI are compared using
diverse databases. IMLIB obtained results comparable to IMLI in terms of
accuracy, generating more balanced rules with smaller sizes.
| [
{
"created": "Mon, 25 Mar 2024 04:43:47 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Apr 2024 13:00:21 GMT",
"version": "v2"
}
] | 2024-04-30 | [
[
"Júnior",
"Antônio Carlos Souza Ferreira",
""
],
[
"Rocha",
"Thiago Alves",
""
]
] |
2403.16438 | Yosuke Bando | Yosuke Bando, Ramdas Pillai, Atsushi Kajita, Farhan Abdul Hakeem, Yves
Quemener, Hua-an Tseng, Kiryl D. Piatkevich, Changyang Linghu, Xue Han,
Edward S. Boyden | Real-time Neuron Segmentation for Voltage Imaging | null | IEEE International Conference on Bioinformatics and Biomedicine
(BIBM), 813-818, 2023 | 10.1109/BIBM58861.2023.10385929 | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In voltage imaging, where the membrane potentials of individual neurons are
recorded at hundreds to thousands of frames per second using fluorescence
microscopy, data processing presents a challenge. Even a fraction of a minute
of recording with a limited image size yields gigabytes of video data
consisting of tens of thousands of frames, which can be time-consuming to
process. Moreover, millisecond-level short exposures lead to noisy video
frames, obscuring neuron footprints especially in deep-brain samples where
noisy signals are buried in background fluorescence. To address this challenge,
we propose a fast neuron segmentation method able to detect multiple,
potentially overlapping, spiking neurons from noisy video frames, and implement
a data processing pipeline incorporating the proposed segmentation method along
with GPU-accelerated motion correction. By testing on existing datasets as well
as on new datasets we introduce, we show that our pipeline extracts neuron
footprints that agree well with human annotation even from cluttered datasets,
and demonstrate real-time processing of voltage imaging data on a single
desktop computer for the first time.
| [
{
"created": "Mon, 25 Mar 2024 05:46:06 GMT",
"version": "v1"
}
] | 2024-03-26 | [
[
"Bando",
"Yosuke",
""
],
[
"Pillai",
"Ramdas",
""
],
[
"Kajita",
"Atsushi",
""
],
[
"Hakeem",
"Farhan Abdul",
""
],
[
"Quemener",
"Yves",
""
],
[
"Tseng",
"Hua-an",
""
],
[
"Piatkevich",
"Kiryl D.",
""
],
[
"Linghu",
"Changyang",
""
],
[
"Han",
"Xue",
""
],
[
"Boyden",
"Edward S.",
""
]
] |
2403.16495 | Haifeng Li | Qinyao Luo, Silu He, Xing Han, Yuhan Wang, Haifeng Li | LSTTN: A Long-Short Term Transformer-based Spatio-temporal Neural
Network for Traffic Flow Forecasting | 15 pages, 10 figures, 6 tables | Knowledge-Based Systems 2024 | 10.1016/j.knosys.2024.111637 | null | cs.LG cs.AI cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate traffic forecasting is a fundamental problem in intelligent
transportation systems and learning long-range traffic representations with key
information through spatiotemporal graph neural networks (STGNNs) is a basic
assumption of current traffic flow prediction models. However, due to
structural limitations, existing STGNNs can only utilize short-range traffic
flow data; therefore, the models cannot adequately learn the complex trends and
periodic features in traffic flow. Besides, it is challenging to extract the
key temporal information from the long historical traffic series and obtain a
compact representation. To solve the above problems, we propose a novel LSTTN
(Long-Short Term Transformer-based Network) framework comprehensively
considering the long- and short-term features in historical traffic flow.
First, we employ a masked subseries Transformer to infer the content of masked
subseries from a small portion of unmasked subseries and their temporal context
in a pretraining manner, forcing the model to efficiently learn compressed and
contextual subseries temporal representations from long historical series.
Then, based on the learned representations, long-term trend is extracted by
using stacked 1D dilated convolution layers, and periodic features are
extracted by dynamic graph convolution layers. For the difficulties in making
time-step level prediction, LSTTN adopts a short-term trend extractor to learn
fine-grained short-term temporal features. Finally, LSTTN fuses the long-term
trend, periodic features and short-term features to obtain the prediction
results. Experiments on four real-world datasets show that in 60-minute-ahead
long-term forecasting, the LSTTN model achieves a minimum improvement of 5.63\%
and a maximum improvement of 16.78\% over baseline models. The source code is
available at https://github.com/GeoX-Lab/LSTTN.
| [
{
"created": "Mon, 25 Mar 2024 07:23:23 GMT",
"version": "v1"
}
] | 2024-03-26 | [
[
"Luo",
"Qinyao",
""
],
[
"He",
"Silu",
""
],
[
"Han",
"Xing",
""
],
[
"Wang",
"Yuhan",
""
],
[
"Li",
"Haifeng",
""
]
] |
2403.16609 | Biswesh Mohapatra | Biswesh Mohapatra, Seemab Hassan, Laurent Romary and Justine Cassell | Conversational Grounding: Annotation and Analysis of Grounding Acts and
Grounding Units | null | LREC-COLING 2024 | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Successful conversations often rest on common understanding, where all
parties are on the same page about the information being shared. This process,
known as conversational grounding, is crucial for building trustworthy dialog
systems that can accurately keep track of and recall the shared information.
The proficiencies of an agent in grounding the conveyed information
significantly contribute to building a reliable dialog system. Despite recent
advancements in dialog systems, there exists a noticeable deficit in their
grounding capabilities. Traum provided a framework for conversational grounding
introducing Grounding Acts and Grounding Units, but substantial progress,
especially in the realm of Large Language Models, remains lacking. To bridge
this gap, we present the annotation of two dialog corpora employing Grounding
Acts, Grounding Units, and a measure of their degree of grounding. We discuss
our key findings during the annotation and also provide a baseline model to
test the performance of current Language Models in categorizing the grounding
acts of the dialogs. Our work aims to provide a useful resource for further
research in making conversations with machines better understood and more
reliable in natural day-to-day collaborative dialogs.
| [
{
"created": "Mon, 25 Mar 2024 10:39:18 GMT",
"version": "v1"
}
] | 2024-03-26 | [
[
"Mohapatra",
"Biswesh",
""
],
[
"Hassan",
"Seemab",
""
],
[
"Romary",
"Laurent",
""
],
[
"Cassell",
"Justine",
""
]
] |
2403.16655 | Pb Pati | Rohit Raju, Peeta Basa Pati, SA Gandheesh, Gayatri Sanjana Sannala and
Suriya KS | Grammatical vs Spelling Error Correction: An Investigation into the
Responsiveness of Transformer-based Language Models using BART and MarianMT | null | Journal of Information & Knowledge Management, 2024, World
Scientific | 10.1142/S0219649224500370 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Text continues to remain a relevant form of representation for information.
Text documents are created either in digital native platforms or through the
conversion of other media files such as images and speech. While the digital
native text is invariably obtained through physical or virtual keyboards,
technologies such as OCR and speech recognition are utilized to transform the
images and speech signals into text content. All of these mechanisms of
text generation also introduce errors into the captured text.
This project aims at analyzing the different kinds of errors that occur in text
documents. The work employs two of the advanced deep neural network-based
language models, namely, BART and MarianMT, to rectify the anomalies present in
the text. Transfer learning of these models with available dataset is performed
to finetune their capacity for error correction. A comparative study is
conducted to investigate the effectiveness of these models in handling each of
the defined error categories. It is observed that while both models can bring
down the erroneous sentences by 20+%, BART can handle spelling errors far
better (24.6%) than grammatical errors (8.8%).
| [
{
"created": "Mon, 25 Mar 2024 11:45:21 GMT",
"version": "v1"
}
] | 2024-03-26 | [
[
"Raju",
"Rohit",
""
],
[
"Pati",
"Peeta Basa",
""
],
[
"Gandheesh",
"SA",
""
],
[
"Sannala",
"Gayatri Sanjana",
""
],
[
"KS",
"Suriya",
""
]
] |
2403.16669 | Yin Zhang | Yin Zhang, Jinhong Deng, Peidong Liu, Wen Li, and Shiyu Zhao | Domain Adaptive Detection of MAVs: A Benchmark and Noise Suppression
Network | 17 pages, 11 figures. Accepted by IEEE Transactions on Automation
Science and Engineering | IEEE Transactions on Automation Science and Engineering, 2024 | 10.1109/TASE.2024.3370147 | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual detection of Micro Air Vehicles (MAVs) has attracted increasing
attention in recent years due to its important application in various tasks.
The existing methods for MAV detection assume that the training set and testing
set have the same distribution. As a result, when deployed in new domains, the
detectors would have a significant performance degradation due to domain
discrepancy. In this paper, we study the problem of cross-domain MAV detection.
The contributions of this paper are threefold. 1) We propose a
Multi-MAV-Multi-Domain (M3D) dataset consisting of both simulation and
realistic images. Compared to other existing datasets, the proposed one is more
comprehensive in the sense that it covers rich scenes, diverse MAV types, and
various viewing angles. A new benchmark for cross-domain MAV detection is
proposed based on the proposed dataset. 2) We propose a Noise Suppression
Network (NSN) based on the framework of pseudo-labeling and a large-to-small
training procedure. To reduce the challenging pseudo-label noises, two novel
modules are designed in this network. The first is a prior-based curriculum
learning module for allocating adaptive thresholds for pseudo labels with
different difficulties. The second is a masked copy-paste augmentation module
for pasting truly-labeled MAVs on unlabeled target images and thus decreasing
pseudo-label noises. 3) Extensive experimental results verify the superior
performance of the proposed method compared to the state-of-the-art ones. In
particular, it achieves mAP of 46.9%(+5.8%), 50.5%(+3.7%), and 61.5%(+11.3%) on
the tasks of simulation-to-real adaptation, cross-scene adaptation, and
cross-camera adaptation, respectively.
| [
{
"created": "Mon, 25 Mar 2024 12:07:24 GMT",
"version": "v1"
}
] | 2024-03-26 | [
[
"Zhang",
"Yin",
""
],
[
"Deng",
"Jinhong",
""
],
[
"Liu",
"Peidong",
""
],
[
"Li",
"Wen",
""
],
[
"Zhao",
"Shiyu",
""
]
] |
2403.17012 | Fanfei Meng | Fanfei Meng, Chen-Ao Wang, Lele Zhang | Evolution and Efficiency in Neural Architecture Search: Bridging the Gap
Between Expert Design and Automated Optimization | 7 Pages, Double Column | Journal of Mathematical Techniques and Computational Mathematics,
2024, Volume 3, Issue 3 | null | null | cs.NE cs.AI | http://creativecommons.org/licenses/by/4.0/ | The paper provides a comprehensive overview of Neural Architecture Search
(NAS), emphasizing its evolution from manual design to automated,
computationally-driven approaches. It covers the inception and growth of NAS,
highlighting its application across various domains, including medical imaging
and natural language processing. The document details the shift from
expert-driven design to algorithm-driven processes, exploring initial
methodologies like reinforcement learning and evolutionary algorithms. It also
discusses the challenges of computational demands and the emergence of
efficient NAS methodologies, such as Differentiable Architecture Search and
hardware-aware NAS. The paper further elaborates on NAS's application in
computer vision, NLP, and beyond, demonstrating its versatility and potential
for optimizing neural network architectures across different tasks. Future
directions and challenges, including computational efficiency and the
integration with emerging AI domains, are addressed, showcasing NAS's dynamic
nature and its continued evolution towards more sophisticated and efficient
architecture search methods.
| [
{
"created": "Sun, 11 Feb 2024 18:27:29 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Apr 2024 06:35:04 GMT",
"version": "v2"
}
] | 2024-04-03 | [
[
"Meng",
"Fanfei",
""
],
[
"Wang",
"Chen-Ao",
""
],
[
"Zhang",
"Lele",
""
]
] |
2403.17089 | Ben Wang | Ben Wang | GOLF: Goal-Oriented Long-term liFe tasks supported by human-AI
collaboration | null | Proceedings of the 47th International ACM SIGIR Conference on
Research and Development in Information Retrieval (SIGIR 2024) | 10.1145/3626772.3657655 | null | cs.HC cs.AI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advent of ChatGPT and similar large language models (LLMs) has
revolutionized the human-AI interaction and information-seeking process.
Leveraging LLMs as an alternative to search engines, users can now access
summarized information tailored to their queries, significantly reducing the
cognitive load associated with navigating vast information resources. This
shift underscores the potential of LLMs in redefining information access
paradigms. Drawing on the foundation of task-focused information retrieval and
LLMs' task planning ability, this research extends the scope of LLM
capabilities beyond routine task automation to support users in navigating
long-term and significant life tasks. It introduces the GOLF framework
(Goal-Oriented Long-term liFe tasks), which focuses on enhancing LLMs' ability
to assist in significant life decisions through goal orientation and long-term
planning. The methodology encompasses a comprehensive simulation study to test
the framework's efficacy, followed by model and human evaluations to develop a
dataset benchmark for long-term life tasks, and experiments across different
models and settings. By shifting the focus from short-term tasks to the broader
spectrum of long-term life goals, this research underscores the transformative
potential of LLMs in enhancing human decision-making processes and task
management, marking a significant step forward in the evolution of human-AI
collaboration.
| [
{
"created": "Mon, 25 Mar 2024 18:25:10 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Apr 2024 15:00:58 GMT",
"version": "v2"
}
] | 2024-04-18 | [
[
"Wang",
"Ben",
""
]
] |
2403.17130 | Mihaela Breaban | Radu-Andrei Rosu, Mihaela-Elena Breaban, Henri Luchian | Exploring the potential of prototype-based soft-labels data distillation
for imbalanced data classification | null | 24th International Symposium on Symbolic and Numeric Algorithms
for Scientific Computing (SYNASC), pp. 173-180, 2022. IEEE | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Dataset distillation aims at synthesizing a dataset by a small number of
artificially generated data items, which, when used as training data, reproduce
or approximate a machine learning (ML) model as if it were trained on the
entire original dataset. Consequently, data distillation methods are usually
tied to a specific ML algorithm. While recent literature deals mainly with
distillation of large collections of images in the context of neural network
models, tabular data distillation is much less represented and mainly focused
on a theoretical perspective. The current paper explores the potential of a
simple distillation technique previously proposed in the context of
Less-than-one shot learning. The main goal is to push further the performance
of prototype-based soft-labels distillation in terms of classification
accuracy, by integrating optimization steps in the distillation process. The
analysis is performed on real-world data sets with various degrees of
imbalance. Experimental studies trace the capability of the method to distill
the data, but also the opportunity to act as an augmentation method, i.e. to
generate new data that is able to increase model accuracy when used in
conjunction with - as opposed to instead of - the original data.
| [
{
"created": "Mon, 25 Mar 2024 19:15:19 GMT",
"version": "v1"
}
] | 2024-03-27 | [
[
"Rosu",
"Radu-Andrei",
""
],
[
"Breaban",
"Mihaela-Elena",
""
],
[
"Luchian",
"Henri",
""
]
] |
2403.17599 | Katie Seaborn | Katie Seaborn, Yuto Sawa, Mizuki Watanabe | Coimagining the Future of Voice Assistants with Cultural Sensitivity | 21 pages | Human Behavior and Emerging Technologies, vol. 2024, Article ID
3238737, 21 pages, 2024 | 10.1155/2024/3238737 | null | cs.HC cs.CL cs.CY | http://creativecommons.org/licenses/by/4.0/ | Voice assistants (VAs) are becoming a feature of our everyday life. Yet, the
user experience (UX) is often limited, leading to underuse, disengagement, and
abandonment. Co-designing interactions for VAs with potential end-users can be
useful. Crowdsourcing this process online and anonymously may add value.
However, most work has been done in the English-speaking West on dialogue data
sets. We must be sensitive to cultural differences in language, social
interactions, and attitudes towards technology. Our aims were to explore the
value of co-designing VAs in the non-Western context of Japan and demonstrate
the necessity of cultural sensitivity. We conducted an online elicitation study
(N = 135) where Americans (n = 64) and Japanese people (n = 71) imagined
dialogues (N = 282) and activities (N = 73) with future VAs. We discuss the
implications for coimagining interactions with future VAs, offer design
guidelines for the Japanese and English-speaking US contexts, and suggest
opportunities for cultural plurality in VA design and scholarship.
| [
{
"created": "Tue, 26 Mar 2024 11:09:58 GMT",
"version": "v1"
}
] | 2024-03-27 | [
[
"Seaborn",
"Katie",
""
],
[
"Sawa",
"Yuto",
""
],
[
"Watanabe",
"Mizuki",
""
]
] |
2403.17637 | Frederico Metelo | Frederico Metelo, Stevo Rackovi\'c, Pedro \'Akos Costa, Cl\'audia
Soares | PeersimGym: An Environment for Solving the Task Offloading Problem with
Reinforcement Learning | Published in the proceedings of the conference on Machine Learning
and Knowledge Discovery in Databases. Applied Data Science Track. ECML PKDD
2024. Lecture Notes in Computer Science(), vol 14949. Springer, Cham | Machine Learning and Knowledge Discovery in Databases. Applied
Data Science Track. ECML PKDD 2024. Lecture Notes in Computer Science(), vol
14949. Springer, Cham | 10.1007/978-3-031-70378-2_3 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Task offloading, crucial for balancing computational loads across devices in
networks such as the Internet of Things, poses significant optimization
challenges, including minimizing latency and energy usage under strict
communication and storage constraints. While traditional optimization falls
short in scalability, and heuristic approaches struggle to achieve optimal
outcomes, Reinforcement Learning (RL) offers a promising avenue by enabling the
learning of optimal offloading strategies through iterative interactions.
However, the efficacy of RL hinges on access to rich datasets and
custom-tailored, realistic training environments. To address this, we introduce
PeersimGym, an open-source, customizable simulation environment tailored for
developing and optimizing task offloading strategies within computational
networks. PeersimGym supports a wide range of network topologies and
computational constraints and integrates a \textit{PettingZoo}-based interface
for RL agent deployment in both solo and multi-agent setups. Furthermore, we
demonstrate the utility of the environment through experiments with Deep
Reinforcement Learning agents, showcasing the potential of RL-based approaches
to significantly enhance offloading strategies in distributed computing
settings. PeersimGym thus bridges the gap between theoretical RL models and
their practical applications, paving the way for advancements in efficient task
offloading methodologies.
| [
{
"created": "Tue, 26 Mar 2024 12:12:44 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Apr 2024 12:17:30 GMT",
"version": "v2"
},
{
"created": "Tue, 8 Oct 2024 10:56:03 GMT",
"version": "v3"
}
] | 2024-10-10 | [
[
"Metelo",
"Frederico",
""
],
[
"Racković",
"Stevo",
""
],
[
"Costa",
"Pedro Ákos",
""
],
[
"Soares",
"Cláudia",
""
]
] |
2403.17643 | Pedro Campos Vieira | Pedro C. Vieira, Jo\~ao P. Montrezol, Jo\~ao T. Vieira, Jo\~ao Gama | S+t-SNE -- Bringing dimensionality reduction to data streams | This preprint has not undergone peer review or any post-submission
improvements or corrections. We will soon add a link to the final version of
this contribution that underwent peer-review and post-acceptance improvements
and was presented at IDA2024 (https://ida2024.org/) | Advances in Intelligent Data Analysis XXII. IDA 2024. Lecture
Notes in Computer Science, vol 14642., pp 95-106 (2024). Springer, Cham | 10.1007/978-3-031-58553-1_8 | null | cs.AI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present S+t-SNE, an adaptation of the t-SNE algorithm designed to handle
infinite data streams. The core idea behind S+t-SNE is to update the t-SNE
embedding incrementally as new data arrives, ensuring scalability and
adaptability to handle streaming scenarios. By selecting the most important
points at each step, the algorithm ensures scalability while keeping
informative visualisations. Employing a blind method for drift management
adjusts the embedding space, facilitating continuous visualisation of evolving
data dynamics. Our experimental evaluations demonstrate the effectiveness and
efficiency of S+t-SNE. The results highlight its ability to capture patterns in
a streaming scenario. We hope our approach offers researchers and practitioners
a real-time tool for understanding and interpreting high-dimensional data.
| [
{
"created": "Tue, 26 Mar 2024 12:23:34 GMT",
"version": "v1"
}
] | 2024-04-17 | [
[
"Vieira",
"Pedro C.",
""
],
[
"Montrezol",
"João P.",
""
],
[
"Vieira",
"João T.",
""
],
[
"Gama",
"João",
""
]
] |
2403.17727 | Kazuki Kawamura | Kazuki Kawamura and Jun Rekimoto | FastPerson: Enhancing Video Learning through Effective Video
Summarization that Preserves Linguistic and Visual Contexts | null | AHs '24: Proceedings of the Augmented Humans International
Conference 2024 | 10.1145/3652920.3652922 | null | cs.CV cs.CL cs.HC cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quickly understanding lengthy lecture videos is essential for learners with
limited time and interest in various topics to improve their learning
efficiency. To this end, video summarization has been actively researched to
enable users to view only important scenes from a video. However, these studies
focus on either the visual or audio information of a video and extract
important segments in the video. Therefore, there is a risk of missing
important information when both the teacher's speech and visual information on
the blackboard or slides are important, such as in a lecture video. To tackle
this issue, we propose FastPerson, a video summarization approach that
considers both the visual and auditory information in lecture videos.
FastPerson creates summary videos by utilizing audio transcriptions along with
on-screen images and text, minimizing the risk of overlooking crucial
information for learners. Further, it provides a feature that allows learners
to switch between the summary and original videos for each chapter of the
video, enabling them to adjust the pace of learning based on their interests
and level of understanding. We conducted an evaluation with 40 participants to
assess the effectiveness of our method and confirmed that it reduced viewing
time by 53\% at the same level of comprehension as traditional video playback
methods.
| [
{
"created": "Tue, 26 Mar 2024 14:16:56 GMT",
"version": "v1"
}
] | 2024-03-27 | [
[
"Kawamura",
"Kazuki",
""
],
[
"Rekimoto",
"Jun",
""
]
] |
2403.17778 | Bj\"orn Schembera | Marco Reidelbach, Bj\"orn Schembera, Marcus Weber | Towards a FAIR Documentation of Workflows and Models in Applied
Mathematics | null | International Congress on Mathematical Software (pp. 254-262).
Cham: Springer Nature Switzerland (2024, July) | 10.1007/978-3-031-64529-7_27 | null | cs.AI cs.DB cs.DL | http://creativecommons.org/licenses/by/4.0/ | Modeling-Simulation-Optimization workflows play a fundamental role in applied
mathematics. The Mathematical Research Data Initiative, MaRDI, responded to
this by developing a FAIR and machine-interpretable template for a
comprehensive documentation of such workflows. MaRDMO, a Plugin for the
Research Data Management Organiser, enables scientists from diverse fields to
document and publish their workflows on the MaRDI Portal seamlessly using the
MaRDI template. Central to these workflows are mathematical models. MaRDI
addresses them with the MathModDB ontology, offering a structured formal model
description. Here, we showcase the interaction between MaRDMO and the MathModDB
Knowledge Graph through an algebraic modeling workflow from the Digital
Humanities. This demonstration underscores the versatility of both services
beyond their original numerical domain.
| [
{
"created": "Tue, 26 Mar 2024 15:11:18 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Jul 2024 08:19:16 GMT",
"version": "v2"
}
] | 2024-08-01 | [
[
"Reidelbach",
"Marco",
""
],
[
"Schembera",
"Björn",
""
],
[
"Weber",
"Marcus",
""
]
] |
2403.17811 | Leonidas Gee | Leonidas Gee, Andrea Zugarini, Novi Quadrianto | Are Compressed Language Models Less Subgroup Robust? | The 2023 Conference on Empirical Methods in Natural Language
Processing (EMNLP 2023) | Proceedings of the 2023 Conference on Empirical Methods in Natural
Language Processing: Main Track | 10.18653/v1/2023.emnlp-main.983 | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To reduce the inference cost of large language models, model compression is
increasingly used to create smaller scalable models. However, little is known
about their robustness to minority subgroups defined by the labels and
attributes of a dataset. In this paper, we investigate the effects of 18
different compression methods and settings on the subgroup robustness of BERT
language models. We show that worst-group performance does not depend on model
size alone, but also on the compression method used. Additionally, we find that
model compression does not always worsen the performance on minority subgroups.
Altogether, our analysis serves to further research into the subgroup
robustness of model compression.
| [
{
"created": "Tue, 26 Mar 2024 15:50:37 GMT",
"version": "v1"
}
] | 2024-03-27 | [
[
"Gee",
"Leonidas",
""
],
[
"Zugarini",
"Andrea",
""
],
[
"Quadrianto",
"Novi",
""
]
] |
2403.17859 | Bhawna Piryani | Bhawna Piryani, Jamshid Mozafari, Adam Jatowt | ChroniclingAmericaQA: A Large-scale Question Answering Dataset based on
Historical American Newspaper Pages | Accepted at SIGIR 2024 | Proceedings of the 47th International ACM SIGIR Conference on
Research and Development in Information Retrieval (SIGIR 2024) | 10.1145/3626772.3657891 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Question answering (QA) and Machine Reading Comprehension (MRC) tasks have
significantly advanced in recent years due to the rapid development of deep
learning techniques and, more recently, large language models. At the same
time, many benchmark datasets have become available for QA and MRC tasks.
However, most existing large-scale benchmark datasets have been created
predominantly using synchronous document collections like Wikipedia or the Web.
Archival document collections, such as historical newspapers, contain valuable
information from the past that is still not widely used to train large language
models. To further contribute to advancing QA and MRC tasks and to overcome the
limitation of previous datasets, we introduce ChroniclingAmericaQA, a
large-scale temporal QA dataset with 487K question-answer pairs created based
on the historical newspaper collection Chronicling America. Our dataset is
constructed from a subset of the Chronicling America newspaper collection
spanning 120 years. One of the significant challenges for utilizing digitized
historical newspaper collections is the low quality of OCR text. Therefore, to
enable realistic testing of QA models, our dataset can be used in three
different ways: answering questions from raw and noisy content, answering
questions from a cleaner, corrected version of the content, as well as answering
questions from scanned images of newspaper pages. This and the fact that
ChroniclingAmericaQA spans the longest time period among available QA datasets
make it quite a unique and useful resource.
| [
{
"created": "Tue, 26 Mar 2024 16:48:13 GMT",
"version": "v1"
},
{
"created": "Fri, 10 May 2024 17:15:24 GMT",
"version": "v2"
}
] | 2024-05-13 | [
[
"Piryani",
"Bhawna",
""
],
[
"Mozafari",
"Jamshid",
""
],
[
"Jatowt",
"Adam",
""
]
] |
2403.18233 | Mohamed Harmanani | Mohamed Harmanani, Paul F. R. Wilson, Fahimeh Fooladgar, Amoon Jamzad,
Mahdi Gilany, Minh Nguyen Nhat To, Brian Wodlinger, Purang Abolmaesumi,
Parvin Mousavi | Benchmarking Image Transformers for Prostate Cancer Detection from
Ultrasound Data | early draft, 7 pages; Accepted to SPIE Medical Imaging 2024 | Proc. SPIE 12928, Medical Imaging 2024: Image-Guided Procedures,
Robotic Interventions, and Modeling, 1292815 (29 March 2024) | 10.1117/12.3006049 | null | eess.IV cs.CV cs.LG q-bio.TO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | PURPOSE: Deep learning methods for classifying prostate cancer (PCa) in
ultrasound images typically employ convolutional networks (CNNs) to detect
cancer in small regions of interest (ROI) along a needle trace region. However,
this approach suffers from weak labelling, since the ground-truth
histopathology labels do not describe the properties of individual ROIs.
Recently, multi-scale approaches have sought to mitigate this issue by
combining the context awareness of transformers with a CNN feature extractor to
detect cancer from multiple ROIs using multiple-instance learning (MIL). In
this work, we present a detailed study of several image transformer
architectures for both ROI-scale and multi-scale classification, and a
comparison of the performance of CNNs and transformers for ultrasound-based
prostate cancer classification. We also design a novel multi-objective learning
strategy that combines both ROI and core predictions to further mitigate label
noise. METHODS: We evaluate 3 image transformers on ROI-scale cancer
classification, then use the strongest model to tune a multi-scale classifier
with MIL. We train our MIL models using our novel multi-objective learning
strategy and compare our results to existing baselines. RESULTS: We find that
for both ROI-scale and multi-scale PCa detection, image transformer backbones
lag behind their CNN counterparts. This deficit in performance is even more
noticeable for larger models. When using multi-objective learning, we can
improve performance of MIL, with a 77.9% AUROC, a sensitivity of 75.9%, and a
specificity of 66.3%. CONCLUSION: Convolutional networks are better suited for
modelling sparse datasets of prostate ultrasounds, producing more robust
features than transformers in PCa detection. Multi-scale methods remain the
best architecture for this task, with multi-objective learning presenting an
effective way to improve performance.
| [
{
"created": "Wed, 27 Mar 2024 03:39:57 GMT",
"version": "v1"
}
] | 2024-04-03 | [
[
"Harmanani",
"Mohamed",
""
],
[
"Wilson",
"Paul F. R.",
""
],
[
"Fooladgar",
"Fahimeh",
""
],
[
"Jamzad",
"Amoon",
""
],
[
"Gilany",
"Mahdi",
""
],
[
"To",
"Minh Nguyen Nhat",
""
],
[
"Wodlinger",
"Brian",
""
],
[
"Abolmaesumi",
"Purang",
""
],
[
"Mousavi",
"Parvin",
""
]
] |
2403.18426 | Jamshid Mozafari | Jamshid Mozafari, Anubhav Jangra, Adam Jatowt | TriviaHG: A Dataset for Automatic Hint Generation from Factoid Questions | Accepted at SIGIR 2024 | Proceedings of the 47th International ACM SIGIR Conference on
Research and Development in Information Retrieval (SIGIR 2024) | 10.1145/3626772.3657855 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Nowadays, individuals tend to engage in dialogues with Large Language Models,
seeking answers to their questions. In times when such answers are readily
accessible to anyone, the stimulation and preservation of humans' cognitive
abilities, as well as the assurance that humans maintain good reasoning skills,
become crucial. This study addresses such needs by proposing hints
(instead of final answers or before giving answers) as a viable solution. We
introduce a framework for the automatic hint generation for factoid questions,
employing it to construct TriviaHG, a novel large-scale dataset featuring
160,230 hints corresponding to 16,645 questions from the TriviaQA dataset.
Additionally, we present an automatic evaluation method that measures the
Convergence and Familiarity quality attributes of hints. To evaluate the
TriviaHG dataset and the proposed evaluation method, we enlisted 10 individuals
to annotate 2,791 hints and tasked 6 humans with answering questions using the
provided hints. The effectiveness of hints varied, with success rates of 96%,
78%, and 36% for questions with easy, medium, and hard answers, respectively.
Moreover, the proposed automatic evaluation methods showed a robust correlation
with annotators' results. Conclusively, the findings highlight three key
insights: the facilitative role of hints in resolving unknown questions, the
dependence of hint quality on answer difficulty, and the feasibility of
employing automatic evaluation methods for hint assessment.
| [
{
"created": "Wed, 27 Mar 2024 10:27:28 GMT",
"version": "v1"
},
{
"created": "Fri, 10 May 2024 17:10:47 GMT",
"version": "v2"
}
] | 2024-05-13 | [
[
"Mozafari",
"Jamshid",
""
],
[
"Jangra",
"Anubhav",
""
],
[
"Jatowt",
"Adam",
""
]
] |
2403.18430 | Juan Ignacio De Gregorio | Juan De Gregorio, Ra\'ul Toral, David S\'anchez | Exploring language relations through syntactic distances and geographic
proximity | 39 pages | EPJ Data Science 13, 61 (2024) | 10.1140/epjds/s13688-024-00498-7 | null | cs.CL physics.data-an physics.soc-ph stat.AP | http://creativecommons.org/licenses/by/4.0/ | Languages are grouped into families that share common linguistic traits.
While this approach has been successful in understanding genetic relations
between diverse languages, more analyses are needed to accurately quantify
their relatedness, especially in less studied linguistic levels such as syntax.
Here, we explore linguistic distances using series of parts of speech (POS)
extracted from the Universal Dependencies dataset. Within an
information-theoretic framework, we show that employing POS trigrams maximizes
the possibility of capturing syntactic variations while being at the same time
compatible with the amount of available data. Linguistic connections are then
established by assessing pairwise distances based on the POS distributions.
Intriguingly, our analysis reveals definite clusters that correspond to well
known language families and groups, with exceptions explained by distinct
morphological typologies. Furthermore, we obtain a significant correlation
between language similarity and geographic distance, which underscores the
influence of spatial proximity on language kinships.
| [
{
"created": "Wed, 27 Mar 2024 10:36:17 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Oct 2024 08:24:40 GMT",
"version": "v2"
}
] | 2024-10-04 | [
[
"De Gregorio",
"Juan",
""
],
[
"Toral",
"Raúl",
""
],
[
"Sánchez",
"David",
""
]
] |
2403.18565 | Mohammadreza Amirian | Mohammadreza Amirian, Daniel Barco, Ivo Herzig, and Frank-Peter
Schilling | Artifact Reduction in 3D and 4D Cone-beam Computed Tomography Images
with Deep Learning -- A Review | 16 pages, 4 figures, 1 Table, published in IEEE Access Journal | IEEE Access, vol. 12, pp. 10281-10295, 2024 | 10.1109/ACCESS.2024.3353195 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep learning based approaches have been used to improve image quality in
cone-beam computed tomography (CBCT), a medical imaging technique often used in
applications such as image-guided radiation therapy, implant dentistry or
orthopaedics. In particular, while deep learning methods have been applied to
reduce various types of CBCT image artifacts arising from motion, metal
objects, or low-dose acquisition, a comprehensive review summarizing the
successes and shortcomings of these approaches, with a primary focus on the
type of artifacts rather than the architecture of neural networks, is lacking
in the literature. In this review, the data generation and simulation
pipelines, and artifact reduction techniques are specifically investigated for
each type of artifact. We provide an overview of deep learning techniques that
have successfully been shown to reduce artifacts in 3D, as well as in
time-resolved (4D) CBCT through the use of projection- and/or volume-domain
optimizations, or by introducing neural networks directly within the CBCT
reconstruction algorithms. Research gaps are identified to suggest avenues for
future exploration. One of the key findings of this work is an observed trend
towards the use of generative models including GANs and score-based or
diffusion models, accompanied with the need for more diverse and open training
datasets and simulations.
| [
{
"created": "Wed, 27 Mar 2024 13:46:01 GMT",
"version": "v1"
}
] | 2024-03-28 | [
[
"Amirian",
"Mohammadreza",
""
],
[
"Barco",
"Daniel",
""
],
[
"Herzig",
"Ivo",
""
],
[
"Schilling",
"Frank-Peter",
""
]
] |
2403.18593 | Haifeng Li | Run Shao, Zhaoyang Zhang, Chao Tao, Yunsheng Zhang, Chengli Peng,
Haifeng Li | Homogeneous Tokenizer Matters: Homogeneous Visual Tokenizer for Remote
Sensing Image Understanding | 24 pages, 9 figures, 8 tables | ISPRS Journal of Photogrammetry and Remote Sensing 2024 | 10.1016/j.isprsjprs.2024.09.009 | 10.1016/j.isprsjprs.2024.09.009 | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The tokenizer, as one of the fundamental components of large models, has long
been overlooked or even misunderstood in visual tasks. One key factor of the
great comprehension power of the large language model is that natural language
tokenizers utilize meaningful words or subwords as the basic elements of
language. In contrast, mainstream visual tokenizers, represented by patch-based
methods such as Patch Embed, rely on meaningless rectangular patches as basic
elements of vision, which cannot serve as effectively as words or subwords in
language. Starting from the essence of the tokenizer, we defined semantically
independent regions (SIRs) for vision. We designed a simple HOmogeneous visual
tOKenizer: HOOK. HOOK mainly consists of two modules: the Object Perception
Module (OPM) and the Object Vectorization Module (OVM). To achieve homogeneity,
the OPM splits the image into 4*4 pixel seeds and then utilizes the attention
mechanism to perceive SIRs. The OVM employs cross-attention to merge seeds
within the same SIR. To achieve adaptability, the OVM defines a variable number
of learnable vectors as cross-attention queries, allowing for the adjustment of
token quantity. We conducted experiments on the NWPU-RESISC45, WHU-RS19
classification dataset, and GID5 segmentation dataset for sparse and dense
tasks. The results demonstrate that the visual tokens obtained by HOOK
correspond to individual objects, which demonstrates homogeneity. HOOK
outperformed Patch Embed by 6\% and 10\% in the two tasks and achieved
state-of-the-art performance compared to the baselines used for comparison.
Compared to Patch Embed, which requires more than one hundred tokens for one
image, HOOK requires only 6 and 8 tokens for sparse and dense tasks,
respectively, resulting in efficiency improvements of 1.5 to 2.8 times. The
code is available at https://github.com/GeoX-Lab/Hook.
| [
{
"created": "Wed, 27 Mar 2024 14:18:09 GMT",
"version": "v1"
},
{
"created": "Sun, 13 Oct 2024 03:01:11 GMT",
"version": "v2"
}
] | 2024-10-15 | [
[
"Shao",
"Run",
""
],
[
"Zhang",
"Zhaoyang",
""
],
[
"Tao",
"Chao",
""
],
[
"Zhang",
"Yunsheng",
""
],
[
"Peng",
"Chengli",
""
],
[
"Li",
"Haifeng",
""
]
] |
2403.18674 | Mohammadreza Amirian | Mohammadreza Amirian | Deep Learning for Robust and Explainable Models in Computer Vision | 150 pages, 37 figures, 12 tables | OPARU is the OPen Access Repository of Ulm University and Ulm
University of Applied Sciences, 2023 | 10.18725/OPARU-51464 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recent breakthroughs in machine and deep learning (ML and DL) research have
provided excellent tools for leveraging enormous amounts of data and optimizing
huge models with millions of parameters to obtain accurate networks for image
processing. These developments open up tremendous opportunities for using
artificial intelligence (AI) in the automation and human assisted AI industry.
However, as more and more models are deployed and used in practice, many
challenges have emerged. This thesis presents various approaches that address
robustness and explainability challenges for using ML and DL in practice.
Robustness and reliability are the critical components of any model before
certification and deployment in practice. Deep convolutional neural networks
(CNNs) exhibit vulnerability to transformations of their inputs, such as
rotation and scaling, or intentional manipulations as described in the
adversarial attack literature. In addition, building trust in AI-based models
requires a better understanding of current models and developing methods that
are more explainable and interpretable a priori.
This thesis presents developments in computer vision models' robustness and
explainability. Furthermore, this thesis offers an example of using vision
models' feature response visualization (models' interpretations) to improve
robustness, even though interpretability and robustness seem unrelated in
related research. Besides methodological developments for robust and
explainable vision models, a key message of this thesis is introducing model
interpretation techniques as a tool for understanding vision models and
improving their design and robustness. In addition to the theoretical
developments, this thesis demonstrates several applications of ML and DL in
different contexts, such as medical imaging and affective computing.
| [
{
"created": "Wed, 27 Mar 2024 15:17:10 GMT",
"version": "v1"
}
] | 2024-03-28 | [
[
"Amirian",
"Mohammadreza",
""
]
] |
2403.18803 | Hillary Dawkins | Hillary Dawkins, Isar Nejadgholi, Daniel Gillis, and Judi McCuaig | Projective Methods for Mitigating Gender Bias in Pre-trained Language
Models | null | Proceedings of the 2024 Joint International Conference on
Computational Linguistics, Language Resources and Evaluation (LREC-COLING
2024) | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Mitigation of gender bias in NLP has a long history tied to debiasing static
word embeddings. More recently, attention has shifted to debiasing pre-trained
language models. We study to what extent the simplest projective debiasing
methods, developed for word embeddings, can help when applied to BERT's
internal representations. Projective methods are fast to implement, use a small
number of saved parameters, and make no updates to the existing model
parameters. We evaluate the efficacy of the methods in reducing both intrinsic
bias, as measured by BERT's next sentence prediction task, and in mitigating
observed bias in a downstream setting when fine-tuned. To this end, we also
provide a critical analysis of a popular gender-bias assessment test for
quantifying intrinsic bias, resulting in an enhanced test set and new bias
measures. We find that projective methods can be effective at both intrinsic
bias and downstream bias mitigation, but that the two outcomes are not
necessarily correlated. This finding serves as a warning that intrinsic bias
test sets, based either on language modeling tasks or next sentence prediction,
should not be the only benchmark in developing a debiased language model.
| [
{
"created": "Wed, 27 Mar 2024 17:49:31 GMT",
"version": "v1"
}
] | 2024-05-27 | [
[
"Dawkins",
"Hillary",
""
],
[
"Nejadgholi",
"Isar",
""
],
[
"Gillis",
"Daniel",
""
],
[
"McCuaig",
"Judi",
""
]
] |
2403.18831 | Armand Cismaru | Armand Mihai Cismaru | DeepTraderX: Challenging Conventional Trading Strategies with Deep
Learning in Multi-Threaded Market Simulations | 11 pages, 9 png figures, uses apalike.sty and SCITEPRESS.sty, to be
published in the proceedings of ICAART 2024 | In Proceedings of the 16th International Conference on Agents and
Artificial Intelligence - Volume 3, ISBN 978-989-758-680-4, ISSN 2184-433X,
pages 412-421 (2024) | 10.5220/0000183700003636 | null | q-fin.TR cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In this paper, we introduce DeepTraderX (DTX), a simple Deep Learning-based
trader, and present results that demonstrate its performance in a
multi-threaded market simulation. In a total of about 500 simulated market
days, DTX has learned solely by watching the prices that other strategies
produce. By doing this, it has successfully created a mapping from market data
to quotes, either bid or ask orders, to place for an asset. Trained on
historical Level-2 market data, i.e., the Limit Order Book (LOB) for specific
tradable assets, DTX processes the market state $S$ at each timestep $T$ to
determine a price $P$ for market orders. The market data used in both training
and testing was generated from unique market schedules based on real historic
stock market data. DTX was tested extensively against the best strategies in
the literature, with its results validated by statistical analysis. Our
findings underscore DTX's capability to rival, and in many instances, surpass,
the performance of public-domain traders, including those that outclass human
traders, emphasising the efficiency of simple models, as this is required to
succeed in intricate multi-threaded simulations. This highlights the potential
of leveraging "black-box" Deep Learning systems to create more efficient
financial markets.
| [
{
"created": "Tue, 6 Feb 2024 14:20:51 GMT",
"version": "v1"
}
] | 2024-03-29 | [
[
"Cismaru",
"Armand Mihai",
""
]
] |
2403.18938 | Tommaso Mario Buonocore | Laura Bergomi, Tommaso M. Buonocore, Paolo Antonazzo, Lorenzo
Alberghi, Riccardo Bellazzi, Lorenzo Preda, Chandra Bortolotto, Enea
Parimbelli | Reshaping Free-Text Radiology Notes Into Structured Reports With
Generative Transformers | null | Artificial Intelligence in Medicine, Volume 154, 2024 | 10.1016/j.artmed.2024.102924 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | BACKGROUND: Radiology reports are typically written in a free-text format,
making clinical information difficult to extract and use. Recently the adoption
of structured reporting (SR) has been recommended by various medical societies
thanks to the advantages it offers, e.g. standardization, completeness and
information retrieval. We propose a pipeline to extract information from
free-text radiology reports, that fits with the items of the reference SR
registry proposed by a national society of interventional and medical
radiology, focusing on CT staging of patients with lymphoma. METHODS: Our work
aims to leverage the potential of Natural Language Processing (NLP) and
Transformer-based models to deal with automatic SR registry filling. With the
availability of 174 radiology reports, we investigate a rule-free generative
Question Answering approach based on a domain-specific version of T5 (IT5). Two
strategies (batch-truncation and ex-post combination) are implemented to comply
with the model's context length limitations. Performance is evaluated in terms
of strict accuracy, F1, and format accuracy, and compared with the widely used
GPT-3.5 Large Language Model. A 5-point Likert scale questionnaire is used to
collect human-expert feedback on the similarity between medical annotations and
generated answers. RESULTS: The combination of fine-tuning and batch splitting
allows IT5 to achieve notable results; it performs on par with GPT-3.5 albeit
its size being a thousand times smaller in terms of parameters. Human-based
assessment scores show a high correlation (Spearman's correlation
coefficients>0.88, p-values<0.001) with AI performance metrics (F1) and confirm
the superior ability of LLMs (i.e., GPT-3.5, 175B of parameters) in generating
plausible human-like statements.
| [
{
"created": "Wed, 27 Mar 2024 18:38:39 GMT",
"version": "v1"
}
] | 2024-07-09 | [
[
"Bergomi",
"Laura",
""
],
[
"Buonocore",
"Tommaso M.",
""
],
[
"Antonazzo",
"Paolo",
""
],
[
"Alberghi",
"Lorenzo",
""
],
[
"Bellazzi",
"Riccardo",
""
],
[
"Preda",
"Lorenzo",
""
],
[
"Bortolotto",
"Chandra",
""
],
[
"Parimbelli",
"Enea",
""
]
] |
2403.18985 | Soumyendu Sarkar | Soumyendu Sarkar, Ashwin Ramesh Babu, Sajad Mousavi, Vineet Gundecha,
Avisek Naug, Sahand Ghorbanpour | Robustness and Visual Explanation for Black Box Image, Video, and ECG
Signal Classification with Reinforcement Learning | AAAI Proceedings reference:
https://ojs.aaai.org/index.php/AAAI/article/view/30579 | 2024 Proceedings of the AAAI Conference on Artificial Intelligence | 10.1609/aaai.v38i21.30579 | null | cs.LG cs.AI cs.CR cs.CV cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a generic Reinforcement Learning (RL) framework optimized for
crafting adversarial attacks on different model types spanning from ECG signal
analysis (1D), image classification (2D), and video classification (3D). The
framework focuses on identifying sensitive regions and inducing
misclassifications with minimal distortions and various distortion types. The
novel RL method outperforms state-of-the-art methods for all three
applications, proving its efficiency. Our RL approach produces superior
localization masks, enhancing interpretability for image classification and ECG
analysis models. For applications such as ECG analysis, our platform highlights
critical ECG segments for clinicians while ensuring resilience against
prevalent distortions. This comprehensive tool aims to bolster both resilience
with adversarial training and transparency across varied applications and data
types.
| [
{
"created": "Wed, 27 Mar 2024 20:07:39 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Apr 2024 14:49:36 GMT",
"version": "v2"
}
] | 2024-04-23 | [
[
"Sarkar",
"Soumyendu",
""
],
[
"Babu",
"Ashwin Ramesh",
""
],
[
"Mousavi",
"Sajad",
""
],
[
"Gundecha",
"Vineet",
""
],
[
"Naug",
"Avisek",
""
],
[
"Ghorbanpour",
"Sahand",
""
]
] |
2403.19076 | Wei-Chen Wang | Ji Lin, Ligeng Zhu, Wei-Ming Chen, Wei-Chen Wang, Song Han | Tiny Machine Learning: Progress and Futures | arXiv admin note: text overlap with arXiv:2206.15472 | IEEE Circuits and Systems Magazine, 23(3), pp. 8-34, October 2023 | 10.1109/MCAS.2023.3302182 | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tiny Machine Learning (TinyML) is a new frontier of machine learning. By
squeezing deep learning models into billions of IoT devices and
microcontrollers (MCUs), we expand the scope of AI applications and enable
ubiquitous intelligence. However, TinyML is challenging due to hardware
constraints: the tiny memory resource makes it difficult to hold deep learning
models designed for cloud and mobile platforms. There is also limited compiler
and inference engine support for bare-metal devices. Therefore, we need to
co-design the algorithm and system stack to enable TinyML. In this review, we
will first discuss the definition, challenges, and applications of TinyML. We
then survey the recent progress in TinyML and deep learning on MCUs. Next, we
will introduce MCUNet, showing how we can achieve ImageNet-scale AI
applications on IoT devices with system-algorithm co-design. We will further
extend the solution from inference to training and introduce tiny on-device
training techniques. Finally, we present future directions in this area.
Today's large model might be tomorrow's tiny model. The scope of TinyML should
evolve and adapt over time.
| [
{
"created": "Thu, 28 Mar 2024 00:34:56 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Mar 2024 21:33:39 GMT",
"version": "v2"
}
] | 2024-04-02 | [
[
"Lin",
"Ji",
""
],
[
"Zhu",
"Ligeng",
""
],
[
"Chen",
"Wei-Ming",
""
],
[
"Wang",
"Wei-Chen",
""
],
[
"Han",
"Song",
""
]
] |
2403.19093 | Yishuai Cai | Yishuai Cai, Shaowu Yang, Minglong Li, Xinglin Chen, Yunxin Mao,
Xiaodong Yi and Wenjing Yang | Task2Morph: Differentiable Task-inspired Framework for Contact-Aware
Robot Design | 9 pages, 10 figures, published to IROS | 2023 IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS). IEEE, 2023: 452-459 | 10.1109/IROS55552.2023.10341360 | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optimizing the morphologies and the controllers that adapt to various tasks
is a critical issue in the field of robot design, aka. embodied intelligence.
Previous works typically model it as a joint optimization problem and use
search-based methods to find the optimal solution in the morphology space.
However, they ignore the implicit knowledge of task-to-morphology mapping which
can directly inspire robot design. For example, flipping heavier boxes tends to
require more muscular robot arms. This paper proposes a novel and general
differentiable task-inspired framework for contact-aware robot design called
Task2Morph. We abstract task features highly related to task performance and
use them to build a task-to-morphology mapping. Further, we embed the mapping
into a differentiable robot design process, where the gradient information is
leveraged for both the mapping learning and the whole optimization. The
experiments are conducted on three scenarios, and the results validate that
Task2Morph outperforms DiffHand, which lacks a task-inspired morphology module,
in terms of efficiency and effectiveness.
| [
{
"created": "Thu, 28 Mar 2024 02:02:00 GMT",
"version": "v1"
}
] | 2024-03-29 | [
[
"Cai",
"Yishuai",
""
],
[
"Yang",
"Shaowu",
""
],
[
"Li",
"Minglong",
""
],
[
"Chen",
"Xinglin",
""
],
[
"Mao",
"Yunxin",
""
],
[
"Yi",
"Xiaodong",
""
],
[
"Yang",
"Wenjing",
""
]
] |
2403.19646 | Liu Chenyang | Chenyang Liu, Keyan Chen, Haotian Zhang, Zipeng Qi, Zhengxia Zou, and
Zhenwei Shi | Change-Agent: Towards Interactive Comprehensive Remote Sensing Change
Interpretation and Analysis | IEEE Transactions on Geoscience and Remote Sensing 2024 | IEEE Transactions on Geoscience and Remote Sensing 2024 | 10.1109/TGRS.2024.3425815 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Monitoring changes in the Earth's surface is crucial for understanding
natural processes and human impacts, necessitating precise and comprehensive
interpretation methodologies. Remote sensing satellite imagery offers a unique
perspective for monitoring these changes, leading to the emergence of remote
sensing image change interpretation (RSICI) as a significant research focus.
Current RSICI technology encompasses change detection and change captioning,
each with its limitations in providing comprehensive interpretation. To address
this, we propose an interactive Change-Agent, which can follow user
instructions to achieve comprehensive change interpretation and insightful
analysis, such as change detection and change captioning, change object
counting, change cause analysis, etc. The Change-Agent integrates a multi-level
change interpretation (MCI) model as the eyes and a large language model (LLM)
as the brain. The MCI model contains two branches of pixel-level change
detection and semantic-level change captioning, in which the BI-temporal
Iterative Interaction (BI3) layer is proposed to enhance the model's
discriminative feature representation capabilities. To support the training of
the MCI model, we build the LEVIR-MCI dataset with a large number of change
masks and captions of changes. Experiments demonstrate the SOTA performance of
the MCI model in achieving both change detection and change description
simultaneously, and highlight the promising application value of our
Change-Agent in facilitating comprehensive interpretation of surface changes,
which opens up a new avenue for intelligent remote sensing applications. To
facilitate future research, we will make our dataset and codebase of the MCI
model and Change-Agent publicly available at
https://github.com/Chen-Yang-Liu/Change-Agent
| [
{
"created": "Thu, 28 Mar 2024 17:55:42 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Apr 2024 08:00:56 GMT",
"version": "v2"
},
{
"created": "Tue, 16 Jul 2024 08:43:23 GMT",
"version": "v3"
}
] | 2024-07-17 | [
[
"Liu",
"Chenyang",
""
],
[
"Chen",
"Keyan",
""
],
[
"Zhang",
"Haotian",
""
],
[
"Qi",
"Zipeng",
""
],
[
"Zou",
"Zhengxia",
""
],
[
"Shi",
"Zhenwei",
""
]
] |
2403.19726 | Christophe Servan | Nesrine Bannour (STL), Christophe Servan (STL), Aur\'elie N\'ev\'eol
(STL), Xavier Tannier (LIMICS) | A Benchmark Evaluation of Clinical Named Entity Recognition in French | null | The 2024 Joint International Conference on Computational
Linguistics, Language Resources and Evaluation (LREC-COLING 2024), May 2024,
Torino, Italy | null | null | cs.CL cs.AI q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Transformer-based language models have shown strong performance
on many Natural Language Processing (NLP) tasks. Masked Language Models (MLMs)
attract sustained interest because they can be adapted to different languages
and sub-domains through training or fine-tuning on specific corpora while
remaining lighter than modern Large Language Models (LLMs). Recently, several
MLMs have been released for the biomedical domain in French, and experiments
suggest that they outperform standard French counterparts. However, no
systematic evaluation comparing all models on the same corpora is available.
Objective: This paper presents an evaluation of masked language models for
biomedical French on the task of clinical named entity recognition. Material
and methods: We evaluate biomedical models CamemBERT-bio and DrBERT and compare
them to standard French models CamemBERT, FlauBERT and FrALBERT as well as
multilingual mBERT using three publicly available corpora for clinical named
entity recognition in French. The evaluation set-up relies on gold-standard
corpora as released by the corpus developers. Results: Results suggest that
CamemBERT-bio outperforms DrBERT consistently, while FlauBERT offers
competitive performance and FrALBERT achieves the lowest carbon footprint.
Conclusion: This is the first benchmark evaluation of biomedical masked
language models for French clinical entity recognition that compares model
performance consistently on nested entity recognition using metrics covering
performance and environmental impact.
| [
{
"created": "Thu, 28 Mar 2024 07:59:58 GMT",
"version": "v1"
}
] | 2024-04-01 | [
[
"Bannour",
"Nesrine",
"",
"STL"
],
[
"Servan",
"Christophe",
"",
"STL"
],
[
"Névéol",
"Aurélie",
"",
"STL"
],
[
"Tannier",
"Xavier",
"",
"LIMICS"
]
] |
2403.19727 | Christophe Servan | Nad\`ege Alavoine (STL), Ga\"elle Laperriere (LIA), Christophe Servan
(STL), Sahar Ghannay (STL), Sophie Rosset (STL) | New Semantic Task for the French Spoken Language Understanding MEDIA
Benchmark | null | The 2024 Joint International Conference on Computational
Linguistics, Language Resources and Evaluation (LREC-COLING 2024), May 2024,
Torino, Italy | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intent classification and slot-filling are essential tasks of Spoken Language
Understanding (SLU). In most SLU systems, those tasks are realized by
independent modules. For about fifteen years, models achieving both of them
jointly and exploiting their mutual enhancement have been proposed. A
multilingual module using a joint model was envisioned to create a touristic
dialogue system for a European project, HumanE-AI-Net. A combination of
multiple datasets, including the MEDIA dataset, was suggested for training this
joint model. The MEDIA SLU dataset is a French dataset distributed since 2005
by ELRA, mainly used by the French research community and free for academic
research since 2020. Unfortunately, it is annotated only in slots but not
intents. An enhanced version of MEDIA annotated with intents has been built to
extend its use to more tasks and use cases. This paper presents the
semi-automatic methodology used to obtain this enhanced version. In addition,
we present the first results of SLU experiments on this enhanced dataset using
joint models for intent classification and slot-filling.
| [
{
"created": "Thu, 28 Mar 2024 08:40:02 GMT",
"version": "v1"
}
] | 2024-04-01 | [
[
"Alavoine",
"Nadège",
"",
"STL"
],
[
"Laperriere",
"Gaëlle",
"",
"LIA"
],
[
"Servan",
"Christophe",
"",
"STL"
],
[
"Ghannay",
"Sahar",
"",
"STL"
],
[
"Rosset",
"Sophie",
"",
"STL"
]
] |
2403.19946 | Andr\'e Yuji Yasutomi | Andr\'e Yuji Yasutomi, Hiroki Mori, Tetsuya Ogata | A Peg-in-hole Task Strategy for Holes in Concrete | Published in 2021 IEEE International Conference on Robotics and
Automation (ICRA) on 30 May 2021 | 2021 IEEE International Conference on Robotics and Automation
(ICRA), Xi'an, China, 2021, pp. 2205-2211 | 10.1109/ICRA48506.2021.9561370 | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A method that enables an industrial robot to accomplish the peg-in-hole task
for holes in concrete is proposed. The proposed method involves slightly
detaching the peg from the wall, when moving between search positions, to avoid
the negative influence of the concrete's high friction coefficient. It uses a
deep neural network (DNN), trained via reinforcement learning, to effectively
find holes with variable shape and surface finish (due to the brittle nature of
concrete) without analytical modeling or control parameter tuning. The method
uses displacement of the peg toward the wall surface, in addition to force and
torque, as one of the inputs of the DNN. Since the displacement increases as
the peg gets closer to the hole (due to the chamfered shape of holes in
concrete), it is a useful parameter for inputting in the DNN. The proposed
method was evaluated by training the DNN on a hole 500 times and attempting to
find 12 unknown holes. The results of the evaluation show the DNN enabled a
robot to find the unknown holes with average success rate of 96.1% and average
execution time of 12.5 seconds. Additional evaluations with random initial
positions and a different type of peg demonstrate the trained DNN can
generalize well to different conditions. Analyses of the influence of the peg
displacement input showed the success rate of the DNN is increased by utilizing
this parameter. These results validate the proposed method in terms of its
effectiveness and applicability to the construction industry.
| [
{
"created": "Fri, 29 Mar 2024 03:00:54 GMT",
"version": "v1"
}
] | 2024-04-01 | [
[
"Yasutomi",
"André Yuji",
""
],
[
"Mori",
"Hiroki",
""
],
[
"Ogata",
"Tetsuya",
""
]
] |
2403.20158 | Zehao Wen | Zehao Wen and Rabih Younes | ChatGPT v.s. Media Bias: A Comparative Study of GPT-3.5 and Fine-tuned
Language Models | 9 pages, 1 figure, published on Applied and Computational Engineering | ACE (2023) Vol. 21: 249-257. | 10.54254/2755-2721/21/20231153 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | In our rapidly evolving digital sphere, the ability to discern media bias
becomes crucial as it can shape public sentiment and influence pivotal
decisions. The advent of large language models (LLMs), such as ChatGPT, noted
for their broad utility in various natural language processing (NLP) tasks,
invites exploration of their efficacy in media bias detection. Can ChatGPT
detect media bias? This study seeks to answer this question by leveraging the
Media Bias Identification Benchmark (MBIB) to assess ChatGPT's competency in
distinguishing six categories of media bias, juxtaposed against fine-tuned
models such as BART, ConvBERT, and GPT-2. The findings present a dichotomy:
ChatGPT performs at par with fine-tuned models in detecting hate speech and
text-level context bias, yet faces difficulties with subtler elements of other
bias detections, namely, fake news, racial, gender, and cognitive biases.
| [
{
"created": "Fri, 29 Mar 2024 13:12:09 GMT",
"version": "v1"
}
] | 2024-04-01 | [
[
"Wen",
"Zehao",
""
],
[
"Younes",
"Rabih",
""
]
] |
2403.20266 | Naiara Pérez Miguel | Julen Etxaniz, Oscar Sainz, Naiara Perez, Itziar Aldabe, German Rigau,
Eneko Agirre, Aitor Ormazabal, Mikel Artetxe, Aitor Soroa | Latxa: An Open Language Model and Evaluation Suite for Basque | ACL 2024 | Proceedings of the 62nd Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers), pages 14952--14972. 2024 | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce Latxa, a family of large language models for Basque ranging from
7 to 70 billion parameters. Latxa is based on Llama 2, which we continue
pretraining on a new Basque corpus comprising 4.3M documents and 4.2B tokens.
Addressing the scarcity of high-quality benchmarks for Basque, we further
introduce 4 multiple choice evaluation datasets: EusProficiency, comprising
5,169 questions from official language proficiency exams; EusReading,
comprising 352 reading comprehension questions; EusTrivia, comprising 1,715
trivia questions from 5 knowledge areas; and EusExams, comprising 16,774
questions from public examinations. In our extensive evaluation, Latxa
outperforms all previous open models we compare to by a large margin. In
addition, it is competitive with GPT-4 Turbo in language proficiency and
understanding, despite lagging behind in reading comprehension and
knowledge-intensive tasks. Both the Latxa family of models, as well as our new
pretraining corpora and evaluation datasets, are publicly available under open
licenses. Our suite enables reproducible research on methods to build LLMs for
low-resource languages.
| [
{
"created": "Fri, 29 Mar 2024 16:16:48 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Sep 2024 11:52:52 GMT",
"version": "v2"
}
] | 2024-09-23 | [
[
"Etxaniz",
"Julen",
""
],
[
"Sainz",
"Oscar",
""
],
[
"Perez",
"Naiara",
""
],
[
"Aldabe",
"Itziar",
""
],
[
"Rigau",
"German",
""
],
[
"Agirre",
"Eneko",
""
],
[
"Ormazabal",
"Aitor",
""
],
[
"Artetxe",
"Mikel",
""
],
[
"Soroa",
"Aitor",
""
]
] |
2404.00026 | Azmine Toushik Wasi | Azmine Toushik Wasi and Raima Islam and Mst Rafia Islam | Ink and Individuality: Crafting a Personalised Narrative in the Age of
LLMs | 5 Pages, 4 Figures. Accepted in The Third Workshop on Intelligent and
Interactive Writing Assistants at CHI 2024 | The Third Workshop on Intelligent and Interactive Writing
Assistants at CHI 2024 | 10.1145/3690712.3690724 | null | cs.HC cs.AI cs.CL cs.IR cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Individuality and personalization comprise the distinctive characteristics
that make each writer unique and influence their words in order to effectively
engage readers while conveying authenticity. However, our growing reliance on
LLM-based writing assistants risks compromising our creativity and
individuality over time. We often overlook the negative impacts of this trend
on our creativity and uniqueness, despite the possible consequences. This study
investigates these concerns by performing a brief survey to explore different
perspectives and concepts, as well as trying to understand people's viewpoints,
in conjunction with past studies in the area. Addressing these issues is
essential for improving human-computer interaction systems and enhancing
writing assistants for personalization and individuality.
| [
{
"created": "Wed, 20 Mar 2024 21:02:16 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Apr 2024 15:42:05 GMT",
"version": "v2"
},
{
"created": "Mon, 22 Apr 2024 08:30:28 GMT",
"version": "v3"
},
{
"created": "Sun, 28 Jul 2024 00:29:22 GMT",
"version": "v4"
},
{
"created": "Wed, 2 Oct 2024 20:45:53 GMT",
"version": "v5"
}
] | 2024-10-17 | [
[
"Wasi",
"Azmine Toushik",
""
],
[
"Islam",
"Raima",
""
],
[
"Islam",
"Mst Rafia",
""
]
] |
2404.00027 | Azmine Toushik Wasi | Azmine Toushik Wasi and Mst Rafia Islam and Raima Islam | LLMs as Writing Assistants: Exploring Perspectives on Sense of Ownership
and Reasoning | 5 Pages, 3 Figures. Accepted in The Third Workshop on Intelligent and
Interactive Writing Assistants at CHI 2024 | The Third Workshop on Intelligent and Interactive Writing
Assistants at CHI 2024 | 10.1145/3690712.3690723 | null | cs.HC cs.AI cs.CL cs.CY cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Sense of ownership in writing confines our investment of thoughts, time, and
contribution, leading to attachment to the output. However, using writing
assistants introduces a mental dilemma, as some content isn't directly our
creation. For instance, we tend to credit Large Language Models (LLMs) more in
creative tasks, even though all tasks are equal for them. Additionally, while
we may not claim complete ownership of LLM-generated content, we freely claim
authorship. We conduct a short survey to examine these issues and understand
underlying cognitive processes in order to gain a better knowledge of
human-computer interaction in writing and improve writing aid systems.
| [
{
"created": "Wed, 20 Mar 2024 21:06:42 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Apr 2024 15:40:21 GMT",
"version": "v2"
},
{
"created": "Mon, 22 Apr 2024 08:30:30 GMT",
"version": "v3"
},
{
"created": "Sun, 28 Jul 2024 00:26:14 GMT",
"version": "v4"
},
{
"created": "Wed, 2 Oct 2024 20:45:35 GMT",
"version": "v5"
}
] | 2024-10-17 | [
[
"Wasi",
"Azmine Toushik",
""
],
[
"Islam",
"Mst Rafia",
""
],
[
"Islam",
"Raima",
""
]
] |
2404.00224 | Gustavo Guedes | Gustavo Bartz Guedes, Ana Estela Antunes da Silva | Classification and Clustering of Sentence-Level Embeddings of Scientific
Articles Generated by Contrastive Learning | null | Computer Science & Information Technology (CS & IT), pp. 293-305,
2023 | 10.5121/csit.2023.131923 | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Scientific articles are long text documents organized into sections, each
describing aspects of the research. Analyzing scientific production has become
progressively challenging due to the increase in the number of available
articles. Within this scenario, our approach consisted of fine-tuning
transformer language models to generate sentence-level embeddings from
scientific articles, considering the following labels: background, objective,
methods, results, and conclusion. We trained our models on three datasets with
contrastive learning. Two datasets are from the article's abstracts in the
computer science and medical domains. Also, we introduce PMC-Sents-FULL, a
novel dataset of sentences extracted from the full texts of medical articles.
We compare the fine-tuned and baseline models in clustering and classification
tasks to evaluate our approach. On average, clustering agreement measure
values were five times higher. For the classification measures, in the
best-case scenario, we had an average improvement in F1-micro of 30.73\%.
Results show that fine-tuning sentence transformers with contrastive learning
and using the generated embeddings in downstream tasks is a feasible approach
to sentence classification in scientific articles. Our experiment codes are
available on GitHub.
| [
{
"created": "Sat, 30 Mar 2024 02:52:14 GMT",
"version": "v1"
}
] | 2024-04-02 | [
[
"Guedes",
"Gustavo Bartz",
""
],
[
"da Silva",
"Ana Estela Antunes",
""
]
] |
2404.00320 | Zekun Wu | Xingrui Gu, Zhixuan Wang, Irisa Jin, Zekun Wu | Advancing Multimodal Data Fusion in Pain Recognition: A Strategy
Leveraging Statistical Correlation and Human-Centered Perspectives | Accepted by AHRI 2024 | 979-8-3315-1645-1/24/$31.00 \c{opyright}2024 IEEE | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This research presents a novel multimodal data fusion methodology for pain
behavior recognition, integrating statistical correlation analysis with
human-centered insights. Our approach introduces two key innovations: 1)
integrating data-driven statistical relevance weights into the fusion strategy
to effectively utilize complementary information from heterogeneous modalities,
and 2) incorporating human-centric movement characteristics into multimodal
representation learning for detailed modeling of pain behaviors. Validated
across various deep learning architectures, our method demonstrates superior
performance and broad applicability. We propose a customizable framework that
aligns each modality with a suitable classifier based on statistical
significance, advancing personalized and effective multimodal fusion.
Furthermore, our methodology provides explainable analysis of multimodal data,
contributing to interpretable and explainable AI in healthcare. By highlighting
the importance of data diversity and modality-specific representations, we
enhance traditional fusion techniques and set new standards for recognizing
complex pain behaviors. Our findings have significant implications for
promoting patient-centered healthcare interventions and supporting explainable
clinical decision-making.
| [
{
"created": "Sat, 30 Mar 2024 11:13:18 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Aug 2024 09:07:45 GMT",
"version": "v2"
}
] | 2024-08-02 | [
[
"Gu",
"Xingrui",
""
],
[
"Wang",
"Zhixuan",
""
],
[
"Jin",
"Irisa",
""
],
[
"Wu",
"Zekun",
""
]
] |
2404.00366 | Guancheng Zhou | Guan-Cheng Zhou, Chen Chengb, Yan-zhou Chena | Efficient Multi-branch Segmentation Network for Situation Awareness in
Autonomous Navigation | null | Ocean Engineering 302 (2024) 117741 | 10.1016/j.oceaneng.2024.117741 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-time and high-precision situational awareness technology is critical for
autonomous navigation of unmanned surface vehicles (USVs). In particular,
robust and fast obstacle semantic segmentation methods are essential. However,
distinguishing between the sea and the sky is challenging due to the
differences between port and maritime environments. In this study, we built a
dataset that captured perspectives from USVs and unmanned aerial vehicles in a
maritime port environment and analysed the data features. Statistical analysis
revealed a high correlation between the distribution of the sea and sky and row
positional information. Based on this finding, a three-branch semantic
segmentation network with a row position encoding module (RPEM) was proposed to
improve the prediction accuracy between the sea and the sky. The proposed RPEM
highlights the effect of row coordinates on feature extraction. Compared to the
baseline, the three-branch network with RPEM significantly improved the ability
to distinguish between the sea and the sky without significantly reducing the
computational speed.
| [
{
"created": "Sat, 30 Mar 2024 13:38:07 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Zhou",
"Guan-Cheng",
""
],
[
"Chengb",
"Chen",
""
],
[
"Chena",
"Yan-zhou",
""
]
] |
2404.00383 | Stefano Di Carlo | Anil Bayram Gogebakan, Enrico Magliano, Alessio Carpegna, Annachiara
Ruospo, Alessandro Savino, Stefano Di Carlo | SpikingJET: Enhancing Fault Injection for Fully and Convolutional
Spiking Neural Networks | null | 2024 IEEE 30th International Symposium on On-Line Testing and
Robust System Design (IOLTS) | 10.1109/IOLTS60994.2024.10616060 | null | cs.NE cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | As artificial neural networks become increasingly integrated into
safety-critical systems such as autonomous vehicles, devices for medical
diagnosis, and industrial automation, ensuring their reliability in the face of
random hardware faults becomes paramount. This paper introduces SpikingJET, a
novel fault injector designed specifically for fully connected and
convolutional Spiking Neural Networks (SNNs). Our work underscores the critical
need to evaluate the resilience of SNNs to hardware faults, considering their
growing prominence in real-world applications. SpikingJET provides a
comprehensive platform for assessing the resilience of SNNs by inducing errors
and injecting faults into critical components such as synaptic weights, neuron
model parameters, internal states, and activation functions. This paper
demonstrates the effectiveness of SpikingJET through extensive software-level
experiments on various SNN architectures, revealing insights into their
vulnerability and resilience to hardware faults. Moreover, highlighting the
importance of fault resilience in SNNs contributes to the ongoing effort to
enhance the reliability and safety of Neural Network (NN)-powered systems in
diverse domains.
| [
{
"created": "Sat, 30 Mar 2024 14:51:01 GMT",
"version": "v1"
}
] | 2024-09-05 | [
[
"Gogebakan",
"Anil Bayram",
""
],
[
"Magliano",
"Enrico",
""
],
[
"Carpegna",
"Alessio",
""
],
[
"Ruospo",
"Annachiara",
""
],
[
"Savino",
"Alessandro",
""
],
[
"Di Carlo",
"Stefano",
""
]
] |
2404.00471 | Snigdha Saha | Sreemanti Dey, Snigdha Saha, Berthy T. Feng, Manxiu Cui, Laure
Delisle, Oscar Leong, Lihong V. Wang, Katherine L. Bouman | Score-Based Diffusion Models for Photoacoustic Tomography Image
Reconstruction | 5 pages | ICASSP 2024 - 2024 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP), Seoul, Korea, Republic of, 2024, pp.
2470-2474 | 10.1109/ICASSP48485.2024.10447579 | null | physics.med-ph cs.CV cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Photoacoustic tomography (PAT) is a rapidly-evolving medical imaging modality
that combines optical absorption contrast with ultrasound imaging depth. One
challenge in PAT is image reconstruction with inadequate acoustic signals due
to limited sensor coverage or due to the density of the transducer array. Such
cases call for solving an ill-posed inverse reconstruction problem. In this
work, we use score-based diffusion models to solve the inverse problem of
reconstructing an image from limited PAT measurements. The proposed approach
allows us to incorporate an expressive prior learned by a diffusion model on
simulated vessel structures while still being robust to varying transducer
sparsity conditions.
| [
{
"created": "Sat, 30 Mar 2024 20:34:49 GMT",
"version": "v1"
}
] | 2024-04-02 | [
[
"Dey",
"Sreemanti",
""
],
[
"Saha",
"Snigdha",
""
],
[
"Feng",
"Berthy T.",
""
],
[
"Cui",
"Manxiu",
""
],
[
"Delisle",
"Laure",
""
],
[
"Leong",
"Oscar",
""
],
[
"Wang",
"Lihong V.",
""
],
[
"Bouman",
"Katherine L.",
""
]
] |
2404.00620 | Deborah N. Jakobi | Deborah N. Jakobi and Daniel G. Krakowczyk and Lena A. J\"ager | Reporting Eye-Tracking Data Quality: Towards a New Standard | null | Proceedings of the 2024 Symposium on Eye Tracking Research and
Applications (ETRA '24) Article 47 1-3 | 10.1145/3649902.3655658 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Eye-tracking datasets are often shared in the format used by their creators
for their original analyses, usually resulting in the exclusion of data
considered irrelevant to the primary purpose. In order to increase re-usability
of existing eye-tracking datasets for more diverse and initially not considered
use cases, this work advocates a new approach of sharing eye-tracking data.
Instead of publishing filtered and pre-processed datasets, the eye-tracking
data at all pre-processing stages should be published together with data
quality reports. In order to transparently report data quality and enable
cross-dataset comparisons, we develop data quality reporting standards and
metrics that can be automatically applied to a dataset, and integrate them into
the open-source Python package pymovements
(https://github.com/aeye-lab/pymovements).
| [
{
"created": "Sun, 31 Mar 2024 09:17:34 GMT",
"version": "v1"
}
] | 2024-06-13 | [
[
"Jakobi",
"Deborah N.",
""
],
[
"Krakowczyk",
"Daniel G.",
""
],
[
"Jäger",
"Lena A.",
""
]
] |
2404.00650 | Xiaorui Huang | Xiaorui Huang, Gen Luo, Chaoyang Zhu, Bo Tong, Yiyi Zhou, Xiaoshuai
Sun, Rongrong Ji | Deep Instruction Tuning for Segment Anything Model | null | ACM Multimedia 2024 | 10.1145/3664647.3680571 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, Segment Anything Model (SAM) has become a research hotspot in the
fields of multimedia and computer vision, which exhibits powerful yet versatile
capabilities on various (un)conditional image segmentation tasks. Although SAM
can support different types of segmentation prompts, we note that, compared to
point- and box-guided segmentations, it performs much worse on text-instructed
tasks, e.g., referring image segmentation (RIS). In this paper, we argue that
deep text instruction tuning is key to mitigating this shortcoming caused by the
shallow fusion scheme in its default light-weight mask decoder. To address this
issue, we propose two simple yet effective deep instruction tuning (DIT)
methods for SAM, one is end-to-end and the other is layer-wise. With minimal
modifications, DITs can directly transform the image encoder of SAM as a
stand-alone vision-language learner in contrast to building another deep fusion
branch, maximizing the benefit of its superior segmentation capability.
Extensive experiments on three highly competitive benchmark datasets of RIS
show that a simple end-to-end DIT can improve SAM by a large margin, while the
layer-wise DIT can further boost the performance to state-of-the-art with much
less data and training expenditures. Our code is released at:
https://github.com/wysnzzzz/DIT.
| [
{
"created": "Sun, 31 Mar 2024 11:37:43 GMT",
"version": "v1"
},
{
"created": "Sat, 27 Apr 2024 07:05:43 GMT",
"version": "v2"
}
] | 2024-08-30 | [
[
"Huang",
"Xiaorui",
""
],
[
"Luo",
"Gen",
""
],
[
"Zhu",
"Chaoyang",
""
],
[
"Tong",
"Bo",
""
],
[
"Zhou",
"Yiyi",
""
],
[
"Sun",
"Xiaoshuai",
""
],
[
"Ji",
"Rongrong",
""
]
] |
2404.00676 | Min H. Kim | Dongyoung Choi, Hyeonjoong Jang, Min H. Kim | OmniLocalRF: Omnidirectional Local Radiance Fields from Dynamic Videos | null | Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR) 2024 | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Omnidirectional cameras are extensively used in various applications to
provide a wide field of vision. However, they face a challenge in synthesizing
novel views due to the inevitable presence of dynamic objects, including the
photographer, in their wide field of view. In this paper, we introduce a new
approach called Omnidirectional Local Radiance Fields (OmniLocalRF) that can
render static-only scene views, removing and inpainting dynamic objects
simultaneously. Our approach combines the principles of local radiance fields
with the bidirectional optimization of omnidirectional rays. Our input is an
omnidirectional video, and we evaluate the mutual observations of the entire
angle between the previous and current frames. To reduce ghosting artifacts of
dynamic objects and inpaint occlusions, we devise a multi-resolution motion
mask prediction module. Unlike existing methods that primarily separate dynamic
components through the temporal domain, our method uses multi-resolution neural
feature planes for precise segmentation, which is more suitable for long
360-degree videos. Our experiments validate that OmniLocalRF outperforms
existing methods in both qualitative and quantitative metrics, especially in
scenarios with complex real-world scenes. In particular, our approach
eliminates the need for manual interaction, such as drawing motion masks by
hand and additional pose estimation, making it a highly effective and efficient
solution.
| [
{
"created": "Sun, 31 Mar 2024 12:55:05 GMT",
"version": "v1"
}
] | 2024-04-02 | [
[
"Choi",
"Dongyoung",
""
],
[
"Jang",
"Hyeonjoong",
""
],
[
"Kim",
"Min H.",
""
]
] |
2404.00678 | Min H. Kim | Hakyeong Kim, Andreas Meuleman, Hyeonjoong Jang, James Tompkin, Min H.
Kim | OmniSDF: Scene Reconstruction using Omnidirectional Signed Distance
Functions and Adaptive Binoctrees | null | Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR) 2024 | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a method to reconstruct indoor and outdoor static scene geometry
and appearance from an omnidirectional video moving in a small circular sweep.
This setting is challenging because of the small baseline and large depth
ranges, making it difficult to find ray crossings. To better constrain the
optimization, we estimate geometry as a signed distance field within a
spherical binoctree data structure and use a complementary efficient tree
traversal strategy based on a breadth-first search for sampling. Unlike regular
grids or trees, the shape of this structure well-matches the camera setting,
creating a better memory-quality trade-off. From an initial depth estimate, the
binoctree is adaptively subdivided throughout the optimization; previous
methods use a fixed depth that leaves the scene undersampled. In comparison
with three neural optimization methods and two non-neural methods, ours shows
decreased geometry error on average, especially in a detailed scene, while
significantly reducing the required number of voxels to represent such details.
| [
{
"created": "Sun, 31 Mar 2024 13:07:00 GMT",
"version": "v1"
}
] | 2024-04-02 | [
[
"Kim",
"Hakyeong",
""
],
[
"Meuleman",
"Andreas",
""
],
[
"Jang",
"Hyeonjoong",
""
],
[
"Tompkin",
"James",
""
],
[
"Kim",
"Min H.",
""
]
] |
2404.00746 | Kashob Kumar Roy | Kashob Kumar Roy, Md Hasibul Haque Moon, Md Mahmudur Rahman, Chowdhury
Farhan Ahmed, Carson Kai-Sang Leung | Mining Weighted Sequential Patterns in Incremental Uncertain Databases | Accepted to Information Science journal | Information Sciences 582 (2022): 865-896 | null | null | cs.DB cs.AI | http://creativecommons.org/licenses/by/4.0/ | Due to the rapid development of science and technology, the importance of
imprecise, noisy, and uncertain data is increasing at an exponential rate.
Thus, mining patterns in uncertain databases has drawn the attention of
researchers. Moreover, frequent sequences of items from these databases need to
be discovered for meaningful knowledge with great impact. In many real cases,
weights of items and patterns are introduced to find interesting sequences as a
measure of importance. Hence, a constraint of weight needs to be handled while
mining sequential patterns. Besides, due to the dynamic nature of databases,
mining important information has become more challenging. Instead of mining
patterns from scratch after each increment, incremental mining algorithms
utilize previously mined information to update the result immediately. Several
algorithms exist to mine frequent patterns and weighted sequences from
incremental databases. However, these algorithms are confined to mining the
precise ones. Therefore, we have developed an algorithm to mine frequent
sequences in an uncertain database in this work. Furthermore, we have proposed
two new techniques for mining when the database is incremental. Extensive
experiments have been conducted for performance evaluation. The analysis showed
the efficiency of our proposed framework.
| [
{
"created": "Sun, 31 Mar 2024 17:32:08 GMT",
"version": "v1"
}
] | 2024-04-02 | [
[
"Roy",
"Kashob Kumar",
""
],
[
"Moon",
"Md Hasibul Haque",
""
],
[
"Rahman",
"Md Mahmudur",
""
],
[
"Ahmed",
"Chowdhury Farhan",
""
],
[
"Leung",
"Carson Kai-Sang",
""
]
] |
2404.00837 | Aydogan Ozcan | Sahan Yoruc Selcuk, Xilin Yang, Bijie Bai, Yijie Zhang, Yuzhu Li, Musa
Aydin, Aras Firat Unal, Aditya Gomatam, Zhen Guo, Darrow Morgan Angus, Goren
Kolodney, Karine Atlan, Tal Keidar Haran, Nir Pillar, Aydogan Ozcan | Automated HER2 Scoring in Breast Cancer Images Using Deep Learning and
Pyramid Sampling | 21 Pages, 7 Figures | BME Frontiers (2024) | 10.34133/bmef.0048 | null | eess.IV cs.CV cs.LG physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human epidermal growth factor receptor 2 (HER2) is a critical protein in
cancer cell growth that signifies the aggressiveness of breast cancer (BC) and
helps predict its prognosis. Accurate assessment of immunohistochemically (IHC)
stained tissue slides for HER2 expression levels is essential for both
treatment guidance and understanding of cancer mechanisms. Nevertheless, the
traditional workflow of manual examination by board-certified pathologists
encounters challenges, including inter- and intra-observer inconsistency and
extended turnaround times. Here, we introduce a deep learning-based approach
utilizing pyramid sampling for the automated classification of HER2 status in
IHC-stained BC tissue images. Our approach analyzes morphological features at
various spatial scales, efficiently managing the computational load and
facilitating a detailed examination of cellular and larger-scale tissue-level
details. This method addresses the tissue heterogeneity of HER2 expression by
providing a comprehensive view, leading to a blind testing classification
accuracy of 84.70%, on a dataset of 523 core images from tissue microarrays.
Our automated system, proving reliable as an adjunct pathology tool, has the
potential to enhance diagnostic precision and evaluation speed, and might
significantly impact cancer treatment planning.
| [
{
"created": "Mon, 1 Apr 2024 00:23:22 GMT",
"version": "v1"
}
] | 2024-07-18 | [
[
"Selcuk",
"Sahan Yoruc",
""
],
[
"Yang",
"Xilin",
""
],
[
"Bai",
"Bijie",
""
],
[
"Zhang",
"Yijie",
""
],
[
"Li",
"Yuzhu",
""
],
[
"Aydin",
"Musa",
""
],
[
"Unal",
"Aras Firat",
""
],
[
"Gomatam",
"Aditya",
""
],
[
"Guo",
"Zhen",
""
],
[
"Angus",
"Darrow Morgan",
""
],
[
"Kolodney",
"Goren",
""
],
[
"Atlan",
"Karine",
""
],
[
"Haran",
"Tal Keidar",
""
],
[
"Pillar",
"Nir",
""
],
[
"Ozcan",
"Aydogan",
""
]
] |
2404.00842 | Ling Gao | Ling Gao, Daniel Gehrig, Hang Su, Davide Scaramuzza, Laurent Kneip | An N-Point Linear Solver for Line and Motion Estimation with Event
Cameras | null | IEEE/CVF Conference on Computer Vision and Pattern Recognition
(CVPR), 2024 | 10.1109/CVPR52733.2024.01383 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Event cameras respond primarily to edges--formed by strong gradients--and are
thus particularly well-suited for line-based motion estimation. Recent work has
shown that events generated by a single line each satisfy a polynomial
constraint which describes a manifold in the space-time volume. Multiple such
constraints can be solved simultaneously to recover the partial linear velocity
and line parameters. In this work, we show that, with a suitable line
parametrization, this system of constraints is actually linear in the unknowns,
which allows us to design a novel linear solver. Unlike existing solvers, our
linear solver (i) is fast and numerically stable since it does not rely on
expensive root finding, (ii) can solve both minimal and overdetermined systems
with more than 5 events, and (iii) admits the characterization of all
degenerate cases and multiple solutions. The found line parameters are
singularity-free and have a fixed scale, which eliminates the need for
auxiliary constraints typically encountered in previous work. To recover the
full linear camera velocity we fuse observations from multiple lines with a
novel velocity averaging scheme that relies on a geometrically-motivated
residual, and thus solves the problem more efficiently than previous schemes
which minimize an algebraic residual. Extensive experiments in synthetic and
real-world settings demonstrate that our method surpasses the previous work in
numerical stability, and operates over 600 times faster.
| [
{
"created": "Mon, 1 Apr 2024 00:47:02 GMT",
"version": "v1"
}
] | 2024-09-20 | [
[
"Gao",
"Ling",
""
],
[
"Gehrig",
"Daniel",
""
],
[
"Su",
"Hang",
""
],
[
"Scaramuzza",
"Davide",
""
],
[
"Kneip",
"Laurent",
""
]
] |
2404.00852 | Hieu Nguyen | Hieu Nguyen, Cong-Hoang Ta, Phuong-Thuy Le-Nguyen, Minh-Triet Tran and
Trung-Nghia Le | Ensemble Learning for Vietnamese Scene Text Spotting in Urban
Environments | RIVF 2023 | In 2023 RIVF International Conference on Computing and
Communication Technologies (RIVF) (pp. 177-182). IEEE | 10.1109/rivf60135.2023.10471878 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper presents a simple yet efficient ensemble learning framework for
Vietnamese scene text spotting. Leveraging the power of ensemble learning,
which combines multiple models to yield more accurate predictions, our approach
aims to significantly enhance the performance of scene text spotting in
challenging urban settings. Through experimental evaluations on the VinText
dataset, our proposed method achieves a significant improvement in accuracy
compared to existing methods, with an impressive improvement of 5%. These results
unequivocally demonstrate the efficacy of ensemble learning in the context of
Vietnamese scene text spotting in urban environments, highlighting its
potential for real world applications, such as text detection and recognition
in urban signage, advertisements, and various text-rich urban scenes.
| [
{
"created": "Mon, 1 Apr 2024 01:45:30 GMT",
"version": "v1"
}
] | 2024-04-02 | [
[
"Nguyen",
"Hieu",
""
],
[
"Ta",
"Cong-Hoang",
""
],
[
"Le-Nguyen",
"Phuong-Thuy",
""
],
[
"Tran",
"Minh-Triet",
""
],
[
"Le",
"Trung-Nghia",
""
]
] |
2404.00989 | Hao Chen Calvin | Hao Chen, Yuqi Hou, Chenyuan Qu, Irene Testini, Xiaohan Hong, Jianbo
Jiao | 360+x: A Panoptic Multi-modal Scene Understanding Dataset | CVPR 2024 (Oral Presentation), Project page:
https://x360dataset.github.io/ | The IEEE/CVF Computer Vision and Pattern Recognition Conference
(CVPR) 2024 | null | null | cs.CV cs.AI cs.MM cs.SD eess.AS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Human perception of the world is shaped by a multitude of viewpoints and
modalities. While many existing datasets focus on scene understanding from a
certain perspective (e.g. egocentric or third-person views), our dataset offers
a panoptic perspective (i.e. multiple viewpoints with multiple data
modalities). Specifically, we encapsulate third-person panoramic and front
views, as well as egocentric monocular/binocular views with rich modalities
including video, multi-channel audio, directional binaural delay, location data
and textual scene descriptions within each scene captured, presenting
comprehensive observation of the world. Figure 1 offers a glimpse of all 28
scene categories of our 360+x dataset. To the best of our knowledge, this is
the first database that covers multiple viewpoints with multiple data
modalities to mimic how daily information is accessed in the real world.
Through our benchmark analysis, we presented 5 different scene understanding
tasks on the proposed 360+x dataset to evaluate the impact and benefit of each
data modality and perspective in panoptic scene understanding. We hope this
unique dataset could broaden the scope of comprehensive scene understanding and
encourage the community to approach these problems from more diverse
perspectives.
| [
{
"created": "Mon, 1 Apr 2024 08:34:42 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Apr 2024 02:37:25 GMT",
"version": "v2"
}
] | 2024-04-09 | [
[
"Chen",
"Hao",
""
],
[
"Hou",
"Yuqi",
""
],
[
"Qu",
"Chenyuan",
""
],
[
"Testini",
"Irene",
""
],
[
"Hong",
"Xiaohan",
""
],
[
"Jiao",
"Jianbo",
""
]
] |
2404.01036 | Oluwaseun Ajao | Bayode Ogunleye, Kudirat Ibilola Zakariyyah, Oluwaseun Ajao, Olakunle
Olayinka and Hemlata Sharma | Higher education assessment practice in the era of generative AI tools | 11 pages, 7 tables published in the Journal of Applied Learning &
Teaching | Higher education assessment practice in the era of generative AI
tools. (2024). Journal of applied learning and teaching, 7(1) | 10.37074/jalt.2024.7.1.28 | null | cs.IR cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | The higher education (HE) sector benefits every nation's economy and society
at large. However, its contributions are challenged by advanced technologies
like generative artificial intelligence (GenAI) tools. In this paper, we
provide a comprehensive assessment of GenAI tools towards assessment and
pedagogic practice and, subsequently, discuss the potential impacts. This study
experimented with three assessment instruments from data science, data
analytics, and construction management disciplines. Our findings are two-fold:
first, we found that GenAI tools exhibit subject knowledge,
problem-solving, analytical, critical thinking, and presentation skills and
thus can limit learning when used unethically. Secondly, the design of the
assessment of certain disciplines revealed the limitations of the GenAI tools.
Based on our findings, we made recommendations on how AI tools can be utilised
for teaching and learning in HE.
| [
{
"created": "Mon, 1 Apr 2024 10:43:50 GMT",
"version": "v1"
}
] | 2024-04-02 | [
[
"Ogunleye",
"Bayode",
""
],
[
"Zakariyyah",
"Kudirat Ibilola",
""
],
[
"Ajao",
"Oluwaseun",
""
],
[
"Olayinka",
"Olakunle",
""
],
[
"Sharma",
"Hemlata",
""
]
] |
2404.01104 | Jaemin Kim | Jaemin Kim, Yohan Na, Kangmin Kim, Sang Rak Lee, Dong-Kyu Chae | SentiCSE: A Sentiment-aware Contrastive Sentence Embedding Framework
with Sentiment-guided Textual Similarity | 14 pages, 8 figures | LREC-COLING2024 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, sentiment-aware pre-trained language models (PLMs) demonstrate
impressive results in downstream sentiment analysis tasks. However, they
neglect to evaluate the quality of their constructed sentiment representations;
they just focus on improving the fine-tuning performance, which overshadows the
representation quality. We argue that without guaranteeing the representation
quality, their downstream performance can be highly dependent on the
supervision of the fine-tuning data rather than representation quality. This
problem makes it difficult for them to foray into other sentiment-related
domains, especially where labeled data is scarce. We first propose
Sentiment-guided Textual Similarity (SgTS), a novel metric for evaluating the
quality of sentiment representations, which is designed based on the degree of
equivalence in sentiment polarity between two sentences. We then propose
SentiCSE, a novel Sentiment-aware Contrastive Sentence Embedding framework for
constructing sentiment representations via combined word-level and
sentence-level objectives, whose quality is guaranteed by SgTS. Qualitative and
quantitative comparison with the previous sentiment-aware PLMs shows the
superiority of our work. Our code is available at:
https://github.com/nayohan/SentiCSE
| [
{
"created": "Mon, 1 Apr 2024 13:24:20 GMT",
"version": "v1"
}
] | 2024-04-02 | [
[
"Kim",
"Jaemin",
""
],
[
"Na",
"Yohan",
""
],
[
"Kim",
"Kangmin",
""
],
[
"Lee",
"Sang Rak",
""
],
[
"Chae",
"Dong-Kyu",
""
]
] |
2404.01261 | Yekyung Kim | Yekyung Kim, Yapei Chang, Marzena Karpinska, Aparna Garimella, Varun
Manjunatha, Kyle Lo, Tanya Goyal, Mohit Iyyer | FABLES: Evaluating faithfulness and content selection in book-length
summarization | preprint - 39 pages | 1st Conference on Language Modeling (COLM 2024) | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While long-context large language models (LLMs) can technically summarize
book-length documents (>100K tokens), the length and complexity of the
documents have so far prohibited evaluations of input-dependent aspects like
faithfulness. In this paper, we conduct the first large-scale human evaluation
of faithfulness and content selection on LLM-generated summaries of fictional
books. Our study mitigates the issue of data contamination by focusing on
summaries of books published in 2023 or 2024, and we hire annotators who have
fully read each book prior to the annotation task to minimize cost and
cognitive burden. We collect FABLES, a dataset of annotations on 3,158 claims
made in LLM-generated summaries of 26 books, at a cost of $5.2K USD, which
allows us to rank LLM summarizers based on faithfulness: Claude-3-Opus
significantly outperforms all closed-source LLMs, while the open-source Mixtral
is on par with GPT-3.5-Turbo. An analysis of the annotations reveals that most
unfaithful claims relate to events and character states, and they generally
require indirect reasoning over the narrative to invalidate. While LLM-based
auto-raters have proven reliable for factuality and coherence in other
settings, we implement several LLM raters of faithfulness and find that none
correlates strongly with human annotations, especially with regard to detecting
unfaithful claims. Our experiments suggest that detecting unfaithful claims is
an important future direction not only for summarization evaluation but also as
a testbed for long-context understanding. Finally, we move beyond faithfulness
by exploring content selection errors in book-length summarization: we develop
a typology of omission errors related to crucial narrative elements and also
identify a systematic over-emphasis on events occurring towards the end of the
book.
| [
{
"created": "Mon, 1 Apr 2024 17:33:38 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Sep 2024 17:39:59 GMT",
"version": "v2"
}
] | 2024-10-01 | [
[
"Kim",
"Yekyung",
""
],
[
"Chang",
"Yapei",
""
],
[
"Karpinska",
"Marzena",
""
],
[
"Garimella",
"Aparna",
""
],
[
"Manjunatha",
"Varun",
""
],
[
"Lo",
"Kyle",
""
],
[
"Goyal",
"Tanya",
""
],
[
"Iyyer",
"Mohit",
""
]
] |
2404.01364 | Adrian Moldovan | Adrian Moldovan, Angel Cataron, Razvan Andonie | Information Plane Analysis Visualization in Deep Learning via Transfer
Entropy | null | 2023 27th International Conference Information Visualisation (IV),
pages 278-285 | 10.1109/IV60283.2023.00055 | null | cs.LG cs.AI cs.HC cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | In a feedforward network, Transfer Entropy (TE) can be used to measure the
influence that one layer has on another by quantifying the information transfer
between them during training. According to the Information Bottleneck
principle, a neural model's internal representation should compress the input
data as much as possible while still retaining sufficient information about the
output. Information Plane analysis is a visualization technique used to
understand the trade-off between compression and information preservation in
the context of the Information Bottleneck method by plotting the amount of
information in the input data against the compressed representation. The claim
that there is a causal link between information-theoretic compression and
generalization, measured by mutual information, is plausible, but results from
different studies are conflicting. In contrast to mutual information, TE can
capture temporal relationships between variables. To explore such links, in our
novel approach we use TE to quantify information transfer between neural layers
and perform Information Plane analysis. We obtained encouraging experimental
results, opening the possibility for further investigations.
| [
{
"created": "Mon, 1 Apr 2024 17:34:18 GMT",
"version": "v1"
}
] | 2024-04-03 | [
[
"Moldovan",
"Adrian",
""
],
[
"Cataron",
"Angel",
""
],
[
"Andonie",
"Razvan",
""
]
] |
2404.01437 | Florian Kraus | Florian Kraus, Nicolas Scheiner, Werner Ritter, Klaus Dietmayer | The Radar Ghost Dataset -- An Evaluation of Ghost Objects in Automotive
Radar Data | null | IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), 2021, pp. 8570-8577 | 10.1109/IROS51168.2021.9636338 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Radar sensors have a long tradition in advanced driver assistance systems
(ADAS) and also play a major role in current concepts for autonomous vehicles.
Their importance stems from their high robustness against meteorological
effects, such as rain, snow, or fog, and the radar's ability to measure
relative radial velocity differences via the Doppler effect. The cause for
these advantages, namely the large wavelength, is also one of the drawbacks of
radar sensors. Compared to camera or lidar sensors, many more surfaces in a
typical traffic scenario appear flat relative to the radar's emitted signal.
This results in multi-path reflections, or so-called ghost detections, in the
radar signal. Ghost objects are a major source of potential false positive
detections in a vehicle's perception pipeline. Therefore, it is important to be
able to segregate multi-path reflections from direct ones. In this article, we
present a dataset with detailed manual annotations for different kinds of ghost
detections. Moreover, two different approaches for identifying these kinds of
objects are evaluated. We hope that our dataset encourages more researchers to
engage in the fields of multi-path object suppression or exploitation.
| [
{
"created": "Mon, 1 Apr 2024 19:20:32 GMT",
"version": "v1"
}
] | 2024-04-03 | [
[
"Kraus",
"Florian",
""
],
[
"Scheiner",
"Nicolas",
""
],
[
"Ritter",
"Werner",
""
],
[
"Dietmayer",
"Klaus",
""
]
] |
2404.01547 | Jinshan Pan | Xiang Chen, Jinshan Pan, and Jiangxin Dong | Bidirectional Multi-Scale Implicit Neural Representations for Image
Deraining | Project website: https://github.com/cschenxiang/NeRD-Rain | CVPR 2024 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How to effectively explore multi-scale representations of rain streaks is
important for image deraining. In contrast to existing Transformer-based
methods that depend mostly on single-scale rain appearance, we develop an
end-to-end multi-scale Transformer that leverages the potentially useful
features in various scales to facilitate high-quality image reconstruction. To
better explore the common degradation representations from spatially-varying
rain streaks, we incorporate intra-scale implicit neural representations based
on pixel coordinates with the degraded inputs in a closed-loop design, enabling
the learned features to facilitate rain removal and improve the robustness of
the model in complex scenarios. To ensure richer collaborative representation
from different scales, we embed a simple yet effective inter-scale
bidirectional feedback operation into our multi-scale Transformer by performing
coarse-to-fine and fine-to-coarse information communication. Extensive
experiments demonstrate that our approach, named as NeRD-Rain, performs
favorably against the state-of-the-art ones on both synthetic and real-world
benchmark datasets. The source code and trained models are available at
https://github.com/cschenxiang/NeRD-Rain.
| [
{
"created": "Tue, 2 Apr 2024 01:18:16 GMT",
"version": "v1"
}
] | 2024-04-03 | [
[
"Chen",
"Xiang",
""
],
[
"Pan",
"Jinshan",
""
],
[
"Dong",
"Jiangxin",
""
]
] |
2404.01569 | Manish Sanwal | Manish Sanwal | Evaluating Large Language Models Using Contrast Sets: An Experimental
Approach | null | Article ID: IJAIRD_02_02_007, Volume 2, Issue 2, July-Dec 2024,
pp. 90-97 | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | In the domain of Natural Language Inference (NLI), especially in tasks
involving the classification of multiple input texts, the Cross-Entropy Loss
metric is widely employed as a standard for error measurement. However, this
metric falls short in effectively evaluating a model's capacity to understand
language entailments. In this study, we introduce an innovative technique for
generating a contrast set for the Stanford Natural Language Inference (SNLI)
dataset. Our strategy involves the automated substitution of verbs, adverbs,
and adjectives with their synonyms to preserve the original meaning of
sentences. This method aims to assess whether a model's performance is based on
genuine language comprehension or simply on pattern recognition. We conducted
our analysis using the ELECTRA-small model. The model achieved an accuracy of
89.9% on the conventional SNLI dataset but showed a reduced accuracy of 72.5%
on our contrast set, indicating a substantial 17% decline. This outcome led us
to conduct a detailed examination of the model's learning behaviors. Following
this, we improved the model's resilience by fine-tuning it with a
contrast-enhanced training dataset specifically designed for SNLI, which
increased its accuracy to 85.5% on the contrast sets. Our findings highlight
the importance of incorporating diverse linguistic expressions into datasets
for NLI tasks. We hope that our research will encourage the creation of more
inclusive datasets, thereby contributing to the development of NLI models that
are both more sophisticated and effective.
| [
{
"created": "Tue, 2 Apr 2024 02:03:28 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Oct 2024 12:31:11 GMT",
"version": "v2"
}
] | 2024-10-03 | [
[
"Sanwal",
"Manish",
""
]
] |
2404.01703 | Yuhang Li | Zhanwen Liu, Yuhang Li, Yang Wang, Bolin Gao, Yisheng An, Xiangmo Zhao | Boosting Visual Recognition in Real-world Degradations via Unsupervised
Feature Enhancement Module with Deep Channel Prior | 14 pages, 14 figures, published to TIV2024 | IEEE Transactions on Intelligent Vehicles, April 2024 | 10.1109/TIV.2024.3395455 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The environmental perception of autonomous vehicles in normal conditions has
achieved considerable success in the past decade. However, various unfavourable
conditions such as fog, low-light, and motion blur will degrade image quality
and pose tremendous threats to the safety of autonomous driving. That is, when
applied to degraded images, state-of-the-art visual models often suffer
performance decline due to the feature content loss and artifact interference
caused by statistical and structural properties disruption of captured images.
To address this problem, this work proposes a novel Deep Channel Prior (DCP)
for degraded visual recognition. Specifically, we observe that, in the deep
representation space of pre-trained models, the channel correlations of
degraded features with the same degradation type have uniform distribution even
if they have different content and semantics, which can facilitate the mapping
relationship learning between degraded and clear representations in
high-sparsity feature space. Based on this, a novel plug-and-play Unsupervised
Feature Enhancement Module (UFEM) is proposed to achieve unsupervised feature
correction, where the multi-adversarial mechanism is introduced in the first
stage of UFEM to achieve the latent content restoration and artifact removal in
high-sparsity feature space. Then, the generated features are transferred to
the second stage for global correlation modulation under the guidance of DCP to
obtain high-quality and recognition-friendly features. Evaluations of three
tasks and eight benchmark datasets demonstrate that our proposed method can
comprehensively improve the performance of pre-trained models in real
degradation conditions. The source code is available at
https://github.com/liyuhang166/Deep_Channel_Prior
| [
{
"created": "Tue, 2 Apr 2024 07:16:56 GMT",
"version": "v1"
},
{
"created": "Sun, 12 May 2024 03:10:41 GMT",
"version": "v2"
}
] | 2024-05-14 | [
[
"Liu",
"Zhanwen",
""
],
[
"Li",
"Yuhang",
""
],
[
"Wang",
"Yang",
""
],
[
"Gao",
"Bolin",
""
],
[
"An",
"Yisheng",
""
],
[
"Zhao",
"Xiangmo",
""
]
] |
2404.01751 | Tanvir Mahmud | Tanvir Mahmud, Yapeng Tian, Diana Marculescu | T-VSL: Text-Guided Visual Sound Source Localization in Mixtures | Accepted in CVPR-2024. Code:
https://github.com/enyac-group/T-VSL/tree/main | IEEE/CVF Computer Vision and Pattern Recognition (CVPR)
Conference, 2024 | null | null | cs.CV cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Visual sound source localization poses a significant challenge in identifying
the semantic region of each sounding source within a video. Existing
self-supervised and weakly supervised source localization methods struggle to
accurately distinguish the semantic regions of each sounding object,
particularly in multi-source mixtures. These methods often rely on audio-visual
correspondence as guidance, which can lead to substantial performance drops in
complex multi-source localization scenarios. The lack of access to individual
source sounds in multi-source mixtures during training exacerbates the
difficulty of learning effective audio-visual correspondence for localization.
To address this limitation, in this paper, we propose incorporating the text
modality as an intermediate feature guide using tri-modal joint embedding
models (e.g., AudioCLIP) to disentangle the semantic audio-visual source
correspondence in multi-source mixtures. Our framework, dubbed T-VSL, begins by
predicting the class of sounding entities in mixtures. Subsequently, the
textual representation of each sounding source is employed as guidance to
disentangle fine-grained audio-visual source correspondence from multi-source
mixtures, leveraging the tri-modal AudioCLIP embedding. This approach enables
our framework to handle a flexible number of sources and exhibits promising
zero-shot transferability to unseen classes during test time. Extensive
experiments conducted on the MUSIC, VGGSound, and VGGSound-Instruments datasets
demonstrate significant performance improvements over state-of-the-art methods.
Code is released at https://github.com/enyac-group/T-VSL/tree/main
| [
{
"created": "Tue, 2 Apr 2024 09:07:05 GMT",
"version": "v1"
},
{
"created": "Sun, 7 Jul 2024 06:30:25 GMT",
"version": "v2"
}
] | 2024-07-09 | [
[
"Mahmud",
"Tanvir",
""
],
[
"Tian",
"Yapeng",
""
],
[
"Marculescu",
"Diana",
""
]
] |
2404.01753 | Gaurish Thakkar Mr | Gaurish Thakkar, Sherzod Hakimov, Marko Tadi\'c | M2SA: Multimodal and Multilingual Model for Sentiment Analysis of Tweets | null | LREC-COLING 2024 - The 2024 Joint International Conference on
Computational Linguistics, Language Resources and Evaluation | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In recent years, multimodal natural language processing, aimed at learning
from diverse data types, has garnered significant attention. However, there
needs to be more clarity when it comes to analysing multimodal tasks in
multi-lingual contexts. While prior studies on sentiment analysis of tweets
have predominantly focused on the English language, this paper addresses this
gap by transforming an existing textual Twitter sentiment dataset into a
multimodal format through a straightforward curation process. Our work opens up
new avenues for sentiment-related research within the research community.
Additionally, we conduct baseline experiments utilising this augmented dataset
and report the findings. Notably, our evaluations reveal that when comparing
unimodal and multimodal configurations, using a sentiment-tuned large language
model as a text encoder performs exceptionally well.
| [
{
"created": "Tue, 2 Apr 2024 09:11:58 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Jun 2024 13:34:55 GMT",
"version": "v2"
},
{
"created": "Wed, 12 Jun 2024 07:12:36 GMT",
"version": "v3"
}
] | 2024-06-13 | [
[
"Thakkar",
"Gaurish",
""
],
[
"Hakimov",
"Sherzod",
""
],
[
"Tadić",
"Marko",
""
]
] |
2404.01822 | Ivo Verhoeven | Ivo Verhoeven, Pushkar Mishra, Rahel Beloch, Helen Yannakoudakis,
Ekaterina Shutova | A (More) Realistic Evaluation Setup for Generalisation of Community
Models on Malicious Content Detection | To be published at Findings of NAACL 2024 | https://aclanthology.org/2024.findings-naacl.30/ | 10.18653/v1/2024.findings-naacl.30 | null | cs.LG cs.CL cs.SI | http://creativecommons.org/licenses/by/4.0/ | Community models for malicious content detection, which take into account the
context from a social graph alongside the content itself, have shown remarkable
performance on benchmark datasets. Yet, misinformation and hate speech continue
to propagate on social media networks. This mismatch can be partially
attributed to the limitations of current evaluation setups that neglect the
rapid evolution of online content and the underlying social graph. In this
paper, we propose a novel evaluation setup for model generalisation based on
our few-shot subgraph sampling approach. This setup tests for generalisation
through few labelled examples in local explorations of a larger graph,
emulating more realistic application settings. We show this to be a challenging
inductive setup, wherein strong performance on the training graph is not
indicative of performance on unseen tasks, domains, or graph structures.
Lastly, we show that graph meta-learners trained with our proposed few-shot
subgraph sampling outperform standard community models in the inductive setup.
We make our code publicly available.
| [
{
"created": "Tue, 2 Apr 2024 10:32:21 GMT",
"version": "v1"
}
] | 2024-09-30 | [
[
"Verhoeven",
"Ivo",
""
],
[
"Mishra",
"Pushkar",
""
],
[
"Beloch",
"Rahel",
""
],
[
"Yannakoudakis",
"Helen",
""
],
[
"Shutova",
"Ekaterina",
""
]
] |
2404.01860 | Mattia Opper | Mattia Opper and N. Siddharth | Self-StrAE at SemEval-2024 Task 1: Making Self-Structuring AutoEncoders
Learn More With Less | SemEval 2024 | Association for Computational Linguistics: SemEval 2024 | 10.18653/v1/2024.semeval-1.18 | 2024.semeval-1.18 | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This paper presents two simple improvements to the Self-Structuring
AutoEncoder (Self-StrAE). Firstly, we show that including reconstruction to the
vocabulary as an auxiliary objective improves representation quality. Secondly,
we demonstrate that increasing the number of independent channels leads to
significant improvements in embedding quality, while simultaneously reducing
the number of parameters. Surprisingly, we demonstrate that this trend can be
followed to the extreme, even to the point of reducing the total number of
non-embedding parameters to seven. Our system can be pre-trained from scratch
with as little as 10M tokens of input data, and proves effective across
English, Spanish and Afrikaans.
| [
{
"created": "Tue, 2 Apr 2024 11:38:11 GMT",
"version": "v1"
}
] | 2024-09-26 | [
[
"Opper",
"Mattia",
""
],
[
"Siddharth",
"N.",
""
]
] |
2404.01892 | Cheng Gong | Cheng Gong, Haoshuai Zheng, Mengting Hu, Zheng Lin, Deng-Ping Fan,
Yuzhi Zhang, Tao Li | Minimize Quantization Output Error with Bias Compensation | 10 pages, 6 figures | CAAI Artificial Intelligence Research, 2024 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantization is a promising method that reduces memory usage and
computational intensity of Deep Neural Networks (DNNs), but it often leads to
significant output error that hinders model deployment. In this paper, we
propose Bias Compensation (BC) to minimize the output error, thus realizing
ultra-low-precision quantization without model fine-tuning. Instead of
optimizing the non-convex quantization process as in most previous methods, the
proposed BC bypasses the step to directly minimize the quantizing output error
by identifying a bias vector for compensation. We have established that the
minimization of output error through BC is a convex problem and provides an
efficient strategy to procure optimal solutions associated with minimal output
error, without the need for training or fine-tuning. We conduct extensive
experiments on Vision Transformer models and Large Language Models, and the
results show that our method notably reduces quantization output error, thereby
permitting ultra-low-precision post-training quantization and enhancing the
task performance of models. Especially, BC improves the accuracy of ViT-B with
4-bit PTQ4ViT by 36.89% on the ImageNet-1k task, and decreases the perplexity
of OPT-350M with 3-bit GPTQ by 5.97 on WikiText2. The code is in
https://github.com/GongCheng1919/bias-compensation.
| [
{
"created": "Tue, 2 Apr 2024 12:29:31 GMT",
"version": "v1"
}
] | 2024-06-26 | [
[
"Gong",
"Cheng",
""
],
[
"Zheng",
"Haoshuai",
""
],
[
"Hu",
"Mengting",
""
],
[
"Lin",
"Zheng",
""
],
[
"Fan",
"Deng-Ping",
""
],
[
"Zhang",
"Yuzhi",
""
],
[
"Li",
"Tao",
""
]
] |
2404.01991 | Elodie Gauthier | Elodie Gauthier, Aminata Ndiaye, Abdoulaye Guiss\'e | Kallaama: A Transcribed Speech Dataset about Agriculture in the Three
Most Widely Spoken Languages in Senegal | To appear in RAIL 2024 | The Fifth Workshop on Resources for African Indigenous Languages
@LREC-COLING-2024 (RAIL) | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This work is part of the Kallaama project, whose objective is to produce and
disseminate national languages corpora for speech technologies developments, in
the field of agriculture. Except for Wolof, which benefits from some language
data for natural language processing, national languages of Senegal are largely
ignored by language technology providers. However, such technologies are keys
to the protection, promotion and teaching of these languages. Kallaama focuses
on the 3 main languages spoken by Senegalese people: Wolof, Pulaar and Sereer.
These languages are widely spoken by the population, with around 10 million
native Senegalese speakers, not to mention those outside the country. However,
they remain under-resourced in terms of machine-readable data that can be used
for automatic processing and language technologies, all the more so in the
agricultural sector. We release a transcribed speech dataset containing 125
hours of recordings, about agriculture, in each of the above-mentioned
languages. These resources are specifically designed for Automatic Speech
Recognition purpose, including traditional approaches. To build such
technologies, we provide textual corpora in Wolof and Pulaar, and a
pronunciation lexicon containing 49,132 entries from the Wolof dataset.
| [
{
"created": "Tue, 2 Apr 2024 14:31:14 GMT",
"version": "v1"
}
] | 2024-06-04 | [
[
"Gauthier",
"Elodie",
""
],
[
"Ndiaye",
"Aminata",
""
],
[
"Guissé",
"Abdoulaye",
""
]
] |
2404.02009 | Elodie Gauthier | Elodie Gauthier, Papa-S\'ega Wade, Thierry Moudenc, Patrice Collen,
Emilie De Neef, Oumar Ba, Ndeye Khoyane Cama, Cheikh Ahmadou Bamba Kebe,
Ndeye Aissatou Gningue, Thomas Mendo'o Aristide | Preuve de concept d'un bot vocal dialoguant en wolof | in French language | Actes de la 29e Conf\'erence sur le Traitement Automatique des
Langues Naturelles. Volume 1 : conf\'erence principale (Est\`eve et al.,
JEP/TALN/RECITAL 2022) | null | null | cs.CL cs.HC | http://creativecommons.org/licenses/by/4.0/ | This paper presents the proof-of-concept of the first automatic voice
assistant ever built in Wolof language, the main vehicular language spoken in
Senegal. This voicebot is the result of a collaborative research project
between Orange Innovation in France, Orange Senegal (aka Sonatel) and ADNCorp,
a small IT company based in Dakar, Senegal. The purpose of the voicebot is to
provide information to Orange customers about the Sargal loyalty program of
Orange Senegal by using the most natural means of communication: speech. The
voicebot receives as input the customer's oral request, which is then processed
by an SLU system to reply to the customer's request using audio recordings. The
first results of this proof-of-concept are encouraging, as we achieved a WER of
22\% on the ASR task and an F1-score of 78\% on the NLU task.
| [
{
"created": "Tue, 2 Apr 2024 14:53:41 GMT",
"version": "v1"
}
] | 2024-04-03 | [
[
"Gauthier",
"Elodie",
""
],
[
"Wade",
"Papa-Séga",
""
],
[
"Moudenc",
"Thierry",
""
],
[
"Collen",
"Patrice",
""
],
[
"De Neef",
"Emilie",
""
],
[
"Ba",
"Oumar",
""
],
[
"Cama",
"Ndeye Khoyane",
""
],
[
"Kebe",
"Cheikh Ahmadou Bamba",
""
],
[
"Gningue",
"Ndeye Aissatou",
""
],
[
"Aristide",
"Thomas Mendo'o",
""
]
] |
2404.02068 | Zhuo Chen | Zhuo Chen, Chengyue Jiang, Kewei Tu | Using Interpretation Methods for Model Enhancement | EMNLP 2023 | Proceedings of the 2023 Conference on Empirical Methods in Natural
Language Processing, pages 424-438 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the age of neural natural language processing, there are plenty of works
trying to derive interpretations of neural models. Intuitively, when gold
rationales exist during training, one can additionally train the model to match
its interpretation with the rationales. However, this intuitive idea has not
been fully explored. In this paper, we propose a framework of utilizing
interpretation methods and gold rationales to enhance models. Our framework is
very general in the sense that it can incorporate various interpretation
methods. Previously proposed gradient-based methods can be shown as an instance
of our framework. We also propose two novel instances utilizing two other types
of interpretation methods, erasure/replace-based and extractor-based methods,
for model enhancement. We conduct comprehensive experiments on a variety of
tasks. Experimental results show that our framework is effective especially in
low-resource settings in enhancing models with various interpretation methods,
and our two newly-proposed methods outperform gradient-based methods in most
settings. Code is available at https://github.com/Chord-Chen-30/UIMER.
| [
{
"created": "Tue, 2 Apr 2024 16:10:29 GMT",
"version": "v1"
}
] | 2024-04-03 | [
[
"Chen",
"Zhuo",
""
],
[
"Jiang",
"Chengyue",
""
],
[
"Tu",
"Kewei",
""
]
] |
2404.02090 | Denis Antipov | Denis Antipov, Benjamin Doerr, Alexandra Ivanova | Already Moderate Population Sizes Provably Yield Strong Robustness to
Noise | Full version of the same-titled paper accepted at GECCO 2024 | GECCO '24: Proceedings of the Genetic and Evolutionary Computation
Conference, 1524-1532, 2024. ACM | 10.1145/3638529.3654196 | null | cs.NE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Experience shows that typical evolutionary algorithms can cope well with
stochastic disturbances such as noisy function evaluations.
In this first mathematical runtime analysis of the $(1+\lambda)$ and
$(1,\lambda)$ evolutionary algorithms in the presence of prior bit-wise noise,
we show that both algorithms can tolerate constant noise probabilities without
increasing the asymptotic runtime on the OneMax benchmark. For this, a
population size $\lambda$ suffices that is at least logarithmic in the problem
size $n$. The only previous result in this direction regarded the less
realistic one-bit noise model, required a population size super-linear in the
problem size, and proved a runtime guarantee roughly cubic in the noiseless
runtime for the OneMax benchmark. Our significantly stronger results are based
on the novel proof argument that the noiseless offspring can be seen as a
biased uniform crossover between the parent and the noisy offspring. We are
optimistic that the technical lemmas resulting from this insight will find
applications also in future mathematical runtime analyses of evolutionary
algorithms.
| [
{
"created": "Tue, 2 Apr 2024 16:35:52 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Apr 2024 01:07:43 GMT",
"version": "v2"
},
{
"created": "Thu, 2 May 2024 04:44:50 GMT",
"version": "v3"
},
{
"created": "Mon, 13 May 2024 05:01:01 GMT",
"version": "v4"
}
] | 2024-07-17 | [
[
"Antipov",
"Denis",
""
],
[
"Doerr",
"Benjamin",
""
],
[
"Ivanova",
"Alexandra",
""
]
] |
2404.02180 | Rohitash Chandra | Sandeep Nagar, Ehsan Farahbakhsh, Joseph Awange, Rohitash Chandra | Remote sensing framework for geological mapping via stacked autoencoders
and clustering | null | Advances in Space Research, 2024 | 10.1016/j.asr.2024.09.013 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Supervised machine learning methods for geological mapping via remote sensing
face limitations due to the scarcity of accurately labelled training data that
can be addressed by unsupervised learning, such as dimensionality reduction and
clustering. Dimensionality reduction methods have the potential to play a
crucial role in improving the accuracy of geological maps. Although
conventional dimensionality reduction methods may struggle with nonlinear data,
unsupervised deep learning models such as autoencoders can model non-linear
relationships. Stacked autoencoders feature multiple interconnected layers to
capture hierarchical data representations useful for remote sensing data. We
present an unsupervised machine learning-based framework for processing remote
sensing data using stacked autoencoders for dimensionality reduction and
k-means clustering for mapping geological units. We use Landsat 8, ASTER, and
Sentinel-2 datasets to evaluate the framework for geological mapping of the
Mutawintji region in Western New South Wales, Australia. We also compare
stacked autoencoders with principal component analysis (PCA) and canonical
autoencoders. Our results reveal that the framework produces accurate and
interpretable geological maps, efficiently discriminating rock units. The
results reveal that the combination of stacked autoencoders with Sentinel-2
data yields the best performance accuracy when compared to other combinations.
We find that stacked autoencoders enable better extraction of complex and
hierarchical representations of the input data when compared to canonical
autoencoders and PCA. We also find that the generated maps align with prior
geological knowledge of the study area while providing novel insights into
geological structures.
| [
{
"created": "Tue, 2 Apr 2024 09:15:32 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Jul 2024 11:11:29 GMT",
"version": "v2"
},
{
"created": "Tue, 2 Jul 2024 05:52:15 GMT",
"version": "v3"
},
{
"created": "Sat, 21 Sep 2024 06:02:47 GMT",
"version": "v4"
}
] | 2024-09-24 | [
[
"Nagar",
"Sandeep",
""
],
[
"Farahbakhsh",
"Ehsan",
""
],
[
"Awange",
"Joseph",
""
],
[
"Chandra",
"Rohitash",
""
]
] |
2404.02287 | Mehmet Ergezer | Mehmet Ergezer and Phat Duong and Christian Green and Tommy Nguyen and
Abdurrahman Zeybey | One Noise to Rule Them All: Multi-View Adversarial Attacks with
Universal Perturbation | 6 pages, 4 figures, presented at ICAIA, Springer to publish under
Algorithms for Intelligent Systems | 2nd International Conference on Artificial Intelligence and
Applications (ICAIA 2024) | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper presents a novel universal perturbation method for generating
robust multi-view adversarial examples in 3D object recognition. Unlike
conventional attacks limited to single views, our approach operates on multiple
2D images, offering a practical and scalable solution for enhancing model
scalability and robustness. This generalizable method bridges the gap between
2D perturbations and 3D-like attack capabilities, making it suitable for
real-world applications.
Existing adversarial attacks may become ineffective when images undergo
transformations like changes in lighting, camera position, or natural
deformations. We address this challenge by crafting a single universal noise
perturbation applicable to various object views. Experiments on diverse
rendered 3D objects demonstrate the effectiveness of our approach. The
universal perturbation successfully identified a single adversarial noise for
each given set of 3D object renders from multiple poses and viewpoints.
Compared to single-view attacks, our universal attacks lower classification
confidence across multiple viewing angles, especially at low noise levels. A
sample implementation is made available at
https://github.com/memoatwit/UniversalPerturbation.
| [
{
"created": "Tue, 2 Apr 2024 20:29:59 GMT",
"version": "v1"
}
] | 2024-04-04 | [
[
"Ergezer",
"Mehmet",
""
],
[
"Duong",
"Phat",
""
],
[
"Green",
"Christian",
""
],
[
"Nguyen",
"Tommy",
""
],
[
"Zeybey",
"Abdurrahman",
""
]
] |
2404.02304 | Mengjie Zhao | Mengjie Zhao, Cees Taal, Stephan Baggerohr, and Olga Fink | Virtual Sensor for Real-Time Bearing Load Prediction Using Heterogeneous
Temporal Graph Neural Networks | 8 pages, 6 figures | Vol. 8 No. 1 (2024): Proceedings of the European Conference of the
PHM Society 2024 Technical Papers | 10.36001/phme.2024.v8i1.3998 | null | cs.LG cs.AI cs.ET | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate bearing load monitoring is essential for their Prognostics and
Health Management (PHM), enabling damage assessment, wear prediction, and
proactive maintenance. While bearing sensors are typically placed on the
bearing housing, direct load monitoring requires sensors inside the bearing
itself. Recently introduced sensor rollers enable direct bearing load
monitoring but are constrained by their battery life. Data-driven virtual
sensors can learn from sensor roller data collected during a battery's lifetime
to map operating conditions to bearing loads. Although spatially distributed
bearing sensors offer insights into load distribution (e.g., correlating
temperature with load), traditional machine learning algorithms struggle to
fully exploit these spatial-temporal dependencies. To address this gap, we
introduce a graph-based virtual sensor that leverages Graph Neural Networks
(GNNs) to analyze spatial-temporal dependencies among sensor signals, mapping
existing measurements (temperature, vibration) to bearing loads. Since
temperature and vibration signals exhibit vastly different dynamics, we propose
Heterogeneous Temporal Graph Neural Networks (HTGNN), which explicitly models
these signal types and their interactions for effective load prediction. Our
results demonstrate that HTGNN outperforms Convolutional Neural Networks
(CNNs), which struggle to capture both spatial and heterogeneous signal
characteristics. These findings highlight the importance of capturing the
complex spatial interactions between temperature, vibration, and load.
| [
{
"created": "Tue, 2 Apr 2024 21:03:17 GMT",
"version": "v1"
}
] | 2024-07-29 | [
[
"Zhao",
"Mengjie",
""
],
[
"Taal",
"Cees",
""
],
[
"Baggerohr",
"Stephan",
""
],
[
"Fink",
"Olga",
""
]
] |
2404.02579 | Carlos Monserrat | David Nieves, Mar\'ia Jos\'e Ram\'irez-Quintana, Carlos Monserrat,
C\'esar Ferri, Jos\'e Hern\'andez-Orallo | Learning Alternative Ways of Performing a Task | 32 pages, Github repository, published paper, authors' version | Expert Systems With Applications, volume 148, 2020, 113263 | 10.1016/j.eswa.2020.113263 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | A common way of learning to perform a task is to observe how it is carried
out by experts. However, it is well known that for most tasks there is no
unique way to perform them. This is especially noticeable the more complex the
task is because factors such as the skill or the know-how of the expert may
well affect the way she solves the task. In addition, learning from experts
also suffers from having a small set of training examples, generally coming from
several experts (since experts are usually a limited and expensive resource),
all of them being positive examples (i.e. examples that represent successful
executions of the task). Traditional machine learning techniques are not useful
in such scenarios, as they require extensive training data. Starting from very
few executions of the task presented as activity sequences, we introduce a
novel inductive approach for learning multiple models, with each one
representing an alternative strategy of performing a task. By an iterative
process based on generalisation and specialisation, we learn the underlying
patterns that capture the different styles of performing a task exhibited by
the examples. We illustrate our approach on two common activity recognition
tasks: a surgical skills training task and a cooking domain. We evaluate the
inferred models with respect to two metrics that measure how well the models
represent the examples and capture the different forms of executing a task
shown by the examples. We compare our results with the traditional process
mining approach and show that a small set of meaningful examples is enough to
obtain patterns that capture the different strategies that are followed to
solve the tasks.
| [
{
"created": "Wed, 3 Apr 2024 08:54:58 GMT",
"version": "v1"
}
] | 2024-04-04 | [
[
"Nieves",
"David",
""
],
[
"Ramírez-Quintana",
"María José",
""
],
[
"Monserrat",
"Carlos",
""
],
[
"Ferri",
"César",
""
],
[
"Hernández-Orallo",
"José",
""
]
] |
2404.02637 | Christoph Neumann | Patrick Levi and Christoph P. Neumann | Vocabulary Attack to Hijack Large Language Model Applications | null | Proc of the 15th International Conference on Cloud Computing,
GRIDs, and Virtualization (Cloud Computing 2024), Venice, Italy, April 2024,
pp. 19-24, ISSN 2308-4294 | null | null | cs.CR cs.AI cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The fast advancements in Large Language Models (LLMs) are driving an
increasing number of applications. Together with the growing number of users,
we also see an increasing number of attackers who try to outsmart these
systems. They want the model to reveal confidential information, produce specific
false information, or exhibit offensive behavior. To this end, they manipulate their
instructions for the LLM by inserting separators or rephrasing them
systematically until they reach their goal. Our approach is different. It
inserts words from the model vocabulary. We find these words using an
optimization procedure and embeddings from another LLM (attacker LLM). We prove
our approach by goal hijacking two popular open-source LLMs from the Llama2 and
the Flan-T5 families, respectively. We present two main findings. First, our
approach creates inconspicuous instructions and therefore it is hard to detect.
For many attack cases, we find that even a single word insertion is sufficient.
Second, we demonstrate that we can conduct our attack using a model different
from the target model.
| [
{
"created": "Wed, 3 Apr 2024 10:54:07 GMT",
"version": "v1"
},
{
"created": "Thu, 30 May 2024 06:28:31 GMT",
"version": "v2"
}
] | 2024-05-31 | [
[
"Levi",
"Patrick",
""
],
[
"Neumann",
"Christoph P.",
""
]
] |
2404.02817 | Zhigen Zhao | Zhigen Zhao, Shuo Cheng, Yan Ding, Ziyi Zhou, Shiqi Zhang, Danfei Xu,
Ye Zhao | A Survey of Optimization-based Task and Motion Planning: From Classical
To Learning Approaches | 26 pages, 13 figures, published at IEEE/ASME Transactions on
Mechatronics | IEEE/ASME Transactions on Mechatronics (2024) | 10.1109/TMECH.2024.3452509 | null | cs.RO cs.AI | http://creativecommons.org/licenses/by/4.0/ | Task and Motion Planning (TAMP) integrates high-level task planning and
low-level motion planning to equip robots with the autonomy to effectively
reason over long-horizon, dynamic tasks. Optimization-based TAMP focuses on
hybrid optimization approaches that define goal conditions via objective
functions and are capable of handling open-ended goals, robotic dynamics, and
physical interaction between the robot and the environment. Therefore,
optimization-based TAMP is particularly suited to solve highly complex,
contact-rich locomotion and manipulation problems. This survey provides a
comprehensive review on optimization-based TAMP, covering (i) planning domain
representations, including action description languages and temporal logic,
(ii) individual solution strategies for components of TAMP, including AI
planning and trajectory optimization (TO), and (iii) the dynamic interplay
between logic-based task planning and model-based TO. A particular focus of
this survey is to highlight the algorithm structures to efficiently solve TAMP,
especially hierarchical and distributed approaches. Additionally, the survey
emphasizes the synergy between the classical methods and contemporary
learning-based innovations such as large language models. Furthermore, the
future research directions for TAMP are discussed in this survey, highlighting
both algorithmic and application-specific challenges.
| [
{
"created": "Wed, 3 Apr 2024 15:38:36 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Apr 2024 09:06:00 GMT",
"version": "v2"
},
{
"created": "Fri, 19 Apr 2024 14:26:25 GMT",
"version": "v3"
},
{
"created": "Sun, 30 Jun 2024 23:56:53 GMT",
"version": "v4"
},
{
"created": "Mon, 7 Oct 2024 10:09:16 GMT",
"version": "v5"
}
] | 2024-10-08 | [
[
"Zhao",
"Zhigen",
""
],
[
"Cheng",
"Shuo",
""
],
[
"Ding",
"Yan",
""
],
[
"Zhou",
"Ziyi",
""
],
[
"Zhang",
"Shiqi",
""
],
[
"Xu",
"Danfei",
""
],
[
"Zhao",
"Ye",
""
]
] |
2404.02830 | Poulami Sinhamahapatra | Poulami Sinhamahapatra, Suprosanna Shit, Anjany Sekuboyina, Malek
Husseini, David Schinz, Nicolas Lenhart, Joern Menze, Jan Kirschke, Karsten
Roscher, Stephan Guennemann | Enhancing Interpretability of Vertebrae Fracture Grading using
Human-interpretable Prototypes | Accepted for publication at the Journal of Machine Learning for
Biomedical Imaging (MELBA) https://melba-journal.org/2024:015 | Machine.Learning.for.Biomedical.Imaging. 2 (2024) | 10.59275/j.melba.2024-258b | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Vertebral fracture grading classifies the severity of vertebral fractures,
which is a challenging task in medical imaging and has recently attracted Deep
Learning (DL) models. Only a few works have attempted to make such models
human-interpretable despite the need for transparency and trustworthiness in
critical use cases like DL-assisted medical diagnosis. Moreover, such models
either rely on post-hoc methods or additional annotations. In this work, we
propose a novel interpretable-by-design method, ProtoVerse, to find relevant
sub-parts of vertebral fractures (prototypes) that reliably explain the model's
decision in a human-understandable way. Specifically, we introduce a novel
diversity-promoting loss to mitigate prototype repetitions in small datasets
with intricate semantics. We have experimented with the VerSe'19 dataset and
outperformed the existing prototype-based method. Further, our model provides
superior interpretability against the post-hoc method. Importantly, expert
radiologists validated the visual interpretability of our results, showing
clinical applicability.
| [
{
"created": "Wed, 3 Apr 2024 16:04:59 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Jul 2024 12:34:39 GMT",
"version": "v2"
}
] | 2024-08-01 | [
[
"Sinhamahapatra",
"Poulami",
""
],
[
"Shit",
"Suprosanna",
""
],
[
"Sekuboyina",
"Anjany",
""
],
[
"Husseini",
"Malek",
""
],
[
"Schinz",
"David",
""
],
[
"Lenhart",
"Nicolas",
""
],
[
"Menze",
"Joern",
""
],
[
"Kirschke",
"Jan",
""
],
[
"Roscher",
"Karsten",
""
],
[
"Guennemann",
"Stephan",
""
]
] |
2404.02869 | Sahil Rajesh Dhayalkar | Mayur Sonawane, Sahil Rajesh Dhayalkar, Siddesh Waje, Soyal
Markhelkar, Akshay Wattamwar, Seema C. Shrawne | Human Activity Recognition using Smartphones | null | International Journal of Engineering Science and Computing,
October 2018 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Human Activity Recognition is a subject of great research today and has its
applications in remote healthcare, activity tracking of the elderly or the
disabled, calorie-burn tracking, etc. In our project, we have created an
Android application that recognizes daily human activities and calculates
the calories burnt in real time. We first captured labeled triaxial
acceleration readings for different daily human activities from the
smartphone's embedded accelerometer. These readings were preprocessed using a
median filter. 42 features were extracted using various methods. We then tested
various machine learning algorithms along with dimensionality reduction.
Finally, in our Android application, we used the machine learning algorithm and
a subset of features that provided maximum accuracy and minimum model building
time. This is used for real-time activity recognition and calculation of
calories burnt using a formula based on Metabolic Equivalent.
| [
{
"created": "Wed, 3 Apr 2024 17:05:41 GMT",
"version": "v1"
}
] | 2024-04-04 | [
[
"Sonawane",
"Mayur",
""
],
[
"Dhayalkar",
"Sahil Rajesh",
""
],
[
"Waje",
"Siddesh",
""
],
[
"Markhelkar",
"Soyal",
""
],
[
"Wattamwar",
"Akshay",
""
],
[
"Shrawne",
"Seema C.",
""
]
] |
2404.02943 | Adrian Moldovan | Adrian Moldovan, Angel Ca\c{t}aron, R\u{a}zvan Andonie | Learning in Convolutional Neural Networks Accelerated by Transfer
Entropy | null | Entropy - MDPI, Year 2021, Number 9, Article Number 1218, PubMedID
34573843, ISSN 1099-4300 | 10.3390/e23091218 | null | cs.LG cs.AI cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | Recently, there has been growing interest in applying Transfer Entropy (TE) to
quantify the effective connectivity between artificial neurons. In a
feedforward network, the TE can be used to quantify the relationships between
neuron output pairs located in different layers. Our focus is on how to include
the TE in the learning mechanisms of a Convolutional Neural Network (CNN)
architecture. We introduce a novel training mechanism for CNN architectures
which integrates the TE feedback connections. Adding the TE feedback parameter
accelerates the training process, as fewer epochs are needed. On the flip side,
it adds computational overhead to each epoch. According to our experiments on
CNN classifiers, to achieve a reasonable computational overhead--accuracy
trade-off, it is efficient to consider only the inter-neural information
transfer of a random subset of the neuron pairs from the last two fully
connected layers. The TE acts as a smoothing factor, generating stability and
becoming active only periodically, not after processing each input sample.
Therefore, the TE can be considered a slowly changing meta-parameter in our
model.
| [
{
"created": "Wed, 3 Apr 2024 13:31:49 GMT",
"version": "v1"
}
] | 2024-04-05 | [
[
"Moldovan",
"Adrian",
""
],
[
"Caţaron",
"Angel",
""
],
[
"Andonie",
"Răzvan",
""
]
] |
2404.03098 | Lucas Emanuel Resck | Lucas E. Resck, Marcos M. Raimundo, Jorge Poco | Exploring the Trade-off Between Model Performance and Explanation
Plausibility of Text Classifiers Using Human Rationales | 27 pages, 22 figures, 8 tables; to appear in NAACL Findings 2024;
code and data available at
https://github.com/visual-ds/plausible-nlp-explanations | NAACL Findings (2024) 4190-4216; NAACL 2024 | 10.18653/v1/2024.findings-naacl.262 | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Saliency post-hoc explainability methods are important tools for
understanding increasingly complex NLP models. While these methods can reflect
the model's reasoning, they may not align with human intuition, making the
explanations not plausible. In this work, we present a methodology for
incorporating rationales, which are text annotations explaining human
decisions, into text classification models. This incorporation enhances the
plausibility of post-hoc explanations while preserving their faithfulness. Our
approach is agnostic to model architectures and explainability methods. We
introduce the rationales during model training by augmenting the standard
cross-entropy loss with a novel loss function inspired by contrastive learning.
By leveraging a multi-objective optimization algorithm, we explore the
trade-off between the two loss functions and generate a Pareto-optimal frontier
of models that balance performance and plausibility. Through extensive
experiments involving diverse models, datasets, and explainability methods, we
demonstrate that our approach significantly enhances the quality of model
explanations without causing substantial (sometimes negligible) degradation in
the original model's performance.
| [
{
"created": "Wed, 3 Apr 2024 22:39:33 GMT",
"version": "v1"
}
] | 2024-08-20 | [
[
"Resck",
"Lucas E.",
""
],
[
"Raimundo",
"Marcos M.",
""
],
[
"Poco",
"Jorge",
""
]
] |
2404.03251 | Guillermo Gallego | Maik Wischow, Patrick Irmisch, Anko Boerner, Guillermo Gallego | Real-time Noise Source Estimation of a Camera System from an Image and
Metadata | 16 pages, 16 figures, 12 tables, Project page:
https://github.com/MaikWischow/Noise-Source-Estimation | Advanced Intelligent Systems, 2024 | 10.1002/aisy.202300479 | null | cs.CV cs.RO eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Autonomous machines must self-maintain proper functionality to ensure the
safety of humans and themselves. This pertains particularly to its cameras as
predominant sensors to perceive the environment and support actions. A
fundamental camera problem addressed in this study is noise. Solutions often
focus on denoising images a posteriori, that is, fighting symptoms rather than
root causes. However, tackling root causes requires identifying the noise
sources, considering the limitations of mobile platforms. This work
investigates a real-time, memory-efficient and reliable noise source estimator
that combines data- and physically-based models. To this end, a DNN that
examines an image with camera metadata for major camera noise sources is built
and trained. In addition, it quantifies unexpected factors that impact image
noise or metadata. This study investigates seven different estimators on six
datasets that include synthetic noise, real-world noise from two camera
systems, and real field campaigns. For these, only the model with the most
metadata is capable of accurately and robustly quantifying all individual noise
contributions. This method outperforms total image noise estimators and can be
plug-and-play deployed. It also serves as a basis to include more advanced
noise sources, or as part of an automatic countermeasure feedback-loop to
approach fully reliable machines.
| [
{
"created": "Thu, 4 Apr 2024 07:14:12 GMT",
"version": "v1"
}
] | 2024-04-05 | [
[
"Wischow",
"Maik",
""
],
[
"Irmisch",
"Patrick",
""
],
[
"Boerner",
"Anko",
""
],
[
"Gallego",
"Guillermo",
""
]
] |
2404.03276 | Marco Arazzi | Marco Arazzi, Serena Nicolazzo, Antonino Nocera | A Deep Reinforcement Learning Approach for Security-Aware Service
Acquisition in IoT | null | Journal of Information Security and Applications 2024 | 10.1016/j.jisa.2024.103856 | null | cs.CR cs.AI | http://creativecommons.org/licenses/by/4.0/ | The novel Internet of Things (IoT) paradigm is composed of a growing number
of heterogeneous smart objects and services that are transforming architectures
and applications, increasing systems' complexity, and the need for reliability
and autonomy. In this context, both smart objects and services are often
provided by third parties which do not give full transparency regarding the
security and privacy of the features offered. Although machine-based Service
Level Agreements (SLA) have been recently leveraged to establish and share
policies in Cloud-based scenarios, and also in the IoT context, the issue of
making end users aware of the overall system security levels and the
fulfillment of their privacy requirements through the provision of the
requested service remains a challenging task. To tackle this problem, we
propose a complete framework that defines suitable levels of privacy and
security requirements in the acquisition of services in IoT, according to the
user needs. Through the use of a Reinforcement Learning based solution, a user
agent, inside the environment, is trained to choose the best smart objects
granting access to the target services. Moreover, the solution is designed to
guarantee deadline requirements and user security and privacy needs. Finally,
to evaluate the correctness and the performance of the proposed approach, we
present an extensive experimental analysis.
| [
{
"created": "Thu, 4 Apr 2024 08:00:12 GMT",
"version": "v1"
}
] | 2024-08-27 | [
[
"Arazzi",
"Marco",
""
],
[
"Nicolazzo",
"Serena",
""
],
[
"Nocera",
"Antonino",
""
]
] |
2404.03425 | Hongruixuan Chen | Hongruixuan Chen and Jian Song and Chengxi Han and Junshi Xia and
Naoto Yokoya | ChangeMamba: Remote Sensing Change Detection With Spatiotemporal State
Space Model | Accepted by IEEE TGRS: https://ieeexplore.ieee.org/document/10565926 | IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp.
1-20, 2024, Art no. 4409720 | 10.1109/TGRS.2024.3417253 | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Convolutional neural networks (CNN) and Transformers have made impressive
progress in the field of remote sensing change detection (CD). However, both
architectures have inherent shortcomings: CNNs are constrained by a limited
receptive field that may hinder their ability to capture broader spatial
contexts, while Transformers are computationally intensive, making them costly
to train and deploy on large datasets. Recently, the Mamba architecture, based
on state space models, has shown remarkable performance in a series of natural
language processing tasks, which can effectively compensate for the
shortcomings of the above two architectures. In this paper, we explore for the
first time the potential of the Mamba architecture for remote sensing CD tasks.
We tailor the corresponding frameworks, called MambaBCD, MambaSCD, and
MambaBDA, for binary change detection (BCD), semantic change detection (SCD),
and building damage assessment (BDA), respectively. All three frameworks adopt
the cutting-edge Visual Mamba architecture as the encoder, which allows full
learning of global spatial contextual information from the input images. For
the change decoder, which is available in all three architectures, we propose
three spatio-temporal relationship modeling mechanisms, which can be naturally
combined with the Mamba architecture and fully utilize its attributes to achieve
spatio-temporal interaction of multi-temporal features, thereby obtaining
accurate change information. On five benchmark datasets, our proposed
frameworks outperform current CNN- and Transformer-based approaches without
using any complex training strategies or tricks, fully demonstrating the
potential of the Mamba architecture in CD tasks. Further experiments show that
our architecture is quite robust to degraded data. The source code will be
available at https://github.com/ChenHongruixuan/MambaCD
| [
{
"created": "Thu, 4 Apr 2024 13:06:25 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Apr 2024 10:51:34 GMT",
"version": "v2"
},
{
"created": "Sun, 14 Apr 2024 10:41:40 GMT",
"version": "v3"
},
{
"created": "Mon, 17 Jun 2024 19:57:36 GMT",
"version": "v4"
},
{
"created": "Wed, 26 Jun 2024 10:38:29 GMT",
"version": "v5"
},
{
"created": "Fri, 26 Jul 2024 06:25:48 GMT",
"version": "v6"
}
] | 2024-07-29 | [
[
"Chen",
"Hongruixuan",
""
],
[
"Song",
"Jian",
""
],
[
"Han",
"Chengxi",
""
],
[
"Xia",
"Junshi",
""
],
[
"Yokoya",
"Naoto",
""
]
] |
2404.03528 | Azmine Toushik Wasi | Azmine Toushik Wasi and Taki Hasan Rafi and Raima Islam and Dong-Kyu
Chae | BanglaAutoKG: Automatic Bangla Knowledge Graph Construction with
Semantic Neural Graph Filtering | 7 pages, 3 figures. Accepted to LREC-COLING 2024. Read in ACL
Anthology: https://aclanthology.org/2024.lrec-main.189/ | The 2024 Joint International Conference on Computational
Linguistics, Language Resources and Evaluation (LREC-COLING 2024) | null | null | cs.CL cs.IR cs.LG cs.NE cs.SI | http://creativecommons.org/licenses/by/4.0/ | Knowledge Graphs (KGs) have proven essential in information processing and
reasoning applications because they link related entities and give context-rich
information, supporting efficient information retrieval and knowledge
discovery; presenting information flow in a very effective manner. Despite
being widely used globally, Bangla is relatively underrepresented in KGs due to
a lack of comprehensive datasets, encoders, NER (named entity recognition)
models, POS (part-of-speech) taggers, and lemmatizers, hindering efficient
information processing and reasoning applications in the language. Addressing
the KG scarcity in Bengali, we propose BanglaAutoKG, a pioneering framework
that is able to automatically construct Bengali KGs from any Bangla text. We
utilize multilingual LLMs to understand various languages and correlate
entities and relations universally. By employing a translation dictionary to
identify English equivalents and extracting word features from pre-trained BERT
models, we construct the foundational KG. To reduce noise and align word
embeddings with our goal, we employ graph-based polynomial filters. Lastly, we
implement a GNN-based semantic filter, which elevates contextual understanding
and trims unnecessary edges, culminating in the formation of the definitive KG.
Empirical findings and case studies demonstrate the universal effectiveness of
our model, capable of autonomously constructing semantically enriched KGs from
any text.
| [
{
"created": "Thu, 4 Apr 2024 15:31:21 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Apr 2024 09:35:50 GMT",
"version": "v2"
},
{
"created": "Wed, 5 Jun 2024 13:39:56 GMT",
"version": "v3"
}
] | 2024-06-06 | [
[
"Wasi",
"Azmine Toushik",
""
],
[
"Rafi",
"Taki Hasan",
""
],
[
"Islam",
"Raima",
""
],
[
"Chae",
"Dong-Kyu",
""
]
] |
2404.03650 | Francis Engelmann | Francis Engelmann, Fabian Manhardt, Michael Niemeyer, Keisuke Tateno,
Marc Pollefeys, Federico Tombari | OpenNeRF: Open Set 3D Neural Scene Segmentation with Pixel-Wise Features
and Rendered Novel Views | ICLR 2024, Project page: https://opennerf.github.io | ICLR 2024 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Large visual-language models (VLMs), like CLIP, enable open-set image
segmentation to segment arbitrary concepts from an image in a zero-shot manner.
This goes beyond the traditional closed-set assumption, i.e., where models can
only segment classes from a pre-defined training set. More recently, first
works on open-set segmentation in 3D scenes have appeared in the literature.
These methods are heavily influenced by closed-set 3D convolutional approaches
that process point clouds or polygon meshes. However, these 3D scene
representations do not align well with the image-based nature of the
visual-language models. Indeed, point clouds and 3D meshes typically have a
lower resolution than images and the reconstructed 3D scene geometry might not
project well to the underlying 2D image sequences used to compute pixel-aligned
CLIP features. To address these challenges, we propose OpenNeRF which naturally
operates on posed images and directly encodes the VLM features within the NeRF.
This is similar in spirit to LERF; however, our work shows that using pixel-wise
VLM features (instead of global CLIP features) results in an overall less
complex architecture without the need for additional DINO regularization. Our
OpenNeRF further leverages NeRF's ability to render novel views and extract
open-set VLM features from areas that are not well observed in the initial
posed images. For 3D point cloud segmentation on the Replica dataset, OpenNeRF
outperforms recent open-vocabulary methods such as LERF and OpenScene by at
least +4.9 mIoU.
| [
{
"created": "Thu, 4 Apr 2024 17:59:08 GMT",
"version": "v1"
}
] | 2024-04-05 | [
[
"Engelmann",
"Francis",
""
],
[
"Manhardt",
"Fabian",
""
],
[
"Niemeyer",
"Michael",
""
],
[
"Tateno",
"Keisuke",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Tombari",
"Federico",
""
]
] |
2404.03704 | Luis Sigcha | Luis Sigcha, Luigi Borz\`i, Ignacio Pav\'on, N\'elson Costa, Susana
Costa, Pedro Arezes, Juan-Manuel L\'opez, Guillermo De Arcas | Improvement of Performance in Freezing of Gait detection in Parkinsons
Disease using Transformer networks and a single waist worn triaxial
accelerometer | null | Engineering Applications of Artificial Intelligence Volume 116,
November 2022, 105482 | 10.1016/j.engappai.2022.105482 | null | cs.LG cs.AI eess.SP | http://creativecommons.org/licenses/by/4.0/ | Freezing of gait (FOG) is one of the most incapacitating symptoms in
Parkinson's disease, affecting more than 50 percent of patients in advanced
stages of the disease. The presence of FOG may lead to falls and a loss of
independence with a consequent reduction in the quality of life. Wearable
technology and artificial intelligence have been used for automatic FOG
detection to optimize monitoring. However, differences between laboratory and
daily-life conditions present challenges for the implementation of reliable
detection systems. Consequently, improvement of FOG detection methods remains
important to provide accurate monitoring mechanisms intended for free-living
and real-time use. This paper presents advances in automatic FOG detection
using a single body-worn triaxial accelerometer and a novel classification
algorithm based on Transformers and convolutional networks. This study was
performed with data from 21 patients who manifested FOG episodes while
performing activities of daily living in a home setting. Results indicate that
the proposed FOG-Transformer can bring a significant improvement in FOG
detection using leave-one-subject-out cross-validation (LOSO CV). These results
bring opportunities for the implementation of accurate monitoring systems for
use in ambulatory or home settings.
| [
{
"created": "Thu, 4 Apr 2024 09:02:17 GMT",
"version": "v1"
}
] | 2024-04-08 | [
[
"Sigcha",
"Luis",
""
],
[
"Borzì",
"Luigi",
""
],
[
"Pavón",
"Ignacio",
""
],
[
"Costa",
"Nélson",
""
],
[
"Costa",
"Susana",
""
],
[
"Arezes",
"Pedro",
""
],
[
"López",
"Juan-Manuel",
""
],
[
"De Arcas",
"Guillermo",
""
]
] |