id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2402.05158 | AKM Shahariar Azad Rabby | AKM Shahariar Azad Rabby, Hasmot Ali, Md. Majedul Islam, Sheikh
Abujar, Fuad Rahman | Enhancement of Bengali OCR by Specialized Models and Advanced Techniques
for Diverse Document Types | 8 pages, 7 figures, 4 tables. Paper link:
https://openaccess.thecvf.com/content/WACV2024W/WVLL/html/Rabby_Enhancement_of_Bengali_OCR_by_Specialized_Models_and_Advanced_Techniques_WACVW_2024_paper.html | Proceedings of the IEEE/CVF Winter Conference on Applications of
Computer Vision (WACV) Workshops, 2024, pp. 1102-1109 | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This research paper presents a Bengali OCR system with several unique
capabilities. The system excels in reconstructing document layouts while
preserving structure, alignment, and images. It incorporates advanced image and
signature detection for accurate extraction. Specialized models for word
segmentation cater to diverse document types, including computer-composed,
letterpress, typewriter, and handwritten documents. The system handles static
and dynamic handwritten inputs, recognizing various writing styles.
Furthermore, it has the ability to recognize compound characters in Bengali.
Extensive data collection efforts provide a diverse corpus, while advanced
technical components optimize character and word recognition. Additional
contributions include image, logo, signature and table recognition, perspective
correction, layout reconstruction, and a queuing module for efficient and
scalable processing. The system demonstrates outstanding performance in
efficient and accurate text extraction and analysis.
| [
{
"created": "Wed, 7 Feb 2024 18:02:33 GMT",
"version": "v1"
}
] | 2024-02-09 | [
[
"Rabby",
"AKM Shahariar Azad",
""
],
[
"Ali",
"Hasmot",
""
],
[
"Islam",
"Md. Majedul",
""
],
[
"Abujar",
"Sheikh",
""
],
[
"Rahman",
"Fuad",
""
]
] |
2402.05248 | David González Ortega | David González-Ortega, Francisco Javier Díaz-Perna, Mario
Martínez-Zarzuela and Míriam Antón-Rodríguez | Comparative Analysis of Kinect-Based and Oculus-Based Gaze Region
Estimation Methods in a Driving Simulator | 25 pages | Sensors 2021, 21, 26 | 10.3390/s21010026 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Driver's gaze information can be crucial in driving research because of its
relation to driver attention. Particularly, the inclusion of gaze data in
driving simulators broadens the scope of research studies as they can relate
drivers' gaze patterns to their features and performance. In this paper, we
present two gaze region estimation modules integrated in a driving simulator.
One uses the 3D Kinect device and another uses the virtual reality Oculus Rift
device. The modules detect which of the seven regions into which the driving
scene was divided the driver is gazing at in every processed frame of the
route. Four methods were implemented and compared for gaze estimation, which
learn the relation between gaze displacement and head movement. Two are simpler
and based on points that try to capture this relation and two are based on
classifiers such as MLP and SVM. Experiments were carried out with 12 users
that drove on the same scenario twice, each one with a different visualization
display, first with a big screen and later with Oculus Rift. On the whole,
Oculus Rift outperformed Kinect as the best hardware for gaze estimation. The
Oculus-based gaze region estimation method with the highest performance
achieved an accuracy of 97.94%. The information provided by the Oculus Rift
module enriches the driving simulator data and enables a multimodal driving
performance analysis, in addition to the immersion and realism obtained with
the virtual reality experience provided by Oculus.
| [
{
"created": "Sun, 4 Feb 2024 18:02:58 GMT",
"version": "v1"
}
] | 2024-02-09 | [
[
"González-Ortega",
"David",
""
],
[
"Díaz-Perna",
"Francisco Javier",
""
],
[
"Martínez-Zarzuela",
"Mario",
""
],
[
"Antón-Rodríguez",
"Míriam",
""
]
] |
2402.05519 | Mike Thelwall Prof | Mike Thelwall | Can ChatGPT evaluate research quality? | null | Journal of Data and Information Science, 9(2), 1-21 | 10.2478/jdis-2024-0013 | null | cs.DL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Purpose: Assess whether ChatGPT 4.0 is accurate enough to perform research
evaluations on journal articles to automate this time-consuming task.
Design/methodology/approach: Test the extent to which ChatGPT-4 can assess the
quality of journal articles using a case study of the published scoring
guidelines of the UK Research Excellence Framework (REF) 2021 to create a
research evaluation ChatGPT. This was applied to 51 of my own articles and
compared against my own quality judgements. Findings: ChatGPT-4 can produce
plausible document summaries and quality evaluation rationales that match the
REF criteria. Its overall scores have weak correlations with my self-evaluation
scores of the same documents (averaging r=0.281 over 15 iterations, with 8
being statistically significantly different from 0). In contrast, the average
scores from the 15 iterations produced a statistically significant positive
correlation of 0.509. Thus, averaging scores from multiple ChatGPT-4 rounds
seems more effective than individual scores. The positive correlation may be
due to ChatGPT being able to extract the author's significance, rigour, and
originality claims from inside each paper. If my weakest articles are removed,
then the correlation with average scores (r=0.200) falls below statistical
significance, suggesting that ChatGPT struggles to make fine-grained
evaluations. Research limitations: The data is self-evaluations of a
convenience sample of articles from one academic in one field. Practical
implications: Overall, ChatGPT does not yet seem to be accurate enough to be
trusted for any formal or informal research quality evaluation tasks. Research
evaluators, including journal editors, should therefore take steps to control
its use. Originality/value: This is the first published attempt at
post-publication expert review accuracy testing for ChatGPT.
| [
{
"created": "Thu, 8 Feb 2024 10:00:40 GMT",
"version": "v1"
}
] | 2024-05-01 | [
[
"Thelwall",
"Mike",
""
]
] |
2402.05536 | José Alberto Benítez-Andrades Ph.D. | José Alberto Benítez-Andrades, María Teresa García-Ordás,
Mayra Russo, Ahmad Sakor, Luis Daniel Fernandes Rotger and Maria-Esther Vidal | Empowering machine learning models with contextual knowledge for
enhancing the detection of eating disorders in social media posts | null | Semantic Web, Volume 4, Issue 5, pp. 873-892, 2023 | 10.3233/SW-223269 | null | cs.LG cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Social networks are vital for information sharing, especially in the health
sector for discussing diseases and treatments. These platforms, however, often
feature posts as brief texts, posing challenges for Artificial Intelligence
(AI) in understanding context. We introduce a novel hybrid approach combining
community-maintained knowledge graphs (like Wikidata) with deep learning to
enhance the categorization of social media posts. This method uses advanced
entity recognizers and linkers (like Falcon 2.0) to connect short post entities
to knowledge graphs. Knowledge graph embeddings (KGEs) and contextualized word
embeddings (like BERT) are then employed to create rich, context-based
representations of these posts.
Our focus is on the health domain, particularly in identifying posts related
to eating disorders (e.g., anorexia, bulimia) to aid healthcare providers in
early diagnosis. We tested our approach on a dataset of 2,000 tweets about
eating disorders, finding that merging word embeddings with knowledge graph
information enhances the predictive models' reliability. This methodology aims
to assist health experts in spotting patterns indicative of mental disorders,
thereby improving early detection and accurate diagnosis for personalized
medicine.
| [
{
"created": "Thu, 8 Feb 2024 10:15:41 GMT",
"version": "v1"
}
] | 2024-02-09 | [
[
"Benítez-Andrades",
"José Alberto",
""
],
[
"García-Ordás",
"María Teresa",
""
],
[
"Russo",
"Mayra",
""
],
[
"Sakor",
"Ahmad",
""
],
[
"Rotger",
"Luis Daniel Fernandes",
""
],
[
"Vidal",
"Maria-Esther",
""
]
] |
2402.05554 | Jiajun Zeng | Jiayu Peng, Jiajun Zeng, Manlin Lai, Ruobing Huang, Dong Ni, Zhenzhou
Li | One-Stop Automated Diagnostic System for Carpal Tunnel Syndrome in
Ultrasound Images Using Deep Learning | Accepted by Ultrasound in Medicine & Biology | Ultrasound in Medicine & Biology, Volume 50, Issue 2, February
2024, Pages 304-314 | 10.1016/j.ultrasmedbio.2023.10.009 | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: Ultrasound (US) examination has unique advantages in diagnosing
carpal tunnel syndrome (CTS), but identifying the median nerve (MN) and
diagnosing CTS depend heavily on the expertise of examiners. To alleviate this
problem, we aimed to develop a one-stop automated CTS diagnosis system
(OSA-CTSD) and evaluate its effectiveness as a computer-aided diagnostic tool.
Methods: We combined real-time MN delineation, accurate biometric measurements,
and explainable CTS diagnosis into a unified framework, called OSA-CTSD. We
collected a total of 32,301 static images from US videos of 90 normal wrists
and 40 CTS wrists for evaluation using a simplified scanning protocol. Results:
The proposed model showed better segmentation and measurement performance than
competing methods, with an HD95 score of 7.21 px, an ASSD score of 2.64 px,
a Dice score of 85.78%, and an IoU score of 76.00%. In the reader
study, it demonstrated performance comparable to the average of experienced
radiologists in classifying CTS, while outperforming
inexperienced radiologists in terms of classification metrics (e.g., accuracy
score of 3.59% higher and F1 score of 5.85% higher). Conclusion: The OSA-CTSD
demonstrated promising diagnostic performance with the advantages of real-time,
automation, and clinical interpretability. The application of such a tool can
not only reduce reliance on the expertise of examiners, but also can help to
promote the future standardization of the CTS diagnosis process, benefiting
both patients and radiologists.
| [
{
"created": "Thu, 8 Feb 2024 10:43:55 GMT",
"version": "v1"
}
] | 2024-02-09 | [
[
"Peng",
"Jiayu",
""
],
[
"Zeng",
"Jiajun",
""
],
[
"Lai",
"Manlin",
""
],
[
"Huang",
"Ruobing",
""
],
[
"Ni",
"Dong",
""
],
[
"Li",
"Zhenzhou",
""
]
] |
2402.05571 | José Alberto Benítez-Andrades Ph.D. | José Alberto Benítez-Andrades, José-Manuel Alija-Pérez,
Maria-Esther Vidal, Rafael Pastor-Vargas and María Teresa García-Ordás | Traditional Machine Learning Models and Bidirectional Encoder
Representations From Transformer (BERT)-Based Automatic Classification of
Tweets About Eating Disorders: Algorithm Development and Validation Study | null | JMIR Medical Informatics, Volume 10, Issue 2, 2022, ID e34492 | 10.2196/34492 | null | cs.CL cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Background: Eating disorders are increasingly prevalent, and social networks
offer valuable information.
Objective: Our goal was to identify efficient machine learning models for
categorizing tweets related to eating disorders.
Methods: Over three months, we collected tweets about eating disorders. A
2,000-tweet subset was labeled for: (1) being written by individuals with
eating disorders, (2) promoting eating disorders, (3) informativeness, and (4)
scientific content. Both traditional machine learning and deep learning models
were employed for classification, assessing accuracy, F1 score, and
computational time.
Results: From 1,058,957 collected tweets, transformer-based bidirectional
encoder representations achieved the highest F1 scores (71.1%-86.4%) across all
four categories.
Conclusions: Transformer-based models outperform traditional techniques in
classifying eating disorder-related tweets, though they require more
computational resources.
| [
{
"created": "Thu, 8 Feb 2024 11:16:13 GMT",
"version": "v1"
}
] | 2024-02-09 | [
[
"Benítez-Andrades",
"José Alberto",
""
],
[
"Alija-Pérez",
"José-Manuel",
""
],
[
"Vidal",
"Maria-Esther",
""
],
[
"Pastor-Vargas",
"Rafael",
""
],
[
"García-Ordás",
"María Teresa",
""
]
] |
2402.05593 | Thomas Pöllabauer | Thomas Pöllabauer, Julius Kühn | A Concept for Reconstructing Stucco Statues from historic Sketches using
synthetic Data only | null | Eurographics Workshop on Graphics and Cultural Heritage 2022 | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In medieval times, stuccoworkers used a red color, called sinopia, to first
create a sketch of the to-be-made statue on the wall. Today, many of these
statues are destroyed, but using the original sinopia drawings we can
reconstruct how the final statue might have
looked. We propose a fully automated approach to reconstruct a point cloud and
show preliminary results by generating a color-image, a depth-map, as well as
surface normals requiring only a single sketch, and without requiring a
collection of other, similar samples. Our proposed solution allows real-time
reconstruction on-site, for instance, within an exhibition, or to generate a
useful starting point for an expert, trying to manually reconstruct the statue,
all while using only synthetic data for training.
| [
{
"created": "Thu, 8 Feb 2024 11:46:26 GMT",
"version": "v1"
}
] | 2024-02-09 | [
[
"Pöllabauer",
"Thomas",
""
],
[
"Kühn",
"Julius",
""
]
] |
2402.05782 | Qizhen Zhang | Kitty Fung, Qizhen Zhang, Chris Lu, Jia Wan, Timon Willi, Jakob
Foerster | Analysing the Sample Complexity of Opponent Shaping | null | AAMAS 2024 | null | null | cs.LG cs.AI cs.GT cs.MA | http://creativecommons.org/licenses/by/4.0/ | Learning in general-sum games often yields collectively sub-optimal results.
Addressing this, opponent shaping (OS) methods actively guide the learning
processes of other agents, empirically leading to improved individual and group
performances in many settings. Early OS methods use higher-order derivatives to
shape the learning of co-players, making them unsuitable for shaping multiple
learning steps. Follow-up work, Model-free Opponent Shaping (M-FOS), addresses
this by reframing the OS problem as a meta-game. In contrast to early OS
methods, there is little theoretical understanding of the M-FOS framework.
Providing theoretical guarantees for M-FOS is hard because (A) there is little
literature on theoretical sample complexity bounds for meta-reinforcement
learning, and (B) M-FOS operates in continuous state and action spaces, so
theoretical analysis is challenging. In this work, we present R-FOS, a tabular
version of M-FOS that is more suitable for theoretical analysis. R-FOS
discretises the continuous meta-game MDP into a tabular MDP. Within this
discretised MDP, we adapt the $R_{max}$ algorithm, most prominently used to
derive PAC-bounds for MDPs, as the meta-learner in the R-FOS algorithm. We
derive a sample complexity bound that is exponential in the cardinality of the
inner state and action space and the number of agents. Our bound guarantees
that, with high probability, the final policy learned by an R-FOS agent is
close to the optimal policy, apart from a constant factor. Finally, we
investigate how R-FOS's sample complexity scales in the size of state-action
space. Our theoretical results on scaling are supported empirically in the
Matching Pennies environment.
| [
{
"created": "Thu, 8 Feb 2024 16:17:18 GMT",
"version": "v1"
}
] | 2024-02-09 | [
[
"Fung",
"Kitty",
""
],
[
"Zhang",
"Qizhen",
""
],
[
"Lu",
"Chris",
""
],
[
"Wan",
"Jia",
""
],
[
"Willi",
"Timon",
""
],
[
"Foerster",
"Jakob",
""
]
] |
2402.05958 | David González Ortega | Mario Martínez-Zarzuela, David González-Ortega, Míriam
Antón-Rodríguez, Francisco Javier Díaz-Pernas, Henning Müller,
Cristina Simón-Martínez | A comparative study on wearables and single-camera video for upper-limb
out-of-the-lab activity recognition with different deep learning architectures | null | Gait & Posture (2023) 106, p. 119-120 | 10.1016/j.gaitpost.2023.07.149 | null | cs.CV cs.LG eess.SP | http://creativecommons.org/licenses/by/4.0/ | The use of a wide range of computer vision solutions, and more recently
high-end Inertial Measurement Units (IMU) have become increasingly popular for
assessing human physical activity in clinical and research settings.
Nevertheless, to increase the feasibility of patient tracking in out-of-the-lab
settings, it is necessary to use a reduced number of devices for movement
acquisition. Promising solutions in this context are IMU-based wearables and
single camera systems. Additionally, the development of machine learning
systems able to recognize and digest clinically relevant data in-the-wild is
needed, and therefore determining the ideal input to those is crucial.
| [
{
"created": "Sun, 4 Feb 2024 19:45:59 GMT",
"version": "v1"
}
] | 2024-02-12 | [
[
"Martínez-Zarzuela",
"Mario",
""
],
[
"González-Ortega",
"David",
""
],
[
"Antón-Rodríguez",
"Míriam",
""
],
[
"Díaz-Pernas",
"Francisco Javier",
""
],
[
"Müller",
"Henning",
""
],
[
"Simón-Martínez",
"Cristina",
""
]
] |
2402.05975 | David González Ortega | Francisco Javier Díaz-Pernas, Mario Martínez-Zarzuela, Míriam
Antón-Rodríguez, and David González-Ortega | A Deep Learning Approach for Brain Tumor Classification and Segmentation
Using a Multiscale Convolutional Neural Network | 14 pages | Healthcare 2021, 9, 153 | 10.3390/healthcare9020153 | null | eess.IV cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | In this paper, we present a fully automatic brain tumor segmentation and
classification model using a Deep Convolutional Neural Network that includes a
multiscale approach. One of the differences of our proposal with respect to
previous works is that input images are processed in three spatial scales along
different processing pathways. This mechanism is inspired by the inherent
operation of the Human Visual System. The proposed neural model can analyze MRI
images containing three types of tumors: meningioma, glioma, and pituitary
tumor, over sagittal, coronal, and axial views and does not need preprocessing
of input images to remove skull or vertebral column parts in advance. The
performance of our method on a publicly available MRI image dataset of 3064
slices from 233 patients is compared with previously published classical
machine learning and deep learning methods. In the comparison, our method remarkably
obtained a tumor classification accuracy of 0.973, higher than the other
approaches using the same database.
| [
{
"created": "Sun, 4 Feb 2024 17:47:03 GMT",
"version": "v1"
}
] | 2024-02-12 | [
[
"Díaz-Pernas",
"Francisco Javier",
""
],
[
"Martínez-Zarzuela",
"Mario",
""
],
[
"Antón-Rodríguez",
"Míriam",
""
],
[
"González-Ortega",
"David",
""
]
] |
2402.06075 | Scotty Black | Scotty Black, Christian Darken | Scaling Artificial Intelligence for Digital Wargaming in Support of
Decision-Making | null | NATO STO-MP-MSG-207 2023 | 10.14339/STO-MP-MSG-207-23-PDF | STO-MP-MSG-207-23 | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this unprecedented era of technology-driven transformation, it becomes
more critical than ever that we aggressively invest in developing robust
artificial intelligence (AI) for wargaming in support of decision-making. By
advancing AI-enabled systems and pairing these with human judgment, we will be
able to enhance all-domain awareness, improve the speed and quality of our
decision cycles, offer recommendations for novel courses of action, and more
rapidly counter our adversary's actions. It therefore becomes imperative that
we accelerate the development of AI to help us better address the complexity of
modern challenges and dilemmas that currently require human intelligence and,
if possible, attempt to surpass human intelligence--not to replace humans, but
to augment and better inform human decision-making at machine speed. Although
deep reinforcement learning continues to show promising results in intelligent
agent behavior development for the long-horizon, complex tasks typically found
in combat modeling and simulation, further research is needed to enable the
scaling of AI to deal with these intricate and expansive state-spaces
characteristic of wargaming for either concept development, education, or
analysis. To help address this challenge, in our research, we are developing
and implementing a hierarchical reinforcement learning framework that includes
a multi-model approach and dimension-invariant observation abstractions.
| [
{
"created": "Thu, 8 Feb 2024 21:51:07 GMT",
"version": "v1"
}
] | 2024-02-12 | [
[
"Black",
"Scotty",
""
],
[
"Darken",
"Christian",
""
]
] |
2402.06078 | Ruben Martinez-Cantin | Pedro Osório, Alexandre Bernardino, Ruben Martinez-Cantin, José
Santos-Victor | Gaussian Mixture Models for Affordance Learning using Bayesian Networks | IEEE/RSJ International Conference on Intelligent Robots and Systems
2010 | Published on the Proceedings of the IEEE/RSJ International
Conference on Intelligent Robots and Systems 2010 | 10.1109/IROS.2010.5650297 | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Affordances are fundamental descriptors of relationships between actions,
objects and effects. They provide the means whereby a robot can predict
effects, recognize actions, select objects and plan its behavior according to
desired goals. This paper approaches the problem of an embodied agent exploring
the world and learning these affordances autonomously from its sensory
experiences. Models exist for learning the structure and the parameters of a
Bayesian Network encoding this knowledge. Although Bayesian Networks are
capable of dealing with uncertainty and redundancy, previous work considered
complete observability of the discrete sensory data, which may lead to hard
errors in the presence of noise. In this paper we consider a probabilistic
representation of the sensors by Gaussian Mixture Models (GMMs) and explicitly
take into account the probability distribution contained in each discrete
affordance concept, which can lead to more accurate learning.
| [
{
"created": "Thu, 8 Feb 2024 22:05:45 GMT",
"version": "v1"
}
] | 2024-02-12 | [
[
"Osório",
"Pedro",
""
],
[
"Bernardino",
"Alexandre",
""
],
[
"Martinez-Cantin",
"Ruben",
""
],
[
"Santos-Victor",
"José",
""
]
] |
2402.06107 | Feng Xia | Yemeng Liu, Jing Ren, Jianshuo Xu, Xiaomei Bai, Roopdeep Kaur, Feng
Xia | Multiple Instance Learning for Cheating Detection and Localization in
Online Examinations | 12 pages, 7 figures | IEEE Transactions on Cognitive and Developmental Systems 2024 | 10.1109/TCDS.2024.3349705 | null | cs.CV cs.AI cs.CY cs.LG | http://creativecommons.org/licenses/by/4.0/ | The spread of the Coronavirus disease-2019 epidemic has caused many courses
and exams to be conducted online. The cheating behavior detection model in
examination invigilation systems plays a pivotal role in guaranteeing the
equality of long-distance examinations. However, cheating behavior is rare, and
most researchers do not comprehensively take into account features such as head
posture, gaze angle, body posture, and background information in the task of
cheating behavior detection. In this paper, we develop and present CHEESE, a
CHEating detection framework via multiplE inStancE learning. The framework
consists of a label generator that implements weak supervision and a feature
encoder to learn discriminative features. In addition, the framework combines
body posture and background features extracted by 3D convolution with eye gaze,
head posture and facial features captured by OpenFace 2.0. These features are
stitched together and fed into the spatio-temporal graph module to analyze the
spatio-temporal changes in video clips and detect cheating behaviors. Our
experiments on three datasets, UCF-Crime, ShanghaiTech and Online Exam
Proctoring (OEP), demonstrate the effectiveness of our method compared to
state-of-the-art approaches, achieving a frame-level AUC score of 87.58% on
the OEP dataset.
| [
{
"created": "Fri, 9 Feb 2024 00:01:42 GMT",
"version": "v1"
}
] | 2024-02-12 | [
[
"Liu",
"Yemeng",
""
],
[
"Ren",
"Jing",
""
],
[
"Xu",
"Jianshuo",
""
],
[
"Bai",
"Xiaomei",
""
],
[
"Kaur",
"Roopdeep",
""
],
[
"Xia",
"Feng",
""
]
] |
2402.06563 | Neslihan Suzen | Neslihan Suzen, Evgeny M. Mirkes, Damian Roland, Jeremy Levesley,
Alexander N. Gorban, Tim J. Coats | What is Hiding in Medicine's Dark Matter? Learning with Missing Data in
Medical Practices | 8 pages | 2023 IEEE International Conference on Big Data (BigData),
4979-4986 | 10.1109/BigData59044.2023.10386194 | null | cs.LG cs.AI cs.CL cs.HC cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | Electronic patient records (EPRs) produce a wealth of data but contain
significant missing information. Understanding and handling this missing data
is an important part of clinical data analysis and if left unaddressed could
result in bias in analysis and distortion in critical conclusions. Missing data
may be linked to health care professional practice patterns and imputation of
missing data can increase the validity of clinical decisions. This study
focuses on statistical approaches for understanding and interpreting the
missing data and machine learning based clinical data imputation using a single
centre's paediatric emergency data and the data from UK's largest clinical
audit for traumatic injury database (TARN). In the study of 56,961 data points
related to initial vital signs and observations taken on children presenting to
an Emergency Department, we have shown that missing data are likely to be
non-random and how these are linked to health care professional practice
patterns. We have then examined 79 TARN fields with missing values for 5,791
trauma cases. Singular Value Decomposition (SVD) and k-Nearest Neighbour (kNN)
based missing data imputation methods are used and imputation results against
the original dataset are compared and statistically tested. We have concluded
that the 1NN imputer performs best, which reflects a common pattern of
clinical decision making: find the most similar patients and take their
attributes as the imputation.
| [
{
"created": "Fri, 9 Feb 2024 17:27:35 GMT",
"version": "v1"
}
] | 2024-02-12 | [
[
"Suzen",
"Neslihan",
""
],
[
"Mirkes",
"Evgeny M.",
""
],
[
"Roland",
"Damian",
""
],
[
"Levesley",
"Jeremy",
""
],
[
"Gorban",
"Alexander N.",
""
],
[
"Coats",
"Tim J.",
""
]
] |
2402.06640 | Ishir Rao | Ishir Rao | Modeling and Optimization of Epidemiological Control Policies Through
Reinforcement Learning | 22 pages, 8 figures | J. Emerging Investigators Article (2023) Vol. 6 | 10.59720/22-157 | null | cs.AI q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Pandemics involve the high transmission of a disease that impacts global and
local health and economic patterns. The impact of a pandemic can be minimized
by enforcing certain restrictions on a community. However, while minimizing
infection and death rates, these restrictions can also lead to economic crises.
Epidemiological models help propose pandemic control strategies based on
non-pharmaceutical interventions such as social distancing, curfews, and
lockdowns, reducing the economic impact of these restrictions. However,
designing manual control strategies while considering disease spread and
economic status is non-trivial. Optimal strategies can be designed through
multi-objective reinforcement learning (MORL) models, which demonstrate how
restrictions can be used to optimize the outcome of a pandemic. In this
research, we utilized an epidemiological Susceptible, Exposed, Infected,
Recovered, Deceased (SEIRD) model: a compartmental model for virtually
simulating a pandemic day by day. We combined the SEIRD model with a deep
double recurrent Q-network to train a reinforcement learning agent to enforce
the optimal restriction on the SEIRD simulation based on a reward function. We
tested two agents with unique reward functions and pandemic goals to obtain two
strategies. The first agent placed long lockdowns to reduce the initial spread
of the disease, followed by cyclical and shorter lockdowns to mitigate the
resurgence of the disease. The second agent provided similar infection rates
but an improved economy by implementing a 10-day lockdown and 20-day
no-restriction cycle. This use of reinforcement learning and epidemiological
modeling allowed for both economic and infection mitigation in multiple
pandemic scenarios.
| [
{
"created": "Thu, 25 Jan 2024 22:39:39 GMT",
"version": "v1"
}
] | 2024-02-13 | [
[
"Rao",
"Ishir",
""
]
] |
2402.06694 | Scotty Black | Scotty Black, Christian Darken | Scaling Intelligent Agents in Combat Simulations for Wargaming | arXiv admin note: text overlap with arXiv:2402.06075 | I/ITSEC Conference Proceedings 2023 | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Remaining competitive in future conflicts with technologically-advanced
competitors requires us to accelerate our research and development in
artificial intelligence (AI) for wargaming. More importantly, leveraging
machine learning for intelligent combat behavior development will be key to one
day achieving superhuman performance in this domain--elevating the quality and
accelerating the speed of our decisions in future wars. Although deep
reinforcement learning (RL) continues to show promising results in intelligent
agent behavior development in games, it has yet to perform at or above the
human level in the long-horizon, complex tasks typically found in combat
modeling and simulation. Capitalizing on the proven potential of RL and recent
successes of hierarchical reinforcement learning (HRL), our research is
investigating and extending the use of HRL to create intelligent agents capable
of performing effectively in these large and complex simulation environments.
Our ultimate goal is to develop an agent capable of superhuman performance that
could then serve as an AI advisor to military planners and decision-makers.
This paper covers our ongoing approach and the first three of our five
research areas aimed at managing the exponential growth of computations that
have thus far limited the use of AI in combat simulations: (1) developing an
HRL training framework and agent architecture for combat units; (2) developing
a multi-model framework for agent decision-making; (3) developing
dimension-invariant observation abstractions of the state space to manage the
exponential growth of computations; (4) developing an intrinsic rewards engine
to enable long-term planning; and (5) implementing this framework into a
higher-fidelity combat simulation.
| [
{
"created": "Thu, 8 Feb 2024 21:57:10 GMT",
"version": "v1"
}
] | 2024-02-13 | [
[
"Black",
"Scotty",
""
],
[
"Darken",
"Christian",
""
]
] |
2402.06733 | Satvik Golechha | Pragya Srivastava, Satvik Golechha, Amit Deshpande, Amit Sharma | NICE: To Optimize In-Context Examples or Not? | Accepted as a full paper (9 pages) at ACL 2024 (Main) | Proceedings of the 62nd Annual Meeting of the Association for
Computational Linguistics 2024 (Volume 1: Long Papers) | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent work shows that in-context learning and optimization of in-context
examples (ICE) can significantly improve the accuracy of large language models
(LLMs) on a wide range of tasks, leading to an apparent consensus that ICE
optimization is crucial for better performance. However, most of these studies
assume a fixed or no instruction provided in the prompt. We challenge this
consensus by investigating the necessity of optimizing ICE when task-specific
instructions are provided and find that there are many tasks for which it
yields diminishing returns. In particular, using a diverse set of tasks and a
systematically created instruction set with gradually added details, we find
that as the prompt instruction becomes more detailed, the returns on ICE
optimization diminish. To characterize this behavior, we introduce a
task-specific metric called Normalized Invariability to Choice of Examples
(NICE) that quantifies the learnability of tasks from a given instruction, and
provides a heuristic to help decide whether to optimize instructions or ICE for
a new task. Given a task, the proposed metric can reliably predict the utility
of optimizing ICE compared to using random ICE. Our code is available at
https://github.com/microsoft/nice-icl.
| [
{
"created": "Fri, 9 Feb 2024 19:09:19 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Feb 2024 12:08:38 GMT",
"version": "v2"
},
{
"created": "Thu, 6 Jun 2024 12:16:55 GMT",
"version": "v3"
}
] | 2024-06-07 | [
[
"Srivastava",
"Pragya",
""
],
[
"Golechha",
"Satvik",
""
],
[
"Deshpande",
"Amit",
""
],
[
"Sharma",
"Amit",
""
]
] |
2402.06784 | Stefano Martina PhD | Matteo Paiano, Stefano Martina, Carlotta Giannelli, Filippo Caruso | Transfer learning with generative models for object detection on limited
datasets | 28 pages, 16 figures, 1 table | 2024 Mach. Learn.: Sci. Technol. 5 035041 | 10.1088/2632-2153/ad65b5 | null | cs.CV cond-mat.dis-nn cs.AI cs.LG cs.NA math.NA | http://creativecommons.org/licenses/by/4.0/ | The availability of data is limited in some fields, especially for object
detection tasks, where it is necessary to have correctly labeled bounding boxes
around each object. A notable example of such data scarcity is found in the
domain of marine biology, where it is useful to develop methods to
automatically detect submarine species for environmental monitoring. To address
this data limitation, the state-of-the-art machine learning strategies employ
two main approaches. The first involves pretraining models on existing datasets
before generalizing to the specific domain of interest. The second strategy is
to create synthetic datasets specifically tailored to the target domain using
methods like copy-paste techniques or ad-hoc simulators. The first strategy
often faces a significant domain shift, while the second demands custom
solutions crafted for the specific task. In response to these challenges, here
we propose a transfer learning framework that is valid for a generic scenario.
In this framework, generated images help to improve the performances of an
object detector in a few-real data regime. This is achieved through a
diffusion-based generative model that was pretrained on large generic datasets.
With respect to the state-of-the-art, we find that it is not necessary to
fine-tune the generative model on the specific domain of interest. We believe that
this is an important advance because it mitigates the labor-intensive task of
manually labeling images in object detection tasks. We validate our approach
focusing on fishes in an underwater environment, and on the more common domain
of cars in an urban setting. Our method achieves detection performance
comparable to models trained on thousands of images, using only a few hundred
input data points. Our results pave the way for new generative AI-based protocols
for machine learning applications in various domains.
| [
{
"created": "Fri, 9 Feb 2024 21:17:31 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Jun 2024 10:09:51 GMT",
"version": "v2"
}
] | 2024-09-11 | [
[
"Paiano",
"Matteo",
""
],
[
"Martina",
"Stefano",
""
],
[
"Giannelli",
"Carlotta",
""
],
[
"Caruso",
"Filippo",
""
]
] |
2402.07043 | Yunzhen Feng | Elvis Dohmatob, Yunzhen Feng, Pu Yang, Francois Charton and Julia
Kempe | A Tale of Tails: Model Collapse as a Change of Scaling Laws | null | ICML 2024 | null | null | cs.LG cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As AI model size grows, neural scaling laws have become a crucial tool to
predict the improvements of large models when increasing capacity and the size
of original (human or natural) training data. Yet, the widespread use of
popular models means that the ecosystem of online data and text will co-evolve
to progressively contain increased amounts of synthesized data. In this paper
we ask: How will the scaling laws change in the inevitable regime where
synthetic data makes its way into the training corpus? Will future models
still improve, or be doomed to degenerate up to total (model) collapse? We
develop a theoretical framework of model collapse through the lens of scaling
laws. We discover a wide range of decay phenomena, analyzing loss of scaling,
shifted scaling with the number of generations, the "un-learning" of skills, and
grokking when mixing human and synthesized data. Our theory is validated by
large-scale experiments with a transformer on an arithmetic task and text
generation using the large language model Llama2.
| [
{
"created": "Sat, 10 Feb 2024 21:06:34 GMT",
"version": "v1"
},
{
"created": "Fri, 31 May 2024 12:27:52 GMT",
"version": "v2"
}
] | 2024-06-03 | [
[
"Dohmatob",
"Elvis",
""
],
[
"Feng",
"Yunzhen",
""
],
[
"Yang",
"Pu",
""
],
[
"Charton",
"Francois",
""
],
[
"Kempe",
"Julia",
""
]
] |
2402.07085 | Kenichi Fujita | Kenichi Fujita, Atsushi Ando, Yusuke Ijima | Speech Rhythm-Based Speaker Embeddings Extraction from Phonemes and
Phoneme Duration for Multi-Speaker Speech Synthesis | 11 pages, 9 figures, Accepted to IEICE TRANSACTIONS on Information and
Systems | IEICE TRANSACTIONS on Information and Systems 107.1 (2024): 93-104 | 10.1587/transinf.2023EDP7039 | null | cs.SD cs.CL cs.LG eess.AS | http://creativecommons.org/licenses/by-sa/4.0/ | This paper proposes a speech rhythm-based method for speaker embeddings to
model phoneme duration using a few utterances by the target speaker. Speech
rhythm is one of the essential factors among speaker characteristics, along
with acoustic features such as F0, for reproducing individual utterances in
speech synthesis. A novel feature of the proposed method is the rhythm-based
embeddings extracted from phonemes and their durations, which are known to be
related to speaking rhythm. They are extracted with a speaker identification
model similar to the conventional spectral feature-based one. We conducted
three experiments, speaker embeddings generation, speech synthesis with
generated embeddings, and embedding space analysis, to evaluate the
performance. The proposed method demonstrated a moderate speaker identification
performance (15.2% EER), even with only phonemes and their duration
information. The objective and subjective evaluation results demonstrated that
the proposed method can synthesize speech with speech rhythm closer to the
target speaker than the conventional method. We also visualized the embeddings
to evaluate the relationship between the distance of the embeddings and the
perceptual similarity. The visualization of the embedding space and the
analysis of the relation between embedding distance and perceptual similarity
indicated that the distribution of embeddings reflects the subjective and
objective similarity.
| [
{
"created": "Sun, 11 Feb 2024 02:26:43 GMT",
"version": "v1"
}
] | 2024-02-13 | [
[
"Fujita",
"Kenichi",
""
],
[
"Ando",
"Atsushi",
""
],
[
"Ijima",
"Yusuke",
""
]
] |
2402.07244 | Junhao Song | Junhao Song, Yingfang Yuan, Wei Pang | SAIS: A Novel Bio-Inspired Artificial Immune System Based on Symbiotic
Paradigm | null | Proceedings of the Genetic and Evolutionary Computation Conf.
Companion, GECCO '24, 2024, pp. 2115-2118 | 10.1145/3638530.3664188 | null | cs.NE cs.AI | http://creativecommons.org/licenses/by/4.0/ | We propose a novel type of Artificial Immune System (AIS): Symbiotic
Artificial Immune Systems (SAIS), drawing inspiration from symbiotic
relationships in biology. SAIS parallels the three key stages (i.e., mutualism,
commensalism and parasitism) of population updating from the Symbiotic
Organisms Search (SOS) algorithm. This parallel approach effectively addresses
the challenges of large population size and enhances population diversity in
AIS, which traditional AIS and SOS struggle to resolve efficiently. We
conducted a series of experiments, which demonstrated that our SAIS achieved
comparable performance to the state-of-the-art approach SOS and outperformed
other popular AIS approaches and evolutionary algorithms across 26 benchmark
problems. Furthermore, we investigated the problem of parameter selection and
found that SAIS performs better in handling larger population sizes while
requiring fewer generations. Finally, we believe SAIS, as a novel bio-inspired
and immune-inspired algorithm, paves the way for innovation in bio-inspired
computing with the symbiotic paradigm.
| [
{
"created": "Sun, 11 Feb 2024 16:58:59 GMT",
"version": "v1"
}
] | 2024-09-24 | [
[
"Song",
"Junhao",
""
],
[
"Yuan",
"Yingfang",
""
],
[
"Pang",
"Wei",
""
]
] |
2402.07301 | Atharva Pandey | Atharva Pandey, Vishal Yadav, Rajendra Nagar, Santanu Chaudhury | LISR: Learning Linear 3D Implicit Surface Representation Using Compactly
Supported Radial Basis Functions | null | AAAI 2024 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Implicit 3D surface reconstruction of an object from its partial and noisy 3D
point cloud scan is the classical geometry processing and 3D computer vision
problem. In the literature, various 3D shape representations have been
developed, differing in memory efficiency and shape retrieval effectiveness,
such as volumetric, parametric, and implicit surfaces. Radial basis functions
provide memory-efficient parameterization of the implicit surface. However, we
show that training a neural network using the mean squared error between the
ground-truth implicit surface and the linear basis-based implicit surfaces does
not converge to the global solution. In this work, we propose locally supported
compact radial basis functions for a linear representation of the implicit
surface. This representation enables us to generate 3D shapes with arbitrary
topologies at any resolution due to their continuous nature. We then propose a
neural network architecture for learning the linear implicit shape
representation of the 3D surface of an object. We learn linear implicit shapes
within a supervised learning framework using ground truth Signed-Distance Field
(SDF) data for guidance. The classical strategies face difficulties in finding
linear implicit shapes from a given 3D point cloud due to numerical issues
(requiring the inversion of a large matrix) in basis and query point
selection. The proposed approach achieves better Chamfer distance and
comparable F-score than the state-of-the-art approach on the benchmark dataset.
We also show the effectiveness of the proposed approach by using it for the 3D
shape completion task.
| [
{
"created": "Sun, 11 Feb 2024 20:42:49 GMT",
"version": "v1"
}
] | 2024-02-13 | [
[
"Pandey",
"Atharva",
""
],
[
"Yadav",
"Vishal",
""
],
[
"Nagar",
"Rajendra",
""
],
[
"Chaudhury",
"Santanu",
""
]
] |
2402.07386 | Qingkai Zeng | Qingkai Zeng, Yuyang Bai, Zhaoxuan Tan, Shangbin Feng, Zhenwen Liang,
Zhihan Zhang, Meng Jiang | Chain-of-Layer: Iteratively Prompting Large Language Models for Taxonomy
Induction from Limited Examples | null | Published in CIKM 2024 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic taxonomy induction is crucial for web search, recommendation
systems, and question answering. Manual curation of taxonomies is expensive in
terms of human effort, making automatic taxonomy construction highly desirable.
In this work, we introduce Chain-of-Layer, an in-context learning
framework designed to induce taxonomies from a given set of entities.
Chain-of-Layer breaks down the task into selecting relevant candidate entities
in each layer and gradually building the taxonomy from top to bottom. To
minimize errors, we introduce the Ensemble-based Ranking Filter to reduce the
hallucinated content generated at each iteration. Through extensive
experiments, we demonstrate that Chain-of-Layer achieves state-of-the-art
performance on four real-world benchmarks.
| [
{
"created": "Mon, 12 Feb 2024 03:05:54 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Jul 2024 02:46:50 GMT",
"version": "v2"
}
] | 2024-07-26 | [
[
"Zeng",
"Qingkai",
""
],
[
"Bai",
"Yuyang",
""
],
[
"Tan",
"Zhaoxuan",
""
],
[
"Feng",
"Shangbin",
""
],
[
"Liang",
"Zhenwen",
""
],
[
"Zhang",
"Zhihan",
""
],
[
"Jiang",
"Meng",
""
]
] |
2402.07422 | Chufeng Jiang | Tianrui Liu, Changxin Xu, Yuxin Qiao, Chufeng Jiang, Weisheng Chen | News Recommendation with Attention Mechanism | 7 pages, Journal of Industrial Engineering and Applied Science | Journal of Industrial Engineering and Applied Science 2024 | 10.5281/zenodo.10635481 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores the area of news recommendation, a key component of
online information sharing. Initially, we provide a clear introduction to news
recommendation, defining the core problem and summarizing current methods and
notable recent algorithms. We then present our work on implementing the NRAM
(News Recommendation with Attention Mechanism), an attention-based approach for
news recommendation, and assess its effectiveness. Our evaluation shows that
NRAM has the potential to significantly improve how news content is
personalized for users on digital news platforms.
| [
{
"created": "Mon, 12 Feb 2024 05:56:12 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Feb 2024 02:46:17 GMT",
"version": "v2"
}
] | 2024-02-21 | [
[
"Liu",
"Tianrui",
""
],
[
"Xu",
"Changxin",
""
],
[
"Qiao",
"Yuxin",
""
],
[
"Jiang",
"Chufeng",
""
],
[
"Chen",
"Weisheng",
""
]
] |
2402.07429 | Chufeng Jiang | Tianrui Liu, Changxin Xu, Yuxin Qiao, Chufeng Jiang, Jiqiang Yu | Particle Filter SLAM for Vehicle Localization | 6 pages, Journal of Industrial Engineering and Applied Science | Journal of Industrial Engineering and Applied Science 2024 | 10.5281/zenodo.10635489 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simultaneous Localization and Mapping (SLAM) presents a formidable challenge
in robotics, involving the dynamic construction of a map while concurrently
determining the precise location of the robotic agent within an unfamiliar
environment. This intricate task is further compounded by the inherent
"chicken-and-egg" dilemma, where accurate mapping relies on a dependable
estimation of the robot's location, and vice versa. Moreover, the computational
intensity of SLAM adds an additional layer of complexity, making it a crucial
yet demanding topic in the field. In our research, we address the challenges of
SLAM by adopting the Particle Filter SLAM method. Our approach leverages
encoded data and fiber optic gyro (FOG) information to enable precise
estimation of vehicle motion, while lidar technology contributes to
environmental perception by providing detailed insights into surrounding
obstacles. The integration of these data streams culminates in the
establishment of a Particle Filter SLAM framework, representing a key endeavor
in this paper to effectively navigate and overcome the complexities associated
with simultaneous localization and mapping in robotic systems.
| [
{
"created": "Mon, 12 Feb 2024 06:06:09 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Feb 2024 02:42:33 GMT",
"version": "v2"
}
] | 2024-02-21 | [
[
"Liu",
"Tianrui",
""
],
[
"Xu",
"Changxin",
""
],
[
"Qiao",
"Yuxin",
""
],
[
"Jiang",
"Chufeng",
""
],
[
"Yu",
"Jiqiang",
""
]
] |
2402.07526 | Gilles Bertrand | Gilles Bertrand (LIGM) | Morse sequences | null | International Conference on Discrete Geometry and Mathematical
Morphology (DGMM), Apr 2024, Florence, Italy | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the notion of a Morse sequence, which provides a simple and
effective approach to discrete Morse theory. A Morse sequence is a sequence
composed solely of two elementary operations, that is, expansions (the inverse
of a collapse), and fillings (the inverse of a perforation). We show that a
Morse sequence may be seen as an alternative way to represent the gradient
vector field of an arbitrary discrete Morse function. We also show that it is
possible, in a straightforward manner, to make a link between Morse sequences
and different kinds of Morse functions. At last, we introduce maximal Morse
sequences, which formalize two basic schemes for building a Morse sequence from
an arbitrary simplicial complex.
| [
{
"created": "Mon, 12 Feb 2024 09:49:56 GMT",
"version": "v1"
}
] | 2024-02-13 | [
[
"Bertrand",
"Gilles",
"",
"LIGM"
]
] |
2402.07547 | Stefania Costantini | Stefania Costantini | Ensuring trustworthy and ethical behaviour in intelligent logical agents | null | Journal of Logic and Computation, Volume 32, Issue 2, March 2022,
Pages 443-478 | 10.1093/logcom/exab091 | null | cs.MA cs.AI cs.LO cs.SC | http://creativecommons.org/licenses/by/4.0/ | Autonomous Intelligent Agents are employed in many applications upon which
the life and welfare of living beings and vital social functions may depend.
Therefore, agents should be trustworthy. A priori certification techniques
(i.e., techniques applied prior to system's deployment) can be useful, but are
not sufficient for agents that evolve, and thus modify their epistemic and
belief state, and for open Multi-Agent Systems, where heterogeneous agents can
join or leave the system at any stage of its operation. In this paper, we
propose/refine/extend dynamic (runtime) logic-based self-checking techniques,
devised to ensure agents' trustworthy and ethical behaviour.
| [
{
"created": "Mon, 12 Feb 2024 10:19:17 GMT",
"version": "v1"
}
] | 2024-02-13 | [
[
"Costantini",
"Stefania",
""
]
] |
2402.07633 | Zecheng Li | Zecheng Li, Zening Zeng, Yuqi Liang, Jin-Gang Yu | Complete Instances Mining for Weakly Supervised Instance Segmentation | 7 pages | Proceedings of the Thirty-Second International Joint Conference on
Artificial Intelligence(IJCAI 2023). Main Track. Pages 1142-1150 | 10.24963/ijcai.2023/127 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weakly supervised instance segmentation (WSIS) using only image-level labels
is a challenging task due to the difficulty of aligning coarse annotations with
the finer task. However, with the advancement of deep neural networks (DNNs),
WSIS has garnered significant attention. Following a proposal-based paradigm,
we encounter a redundant segmentation problem resulting from a single instance
being represented by multiple proposals. For example, we feed a picture of a
dog and proposals into the network and expect to output only one proposal
containing a dog, but the network outputs multiple proposals. To address this
problem, we propose a novel approach for WSIS that focuses on the online
refinement of complete instances through the use of MaskIoU heads to predict
the integrity scores of proposals and a Complete Instances Mining (CIM)
strategy to explicitly model the redundant segmentation problem and generate
refined pseudo labels. Our approach allows the network to become aware of
multiple instances and complete instances, and we further improve its
robustness through the incorporation of an Anti-noise strategy. Empirical
evaluations on the PASCAL VOC 2012 and MS COCO datasets demonstrate that our
method achieves state-of-the-art performance with a notable margin. Our
implementation will be made available at https://github.com/ZechengLi19/CIM.
| [
{
"created": "Mon, 12 Feb 2024 13:16:47 GMT",
"version": "v1"
}
] | 2024-02-13 | [
[
"Li",
"Zecheng",
""
],
[
"Zeng",
"Zening",
""
],
[
"Liang",
"Yuqi",
""
],
[
"Yu",
"Jin-Gang",
""
]
] |
2402.07680 | Tanmoy Dam | Tanmoy Dam, Sanjay Bhargav Dharavath, Sameer Alam, Nimrod Lilith,
Supriyo Chakraborty and Mir Feroskhan | AYDIV: Adaptable Yielding 3D Object Detection via Integrated Contextual
Vision Transformer | This paper has been accepted for ICRA 2024, and copyright will
automatically transfer to IEEE upon its availability on the IEEE portal | 2024 IEEE International Conference on Robotics and Automation
(ICRA) | 10.1109/ICRA57147.2024.10610908 | null | cs.CV cs.AI cs.RO | http://creativecommons.org/licenses/by-sa/4.0/ | Combining LiDAR and camera data has shown potential in enhancing
short-distance object detection in autonomous driving systems. Yet, the fusion
encounters difficulties with extended distance detection due to the contrast
between LiDAR's sparse data and the dense resolution of cameras. Moreover,
discrepancies in the two data representations further complicate fusion
methods. We introduce AYDIV, a novel framework integrating a tri-phase
alignment process specifically designed to enhance long-distance detection even
amidst data discrepancies. AYDIV consists of the Global Contextual Fusion
Alignment Transformer (GCFAT), which improves the extraction of camera features
and provides a deeper understanding of large-scale patterns; the Sparse Fused
Feature Attention (SFFA), which fine-tunes the fusion of LiDAR and camera
details; and the Volumetric Grid Attention (VGA) for a comprehensive spatial
data fusion. AYDIV's performance on the Waymo Open Dataset (WOD) with an
improvement of 1.24% in mAPH value (L2 difficulty) and the Argoverse2 Dataset
with a performance improvement of 7.40% in AP value demonstrates its efficacy
in comparison to other existing fusion-based methods. Our code is publicly
available at https://github.com/sanjay-810/AYDIV2
| [
{
"created": "Mon, 12 Feb 2024 14:40:43 GMT",
"version": "v1"
}
] | 2024-09-04 | [
[
"Dam",
"Tanmoy",
""
],
[
"Dharavath",
"Sanjay Bhargav",
""
],
[
"Alam",
"Sameer",
""
],
[
"Lilith",
"Nimrod",
""
],
[
"Chakraborty",
"Supriyo",
""
],
[
"Feroskhan",
"Mir",
""
]
] |
2402.07682 | Marie Candito | Marie Candito | Auxiliary Tasks to Boost Biaffine Semantic Dependency Parsing | null | Findings of the Association for Computational Linguistics: ACL
2022, pp. 2422-2429 | 10.18653/v1/2022.findings-acl.190 | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | The biaffine parser of Dozat and Manning (2017) was successfully extended to
semantic dependency parsing (SDP) (Dozat and Manning, 2018). Its performance on
graphs is surprisingly high given that, without the constraint of producing a
tree, all arcs for a given sentence are predicted independently from each other
(modulo a shared representation of tokens). To circumvent such an independence
of decision, while retaining the O(n^2) complexity and highly parallelizable
architecture, we propose to use simple auxiliary tasks that introduce some form
of interdependence between arcs. Experiments on the three English acyclic
datasets of SemEval 2015 task 18 (Oepen et al., 2015), and on French deep
syntactic cyclic graphs (Ribeyre et al., 2014) show modest but systematic
performance gains on a near state-of-the-art baseline using transformer-based
contextualized representations. This provides a simple and robust method to
boost SDP performance.
| [
{
"created": "Mon, 12 Feb 2024 14:42:33 GMT",
"version": "v1"
}
] | 2024-02-13 | [
[
"Candito",
"Marie",
""
]
] |
2402.07956 | Cristobal Romero | C. Romero, S. Ventura | Educational data mining and learning analytics: An updated survey | null | Wiley interdisciplinary reviews: Data mining and knowledge
discovery;2020; 10(3):e1355 | 10.1002/widm.1355 | null | cs.HC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This survey is an updated and improved version of the previous one published
in 2013 in this journal under the title "Data mining in education". It reviews in
a comprehensible and very general way how Educational Data Mining and Learning
Analytics have been applied to educational data. In the last decade, this
research area has evolved enormously and a wide range of related terms are now
used in the bibliography such as Academic Analytics, Institutional Analytics,
Teaching Analytics, Data-Driven Education, Data-Driven Decision-Making in
Education, Big Data in Education, and Educational Data Science. This paper
provides the current state of the art by reviewing the main publications, the
key milestones, the knowledge discovery cycle, the main educational
environments, the specific tools, the free available datasets, the most used
methods, the main objectives, and the future trends in this research area.
| [
{
"created": "Sat, 10 Feb 2024 18:48:45 GMT",
"version": "v1"
}
] | 2024-02-14 | [
[
"Romero",
"C.",
""
],
[
"Ventura",
"S.",
""
]
] |
2402.08145 | Rushang Karia | Rushang Karia, Pulkit Verma, Alberto Speranzon, Siddharth Srivastava | Epistemic Exploration for Generalizable Planning and Learning in
Non-Stationary Settings | To appear at ICAPS-24 | Proceedings of the International Conference on Automated Planning
and Scheduling, 34(1), 310-318, 2024 | 10.1609/icaps.v34i1.31489 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a new approach for continual planning and model
learning in relational, non-stationary stochastic environments. Such
capabilities are essential for the deployment of sequential decision-making
systems in the uncertain and constantly evolving real world. Working in such
practical settings with unknown (and non-stationary) transition systems and
changing tasks, the proposed framework models gaps in the agent's current state
of knowledge and uses them to conduct focused, investigative explorations. Data
collected using these explorations is used for learning generalizable
probabilistic models for solving the current task despite continual changes in
the environment dynamics. Empirical evaluations on several non-stationary
benchmark domains show that this approach significantly outperforms planning
and RL baselines in terms of sample complexity. Theoretical results show that
the system exhibits desirable convergence properties when stationarity holds.
| [
{
"created": "Tue, 13 Feb 2024 00:50:06 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Jun 2024 01:21:18 GMT",
"version": "v2"
}
] | 2024-07-24 | [
[
"Karia",
"Rushang",
""
],
[
"Verma",
"Pulkit",
""
],
[
"Speranzon",
"Alberto",
""
],
[
"Srivastava",
"Siddharth",
""
]
] |
2402.08310 | Thomas P\"ollabauer | Thomas P\"ollabauer, Julius K\"uhn, Jiayi Li, Arjan Kuijper | One-to-many Reconstruction of 3D Geometry of cultural Artifacts using a
synthetically trained Generative Model | null | 21st Eurographics Workshop on Graphics and Cultural Heritage (GCH
2023) | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Estimating the 3D shape of an object using a single image is a difficult
problem. Modern approaches achieve good results for general objects, based on
real photographs, but worse results on less expressive representations such as
historic sketches. Our automated approach generates a variety of detailed 3D
representations from a single sketch depicting a medieval statue, and can be
guided by multi-modal inputs, such as text prompts. It relies solely on
synthetic data for training, making it adoptable even in cases of only small
numbers of training examples. Our solution allows domain experts such as
curators to interactively reconstruct potential appearances of lost artifacts.
| [
{
"created": "Tue, 13 Feb 2024 09:13:30 GMT",
"version": "v1"
}
] | 2024-02-14 | [
[
"Pöllabauer",
"Thomas",
""
],
[
"Kühn",
"Julius",
""
],
[
"Li",
"Jiayi",
""
],
[
"Kuijper",
"Arjan",
""
]
] |
2402.08318 | Martin Ruskov | Alba Morollon Diaz-Faes, Carla Sofia Ribeiro Murteira, Martin Ruskov | Values That Are Explicitly Present in Fairy Tales: Comparing Samples
from German, Italian and Portuguese Traditions | In Proceedings of the Joint 3rd International Conference on Natural
Language Processing for Digital Humanities and 8th International Workshop on
Computational Linguistics for Uralic Languages | Journal of Data Mining & Digital Humanities, NLP4DH (June 4, 2024)
jdmdh:13120 | 10.46298/jdmdh.13120 | null | cs.CL cs.CY | http://creativecommons.org/licenses/by/4.0/ | Looking at how social values are represented in fairy tales can give insights
about the variations in communication of values across cultures. We study how
values are communicated in fairy tales from Portugal, Italy and Germany using a
technique called word embedding with a compass to quantify vocabulary
differences and commonalities. We study how these three national traditions
differ in their explicit references to values. To do this, we specify a list of
value-charged tokens, consider their word stems and analyse the distance
between these in a bespoke pre-trained Word2Vec model. We triangulate and
critically discuss the validity of the resulting hypotheses emerging from this
quantitative model. Our claim is that this is a reusable and reproducible
method for the study of the values explicitly referenced in historical corpora.
Finally, our preliminary findings hint at a shared cultural understanding and
the expression of values such as Benevolence, Conformity, and Universalism
across the studied cultures, suggesting the potential existence of a
pan-European cultural memory.
| [
{
"created": "Tue, 13 Feb 2024 09:26:19 GMT",
"version": "v1"
},
{
"created": "Sun, 25 Feb 2024 09:53:05 GMT",
"version": "v2"
},
{
"created": "Mon, 6 May 2024 07:19:08 GMT",
"version": "v3"
}
] | 2024-08-07 | [
[
"Diaz-Faes",
"Alba Morollon",
""
],
[
"Murteira",
"Carla Sofia Ribeiro",
""
],
[
"Ruskov",
"Martin",
""
]
] |
2402.08345 | Ufuk Can Bi\c{c}ici | Ufuk Can Bicici, Tuna Han Salih Meral, Lale Akarun | Conditional Information Gain Trellis | Accepted by Pattern Recognition Letters | Conditional Information Gain Trellis, Pattern Recognition Letters,
Volume 184, 2024, Pages 212-218, ISSN 0167-8655 | 10.1016/j.patrec.2024.06.018 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conditional computing processes an input using only part of the neural
network's computational units. Learning to execute parts of a deep
convolutional network by routing individual samples has several advantages:
Reducing the computational burden is an obvious advantage. Furthermore, if
similar classes are routed to the same path, that part of the network learns to
discriminate between finer differences and better classification accuracies can
be attained with fewer parameters. Recently, several papers have exploited this
idea to take a particular child of a node in a tree-shaped network or to skip
parts of a network. In this work, we follow a Trellis-based approach for
generating specific execution paths in a deep convolutional neural network. We
have designed routing mechanisms that use differentiable information gain-based
cost functions to determine which subset of features in a convolutional layer
will be executed. We call our method Conditional Information Gain Trellis
(CIGT). We show that our conditional execution mechanism achieves comparable or
better model performance compared to unconditional baselines, using only a
fraction of the computational resources.
| [
{
"created": "Tue, 13 Feb 2024 10:23:45 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Jul 2024 14:18:44 GMT",
"version": "v2"
}
] | 2024-07-09 | [
[
"Bicici",
"Ufuk Can",
""
],
[
"Meral",
"Tuna Han Salih",
""
],
[
"Akarun",
"Lale",
""
]
] |
2402.08400 | Alaa Anani | Alaa Anani, Tobias Lorenz, Bernt Schiele, Mario Fritz | Adaptive Hierarchical Certification for Segmentation using Randomized
Smoothing | null | International Conference on Machine Learning (ICML), 2024 | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Certification for machine learning is proving that no adversarial sample can
evade a model within a range under certain conditions, a necessity for
safety-critical domains. Common certification methods for segmentation use a
flat set of fine-grained classes, leading to high abstain rates due to model
uncertainty across many classes. We propose a novel, more practical setting,
which certifies pixels within a multi-level hierarchy, and adaptively relaxes
the certification to a coarser level for unstable components classic methods
would abstain from, effectively lowering the abstain rate whilst providing more
certified semantically meaningful information. We mathematically formulate the
problem setup, introduce an adaptive hierarchical certification algorithm and
prove the correctness of its guarantees. Since certified accuracy does not take
the loss of information into account for coarser classes, we introduce the
Certified Information Gain ($\mathrm{CIG}$) metric, which is proportional to
the class granularity level. Our extensive experiments on the datasets
Cityscapes, PASCAL-Context, ACDC and COCO-Stuff demonstrate that our adaptive
algorithm achieves a higher $\mathrm{CIG}$ and lower abstain rate compared to
the current state-of-the-art certification method. Our code can be found here:
https://github.com/AlaaAnani/adaptive-certify.
| [
{
"created": "Tue, 13 Feb 2024 11:59:43 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jun 2024 23:02:26 GMT",
"version": "v2"
}
] | 2024-06-05 | [
[
"Anani",
"Alaa",
""
],
[
"Lorenz",
"Tobias",
""
],
[
"Schiele",
"Bernt",
""
],
[
"Fritz",
"Mario",
""
]
] |
2402.08430 | Oliviero Riganelli | Ionut Daniel Fagadau, Leonardo Mariani, Daniela Micucci and Oliviero
Riganelli | Analyzing Prompt Influence on Automated Method Generation: An Empirical
Study with Copilot | null | Proceedings of the 32nd IEEE/ACM International Conference on
Program Comprehension (ICPC 2024) | 10.1145/3643916.3644409 | null | cs.SE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative AI is changing the way developers interact with software systems,
providing services that can produce and deliver new content, crafted to satisfy
the actual needs of developers. For instance, developers can ask for new code
directly from within their IDEs by writing natural language prompts, and
integrated services based on generative AI, such as Copilot, immediately
respond to prompts by providing ready-to-use code snippets. Formulating the
prompt appropriately, and incorporating the useful information while avoiding
any information overload, can be an important factor in obtaining the right
piece of code. The task of designing good prompts is known as prompt
engineering. In this paper, we systematically investigate the influence of
eight prompt features on the style and the content of prompts, on the level of
correctness, complexity, size, and similarity to the developers' code of the
generated code. We specifically consider the task of using Copilot with 124,800
prompts obtained by systematically combining the eight considered prompt
features to generate the implementation of 200 Java methods. Results show how
some prompt features, such as the presence of examples and the summary of the
purpose of the method, can significantly influence the quality of the result.
| [
{
"created": "Tue, 13 Feb 2024 12:58:53 GMT",
"version": "v1"
}
] | 2024-02-15 | [
[
"Fagadau",
"Ionut Daniel",
""
],
[
"Mariani",
"Leonardo",
""
],
[
"Micucci",
"Daniela",
""
],
[
"Riganelli",
"Oliviero",
""
]
] |
2402.08509 | Philipp Seifer | Philipp Seifer, Daniel Hern\'andez, Ralf L\"ammel, Steffen Staab | From Shapes to Shapes: Inferring SHACL Shapes for Results of SPARQL
CONSTRUCT Queries (Extended Version) | 19 pages, 5 figures | WWW '24: Proceedings of the ACM Web Conference 2024. ACM, 2024,
pp. 2064-2074 | 10.1145/3589334.3645550 | null | cs.DB cs.AI cs.LO | http://creativecommons.org/licenses/by/4.0/ | SPARQL CONSTRUCT queries allow for the specification of data processing
pipelines that transform given input graphs into new output graphs. It is now
common to constrain graphs through SHACL shapes allowing users to understand
which data they can expect and which they cannot. However, it becomes challenging to
understand what graph data can be expected at the end of a data processing
pipeline without knowing the particular input data: Shape constraints on the
input graph may affect the output graph, but may no longer apply literally, and
new shapes may be imposed by the query template. In this paper, we study the
derivation of shape constraints that hold on all possible output graphs of a
given SPARQL CONSTRUCT query. We assume that the SPARQL CONSTRUCT query is
fixed, e.g., being part of a program, whereas the input graphs adhere to input
shape constraints but may otherwise vary over time and, thus, are mostly
unknown. We study a fragment of SPARQL CONSTRUCT queries (SCCQ) and a fragment
of SHACL (Simple SHACL). We formally define the problem of deriving the most
restrictive set of Simple SHACL shapes that constrain the results from
evaluating a SCCQ over any input graph restricted by a given set of Simple
SHACL shapes. We propose and implement an algorithm that statically analyses
input SHACL shapes and CONSTRUCT queries and prove its soundness and
complexity.
| [
{
"created": "Tue, 13 Feb 2024 15:04:11 GMT",
"version": "v1"
}
] | 2024-05-22 | [
[
"Seifer",
"Philipp",
""
],
[
"Hernández",
"Daniel",
""
],
[
"Lämmel",
"Ralf",
""
],
[
"Staab",
"Steffen",
""
]
] |
2402.08702 | Yongchao Chen | Yongchao Chen, Jacob Arkin, Yilun Hao, Yang Zhang, Nicholas Roy,
Chuchu Fan | PRompt Optimization in Multi-Step Tasks (PROMST): Integrating Human
Feedback and Heuristic-based Sampling | 62 pages, 14 figures, Published in EMNLP 2024 Main | EMNLP 2024 Main (The 2024 Conference on Empirical Methods on
Natural Language Processing ) | null | null | cs.CL cs.AI cs.HC cs.RO | http://creativecommons.org/publicdomain/zero/1.0/ | Prompt optimization aims to find the best prompt to a large language model
(LLM) for a given task. LLMs have been successfully used to help find and
improve prompt candidates for single-step tasks. However, realistic tasks for
agents are multi-step and introduce new challenges: (1) Prompt content is
likely to be more extensive and complex, making it more difficult for LLMs to
analyze errors, (2) the impact of an individual step is difficult to evaluate,
and (3) different people may have varied preferences about task execution.
While humans struggle to optimize prompts, they are good at providing feedback
about LLM outputs; we therefore introduce a new LLM-driven discrete prompt
optimization framework PRompt Optimization in Multi-Step Tasks (PROMST) that
incorporates human-designed feedback rules to automatically offer direct
suggestions for improvement. We also use an extra learned heuristic model that
predicts prompt performance to efficiently sample from prompt candidates. This
approach significantly outperforms both human-engineered prompts and several
other prompt optimization methods across 11 representative multi-step tasks (an
average 10.6\%-29.3\% improvement to current best methods on five LLMs
respectively). We believe our work can serve as a benchmark for automatic
prompt optimization for LLM-driven multi-step tasks. Datasets and Codes are
available at https://github.com/yongchao98/PROMST. Project Page is available at
https://yongchao98.github.io/MIT-REALM-PROMST.
| [
{
"created": "Tue, 13 Feb 2024 16:38:01 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Apr 2024 18:29:43 GMT",
"version": "v2"
},
{
"created": "Sun, 16 Jun 2024 18:01:06 GMT",
"version": "v3"
},
{
"created": "Thu, 3 Oct 2024 16:11:43 GMT",
"version": "v4"
}
] | 2024-10-04 | [
[
"Chen",
"Yongchao",
""
],
[
"Arkin",
"Jacob",
""
],
[
"Hao",
"Yilun",
""
],
[
"Zhang",
"Yang",
""
],
[
"Roy",
"Nicholas",
""
],
[
"Fan",
"Chuchu",
""
]
] |
2402.08957 | Yinya Huang | Yinya Huang, Xiaohan Lin, Zhengying Liu, Qingxing Cao, Huajian Xin,
Haiming Wang, Zhenguo Li, Linqi Song, Xiaodan Liang | MUSTARD: Mastering Uniform Synthesis of Theorem and Proof Data | null | ICLR 2024 spotlight | null | null | cs.AI cs.CL cs.FL cs.LG cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent large language models (LLMs) have witnessed significant advancement in
various tasks, including mathematical reasoning and theorem proving. As these
two tasks require strict and formal multi-step inference, they are appealing
domains for exploring the reasoning ability of LLMs but still face important
challenges. Previous studies such as Chain-of-Thought (CoT) have revealed the
effectiveness of intermediate steps guidance. However, such step-wise
annotation requires heavy labor, leading to insufficient training steps for
current benchmarks. To fill this gap, this work introduces MUSTARD, a data
generation framework that masters uniform synthesis of theorem and proof data
of high quality and diversity. MUSTARD synthesizes data in three stages: (1) It
samples a few mathematical concept seeds as the problem category. (2) Then, it
prompts a generative language model with the sampled concepts to obtain both
the problems and their step-wise formal solutions. (3) Lastly, the framework
utilizes a proof assistant (e.g., Lean Prover) to filter the valid proofs. With
the proposed MUSTARD, we present a theorem-and-proof benchmark MUSTARDSAUCE
with 5,866 valid data points. Each data point contains an informal statement,
an informal proof, and a translated formal proof that passes the prover
validation. We perform extensive analysis and demonstrate that MUSTARD
generates validated high-quality step-by-step data. We further apply the
MUSTARDSAUCE for fine-tuning smaller language models. The fine-tuned Llama 2-7B
achieves a 15.41% average relative performance gain in automated theorem
proving, and 8.18% in math word problems. Codes and data are available at
https://github.com/Eleanor-H/MUSTARD.
| [
{
"created": "Wed, 14 Feb 2024 05:57:58 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Mar 2024 13:02:58 GMT",
"version": "v2"
},
{
"created": "Thu, 23 May 2024 03:13:23 GMT",
"version": "v3"
}
] | 2024-05-24 | [
[
"Huang",
"Yinya",
""
],
[
"Lin",
"Xiaohan",
""
],
[
"Liu",
"Zhengying",
""
],
[
"Cao",
"Qingxing",
""
],
[
"Xin",
"Huajian",
""
],
[
"Wang",
"Haiming",
""
],
[
"Li",
"Zhenguo",
""
],
[
"Song",
"Linqi",
""
],
[
"Liang",
"Xiaodan",
""
]
] |
2402.09056 | Mira J\"urgens | Mira J\"urgens, Nis Meinert, Viktor Bengs, Eyke H\"ullermeier, Willem
Waegeman | Is Epistemic Uncertainty Faithfully Represented by Evidential Deep
Learning Methods? | null | Proceedings of the 41st International Conference on Machine
Learning (ICML), 2024, pp. 22624--22642 | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Trustworthy ML systems should not only return accurate predictions, but also
a reliable representation of their uncertainty. Bayesian methods are commonly
used to quantify both aleatoric and epistemic uncertainty, but alternative
approaches, such as evidential deep learning methods, have become popular in
recent years. The latter group of methods in essence extends empirical risk
minimization (ERM) for predicting second-order probability distributions over
outcomes, from which measures of epistemic (and aleatoric) uncertainty can be
extracted. This paper presents novel theoretical insights of evidential deep
learning, highlighting the difficulties in optimizing second-order loss
functions and interpreting the resulting epistemic uncertainty measures. With a
systematic setup that covers a wide range of approaches for classification,
regression and counts, it provides novel insights into issues of
identifiability and convergence in second-order loss minimization, and the
relative (rather than absolute) nature of epistemic uncertainty measures.
| [
{
"created": "Wed, 14 Feb 2024 10:07:05 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Feb 2024 21:59:39 GMT",
"version": "v2"
},
{
"created": "Mon, 9 Sep 2024 20:54:39 GMT",
"version": "v3"
}
] | 2024-09-11 | [
[
"Jürgens",
"Mira",
""
],
[
"Meinert",
"Nis",
""
],
[
"Bengs",
"Viktor",
""
],
[
"Hüllermeier",
"Eyke",
""
],
[
"Waegeman",
"Willem",
""
]
] |
2402.09066 | Luca Morandini | Piero Fraternali, Luca Morandini and Sergio Luis Herrera Gonz\'alez | Solid Waste Detection, Monitoring and Mapping in Remote Sensing Images:
A Survey | null | Waste Management 189 (2024) 88-102 | 10.1016/j.wasman.2024.08.003 | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The detection and characterization of illegal solid waste disposal sites are
essential for environmental protection, particularly for mitigating pollution
and health hazards. Improperly managed landfills contaminate soil and
groundwater via rainwater infiltration, posing threats to both animals and
humans. Traditional landfill identification approaches, such as on-site
inspections, are time-consuming and expensive. Remote sensing is a
cost-effective solution for the identification and monitoring of solid waste
disposal sites that enables broad coverage and repeated acquisitions over time.
Earth Observation (EO) satellites, equipped with an array of sensors and
imaging capabilities, have been providing high-resolution data for several
decades. Researchers proposed specialized techniques that leverage remote
sensing imagery to perform a range of tasks such as waste site detection,
dumping site monitoring, and assessment of suitable locations for new
landfills. This review aims to provide a detailed illustration of the most
relevant proposals for the detection and monitoring of solid waste sites by
describing and comparing the approaches, the implemented techniques, and the
employed data. Furthermore, since the data sources are of the utmost importance
for developing an effective solid waste detection model, a comprehensive
overview of the satellites and publicly available data sets is presented.
Finally, this paper identifies the open issues in the state-of-the-art and
discusses the relevant research directions for reducing the costs and improving
the effectiveness of novel solid waste detection methods.
| [
{
"created": "Wed, 14 Feb 2024 10:24:04 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Aug 2024 09:01:37 GMT",
"version": "v2"
}
] | 2024-08-29 | [
[
"Fraternali",
"Piero",
""
],
[
"Morandini",
"Luca",
""
],
[
"González",
"Sergio Luis Herrera",
""
]
] |
2402.09085 | Oliver Broadrick | Oliver Broadrick, Honghua Zhang, Guy Van den Broeck | Polynomial Semantics of Tractable Probabilistic Circuits | null | In Proceedings of the 40th Conference on Uncertainty in Artificial
Intelligence (UAI), 2024 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probabilistic circuits compute multilinear polynomials that represent
multivariate probability distributions. They are tractable models that support
efficient marginal inference. However, various polynomial semantics have been
considered in the literature (e.g., network polynomials, likelihood
polynomials, generating functions, and Fourier transforms). The relationships
between circuit representations of these polynomial encodings of distributions
is largely unknown. In this paper, we prove that for distributions over binary
variables, each of these probabilistic circuit models is equivalent in the
sense that any circuit for one of them can be transformed into a circuit for
any of the others with only a polynomial increase in size. They are therefore
all tractable for marginal inference on the same class of distributions.
Finally, we explore the natural extension of one such polynomial semantics,
called probabilistic generating circuits, to categorical random variables, and
establish that inference becomes #P-hard.
| [
{
"created": "Wed, 14 Feb 2024 11:02:04 GMT",
"version": "v1"
},
{
"created": "Sun, 28 Apr 2024 19:34:38 GMT",
"version": "v2"
},
{
"created": "Thu, 8 Aug 2024 05:58:30 GMT",
"version": "v3"
}
] | 2024-08-09 | [
[
"Broadrick",
"Oliver",
""
],
[
"Zhang",
"Honghua",
""
],
[
"Broeck",
"Guy Van den",
""
]
] |
2402.09091 | Zhiyuan Chang | Zhiyuan Chang, Mingyang Li, Yi Liu, Junjie Wang, Qing Wang, Yang Liu | Play Guessing Game with LLM: Indirect Jailbreak Attack with Implicit
Clues | 13 pages, 6 figures | The 62nd Annual Meeting of the Association for Computational
Linguistics (ACL 2024) | null | null | cs.CR cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | With the development of LLMs, the security threats of LLMs are getting more
and more attention. Numerous jailbreak attacks have been proposed to assess the
security defense of LLMs. Current jailbreak attacks primarily utilize scenario
camouflage techniques. However, their explicit mention of malicious intent
is easily recognized and defended against by LLMs. In this paper, we propose an
indirect jailbreak attack approach, Puzzler, which can bypass the LLM's defense
strategy and obtain a malicious response by implicitly providing LLMs with some
clues about the original malicious query. In addition, inspired by the wisdom
of "When unable to attack, defend" from Sun Tzu's Art of War, we adopt a
defensive stance to gather clues about the original malicious query through
LLMs. Extensive experimental results show that Puzzler achieves a query success
rate of 96.6% on closed-source LLMs, which is 57.9%-82.7% higher than
baselines. Furthermore, when tested against the state-of-the-art jailbreak
detection approaches, Puzzler proves to be more effective at evading detection
compared to baselines.
| [
{
"created": "Wed, 14 Feb 2024 11:11:51 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Feb 2024 10:24:04 GMT",
"version": "v2"
}
] | 2024-08-22 | [
[
"Chang",
"Zhiyuan",
""
],
[
"Li",
"Mingyang",
""
],
[
"Liu",
"Yi",
""
],
[
"Wang",
"Junjie",
""
],
[
"Wang",
"Qing",
""
],
[
"Liu",
"Yang",
""
]
] |
2402.09100 | Fatemeh Ghorbani Lohesara | Fatemeh Ghorbani Lohesara, Karen Egiazarian, Sebastian Knorr | Towards Realistic Landmark-Guided Facial Video Inpainting Based on GANs | Accepted in Electronic Imaging 2024 | Electronic Imaging 2024 | 10.2352/EI.2024.36.10.IPAS-246 | Volume: 36 | Article ID: IPAS-246 | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Facial video inpainting plays a crucial role in a wide range of applications,
including but not limited to the removal of obstructions in video conferencing
and telemedicine, enhancement of facial expression analysis, privacy
protection, integration of graphical overlays, and virtual makeup. This domain
presents serious challenges due to the intricate nature of facial features and
the inherent human familiarity with faces, heightening the need for accurate
and persuasive completions. In addressing challenges specifically related to
occlusion removal in this context, our focus is on the progressive task of
generating complete images from facial data covered by masks, ensuring both
spatial and temporal coherence. Our study introduces a network designed for
expression-based video inpainting, employing generative adversarial networks
(GANs) to handle static and moving occlusions across all frames. By utilizing
facial landmarks and an occlusion-free reference image, our model maintains the
user's identity consistently across frames. We further enhance emotional
preservation through a customized facial expression recognition (FER) loss
function, ensuring detailed inpainted outputs. Our proposed framework exhibits
proficiency in eliminating occlusions from facial videos in an adaptive form,
whether appearing static or dynamic on the frames, while providing realistic
and coherent results.
| [
{
"created": "Wed, 14 Feb 2024 11:20:47 GMT",
"version": "v1"
}
] | 2024-07-12 | [
[
"Lohesara",
"Fatemeh Ghorbani",
""
],
[
"Egiazarian",
"Karen",
""
],
[
"Knorr",
"Sebastian",
""
]
] |
2402.09137 | Ayodeji Ijishakin | Ayodeji Ijishakin, Sophie Martin, Florence Townend, Federica Agosta,
Edoardo Gioele Spinelli, Silvia Basaia, Paride Schito, Yuri Falzone, Massimo
Filippi, James Cole, Andrea Malaspina | Semi-Supervised Diffusion Model for Brain Age Prediction | null | Deep Generative Models for Health Workshop, NeurIPS 2023 | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Brain age prediction models have succeeded in predicting clinical outcomes in
neurodegenerative diseases, but can struggle with tasks involving faster
progressing diseases and low quality data. To enhance their performance, we
employ a semi-supervised diffusion model, obtaining a 0.83(p<0.01) correlation
between chronological and predicted age on low quality T1w MR images. This was
competitive with state-of-the-art non-generative methods. Furthermore, the
predictions produced by our model were significantly associated with survival
length (r=0.24, p<0.05) in Amyotrophic Lateral Sclerosis. Thus, our approach
demonstrates the value of diffusion-based architectures for the task of brain
age prediction.
| [
{
"created": "Wed, 14 Feb 2024 12:38:04 GMT",
"version": "v1"
}
] | 2024-02-15 | [
[
"Ijishakin",
"Ayodeji",
""
],
[
"Martin",
"Sophie",
""
],
[
"Townend",
"Florence",
""
],
[
"Agosta",
"Federica",
""
],
[
"Spinelli",
"Edoardo Gioele",
""
],
[
"Basaia",
"Silvia",
""
],
[
"Schito",
"Paride",
""
],
[
"Falzone",
"Yuri",
""
],
[
"Filippi",
"Massimo",
""
],
[
"Cole",
"James",
""
],
[
"Malaspina",
"Andrea",
""
]
] |
2402.09161 | Igor Ivkic | Rita Stampfl, Igor Ivki\'c and Barbara Geyer | Role-Playing Simulation Games using ChatGPT | Link to online article:
https://ercim-news.ercim.eu/en136/special/role-playing-simulation-games-using-chatgpt | ERCIM News Special Theme: Large Language Models 2024 | null | null | cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | Since the COVID-19 pandemic, educational institutions have embarked on
digital transformation projects. The success of these projects depends on
integrating new technologies and understanding the needs of digitally literate
students. The "learning by doing" approach suggests that real success in
learning new skills is achieved when students can try out and practise these
skills. In this article, we demonstrate how Large Language Models (LLMs) can
enhance the quality of teaching by using ChatGPT in a role-playing simulation
game scenario to promote active learning. Moreover, we discuss how LLMs can
boost students' interest in learning by allowing them to practice real-life
scenarios using ChatGPT.
| [
{
"created": "Wed, 14 Feb 2024 13:24:21 GMT",
"version": "v1"
}
] | 2024-02-15 | [
[
"Stampfl",
"Rita",
""
],
[
"Ivkić",
"Igor",
""
],
[
"Geyer",
"Barbara",
""
]
] |
2402.09199 | Qiang Sheng | Yuhui Shi, Qiang Sheng, Juan Cao, Hao Mi, Beizhe Hu, Danding Wang | Ten Words Only Still Help: Improving Black-Box AI-Generated Text
Detection via Proxy-Guided Efficient Re-Sampling | 13 pages, 6 figures, 7 tables | IJCAI 2024 | 10.24963/ijcai.2024/55 | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | With the rapidly increasing application of large language models (LLMs),
their abuse has caused many undesirable societal problems such as fake news,
academic dishonesty, and information pollution. This makes AI-generated text
(AIGT) detection of great importance. Among existing methods, white-box methods
are generally superior to black-box methods in terms of performance and
generalizability, but they require access to LLMs' internal states and are not
applicable to black-box settings. In this paper, we propose to estimate word
generation probabilities as pseudo white-box features via multiple re-sampling
to help improve AIGT detection under the black-box setting. Specifically, we
design POGER, a proxy-guided efficient re-sampling method, which selects a
small subset of representative words (e.g., 10 words) for performing multiple
re-sampling in black-box AIGT detection. Experiments on datasets containing
texts from humans and seven LLMs show that POGER outperforms all baselines in
macro F1 under black-box, partial white-box, and out-of-distribution settings
and maintains lower re-sampling costs than its existing counterparts.
| [
{
"created": "Wed, 14 Feb 2024 14:32:16 GMT",
"version": "v1"
}
] | 2024-08-29 | [
[
"Shi",
"Yuhui",
""
],
[
"Sheng",
"Qiang",
""
],
[
"Cao",
"Juan",
""
],
[
"Mi",
"Hao",
""
],
[
"Hu",
"Beizhe",
""
],
[
"Wang",
"Danding",
""
]
] |
2402.09204 | Jiexin Wang | Jiexin Wang, Jiahao Chen, Bing Su | Domain-adaptive and Subgroup-specific Cascaded Temperature Regression
for Out-of-distribution Calibration | null | 2024 IEEE International Conference on Acoustics, Speech, and
Signal Processing (ICASSP 2024), Seoul, Korea | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although deep neural networks yield high classification accuracy given
sufficient training data, their predictions are typically overconfident or
under-confident, i.e., the prediction confidences cannot truly reflect the
accuracy. Post-hoc calibration tackles this problem by calibrating the
prediction confidences without re-training the classification model. However,
current approaches assume congruence between test and validation data
distributions, limiting their applicability to out-of-distribution scenarios.
To this end, we propose a novel meta-set-based cascaded temperature regression
method for post-hoc calibration. Our method tailors fine-grained scaling
functions to distinct test sets by simulating various domain shifts through
data augmentation on the validation set. We partition each meta-set into
subgroups based on predicted category and confidence level, capturing diverse
uncertainties. A regression network is then trained to derive category-specific
and confidence-level-specific scaling, achieving calibration across meta-sets.
Extensive experimental results on MNIST, CIFAR-10, and TinyImageNet demonstrate
the effectiveness of the proposed method.
| [
{
"created": "Wed, 14 Feb 2024 14:35:57 GMT",
"version": "v1"
}
] | 2024-02-15 | [
[
"Wang",
"Jiexin",
""
],
[
"Chen",
"Jiahao",
""
],
[
"Su",
"Bing",
""
]
] |
2402.09251 | Yang Zhong | Yang Zhong, Hongyu Yu, Jihui Yang, Xingyu Guo, Hongjun Xiang, and
Xingao Gong | Universal Machine Learning Kohn-Sham Hamiltonian for Materials | 20 pages, 9 figures | Chin. Phys. Lett. 41, 077103 (2024) | 10.1088/0256-307X/41/7/077103 | null | physics.comp-ph cond-mat.mtrl-sci cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | While density functional theory (DFT) serves as a prevalent computational
approach in electronic structure calculations, its computational demands and
scalability limitations persist. Recently, leveraging neural networks to
parameterize the Kohn-Sham DFT Hamiltonian has emerged as a promising avenue
for accelerating electronic structure computations. Despite advancements,
challenges such as the necessity for computing extensive DFT training data to
explore each new system and the complexity of establishing accurate ML models
for multi-elemental materials still exist. Addressing these hurdles, this study
introduces a universal electronic Hamiltonian model trained on Hamiltonian
matrices obtained from first-principles DFT calculations of nearly all crystal
structures on the Materials Project. We demonstrate its generality in
predicting electronic structures across the whole periodic table, including
complex multi-elemental systems, solid-state electrolytes, Moir\'e twisted
bilayer heterostructure, and metal-organic frameworks (MOFs). Moreover, we
utilize the universal model to conduct high-throughput calculations of
electronic structures for crystals in GeNOME datasets, identifying 3,940
crystals with direct band gaps and 5,109 crystals with flat bands. By offering
a reliable and efficient framework for computing electronic properties, this
universal Hamiltonian model lays the groundwork for advancements in diverse
fields, such as easily providing a huge data set of electronic structures and
also making the materials design across the whole periodic table possible.
| [
{
"created": "Wed, 14 Feb 2024 15:38:56 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Apr 2024 06:20:55 GMT",
"version": "v2"
}
] | 2024-06-18 | [
[
"Zhong",
"Yang",
""
],
[
"Yu",
"Hongyu",
""
],
[
"Yang",
"Jihui",
""
],
[
"Guo",
"Xingyu",
""
],
[
"Xiang",
"Hongjun",
""
],
[
"Gong",
"Xingao",
""
]
] |
2402.09266 | Andres Molares-Ulloa | Andres Molares-Ulloa, Enrique Fernandez-Blanco, Alejandro Pazos and
Daniel Rivero | Machine Learning in management of precautionary closures caused by
lipophilic biotoxins | null | Computers and Electronics in Agriculture, 197, 106956. (2022) | 10.1016/j.compag.2022.106956 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mussel farming is one of the most important aquaculture industries. The main
risk to mussel farming is harmful algal blooms (HABs), which pose a risk to
human consumption. In Galicia, the Spanish main producer of cultivated mussels,
the opening and closing of the production areas is controlled by a monitoring
program. In addition to the closures resulting from the presence of toxicity
exceeding the legal threshold, in the absence of a confirmatory sampling and
the existence of risk factors, precautionary closures may be applied. These
decisions are made by experts without the support or formalisation of the
experience on which they are based. Therefore, this work proposes a predictive
model capable of supporting the application of precautionary closures.
Achieving sensitivity, accuracy and kappa index values of 97.34%, 91.83% and
0.75 respectively, the kNN algorithm has provided the best results. This allows
the creation of a system capable of helping in complex situations where
forecast errors are more common.
| [
{
"created": "Wed, 14 Feb 2024 15:51:58 GMT",
"version": "v1"
}
] | 2024-02-15 | [
[
"Molares-Ulloa",
"Andres",
""
],
[
"Fernandez-Blanco",
"Enrique",
""
],
[
"Pazos",
"Alejandro",
""
],
[
"Rivero",
"Daniel",
""
]
] |
2402.09267 | Xiaoying Zhang | Xiaoying Zhang, Baolin Peng, Ye Tian, Jingyan Zhou, Lifeng Jin,
Linfeng Song, Haitao Mi, Helen Meng | Self-Alignment for Factuality: Mitigating Hallucinations in LLMs via
Self-Evaluation | 20 pages | ACL2024 Main | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite showing increasingly human-like abilities, large language models
(LLMs) often struggle with factual inaccuracies, i.e. "hallucinations", even
when they hold relevant knowledge. To address these hallucinations, current
approaches typically necessitate high-quality human factuality annotations. In
this work, we explore Self-Alignment for Factuality, where we leverage the
self-evaluation capability of an LLM to provide training signals that steer the
model towards factuality. Specifically, we incorporate Self-Eval, a
self-evaluation component, to prompt an LLM to validate the factuality of its
own generated responses solely based on its internal knowledge. Additionally,
we design Self-Knowledge Tuning (SK-Tuning) to augment the LLM's
self-evaluation ability by improving the model's confidence estimation and
calibration. We then utilize these self-annotated responses to fine-tune the
model via Direct Preference Optimization algorithm. We show that the proposed
self-alignment approach substantially enhances factual accuracy over Llama
family models across three key knowledge-intensive tasks on TruthfulQA and
BioGEN.
| [
{
"created": "Wed, 14 Feb 2024 15:52:42 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Jun 2024 12:22:14 GMT",
"version": "v2"
}
] | 2024-06-12 | [
[
"Zhang",
"Xiaoying",
""
],
[
"Peng",
"Baolin",
""
],
[
"Tian",
"Ye",
""
],
[
"Zhou",
"Jingyan",
""
],
[
"Jin",
"Lifeng",
""
],
[
"Song",
"Linfeng",
""
],
[
"Mi",
"Haitao",
""
],
[
"Meng",
"Helen",
""
]
] |
2402.09424 | Chang Gao | Qinyu Chen, Congyi Sun, Chang Gao, Shih-Chii Liu | Epilepsy Seizure Detection and Prediction using an Approximate Spiking
Convolutional Transformer | To be published at the 2024 IEEE International Symposium on Circuits
and Systems (ISCAS), Singapore | 2024 IEEE International Symposium on Circuits and Systems (ISCAS) | 10.1109/ISCAS58744.2024.10558341 | null | eess.SP cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Epilepsy is a common disease of the nervous system. Timely prediction of
seizures and intervention treatment can significantly reduce the accidental
injury of patients and protect the life and health of patients. This paper
presents a neuromorphic Spiking Convolutional Transformer, named Spiking
Conformer, to detect and predict epileptic seizure segments from scalp
long-term electroencephalogram (EEG) recordings. We report evaluation results
from the Spiking Conformer model using the Boston Children's Hospital-MIT
(CHB-MIT) EEG dataset. By leveraging spike-based addition operations, the
Spiking Conformer significantly reduces the classification computational cost
compared to the non-spiking model. Additionally, we introduce an approximate
spiking neuron layer to further reduce spike-triggered neuron updates by nearly
38% without sacrificing accuracy. Using raw EEG data as input, the proposed
Spiking Conformer achieved an average sensitivity rate of 94.9% and a
specificity rate of 99.3% for the seizure detection task, and 96.8%, 89.5% for
the seizure prediction task, and needs >10x fewer operations compared to the
non-spiking equivalent model.
| [
{
"created": "Sun, 21 Jan 2024 19:23:56 GMT",
"version": "v1"
}
] | 2024-10-01 | [
[
"Chen",
"Qinyu",
""
],
[
"Sun",
"Congyi",
""
],
[
"Gao",
"Chang",
""
],
[
"Liu",
"Shih-Chii",
""
]
] |
2402.09459 | David Gonz\'alez Ortega | Javier Gonz\'alez-Alonso, David Oviedo-Pastor, H\'ector J. Aguado,
Francisco J. D\'iaz-Pernas, David Gonz\'alez-Ortega, and Mario
Mart\'inez-Zarzuela | Custom IMU-Based Wearable System for Robust 2.4 GHz Wireless Human Body
Parts Orientation Tracking and 3D Movement Visualization on an Avatar | 25 pages | Sensors 2021, 21, 6642 | 10.3390/s21196642 | null | eess.SP cs.CV cs.LG cs.NI | http://creativecommons.org/licenses/by/4.0/ | Recent studies confirm the applicability of Inertial Measurement Unit
(IMU)-based systems for human motion analysis. Nevertheless, high-end
IMU-based commercial solutions are still too expensive and complex to democratize
their use among a wide range of potential users. Less featured entry-level
commercial solutions are being introduced in the market, trying to fill this
gap, but still present some limitations that need to be overcome. At the same
time, there is a growing number of scientific papers using custom
do-it-yourself IMU-based systems, rather than commercial ones, in medical and
sports applications. Even though these solutions can help to popularize the use
of this technology, they have more limited features, and descriptions of how to
design and build them from scratch are still scarce in the literature. The aim of this work is
two-fold: (1) Proving the feasibility of building an affordable custom solution
aimed at simultaneous multiple body parts orientation tracking; while providing
a detailed bottom-up description of the required hardware, tools, and
mathematical operations to estimate and represent 3D movement in real-time. (2)
Showing how the introduction of a custom 2.4 GHz communication protocol
including a channel hopping strategy can address some of the current
communication limitations of entry-level commercial solutions. The proposed
system can be used for wireless real-time human body parts orientation tracking
with up to 10 custom sensors, at least at 50 Hz. In addition, it provides a
more reliable motion data acquisition in Bluetooth and Wi-Fi crowded
environments, where the use of entry-level commercial solutions might be
unfeasible. This system can be used as a groundwork for developing affordable
human motion analysis solutions that do not require an accurate kinematic
analysis.
| [
{
"created": "Sun, 4 Feb 2024 19:08:34 GMT",
"version": "v1"
}
] | 2024-02-17 | [
[
"González-Alonso",
"Javier",
""
],
[
"Oviedo-Pastor",
"David",
""
],
[
"Aguado",
"Héctor J.",
""
],
[
"Díaz-Pernas",
"Francisco J.",
""
],
[
"González-Ortega",
"David",
""
],
[
"Martínez-Zarzuela",
"Mario",
""
]
] |
2402.09466 | Felix Ott | Felix Ott, Lucas Heublein, Nisha Lakshmana Raichur, Tobias Feigl,
Jonathan Hansen, Alexander R\"ugamer, Christopher Mutschler | Few-Shot Learning with Uncertainty-based Quadruplet Selection for
Interference Classification in GNSS Data | null | IEEE 2024 International Conference on Localization and GNSS
(ICL-GNSS) | 10.1109/ICL-GNSS60721.2024.10578525 | null | eess.SP cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Jamming devices pose a significant threat by disrupting signals from the
global navigation satellite system (GNSS), compromising the robustness of
accurate positioning. Detecting anomalies in frequency snapshots is crucial to
counteract these interferences effectively. The ability to adapt to diverse,
unseen interference characteristics is essential for ensuring the reliability
of GNSS in real-world applications. In this paper, we propose a few-shot
learning (FSL) approach to adapt to new interference classes. Our method
employs quadruplet selection for the model to learn representations using
various positive and negative interference classes. Furthermore, our quadruplet
variant selects pairs based on the aleatoric and epistemic uncertainty to
differentiate between similar classes. We recorded a dataset at a motorway with
eight interference classes on which our FSL method with quadruplet loss
outperforms other FSL techniques in jammer classification, achieving 97.66% accuracy.
Dataset available at:
https://gitlab.cc-asp.fraunhofer.de/darcy_gnss/FIOT_highway
| [
{
"created": "Fri, 9 Feb 2024 13:59:14 GMT",
"version": "v1"
},
{
"created": "Thu, 2 May 2024 07:17:50 GMT",
"version": "v2"
}
] | 2024-10-08 | [
[
"Ott",
"Felix",
""
],
[
"Heublein",
"Lucas",
""
],
[
"Raichur",
"Nisha Lakshmana",
""
],
[
"Feigl",
"Tobias",
""
],
[
"Hansen",
"Jonathan",
""
],
[
"Rügamer",
"Alexander",
""
],
[
"Mutschler",
"Christopher",
""
]
] |
2402.09476 | Bardia Yousefi | Mahtab Darvish, Ryan Trask, Patrick Tallon, M\'elina Khansari, Lei
Ren, Michelle Hershman, Bardia Yousefi | AI-Enabled Lung Cancer Prognosis | This is the author's version of a book chapter entitled: "Cancer
Research: An Interdisciplinary Approach", Springer | Springer book chapter "Cancer Research: An Interdisciplinary
Approach" 2024 | null | null | q-bio.QM cs.AI eess.IV | http://creativecommons.org/licenses/by/4.0/ | Lung cancer is the primary cause of cancer-related mortality, claiming
approximately 1.79 million lives globally in 2020, with an estimated 2.21
million new cases diagnosed within the same period. Among these, Non-Small Cell
Lung Cancer (NSCLC) is the predominant subtype, characterized by a notably
bleak prognosis and low overall survival rate of approximately 25% over five
years across all disease stages. However, survival outcomes vary considerably
based on the stage at diagnosis and the therapeutic interventions administered.
Recent advancements in artificial intelligence (AI) have revolutionized the
landscape of lung cancer prognosis. AI-driven methodologies, including machine
learning and deep learning algorithms, have shown promise in enhancing survival
prediction accuracy by efficiently analyzing complex multi-omics data and
integrating diverse clinical variables. By leveraging AI techniques, clinicians
can harness comprehensive prognostic insights to tailor personalized treatment
strategies, ultimately improving patient outcomes in NSCLC. An overview of
AI-driven data processing can significantly bolster understanding and provide
better direction for using such systems.
| [
{
"created": "Mon, 12 Feb 2024 22:09:43 GMT",
"version": "v1"
}
] | 2024-02-16 | [
[
"Darvish",
"Mahtab",
""
],
[
"Trask",
"Ryan",
""
],
[
"Tallon",
"Patrick",
""
],
[
"Khansari",
"Mélina",
""
],
[
"Ren",
"Lei",
""
],
[
"Hershman",
"Michelle",
""
],
[
"Yousefi",
"Bardia",
""
]
] |
2402.09498 | Jos\'e Alberto Ben\'itez-Andrades Ph.D. | Jos\'e Alberto Ben\'itez-Andrades, Mar\'ia Teresa Garc\'ia-Ord\'as,
Mar\'ia \'Alvarez-Gonz\'alez, Raquel Leir\'os-Rodr\'iguez and Ana F L\'opez
Rodr\'iguez | Detection of the most influential variables for preventing postpartum
urinary incontinence using machine learning techniques | null | Digital Health, Volume 8, 2022, 20552076221111289 | 10.1177/20552076221111289 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Background: Postpartum urinary incontinence (PUI) is a common issue among
postnatal women. Previous studies identified potential related variables, but
lacked analysis on certain intrinsic and extrinsic patient variables during
pregnancy.
Objective: The study aims to evaluate the most influential variables in PUI
using machine learning, focusing on intrinsic, extrinsic, and combined variable
groups.
Methods: Data from 93 pregnant women were analyzed using machine learning and
oversampling techniques. Four key variables were predicted: occurrence,
frequency, intensity of urinary incontinence, and stress urinary incontinence.
Results: Models using extrinsic variables were most accurate, with 70%
accuracy for urinary incontinence, 77% for frequency, 71% for intensity, and
93% for stress urinary incontinence.
Conclusions: The study highlights extrinsic variables as significant
predictors of PUI issues. This suggests that PUI prevention might be achievable
through healthy habits during pregnancy, although further research is needed
for confirmation.
| [
{
"created": "Wed, 14 Feb 2024 16:45:10 GMT",
"version": "v1"
}
] | 2024-02-16 | [
[
"Benítez-Andrades",
"José Alberto",
""
],
[
"García-Ordás",
"María Teresa",
""
],
[
"Álvarez-González",
"María",
""
],
[
"Leirós-Rodríguez",
"Raquel",
""
],
[
"Rodríguez",
"Ana F López",
""
]
] |
2402.09553 | Dilli Sharma | Dilli Prasad Sharma, Nasim Beigi-Mohammadi, Hongxiang Geng, Dawn
Dixon, Rob Madro, Phil Emmenegger, Carlos Tobar, Jeff Li, Alberto Leon-Garcia | Statistical and Machine Learning Models for Predicting Fire and Other
Emergency Events | null | IEEE Access 12(2024) 56880-56909 | 10.1109/ACCESS.2024.3390089 | null | cs.AI cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Emergency events in a city cause considerable economic loss to individuals,
their families, and the community. Accurate and timely prediction of events can
help the emergency fire and rescue services in preparing for and mitigating the
consequences of emergency events. In this paper, we present a systematic
development of predictive models for various types of emergency events in the
City of Edmonton, Canada. We present methods for (i) data collection and
dataset development; (ii) descriptive analysis of each event type and its
characteristics at different spatiotemporal levels; (iii) feature analysis and
selection based on correlation coefficient analysis and feature importance
analysis; and (iv) development of prediction models for the likelihood of
occurrence of each event type at different temporal and spatial resolutions. We
analyze the association of event types with socioeconomic and demographic data
at the neighborhood level, identify a set of predictors for each event type,
and develop predictive models with negative binomial regression. We conduct
evaluations at neighborhood and fire station service area levels. Our results
show that the models perform well for most of the event types with acceptable
prediction errors for weekly and monthly periods. The evaluation shows that the
prediction accuracy is consistent at the level of the fire station, so the
predictions can be used in management by fire rescue service departments for
planning resource allocation for these time periods. We also examine the impact
of the COVID-19 pandemic on the occurrence of events and on the accuracy of
event predictor models. Our findings show that COVID-19 had a significant
impact on the performance of the event prediction models.
| [
{
"created": "Wed, 14 Feb 2024 20:10:30 GMT",
"version": "v1"
}
] | 2024-04-29 | [
[
"Sharma",
"Dilli Prasad",
""
],
[
"Beigi-Mohammadi",
"Nasim",
""
],
[
"Geng",
"Hongxiang",
""
],
[
"Dixon",
"Dawn",
""
],
[
"Madro",
"Rob",
""
],
[
"Emmenegger",
"Phil",
""
],
[
"Tobar",
"Carlos",
""
],
[
"Li",
"Jeff",
""
],
[
"Leon-Garcia",
"Alberto",
""
]
] |
2402.09592 | Jos\'e Alberto Ben\'itez-Andrades Ph.D. | Jos\'e Alberto Ben\'itez-Andrades, Jos\'e Emilio Labra, Enedina
Quiroga, Vicente Mart\'in, Isa\'ias Garc\'ia, Pilar Marqu\'es-S\'anchez and
Carmen Benavides | A Web-Based Tool for Automatic Data Collection, Curation, and
Visualization of Complex Healthcare Survey Studies including Social Network
Analysis | null | Computation and Mathematical Methods in Medicine, Volume 2017,
Article ID 2579848 | 10.1155/2017/2579848 | null | cs.AI cs.HC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | There is a great concern nowadays regarding alcohol consumption and drug
abuse, especially in young people. Analyzing the social environment where these
adolescents are immersed, as well as a series of measures determining the
alcohol abuse risk or personal situation and perception using a number of
questionnaires like AUDIT, FAS, KIDSCREEN, and others, it is possible to gain
insight into the current situation of a given individual regarding his/her
consumption behavior. Achieving this analysis, however, requires the
use of tools that can ease the process of questionnaire creation, data
gathering, curation and representation, and later analysis and visualization to
the user. This research presents the design and construction of a web-based
platform able to facilitate each of the mentioned processes by integrating the
different phases into an intuitive system with a graphical user interface that
hides the complexity underlying each of the questionnaires and techniques used
and presenting the results in a flexible and visual way, avoiding any manual
handling of data during the process. Advantages of this approach are shown and
compared to the previous situation, where some of the tasks were accomplished by
time-consuming and error-prone manipulations of data.
| [
{
"created": "Wed, 14 Feb 2024 21:37:59 GMT",
"version": "v1"
}
] | 2024-02-16 | [
[
"Benítez-Andrades",
"José Alberto",
""
],
[
"Labra",
"José Emilio",
""
],
[
"Quiroga",
"Enedina",
""
],
[
"Martín",
"Vicente",
""
],
[
"García",
"Isaías",
""
],
[
"Marqués-Sánchez",
"Pilar",
""
],
[
"Benavides",
"Carmen",
""
]
] |
2402.09683 | Zeya Chen | Zeya Chen, Ruth Schmidt | Exploring a Behavioral Model of "Positive Friction" in Human-AI
Interaction | This preprint has not undergone peer review or any post-submission
corrections. The Version of Record of this contribution will be published in
Springer Nature Computer Science book series in Volume HCI International 2024 | DESIGN, USER EXPERIENCE AND USABILITY. HCII 2024 | null | null | cs.HC cs.AI cs.CY | http://creativecommons.org/licenses/by-sa/4.0/ | Designing seamless, frictionless user experiences has long been a dominant
trend in both applied behavioral science and artificial intelligence (AI), in
which the goal of making desirable actions easy and efficient informs efforts
to minimize friction in user experiences. However, in some settings, friction
can be genuinely beneficial, such as the insertion of deliberate delays to
increase reflection, preventing individuals from resorting to automatic or
biased behaviors, and enhancing opportunities for unexpected discoveries. More
recently, the popularization and availability of AI on a widespread scale have
only increased the need to examine how friction can help or hinder users of AI;
it also suggests a need to consider how positive friction can benefit AI
practitioners, both during development processes (e.g., working with diverse
teams) and to inform how AI is designed into offerings. This paper first
proposes a "positive friction" model that can help characterize how friction is
currently beneficial in user and developer experiences with AI, diagnose the
potential need for friction where it may not yet exist in these contexts, and
inform how positive friction can be used to generate solutions, especially as
advances in AI continue to progress and new opportunities emerge. It then
explores this model in the context of AI users and developers by proposing the
value of taking a hybrid "AI+human" lens, and concludes by suggesting questions
for further exploration.
| [
{
"created": "Thu, 15 Feb 2024 03:39:55 GMT",
"version": "v1"
}
] | 2024-02-16 | [
[
"Chen",
"Zeya",
""
],
[
"Schmidt",
"Ruth",
""
]
] |
2402.09766 | Alexey Zaytsev | Valeriy Shevchenko, Nikita Belousov, Alexey Vasilev, Vladimir
Zholobov, Artyom Sosedka, Natalia Semenova, Anna Volodkevich, Andrey
Savchenko, Alexey Zaytsev | From Variability to Stability: Advancing RecSys Benchmarking Practices | 8 pages with 11 figures | KDD 2024: Proceedings of the 30th ACM SIGKDD Conference on
Knowledge Discovery and Data Mining | 10.1145/3637528.3671655 | null | cs.IR cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | In the rapidly evolving domain of Recommender Systems (RecSys), new
algorithms frequently claim state-of-the-art performance based on evaluations
over a limited set of arbitrarily selected datasets. However, this approach may
fail to holistically reflect their effectiveness due to the significant impact
of dataset characteristics on algorithm performance. Addressing this
deficiency, this paper introduces a novel benchmarking methodology to
facilitate a fair and robust comparison of RecSys algorithms, thereby advancing
evaluation practices. By utilizing a diverse set of $30$ open datasets,
including two introduced in this work, and evaluating $11$ collaborative
filtering algorithms across $9$ metrics, we critically examine the influence of
dataset characteristics on algorithm performance. We further investigate the
feasibility of aggregating outcomes from multiple datasets into a unified
ranking. Through rigorous experimental analysis, we validate the reliability of
our methodology under the variability of datasets, offering a benchmarking
strategy that balances quality and computational demands. This methodology
enables a fair yet effective means of evaluating RecSys algorithms, providing
valuable guidance for future research endeavors.
| [
{
"created": "Thu, 15 Feb 2024 07:35:52 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Aug 2024 13:01:56 GMT",
"version": "v2"
}
] | 2024-08-28 | [
[
"Shevchenko",
"Valeriy",
""
],
[
"Belousov",
"Nikita",
""
],
[
"Vasilev",
"Alexey",
""
],
[
"Zholobov",
"Vladimir",
""
],
[
"Sosedka",
"Artyom",
""
],
[
"Semenova",
"Natalia",
""
],
[
"Volodkevich",
"Anna",
""
],
[
"Savchenko",
"Andrey",
""
],
[
"Zaytsev",
"Alexey",
""
]
] |
2402.09781 | Sandeep Kumar | Vivek Tetarwal, Sandeep Kumar | A Comprehensive Review on Computer Vision Analysis of Aerial Data | 112 pages | IEEE 2024 | null | null | cs.CV cs.IT math.IT | http://creativecommons.org/licenses/by-nc-nd/4.0/ | With the emergence of new technologies in the field of airborne platforms and
imaging sensors, aerial data analysis is becoming very popular, capitalizing on
its advantages over land data. This paper presents a comprehensive review of
the computer vision tasks within the domain of aerial data analysis. While
addressing fundamental aspects such as object detection and tracking, the
primary focus is on pivotal tasks like change detection, object segmentation,
and scene-level analysis. The paper provides a comparison of various
hyperparameters employed across diverse architectures and tasks. A substantial
section is dedicated to an in-depth discussion on libraries, their
categorization, and their relevance to different domain expertise. The paper
encompasses aerial datasets, the architectural nuances adopted, and the
evaluation metrics associated with all the tasks in aerial data analysis.
Applications of computer vision tasks in aerial data across different domains
are explored, with case studies providing further insights. The paper
thoroughly examines the challenges inherent in aerial data analysis, offering
practical solutions. Additionally, unresolved issues of significance are
identified, paving the way for future research directions in the field of
aerial data analysis.
| [
{
"created": "Thu, 15 Feb 2024 08:10:09 GMT",
"version": "v1"
}
] | 2024-02-16 | [
[
"Tetarwal",
"Vivek",
""
],
[
"Kumar",
"Sandeep",
""
]
] |
2402.09782 | Zihong Luo | Zihong Luo, Zheng Tao, Yuxuan Huang, Kexin He, Chengzhi Liu | MC-DBN: A Deep Belief Network-Based Model for Modality Completion | null | International Conference on Computer Supported Cooperative Work in
Design 2024 | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in multi-modal artificial intelligence (AI) have
revolutionized the fields of stock market forecasting and heart rate
monitoring. Utilizing diverse data sources can substantially improve prediction
accuracy. Nonetheless, additional data may not always align with the original
dataset. Interpolation methods are commonly utilized for handling missing
values in modal data, though they may exhibit limitations in the context of
sparse information. Addressing this challenge, we propose a Modality Completion
Deep Belief Network-Based Model (MC-DBN). This approach utilizes implicit
features of the complete data to compensate for gaps between it and the
additional incomplete data. It ensures that the enhanced multi-modal data closely aligns
with the dynamic nature of the real world to enhance the effectiveness of the
model. We conduct evaluations of the MC-DBN model in two datasets from the
stock market forecasting and heart rate monitoring domains. Comprehensive
experiments showcase the model's capacity to bridge the semantic divide present
in multi-modal data, subsequently enhancing its performance. The source code is
available at: https://github.com/logan-0623/DBN-generate
| [
{
"created": "Thu, 15 Feb 2024 08:21:50 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Mar 2024 06:10:09 GMT",
"version": "v2"
},
{
"created": "Wed, 20 Mar 2024 08:50:46 GMT",
"version": "v3"
}
] | 2024-03-21 | [
[
"Luo",
"Zihong",
""
],
[
"Tao",
"Zheng",
""
],
[
"Huang",
"Yuxuan",
""
],
[
"He",
"Kexin",
""
],
[
"Liu",
"Chengzhi",
""
]
] |
2402.09795 | Sakib Anwar Rieyan | Sakib Anwar Rieyan, Md. Raisul Kabir News, A.B.M. Muntasir Rahman,
Sadia Afrin Khan, Sultan Tasneem Jawad Zaarif, Md. Golam Rabiul Alam,
Mohammad Mehedi Hassan, Michele Ianni, Giancarlo Fortino | An advanced data fabric architecture leveraging homomorphic encryption
and federated learning | null | Information Fusion, 102, 102004 (2024) | 10.1016/j.inffus.2023.102004 | null | cs.CR cs.AI cs.DB | http://creativecommons.org/licenses/by/4.0/ | Data fabric is an automated and AI-driven data fusion approach to accomplish
data management unification without moving data to a centralized location for
solving complex data problems. In a Federated learning architecture, the global
model is trained based on the learned parameters of several local models that
eliminate the necessity of moving data to a centralized repository for machine
learning. This paper introduces a secure approach for medical image analysis
using federated learning and partially homomorphic encryption within a
distributed data fabric architecture. With this method, multiple parties can
collaborate in training a machine-learning model without exchanging raw data
but using the learned or fused features. The approach complies with laws and
regulations such as HIPAA and GDPR, ensuring the privacy and security of the
data. The study demonstrates the method's effectiveness through a case study on
pituitary tumor classification, achieving a significant level of accuracy.
However, the primary focus of the study is on the development and evaluation of
federated learning and partially homomorphic encryption as tools for secure
medical image analysis. The results highlight the potential of these techniques
to be applied to other privacy-sensitive domains and contribute to the growing
body of research on secure and privacy-preserving machine learning.
| [
{
"created": "Thu, 15 Feb 2024 08:50:36 GMT",
"version": "v1"
}
] | 2024-02-16 | [
[
"Rieyan",
"Sakib Anwar",
""
],
[
"News",
"Md. Raisul Kabir",
""
],
[
"Rahman",
"A. B. M. Muntasir",
""
],
[
"Khan",
"Sadia Afrin",
""
],
[
"Zaarif",
"Sultan Tasneem Jawad",
""
],
[
"Alam",
"Md. Golam Rabiul",
""
],
[
"Hassan",
"Mohammad Mehedi",
""
],
[
"Ianni",
"Michele",
""
],
[
"Fortino",
"Giancarlo",
""
]
] |
2402.09844 | Quentin Gallou\'edec | Quentin Gallou\'edec and Edward Beeching and Cl\'ement Romac and
Emmanuel Dellandr\'ea | Jack of All Trades, Master of Some, a Multi-Purpose Transformer Agent | null | 38th Workshop on Aligning Reinforcement Learning Experimentalists
and Theorists (ARLET 2024) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The search for a general model that can operate seamlessly across multiple
domains remains a key goal in machine learning research. The prevailing
methodology in Reinforcement Learning (RL) typically limits models to a single
task within a unimodal framework, a limitation that contrasts with the broader
vision of a versatile, multi-domain model. In this paper, we present Jack of
All Trades (JAT), a transformer-based model with a unique design optimized for
handling sequential decision-making tasks and multi-modal data types. The JAT
model demonstrates its robust capabilities and versatility by achieving strong
performance on very different RL benchmarks, along with promising results on
Computer Vision (CV) and Natural Language Processing (NLP) tasks, all using a
single set of weights. The JAT model marks a significant step towards more
general, cross-domain AI model design, and notably, it is the first model of
its kind to be fully open-sourced at https://huggingface.co/jat-project/jat,
including a pioneering general-purpose dataset.
| [
{
"created": "Thu, 15 Feb 2024 10:01:55 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Apr 2024 09:47:31 GMT",
"version": "v2"
},
{
"created": "Wed, 10 Jul 2024 15:56:14 GMT",
"version": "v3"
}
] | 2024-07-11 | [
[
"Gallouédec",
"Quentin",
""
],
[
"Beeching",
"Edward",
""
],
[
"Romac",
"Clément",
""
],
[
"Dellandréa",
"Emmanuel",
""
]
] |
2402.09934 | Ritwik Banerjee | Khiem Phi, Noushin Salek Faramarzi, Chenlu Wang, Ritwik Banerjee | Paying Attention to Deflections: Mining Pragmatic Nuances for
Whataboutism Detection in Online Discourse | 14 pages, 5 figures | Findings of the Association for Computational Linguistics ACL.
(2024) 12628-12643. https://aclanthology.org/2024.findings-acl.750 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Whataboutism, a potent tool for disrupting narratives and sowing distrust,
remains under-explored in quantitative NLP research. Moreover, past work has
not distinguished its use as a strategy for misinformation and propaganda from
its use as a tool for pragmatic and semantic framing. We introduce new datasets
from Twitter and YouTube, revealing overlaps as well as distinctions between
whataboutism, propaganda, and the tu quoque fallacy. Furthermore, drawing on
recent work in linguistic semantics, we differentiate the `what about' lexical
construct from whataboutism. Our experiments bring to light unique challenges
in its accurate detection, prompting the introduction of a novel method using
attention weights for negative sample mining. We report significant
improvements of 4% and 10% over previous state-of-the-art methods in our
Twitter and YouTube collections, respectively.
| [
{
"created": "Thu, 15 Feb 2024 13:34:19 GMT",
"version": "v1"
},
{
"created": "Sun, 22 Sep 2024 22:22:27 GMT",
"version": "v2"
}
] | 2024-09-24 | [
[
"Phi",
"Khiem",
""
],
[
"Faramarzi",
"Noushin Salek",
""
],
[
"Wang",
"Chenlu",
""
],
[
"Banerjee",
"Ritwik",
""
]
] |
2402.09949 | Leonidas Gee | Leonidas Gee, Leonardo Rigutini, Marco Ernandes, Andrea Zugarini | Multi-word Tokenization for Sequence Compression | The 2023 Conference on Empirical Methods in Natural Language
Processing (EMNLP 2023) | Proceedings of the 2023 Conference on Empirical Methods in Natural
Language Processing: Industry Track | 10.18653/v1/2023.emnlp-industry.58 | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models have proven highly successful at modelling a variety of
tasks. However, this comes at a steep computational cost that hinders wider
industrial uptake. In this paper, we present MWT: a Multi-Word Tokenizer that
goes beyond word boundaries by representing frequent multi-word expressions as
single tokens. MWTs produce a more compact and efficient tokenization that
yields two benefits: (1) Increase in performance due to a greater coverage of
input data given a fixed sequence length budget; (2) Faster and lighter
inference due to the ability to reduce the sequence length with negligible
drops in performance. Our results show that MWT is more robust across shorter
sequence lengths, thus allowing for major speedups via early sequence
truncation.
| [
{
"created": "Thu, 15 Feb 2024 13:52:23 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Apr 2024 22:50:25 GMT",
"version": "v2"
}
] | 2024-04-08 | [
[
"Gee",
"Leonidas",
""
],
[
"Rigutini",
"Leonardo",
""
],
[
"Ernandes",
"Marco",
""
],
[
"Zugarini",
"Andrea",
""
]
] |
2402.09977 | Leonardo Rigutini | Leonidas Gee and Andrea Zugarini and Leonardo Rigutini and Paolo
Torroni | Fast Vocabulary Transfer for Language Model Compression | The 2022 Conference on Empirical Methods in Natural Language
Processing (EMNLP 2022) | Proceedings of the 2022 Conference on Empirical Methods in Natural
Language Processing (EMNLP 2022): Industry Track | 10.18653/v1/2022.emnlp-industry.41 | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-world business applications require a trade-off between language model
performance and size. We propose a new method for model compression that relies
on vocabulary transfer. We evaluate the method on various vertical domains and
downstream tasks. Our results indicate that vocabulary transfer can be
effectively used in combination with other compression techniques, yielding a
significant reduction in model size and inference time while marginally
compromising on performance.
| [
{
"created": "Thu, 15 Feb 2024 14:37:07 GMT",
"version": "v1"
}
] | 2024-02-16 | [
[
"Gee",
"Leonidas",
""
],
[
"Zugarini",
"Andrea",
""
],
[
"Rigutini",
"Leonardo",
""
],
[
"Torroni",
"Paolo",
""
]
] |
2402.09982 | Leonardo Rigutini | Enrico Randellini and Leonardo Rigutini and Claudio Sacca' | Data Augmentation and Transfer Learning Approaches Applied to Facial
Expressions Recognition | The 11th International Conference on Artificial Intelligence, Soft
Computing and Applications (AIAA 2021) | Proceeding of the 11th International Conference on Artificial
Intelligence, Soft Computing and Applications (AIAA 2021) | 10.5121/csit.2021.111912 | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The face expression is the first thing we pay attention to when we want to
understand a person's state of mind. Thus, the ability to recognize facial
expressions in an automatic way is a very interesting research field. In this
paper, because of the small size of available training datasets, we propose a
novel data augmentation technique that improves performance in the
recognition task. We apply geometrical transformations and build GAN models
from scratch that are able to generate new synthetic images for each emotion
type. We then fine-tune pretrained convolutional neural networks with different
architectures on the augmented datasets. To measure the generalization ability
of the models, we apply an extra-database protocol approach: we train models on
the augmented versions of the training dataset and test them on two different
databases. The combination of these techniques allows us to reach average accuracy
values on the order of 85\% for the InceptionResNetV2 model.
| [
{
"created": "Thu, 15 Feb 2024 14:46:03 GMT",
"version": "v1"
}
] | 2024-02-16 | [
[
"Randellini",
"Enrico",
""
],
[
"Rigutini",
"Leonardo",
""
],
[
"Sacca'",
"Claudio",
""
]
] |
2402.10002 | Hai-Tao Yu | Hai-Tao Yu, Mofei Song | MM-Point: Multi-View Information-Enhanced Multi-Modal Self-Supervised 3D
Point Cloud Understanding | Accepted by AAAI 2024 | AAAI 2024 | null | null | cs.CV cs.AI cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In perception, multiple sensory information is integrated to map visual
information from 2D views onto 3D objects, which is beneficial for
understanding in 3D environments. But in terms of a single 2D view rendered
from different angles, only limited partial information can be provided. The
richness and value of Multi-view 2D information can provide superior
self-supervised signals for 3D objects. In this paper, we propose a novel
self-supervised point cloud representation learning method, MM-Point, which is
driven by intra-modal and inter-modal similarity objectives. The core of
MM-Point lies in the Multi-modal interaction and transmission between 3D
objects and multiple 2D views at the same time. To perform the consistent
cross-modal objective over 2D multi-view information more effectively with
contrastive learning, we further propose Multi-MLP and
Multi-level Augmentation strategies. Through carefully designed transformation
strategies, we further learn Multi-level invariance in 2D Multi-views. MM-Point
demonstrates state-of-the-art (SOTA) performance in various downstream tasks.
For instance, it achieves a peak accuracy of 92.4% on the synthetic dataset
ModelNet40, and a top accuracy of 87.8% on the real-world dataset ScanObjectNN,
comparable to fully supervised methods. Additionally, we demonstrate its
effectiveness in tasks such as few-shot classification, 3D part segmentation
and 3D semantic segmentation.
| [
{
"created": "Thu, 15 Feb 2024 15:10:17 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Feb 2024 07:42:24 GMT",
"version": "v2"
},
{
"created": "Sun, 25 Feb 2024 07:58:07 GMT",
"version": "v3"
}
] | 2024-03-11 | [
[
"Yu",
"Hai-Tao",
""
],
[
"Song",
"Mofei",
""
]
] |
2402.10061 | Wieland Morgenstern | Wieland Morgenstern, Niklas Gard, Simon Baumann, Anna Hilsmann, Peter
Eisert | X-maps: Direct Depth Lookup for Event-based Structured Light Systems | Accepted at the CVPR 2023 Workshop on Event-based Vision:
https://tub-rip.github.io/eventvision2023/ | 2023 IEEE/CVF Conference on Computer Vision and Pattern
Recognition Workshops (CVPRW), Vancouver, BC, Canada, 2023, pp. 4007-4015 | 10.1109/CVPRW59228.2023.00418 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new approach to direct depth estimation for Spatial Augmented
Reality (SAR) applications using event cameras. These dynamic vision sensors
are a great fit to be paired with laser projectors for depth estimation in a
structured light approach. Our key contributions involve a conversion of the
projector time map into a rectified X-map, capturing x-axis correspondences for
incoming events and enabling direct disparity lookup without any additional
search. Compared to previous implementations, this significantly simplifies
depth estimation, making it more efficient, while the accuracy is similar to
the time map-based process. Moreover, we compensate for the non-linear temporal
behavior of cheap laser projectors with a simple time map calibration, resulting
in improved performance and increased depth estimation accuracy. Since depth
estimation is executed by two lookups only, it can be executed almost instantly
(less than 3 ms per frame with a Python implementation) for incoming events.
This allows for real-time interactivity and responsiveness, which makes our
approach especially suitable for SAR experiences where low latency, high frame
rates and direct feedback are crucial. We present valuable insights gained into
data transformed into X-maps and evaluate our depth from disparity estimation
against the state-of-the-art time map-based results. Additional results and
code are available on our project page: https://fraunhoferhhi.github.io/X-maps/
| [
{
"created": "Thu, 15 Feb 2024 16:29:46 GMT",
"version": "v1"
}
] | 2024-02-16 | [
[
"Morgenstern",
"Wieland",
""
],
[
"Gard",
"Niklas",
""
],
[
"Baumann",
"Simon",
""
],
[
"Hilsmann",
"Anna",
""
],
[
"Eisert",
"Peter",
""
]
] |
2402.10067 | Kristina Dzeparoska | Kristina Dzeparoska, Jieyu Lin, Ali Tizghadam, Alberto Leon-Garcia | LLM-based policy generation for intent-based management of applications | This article has been accepted for publication in 2023 19th
International Conference on Network and Service Management (CNSM), 3rd
International Workshop on Analytics for Service and Application Management
(AnServApp 2023) | 2023 19th International Conference on Network and Service
Management (CNSM), 2023, pp. 1-7 | 10.23919/CNSM59352.2023.10327837 | null | cs.DC cs.AI cs.FL cs.HC cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Automated management requires decomposing high-level user requests, such as
intents, into an abstraction that the system can understand and execute. This is
challenging because even a simple intent requires performing a number of
ordered steps. Moreover, the task of identifying and adapting these steps (as
conditions change) requires a decomposition approach that cannot be exactly
pre-defined. To tackle these challenges and support automated intent
decomposition and execution, we explore the few-shot capability of Large
Language Models (LLMs). We propose a pipeline that progressively decomposes
intents by generating the required actions using a policy-based abstraction.
This allows us to automate the policy execution by creating a closed control
loop for the intent deployment. To do so, we generate and map the policies to
APIs and form application management loops that perform the necessary
monitoring, analysis, planning and execution. We evaluate our proposal with a
use-case to fulfill and assure an application service chain of virtual network
functions. Using our approach, we can generalize and generate the necessary
steps to realize intents, thereby enabling intent automation for application
management.
| [
{
"created": "Mon, 22 Jan 2024 15:37:04 GMT",
"version": "v1"
}
] | 2024-02-16 | [
[
"Dzeparoska",
"Kristina",
""
],
[
"Lin",
"Jieyu",
""
],
[
"Tizghadam",
"Ali",
""
],
[
"Leon-Garcia",
"Alberto",
""
]
] |
2402.10135 | Irina Ar\'evalo | Jose L. Salmeron, Irina Ar\'evalo, Antonio Ruiz-Celma | Benchmarking federated strategies in Peer-to-Peer Federated learning for
biomedical data | null | Heliyon 9 (2023) e16925 | null | null | cs.LG cs.AI cs.DC | http://creativecommons.org/licenses/by/4.0/ | The increasing requirements for data protection and privacy have attracted
huge research interest in distributed artificial intelligence and specifically
on federated learning, an emerging machine learning approach that allows the
construction of a model between several participants who hold their own private
data. In the initial proposal of federated learning the architecture was
centralised and the aggregation was done with federated averaging, meaning that
a central server will orchestrate the federation using the most straightforward
averaging strategy. This research is focused on testing different federated
strategies in a peer-to-peer environment. The authors propose various
aggregation strategies for federated learning, including weighted averaging
aggregation, using different factors and strategies based on participant
contribution. The strategies are tested with varying data sizes to identify the
most robust ones. This research tests the strategies with several biomedical
datasets and the results of the experiments show that the accuracy-based
weighted average outperforms the classical federated averaging method.
| [
{
"created": "Thu, 15 Feb 2024 17:38:32 GMT",
"version": "v1"
}
] | 2024-02-16 | [
[
"Salmeron",
"Jose L.",
""
],
[
"Arévalo",
"Irina",
""
],
[
"Ruiz-Celma",
"Antonio",
""
]
] |
2402.10365 | Robert Kosk | Robert Kosk, Richard Southern, Lihua You, Shaojun Bian, Willem Kokke,
Greg Maguire | Deep Spectral Meshes: Multi-Frequency Facial Mesh Processing with Graph
Neural Networks | 26 pages, 10 figures, journal article | Electronics. 2024; 13(4):720 | 10.3390/electronics13040720 | null | cs.CV cs.CG cs.GR | http://creativecommons.org/licenses/by/4.0/ | With the rising popularity of virtual worlds, the importance of data-driven
parametric models of 3D meshes has grown rapidly. Numerous applications, such
as computer vision, procedural generation, and mesh editing, vastly rely on
these models. However, current approaches do not allow for independent editing
of deformations at different frequency levels. They also do not benefit from
representing deformations at different frequencies with dedicated
representations, which would better expose their properties and improve the
generated meshes' geometric and perceptual quality. In this work, spectral
meshes are introduced as a method to decompose mesh deformations into
low-frequency and high-frequency deformations. These features of low- and
high-frequency deformations are used for representation learning with graph
convolutional networks. A parametric model for 3D facial mesh synthesis is
built upon the proposed framework, exposing user parameters that control
disentangled high- and low-frequency deformations. Independent control of
deformations at different frequencies and generation of plausible synthetic
examples are mutually exclusive objectives. A Conditioning Factor is introduced
to leverage these objectives. Our model takes further advantage of spectral
partitioning by representing different frequency levels with disparate, more
suitable representations. Low frequencies are represented with standardised
Euclidean coordinates, and high frequencies with a normalised deformation
representation (DR). This paper investigates applications of our proposed
approach in mesh reconstruction, mesh interpolation, and multi-frequency
editing. It is demonstrated that our method improves the overall quality of
generated meshes on most datasets when considering both the $L_1$ norm and
perceptual Dihedral Angle Mesh Error (DAME) metrics.
| [
{
"created": "Thu, 15 Feb 2024 23:17:08 GMT",
"version": "v1"
}
] | 2024-02-19 | [
[
"Kosk",
"Robert",
""
],
[
"Southern",
"Richard",
""
],
[
"You",
"Lihua",
""
],
[
"Bian",
"Shaojun",
""
],
[
"Kokke",
"Willem",
""
],
[
"Maguire",
"Greg",
""
]
] |
2402.10373 | Yanis Labrak | Yanis Labrak, Adrien Bazoge, Emmanuel Morin, Pierre-Antoine Gourraud,
Mickael Rouvier, Richard Dufour | BioMistral: A Collection of Open-Source Pretrained Large Language Models
for Medical Domains | Accepted at ACL 2024 - Proceedings of the 62nd Annual Meeting of the
Association for Computational Linguistics (Volume 1: Long Papers) | Proceedings of the 62nd Annual Meeting of the Association for
Computational Linguistics - Volume 1: Long Papers (ACL 2024) | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | Large Language Models (LLMs) have demonstrated remarkable versatility in
recent years, offering potential applications across specialized domains such
as healthcare and medicine. Despite the availability of various open-source
LLMs tailored for health contexts, adapting general-purpose LLMs to the medical
domain presents significant challenges. In this paper, we introduce BioMistral,
an open-source LLM tailored for the biomedical domain, utilizing Mistral as its
foundation model and further pre-trained on PubMed Central. We conduct a
comprehensive evaluation of BioMistral on a benchmark comprising 10 established
medical question-answering (QA) tasks in English. We also explore lightweight
models obtained through quantization and model merging approaches. Our results
demonstrate BioMistral's superior performance compared to existing open-source
medical models and its competitive edge against proprietary counterparts.
Finally, to address the limited availability of data beyond English and to
assess the multilingual generalization of medical LLMs, we automatically
translated and evaluated this benchmark into 7 other languages. This marks the
first large-scale multilingual evaluation of LLMs in the medical domain.
Datasets, multilingual evaluation benchmarks, scripts, and all the models
obtained during our experiments are freely released.
| [
{
"created": "Thu, 15 Feb 2024 23:39:04 GMT",
"version": "v1"
},
{
"created": "Sun, 9 Jun 2024 15:19:09 GMT",
"version": "v2"
},
{
"created": "Wed, 17 Jul 2024 09:34:00 GMT",
"version": "v3"
}
] | 2024-07-18 | [
[
"Labrak",
"Yanis",
""
],
[
"Bazoge",
"Adrien",
""
],
[
"Morin",
"Emmanuel",
""
],
[
"Gourraud",
"Pierre-Antoine",
""
],
[
"Rouvier",
"Mickael",
""
],
[
"Dufour",
"Richard",
""
]
] |
2402.10404 | Ji-Hoon Park | Ji-Hoon Park, Yeong-Joon Ju, and Seong-Whan Lee | Explaining generative diffusion models via visual analysis for
interpretable decision-making process | 22 pages, published in Expert Systems with Applications | Expert Systems with Applications 248 (2024) 123231 | 10.1016/j.eswa.2024.123231 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Diffusion models have demonstrated remarkable performance in generation
tasks. Nevertheless, explaining the diffusion process remains challenging due
to it being a sequence of denoising noisy images that are difficult for experts
to interpret. To address this issue, we propose three research questions to
interpret the diffusion process from the perspective of the visual concepts
generated by the model and the region where the model attends in each time
step. We devise tools for visualizing the diffusion process and answering the
aforementioned research questions to render the diffusion process
human-understandable. We show how the output is progressively generated in the
diffusion process by explaining the level of denoising and highlighting
relationships to foundational visual concepts at each time step through the
results of experiments with various visual analyses using the tools. Throughout
the training of the diffusion model, the model learns diverse visual concepts
corresponding to each time-step, enabling the model to predict varying levels
of visual concepts at different stages. We substantiate our tools using Area
Under Cover (AUC) score, correlation quantification, and cross-attention
mapping. Our findings provide insights into the diffusion process and pave the
way for further research into explainable diffusion mechanisms.
| [
{
"created": "Fri, 16 Feb 2024 02:12:20 GMT",
"version": "v1"
}
] | 2024-02-19 | [
[
"Park",
"Ji-Hoon",
""
],
[
"Ju",
"Yeong-Joon",
""
],
[
"Lee",
"Seong-Whan",
""
]
] |
2402.10515 | Sagnik Bhattacharya | Sagnik Bhattacharya, Junyoung Choi, Joohyun Lee | Power-Efficient Indoor Localization Using Adaptive Channel-aware
Ultra-wideband DL-TDOA | null | IEEE GLOBECOM 2023 | null | null | eess.SP cs.AI | http://creativecommons.org/licenses/by/4.0/ | Among the various Ultra-wideband (UWB) ranging methods, the absence of uplink
communication or centralized computation makes downlink
time-difference-of-arrival (DL-TDOA) localization the most suitable for
large-scale industrial deployments. However, temporary or permanent obstacles
in the deployment region often lead to non-line-of-sight (NLOS) channel path
and signal outage effects, which result in localization errors. Prior research
has addressed this problem by increasing the ranging frequency, which leads to
a heavy increase in the user device power consumption. It also does not
contribute to any increase in localization accuracy under line-of-sight (LOS)
conditions. In this paper, we propose and implement a novel low-power
channel-aware dynamic frequency DL-TDOA ranging algorithm. It comprises an NLOS
probability predictor based on a convolutional neural network (CNN), a dynamic
ranging frequency control module, and an IMU sensor-based ranging filter. Based
on the conducted experiments, we show that the proposed algorithm achieves 50%
higher accuracy in NLOS conditions while having 46% lower power consumption in
LOS conditions compared to baseline methods from prior research.
| [
{
"created": "Fri, 16 Feb 2024 09:04:04 GMT",
"version": "v1"
}
] | 2024-02-19 | [
[
"Bhattacharya",
"Sagnik",
""
],
[
"Choi",
"Junyoung",
""
],
[
"Lee",
"Joohyun",
""
]
] |
2402.10553 | Leonardo Rigutini | Andrea Pazienza and Nicola Macchiarulo and Felice Vitulano and Antonio
Fiorentini and Marco Cammisa and Leonardo Rigutini and Ernesto Di Iorio and
Achille Globo and Antonio Trevisi | A novel integrated industrial approach with cobots in the age of
industry 4.0 through conversational interaction and computer vision | null | Proceedings of the 6th Italian Conference on Computational
Linguistics (CLiC-it 2019) | null | null | cs.RO cs.CL cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | From robots that replace workers to robots that serve as helpful colleagues,
the field of robotic automation is experiencing a new trend that represents a
huge challenge for component manufacturers. The contribution starts from an
innovative vision that sees an ever closer collaboration between Cobot, able to
do a specific physical job with precision, the AI world, able to analyze
information and support the decision-making process, and the human, able to have
a strategic vision of the future.
| [
{
"created": "Fri, 16 Feb 2024 10:35:01 GMT",
"version": "v1"
}
] | 2024-02-19 | [
[
"Pazienza",
"Andrea",
""
],
[
"Macchiarulo",
"Nicola",
""
],
[
"Vitulano",
"Felice",
""
],
[
"Fiorentini",
"Antonio",
""
],
[
"Cammisa",
"Marco",
""
],
[
"Rigutini",
"Leonardo",
""
],
[
"Di Iorio",
"Ernesto",
""
],
[
"Globo",
"Achille",
""
],
[
"Trevisi",
"Antonio",
""
]
] |
2402.10558 | Leonardo Rigutini | Achille Globo and Antonio Trevisi and Andrea Zugarini and Leonardo
Rigutini and Marco Maggini and Stefano Melacci | Neural paraphrasing by automatically crawled and aligned sentence pairs | The 6th International Conference on Social Networks Analysis,
Management and Security (SNAMS 2019) | Proceedings of The 6th International Conference on Social Networks
Analysis, Management and Security (SNAMS 2019) | 10.1109/SNAMS.2019.8931824 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Paraphrasing is the task of re-writing an input text using other words,
without altering the meaning of the original content. Conversational systems
can exploit automatic paraphrasing to make the conversation more natural, e.g.,
talking about a certain topic using different paraphrases at different times.
Recently, the task of automatically generating paraphrases has been
approached in the context of Natural Language Generation (NLG). While many
existing systems simply consist of rule-based models, the recent success of
Deep Neural Networks in several NLG tasks naturally suggests the possibility of
exploiting such networks for generating paraphrases. However, the main obstacle
toward neural-network-based paraphrasing is the lack of large datasets with
aligned pairs of sentences and paraphrases, that are needed to efficiently
train the neural models. In this paper we present a method for the automatic
generation of large aligned corpora, that is based on the assumption that news
and blog websites talk about the same events using different narrative styles.
We propose a similarity search procedure with linguistic constraints that,
given a reference sentence, is able to locate the most similar candidate
paraphrases out from millions of indexed sentences. The data generation process
is evaluated in the case of the Italian language, performing experiments using
pointer-based deep neural architectures.
| [
{
"created": "Fri, 16 Feb 2024 10:40:38 GMT",
"version": "v1"
}
] | 2024-02-19 | [
[
"Globo",
"Achille",
""
],
[
"Trevisi",
"Antonio",
""
],
[
"Zugarini",
"Andrea",
""
],
[
"Rigutini",
"Leonardo",
""
],
[
"Maggini",
"Marco",
""
],
[
"Melacci",
"Stefano",
""
]
] |
2402.10717 | Raktim Kumar Mondol | Raktim Kumar Mondol, Ewan K.A. Millar, Arcot Sowmya, Erik Meijering | BioFusionNet: Deep Learning-Based Survival Risk Stratification in ER+
Breast Cancer Through Multifeature and Multimodal Data Fusion | Keywords: Multimodal Fusion, Breast Cancer, Whole Slide Images, Deep
Neural Network, Survival Prediction | JBHI, 24 June 2024 | 10.1109/JBHI.2024.3418341 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Breast cancer is a significant health concern affecting millions of women
worldwide. Accurate survival risk stratification plays a crucial role in
guiding personalised treatment decisions and improving patient outcomes. Here
we present BioFusionNet, a deep learning framework that fuses image-derived
features with genetic and clinical data to obtain a holistic profile and
achieve survival risk stratification of ER+ breast cancer patients. We employ
multiple self-supervised feature extractors (DINO and MoCoV3) pretrained on
histopathological patches to capture detailed image features. These features
are then fused by a variational autoencoder and fed to a self-attention network
generating patient-level features. A co-dual-cross-attention mechanism combines
the histopathological features with genetic data, enabling the model to capture
the interplay between them. Additionally, clinical data is incorporated using a
feed-forward network, further enhancing predictive performance and achieving
comprehensive multimodal feature integration. Furthermore, we introduce a
weighted Cox loss function, specifically designed to handle imbalanced survival
data, which is a common challenge. Our model achieves a mean concordance index
of 0.77 and a time-dependent area under the curve of 0.84, outperforming
state-of-the-art methods. It predicts risk (high versus low) with prognostic
significance for overall survival in univariate analysis (HR=2.99, 95% CI:
1.88--4.78, p<0.005), and maintains independent significance in multivariate
analysis incorporating standard clinicopathological variables (HR=2.91, 95%
CI: 1.80--4.68, p<0.005).
| [
{
"created": "Fri, 16 Feb 2024 14:19:33 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jun 2024 02:14:12 GMT",
"version": "v2"
}
] | 2024-07-02 | [
[
"Mondol",
"Raktim Kumar",
""
],
[
"Millar",
"Ewan K. A.",
""
],
[
"Sowmya",
"Arcot",
""
],
[
"Meijering",
"Erik",
""
]
] |
2402.10753 | Junjie Ye | Junjie Ye, Sixian Li, Guanyu Li, Caishuang Huang, Songyang Gao, Yilong
Wu, Qi Zhang, Tao Gui, Xuanjing Huang | ToolSword: Unveiling Safety Issues of Large Language Models in Tool
Learning Across Three Stages | Accepted by ACL 2024 Main Conference | Proceedings of the 62nd Annual Meeting of the Association for
Computational Linguistics 2024 (Volume 1: Long Papers) | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tool learning is widely acknowledged as a foundational approach for deploying
large language models (LLMs) in real-world scenarios. While current research
primarily emphasizes leveraging tools to augment LLMs, it frequently neglects
emerging safety considerations tied to their application. To fill this gap, we
present *ToolSword*, a comprehensive framework dedicated to meticulously
investigating safety issues linked to LLMs in tool learning. Specifically,
ToolSword delineates six safety scenarios for LLMs in tool learning,
encompassing **malicious queries** and **jailbreak attacks** in the input
stage, **noisy misdirection** and **risky cues** in the execution stage, and
**harmful feedback** and **error conflicts** in the output stage. Experiments
conducted on 11 open-source and closed-source LLMs reveal enduring safety
challenges in tool learning, such as handling harmful queries, employing risky
tools, and delivering detrimental feedback, which even GPT-4 is susceptible to.
Moreover, we conduct further studies with the aim of fostering research on tool
learning safety. The data is released in
https://github.com/Junjie-Ye/ToolSword.
| [
{
"created": "Fri, 16 Feb 2024 15:19:46 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Aug 2024 04:12:00 GMT",
"version": "v2"
}
] | 2024-08-19 | [
[
"Ye",
"Junjie",
""
],
[
"Li",
"Sixian",
""
],
[
"Li",
"Guanyu",
""
],
[
"Huang",
"Caishuang",
""
],
[
"Gao",
"Songyang",
""
],
[
"Wu",
"Yilong",
""
],
[
"Zhang",
"Qi",
""
],
[
"Gui",
"Tao",
""
],
[
"Huang",
"Xuanjing",
""
]
] |
2402.10776 | Eduardo Juarez | H. Fabelo, S. Ortega, A. Szolna, D. Bulters, J.F. Pineiro, S. Kabwama,
A. Shanahan, H. Bulstrode, S. Bisshopp, B.R. Kiran, D. Ravi, R. Lazcano, D.
Madronal, C. Sosa, C. Espino, M. Marquez, M. De la Luz Plaza, R. Camacho, D.
Carrera, M. Hernandez, G.M. Callico, J. Morera, B. Stanciulescu, G.Z. Yang,
R. Salvador, E. Juarez, C. Sanz and R. Sarmiento | In-Vivo Hyperspectral Human Brain Image Database for Brain Cancer
Detection | 19 pages, 12 figures | IEEE Access, 2019, 7, pp. 39098-39116 | 10.1109/ACCESS.2019.2904788 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | The use of hyperspectral imaging for medical applications is becoming more
common in recent years. One of the main obstacles that researchers find when
developing hyperspectral algorithms for medical applications is the lack of
specific, publicly available, and hyperspectral medical data. The work
described in this paper was developed within the framework of the European
project HELICoiD (HypErspectraL Imaging Cancer Detection), which had as a main
goal the application of hyperspectral imaging to the delineation of brain
tumors in real-time during neurosurgical operations. In this paper, the
methodology followed to generate the first hyperspectral database of in-vivo
human brain tissues is presented. Data was acquired employing a customized
hyperspectral acquisition system capable of capturing information in the Visual
and Near InfraRed (VNIR) range from 400 to 1000 nm. Repeatability was assessed
for the cases where two images of the same scene were captured consecutively.
The analysis reveals that the system works more efficiently in the spectral
range between 450 and 900 nm. A total of 36 hyperspectral images from 22
different patients were obtained. From these data, more than 300 000 spectral
signatures were labeled employing a semi-automatic methodology based on the
spectral angle mapper algorithm. Four different classes were defined: normal
tissue, tumor tissue, blood vessel, and background elements. All the
hyperspectral data has been made available in a public repository.
| [
{
"created": "Fri, 16 Feb 2024 15:58:45 GMT",
"version": "v1"
}
] | 2024-02-19 | [
[
"Fabelo",
"H.",
""
],
[
"Ortega",
"S.",
""
],
[
"Szolna",
"A.",
""
],
[
"Bulters",
"D.",
""
],
[
"Pineiro",
"J. F.",
""
],
[
"Kabwama",
"S.",
""
],
[
"Shanahan",
"A.",
""
],
[
"Bulstrode",
"H.",
""
],
[
"Bisshopp",
"S.",
""
],
[
"Kiran",
"B. R.",
""
],
[
"Ravi",
"D.",
""
],
[
"Lazcano",
"R.",
""
],
[
"Madronal",
"D.",
""
],
[
"Sosa",
"C.",
""
],
[
"Espino",
"C.",
""
],
[
"Marquez",
"M.",
""
],
[
"Plaza",
"M. De la Luz",
""
],
[
"Camacho",
"R.",
""
],
[
"Carrera",
"D.",
""
],
[
"Hernandez",
"M.",
""
],
[
"Callico",
"G. M.",
""
],
[
"Morera",
"J.",
""
],
[
"Stanciulescu",
"B.",
""
],
[
"Yang",
"G. Z.",
""
],
[
"Salvador",
"R.",
""
],
[
"Juarez",
"E.",
""
],
[
"Sanz",
"C.",
""
],
[
"Sarmiento",
"R.",
""
]
] |
2402.10828 | Jianhao Yuan | Jianhao Yuan, Shuyang Sun, Daniel Omeiza, Bo Zhao, Paul Newman, Lars
Kunze, Matthew Gadd | RAG-Driver: Generalisable Driving Explanations with Retrieval-Augmented
In-Context Learning in Multi-Modal Large Language Model | 14 pages, 6 figures | Robotics: Science and Systems (RSS) 2024 | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by/4.0/ | We need to trust robots that use often opaque AI methods. They need to
explain themselves to us, and we need to trust their explanation. In this
regard, explainability plays a critical role in trustworthy autonomous
decision-making to foster transparency and acceptance among end users,
especially in complex autonomous driving. Recent advancements in Multi-Modal
Large Language models (MLLMs) have shown promising potential in enhancing the
explainability as a driving agent by producing control predictions along with
natural language explanations. However, severe data scarcity due to expensive
annotation costs and significant domain gaps between different datasets makes
the development of a robust and generalisable system an extremely challenging
task. Moreover, the prohibitively expensive training requirements of MLLM and
the unsolved problem of catastrophic forgetting further limit their
generalisability post-deployment. To address these challenges, we present
RAG-Driver, a novel retrieval-augmented multi-modal large language model that
leverages in-context learning for high-performance, explainable, and
generalisable autonomous driving. By grounding in retrieved expert
demonstration, we empirically validate that RAG-Driver achieves
state-of-the-art performance in producing driving action explanations,
justifications, and control signal prediction. More importantly, it exhibits
exceptional zero-shot generalisation capabilities to unseen environments
without further training endeavours.
| [
{
"created": "Fri, 16 Feb 2024 16:57:18 GMT",
"version": "v1"
},
{
"created": "Wed, 29 May 2024 14:44:20 GMT",
"version": "v2"
}
] | 2024-05-30 | [
[
"Yuan",
"Jianhao",
""
],
[
"Sun",
"Shuyang",
""
],
[
"Omeiza",
"Daniel",
""
],
[
"Zhao",
"Bo",
""
],
[
"Newman",
"Paul",
""
],
[
"Kunze",
"Lars",
""
],
[
"Gadd",
"Matthew",
""
]
] |
2402.10847 | Ekta Gavas | Ekta Gavas, Kaustubh Olpadkar, Anoop Namboodiri | Enhancement-Driven Pretraining for Robust Fingerprint Representation
Learning | 8 pages, 4 figures, Accepted at 19th VISIGRAPP 2024: VISAPP
conference | Proceedings of the 19th International Joint Conference on Computer
Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP
2024) - Volume 2: VISAPP, ISBN 978-989-758-679-8, ISSN 2184-4321, pages
821-828 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fingerprint recognition stands as a pivotal component of biometric
technology, with diverse applications from identity verification to advanced
search tools. In this paper, we propose a unique method for deriving robust
fingerprint representations by leveraging enhancement-based pre-training.
Building on the achievements of U-Net-based fingerprint enhancement, our method
employs a specialized encoder to derive representations from fingerprint images
in a self-supervised manner. We further refine these representations, aiming to
enhance the verification capabilities. Our experimental results, tested on
publicly available fingerprint datasets, reveal a marked improvement in
verification performance against established self-supervised training
techniques. Our findings not only highlight the effectiveness of our method but
also pave the way for potential advancements. Crucially, our research indicates
that it is feasible to extract meaningful fingerprint representations from
degraded images without relying on enhanced samples.
| [
{
"created": "Fri, 16 Feb 2024 17:36:56 GMT",
"version": "v1"
}
] | 2024-02-19 | [
[
"Gavas",
"Ekta",
""
],
[
"Olpadkar",
"Kaustubh",
""
],
[
"Namboodiri",
"Anoop",
""
]
] |
2402.10943 | Benjamin Kiessling | Benjamin Kiessling (PSL), Gennady Kurin, Matthew Thomas Miller, Kader
Smail | Advances and Limitations in Open Source Arabic-Script OCR: A Case Study | null | Digital Studies / Le champ num{\'e}rique, 2021, 11 (1) | 10.16995/dscn.8094 | null | cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work presents an accuracy study of the open source OCR engine, Kraken,
on the leading Arabic scholarly journal, al-Abhath. In contrast with other
commercially available OCR engines, Kraken is shown to be capable of producing
highly accurate Arabic-script OCR. The study also assesses the relative
accuracy of typeface-specific and generalized models on the al-Abhath data and
provides a microanalysis of the ``error instances'' and the contextual features
that may have contributed to OCR misrecognition. Building on this analysis, the
paper argues that Arabic-script OCR can be significantly improved through (1) a
more systematic approach to training data production, and (2) the development
of key technological components, especially multi-language models and improved
line segmentation and layout analysis.
Cet article pr{\'e}sente une {\'e}tude d'exactitude du moteur ROC open
source, Kraken, sur la revue acad{\'e}mique arabe de premier rang, al-Abhath.
Contrairement {\`a} d'autres moteurs ROC disponibles sur le march{\'e}, Kraken
se r{\'e}v{\`e}le {\^e}tre capable de produire de la ROC extr{\^e}mement exacte
de l'{\'e}criture arabe. L'{\'e}tude {\'e}value aussi l'exactitude relative des
mod{\`e}les sp{\'e}cifiquement configur{\'e}s {\`a} des polices et celle des
mod{\`e}les g{\'e}n{\'e}ralis{\'e}s sur les donn{\'e}es d'al-Abhath et fournit
une microanalyse des "occurrences d'erreurs", ainsi qu'une microanalyse des
{\'e}l{\'e}ments contextuels qui pourraient avoir contribu{\'e} {\`a} la
m{\'e}reconnaissance ROC. S'appuyant sur cette analyse, cet article fait valoir
que la ROC de l'{\'e}criture arabe peut {\^e}tre consid{\'e}rablement
am{\'e}lior{\'e}e gr{\^a}ce {\`a} (1) une approche plus syst{\'e}matique
d'entra{\^i}nement de la production de donn{\'e}es et (2) gr{\^a}ce au
d{\'e}veloppement de composants technologiques fondamentaux,
notamment l'am{\'e}lioration des mod{\`e}les multilingues, de la segmentation de
ligne et de l'analyse de la mise en page.
| [
{
"created": "Thu, 8 Feb 2024 12:51:36 GMT",
"version": "v1"
}
] | 2024-02-20 | [
[
"Kiessling",
"Benjamin",
"",
"PSL"
],
[
"Kurin",
"Gennady",
""
],
[
"Miller",
"Matthew Thomas",
""
],
[
"Smail",
"Kader",
""
]
] |
2402.10948 | Wenyu Li | Wenyu Li, Yinuo Zhu, Xin Lin, Ming Li, Ziyue Jiang, Ziqian Zeng | Zero-shot Explainable Mental Health Analysis on Social Media by
Incorporating Mental Scales | 4 pages,2 figures | The Web Conference (WWW) 2024, Short Paper | 10.1145/3589335.3651584. | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Traditional discriminative approaches in mental health analysis are known for
their strong capacity but lack interpretability and demand large-scale
annotated data. The generative approaches, such as those based on large
language models (LLMs), have the potential to get rid of heavy annotations and
provide explanations but their capabilities still fall short compared to
discriminative approaches, and their explanations may be unreliable due to the
fact that the generation of explanation is a black-box process. Inspired by the
psychological assessment practice of using scales to evaluate mental states,
our method which is called Mental Analysis by Incorporating Mental Scales
(MAIMS), incorporates two procedures via LLMs. First, the patient completes
mental scales, and second, the psychologist interprets the collected
information from the mental scales and makes informed decisions. Experimental
results show that MAIMS outperforms other zero-shot methods. MAIMS can generate
more rigorous explanations based on the outputs of mental scales.
| [
{
"created": "Fri, 9 Feb 2024 09:44:06 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Mar 2024 02:02:02 GMT",
"version": "v2"
}
] | 2024-04-23 | [
[
"Li",
"Wenyu",
""
],
[
"Zhu",
"Yinuo",
""
],
[
"Lin",
"Xin",
""
],
[
"Li",
"Ming",
""
],
[
"Jiang",
"Ziyue",
""
],
[
"Zeng",
"Ziqian",
""
]
] |
2402.10967 | Jos\'e Alberto Ben\'itez-Andrades Ph.D. | Jos\'e Alberto Ben\'itez-Andrades, Isa\'ias Garc\'ia-Rodr\'iguez,
Carmen Benavides, H\'ector Alaiz-Moret\'on and Alejandro
Rodr\'iguez-Gonz\'alez | Social network analysis for personalized characterization and risk
assessment of alcohol use disorders in adolescents using semantic
technologies | null | Future Generation Computer Systems, Volume 106, May 2020, Pages
154-170 | 10.1016/j.future.2020.01.002 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Alcohol Use Disorder (AUD) is a major concern for public health organizations
worldwide, especially as regards the adolescent population. The consumption of
alcohol in adolescents is known to be influenced by seeing friends and even
parents drinking alcohol. Building on this fact, a number of studies into
alcohol consumption among adolescents have made use of Social Network Analysis
(SNA) techniques to study the different social networks (peers, friends,
family, etc.) with whom the adolescent is involved. These kinds of studies need
an initial phase of data gathering by means of questionnaires and a subsequent
analysis phase using the SNA techniques. The process involves a number of
manual data handling stages that are time consuming and error-prone. The use of
knowledge engineering techniques (including the construction of a domain
ontology) to represent the information, allows the automation of all the
activities, from the initial data collection to the results of the SNA study.
This paper shows how a knowledge model is constructed, and compares the results
obtained using the traditional method with this, fully automated model,
detailing the main advantages of the latter. In the case of the SNA analysis,
the validity of the results obtained with the knowledge engineering approach
are compared to those obtained manually using the UCINET, Cytoscape, Pajek and
Gephi to test the accuracy of the knowledge model.
| [
{
"created": "Wed, 14 Feb 2024 16:09:05 GMT",
"version": "v1"
}
] | 2024-02-20 | [
[
"Benítez-Andrades",
"José Alberto",
""
],
[
"García-Rodríguez",
"Isaías",
""
],
[
"Benavides",
"Carmen",
""
],
[
"Alaiz-Moretón",
"Héctor",
""
],
[
"Rodríguez-González",
"Alejandro",
""
]
] |
2402.10977 | Fengqi You | Benjamin Decardi-Nelson, Abdulelah S. Alshehri, Akshay Ajagekar,
Fengqi You | Generative AI and Process Systems Engineering: The Next Frontier | null | Computers & Chemical Engineering, Volume 187, August 2024, 108723 | 10.1016/j.compchemeng.2024.108723 | null | cs.LG cs.AI cs.SY eess.SY math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article explores how emerging generative artificial intelligence (GenAI)
models, such as large language models (LLMs), can enhance solution
methodologies within process systems engineering (PSE). These cutting-edge
GenAI models, particularly foundation models (FMs), which are pre-trained on
extensive, general-purpose datasets, offer versatile adaptability for a broad
range of tasks, including responding to queries, image generation, and complex
decision-making. Given the close relationship between advancements in PSE and
developments in computing and systems technologies, exploring the synergy
between GenAI and PSE is essential. We begin our discussion with a compact
overview of both classic and emerging GenAI models, including FMs, and then
dive into their applications within key PSE domains: synthesis and design,
optimization and integration, and process monitoring and control. In each
domain, we explore how GenAI models could potentially advance PSE
methodologies, providing insights and prospects for each area. Furthermore, the
article identifies and discusses potential challenges in fully leveraging GenAI
within PSE, including multiscale modeling, data requirements, evaluation
metrics and benchmarks, and trust and safety, thereby deepening the discourse
on effective GenAI integration into systems analysis, design, optimization,
operations, monitoring, and control. This paper provides a guide for future
research focused on the applications of emerging GenAI in PSE.
| [
{
"created": "Thu, 15 Feb 2024 18:20:42 GMT",
"version": "v1"
},
{
"created": "Mon, 6 May 2024 21:40:04 GMT",
"version": "v2"
}
] | 2024-06-18 | [
[
"Decardi-Nelson",
"Benjamin",
""
],
[
"Alshehri",
"Abdulelah S.",
""
],
[
"Ajagekar",
"Akshay",
""
],
[
"You",
"Fengqi",
""
]
] |
2402.11161 | Zongxia Li | Zongxia Li, Ishani Mondal, Yijun Liang, Huy Nghiem, Jordan Lee
Boyd-Graber | PEDANTS: Cheap but Effective and Interpretable Answer Equivalence | Efficient PEDANTS Classifier for short-form QA in github:
https://github.com/zli12321/qa_metrics. arXiv admin note: text overlap with
arXiv:2401.13170 | Empirical Methods in Natural Language Processing 2024 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Question answering (QA) can only make progress if we know if an answer is
correct, but current answer correctness (AC) metrics struggle with verbose,
free-form answers from large language models (LLMs). There are two challenges
with current short-form QA evaluations: a lack of diverse styles of evaluation
data and an over-reliance on expensive and slow LLMs. LLM-based scorers
correlate better with humans, but this expensive task has only been tested on
limited QA datasets. We rectify these issues by providing rubrics and datasets
for evaluating machine QA adopted from the Trivia community. We also propose an
efficient and interpretable QA evaluation that is more stable than exact
match and neural methods (BERTScore).
| [
{
"created": "Sat, 17 Feb 2024 01:56:19 GMT",
"version": "v1"
},
{
"created": "Sun, 7 Jul 2024 01:14:16 GMT",
"version": "v2"
},
{
"created": "Sat, 28 Sep 2024 02:57:29 GMT",
"version": "v3"
},
{
"created": "Thu, 10 Oct 2024 03:41:07 GMT",
"version": "v4"
},
{
"created": "Fri, 11 Oct 2024 20:56:36 GMT",
"version": "v5"
}
] | 2024-10-15 | [
[
"Li",
"Zongxia",
""
],
[
"Mondal",
"Ishani",
""
],
[
"Liang",
"Yijun",
""
],
[
"Nghiem",
"Huy",
""
],
[
"Boyd-Graber",
"Jordan Lee",
""
]
] |
2402.11175 | Yuxia Wang | Yuxia Wang, Jonibek Mansurov, Petar Ivanov, Jinyan Su, Artem
Shelmanov, Akim Tsvigun, Osama Mohanned Afzal, Tarek Mahmoud, Giovanni
Puccetti, Thomas Arnold, Alham Fikri Aji, Nizar Habash, Iryna Gurevych,
Preslav Nakov | M4GT-Bench: Evaluation Benchmark for Black-Box Machine-Generated Text
Detection | 29 pages | ACL 2024 main | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The advent of Large Language Models (LLMs) has brought an unprecedented surge
in machine-generated text (MGT) across diverse channels. This raises legitimate
concerns about its potential misuse and societal implications. The need to
identify and differentiate such content from genuine human-generated text is
critical in combating disinformation, preserving the integrity of education and
scientific fields, and maintaining trust in communication. In this work, we
address this problem by introducing a new benchmark based on a multilingual,
multi-domain, and multi-generator corpus of MGTs -- M4GT-Bench. The benchmark
is compiled of three tasks: (1) mono-lingual and multi-lingual binary MGT
detection; (2) multi-way detection, where one needs to identify which particular
model generated the text; and (3) mixed human-machine text detection, where a
word boundary delimiting MGT from human-written content should be determined.
On the developed benchmark, we have tested several MGT detection baselines and
also conducted an evaluation of human performance. We see that obtaining good
performance in MGT detection usually requires an access to the training data
from the same domain and generators. The benchmark is available at
https://github.com/mbzuai-nlp/M4GT-Bench.
| [
{
"created": "Sat, 17 Feb 2024 02:50:33 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Jun 2024 05:42:12 GMT",
"version": "v2"
}
] | 2024-06-28 | [
[
"Wang",
"Yuxia",
""
],
[
"Mansurov",
"Jonibek",
""
],
[
"Ivanov",
"Petar",
""
],
[
"Su",
"Jinyan",
""
],
[
"Shelmanov",
"Artem",
""
],
[
"Tsvigun",
"Akim",
""
],
[
"Afzal",
"Osama Mohanned",
""
],
[
"Mahmoud",
"Tarek",
""
],
[
"Puccetti",
"Giovanni",
""
],
[
"Arnold",
"Thomas",
""
],
[
"Aji",
"Alham Fikri",
""
],
[
"Habash",
"Nizar",
""
],
[
"Gurevych",
"Iryna",
""
],
[
"Nakov",
"Preslav",
""
]
] |
2402.11203 | Yizheng Huang | Yizheng Huang and Jimmy Huang | Exploring ChatGPT for Next-generation Information Retrieval:
Opportunities and Challenges | Survey Paper | Web Intelligence, vol. 22, no. 1, pp. 31-44, 2024 | 10.3233/WEB-230363 | null | cs.IR cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid advancement of artificial intelligence (AI) has highlighted ChatGPT
as a pivotal technology in the field of information retrieval (IR).
Distinguished from its predecessors, ChatGPT offers significant benefits that
have attracted the attention of both the industry and academic communities.
While some view ChatGPT as a groundbreaking innovation, others attribute its
success to the effective integration of product development and market
strategies. The emergence of ChatGPT, alongside GPT-4, marks a new phase in
Generative AI, generating content that is distinct from training examples and
exceeding the capabilities of the prior GPT-3 model by OpenAI. Unlike the
traditional supervised learning approach in IR tasks, ChatGPT challenges
existing paradigms, bringing forth new challenges and opportunities regarding
text quality assurance, model bias, and efficiency. This paper seeks to examine
the impact of ChatGPT on IR tasks and offer insights into its potential future
developments.
| [
{
"created": "Sat, 17 Feb 2024 05:44:40 GMT",
"version": "v1"
}
] | 2024-04-18 | [
[
"Huang",
"Yizheng",
""
],
[
"Huang",
"Jimmy",
""
]
] |
2402.11273 | Yifei Chen | Yifei Chen, Chenyan Zhang, Yifan Ke, Yiyu Huang, Xuezhou Dai, Feiwei
Qin, Yongquan Zhang, Xiaodong Zhang, Changmiao Wang | Semi-supervised Medical Image Segmentation Method Based on Cross-pseudo
Labeling Leveraging Strong and Weak Data Augmentation Strategies | 5 pages, 2 figures, accept ISBI2024 | ISBI 2024 | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional supervised learning methods have historically encountered certain
constraints in medical image segmentation due to the challenging collection
process, high labeling cost, low signal-to-noise ratio, and complex features
characterizing biomedical images. This paper proposes a semi-supervised model,
DFCPS, which innovatively incorporates the Fixmatch concept. This significantly
enhances the model's performance and generalizability through data augmentation
processing, employing varied strategies for unlabeled data. Concurrently, the
model design gives appropriate emphasis to the generation, filtration, and
refinement processes of pseudo-labels. The novel concept of
cross-pseudo-supervision is introduced, integrating consistency learning with
self-training. This enables the model to fully leverage pseudo-labels from
multiple perspectives, thereby enhancing training diversity. The DFCPS model is
compared with both baseline and advanced models using the publicly accessible
Kvasir-SEG dataset. Across all four subdivisions containing different
proportions of unlabeled data, our model consistently exhibits superior
performance. Our source code is available at
https://github.com/JustlfC03/DFCPS.
| [
{
"created": "Sat, 17 Feb 2024 13:07:44 GMT",
"version": "v1"
}
] | 2024-02-20 | [
[
"Chen",
"Yifei",
""
],
[
"Zhang",
"Chenyan",
""
],
[
"Ke",
"Yifan",
""
],
[
"Huang",
"Yiyu",
""
],
[
"Dai",
"Xuezhou",
""
],
[
"Qin",
"Feiwei",
""
],
[
"Zhang",
"Yongquan",
""
],
[
"Zhang",
"Xiaodong",
""
],
[
"Wang",
"Changmiao",
""
]
] |
2402.11274 | Yifei Chen | Chenyan Zhang, Yifei Chen, Zhenxiong Fan, Yiyu Huang, Wenchao Weng,
Ruiquan Ge, Dong Zeng, Changmiao Wang | TC-DiffRecon: Texture coordination MRI reconstruction method based on
diffusion model and modified MF-UNet method | 5 pages, 2 figures, accept ISBI2024 | ISBI 2024 | null | null | eess.IV cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, diffusion models have gained significant attention as a novel set
of deep learning-based generative methods. These models attempt to sample data
from a Gaussian distribution that adheres to a target distribution, and have
been successfully adapted to the reconstruction of MRI data. However, as an
unconditional generative model, the diffusion model typically disrupts image
coordination because of the consistent projection of data introduced by
conditional bootstrap. This often results in image fragmentation and
incoherence. Furthermore, the inherent limitations of the diffusion model often
lead to excessive smoothing of the generated images. In the same vein, some
deep learning-based models often suffer from poor generalization performance,
meaning their effectiveness is greatly affected by different acceleration
factors. To address these challenges, we propose a novel diffusion model-based
MRI reconstruction method, named TC-DiffRecon, which does not rely on a
specific acceleration factor for training. We also suggest the incorporation of
the MF-UNet module, designed to enhance the quality of MRI images generated by
the model while mitigating the over-smoothing issue to a certain extent. During
the image generation sampling process, we employ a novel TCKG module and a
Coarse-to-Fine sampling scheme. These additions aim to harmonize image texture,
expedite the sampling process, while achieving data consistency. Our source
code is available at https://github.com/JustlfC03/TC-DiffRecon.
| [
{
"created": "Sat, 17 Feb 2024 13:09:00 GMT",
"version": "v1"
}
] | 2024-02-20 | [
[
"Zhang",
"Chenyan",
""
],
[
"Chen",
"Yifei",
""
],
[
"Fan",
"Zhenxiong",
""
],
[
"Huang",
"Yiyu",
""
],
[
"Weng",
"Wenchao",
""
],
[
"Ge",
"Ruiquan",
""
],
[
"Zeng",
"Dong",
""
],
[
"Wang",
"Changmiao",
""
]
] |
2402.11287 | Tomas Jelinek | Tom\'a\v{s} Jel\'inek, Jon\'a\v{s} \v{S}er\'ych, Ji\v{r}\'i Matas | Dense Matchers for Dense Tracking | null | Proceedings of the 27th Computer Vision Winter Workshop.
Ljubljana: Slovenian Pattern Recognition Society, 2024. p. 18-28 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Optical flow is a useful input for various applications, including 3D
reconstruction, pose estimation, tracking, and structure-from-motion. Despite
its utility, the field of dense long-term tracking, especially over wide
baselines, has not been extensively explored. This paper extends the concept of
combining multiple optical flows over logarithmically spaced intervals as
proposed by MFT. We demonstrate the compatibility of MFT with different optical
flow networks, yielding results that surpass their individual performance.
Moreover, we present a simple yet effective combination of these networks
within the MFT framework. This approach proves to be competitive with more
sophisticated, non-causal methods in terms of position prediction accuracy,
highlighting the potential of MFT in enhancing long-term tracking applications.
| [
{
"created": "Sat, 17 Feb 2024 14:16:14 GMT",
"version": "v1"
}
] | 2024-02-22 | [
[
"Jelínek",
"Tomáš",
""
],
[
"Šerých",
"Jonáš",
""
],
[
"Matas",
"Jiří",
""
]
] |
2402.11305 | Juliette Marrie | Juliette Marrie, Michael Arbel, Julien Mairal, Diane Larlus | On Good Practices for Task-Specific Distillation of Large Pretrained
Visual Models | null | Published in Transactions on Machine Learning Research (TMLR),
2024 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large pretrained visual models exhibit remarkable generalization across
diverse recognition tasks. Yet, real-world applications often demand compact
models tailored to specific problems. Variants of knowledge distillation have
been devised for such a purpose, enabling task-specific compact models (the
students) to learn from a generic large pretrained one (the teacher). In this
paper, we show that the excellent robustness and versatility of recent
pretrained models challenge common practices established in the literature,
calling for a new set of optimal guidelines for task-specific distillation. To
address the lack of samples in downstream tasks, we also show that a variant of
Mixup based on stable diffusion complements standard data augmentation. This
strategy eliminates the need for engineered text prompts and improves
distillation of generic models into streamlined specialized networks.
| [
{
"created": "Sat, 17 Feb 2024 15:15:43 GMT",
"version": "v1"
},
{
"created": "Tue, 7 May 2024 15:30:45 GMT",
"version": "v2"
}
] | 2024-05-08 | [
[
"Marrie",
"Juliette",
""
],
[
"Arbel",
"Michael",
""
],
[
"Mairal",
"Julien",
""
],
[
"Larlus",
"Diane",
""
]
] |
2402.11319 | Junhyun Park | Junhyun Park, Seonghyeok Jang, Hyojae Park, Seongjun Bae, Minho Hwang | Hysteresis Compensation of Flexible Continuum Manipulator using RGBD
Sensing and Temporal Convolutional Network | 8 pages, 11 figures, 5 tables | IEEE Robotics and Automation Letters, Volume 9, Issue 7, 6091 -
6098, 2024 | 10.1109/LRA.2024.3398501 | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Flexible continuum manipulators are valued for minimally invasive surgery,
offering access to confined spaces through nonlinear paths. However,
cable-driven manipulators face control difficulties due to hysteresis from
cabling effects such as friction, elongation, and coupling. These effects are
difficult to model due to nonlinearity, and the difficulties become even more
evident when dealing with a long, coupled, multi-segmented manipulator. This
paper proposes a data-driven approach based on Deep Neural Networks (DNN) to
capture these nonlinear and previous states-dependent characteristics of cable
actuation. We collect physical joint configurations according to command joint
configurations using RGBD sensing and 7 fiducial markers to model the
hysteresis of the proposed manipulator. Results of a study comparing the
estimation performance of four DNN models show that the Temporal Convolutional
Network (TCN) demonstrates the highest predictive capability. Leveraging
trained TCNs, we build a control algorithm to compensate for hysteresis.
Tracking tests in task space using unseen trajectories show that the proposed
control algorithm reduces the average position and orientation error by 61.39%
(from 13.7mm to 5.29 mm) and 64.04% (from 31.17{\deg} to 11.21{\deg}),
respectively. This result implies that the proposed calibrated controller
effectively reaches the desired configurations by estimating the hysteresis of
the manipulator. Applying this method in real surgical scenarios has the
potential to enhance control precision and improve surgical performance.
| [
{
"created": "Sat, 17 Feb 2024 16:20:59 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Apr 2024 08:31:08 GMT",
"version": "v2"
},
{
"created": "Fri, 3 May 2024 17:19:31 GMT",
"version": "v3"
}
] | 2024-06-25 | [
[
"Park",
"Junhyun",
""
],
[
"Jang",
"Seonghyeok",
""
],
[
"Park",
"Hyojae",
""
],
[
"Bae",
"Seongjun",
""
],
[
"Hwang",
"Minho",
""
]
] |
2402.11353 | Young-Ho Kim | Eunkyung Jo, Yuin Jeong, SoHyun Park, Daniel A. Epstein, Young-Ho Kim | Understanding the Impact of Long-Term Memory on Self-Disclosure with
Large Language Model-Driven Chatbots for Public Health Intervention | Accepted to ACM CHI 2024 as a full paper | In Proceedings of the CHI Conference on Human Factors in Computing
Systems (CHI '24), May 11-16, 2024, Honolulu, HI, USA. ACM, New York, NY, USA | 10.1145/3613904.3642420 | null | cs.HC cs.AI cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent large language models (LLMs) offer the potential to support public
health monitoring by facilitating health disclosure through open-ended
conversations but rarely preserve the knowledge gained about individuals across
repeated interactions. Augmenting LLMs with long-term memory (LTM) presents an
opportunity to improve engagement and self-disclosure, but we lack an
understanding of how LTM impacts people's interaction with LLM-driven chatbots
in public health interventions. We examine the case of CareCall -- an
LLM-driven voice chatbot with LTM -- through the analysis of 1,252 call logs
and interviews with nine users. We found that LTM enhanced health disclosure
and fostered positive perceptions of the chatbot by offering familiarity.
However, we also observed challenges in promoting self-disclosure through LTM,
particularly around addressing chronic health conditions and privacy concerns.
We discuss considerations for LTM integration in LLM-driven chatbots for public
health monitoring, including carefully deciding what topics need to be
remembered in light of public health goals.
| [
{
"created": "Sat, 17 Feb 2024 18:05:53 GMT",
"version": "v1"
}
] | 2024-02-20 | [
[
"Jo",
"Eunkyung",
""
],
[
"Jeong",
"Yuin",
""
],
[
"Park",
"SoHyun",
""
],
[
"Epstein",
"Daniel A.",
""
],
[
"Kim",
"Young-Ho",
""
]
] |
2402.11457 | Shiyu Ni | Shiyu Ni, Keping Bi, Jiafeng Guo, Xueqi Cheng | When Do LLMs Need Retrieval Augmentation? Mitigating LLMs'
Overconfidence Helps Retrieval Augmentation | null | Findings of ACL2024 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have been found to have difficulty knowing they
do not possess certain knowledge and tend to provide specious answers in such
cases. Retrieval Augmentation (RA) has been extensively studied to mitigate
LLMs' hallucinations. However, due to the extra overhead and unassured quality
of retrieval, it may not be optimal to conduct RA all the time. A
straightforward idea is to only conduct retrieval when LLMs are uncertain about
a question. This motivates us to enhance the LLMs' ability to perceive their
knowledge boundaries to help RA. In this paper, we first quantitatively measure
LLMs' such ability and confirm their overconfidence. Then, we study how LLMs'
certainty about a question correlates with their dependence on external
retrieved information. We propose several methods to enhance LLMs' perception
of knowledge boundaries and show that they are effective in reducing
overconfidence. Additionally, equipped with these methods, LLMs can achieve
comparable or even better performance of RA with much fewer retrieval calls.
| [
{
"created": "Sun, 18 Feb 2024 04:57:19 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Jun 2024 08:08:47 GMT",
"version": "v2"
}
] | 2024-06-12 | [
[
"Ni",
"Shiyu",
""
],
[
"Bi",
"Keping",
""
],
[
"Guo",
"Jiafeng",
""
],
[
"Cheng",
"Xueqi",
""
]
] |
2402.11523 | Peijie Sun | Peijie Sun, Le Wu, Kun Zhang, Xiangzhi Chen, and Meng Wang | Neighborhood-Enhanced Supervised Contrastive Learning for Collaborative
Filtering | null | IEEE TKDE, 2023 | 10.1109/TKDE.2023.3317068 | null | cs.IR cs.AI | http://creativecommons.org/licenses/by/4.0/ | While effective in recommendation tasks, collaborative filtering (CF)
techniques face the challenge of data sparsity. Researchers have begun
leveraging contrastive learning to introduce additional self-supervised signals
to address this. However, this approach often unintentionally distances the
target user/item from their collaborative neighbors, limiting its efficacy. In
response, we propose a solution that treats the collaborative neighbors of the
anchor node as positive samples within the final objective loss function. This
paper focuses on developing two unique supervised contrastive loss functions
that effectively combine supervision signals with contrastive loss. We analyze
our proposed loss functions through the gradient lens, demonstrating that
different positive samples simultaneously influence updating the anchor node's
embeddings. These samples' impact depends on their similarities to the anchor
node and the negative samples. Using the graph-based collaborative filtering
model as our backbone and following the same data augmentation methods as the
existing contrastive learning model SGL, we effectively enhance the performance
of the recommendation model. Our proposed Neighborhood-Enhanced Supervised
Contrastive Loss (NESCL) model substitutes the contrastive loss function in SGL
with our novel loss function, showing marked performance improvement. On three
real-world datasets, Yelp2018, Gowalla, and Amazon-Book, our model surpasses
the original SGL by 10.09%, 7.09%, and 35.36% on NDCG@20, respectively.
| [
{
"created": "Sun, 18 Feb 2024 09:46:51 GMT",
"version": "v1"
}
] | 2024-02-20 | [
[
"Sun",
"Peijie",
""
],
[
"Wu",
"Le",
""
],
[
"Zhang",
"Kun",
""
],
[
"Chen",
"Xiangzhi",
""
],
[
"Wang",
"Meng",
""
]
] |
2402.11569 | Eric Nichols | Matou\v{s} Jel\'inek and Eric Nichols and Randy Gomez | Developing Autonomous Robot-Mediated Behavior Coaching Sessions with
Haru | Accepted as Late Breaking Report (LBR) at the 19th Annual ACM/IEEE
International Conference on Human Robot Interaction (HRI '24) | HRI '24 Companion, March 11-14, 2024, Boulder, CO, USA | 10.1145/3610978.3640583 | null | cs.RO cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | This study presents an empirical investigation into the design and impact of
autonomous dialogues in human-robot interaction for behavior change coaching.
We focus on the use of Haru, a tabletop social robot, and explore the
implementation of the Tiny Habits method for fostering positive behavior
change. The core of our study lies in developing a fully autonomous dialogue
system that maximizes Haru's emotional expressiveness and unique personality.
Our methodology involved iterative design and extensive testing of the dialogue
system, ensuring it effectively embodied the principles of the Tiny Habits
method while also incorporating strategies for trust-raising and
trust-dampening. The effectiveness of the final version of the dialogue was
evaluated in an experimental study with human participants (N=12). The results
indicated a significant improvement in perceptions of Haru's liveliness,
interactivity, and neutrality. Additionally, our study contributes to the
broader understanding of dialogue design in social robotics, offering practical
insights for future developments in the field.
| [
{
"created": "Sun, 18 Feb 2024 12:33:54 GMT",
"version": "v1"
}
] | 2024-02-20 | [
[
"Jelínek",
"Matouš",
""
],
[
"Nichols",
"Eric",
""
],
[
"Gomez",
"Randy",
""
]
] |