id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2402.00038 | Andrea Esposito | Antonio Curci and Andrea Esposito | Detecting Brain Tumors through Multimodal Neural Networks | Presented at NeroPRAI 2024 (co-located with ICPRAM 2024). This
version did not undergo peer review: refer to the open access version of
record (see DOI) | Proceedings of the 13th International Conference on Pattern
Recognition Applications and Methods (ICPRAM 2024) - NeroPRAI 2024 | 10.5220/0012608600003654 | null | eess.IV cs.CV cs.LG q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Tumors can manifest in various forms and in different areas of the human
body. Brain tumors are specifically hard to diagnose and treat because of the
complexity of the organ in which they develop. Detecting them in time can lower
the chances of death and facilitate the therapy process for patients. The use
of Artificial Intelligence (AI) and, more specifically, deep learning, has the
potential to significantly reduce costs in terms of time and resources for the
discovery and identification of tumors from images obtained through imaging
techniques. This research work aims to assess the performance of a multimodal
model for the classification of Magnetic Resonance Imaging (MRI) scans
processed as grayscale images. The results are promising, and in line with
similar works, as the model reaches an accuracy of around 98%. We also
highlight the need for explainability and transparency to ensure human control
and safety.
| [
{
"created": "Wed, 10 Jan 2024 13:06:52 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Mar 2024 12:47:51 GMT",
"version": "v2"
}
] | 2024-03-18 | [
[
"Curci",
"Antonio",
""
],
[
"Esposito",
"Andrea",
""
]
] |
2402.00046 | Sofiene Lassoued | Sofiene Lassoued, Andreas Schwung | Introducing PetriRL: An Innovative Framework for JSSP Resolution
Integrating Petri nets and Event-based Reinforcement Learning | null | Journal of Manufacturing Systems (2024) | 10.1016/j.jmsy.2024.04.028 | null | cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Resource utilization and production process optimization are crucial for
companies in today's competitive industrial landscape. Addressing the
complexities of job shop scheduling problems (JSSP) is essential to improving
productivity, reducing costs, and ensuring timely delivery. We propose PetriRL,
a novel framework integrating Petri nets and deep reinforcement learning (DRL)
for JSSP optimization. PetriRL capitalizes on the inherent strengths of Petri
nets in modelling discrete event systems while leveraging the advantages of a
graph structure. The Petri net governs automated components of the process,
ensuring adherence to JSSP constraints. This allows for synergistic
collaboration with optimization algorithms such as DRL, particularly in
critical decision-making. Unlike traditional methods, PetriRL eliminates the
need to preprocess JSSP instances into disjunctive graphs and enhances the
explainability of process status through its graphical structure based on
places and transitions. Additionally, the inherent graph structure of Petri
nets enables the dynamic additions of job operations during the inference phase
without requiring agent retraining, thus enhancing flexibility. Experimental
results demonstrate PetriRL's robust generalization across various instance
sizes and its competitive performance on public test benchmarks and randomly
generated instances. Results are compared to a wide range of optimization
solutions such as heuristics, metaheuristics, and learning-based algorithms.
Finally, the added values of the framework's key elements, such as event-based
control and action masking, are studied in the ablation study.
| [
{
"created": "Tue, 23 Jan 2024 12:30:49 GMT",
"version": "v1"
},
{
"created": "Wed, 8 May 2024 10:47:57 GMT",
"version": "v2"
}
] | 2024-05-09 | [
[
"Lassoued",
"Sofiene",
""
],
[
"Schwung",
"Andreas",
""
]
] |
2402.00085 | Xuecheng Niu | Xuecheng Niu, Akinori Ito, Takashi Nose | Scheduled Curiosity-Deep Dyna-Q: Efficient Exploration for Dialog Policy
Learning | Accepted to IEEE Access | IEEE Access, vol. 12, pp. 46940-46952, 2024 | 10.1109/ACCESS.2024.3376418 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training task-oriented dialog agents based on reinforcement learning is
time-consuming and requires a large number of interactions with real users. How
to grasp dialog policy within limited dialog experiences remains an obstacle
that makes the agent training process less efficient. In addition, most
previous frameworks start training by randomly choosing training samples, which
differs from the human learning method and hurts the efficiency and stability
of training. Therefore, we propose Scheduled Curiosity-Deep Dyna-Q (SC-DDQ), a
curiosity-driven curriculum learning framework based on a state-of-the-art
model-based reinforcement learning dialog model, Deep Dyna-Q (DDQ).
Furthermore, we designed learning schedules for SC-DDQ and DDQ, respectively,
following two opposite training strategies: classic curriculum learning and its
reverse version. Our results show that by introducing scheduled learning and
curiosity, the new framework leads to a significant improvement over the DDQ
and Deep Q-learning (DQN). Surprisingly, we found that traditional curriculum
learning was not always effective. Specifically, according to the experimental
results, the easy-first and difficult-first strategies are more suitable for
SC-DDQ and DDQ, respectively. To analyze our results, we adopted the entropy of sampled
actions to depict action exploration and found that training strategies with
high entropy in the first stage and low entropy in the last stage lead to
better performance.
| [
{
"created": "Wed, 31 Jan 2024 06:13:28 GMT",
"version": "v1"
},
{
"created": "Mon, 20 May 2024 12:10:04 GMT",
"version": "v2"
}
] | 2024-05-21 | [
[
"Niu",
"Xuecheng",
""
],
[
"Ito",
"Akinori",
""
],
[
"Nose",
"Takashi",
""
]
] |
2402.00089 | Dr Peter J. Bentley | Soo Ling Lim, Peter J Bentley, Fuyuki Ishikawa | SCAPE: Searching Conceptual Architecture Prompts using Evolution | 8 pages | IEEE Congress on Evolutionary Computation (IEEE World Congress on
Computational Intelligence 2024), Yokohama, Japan | null | null | cs.NE cs.AI | http://creativecommons.org/licenses/by/4.0/ | Conceptual architecture involves a highly creative exploration of novel
ideas, often taken from other disciplines as architects consider radical new
forms, materials, textures and colors for buildings. While today's generative
AI systems can produce remarkable results, they lack the creativity
demonstrated for decades by evolutionary algorithms. SCAPE, our proposed tool,
combines evolutionary search with generative AI, enabling users to explore
creative and good quality designs inspired by their initial input through a
simple point and click interface. SCAPE injects randomness into generative AI,
and enables memory, making use of the built-in language skills of GPT-4 to vary
prompts via text-based mutation and crossover. We demonstrate that compared to
DALL-E 3, SCAPE enables a 67% improvement in image novelty, plus improvements
in quality and effectiveness of use; we show that in just three iterations
SCAPE has a 24% image novelty increase enabling effective exploration, plus
optimization of images by users. We use more than 20 independent architects to
assess SCAPE, who provide markedly positive feedback.
| [
{
"created": "Wed, 31 Jan 2024 10:25:45 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Apr 2024 10:05:33 GMT",
"version": "v2"
}
] | 2024-04-03 | [
[
"Lim",
"Soo Ling",
""
],
[
"Bentley",
"Peter J",
""
],
[
"Ishikawa",
"Fuyuki",
""
]
] |
2402.00312 | Trond Arne Undheim | Trond Arne Undheim | The whack-a-mole governance challenge for AI-enabled synthetic biology:
literature review and emerging frameworks | null | Front. Bioeng. Biotechnol. 12:1359768. | 10.3389/fbioe.2024.1359768 | null | q-bio.OT cs.AI | http://creativecommons.org/licenses/by/4.0/ | AI-enabled synthetic biology has tremendous potential but also significantly
increases biorisks and brings about a new set of dual-use concerns. The picture
is complicated given the vast innovations envisioned to emerge by combining
emerging technologies, as AI-enabled synthetic biology potentially scales up
bioengineering into industrial biomanufacturing. However, the literature review
indicates that goals such as maintaining a reasonable scope for innovation, or
more ambitiously to foster a huge bioeconomy don't necessarily contrast with
biosafety, but need to go hand in hand. This paper presents a literature review
of the issues and describes emerging frameworks for policy and practice that
traverse the options of command-and-control, stewardship, bottom-up, and
laissez-faire governance. How to achieve early warning systems that enable
prevention and mitigation of future AI-enabled biohazards from the lab, from
deliberate misuse, or from the public realm, will constantly need to evolve,
and adaptive, interactive approaches should emerge. Although biorisk is subject
to an established governance regime, and scientists generally adhere to
biosafety protocols, even experimental, but legitimate use by scientists could
lead to unexpected developments. Recent advances in chatbots enabled by
generative AI have revived fears that advanced biological insight can more
easily get into the hands of malignant individuals or organizations. Given
these sets of issues, society needs to rethink how AI-enabled synthetic biology
should be governed. The suggested way to visualize the challenge at hand is
whack-a-mole governance, although the emerging solutions are perhaps not so
different either.
| [
{
"created": "Thu, 1 Feb 2024 03:53:13 GMT",
"version": "v1"
}
] | 2024-03-08 | [
[
"Undheim",
"Trond Arne",
""
]
] |
2402.00491 | Aditya Bhattacharya | Aditya Bhattacharya, Simone Stumpf, Lucija Gosak, Gregor Stiglic,
Katrien Verbert | EXMOS: Explanatory Model Steering Through Multifaceted Explanations and
Data Configurations | This is a pre-print version only for early release. Please view the
conference published version from ACM CHI 2024 to get the latest version of
the paper | Proceedings of the CHI Conference on Human Factors in Computing
Systems (CHI '24), May 11--16, 2024, Honolulu, HI, USA | 10.1145/3613904.3642106 | null | cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | Explanations in interactive machine-learning systems facilitate debugging and
improving prediction models. However, the effectiveness of various global
model-centric and data-centric explanations in aiding domain experts to detect
and resolve potential data issues for model improvement remains unexplored.
This research investigates the influence of data-centric and model-centric
global explanations in systems that support healthcare experts in optimising
models through automated and manual data configurations. We conducted
quantitative (n=70) and qualitative (n=30) studies with healthcare experts to
explore the impact of different explanations on trust, understandability and
model improvement. Our results reveal the insufficiency of global model-centric
explanations for guiding users during data configuration. Although data-centric
explanations enhanced understanding of post-configuration system changes, a
hybrid fusion of both explanation types demonstrated the highest effectiveness.
Based on our study results, we also present design implications for effective
explanation-driven interactive machine-learning systems.
| [
{
"created": "Thu, 1 Feb 2024 10:57:00 GMT",
"version": "v1"
}
] | 2024-02-02 | [
[
"Bhattacharya",
"Aditya",
""
],
[
"Stumpf",
"Simone",
""
],
[
"Gosak",
"Lucija",
""
],
[
"Stiglic",
"Gregor",
""
],
[
"Verbert",
"Katrien",
""
]
] |
2402.00525 | Lukas Radl | Lukas Radl, Michael Steiner, Mathias Parger, Alexander Weinrauch,
Bernhard Kerbl, Markus Steinberger | StopThePop: Sorted Gaussian Splatting for View-Consistent Real-time
Rendering | SIGGRAPH 2024 (Journal Track); Project Page:
https://r4dl.github.io/StopThePop/ | ACM Transactions on Graphics, volume 43(4), July 2024 | null | null | cs.GR cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Gaussian Splatting has emerged as a prominent model for constructing 3D
representations from images across diverse domains. However, the efficiency of
the 3D Gaussian Splatting rendering pipeline relies on several simplifications.
Notably, reducing Gaussians to 2D splats with a single view-space depth
introduces popping and blending artifacts during view rotation. Addressing this
issue requires accurate per-pixel depth computation, yet a full per-pixel sort
proves excessively costly compared to a global sort operation. In this paper,
we present a novel hierarchical rasterization approach that systematically
resorts and culls splats with minimal processing overhead. Our software
rasterizer effectively eliminates popping artifacts and view inconsistencies,
as demonstrated through both quantitative and qualitative measurements.
Simultaneously, our method mitigates the potential for cheating view-dependent
effects with popping, ensuring a more authentic representation. Despite the
elimination of cheating, our approach achieves comparable quantitative results
for test images, while increasing the consistency for novel view synthesis in
motion. Due to its design, our hierarchical approach is only 4% slower on
average than the original Gaussian Splatting. Notably, enforcing consistency
enables a reduction in the number of Gaussians by approximately half with
nearly identical quality and view-consistency. Consequently, rendering
performance is nearly doubled, making our approach 1.6x faster than the
original Gaussian Splatting, with a 50% reduction in memory requirements.
| [
{
"created": "Thu, 1 Feb 2024 11:46:44 GMT",
"version": "v1"
},
{
"created": "Fri, 24 May 2024 13:59:17 GMT",
"version": "v2"
},
{
"created": "Wed, 9 Oct 2024 12:57:43 GMT",
"version": "v3"
}
] | 2024-10-10 | [
[
"Radl",
"Lukas",
""
],
[
"Steiner",
"Michael",
""
],
[
"Parger",
"Mathias",
""
],
[
"Weinrauch",
"Alexander",
""
],
[
"Kerbl",
"Bernhard",
""
],
[
"Steinberger",
"Markus",
""
]
] |
2402.00593 | Ariadna Jim\'enez-Partinen | Ariadna Jim\'enez-Partinen, Karl Thurnhofer-Hemsi, Esteban J. Palomo,
Jorge Rodr\'iguez-Capit\'an, Ana I. Molina-Ramos | Coronary Artery Disease Classification with Different Lesion Degree
Ranges based on Deep Learning | null | IEEE Access, vol. 12, pp. 69229-69239, 2024 | 10.1109/ACCESS.2024.3401465 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Invasive Coronary Angiography (ICA) images are considered the gold standard
for assessing the state of the coronary arteries. Deep learning classification
methods are widely used and well-developed in different areas where medical
imaging evaluation has an essential impact due to the development of
computer-aided diagnosis systems that can support physicians in their clinical
procedures. In this paper, a new performance analysis of deep learning methods
for binary ICA classification with different lesion degrees is reported. To
reach this goal, an annotated dataset of ICA images that contains the ground
truth, the location of lesions and seven possible severity degrees ranging
between 0% and 100% was employed. The ICA images were divided into 'lesion' or
'non-lesion' patches. We aim to study how binary classification performance is
affected by the different lesion degrees considered in the positive class.
Therefore, five known convolutional neural network architectures were trained
with different input images where different lesion degree ranges were gradually
incorporated until considering the seven lesion degrees. Besides, four types of
experiments with and without data augmentation were designed, whose F-measure
and Area Under Curve (AUC) were computed. Reported results achieved an
F-measure and AUC of 92.7% and 98.1%, respectively. However, lesion
classification is highly affected by the degree of the lesion intended to
classify, with 15% less accuracy when <99% lesion patches are present.
| [
{
"created": "Thu, 1 Feb 2024 13:43:33 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Feb 2024 15:45:53 GMT",
"version": "v2"
}
] | 2024-06-25 | [
[
"Jiménez-Partinen",
"Ariadna",
""
],
[
"Thurnhofer-Hemsi",
"Karl",
""
],
[
"Palomo",
"Esteban J.",
""
],
[
"Rodríguez-Capitán",
"Jorge",
""
],
[
"Molina-Ramos",
"Ana I.",
""
]
] |
2402.00676 | Raul Fernandez | Raul Fernandez-Fernandez, Juan G. Victores, Carlos Balaguer | Deep Robot Sketching: An application of Deep Q-Learning Networks for
human-like sketching | null | Cognitive Systems Research, Volume 81, September 2023, pages 57 to
63 | 10.1016/j.cogsys.2023.05.004 | null | cs.RO cs.AI cs.CV cs.LG cs.NE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The current success of Reinforcement Learning algorithms in complex
environments has inspired many recent theoretical approaches to
cognitive science. Artistic environments are studied within the cognitive
science community as rich, natural, multi-sensory, multi-cultural environments.
In this work, we propose the introduction of Reinforcement Learning for
improving the control of artistic robot applications. Deep Q-learning Neural
Networks (DQN) is one of the most successful algorithms for the implementation
of Reinforcement Learning in robotics. DQN methods generate complex control
policies for the execution of complex robot applications in a wide set of
environments. Current art painting robot applications use simple control laws
that limit the adaptability of the frameworks to a set of simple environments.
In this work, the introduction of DQN within an art painting robot application
is proposed. The goal is to study how the introduction of a complex control
policy impacts the performance of a basic art painting robot application. The
main expected contribution of this work is to serve as a first baseline for
future works introducing DQN methods for complex art painting robot frameworks.
Experiments consist of real world executions of human drawn sketches using the
DQN generated policy and TEO, the humanoid robot. Results are compared in terms
of similarity and obtained reward with respect to the reference inputs.
| [
{
"created": "Thu, 1 Feb 2024 15:37:23 GMT",
"version": "v1"
}
] | 2024-02-02 | [
[
"Fernandez-Fernandez",
"Raul",
""
],
[
"Victores",
"Juan G.",
""
],
[
"Balaguer",
"Carlos",
""
]
] |
2402.00677 | Raul Fernandez | Raul Fernandez-Fernandez, Juan G. Victores, Jennifer J. Gago, David
Estevez, Carlos Balaguer | Neural Policy Style Transfer | null | Cognitive Systems Research, Volume 72, March 2022, Pages 23 to 32 | 10.1016/j.cogsys.2021.11.003 | null | cs.RO cs.AI cs.LG cs.NE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Style Transfer has been proposed in a number of fields: fine arts, natural
language processing, and fixed trajectories. We scale this concept up to
control policies within a Deep Reinforcement Learning infrastructure. Each
network is trained to maximize the expected reward, which typically encodes the
goal of an action, and can be described as the content. The expressive power of
deep neural networks enables encoding a secondary task, which can be described
as the style. The Neural Policy Style Transfer (NPST) algorithm is proposed to
transfer the style of one policy to another, while maintaining the content of
the latter. Different policies are defined via Deep Q-Network architectures.
These models are trained using demonstrations through Inverse Reinforcement
Learning. Two different sets of user demonstrations are performed, one for
content and the other for style. Different styles are encoded as defined by user
demonstrations. The generated policy is the result of feeding a content policy
and a style policy to the NPST algorithm. Experiments are performed in a
catch-ball game inspired by the Deep Reinforcement Learning classical Atari
games; and a real-world painting scenario with a full-sized humanoid robot,
based on previous works of the authors. The implementation of three different
Q-Network architectures (Shallow, Deep and Deep Recurrent Q-Network) to encode
the policies within the NPST framework is proposed and the results obtained in
the experiments with each of these architectures are compared.
| [
{
"created": "Thu, 1 Feb 2024 15:37:42 GMT",
"version": "v1"
}
] | 2024-02-02 | [
[
"Fernandez-Fernandez",
"Raul",
""
],
[
"Victores",
"Juan G.",
""
],
[
"Gago",
"Jennifer J.",
""
],
[
"Estevez",
"David",
""
],
[
"Balaguer",
"Carlos",
""
]
] |
2402.00678 | Raul Fernandez | Raul Fernandez-Fernandez, Juan G. Victores, David Estevez, and Carlos
Balaguer | Real Evaluations Tractability using Continuous Goal-Directed Actions in
Smart City Applications | null | Sensors, Volume 18, Issue 11, 2018 | 10.3390/s18113818 | null | cs.RO cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | One of the most important challenges of Smart City Applications is to adapt
the system to interact with non-expert users. Robot imitation frameworks aim to
simplify and reduce times of robot programming by allowing users to program
directly through demonstrations. In classical frameworks, actions are modeled
using joint or Cartesian space trajectories. Other features, such as visual
ones, are not always well represented with these pure geometrical approaches.
Continuous Goal-Directed Actions (CGDA) is an alternative to these methods, as
it encodes actions as changes of any feature that can be extracted from the
environment. As a consequence of this, the robot joint trajectories for
execution must be fully computed to comply with this feature-agnostic encoding.
This is achieved using Evolutionary Algorithms (EA), which usually requires too
many evaluations to perform this evolution step in the actual robot. Current
strategies involve performing evaluations in a simulation, transferring the
final joint trajectory to the actual robot. Smart City applications involve
working in highly dynamic and complex environments, where having a precise
model is not always achievable. Our goal is to study the tractability of
performing these evaluations directly in a real-world scenario. Two different
approaches to reduce the number of evaluations using EA are proposed and
compared. In the first approach, Particle Swarm Optimization (PSO)-based
methods have been studied and compared within CGDA: naive PSO, Fitness
Inheritance PSO (FI-PSO), and Adaptive Fuzzy Fitness Granulation with PSO
(AFFG-PSO). The second approach studied the introduction of geometrical and
velocity constraints within CGDA. The effects of both approaches were analyzed
and compared in the wax and paint actions, two commonly studied CGDA use cases.
Results from this paper depict an important reduction in the number of
evaluations.
| [
{
"created": "Thu, 1 Feb 2024 15:38:21 GMT",
"version": "v1"
}
] | 2024-02-02 | [
[
"Fernandez-Fernandez",
"Raul",
""
],
[
"Victores",
"Juan G.",
""
],
[
"Estevez",
"David",
""
],
[
"Balaguer",
"Carlos",
""
]
] |
2402.00680 | Wei Jiang | Wei Jiang, Junru Li, Kai Zhang, Li Zhang | LVC-LGMC: Joint Local and Global Motion Compensation for Learned Video
Compression | Accepted to ICASSP 2024 (lecture presentation). The first attempt to
use cross attention for bits-free motion estimation and motion compensation | ICASSP (International Conference on Acoustics, Speech, and Signal
Processing) pp. 2955-2959, 2024 | 10.1109/icassp48485.2024.10448081 | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing learned video compression models employ flow net or deformable
convolutional networks (DCN) to estimate motion information. However, the
limited receptive fields of flow net and DCN inherently direct their
attentiveness towards the local contexts. Global contexts, such as large-scale
motions and global correlations among frames, are ignored, presenting a
significant bottleneck for capturing accurate motions. To address this issue,
we propose a joint local and global motion compensation module (LGMC) for
learned video coding. More specifically, we adopt flow net for local motion
compensation. To capture global context, we employ the cross attention in
feature domain for motion compensation. In addition, to avoid the quadratic
complexity of vanilla cross attention, we divide the softmax operations in
attention into two independent softmax operations, leading to linear
complexity. To validate the effectiveness of our proposed LGMC, we integrate it
with DCVC-TCM and obtain learned video compression with joint local and global
motion compensation (LVC-LGMC). Extensive experiments demonstrate that our
LVC-LGMC has significant rate-distortion performance improvements over baseline
DCVC-TCM.
| [
{
"created": "Thu, 1 Feb 2024 15:43:43 GMT",
"version": "v1"
},
{
"created": "Sun, 4 Feb 2024 08:43:28 GMT",
"version": "v2"
},
{
"created": "Mon, 11 Mar 2024 12:41:20 GMT",
"version": "v3"
}
] | 2024-04-09 | [
[
"Jiang",
"Wei",
""
],
[
"Li",
"Junru",
""
],
[
"Zhang",
"Kai",
""
],
[
"Zhang",
"Li",
""
]
] |
2402.00724 | Jan Valo\v{s}ek | Jan Valosek, Theo Mathieu, Raphaelle Schlienger, Olivia S. Kowalczyk,
Julien Cohen-Adad | Automatic Segmentation of the Spinal Cord Nerve Rootlets | null | Imaging Neuroscience, 2 (2024) 1-14 | 10.1162/imag_a_00218 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Precise identification of spinal nerve rootlets is relevant to delineate
spinal levels for the study of functional activity in the spinal cord. The goal
of this study was to develop an automatic method for the semantic segmentation
of spinal nerve rootlets from T2-weighted magnetic resonance imaging (MRI)
scans. Images from two open-access MRI datasets were used to train a 3D
multi-class convolutional neural network using an active learning approach to
segment C2-C8 dorsal nerve rootlets. Each output class corresponds to a spinal
level. The method was tested on 3T T2-weighted images from datasets unseen
during training to assess inter-site, inter-session, and inter-resolution
variability. The test Dice score was 0.67 ± 0.16 (mean ± standard deviation
across testing images and rootlet levels), suggesting good performance. The
method also demonstrated low inter-vendor and inter-site variability
(coefficient of variation ≤ 1.41%), as well as low inter-session variability
(coefficient of variation ≤ 1.30%), indicating stable predictions across
different MRI vendors, sites, and sessions. The proposed methodology is
open-source and readily available in the Spinal Cord Toolbox (SCT) v6.2 and
higher.
| [
{
"created": "Thu, 1 Feb 2024 16:14:54 GMT",
"version": "v1"
},
{
"created": "Wed, 1 May 2024 05:46:56 GMT",
"version": "v2"
}
] | 2024-07-26 | [
[
"Valosek",
"Jan",
""
],
[
"Mathieu",
"Theo",
""
],
[
"Schlienger",
"Raphaelle",
""
],
[
"Kowalczyk",
"Olivia S.",
""
],
[
"Cohen-Adad",
"Julien",
""
]
] |
2402.00856 | Haozhe Ji | Haozhe Ji, Cheng Lu, Yilin Niu, Pei Ke, Hongning Wang, Jun Zhu, Jie
Tang, Minlie Huang | Towards Efficient Exact Optimization of Language Model Alignment | 24 pages, 9 figures | Forty-first International Conference on Machine Learning (ICML
2024) | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The alignment of language models with human preferences is vital for their
application in real-world tasks. The problem is formulated as optimizing the
model's policy to maximize the expected reward that reflects human preferences
with minimal deviation from the initial policy. While considered a
straightforward solution, reinforcement learning (RL) suffers from high
variance in policy updates, which impedes efficient policy improvement.
Recently, direct preference optimization (DPO) was proposed to directly
optimize the policy from preference data. However, we show that DPO derived
based on the optimal solution of the problem leads to a compromised
mean-seeking approximation of the optimal solution in practice. In this paper,
we propose efficient exact optimization (EXO) of the alignment objective. EXO
is guaranteed to optimize in the same direction as RL algorithms asymptotically
for arbitrary policy parametrization. This leads to the same mode-seeking
solution, while enabling efficient optimization by circumventing the
complexities of RL. We also compare our method to DPO with both theoretical and
empirical analyses, and further demonstrate the advantages of our method over
existing approaches on realistic human preference data. Code is available at
https://github.com/haozheji/exact-optimization.
| [
{
"created": "Thu, 1 Feb 2024 18:51:54 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Feb 2024 15:50:10 GMT",
"version": "v2"
},
{
"created": "Fri, 23 Feb 2024 16:19:22 GMT",
"version": "v3"
},
{
"created": "Wed, 5 Jun 2024 08:15:12 GMT",
"version": "v4"
}
] | 2024-06-06 | [
[
"Ji",
"Haozhe",
""
],
[
"Lu",
"Cheng",
""
],
[
"Niu",
"Yilin",
""
],
[
"Ke",
"Pei",
""
],
[
"Wang",
"Hongning",
""
],
[
"Zhu",
"Jun",
""
],
[
"Tang",
"Jie",
""
],
[
"Huang",
"Minlie",
""
]
] |
2402.00994 | Kirolos Ataallah | Kirolos Attallah, Girgis Zaky, Nourhan Abdelrhim, Kyrillos Botros,
Amjad Dife, and Nermin Negied | A Cost-Efficient Approach for Creating Virtual Fitting Room using
Generative Adversarial Networks (GANs) | null | International Journal of Advanced Computer Science and
Applications (IJACSA), Volume 15 Issue 1, 2024 | 10.14569/IJACSA.2024.0150132 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Customers all over the world want to see whether clothes fit them
before purchasing. Therefore, customers by nature prefer brick-and-mortar
clothes shopping so they can try on products before purchasing them. But after
the COVID-19 pandemic, many sellers either shifted to online shopping or
closed their fitting rooms which made the shopping process hesitant and
doubtful. The fact that the clothes may not be suitable for their buyers after
purchase led us to think about using new AI technologies to create an online
platform or a virtual fitting room (VFR) in the form of a mobile application
and a deployed model using a webpage that can later be embedded into any online
store, where they can try on any number of clothing items without physically trying
them. Besides, it will save much searching time for their needs. Furthermore,
it will reduce the crowding and headache in the physical shops by applying the
same technology using a special type of mirror that will enable customers to
try on faster. On the other hand, from business owners' perspective, this
project will highly increase their online sales, besides, it will save the
quality of the products by avoiding physical trials issues. The main approach
used in this work is applying Generative Adversarial Networks (GANs) combined
with image processing techniques to generate one output image from two input
images which are the person image and the cloth image. This work achieved
results that outperformed the state-of-the-art approaches found in the literature.
| [
{
"created": "Thu, 1 Feb 2024 20:18:06 GMT",
"version": "v1"
}
] | 2024-02-05 | [
[
"Attallah",
"Kirolos",
""
],
[
"Zaky",
"Girgis",
""
],
[
"Abdelrhim",
"Nourhan",
""
],
[
"Botros",
"Kyrillos",
""
],
[
"Dife",
"Amjad",
""
],
[
"Negied",
"Nermin",
""
]
] |
2402.01018 | Weijie Xu | Weijie Xu, Zicheng Huang, Wenxiang Hu, Xi Fang, Rajesh Kumar
Cherukuri, Naumaan Nayyar, Lorenzo Malandri, Srinivasan H. Sengamedu | HR-MultiWOZ: A Task Oriented Dialogue (TOD) Dataset for HR LLM Agent | 13 pages, 9 figures | EACL 2024 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in Large Language Models (LLMs) have been reshaping
Natural Language Processing (NLP) tasks in several domains. Their use in the
field of Human Resources (HR) still has room for expansion and could be
beneficial for several time-consuming tasks. Examples such as time-off
submissions, medical claims filing, and access requests are noteworthy, but
they are by no means the sole instances. However, the aforementioned
developments must grapple with the pivotal challenge of constructing a
high-quality training dataset. On the one hand, most conversation datasets
solve problems for customers, not employees. On the other hand, gathering
conversations with HR could raise privacy concerns. To address this, we
introduce HR-MultiWOZ, a fully-labeled dataset of 550 conversations spanning
10 HR domains to evaluate LLM agents. Our work has the following contributions: (1) It
is the first labeled open-sourced conversation dataset in the HR domain for NLP
research. (2) It provides a detailed recipe for the data generation procedure
along with data analysis and human evaluations. The data generation pipeline is
transferable and can be easily adapted for labeled conversation data generation
in other domains. (3) The proposed data-collection pipeline is mostly based on
LLMs with minimal human involvement for annotation, which is time and
cost-efficient.
| [
{
"created": "Thu, 1 Feb 2024 21:10:44 GMT",
"version": "v1"
}
] | 2024-02-05 | [
[
"Xu",
"Weijie",
""
],
[
"Huang",
"Zicheng",
""
],
[
"Hu",
"Wenxiang",
""
],
[
"Fang",
"Xi",
""
],
[
"Cherukuri",
"Rajesh Kumar",
""
],
[
"Nayyar",
"Naumaan",
""
],
[
"Malandri",
"Lorenzo",
""
],
[
"Sengamedu",
"Srinivasan H.",
""
]
] |
2402.01182 | Meishan Zhang | Meishan Zhang, Bin Wang, Hao Fei, Min Zhang | In-Context Learning for Few-Shot Nested Named Entity Recognition | 5 figures | ICASSP 2024 | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In nested named entity recognition (NER), entities are nested within each
other, which requires more data annotation. This leads to the
development of few-shot nested NER, where the prevalence of pretrained language
models with in-context learning (ICL) offers promising solutions. In this work,
we introduce an effective and innovative ICL framework for the setting of
few-shot nested NER. We improve the ICL prompt by devising a novel example
demonstration selection mechanism, EnDe retriever. In EnDe retriever, we employ
contrastive learning to perform three types of representation learning, in
terms of semantic similarity, boundary similarity, and label similarity, to
generate high-quality demonstration examples. Extensive experiments over three
nested NER and four flat NER datasets demonstrate the efficacy of our system.
| [
{
"created": "Fri, 2 Feb 2024 06:57:53 GMT",
"version": "v1"
}
] | 2024-02-05 | [
[
"Zhang",
"Meishan",
""
],
[
"Wang",
"Bin",
""
],
[
"Fei",
"Hao",
""
],
[
"Zhang",
"Min",
""
]
] |
2402.01195 | Henrik Schopmans | Henrik Schopmans, Pascal Friederich | Conditional Normalizing Flows for Active Learning of Coarse-Grained
Molecular Representations | null | Proceedings of the 41st International Conference on Machine
Learning (ICML 2024), PMLR 235:43804-43827, 2024 | null | null | cs.LG cs.AI physics.chem-ph | http://creativecommons.org/licenses/by/4.0/ | Efficient sampling of the Boltzmann distribution of molecular systems is a
long-standing challenge. Recently, instead of generating long molecular
dynamics simulations, generative machine learning methods such as normalizing
flows have been used to learn the Boltzmann distribution directly, without
samples. However, this approach is susceptible to mode collapse and thus often
does not explore the full configurational space. In this work, we address this
challenge by separating the problem into two levels, the fine-grained and
coarse-grained degrees of freedom. A normalizing flow conditioned on the
coarse-grained space yields a probabilistic connection between the two levels.
To explore the configurational space, we employ coarse-grained simulations with
active learning which allows us to update the flow and make all-atom potential
energy evaluations only when necessary. Using alanine dipeptide as an example,
we show that our methods obtain a speedup over molecular dynamics simulations
of approximately 15.9 to 216.2, compared to the speedup of 4.5 achieved by the
current state-of-the-art machine learning approach.
| [
{
"created": "Fri, 2 Feb 2024 07:44:26 GMT",
"version": "v1"
},
{
"created": "Fri, 24 May 2024 12:13:33 GMT",
"version": "v2"
}
] | 2024-08-06 | [
[
"Schopmans",
"Henrik",
""
],
[
"Friederich",
"Pascal",
""
]
] |
2402.01219 | Roberto Natella | Roberto Natella, Pietro Liguori, Cristina Improta, Bojan Cukic,
Domenico Cotroneo | AI Code Generators for Security: Friend or Foe? | Dataset available at: https://github.com/dessertlab/violent-python | IEEE Security & Privacy, Early Access, February 2024 | 10.1109/MSEC.2024.3355713 | null | cs.CR cs.AI cs.SE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recent advances of artificial intelligence (AI) code generators are opening
new opportunities in software security research, including misuse by malicious
actors. We review use cases for AI code generators for security and introduce
an evaluation benchmark.
| [
{
"created": "Fri, 2 Feb 2024 08:41:15 GMT",
"version": "v1"
}
] | 2024-02-05 | [
[
"Natella",
"Roberto",
""
],
[
"Liguori",
"Pietro",
""
],
[
"Improta",
"Cristina",
""
],
[
"Cukic",
"Bojan",
""
],
[
"Cotroneo",
"Domenico",
""
]
] |
2402.01393 | Carmen Martin-Turrero | Carmen Martin-Turrero, Maxence Bouvier, Manuel Breitenstein, Pietro
Zanuttigh, Vincent Parret | ALERT-Transformer: Bridging Asynchronous and Synchronous Machine
Learning for Real-Time Event-based Spatio-Temporal Data | Originally published in the Proceedings of Machine Learning Research
ICML 2024 | Proceedings of the 41st International Conference on Machine
Learning (2024), in Proceedings of Machine Learning Research 235:48837-48854 | null | null | cs.CV cs.LG cs.NE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We seek to enable classic processing of continuous ultra-sparse
spatiotemporal data generated by event-based sensors with dense machine
learning models. We propose a novel hybrid pipeline composed of asynchronous
sensing and synchronous processing that combines several ideas: (1) an
embedding based on PointNet models -- the ALERT module -- that can continuously
integrate new and dismiss old events thanks to a leakage mechanism, (2) a
flexible readout of the embedded data that allows feeding any downstream model
with always up-to-date features at any sampling rate, (3) exploiting the input
sparsity in a patch-based approach inspired by Vision Transformer to optimize
the efficiency of the method. These embeddings are then processed by a
transformer model trained for object and gesture recognition. Using this
approach, we achieve performances at the state-of-the-art with a lower latency
than competitors. We also demonstrate that our asynchronous model can operate
at any desired sampling rate.
| [
{
"created": "Fri, 2 Feb 2024 13:17:19 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Feb 2024 08:09:17 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Jul 2024 11:20:47 GMT",
"version": "v3"
}
] | 2024-07-31 | [
[
"Martin-Turrero",
"Carmen",
""
],
[
"Bouvier",
"Maxence",
""
],
[
"Breitenstein",
"Manuel",
""
],
[
"Zanuttigh",
"Pietro",
""
],
[
"Parret",
"Vincent",
""
]
] |
2402.01461 | Bruno Berenguel-Baeta | Bruno Berenguel-Baeta, Antoine N. Andre, Guillaume Caron, Jesus
Bermudez-Cameo, Jose J. Guerrero | Visual Gyroscope: Combination of Deep Learning Features and Direct
Alignment for Panoramic Stabilization | null | IEEE/CVF Conference on Computer Vision and Pattern Recognition pp.
6444-6447, 2023 | 10.1109/CVPRW59228.2023.00685 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this article we present a visual gyroscope based on equirectangular
panoramas. We propose a new pipeline where we take advantage of combining three
different methods to obtain a robust and accurate estimation of the attitude of
the camera. We quantitatively and qualitatively validate our method on two
image sequences taken with a $360^\circ$ dual-fisheye camera mounted on
different aerial vehicles.
| [
{
"created": "Fri, 2 Feb 2024 14:52:24 GMT",
"version": "v1"
}
] | 2024-02-05 | [
[
"Berenguel-Baeta",
"Bruno",
""
],
[
"Andre",
"Antoine N.",
""
],
[
"Caron",
"Guillaume",
""
],
[
"Bermudez-Cameo",
"Jesus",
""
],
[
"Guerrero",
"Jose J.",
""
]
] |
2402.01466 | Bruno Berenguel-Baeta | Bruno Berenguel-Baeta, Jesus Bermudez-Cameo, Jose J. Guerrero | Scaled 360 layouts: Revisiting non-central panoramas | arXiv admin note: substantial text overlap with arXiv:2401.17058 | In Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (pp. 3702-3705) 2021 | 10.1109/CVPRW53098.2021.00410 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | From a non-central panorama, 3D lines can be recovered by geometric
reasoning. However, their sensitivity to noise and the complex geometric
modeling required have led to these panoramas being little investigated. In
this work we present a novel approach for 3D layout recovery of indoor
environments using single non-central panoramas. We obtain the boundaries of
the structural lines of the room from a non-central panorama using deep
learning and exploit the properties of non-central projection systems in a new
geometrical processing to recover the scaled layout. We solve the problem for
Manhattan environments, handling occlusions, and also for Atlanta environments
in a unified method. The experiments performed improve on the state-of-the-art
methods for 3D layout recovery from a single panorama. Our approach is the
first work using deep learning with non-central panoramas and recovering the
scale of single panorama layouts.
| [
{
"created": "Fri, 2 Feb 2024 14:55:36 GMT",
"version": "v1"
}
] | 2024-02-05 | [
[
"Berenguel-Baeta",
"Bruno",
""
],
[
"Bermudez-Cameo",
"Jesus",
""
],
[
"Guerrero",
"Jose J.",
""
]
] |
2402.01472 | Pietro Melzi | Pietro Melzi and Christian Rathgeb and Ruben Tolosana and Ruben
Vera-Rodriguez and Aythami Morales and Dominik Lawatsch and Florian Domin and
Maxim Schaubert | Synthetic Data for the Mitigation of Demographic Biases in Face
Recognition | 8 pages, 3 figures | Proceedings of the International Joint Conference on Biometrics
2023, special session on "Synthetic Data in Biometrics" | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This study investigates the possibility of mitigating the demographic biases
that affect face recognition technologies through the use of synthetic data.
Demographic biases have the potential to impact individuals from specific
demographic groups, and can be identified by observing disparate performance of
face recognition systems across demographic groups. They primarily arise from
the unequal representations of demographic groups in the training data. In
recent times, synthetic data have emerged as a solution to some problems that
affect face recognition systems. In particular, during the generation process
it is possible to specify the desired demographic and facial attributes of
images, in order to control the demographic distribution of the synthesized
dataset, and fairly represent the different demographic groups. We propose to
fine-tune with synthetic data existing face recognition systems that present
some demographic biases. We use synthetic datasets generated with GANDiffFace,
a novel framework able to synthesize datasets for face recognition with
controllable demographic distribution and realistic intra-class variations. We
consider multiple datasets representing different demographic groups for
training and evaluation. Also, we fine-tune different face recognition systems,
and evaluate their demographic fairness with different metrics. Our results
support the proposed approach and the use of synthetic data to mitigate
demographic biases in face recognition.
| [
{
"created": "Fri, 2 Feb 2024 14:57:42 GMT",
"version": "v1"
}
] | 2024-02-05 | [
[
"Melzi",
"Pietro",
""
],
[
"Rathgeb",
"Christian",
""
],
[
"Tolosana",
"Ruben",
""
],
[
"Vera-Rodriguez",
"Ruben",
""
],
[
"Morales",
"Aythami",
""
],
[
"Lawatsch",
"Dominik",
""
],
[
"Domin",
"Florian",
""
],
[
"Schaubert",
"Maxim",
""
]
] |
2402.01510 | Pratik K. Biswas | Pratik K. Biswas | A Hybrid Strategy for Chat Transcript Summarization | Journal Paper (13 Pages, 8 Figures, 4 Tables). arXiv admin note: text
overlap with arXiv:2103.10599 | IEEE Access, October 2024 | 10.1109/ACCESS.2024.3473968 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text summarization is the process of condensing a piece of text to fewer
sentences, while still preserving its content. Chat transcript, in this
context, is a textual copy of a digital or online conversation between a
customer (caller) and agent(s). This paper presents an indigenously (locally)
developed hybrid method that first combines extractive and abstractive
summarization techniques in compressing ill-punctuated or un-punctuated chat
transcripts to produce more readable punctuated summaries and then optimizes
the overall quality of summarization through reinforcement learning. Extensive
testing, evaluations, comparisons, and validation have demonstrated the
efficacy of this approach for large-scale deployment of chat transcript
summarization, in the absence of manually generated reference (annotated)
summaries.
| [
{
"created": "Fri, 2 Feb 2024 15:44:28 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Jul 2024 03:57:33 GMT",
"version": "v2"
}
] | 2024-10-14 | [
[
"Biswas",
"Pratik K.",
""
]
] |
2402.01557 | Nergis Tomen | Nergis Tomen, Silvia L. Pintea, Jan C. van Gemert | Deep Continuous Networks | Presented at ICML 2021 | In International Conference on Machine Learning 2021 Jul 1 (pp.
10324-10335). PMLR | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | CNNs and computational models of biological vision share some fundamental
principles, which opened new avenues of research. However, fruitful cross-field
research is hampered by conventional CNN architectures being based on spatially
and depthwise discrete representations, which cannot accommodate certain
aspects of biological complexity such as continuously varying receptive field
sizes and dynamics of neuronal responses. Here we propose deep continuous
networks (DCNs), which combine spatially continuous filters, with the
continuous depth framework of neural ODEs. This allows us to learn the spatial
support of the filters during training, as well as model the continuous
evolution of feature maps, linking DCNs closely to biological models. We show
that DCNs are versatile and highly applicable to standard image classification
and reconstruction problems, where they improve parameter and data efficiency,
and allow for meta-parametrization. We illustrate the biological plausibility
of the scale distributions learned by DCNs and explore their performance in a
neuroscientifically inspired pattern completion task. Finally, we investigate
an efficient implementation of DCNs by changing input contrast.
| [
{
"created": "Fri, 2 Feb 2024 16:50:18 GMT",
"version": "v1"
}
] | 2024-02-05 | [
[
"Tomen",
"Nergis",
""
],
[
"Pintea",
"Silvia L.",
""
],
[
"van Gemert",
"Jan C.",
""
]
] |
2402.01654 | Bal\'azs Andr\'as Tolnai | Bal\'azs Andr\'as Tolnai and Zheng Ma and Bo N{\o}rregaard
J{\o}rgensen | A Scoping Review of Energy Load Disaggregation | null | Progress in Artificial Intelligence. EPIA 2023. Lecture Notes in
Computer Science, vol 14116 | 10.1007/978-3-031-49011-8_17 | null | eess.SP cs.AI cs.CY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Energy load disaggregation can contribute to balancing power grids by
enhancing the effectiveness of demand-side management and promoting
electricity-saving behavior through increased consumer awareness. However, the
field currently lacks a comprehensive overview. To address this gap, this paper
conducts a scoping review of load disaggregation domains, data types, and
methods, by assessing 72 full-text journal articles. The findings reveal that
domestic electricity consumption is the most researched area, while others,
such as industrial load disaggregation, are rarely discussed. The majority of
research uses relatively low-frequency data, sampled between 1 and 60 seconds.
A wide variety of methods are used, and artificial neural networks are the most
common, followed by optimization strategies, Hidden Markov Models, and Graph
Signal Processing approaches.
| [
{
"created": "Wed, 10 Jan 2024 09:59:12 GMT",
"version": "v1"
}
] | 2024-02-08 | [
[
"Tolnai",
"Balázs András",
""
],
[
"Ma",
"Zheng",
""
],
[
"Jørgensen",
"Bo Nørregaard",
""
]
] |
2402.01668 | Enrique Yeguas | Enrique Yeguas-Bol\'ivar, Jos\'e M. Alcalde-Llergo, Pilar
Aparicio-Mart\'inez, Juri Taborri, Andrea Zingoni and Sara Pinzi | Determining the Difficulties of Students With Dyslexia via Virtual
Reality and Artificial Intelligence: An Exploratory Analysis | 7 pages, 5 figures, 3 tables, MetroXRAINE 2022 Conference, VRAILEXIA
european project | 2022 IEEE International Conference on Metrology for Extended
Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), Rome,
Italy, 2022, pp. 585-590 | 10.1109/MetroXRAINE54828.2022.9967589 | null | cs.CY cs.AI cs.CV cs.GR cs.HC | http://creativecommons.org/licenses/by/4.0/ | Learning disorders are neurological conditions that affect the brain's
ability to interconnect communication areas. Dyslexic students experience
problems with reading, memorizing, and presenting concepts; however, the
magnitude of these difficulties can be mitigated through both therapies and
the creation of compensatory mechanisms. Several efforts have been made to
mitigate these issues, leading to the creation of digital resources for
students with specific learning disorders attending primary and secondary
education levels. Conversely, a standard approach is still missing in higher
education. The VRAIlexia project has been created to tackle this issue by
proposing two different tools: a mobile application integrating virtual
reality (VR) to collect data quickly and easily, and artificial
intelligence-based (AI) software to analyze the collected data and customize
the supporting methodology
for each student. The first one has been created and is being distributed among
dyslexic students in Higher Education Institutions, for the conduction of
specific psychological and psychometric tests. The second tool applies specific
artificial intelligence algorithms to the data gathered via the application and
other surveys. These AI techniques have allowed us to identify the most
relevant difficulties faced by the students' cohort. Our different models have
obtained around 90\% mean accuracy for predicting the support tools and
learning strategies.
| [
{
"created": "Mon, 15 Jan 2024 20:26:09 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Yeguas-Bolívar",
"Enrique",
""
],
[
"Alcalde-Llergo",
"José M.",
""
],
[
"Aparicio-Martínez",
"Pilar",
""
],
[
"Taborri",
"Juri",
""
],
[
"Zingoni",
"Andrea",
""
],
[
"Pinzi",
"Sara",
""
]
] |
2402.01672 | Sao Mai Nguyen | Louis Annabi (Flowers, U2IS), Sao Mai Nguyen | Prerequisite Structure Discovery in Intelligent Tutoring Systems | null | 2023 IEEE International Conference on Development and Learning
(ICDL), Nov 2023, Macau, China. pp.176-181 | 10.1109/icdl55364.2023.10364416 | null | cs.CY cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the importance of Knowledge Structure (KS) and Knowledge
Tracing (KT) in improving the recommendation of educational content in
intelligent tutoring systems. The KS represents the relations between different
Knowledge Components (KCs), while KT predicts a learner's success based on her
past history. The contribution of this research includes proposing a KT model
that incorporates the KS as a learnable parameter, enabling the discovery of
the underlying KS from learner trajectories. The quality of the uncovered KS is
assessed by using it to recommend content and evaluating the recommendation
algorithm with simulated students.
| [
{
"created": "Thu, 18 Jan 2024 09:01:49 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Annabi",
"Louis",
"",
"Flowers, U2IS"
],
[
"Nguyen",
"Sao Mai",
""
]
] |
2402.01673 | Sascha Ossowski | Jos\'e-Antonio Santos, Alberto Fern\'andez, Mar Moreno-Rebato, Holger
Billhardt, Jos\'e-A. Rodr\'iguez-Garc\'ia, Sascha Ossowski | Legal and ethical implications of applications based on agreement
technologies: the case of auction-based road intersections | null | Artif. Intell. Law 28(4): 385-414 (2020) | 10.1007/s10506-019-09259-8 | null | cs.CY cs.AI | http://creativecommons.org/licenses/by/4.0/ | Agreement Technologies refer to a novel paradigm for the construction of
distributed intelligent systems, where autonomous software agents negotiate to
reach agreements on behalf of their human users. Smart Cities are a key
application domain for Agreement Technologies. While several proofs of concept
and prototypes exist, such systems are still far from ready for being deployed
in the real-world. In this paper we focus on a novel method for managing
elements of smart road infrastructures of the future, namely the case of
auction-based road intersections. We show that, even though the key
technological elements for such methods are already available, there are
multiple non-technical issues that need to be tackled before they can be
applied in practice. For this purpose, we analyse legal and ethical
implications of auction-based road intersections in the context of
international regulations and from the standpoint of the Spanish legislation.
From this exercise, we extract a set of required modifications, of both
technical and legal nature, which need to be addressed so as to pave the way
for the potential real-world deployment of such systems in a future that may
not be too far away.
| [
{
"created": "Thu, 18 Jan 2024 09:12:48 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Santos",
"José-Antonio",
""
],
[
"Fernández",
"Alberto",
""
],
[
"Moreno-Rebato",
"Mar",
""
],
[
"Billhardt",
"Holger",
""
],
[
"Rodríguez-García",
"José-A.",
""
],
[
"Ossowski",
"Sascha",
""
]
] |
2402.01676 | Jennifer Hu | Jennifer Hu, Kyle Mahowald, Gary Lupyan, Anna Ivanova, Roger Levy | Language models align with human judgments on key grammatical
constructions | Published in PNAS at https://www.pnas.org/doi/10.1073/pnas.2400917121
as response to Dentella et al. (2023) | Proceedings of the National Academy of Sciences, 121(36),
e2400917121 (2024) | 10.1073/pnas.2400917121 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Do large language models (LLMs) make human-like linguistic generalizations?
Dentella et al. (2023) ("DGL") prompt several LLMs ("Is the following sentence
grammatically correct in English?") to elicit grammaticality judgments of 80
English sentences, concluding that LLMs demonstrate a "yes-response bias" and a
"failure to distinguish grammatical from ungrammatical sentences". We
re-evaluate LLM performance using well-established practices and find that
DGL's data in fact provide evidence for just how well LLMs capture human
behaviors. Models not only achieve high accuracy overall, but also capture
fine-grained variation in human linguistic judgments.
| [
{
"created": "Fri, 19 Jan 2024 19:36:54 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Aug 2024 14:43:22 GMT",
"version": "v2"
}
] | 2024-09-02 | [
[
"Hu",
"Jennifer",
""
],
[
"Mahowald",
"Kyle",
""
],
[
"Lupyan",
"Gary",
""
],
[
"Ivanova",
"Anna",
""
],
[
"Levy",
"Roger",
""
]
] |
2402.01686 | Liliana Marie Prikler | Liliana Marie Prikler, Franz Wotawa (Graz University of Technology,
Institute for Software Technology) | A Systematic Mapping Study of Digital Twins for Diagnosis in
Transportation | null | 2023 10th International Conference on Dependable Systems and Their
Applications (DSA), Tokyo, Japan, 2023, pp. 431-442 | 10.1109/DSA59317.2023.00058 | null | cs.CY cs.AI cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, digital twins have been proposed and implemented in various
fields with potential applications ranging from prototyping to maintenance.
Going forward, they are to enable numerous efficient and sustainable
technologies, among them autonomous cars. However, despite a large body of
research in many fields, academics have yet to agree on what exactly a digital
twin is -- and as a result, what its capabilities and limitations might be. To
further our understanding, we explore the capabilities of digital twins
concerning diagnosis in the field of transportation. We conduct a systematic
mapping study including digital twins of vehicles and their components, as well
as transportation infrastructure. We discovered that few papers on digital
twins describe any diagnostic process. Furthermore, most existing approaches
appear limited to system monitoring or fault detection. These findings suggest
that we need more research for diagnostic reasoning utilizing digital twins.
| [
{
"created": "Mon, 22 Jan 2024 15:01:37 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Prikler",
"Liliana Marie",
"",
"Graz University of Technology,\n Institute for Software Technology"
],
[
"Wotawa",
"Franz",
"",
"Graz University of Technology,\n Institute for Software Technology"
]
] |
2402.01712 | Hamideh Ghanadian | Hamideh Ghanadian, Isar Nejadgholi, Hussein Al Osman | Socially Aware Synthetic Data Generation for Suicidal Ideation Detection
Using Large Language Models | null | IEEE Access | 10.1109/ACCESS.2024.3358206 | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Suicidal ideation detection is a vital research area that holds great
potential for improving mental health support systems. However, the sensitivity
surrounding suicide-related data poses challenges in accessing large-scale,
annotated datasets necessary for training effective machine learning models. To
address this limitation, we introduce an innovative strategy that leverages the
capabilities of generative AI models, such as ChatGPT, Flan-T5, and Llama, to
create synthetic data for suicidal ideation detection. Our data generation
approach is grounded in social factors extracted from psychology literature and
aims to ensure coverage of essential information related to suicidal ideation.
In our study, we benchmarked against state-of-the-art NLP classification
models, specifically, those centered around the BERT family structures. When
trained on the real-world dataset, UMD, these conventional models tend to yield
F1-scores ranging from 0.75 to 0.87. Our synthetic data-driven method, informed
by social factors, offers consistent F1-scores of 0.82 for both models,
suggesting that the richness of topics in synthetic data can bridge the
performance gap across different model complexities. Most impressively, when we
combined a mere 30% of the UMD dataset with our synthetic data, we witnessed a
substantial increase in performance, achieving an F1-score of 0.88 on the UMD
test set. Such results underscore the cost-effectiveness and potential of our
approach in confronting major challenges in the field, such as data scarcity
and the quest for diversity in data representation.
| [
{
"created": "Thu, 25 Jan 2024 18:25:05 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Ghanadian",
"Hamideh",
""
],
[
"Nejadgholi",
"Isar",
""
],
[
"Osman",
"Hussein Al",
""
]
] |
2402.01714 | Sourav Ghosh | Vibhav Agarwal, Sourav Ghosh, Harichandana BSS, Himanshu Arora, Barath
Raj Kandur Raja | TrICy: Trigger-guided Data-to-text Generation with Intent aware
Attention-Copy | Published in the IEEE/ACM Transactions on Audio, Speech, and Language
Processing. (Sourav Ghosh and Vibhav Agarwal contributed equally to this
work.) | IEEE/ACM Transactions on Audio, Speech, and Language Processing,
vol. 32, pp. 1173-1184, 2024 | 10.1109/TASLP.2024.3353574 | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Data-to-text (D2T) generation is a crucial task in many natural language
understanding (NLU) applications and forms the foundation of task-oriented
dialog systems. In the context of conversational AI solutions that can work
directly with local data on the user's device, architectures utilizing large
pre-trained language models (PLMs) are impractical for on-device deployment due
to a high memory footprint. To this end, we propose TrICy, a novel lightweight
framework for an enhanced D2T task that generates text sequences based on the
intent in context and may further be guided by user-provided triggers. We
leverage an attention-copy mechanism to predict out-of-vocabulary (OOV) words
accurately. Performance analyses on E2E NLG dataset (BLEU: 66.43%, ROUGE-L:
70.14%), WebNLG dataset (BLEU: Seen 64.08%, Unseen 52.35%), and our Custom
dataset related to text messaging applications, showcase our architecture's
effectiveness. Moreover, we show that by leveraging an optional trigger input,
data-to-text generation quality increases significantly and achieves the new
SOTA score of 69.29% BLEU for E2E NLG. Furthermore, our analyses show that
TrICy achieves at least 24% and 3% improvement in BLEU and METEOR respectively
over LLMs like GPT-3, ChatGPT, and Llama 2. We also demonstrate that in some
scenarios, performance improvement due to triggers is observed even when they
are absent in training.
| [
{
"created": "Thu, 25 Jan 2024 20:17:06 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Agarwal",
"Vibhav",
""
],
[
"Ghosh",
"Sourav",
""
],
[
"BSS",
"Harichandana",
""
],
[
"Arora",
"Himanshu",
""
],
[
"Raja",
"Barath Raj Kandur",
""
]
] |
2402.01716 | Hapnes Toba | H. Toba, Y. T. Hernita, M. Ayub, M. C. Wijanto | Bloom-epistemic and sentiment analysis hierarchical classification in
course discussion forums | 11 pages, 7 figures | International Journal of Evaluation and Research in Education 13
(2024) 80-90 | 10.11591/ijere.v13i1.26024 | null | cs.CY cs.CL cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Online discussion forums are widely used for active textual interaction
between lecturers and students, and to see how the students have progressed in
a learning process. The objective of this study is to compare appropriate
machine-learning models to assess sentiments and Bloom's epistemic taxonomy
based on textual comments in educational discussion forums. Our proposed method
is called the hierarchical approach of Bloom-Epistemic and Sentiment Analysis
(BE-Sent). The research methodology consists of three main steps. The first
step is the data collection from the internal discussion forum and YouTube
comments of a Web Programming channel. The next step is text preprocessing to
annotate the text and remove unimportant words. Then, with the successfully
cleaned text dataset, sentiment analysis and epistemic categorization are
performed on each sentence. Sentiment analysis is divided into three
categories: positive, negative, and neutral. Bloom's epistemic taxonomy is
divided into six categories: remembering, understanding, applying,
analyzing, evaluating, and creating. This research has succeeded in producing a
course learning subsystem that assesses opinions based on text reviews of
discussion forums according to the category of sentiment and epistemic
analysis.
| [
{
"created": "Fri, 26 Jan 2024 08:20:13 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Toba",
"H.",
""
],
[
"Hernita",
"Y. T.",
""
],
[
"Ayub",
"M.",
""
],
[
"Wijanto",
"M. C.",
""
]
] |
2402.01720 | Goitom Ybrah Hailu Mr | Goitom Ybrah Hailu, Shishay Welay | Deep Learning Based Amharic Chatbot for FAQs in Universities | null | Machine Learning (cs.LG), V1, 2024 | 10.48550/arXiv.2402.01720 | AksumUniv-CS-2024 | cs.CY cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | University students often spend a considerable amount of time seeking answers
to common questions from administrators or teachers. This can become tedious
for both parties, leading to a need for a solution. In response, this paper
proposes a chatbot model that utilizes natural language processing and deep
learning techniques to answer frequently asked questions (FAQs) in the Amharic
language. Chatbots are computer programs that simulate human conversation
through the use of artificial intelligence (AI), acting as a virtual assistant
to handle questions and other tasks. The proposed chatbot program employs
tokenization, normalization, stop word removal, and stemming to analyze and
categorize Amharic input sentences. Three machine learning algorithms
were used to classify tokens and retrieve appropriate responses: Support Vector
Machine (SVM), Multinomial Na\"ive Bayes, and deep neural networks implemented
through TensorFlow, Keras, and NLTK. The deep learning model achieved the best
results with 91.55% accuracy and a validation loss of 0.3548 using an Adam
optimizer and SoftMax activation function. The chatbot model was integrated
with Facebook Messenger and deployed on a Heroku server for 24-hour
accessibility. The experimental results demonstrate that the chatbot framework
achieved its objectives and effectively addressed challenges such as Amharic
Fidel variation, morphological variation, and lexical gaps. Future research
could explore the integration of Amharic WordNet to narrow the lexical gap and
support more complex questions.
| [
{
"created": "Fri, 26 Jan 2024 18:37:21 GMT",
"version": "v1"
}
] | 2024-02-08 | [
[
"Hailu",
"Goitom Ybrah",
""
],
[
"Welay",
"Shishay",
""
]
] |
2402.01728 | Weimin Fu | Weimin Fu, Shijie Li, Yifang Zhao, Haocheng Ma, Raj Dutta, Xuan Zhang,
Kaichen Yang, Yier Jin, Xiaolong Guo | Hardware Phi-1.5B: A Large Language Model Encodes Hardware Domain
Specific Knowledge | 6 pages, 6 figures | 29th IEEE/ACM Asia and South Pacific Design Automation Conference
(ASP-DAC); 2024 January; Incheon Songdo Convensia, South Korea | null | null | cs.CL cs.AI cs.AR | http://creativecommons.org/licenses/by-sa/4.0/ | In the rapidly evolving semiconductor industry, where research, design,
verification, and manufacturing are intricately linked, the potential of Large
Language Models to revolutionize hardware design and security verification is
immense. The primary challenge, however, lies in the complexity of
hardware-specific issues that are not adequately addressed by the natural language or
software code knowledge typically acquired during the pretraining stage.
Additionally, the scarcity of datasets specific to the hardware domain poses a
significant hurdle in developing a foundational model. Addressing these
challenges, this paper introduces Hardware Phi 1.5B, an innovative large
language model specifically tailored for the hardware domain of the
semiconductor industry. We have developed a specialized, tiered dataset
comprising small, medium, and large subsets and focused our efforts on
pretraining using the medium dataset. This approach harnesses the compact yet
efficient architecture of the Phi 1.5B model. The creation of this first
pretrained, hardware-domain-specific large language model marks a significant
advancement, offering improved performance in hardware design and verification
tasks and illustrating a promising path forward for AI applications in the
semiconductor sector.
| [
{
"created": "Sat, 27 Jan 2024 22:49:43 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Fu",
"Weimin",
""
],
[
"Li",
"Shijie",
""
],
[
"Zhao",
"Yifang",
""
],
[
"Ma",
"Haocheng",
""
],
[
"Dutta",
"Raj",
""
],
[
"Zhang",
"Xuan",
""
],
[
"Yang",
"Kaichen",
""
],
[
"Jin",
"Yier",
""
],
[
"Guo",
"Xiaolong",
""
]
] |
2402.01738 | Yaiza Aragon\'es-Soria | Yaiza Aragon\'es-Soria and Manuel Oriol | C4Q: A Chatbot for Quantum | Paper accepted in the 5th International Workshop on Quantum Software
Engineering (Q-SE 2024) | In Proceedings of the 5th ACM/IEEE International Workshop on
Quantum Software Engineering (Q-SE 2024). Association for Computing
Machinery, New York, NY, USA, 49-52 | 10.1145/3643667.3648222 | null | cs.CL quant-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Quantum computing is a growing field that promises many real-world
applications such as quantum cryptography or quantum finance. The number of
people able to use quantum computing is however still very small. This
limitation comes from the difficulty to understand the concepts and to know how
to start coding. Therefore, there is a need for tools that can assist
non-experts in overcoming this complexity. One possibility would be to use
existing conversational agents. Unfortunately, ChatGPT and other Large-Language
Models produce inaccurate results. This article presents C4Q, a chatbot that
accurately answers basic questions and guides users when trying to code quantum
programs. Contrary to other approaches, C4Q uses a pre-trained large language
model only to discover and classify user requests. It then generates an
accurate answer using its own engine. Thanks to this architectural design, C4Q's
answers are always correct, and thus C4Q can become a support tool that makes
quantum computing more available to non-experts.
| [
{
"created": "Mon, 29 Jan 2024 09:44:45 GMT",
"version": "v1"
}
] | 2024-08-27 | [
[
"Aragonés-Soria",
"Yaiza",
""
],
[
"Oriol",
"Manuel",
""
]
] |
2402.01746 | Liang Zhang | Liang Zhang, Jionghao Lin, Conrad Borchers, Meng Cao, Xiangen Hu | 3DG: A Framework for Using Generative AI for Handling Sparse Learner
Performance Data From Intelligent Tutoring Systems | null | LAK 2024: International Workshop on Generative AI for Learning
Analytics (GenAI-LA) | null | null | cs.CY cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Learning performance data (e.g., quiz scores and attempts) is significant for
understanding learner engagement and knowledge mastery level. However, the
learning performance data collected from Intelligent Tutoring Systems (ITSs)
often suffers from sparsity, impacting the accuracy of learner modeling and
knowledge assessments. To address this, we introduce the 3DG framework
(3-Dimensional tensor for Densification and Generation), a novel approach
combining tensor factorization with advanced generative models, including
Generative Adversarial Network (GAN) and Generative Pre-trained Transformer
(GPT), for enhanced data imputation and augmentation. The framework operates by
first representing the data as a three-dimensional tensor, capturing dimensions
of learners, questions, and attempts. It then densifies the data through tensor
factorization and augments it using Generative AI models, tailored to
individual learning patterns identified via clustering. Applied to data from an
AutoTutor lesson by the Center for the Study of Adult Literacy (CSAL), the 3DG
framework effectively generated scalable, personalized simulations of learning
performance. Comparative analysis revealed GAN's superior reliability over
GPT-4 in this context, underscoring its potential in addressing data sparsity
challenges in ITSs and contributing to the advancement of personalized
educational technology.
| [
{
"created": "Mon, 29 Jan 2024 22:34:01 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Zhang",
"Liang",
""
],
[
"Lin",
"Jionghao",
""
],
[
"Borchers",
"Conrad",
""
],
[
"Cao",
"Meng",
""
],
[
"Hu",
"Xiangen",
""
]
] |
2402.01775 | Rosana Montes | Rosana Montes, Cristina Zuheros, Jeovani M. Morales, Noe Zerme\~no,
Jer\'onimo Duran, Francsico Herrera | Design and consensus content validity of the questionnaire for
b-learning education: A 2-Tuple Fuzzy Linguistic Delphi based Decision
Support Tool | 47 pages, 7 figures | Open Access Volume 147 November 2023 Article number 110755 | 10.1016/j.asoc.2023.110755 | null | cs.CY cs.CL cs.HC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Classic Delphi and Fuzzy Delphi methods are used to test content validity of
data collection tools such as questionnaires. Fuzzy Delphi takes the opinion
issued by judges from a linguistic perspective reducing ambiguity in opinions
by using fuzzy numbers. We propose an extension named 2-Tuple Fuzzy Linguistic
Delphi method to deal with scenarios in which judges show different expertise
degrees by using fuzzy multigranular semantics of the linguistic terms and to
obtain intermediate and final results expressed by 2-tuple linguistic values.
The key idea of our proposal is to validate the full questionnaire by means of
the evaluation of its parts, defining the validity of each item as a Decision
Making problem. Taking the opinion of experts, we measure the degree of
consensus, the degree of consistency, and the linguistic score of each item, in
order to detect those items that affect, positively or negatively, the quality
of the instrument. Considering the real need to evaluate a b-learning
educational experience with a consensual questionnaire, we present a Decision
Making model for questionnaire validation that solves it. Additionally, we
contribute to this consensus reaching problem by developing an online tool
under GPL v3 license. The software visualizes the collective valuations for
each iteration and assists to determine which parts of the questionnaire should
be modified to reach a consensual solution.
| [
{
"created": "Thu, 1 Feb 2024 13:32:18 GMT",
"version": "v1"
}
] | 2024-02-08 | [
[
"Montes",
"Rosana",
""
],
[
"Zuheros",
"Cristina",
""
],
[
"Morales",
"Jeovani M.",
""
],
[
"Zermeño",
"Noe",
""
],
[
"Duran",
"Jerónimo",
""
],
[
"Herrera",
"Francsico",
""
]
] |
2402.01817 | Subbarao Kambhampati | Subbarao Kambhampati, Karthik Valmeekam, Lin Guan, Mudit Verma, Kaya
Stechly, Siddhant Bhambri, Lucas Saldyt, Anil Murthy | LLMs Can't Plan, But Can Help Planning in LLM-Modulo Frameworks | null | Proceedings of the 41 st International Conference on Machine
Learning, Vienna, Austria. PMLR 235, 2024 | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | There is considerable confusion about the role of Large Language Models
(LLMs) in planning and reasoning tasks. On one side are over-optimistic claims
that LLMs can indeed do these tasks with just the right prompting or
self-verification strategies. On the other side are perhaps over-pessimistic
claims that all that LLMs are good for in planning/reasoning tasks is to act as mere
translators of the problem specification from one syntactic format to another,
shipping the problem off to external symbolic solvers. In this position paper,
we take the view that both these extremes are misguided. We argue that
auto-regressive LLMs cannot, by themselves, do planning or self-verification
(which is after all a form of reasoning), and shed some light on the reasons
for misunderstandings in the literature. We will also argue that LLMs should be
viewed as universal approximate knowledge sources that have much more
meaningful roles to play in planning/reasoning tasks beyond simple
front-end/back-end format translators. We present a vision of {\bf LLM-Modulo
Frameworks} that combine the strengths of LLMs with external model-based
verifiers in a tighter bi-directional interaction regime. We will show how the
models driving the external verifiers themselves can be acquired with the help
of LLMs. We will also argue that rather than simply pipelining LLMs and
symbolic components, this LLM-Modulo Framework provides a better neuro-symbolic
approach that offers tighter integration between LLMs and symbolic components,
and allows extending the scope of model-based planning/reasoning regimes
towards more flexible knowledge, problem and preference specifications.
| [
{
"created": "Fri, 2 Feb 2024 14:43:18 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Feb 2024 01:29:37 GMT",
"version": "v2"
},
{
"created": "Wed, 12 Jun 2024 01:13:11 GMT",
"version": "v3"
}
] | 2024-06-13 | [
[
"Kambhampati",
"Subbarao",
""
],
[
"Valmeekam",
"Karthik",
""
],
[
"Guan",
"Lin",
""
],
[
"Verma",
"Mudit",
""
],
[
"Stechly",
"Kaya",
""
],
[
"Bhambri",
"Siddhant",
""
],
[
"Saldyt",
"Lucas",
""
],
[
"Murthy",
"Anil",
""
]
] |
2402.01821 | Akshay Kumar Jagadish | Akshay K. Jagadish, Julian Coda-Forno, Mirko Thalmann, Eric Schulz,
and Marcel Binz | Human-like Category Learning by Injecting Ecological Priors from Large
Language Models into Neural Networks | 27 pages (9 pages of main text, 4 pages of references, and 14 pages
of appendix), 13 figures, and 7 Tables | Proceedings of the 41st International Conference on Machine
Learning, Vienna, Austria. PMLR 235, 2024 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Ecological rationality refers to the notion that humans are rational agents
adapted to their environment. However, testing this theory remains challenging
due to two reasons: the difficulty in defining what tasks are ecologically
valid and building rational models for these tasks. In this work, we
demonstrate that large language models can generate cognitive tasks,
specifically category learning tasks, that match the statistics of real-world
tasks, thereby addressing the first challenge. We tackle the second challenge
by deriving rational agents adapted to these tasks using the framework of
meta-learning, leading to a class of models called ecologically rational
meta-learned inference (ERMI). ERMI quantitatively explains human data better
than seven other cognitive models in two different experiments. It additionally
matches human behavior on a qualitative level: (1) it finds the same tasks
difficult that humans find difficult, (2) it becomes more reliant on an
exemplar-based strategy for assigning categories with learning, and (3) it
generalizes to unseen stimuli in a human-like way. Furthermore, we show that
ERMI's ecologically valid priors allow it to achieve state-of-the-art
performance on the OpenML-CC18 classification benchmark.
| [
{
"created": "Fri, 2 Feb 2024 16:32:04 GMT",
"version": "v1"
},
{
"created": "Tue, 28 May 2024 07:40:53 GMT",
"version": "v2"
}
] | 2024-05-29 | [
[
"Jagadish",
"Akshay K.",
""
],
[
"Coda-Forno",
"Julian",
""
],
[
"Thalmann",
"Mirko",
""
],
[
"Schulz",
"Eric",
""
],
[
"Binz",
"Marcel",
""
]
] |
2402.01828 | Izhak Shafran | Mingqiu Wang, Izhak Shafran, Hagen Soltau, Wei Han, Yuan Cao, Dian Yu,
Laurent El Shafey | Retrieval Augmented End-to-End Spoken Dialog Models | null | Proc. ICASSP 2024 | null | null | cs.CL cs.AI cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | We recently developed SLM, a joint speech and language model, which fuses a
pretrained foundational speech model and a large language model (LLM), while
preserving the in-context learning capability intrinsic to the pretrained LLM.
In this paper, we apply SLM to speech dialog applications where the dialog
states are inferred directly from the audio signal.
Task-oriented dialogs often contain domain-specific entities, i.e.,
restaurants, hotels, train stations, and city names, which are difficult to
recognize yet critical for the downstream applications. Inspired by the
RAG (retrieval-augmented generation) paradigm, we propose a retrieval augmented
SLM (ReSLM) that overcomes this weakness. We first train a speech retriever to
retrieve text entities mentioned in the audio. The retrieved entities are then
added as text inputs to the underlying SLM to bias model predictions. We
evaluated ReSLM on speech MultiWoz task (DSTC-11 challenge), and found that
this retrieval augmentation boosts model performance, achieving joint goal
accuracy (38.6% vs 32.7%), slot error rate (20.6% vs 24.8%) and ASR word error
rate (5.5% vs 6.7%). While demonstrated on dialog state tracking, our approach
is broadly applicable to other speech tasks requiring contextual information or
domain-specific entities, such as contextual ASR with biasing capability.
| [
{
"created": "Fri, 2 Feb 2024 18:23:09 GMT",
"version": "v1"
}
] | 2024-02-08 | [
[
"Wang",
"Mingqiu",
""
],
[
"Shafran",
"Izhak",
""
],
[
"Soltau",
"Hagen",
""
],
[
"Han",
"Wei",
""
],
[
"Cao",
"Yuan",
""
],
[
"Yu",
"Dian",
""
],
[
"Shafey",
"Laurent El",
""
]
] |
2402.01849 | Elena Monta\~n\'es | Laura Fern\'andez D\'iaz, Miriam Fern\'andez D\'iaz, Jos\'e Ram\'on
Quevedo, Elena Monta\~n\'es | Capturing waste collection planning expert knowledge in a fitness
function through preference learning | null | Engineering Applications of Artificial Intelligence 2021 Volume 99
104113 | 10.1016/j.engappai.2020.104113 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper copes with the COGERSA waste collection process. Up to now,
experts have manually designed the process using a trial and error
mechanism. This process is not globally optimized, since it has been
progressively and locally built as council demands appear. Planning
optimization algorithms usually solve it, but they need a fitness function to
evaluate a route planning quality. The drawback is that even experts are not
able to propose one in a straightforward way due to the complexity of the
process. Hence, the goal of this paper is to build a fitness function through a
preference framework, taking advantage of the available expert knowledge and
expertise. Several key performance indicators together with preference
judgments are carefully established according to the experts for learning a
promising fitness function. Particularly, their additivity property makes
the task much more affordable, since it allows working with routes rather
than with route plannings. Besides, a feature selection analysis is performed
over such indicators, since the experts suspect a potential (but
unknown) redundancy among them. The experimental results confirm this hypothesis,
since the best $C-$index ($98\%$ against around $94\%$) is reached when 6 or 8
out of 21 indicators are taken. Particularly, truck load seems to be a highly
promising key performance indicator, together with the travelled distance along
non-main roads. A comparison with other existing approaches shows that the
proposed method clearly outperforms them, since the $C-$index goes from $72\%$
or $90\%$ to $98\%$.
| [
{
"created": "Fri, 2 Feb 2024 19:04:53 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Díaz",
"Laura Fernández",
""
],
[
"Díaz",
"Miriam Fernández",
""
],
[
"Quevedo",
"José Ramón",
""
],
[
"Montañés",
"Elena",
""
]
] |
2402.01916 | Francisco J. Ribadas-Pena | Francisco J. Ribadas-Pena, Shuyuan Cao, Elmurod Kuriyozov | CoLe and LYS at BioASQ MESINESP8 Task: similarity based descriptor
assignment in Spanish | Accepted at the 8th BioASQ Workshop at the 11th Conference and Labs
of the Evaluation Forum (CLEF) 2020. 11 pages | Working Notes of CLEF 2020. Vol. 2696 of CEUR Workshop Proceedings
(CEUR-WS.org) | null | null | cs.IR cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | In this paper, we describe our participation in the MESINESP Task of the
BioASQ biomedical semantic indexing challenge. The participating system follows
an approach based solely on conventional information retrieval tools. We have
evaluated various alternatives for extracting index terms from IBECS/LILACS
documents in order to be stored in an Apache Lucene index. Those indexed
representations are queried using the contents of the article to be annotated
and a ranked list of candidate labels is created from the retrieved documents.
We also have evaluated a sort of limited Label Powerset approach which creates
meta-labels joining pairs of DeCS labels with high co-occurrence scores, and an
alternative method based on label profile matching. Results obtained in
official runs seem to confirm the suitability of this approach for languages
like Spanish.
| [
{
"created": "Fri, 2 Feb 2024 21:36:03 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Ribadas-Pena",
"Francisco J.",
""
],
[
"Cao",
"Shuyuan",
""
],
[
"Kuriyozov",
"Elmurod",
""
]
] |
2402.01935 | Dejiao Zhang | Dejiao Zhang, Wasi Ahmad, Ming Tan, Hantian Ding, Ramesh Nallapati,
Dan Roth, Xiaofei Ma, Bing Xiang | Code Representation Learning At Scale | 10 pages | ICLR 2024 | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent studies have shown that code language models at scale demonstrate
significant performance gains on downstream tasks, i.e., code generation.
However, most of the existing works on code representation learning train
models at a hundred million parameter scale using very limited pretraining
corpora. In this work, we fuel code representation learning with a vast amount
of code data via a two-stage pretraining scheme. We first train the encoders
via a mix that leverages both randomness in masking language modeling and the
structure aspect of programming language. We then enhance the representations
via contrastive learning with hard negative and hard positive constructed in an
unsupervised manner. We establish an off-the-shelf encoder model that
persistently outperforms the existing models on a wide variety of downstream
tasks by large margins. To comprehend the factors contributing to successful
code representation learning, we conduct detailed ablations and share our
findings on (i) a customized and effective token-level denoising scheme for
source code; (ii) the importance of hard negatives and hard positives; (iii)
how the proposed bimodal contrastive learning boosts the cross-lingual semantic
search performance; and (iv) how the pretraining schemes determine how downstream
task performance scales with the model size.
| [
{
"created": "Fri, 2 Feb 2024 22:19:15 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Zhang",
"Dejiao",
""
],
[
"Ahmad",
"Wasi",
""
],
[
"Tan",
"Ming",
""
],
[
"Ding",
"Hantian",
""
],
[
"Nallapati",
"Ramesh",
""
],
[
"Roth",
"Dan",
""
],
[
"Ma",
"Xiaofei",
""
],
[
"Xiang",
"Bing",
""
]
] |
2402.01963 | Francisco J. Ribadas-Pena | Francisco J. Ribadas-Pena, Shuyuan Cao, V\'ictor M. Darriba Bilbao | Improving Large-Scale k-Nearest Neighbor Text Categorization with Label
Autoencoders | 22 pages, 4 figures | Mathematics 2022, 10(16), 2867 | 10.3390/math10162867 | null | cs.LG cs.CL cs.IR | http://creativecommons.org/licenses/by/4.0/ | In this paper, we introduce a multi-label lazy learning approach to deal with
automatic semantic indexing in large document collections in the presence of
complex and structured label vocabularies with high inter-label correlation.
The proposed method is an evolution of the traditional k-Nearest Neighbors
algorithm which uses a large autoencoder trained to map the large label space
to a reduced size latent space and to regenerate the predicted labels from this
latent space. We have evaluated our proposal in a large portion of the MEDLINE
biomedical document collection which uses the Medical Subject Headings (MeSH)
thesaurus as a controlled vocabulary. In our experiments we propose and
evaluate several document representation approaches and different label
autoencoder configurations.
| [
{
"created": "Sat, 3 Feb 2024 00:11:29 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Ribadas-Pena",
"Francisco J.",
""
],
[
"Cao",
"Shuyuan",
""
],
[
"Bilbao",
"Víctor M. Darriba",
""
]
] |
2402.02094 | Wenjia Xu | Wenjia Xu, Jiuniu Wang, Zhiwei Wei, Mugen Peng, Yirong Wu | Deep Semantic-Visual Alignment for Zero-Shot Remote Sensing Image Scene
Classification | Published in ISPRS P&RS. The code is available at
https://github.com/wenjiaXu/RS_Scene_ZSL | ISPRS Journal of Photogrammetry and Remote Sensing, Volume 198,
2023, Pages 140-152 | 10.1016/j.isprsjprs.2023.02.012 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks have achieved promising progress in remote sensing (RS)
image classification, for which the training process requires abundant samples
for each class. However, it is time-consuming and unrealistic to annotate
labels for each RS category, given the fact that the RS target database is
increasing dynamically. Zero-shot learning (ZSL) allows for identifying novel
classes that are not seen during training, which provides a promising solution
for the aforementioned problem. However, previous ZSL models mainly depend on
manually-labeled attributes or word embeddings extracted from language models
to transfer knowledge from seen classes to novel classes. Besides, pioneer ZSL
models use convolutional neural networks pre-trained on ImageNet, which focus
on the main objects appearing in each image, neglecting the background context
that also matters in RS scene classification. To address the above problems, we
propose to collect visually detectable attributes automatically. We predict
attributes for each class by depicting the semantic-visual similarity between
attributes and images. In this way, the attribute annotation process is
accomplished by machines instead of by humans as in other methods. Moreover, we
propose a Deep Semantic-Visual Alignment (DSVA) model that takes advantage of the
self-attention mechanism in the transformer to associate local image regions
together, integrating the background context information for prediction. The
DSVA model further utilizes the attribute attention maps to focus on the
informative image regions that are essential for knowledge transfer in ZSL, and
maps the visual images into attribute space to perform ZSL classification. With
extensive experiments, we show that our model outperforms other
state-of-the-art models by a large margin on a challenging large-scale RS scene
classification benchmark.
| [
{
"created": "Sat, 3 Feb 2024 09:18:49 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Xu",
"Wenjia",
""
],
[
"Wang",
"Jiuniu",
""
],
[
"Wei",
"Zhiwei",
""
],
[
"Peng",
"Mugen",
""
],
[
"Wu",
"Yirong",
""
]
] |
2402.02110 | Guang-Yuan Hao | Guang-Yuan Hao, Hengguan Huang, Haotian Wang, Jie Gao, Hao Wang | Composite Active Learning: Towards Multi-Domain Active Learning with
Theoretical Guarantees | null | AAAI 2024 | null | null | cs.LG cs.AI cs.CV cs.NE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Active learning (AL) aims to improve model performance within a fixed
labeling budget by choosing the most informative data points to label. Existing
AL focuses on the single-domain setting, where all data come from the same
domain (e.g., the same dataset). However, many real-world tasks often involve
multiple domains. For example, in visual recognition, it is often desirable to
train an image classifier that works across different environments (e.g.,
different backgrounds), where images from each environment constitute one
domain. Such a multi-domain AL setting is challenging for prior methods because
they (1) ignore the similarity among different domains when assigning labeling
budget and (2) fail to handle distribution shift of data across different
domains. In this paper, we propose the first general method, dubbed composite
active learning (CAL), for multi-domain AL. Our approach explicitly considers
the domain-level and instance-level information in the problem; CAL first
assigns domain-level budgets according to domain-level importance, which is
estimated by optimizing an upper error bound that we develop; with the
domain-level budgets, CAL then leverages a certain instance-level query
strategy to select samples to label from each domain. Our theoretical analysis
shows that our method achieves a better error bound compared to current AL
methods. Our empirical results demonstrate that our approach significantly
outperforms the state-of-the-art AL methods on both synthetic and real-world
multi-domain datasets. Code is available at
https://github.com/Wang-ML-Lab/multi-domain-active-learning.
| [
{
"created": "Sat, 3 Feb 2024 10:22:18 GMT",
"version": "v1"
}
] | 2024-02-12 | [
[
"Hao",
"Guang-Yuan",
""
],
[
"Huang",
"Hengguan",
""
],
[
"Wang",
"Haotian",
""
],
[
"Gao",
"Jie",
""
],
[
"Wang",
"Hao",
""
]
] |
2402.02121 | Hossein Bagheri | Ali Mirzaei, Hossein Bagheri, and Iman Khosravi | Enhancing crop classification accuracy by synthetic SAR-Optical data
generation using deep learning | null | ISPRS Int. J. Geo-Inf. 2023, 12(11), 450 | 10.3390/ijgi12110450 | null | cs.CV cs.LG eess.IV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Crop classification using remote sensing data has emerged as a prominent
research area in recent decades. Studies have demonstrated that fusing SAR and
optical images can significantly enhance the accuracy of classification.
However, a major challenge in this field is the limited availability of
training data, which adversely affects the performance of classifiers. In
agricultural regions, the dominant crops typically consist of one or two
specific types, while other crops are scarce. Consequently, when collecting
training samples to create a map of agricultural products, there is an
abundance of samples from the dominant crops, forming the majority classes.
Conversely, samples from other crops are scarce, representing the minority
classes. Addressing this issue requires overcoming several challenges and
weaknesses associated with traditional data generation methods. These methods
have been employed to tackle the imbalanced nature of the training data.
Nevertheless, they still face limitations in effectively handling the minority
classes. Overall, the issue of inadequate training data, particularly for
minority classes, remains a hurdle that traditional methods struggle to
overcome. In this research, we explore the effectiveness of conditional tabular
generative adversarial network (CTGAN) as a synthetic data generation method
based on a deep learning network, in addressing the challenge of limited
training data for minority classes in crop classification using the fusion of
SAR-optical data. Our findings demonstrate that the proposed method generates
synthetic data with higher quality that can significantly increase the number
of samples for minority classes leading to better performance of crop
classifiers.
| [
{
"created": "Sat, 3 Feb 2024 11:07:50 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Mirzaei",
"Ali",
""
],
[
"Bagheri",
"Hossein",
""
],
[
"Khosravi",
"Iman",
""
]
] |
2402.02141 | Bo Yang | Bo Yang, Chen Wang, Xiaoshuang Ma, Beiping Song, Zhuang Liu and Fangde
Sun | Zero-shot sketch-based remote sensing image retrieval based on
multi-level and attention-guided tokenization | 44 pages, 6 figures | Remote Sens. 2024, 16, 1653 | 10.3390/rs16101653 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Effectively and efficiently retrieving images from remote sensing databases
is a critical challenge in the realm of remote sensing big data. Utilizing
hand-drawn sketches as retrieval inputs offers intuitive and user-friendly
advantages, yet the potential of multi-level feature integration from sketches
remains underexplored, leading to suboptimal retrieval performance. To address
this gap, our study introduces a novel zero-shot, sketch-based retrieval method
for remote sensing images, leveraging multi-level feature extraction,
self-attention-guided tokenization and filtering, and cross-modality attention
update. This approach employs only vision information and does not require
semantic knowledge concerning the sketch and image. It starts by employing
multi-level self-attention guided feature extraction to tokenize the query
sketches, as well as self-attention feature extraction to tokenize the
candidate images. It then employs cross-attention mechanisms to establish token
correspondence between these two modalities, facilitating the computation of
sketch-to-image similarity. Our method significantly outperforms existing
sketch-based remote sensing image retrieval techniques, as evidenced by tests
on multiple datasets. Notably, it also exhibits robust zero-shot learning
capabilities and strong generalizability in handling unseen categories and
novel remote sensing data. The method's scalability can be further enhanced by
the pre-calculation of retrieval tokens for all candidate images in a database.
This research underscores the significant potential of multi-level,
attention-guided tokenization in cross-modal remote sensing image retrieval.
For broader accessibility and research facilitation, we have made the code and
dataset used in this study publicly available online. Code and dataset are
available at https://github.com/Snowstormfly/Cross-modal-retrieval-MLAGT.
| [
{
"created": "Sat, 3 Feb 2024 13:11:14 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Mar 2024 12:15:57 GMT",
"version": "v2"
},
{
"created": "Thu, 16 May 2024 03:00:22 GMT",
"version": "v3"
}
] | 2024-05-20 | [
[
"Yang",
"Bo",
""
],
[
"Wang",
"Chen",
""
],
[
"Ma",
"Xiaoshuang",
""
],
[
"Song",
"Beiping",
""
],
[
"Liu",
"Zhuang",
""
],
[
"Sun",
"Fangde",
""
]
] |
2402.02181 | Jos\'e Alberto Ben\'itez-Andrades Ph.D. | Jos\'e Alberto Ben\'itez-Andrades, Isa\'ias Garc\'ia-Rodr\'iguez,
Carmen Benavides, H\'ector Al\'aiz-Moret\'on and Jos\'e Emilio Labra Gayo | An Ontology-Based multi-domain model in Social Network Analysis:
Experimental validation and case study | null | Information Sciences, Volume 540, November 2020, Pages 390-413 | 10.1016/j.ins.2020.06.008 | null | cs.SI cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The use of social network theory and methods of analysis have been applied to
different domains in recent years, including public health. The complete
procedure for carrying out a social network analysis (SNA) is a time-consuming
task that entails a series of steps in which the expert in social network
analysis could make mistakes. This research presents a multi-domain knowledge
model capable of automatically gathering data and carrying out different social
network analyses in different domains, without errors and obtaining the same
conclusions that an expert in SNA would obtain. The model is represented in an
ontology called OntoSNAQA, which is made up of classes, properties and rules
representing the domains of People, Questionnaires and Social Network Analysis.
Besides the ontology itself, different rules are represented by SWRL and SPARQL
queries. A Knowledge Based System was created using OntoSNAQA and applied to a
real case study in order to show the advantages of the approach. Finally, the
results of an SNA analysis obtained through the model were compared to those
obtained from some of the most widely used SNA applications: UCINET, Pajek,
Cytoscape and Gephi, to test and confirm the validity of the model.
| [
{
"created": "Sat, 3 Feb 2024 15:11:19 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Benítez-Andrades",
"José Alberto",
""
],
[
"García-Rodríguez",
"Isaías",
""
],
[
"Benavides",
"Carmen",
""
],
[
"Aláiz-Moretón",
"Héctor",
""
],
[
"Gayo",
"José Emilio Labra",
""
]
] |
2402.02183 | Jos\'e Alberto Ben\'itez-Andrades Ph.D. | Mar\'ia Teresa Garc\'ia-Ord\'as, Jos\'e Alberto Ben\'itez-Andrades,
Isa\'ias Garc\'ia-Rodr\'iguez, Carmen Benavides and H\'ector Alaiz-Moret\'on | Detecting Respiratory Pathologies Using Convolutional Neural Networks
and Variational Autoencoders for Unbalancing Data | null | Sensors 2020, Volume 20 Issue 4, ID 1214 | 10.3390/s20041214 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The aim of this paper was the detection of pathologies through respiratory
sounds. The ICBHI (International Conference on Biomedical and Health
Informatics) Benchmark was used. This dataset is composed of 920 sounds of
which 810 are of chronic diseases, 75 of non-chronic diseases and only 35 of
healthy individuals. As more than 88% of the samples of the dataset are from
the same class (Chronic), the use of a Variational Convolutional Autoencoder
was proposed to generate new labeled data and other well known oversampling
techniques after determining that the dataset classes are unbalanced. Once the
preprocessing step was carried out, a Convolutional Neural Network (CNN) was
used to classify the respiratory sounds into healthy, chronic, and non-chronic
disease. In addition, we carried out a more challenging classification trying
to distinguish between the different types of pathologies or healthy: URTI,
COPD, Bronchiectasis, Pneumonia, and Bronchiolitis. We achieved results up to
0.993 F-Score in the three-label classification and 0.990 F-Score in the more
challenging six-class classification.
| [
{
"created": "Sat, 3 Feb 2024 15:17:32 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"García-Ordás",
"María Teresa",
""
],
[
"Benítez-Andrades",
"José Alberto",
""
],
[
"García-Rodríguez",
"Isaías",
""
],
[
"Benavides",
"Carmen",
""
],
[
"Alaiz-Moretón",
"Héctor",
""
]
] |
2402.02188 | Jos\'e Alberto Ben\'itez-Andrades Ph.D. | Mar\'ia Teresa Garc\'ia-Ord\'as, Carmen Benavides, Jos\'e Alberto
Ben\'itez-Andrades, H\'ector Alaiz-Moret\'on and Isa\'ias
Garc\'ia-Rodr\'iguez | Diabetes detection using deep learning techniques with oversampling and
feature augmentation | null | Computer Methods and Programs in Biomedicine, Volume 202, April
2021, ID 105968 | 10.1016/j.cmpb.2021.105968 | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Background and objective: Diabetes is a chronic pathology which is affecting
more and more people over the years. It gives rise to a large number of deaths
each year. Furthermore, many people living with the disease do not realize the
seriousness of their health status early enough. Late diagnosis brings about
numerous health problems and a large number of deaths each year so the
development of methods for the early diagnosis of this pathology is essential.
Methods: In this paper, a pipeline based on deep learning techniques is
proposed to predict diabetic people. It includes data augmentation using a
variational autoencoder (VAE), feature augmentation using a sparse autoencoder
(SAE) and a convolutional neural network for classification. Pima Indians
Diabetes Database, which takes into account information on the patients such as
the number of pregnancies, glucose or insulin level, blood pressure or age, has
been evaluated.
  Results: An accuracy of 92.31% was obtained when the CNN classifier was
trained jointly with the SAE for feature augmentation over a well-balanced
dataset. This represents an increment of 3.17% in accuracy with respect to the
state-of-the-art.
  Conclusions: Using a full deep learning pipeline for data preprocessing and
classification has been demonstrated to be very promising in the diabetes
detection field, outperforming state-of-the-art proposals.
| [
{
"created": "Sat, 3 Feb 2024 15:30:20 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"García-Ordás",
"María Teresa",
""
],
[
"Benavides",
"Carmen",
""
],
[
"Benítez-Andrades",
"José Alberto",
""
],
[
"Alaiz-Moretón",
"Héctor",
""
],
[
"García-Rodríguez",
"Isaías",
""
]
] |
2402.02209 | Orazio Pontorno | Orazio Pontorno (1), Luca Guarnera (1), Sebastiano Battiato (1) ((1)
University of Catania) | On the Exploitation of DCT-Traces in the Generative-AI Domain | null | 2024 IEEE International Conference on Image Processing (ICIP) | 10.1109/ICIP51287.2024.10648013 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deepfakes represent one of the toughest challenges in the world of
Cybersecurity and Digital Forensics, especially considering the high-quality
results obtained with recent generative AI-based solutions. Almost all
generative models leave unique traces in synthetic data that, if analyzed and
identified in detail, can be exploited to improve the generalization
limitations of existing deepfake detectors. In this paper we analyzed deepfake
images in the frequency domain generated by both GAN and Diffusion Model
engines, examining in detail the underlying statistical distribution of
Discrete Cosine Transform (DCT) coefficients. Recognizing that not all
coefficients contribute equally to image detection, we hypothesize the
existence of a unique "discriminative fingerprint", embedded in specific
combinations of coefficients. To identify them, Machine Learning classifiers
were trained on various combinations of coefficients. In addition, the
Explainable AI (XAI) LIME algorithm was used to search for intrinsic
discriminative combinations of coefficients. Finally, we performed a robustness
test to analyze the persistence of traces by applying JPEG compression. The
experimental results reveal the existence of traces left by the generative
models that are more discriminative and persistent under JPEG attacks. Code and
dataset are available at https://github.com/opontorno/dcts_analysis_deepfakes.
| [
{
"created": "Sat, 3 Feb 2024 16:45:31 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Feb 2024 08:25:06 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Jul 2024 16:16:45 GMT",
"version": "v3"
}
] | 2024-10-04 | [
[
"Pontorno",
"Orazio",
""
],
[
"Guarnera",
"Luca",
""
],
[
"Battiato",
"Sebastiano",
""
]
] |
2402.02210 | Haochen Chang | Haochen Chang, Jing Chen, Yilin Li, Jixiang Chen, Xiaofeng Zhang | Wavelet-Decoupling Contrastive Enhancement Network for Fine-Grained
Skeleton-Based Action Recognition | Accepted by ICASSP 2024 | IEEE International Conference on Acoustics, Speech and Signal
Processing, Apr 2024, Seoul (Korea), South Korea | 10.1109/ICASSP48485.2024.10448199 | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Skeleton-based action recognition has attracted much attention, benefiting
from its succinctness and robustness. However, the minimal inter-class
variation in similar action sequences often leads to confusion. The inherent
spatiotemporal coupling characteristics make it challenging to mine the subtle
differences in joint motion trajectories, which is critical for distinguishing
confusing fine-grained actions. To alleviate this problem, we propose a
Wavelet-Attention Decoupling (WAD) module that utilizes discrete wavelet
transform to effectively disentangle salient and subtle motion features in the
time-frequency domain. Then, the decoupling attention adaptively recalibrates
their temporal responses. To further amplify the discrepancies in these subtle
motion features, we propose a Fine-grained Contrastive Enhancement (FCE) module
to enhance attention towards trajectory features by contrastive learning.
Extensive experiments are conducted on the coarse-grained dataset NTU RGB+D and
the fine-grained dataset FineGYM. Our methods perform competitively compared to
state-of-the-art methods and can discriminate confusing fine-grained actions
well.
| [
{
"created": "Sat, 3 Feb 2024 16:51:04 GMT",
"version": "v1"
}
] | 2024-04-02 | [
[
"Chang",
"Haochen",
""
],
[
"Chen",
"Jing",
""
],
[
"Li",
"Yilin",
""
],
[
"Chen",
"Jixiang",
""
],
[
"Zhang",
"Xiaofeng",
""
]
] |
2402.02314 | Haowei Lin | Haowei Lin, Baizhou Huang, Haotian Ye, Qinyu Chen, Zihao Wang, Sujian
Li, Jianzhu Ma, Xiaojun Wan, James Zou, Yitao Liang | Selecting Large Language Model to Fine-tune via Rectified Scaling Law | null | ICML 2024 | null | null | cs.LG cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ever-growing ecosystem of LLMs has posed a challenge in selecting the
most appropriate pre-trained model to fine-tune amidst a sea of options. Given
constrained resources, fine-tuning all models and making selections afterward
is unrealistic. In this work, we formulate this resource-constrained selection
task into predicting fine-tuning performance and illustrate its natural
connection with Scaling Law. Unlike pre-training, we find that the fine-tuning
scaling curve includes not just the well-known "power phase" but also the
previously unobserved "pre-power phase". We also explain why existing Scaling
Law fails to capture this phase transition phenomenon both theoretically and
empirically. To address this, we introduce the concept of "pre-learned data
size" into our Rectified Scaling Law, which overcomes theoretical limitations
and fits experimental results much better. By leveraging our law, we propose a
novel LLM selection algorithm that selects the near-optimal model with hundreds
of times less resource consumption, while other methods may provide negatively
correlated selection. The project page is available at
rectified-scaling-law.github.io.
| [
{
"created": "Sun, 4 Feb 2024 01:55:00 GMT",
"version": "v1"
},
{
"created": "Mon, 27 May 2024 15:11:22 GMT",
"version": "v2"
},
{
"created": "Tue, 28 May 2024 16:16:42 GMT",
"version": "v3"
}
] | 2024-05-29 | [
[
"Lin",
"Haowei",
""
],
[
"Huang",
"Baizhou",
""
],
[
"Ye",
"Haotian",
""
],
[
"Chen",
"Qinyu",
""
],
[
"Wang",
"Zihao",
""
],
[
"Li",
"Sujian",
""
],
[
"Ma",
"Jianzhu",
""
],
[
"Wan",
"Xiaojun",
""
],
[
"Zou",
"James",
""
],
[
"Liang",
"Yitao",
""
]
] |
2402.02388 | Tong Niu | Tong Niu, Weihao Zhang, Rong Zhao | Solution-oriented Agent-based Models Generation with Verifier-assisted
Iterative In-context Learning | null | International Conference on Autonomous Agents and Multiagent
Systems 2024 | null | null | cs.CL cs.AI cs.LG cs.SE | http://creativecommons.org/licenses/by/4.0/ | Agent-based models (ABMs) stand as an essential paradigm for proposing and
validating hypothetical solutions or policies aimed at addressing challenges
posed by complex systems and achieving various objectives. This process demands
labor-intensive endeavors and multidisciplinary expertise. Large language
models (LLMs) encapsulating cross-domain knowledge and programming proficiency
could potentially alleviate the difficulty of this process. However, LLMs excel
in handling sequential information, making it challenging to analyze the
intricate interactions and nonlinear dynamics inherent in ABMs. Additionally,
due to the lack of self-evaluation capability of LLMs, relying solely on LLMs
is insufficient to effectively accomplish this process. In this paper, we
present SAGE, a general solution-oriented ABM generation framework designed for
automatic modeling and generating solutions for targeted problems. Unlike
approaches reliant on expert handcrafting or resource-intensive neural network
training, SAGE establishes a verifier-assisted iterative in-context learning
process employing large language models (LLMs) to leverage their inherent
cross-domain knowledge for tackling intricate demands from diverse domain
scenarios. In SAGE, we introduce a semi-structured conceptual representation
that makes explicit the intricate structures of ABMs and an objective representation to
guide LLMs in modeling scenarios and proposing hypothetical solutions through
in-context learning. To ensure the model executability and solution
feasibility, SAGE devises a two-level verifier with chain-of-thought prompting
tailored to the complex interactions and non-linear dynamics of ABMs, driving
the iterative generation optimization. Moreover, we construct an evaluation
dataset of solution-oriented ABMs from open sources. It contains practical
models across various domains.
| [
{
"created": "Sun, 4 Feb 2024 07:59:06 GMT",
"version": "v1"
}
] | 2024-04-02 | [
[
"Niu",
"Tong",
""
],
[
"Zhang",
"Weihao",
""
],
[
"Zhao",
"Rong",
""
]
] |
2402.02397 | Aydogan Ozcan | Guangdong Ma, Xilin Yang, Bijie Bai, Jingxi Li, Yuhang Li, Tianyi Gan,
Che-Yung Shen, Yijie Zhang, Yuzhu Li, Mona Jarrahi, Aydogan Ozcan | Multiplexed all-optical permutation operations using a reconfigurable
diffractive optical network | 37 Pages, 10 Figures | Laser & Photonics Reviews (2024) | 10.1002/lpor.202400238 | null | physics.optics cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large-scale and high-dimensional permutation operations are important for
various applications in e.g., telecommunications and encryption. Here, we
demonstrate the use of all-optical diffractive computing to execute a set of
high-dimensional permutation operations between an input and output
field-of-view through layer rotations in a diffractive optical network. In this
reconfigurable multiplexed material designed by deep learning, every
diffractive layer has four orientations: 0, 90, 180, and 270 degrees. Each
unique combination of these rotatable layers represents a distinct rotation
state of the diffractive design tailored for a specific permutation operation.
Therefore, a K-layer rotatable diffractive material is capable of all-optically
performing up to 4^K independent permutation operations. The original input
information can be decrypted by applying the specific inverse permutation
matrix to output patterns, while applying other inverse operations will lead to
loss of information. We demonstrated the feasibility of this reconfigurable
multiplexed diffractive design by approximating 256 randomly selected
permutation matrices using K=4 rotatable diffractive layers. We also
experimentally validated this reconfigurable diffractive network using
terahertz radiation and 3D-printed diffractive layers, providing a decent match
to our numerical results. The presented rotation-multiplexed diffractive
processor design is particularly useful due to its mechanical
reconfigurability, offering multifunctional representation through a single
fabrication process.
| [
{
"created": "Sun, 4 Feb 2024 08:19:14 GMT",
"version": "v1"
}
] | 2024-07-08 | [
[
"Ma",
"Guangdong",
""
],
[
"Yang",
"Xilin",
""
],
[
"Bai",
"Bijie",
""
],
[
"Li",
"Jingxi",
""
],
[
"Li",
"Yuhang",
""
],
[
"Gan",
"Tianyi",
""
],
[
"Shen",
"Che-Yung",
""
],
[
"Zhang",
"Yijie",
""
],
[
"Li",
"Yuzhu",
""
],
[
"Jarrahi",
"Mona",
""
],
[
"Ozcan",
"Aydogan",
""
]
] |
2402.02449 | Francisco J. Ribadas-Pena | Manuel Vilares Ferro, V\'ictor M. Darriba Bilbao, Francisco J.
Ribadas-Pena, Jorge Gra\~na Gil | Surfing the modeling of PoS taggers in low-resource scenarios | 17 pages, 5 figures | Mathematics 2022, 10(19), 3526 | 10.3390/math10193526 | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | The recent trend towards the application of deep structured techniques has
revealed the limits of huge models in natural language processing. This has
reawakened the interest in traditional machine learning algorithms, which have
proved still to be competitive in certain contexts, in particular low-resource
settings. In parallel, model selection has become an essential task to boost
performance at reasonable cost, even more so when we talk about processes
involving domains where the training and/or computational resources are scarce.
Against this backdrop, we evaluate the early estimation of learning curves as a
practical mechanism for selecting the most appropriate model in scenarios
characterized by the use of non-deep learners in resource-lean settings. On the
basis of a formal approximation model previously evaluated under conditions of
wide availability of training and validation resources, we study the
reliability of such an approach in a different and much more demanding
operational environment. Using as a case study the generation of PoS taggers for
Galician, a language belonging to the Western Ibero-Romance group, the
experimental results are consistent with our expectations.
| [
{
"created": "Sun, 4 Feb 2024 11:38:12 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Ferro",
"Manuel Vilares",
""
],
[
"Bilbao",
"Víctor M. Darriba",
""
],
[
"Ribadas-Pena",
"Francisco J.",
""
],
[
"Gil",
"Jorge Graña",
""
]
] |
2402.02513 | V\'ictor Manuel Darriba Bilbao | Manuel Vilares Ferro, Yerai Doval Mosquera, Francisco J. Ribadas Pena,
Victor M. Darriba Bilbao | Early stopping by correlating online indicators in neural networks | 26 pages, 6 figures | Neural Networks, 159 (2023), pp 109-124. ISSN 1879-2782. Elsevier | 10.1016/j.jcss.2022.05.002 | null | cs.LG cs.AI cs.CL cs.NE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In order to minimize the generalization error in neural networks, a novel
technique to identify overfitting phenomena when training the learner is
formally introduced. This enables support of a reliable and trustworthy early
stopping condition, thus improving the predictive power of that type of
modeling. Our proposal exploits the correlation over time in a collection of
online indicators, namely characteristic functions for indicating if a set of
hypotheses are met, associated with a range of independent stopping conditions
built from a canary judgment to evaluate the presence of overfitting. That way,
we provide a formal basis for decision making in terms of interrupting the
learning process.
As opposed to previous approaches focused on a single criterion, we take
advantage of subsidiarities between independent assessments, thus seeking both
a wider operating range and greater diagnostic reliability. With a view to
illustrating the effectiveness of the halting condition described, we choose to
work in the sphere of natural language processing, an operational continuum
increasingly based on machine learning. As a case study, we focus on parser
generation, one of the most demanding and complex tasks in the domain. The
selection of cross-validation as a canary function enables an actual comparison
with the most representative early stopping conditions based on overfitting
identification, pointing to a promising start toward an optimal bias and
variance control.
| [
{
"created": "Sun, 4 Feb 2024 14:57:20 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Ferro",
"Manuel Vilares",
""
],
[
"Mosquera",
"Yerai Doval",
""
],
[
"Pena",
"Francisco J. Ribadas",
""
],
[
"Bilbao",
"Victor M. Darriba",
""
]
] |
2402.02515 | V\'ictor Manuel Darriba Bilbao | Manuel Vilares Ferro, Victor M. Darriba Bilbao, Francisco J. Ribadas
Pena | Modeling of learning curves with applications to pos tagging | 30 pages, 11 figures | Computer Speech & Language, 41, pp 1-28 (2017). ISSN 0885-2308.
Elsevier | 10.1016/j.csl.2016.06.001 | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | An algorithm to estimate the evolution of learning curves on the whole of a
training data base, based on the results obtained from a portion and using a
functional strategy, is introduced. We approximate iteratively the sought value
at the desired time, independently of the learning technique used and once a
point in the process, called prediction level, has been passed. The proposal
proves to be formally correct with respect to our working hypotheses and
includes a reliable proximity condition. This allows the user to fix a
convergence threshold with respect to the accuracy finally achievable, which
extends the concept of stopping criterion and seems to be effective even in the
presence of distorting observations.
Our aim is to evaluate the training effort, supporting decision making in
order to reduce the need for both human and computational resources during the
learning process. The proposal is of interest in at least three operational
procedures. The first is the anticipation of accuracy gain, with the purpose of
measuring how much work is needed to achieve a certain degree of performance.
The second relates the comparison of efficiency between systems at training
time, with the objective of completing this task only for the one that best
suits our requirements. The prediction of accuracy is also a valuable item of
information for customizing systems, since we can estimate in advance the
impact of settings on both the performance and the development costs. Using the
generation of part-of-speech taggers as an example application, the
experimental results are consistent with our expectations.
| [
{
"created": "Sun, 4 Feb 2024 15:00:52 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Ferro",
"Manuel Vilares",
""
],
[
"Bilbao",
"Victor M. Darriba",
""
],
[
"Pena",
"Francisco J. Ribadas",
""
]
] |
2402.02516 | V\'ictor Manuel Darriba Bilbao | Manuel Vilares Ferro, Victor M. Darriba Bilbao, Jes\'us Vilares Ferro | Adaptive scheduling for adaptive sampling in POS taggers construction | 23 pages, 10 figures | Computer Speech & Language, 60, 101020 (2020), pp 1-18. ISSN
0885-2308. Elsevier | 10.1016/j.csl.2019.101020 | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We introduce an adaptive scheduling for adaptive sampling as a novel way of
machine learning in the construction of part-of-speech taggers. The goal is to
speed up the training on large data sets, without significant loss of
performance with regard to an optimal configuration. In contrast to previous
methods using a random, fixed or regularly rising spacing between the
instances, ours analyzes the shape of the learning curve geometrically in
conjunction with a functional model to increase or decrease it at any time. The
algorithm proves to be formally correct regarding our working hypotheses.
Namely, given a case, the following one is the nearest ensuring a net gain of
learning ability from the former, it being possible to modulate the level of
requirement for this condition. We also improve the robustness of sampling by
paying greater attention to those regions of the training data base subject to
a temporary inflation in performance, thus preventing the learning from
stopping prematurely.
The proposal has been evaluated on the basis of its reliability to identify
the convergence of models, corroborating our expectations. While a concrete
halting condition is used for testing, users can choose any condition
whatsoever to suit their own specific needs.
| [
{
"created": "Sun, 4 Feb 2024 15:02:17 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Ferro",
"Manuel Vilares",
""
],
[
"Bilbao",
"Victor M. Darriba",
""
],
[
"Ferro",
"Jesús Vilares",
""
]
] |
2402.02522 | V\'ictor Manuel Darriba Bilbao | Manuel Vilares Ferro, Victor M. Darriba Bilbao, Jes\'us Vilares Ferro | Absolute convergence and error thresholds in non-active adaptive
sampling | 27 pages, 10 figures | Journal of Computer and System Sciences, 129 (2020) , pp 39-61.
ISSN 1090-2724. Elsevier | 10.1016/j.jcss.2022.05.002 | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Non-active adaptive sampling is a way of building machine learning models
from a training data base which are supposed to dynamically and automatically
derive a guaranteed sample size. In this context and regardless of the strategy
used in both scheduling and generating of weak predictors, a proposal for
calculating absolute convergence and error thresholds is described. We not only
make it possible to establish when the quality of the model no longer
increases, but also supply a proximity condition to estimate in absolute
terms how close it is to achieving such a goal, thus supporting decision making
for fine-tuning learning parameters in model selection. The technique proves
its correctness and completeness with respect to our working hypotheses, in
addition to strengthening the robustness of the sampling scheme. Tests meet our
expectations and illustrate the proposal in the domain of natural language
processing, taking the generation of part-of-speech taggers as case study.
| [
{
"created": "Sun, 4 Feb 2024 15:10:34 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Ferro",
"Manuel Vilares",
""
],
[
"Bilbao",
"Victor M. Darriba",
""
],
[
"Ferro",
"Jesús Vilares",
""
]
] |
2402.02574 | Guanxiong Sun | Guanxiong Sun, Chi Wang, Zhaoyu Zhang, Jiankang Deng, Stefanos
Zafeiriou, Yang Hua | Spatio-temporal Prompting Network for Robust Video Feature Extraction | null | 2023 International Conference on Computer Vision (ICCV)
13541-13551 | 10.1109/ICCV51070.2023.01250 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Frame quality deterioration is one of the main challenges in the field of
video understanding. To compensate for the information loss caused by
deteriorated frames, recent approaches exploit transformer-based integration
modules to obtain spatio-temporal information. However, these integration
modules are heavy and complex. Furthermore, each integration module is
specifically tailored for its target task, making it difficult to generalise to
multiple tasks. In this paper, we present a neat and unified framework, called
Spatio-Temporal Prompting Network (STPN). It can efficiently extract robust and
accurate video features by dynamically adjusting the input features in the
backbone network. Specifically, STPN predicts several video prompts containing
spatio-temporal information of neighbour frames. Then, these video prompts are
prepended to the patch embeddings of the current frame as the updated input for
video feature extraction. Moreover, STPN is easy to generalise to various video
tasks because it does not contain task-specific modules. Without bells and
whistles, STPN achieves state-of-the-art performance on three widely-used
datasets for different video understanding tasks, i.e., ImageNetVID for video
object detection, YouTubeVIS for video instance segmentation, and GOT-10k for
visual object tracking. Code is available at
https://github.com/guanxiongsun/vfe.pytorch.
| [
{
"created": "Sun, 4 Feb 2024 17:52:04 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Sun",
"Guanxiong",
""
],
[
"Wang",
"Chi",
""
],
[
"Zhang",
"Zhaoyu",
""
],
[
"Deng",
"Jiankang",
""
],
[
"Zafeiriou",
"Stefanos",
""
],
[
"Hua",
"Yang",
""
]
] |
2402.02591 | Jes\'us Vilares | Yerai Doval, Manuel Vilares, Jes\'us Vilares | On the performance of phonetic algorithms in microtext normalization | Accepted for publication in journal Expert Systems with Applications | Expert Systems with Applications, Volume 113, 2018, Pages 213-222 | 10.1016/j.eswa.2018.07.016 | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | User-generated content published on microblogging social networks constitutes
a priceless source of information. However, microtexts usually deviate from the
standard lexical and grammatical rules of the language, thus making its
processing by traditional intelligent systems very difficult. As an answer,
microtext normalization consists in transforming those non-standard microtexts
into standard well-written texts as a preprocessing step, allowing traditional
approaches to continue with their usual processing. Given the importance of
phonetic phenomena in non-standard text formation, an essential element of the
knowledge base of a normalizer would be the phonetic rules that encode these
phenomena, which can be found in the so-called phonetic algorithms.
In this work we experiment with a wide range of phonetic algorithms for the
English language. The aim of this study is to determine the best phonetic
algorithms within the context of candidate generation for microtext
normalization. In other words, we intend to find those algorithms that taking
as input non-standard terms to be normalized allow us to obtain as output the
smallest possible sets of normalization candidates which still contain the
corresponding target standard words. As it will be stated, the choice of the
phonetic algorithm will depend heavily on the capabilities of the candidate
selection mechanism which we usually find at the end of a microtext
normalization pipeline. The faster it can make the right choices among big
enough sets of candidates, the more we can sacrifice on the precision of the
phonetic algorithms in favour of coverage in order to increase the overall
performance of the normalization system.
KEYWORDS: microtext normalization; phonetic algorithm; fuzzy matching;
Twitter; texting
| [
{
"created": "Sun, 4 Feb 2024 19:54:44 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Doval",
"Yerai",
""
],
[
"Vilares",
"Manuel",
""
],
[
"Vilares",
"Jesús",
""
]
] |
2402.02639 | Ben Hutchinson | Ned Cooper, Courtney Heldreth, Ben Hutchinson | "It's how you do things that matters": Attending to Process to Better
Serve Indigenous Communities with Language Technologies | null | Proceedings of the 18th Conference of the European Chapter of the
Association for Computational Linguistics (EACL 2024) | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Indigenous languages are historically under-served by Natural Language
Processing (NLP) technologies, but this is changing for some languages with the
recent scaling of large multilingual models and an increased focus by the NLP
community on endangered languages. This position paper explores ethical
considerations in building NLP technologies for Indigenous languages, based on
the premise that such projects should primarily serve Indigenous communities.
We report on interviews with 17 researchers working in or with Aboriginal
and/or Torres Strait Islander communities on language technology projects in
Australia. Drawing on insights from the interviews, we recommend practices for
NLP researchers to increase attention to the process of engagements with
Indigenous communities, rather than focusing only on decontextualised
artefacts.
| [
{
"created": "Sun, 4 Feb 2024 23:23:51 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Feb 2024 02:50:48 GMT",
"version": "v2"
}
] | 2024-02-07 | [
[
"Cooper",
"Ned",
""
],
[
"Heldreth",
"Courtney",
""
],
[
"Hutchinson",
"Ben",
""
]
] |
2402.02768 | Salwa Mostafa | Salwa Mostafa, Mohammed S. Elbamby, Mohamed K. Abdel-Aziz, and Mehdi
Bennis | Intent Profiling and Translation Through Emergent Communication | null | IEEE International Conference on Communications (ICC2024) | null | null | cs.NI cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | To effectively express and satisfy network application requirements,
intent-based network management has emerged as a promising solution. In
intent-based methods, users and applications express their intent in a
high-level abstract language to the network. Although this abstraction
simplifies network operation, it induces many challenges to efficiently express
applications' intents and map them to different network capabilities.
Therefore, in this work, we propose an AI-based framework for intent profiling
and translation. We consider a scenario where applications interacting with the
network express their needs for network services in their domain language. The
machine-to-machine communication (i.e., between applications and the network)
is complex since it requires networks to learn how to understand the domain
languages of each application, which is neither practical nor scalable.
Instead, a framework based on emergent communication is proposed for intent
profiling, in which applications express their abstract quality-of-experience
(QoE) intents to the network through emergent communication messages.
Subsequently, the network learns how to interpret these communication messages
and map them to network capabilities (i.e., slices) to guarantee the requested
Quality-of-Service (QoS). Simulation results show that the proposed method
outperforms self-learning slicing and other baselines, and achieves a
performance close to the perfect knowledge baseline.
| [
{
"created": "Mon, 5 Feb 2024 07:02:43 GMT",
"version": "v1"
}
] | 2024-02-08 | [
[
"Mostafa",
"Salwa",
""
],
[
"Elbamby",
"Mohammed S.",
""
],
[
"Abdel-Aziz",
"Mohamed K.",
""
],
[
"Bennis",
"Mehdi",
""
]
] |
2402.02837 | Amandine Decker | Amandine Decker (LORIA, UL, CNRS, SEMAGRAMME, GU), Maxime Amblard
(SEMAGRAMME, LORIA) | With a Little Help from my (Linguistic) Friends: Topic Segmentation of
Multi-party Casual Conversations | null | CODI 2024 - 5th workshop on Computational Approaches to Discourse,
Mar 2024, Malta, Malta | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Topics play an important role in the global organisation of a conversation as
what is currently discussed constrains the possible contributions of the
participant. Understanding the way topics are organised in interaction would
provide insight on the structure of dialogue beyond the sequence of utterances.
However, studying this high-level structure is a complex task that we try to
approach by first segmenting dialogues into smaller topically coherent sets of
utterances. Understanding the interactions between these segments would then
enable us to propose a model of topic organisation at a dialogue level. In this
paper we work with open-domain conversations and try to reach a comparable
level of accuracy as recent machine learning based topic segmentation models
but with a formal approach. The features we identify as meaningful for this
task help us understand better the topical structure of a conversation.
| [
{
"created": "Mon, 5 Feb 2024 09:48:07 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Decker",
"Amandine",
"",
"LORIA, UL, CNRS, SEMAGRAMME, GU"
],
[
"Amblard",
"Maxime",
"",
"SEMAGRAMME, LORIA"
]
] |
2402.02936 | Farhad Pakdaman | Li Yu, Yanjun Gao, Farhad Pakdaman, Moncef Gabbouj | Panoramic Image Inpainting With Gated Convolution And Contextual
Reconstruction Loss | Copyright 2024 IEEE - to appear in IEEE ICASSP 2024 | IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP) 2024 | 10.1109/ICASSP48485.2024.10446469 | null | eess.IV cs.CV cs.LG cs.MM | http://creativecommons.org/licenses/by/4.0/ | Deep learning-based methods have demonstrated encouraging results in tackling
the task of panoramic image inpainting. However, it is challenging for existing
methods to distinguish valid pixels from invalid pixels and find suitable
references for corrupted areas, thus leading to artifacts in the inpainted
results. In response to these challenges, we propose a panoramic image
inpainting framework that consists of a Face Generator, a Cube Generator, a
side branch, and two discriminators. We use the Cubemap Projection (CMP) format
as network input. The generator employs gated convolutions to distinguish valid
pixels from invalid ones, while a side branch is designed utilizing contextual
reconstruction (CR) loss to guide the generators to find the most suitable
reference patch for inpainting the missing region. The proposed method is
compared with state-of-the-art (SOTA) methods on SUN360 Street View dataset in
terms of PSNR and SSIM. Experimental results and ablation study demonstrate
that the proposed method outperforms SOTA both quantitatively and
qualitatively.
| [
{
"created": "Mon, 5 Feb 2024 11:58:08 GMT",
"version": "v1"
}
] | 2024-03-20 | [
[
"Yu",
"Li",
""
],
[
"Gao",
"Yanjun",
""
],
[
"Pakdaman",
"Farhad",
""
],
[
"Gabbouj",
"Moncef",
""
]
] |
2402.03067 | Nikola Milo\v{s}evi\'c Dr | Darija Medvecki, Bojana Ba\v{s}aragin, Adela Ljaji\'c, Nikola
Milo\v{s}evi\'c | Multilingual transformer and BERTopic for short text topic modeling: The
case of Serbian | null | Trajanovic, M., Filipovic, N., Zdravkovic, M. (eds) Disruptive
Information Technologies for a Smart Society. ICIST 2023. Lecture Notes in
Networks and Systems, vol 872. Springer, Cham | 10.1007/978-3-031-50755-7_16 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper presents the results of the first application of BERTopic, a
state-of-the-art topic modeling technique, to short text written in a
morphologically rich language. We applied BERTopic with three multilingual
embedding models on two levels of text preprocessing (partial and full) to
evaluate its performance on partially preprocessed short text in Serbian. We
also compared it to LDA and NMF on fully preprocessed text. The experiments
were conducted on a dataset of tweets expressing hesitancy toward COVID-19
vaccination. Our results show that with adequate parameter setting, BERTopic
can yield informative topics even when applied to partially preprocessed short
text. When the same parameters are applied in both preprocessing scenarios,
the performance drop on partially preprocessed text is minimal. Compared to LDA
and NMF, judging by the keywords, BERTopic offers more informative topics and
gives novel insights when the number of topics is not limited. The findings of
this paper can be significant for researchers working with other
morphologically rich low-resource languages and short text.
| [
{
"created": "Mon, 5 Feb 2024 14:59:29 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Medvecki",
"Darija",
""
],
[
"Bašaragin",
"Bojana",
""
],
[
"Ljajić",
"Adela",
""
],
[
"Milošević",
"Nikola",
""
]
] |
2402.03166 | Jos\'e Morano | Jos\'e Morano and Guilherme Aresta and Hrvoje Bogunovi\'c | RRWNet: Recursive Refinement Network for Effective Retinal Artery/Vein
Segmentation and Classification | null | Expert Systems with Applications, 2024 | 10.1016/j.eswa.2024.124970 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The caliber and configuration of retinal blood vessels serve as important
biomarkers for various diseases and medical conditions. A thorough analysis of
the retinal vasculature requires the segmentation of the blood vessels and
their classification into arteries and veins, typically performed on color
fundus images obtained by retinography. However, manually performing these
tasks is labor-intensive and prone to human error. While several automated
methods have been proposed to address this task, the current state of the art faces
challenges due to manifest classification errors affecting the topological
consistency of segmentation maps. In this work, we introduce RRWNet, a novel
end-to-end deep learning framework that addresses this limitation. The
framework consists of a fully convolutional neural network that recursively
refines semantic segmentation maps, correcting manifest classification errors
and thus improving topological consistency. In particular, RRWNet is composed
of two specialized subnetworks: a Base subnetwork that generates base
segmentation maps from the input images, and a Recursive Refinement subnetwork
that iteratively and recursively improves these maps. Evaluation on three
different public datasets demonstrates the state-of-the-art performance of the
proposed method, yielding more topologically consistent segmentation maps with
fewer manifest classification errors than existing approaches. In addition, the
Recursive Refinement module within RRWNet proves effective in post-processing
segmentation maps from other methods, further demonstrating its potential. The
model code, weights, and predictions will be publicly available at
https://github.com/j-morano/rrwnet.
| [
{
"created": "Mon, 5 Feb 2024 16:35:29 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Mar 2024 12:52:26 GMT",
"version": "v2"
},
{
"created": "Wed, 3 Apr 2024 07:10:22 GMT",
"version": "v3"
},
{
"created": "Thu, 8 Aug 2024 13:32:21 GMT",
"version": "v4"
}
] | 2024-08-09 | [
[
"Morano",
"José",
""
],
[
"Aresta",
"Guilherme",
""
],
[
"Bogunović",
"Hrvoje",
""
]
] |
2402.03176 | Bayode Ogunleye | Bayode Ogunleye, Tonderai Maswera, Laurence Hirsch, Jotham Gaudoin,
and Teresa Brunsdon | Comparison of Topic Modelling Approaches in the Banking Context | 14 pages, Journal of Applied Science | Applied Sciences (2023), 13(2), 797 | 10.3390/app13020797 | null | cs.IR cs.AI cs.LG stat.CO | http://creativecommons.org/licenses/by/4.0/ | Topic modelling is a prominent task for automatic topic extraction in many
applications such as sentiment analysis and recommendation systems. The
approach is vital for service industries to monitor their customer discussions.
The use of traditional approaches such as Latent Dirichlet Allocation (LDA) for
topic discovery has shown great performances, however, they are not consistent
in their results as these approaches suffer from data sparseness and inability
to model the word order in a document. Thus, this study presents the use of
Kernel Principal Component Analysis (KernelPCA) and K-means Clustering in the
BERTopic architecture. We have prepared a new dataset using tweets from
customers of Nigerian banks and we use this to compare the topic modelling
approaches. Our findings showed that KernelPCA and K-means in the BERTopic
architecture produced coherent topics with a coherence score of 0.8463.
| [
{
"created": "Mon, 5 Feb 2024 16:43:53 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"Ogunleye",
"Bayode",
""
],
[
"Maswera",
"Tonderai",
""
],
[
"Hirsch",
"Laurence",
""
],
[
"Gaudoin",
"Jotham",
""
],
[
"Brunsdon",
"Teresa",
""
]
] |
2402.03246 | Shuhong Liu | Mingrui Li, Shuhong Liu, Heng Zhou, Guohao Zhu, Na Cheng, Tianchen
Deng, Hongyu Wang | SGS-SLAM: Semantic Gaussian Splatting For Neural Dense SLAM | null | European Conference on Computer Vision (ECCV) 2024 | null | null | cs.CV cs.AI cs.RO | http://creativecommons.org/licenses/by-sa/4.0/ | We present SGS-SLAM, the first semantic visual SLAM system based on Gaussian
Splatting. It incorporates appearance, geometry, and semantic features through
multi-channel optimization, addressing the oversmoothing limitations of neural
implicit SLAM systems in high-quality rendering, scene understanding, and
object-level geometry. We introduce a unique semantic feature loss that
effectively compensates for the shortcomings of traditional depth and color
losses in object optimization. Through a semantic-guided keyframe selection
strategy, we prevent erroneous reconstructions caused by cumulative errors.
Extensive experiments demonstrate that SGS-SLAM delivers state-of-the-art
performance in camera pose estimation, map reconstruction, precise semantic
segmentation, and object-level geometric accuracy, while ensuring real-time
rendering capabilities.
| [
{
"created": "Mon, 5 Feb 2024 18:03:53 GMT",
"version": "v1"
},
{
"created": "Sun, 25 Feb 2024 17:44:22 GMT",
"version": "v2"
},
{
"created": "Sat, 2 Mar 2024 13:49:10 GMT",
"version": "v3"
},
{
"created": "Wed, 13 Mar 2024 07:55:38 GMT",
"version": "v4"
},
{
"created": "Tue, 26 Mar 2024 12:35:03 GMT",
"version": "v5"
}
] | 2024-07-08 | [
[
"Li",
"Mingrui",
""
],
[
"Liu",
"Shuhong",
""
],
[
"Zhou",
"Heng",
""
],
[
"Zhu",
"Guohao",
""
],
[
"Cheng",
"Na",
""
],
[
"Deng",
"Tianchen",
""
],
[
"Wang",
"Hongyu",
""
]
] |
2402.03337 | Luis Marti | Eduardo Charles Vasconcellos (UFF), Ronald M Sampaio, Andr\'e P D
Ara\'ujo (UFF), Esteban Walter Gonzales Clua, Philippe Preux (SEQUEL, GRAppA
- LIFL), Raphael Guerra, Luiz M G Gon\c{c}alves (UFRN), Luis Mart\'i, Hernan
Lira, Nayat Sanchez-Pi | Reinforcement-learning robotic sailboats: simulator and preliminary
results | null | NeurIPS 2023 Workshop on Robot Learning Workshop: Pretraining,
Fine-Tuning, and Generalization with Large Scale Models, Dec 2023, New
Orleans, United States | null | null | cs.RO cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work focuses on the main challenges and problems in developing a virtual
oceanic environment reproducing real experiments using Unmanned Surface
Vehicles (USV) digital twins. We introduce the key features for building
virtual worlds, considering using Reinforcement Learning (RL) agents for
autonomous navigation and control. With this in mind, the main problems concern
the definition of the simulation equations (physics and mathematics), their
effective implementation, and how to include strategies for simulated control
and perception (sensors) to be used with RL. We present the modeling,
implementation steps, and challenges required to create a functional digital
twin based on a real robotic sailing vessel. The application is immediate for
developing navigation algorithms based on RL to be applied on real boats.
| [
{
"created": "Tue, 16 Jan 2024 09:04:05 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Vasconcellos",
"Eduardo Charles",
"",
"UFF"
],
[
"Sampaio",
"Ronald M",
"",
"UFF"
],
[
"Araújo",
"André P D",
"",
"UFF"
],
[
"Clua",
"Esteban Walter Gonzales",
"",
"SEQUEL, GRAppA\n - LIFL"
],
[
"Preux",
"Philippe",
"",
"SEQUEL, GRAppA\n - LIFL"
],
[
"Guerra",
"Raphael",
"",
"UFRN"
],
[
"Gonçalves",
"Luiz M G",
"",
"UFRN"
],
[
"Martí",
"Luis",
""
],
[
"Lira",
"Hernan",
""
],
[
"Sanchez-Pi",
"Nayat",
""
]
] |
2402.03369 | Majbah Uddin | Majbah Uddin, Nathan Huynh, Jose M Vidal, Kevin M Taaffe, Lawrence D
Fredendall, and Joel S Greenstein | Evaluation of Google's Voice Recognition and Sentence Classification for
Health Care Applications | null | Engineering Management Journal, 27:3, 152-162, 2015 | 10.1080/10429247.2015.1054752 | null | eess.AS cs.CL cs.LG cs.SD | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This study examined the use of voice recognition technology in perioperative
services (Periop) to enable Periop staff to record workflow milestones using
mobile technology. The use of mobile technology to improve patient flow and
quality of care could be facilitated if such voice recognition technology could
be made robust. The goal of this experiment was to allow the Periop staff to
provide care without being interrupted with data entry and querying tasks.
However, the results are generalizable to other situations where an engineering
manager attempts to improve communication performance using mobile technology.
This study enhanced Google's voice recognition capability by using
post-processing classifiers (i.e., bag-of-sentences, support vector machine,
and maximum entropy). The experiments investigated three factors (original
phrasing, reduced phrasing, and personalized phrasing) at three levels (zero
training repetition, 5 training repetitions, and 10 training repetitions).
Results indicated that personal phrasing yielded the highest correctness and
that training the device to recognize an individual's voice improved
correctness as well. Although simplistic, the bag-of-sentences classifier
significantly improved voice recognition correctness. The classification
efficiency of the maximum entropy and support vector machine algorithms was
found to be nearly identical. These results suggest that engineering managers
could significantly enhance Google's voice recognition technology by using
post-processing techniques, which would facilitate its use in health care and
other applications.
| [
{
"created": "Fri, 2 Feb 2024 03:13:09 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Uddin",
"Majbah",
""
],
[
"Huynh",
"Nathan",
""
],
[
"Vidal",
"Jose M",
""
],
[
"Taaffe",
"Kevin M",
""
],
[
"Fredendall",
"Lawrence D",
""
],
[
"Greenstein",
"Joel S",
""
]
] |
2402.03370 | Cyril Labbe | El\'ena Martel (SIGMA, LIG), Martin Lentschat (SIGMA, GETALP), Cyril
Labb\'e (LIG, SIGMA ) | Detection of tortured phrases in scientific literature | null | Proceedings of the 2nd Workshop on Information Extraction from
Scientific Publications, Nov 2023, Bali, Indonesia | null | null | cs.IR cs.AI cs.CL cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents various automatic detection methods to extract so-called
tortured phrases from scientific papers. These tortured phrases, e.g. flag to
clamor instead of signal to noise, are the results of paraphrasing tools used
to escape plagiarism detection. We built a dataset and evaluated several
strategies to flag previously undocumented tortured phrases. The proposed and
tested methods are based on language models and either on embedding
similarities or on predictions of masked tokens. We found that an approach using
token prediction and that propagates the scores to the chunk level gives the
best results. With a recall value of .87 and a precision value of .61, it could
retrieve new tortured phrases to be submitted to domain experts for validation.
| [
{
"created": "Fri, 2 Feb 2024 08:15:43 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Martel",
"Eléna",
"",
"SIGMA, LIG"
],
[
"Lentschat",
"Martin",
"",
"SIGMA, GETALP"
],
[
"Labbé",
"Cyril",
"",
"LIG, SIGMA"
]
] |
2402.03384 | Jos\'e Alberto Ben\'itez-Andrades Ph.D. | Santiago Valbuena Rubio, Mar\'ia Teresa Garc\'ia-Ord\'as, Oscar
Garc\'ia-Olalla Olivera, H\'ector Alaiz-Moret\'on, Maria-Inmaculada
Gonz\'alez-Alonso and Jos\'e Alberto Ben\'itez-Andrades | Survival and grade of the glioma prediction using transfer learning | null | PeerJ Computer Science, Volume 9, December 2023, ID e1723 | 10.7717/peerj-cs.1723 | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Glioblastoma is a highly malignant brain tumor with a life expectancy of only
3 to 6 months without treatment. Detecting and predicting its survival and
grade accurately are crucial. This study introduces a novel approach using
transfer learning techniques. Various pre-trained networks, including
EfficientNet, ResNet, VGG16, and Inception, were tested through exhaustive
optimization to identify the most suitable architecture. Transfer learning was
applied to fine-tune these models on a glioblastoma image dataset, aiming to
achieve two objectives: survival and tumor grade prediction. The experimental
results show 65% accuracy in survival prediction, classifying patients into
short, medium, or long survival categories. Additionally, the prediction of
tumor grade achieved an accuracy of 97%, accurately differentiating low-grade
gliomas (LGG) and high-grade gliomas (HGG). The success of the approach is
attributed to the effectiveness of transfer learning, surpassing the current
state-of-the-art methods. In conclusion, this study presents a promising method
for predicting the survival and grade of glioblastoma. Transfer learning
demonstrates its potential in enhancing prediction models, particularly in
scenarios with limited large datasets. These findings hold promise for
improving diagnostic and treatment approaches for glioblastoma patients.
| [
{
"created": "Sun, 4 Feb 2024 09:07:07 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Rubio",
"Santiago Valbuena",
""
],
[
"García-Ordás",
"María Teresa",
""
],
[
"Olivera",
"Oscar García-Olalla",
""
],
[
"Alaiz-Moretón",
"Héctor",
""
],
[
"González-Alonso",
"Maria-Inmaculada",
""
],
[
"Benítez-Andrades",
"José Alberto",
""
]
] |
2402.03386 | Jos\'e Alberto Ben\'itez-Andrades Ph.D. | \'Angel Delgado-Panadero, Jos\'e Alberto Ben\'itez-Andrades and
Mar\'ia Teresa Garc\'ia-Ord\'as | A generalized decision tree ensemble based on the NeuralNetworks
architecture: Distributed Gradient Boosting Forest (DGBF) | null | Applied Intelligence, Volume 53, July 2023, pages 22991-23003 | 10.1007/s10489-023-04735-w | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Tree ensemble algorithms such as RandomForest and GradientBoosting are currently
the dominant methods for modeling discrete or tabular data; however, they are
unable to perform a hierarchical representation learning from raw data as
NeuralNetworks does thanks to its multi-layered structure, which is a key
feature for DeepLearning problems and modeling unstructured data. This
limitation is due to the fact that tree algorithms can not be trained with
back-propagation because of their mathematical nature. However, in this work,
we demonstrate that the mathematical formulation of bagging and boosting can be
combined together to define a graph-structured-tree-ensemble algorithm with a
distributed representation learning process between trees naturally (without
using back-propagation). We call this novel approach Distributed Gradient
Boosting Forest (DGBF) and we demonstrate that both RandomForest and
GradientBoosting can be expressed as particular graph architectures of DGBF.
Finally, we see that the distributed learning outperforms both RandomForest and
GradientBoosting in 7 out of 9 datasets.
| [
{
"created": "Sun, 4 Feb 2024 09:22:52 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Delgado-Panadero",
"Ángel",
""
],
[
"Benítez-Andrades",
"José Alberto",
""
],
[
"García-Ordás",
"María Teresa",
""
]
] |
2402.03473 | Xiaodan Xing | Xiaodan Xing, Huiyu Zhou, Yingying Fang, and Guang Yang | Assessing the Efficacy of Invisible Watermarks in AI-Generated Medical
Images | 5 pages | ISBI 2024 | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | AI-generated medical images are gaining growing popularity due to their
potential to address the data scarcity challenge in the real world. However,
the issue of accurate identification of these synthetic images, particularly
when they exhibit remarkable realism with their real copies, remains a concern.
To mitigate this challenge, image generators such as DALLE and Imagen have
integrated digital watermarks aimed at facilitating the discernment of
synthetic images' authenticity. These watermarks are embedded within the image
pixels and are invisible to the human eye while remaining detectable.
Nevertheless, a comprehensive investigation into the potential impact of these
invisible watermarks on the utility of synthetic medical images has been
lacking. In this study, we propose the incorporation of invisible watermarks
into synthetic medical images and seek to evaluate their efficacy in the
context of downstream classification tasks. Our goal is to pave the way for
discussions on the viability of such watermarks in boosting the detectability
of synthetic medical images, fortifying ethical standards, and safeguarding
against data pollution and potential scams.
| [
{
"created": "Mon, 5 Feb 2024 19:32:10 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Feb 2024 10:30:53 GMT",
"version": "v2"
},
{
"created": "Tue, 21 May 2024 13:01:59 GMT",
"version": "v3"
}
] | 2024-05-22 | [
[
"Xing",
"Xiaodan",
""
],
[
"Zhou",
"Huiyu",
""
],
[
"Fang",
"Yingying",
""
],
[
"Yang",
"Guang",
""
]
] |
2402.03654 | Ricardo De Deijn | Ricardo de Deijn, Aishwarya Batra, Brandon Koch, Naseef Mansoor, Hema
Makkena | Reviewing FID and SID Metrics on Generative Adversarial Networks | 14 pages 9 figures 1 table Included in IOTBS, NLTM, AIMLA, DBDM -
2024 Conference Proceedings Editor: David C. Wyld et al | CS & IT - CSCP (2024) 111-124 | 10.5121/csit.2024.140208 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-sa/4.0/ | The growth of generative adversarial network (GAN) models has increased the
ability of image processing and provides numerous industries with the
technology to produce realistic image transformations. However, with the field
being recently established there are new evaluation metrics that can further
this research. Previous research has shown the Fr\'echet Inception Distance
(FID) to be an effective metric when testing these image-to-image GANs in
real-world applications. Signed Inception Distance (SID), a metric introduced
in 2023, expands on FID by allowing unsigned distances. This paper uses public
datasets that consist of fa\c{c}ades, cityscapes, and maps within Pix2Pix and
CycleGAN models. After training these models are evaluated on both inception
distance metrics which measure the generating performance of the trained
models. Our findings indicate that SID serves as an efficient and effective
metric that can complement, or even exceed, the ability shown using FID for
image-to-image GANs.
| [
{
"created": "Tue, 6 Feb 2024 03:02:39 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"de Deijn",
"Ricardo",
""
],
[
"Batra",
"Aishwarya",
""
],
[
"Koch",
"Brandon",
""
],
[
"Mansoor",
"Naseef",
""
],
[
"Makkena",
"Hema",
""
]
] |
2402.03728 | Hossein Rajaby Faghihi | Hossein Rajaby Faghihi and Parisa Kordjamshidi | Consistent Joint Decision-Making with Heterogeneous Learning Models | EACL 2024 Findings - Short Paper | EACL 2024 | null | null | cs.AI cs.CL cs.LG cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a novel decision-making framework that promotes
consistency among decisions made by diverse models while utilizing external
knowledge. Leveraging the Integer Linear Programming (ILP) framework, we map
predictions from various models into globally normalized and comparable values
by incorporating information about decisions' prior probability, confidence
(uncertainty), and the models' expected accuracy. Our empirical study
demonstrates the superiority of our approach over conventional baselines on
multiple datasets.
| [
{
"created": "Tue, 6 Feb 2024 05:50:04 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Faghihi",
"Hossein Rajaby",
""
],
[
"Kordjamshidi",
"Parisa",
""
]
] |
2402.03732 | Feng Xia | Huiling Tu, Shuo Yu, Vidya Saikrishna, Feng Xia, Karin Verspoor | Deep Outdated Fact Detection in Knowledge Graphs | 10 pages, 6 figures | 2023 IEEE International Conference on Data Mining Workshops
(ICDMW), December 1-4, 2023, Shanghai, China | 10.1109/ICDMW60847.2023.00184 | null | cs.AI cs.CL cs.DL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Knowledge graphs (KGs) have garnered significant attention for their vast
potential across diverse domains. However, the issue of outdated facts poses a
challenge to KGs, affecting their overall quality as real-world information
evolves. Existing solutions for outdated fact detection often rely on manual
recognition. In response, this paper presents DEAN (Deep outdatEd fAct
detectioN), a novel deep learning-based framework designed to identify outdated
facts within KGs. DEAN distinguishes itself by capturing implicit structural
information among facts through comprehensive modeling of both entities and
relations. To effectively uncover latent out-of-date information, DEAN employs
a contrastive approach based on a pre-defined Relations-to-Nodes (R2N) graph,
weighted by the number of entities. Experimental results demonstrate the
effectiveness and superiority of DEAN over state-of-the-art baseline methods.
| [
{
"created": "Tue, 6 Feb 2024 05:58:15 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Tu",
"Huiling",
""
],
[
"Yu",
"Shuo",
""
],
[
"Saikrishna",
"Vidya",
""
],
[
"Xia",
"Feng",
""
],
[
"Verspoor",
"Karin",
""
]
] |
2402.03750 | Feng Xia | Xin Chen, Mingliang Hou, Tao Tang, Achhardeep Kaur and Feng Xia | Digital Twin Mobility Profiling: A Spatio-Temporal Graph Learning
Approach | 10 pages, 7 figures | The 7th IEEE International Conference on Data Science and Systems
(DSS), Dec 20 - 22, 2021, Haikou, China | 10.1109/HPCC-DSS-SmartCity-DependSys53884.2021.00182 | null | cs.LG cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | With the arrival of the big data era, mobility profiling has become a viable
method of utilizing enormous amounts of mobility data to create an intelligent
transportation system. Mobility profiling can extract potential patterns in
urban traffic from mobility data and is critical for a variety of
traffic-related applications. However, due to the high level of complexity and
the huge amount of data, mobility profiling faces huge challenges. Digital Twin
(DT) technology paves the way for cost-effective and performance-optimised
management by digitally creating a virtual representation of the network to
simulate its behaviour. In order to capture the complex spatio-temporal
features in traffic scenarios, we construct alignment diagrams to assist in
completing the spatio-temporal correlation representation and design dilated
alignment convolution network (DACN) to learn the fine-grained correlations,
i.e., spatio-temporal interactions. We propose a digital twin mobility
profiling (DTMP) framework to learn node profiles on a mobility network DT
model. Extensive experiments have been conducted upon three real-world
datasets. Experimental results demonstrate the effectiveness of DTMP.
| [
{
"created": "Tue, 6 Feb 2024 06:37:43 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Chen",
"Xin",
""
],
[
"Hou",
"Mingliang",
""
],
[
"Tang",
"Tao",
""
],
[
"Kaur",
"Achhardeep",
""
],
[
"Xia",
"Feng",
""
]
] |
2402.03758 | Mingyue Guo | Mingyue Guo, Binghui Chen, Zhaoyi Yan, Yaowei Wang, Qixiang Ye | Virtual Classification: Modulating Domain-Specific Knowledge for
Multidomain Crowd Counting | Multidomain learning; Domain-guided virtual classifier;
Instance-specific batch normalization | IEEE Transactions on Neural Networks and Learning Systems,2024 | 10.1109/TNNLS.2024.3350363 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multidomain crowd counting aims to learn a general model for multiple diverse
datasets. However, deep networks prefer modeling distributions of the dominant
domains instead of all domains, which is known as domain bias. In this study,
we propose a simple-yet-effective Modulating Domain-specific Knowledge Network
(MDKNet) to handle the domain bias issue in multidomain crowd counting. MDKNet
is achieved by employing the idea of `modulating', enabling deep network
balancing and modeling different distributions of diverse datasets with little
bias. Specifically, we propose an Instance-specific Batch Normalization (IsBN)
module, which serves as a base modulator to refine the information flow to be
adaptive to domain distributions. To precisely modulate the domain-specific
information, the Domain-guided Virtual Classifier (DVC) is then introduced to
learn a domain-separable latent space. This space is employed as an input
guidance for the IsBN modulator, such that the mixture distributions of
multiple datasets can be well treated. Extensive experiments performed on
popular benchmarks, including Shanghai-tech A/B, QNRF and NWPU, validate the
superiority of MDKNet in tackling multidomain crowd counting and the
effectiveness for multidomain learning. Code is available at
\url{https://github.com/csguomy/MDKNet}.
| [
{
"created": "Tue, 6 Feb 2024 06:49:04 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Guo",
"Mingyue",
""
],
[
"Chen",
"Binghui",
""
],
[
"Yan",
"Zhaoyi",
""
],
[
"Wang",
"Yaowei",
""
],
[
"Ye",
"Qixiang",
""
]
] |
2402.03824 | Giuseppe Paolo Dr | Giuseppe Paolo, Jonas Gonzalez-Billandon, Bal\'azs K\'egl | A call for embodied AI | Published in ICML 2024 Position paper track | PMLR 235:39493-39508, 2024 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose Embodied AI as the next fundamental step in the pursuit of
Artificial General Intelligence, juxtaposing it against current AI
advancements, particularly Large Language Models. We traverse the evolution of
the embodiment concept across diverse fields - philosophy, psychology,
neuroscience, and robotics - to highlight how EAI distinguishes itself from the
classical paradigm of static learning. By broadening the scope of Embodied AI,
we introduce a theoretical framework based on cognitive architectures,
emphasizing perception, action, memory, and learning as essential components of
an embodied agent. This framework is aligned with Friston's active inference
principle, offering a comprehensive approach to EAI development. Despite the
progress made in the field of AI, substantial challenges, such as the
formulation of a novel AI learning theory and the innovation of advanced
hardware, persist. Our discussion lays down a foundational guideline for future
Embodied AI research. Highlighting the importance of creating Embodied AI
agents capable of seamless communication, collaboration, and coexistence with
humans and other intelligent entities within real-world environments, we aim to
steer the AI community towards addressing the multifaceted challenges and
seizing the opportunities that lie ahead in the quest for AGI.
| [
{
"created": "Tue, 6 Feb 2024 09:11:20 GMT",
"version": "v1"
},
{
"created": "Tue, 28 May 2024 15:07:37 GMT",
"version": "v2"
},
{
"created": "Thu, 18 Jul 2024 14:06:13 GMT",
"version": "v3"
},
{
"created": "Fri, 13 Sep 2024 13:36:05 GMT",
"version": "v4"
}
] | 2024-09-16 | [
[
"Paolo",
"Giuseppe",
""
],
[
"Gonzalez-Billandon",
"Jonas",
""
],
[
"Kégl",
"Balázs",
""
]
] |
2402.03948 | V\'ictor M. S\'anchez-Cartagena | Juan Ram\'on Rico-Juan, V\'ictor M. S\'anchez-Cartagena, Jose J.
Valero-Mas, Antonio Javier Gallego | Identifying Student Profiles Within Online Judge Systems Using
Explainable Artificial Intelligence | null | IEEE Transactions on Learning Technologies ( Volume: 16, Issue: 6,
December 2023) | 10.1109/TLT.2023.3239110 | null | cs.CY cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online Judge (OJ) systems are typically considered within programming-related
courses as they yield fast and objective assessments of the code developed by
the students. Such an evaluation generally provides a single decision based on
a rubric, most commonly whether the submission successfully accomplished the
assignment. Nevertheless, since in an educational context such information may
be deemed insufficient, it would be beneficial for both the student and the
instructor to receive additional feedback about the overall development of the
task. This work aims to tackle this limitation by considering the further
exploitation of the information gathered by the OJ and automatically inferring
feedback for both the student and the instructor. More precisely, we consider
the use of learning-based schemes -- particularly, multi-instance learning
(MIL) and classical machine learning formulations -- to model student behavior.
Besides, explainable artificial intelligence (XAI) is contemplated to provide
human-understandable feedback. The proposal has been evaluated considering a
case of study comprising 2500 submissions from roughly 90 different students
from a programming-related course in a computer science degree. The results
obtained validate the proposal: The model is capable of significantly
predicting the user outcome (either passing or failing the assignment) solely
based on the behavioral pattern inferred by the submissions provided to the OJ.
Moreover, the proposal is able to identify prone-to-fail student groups and
profiles as well as other relevant information, which eventually serves as
feedback to both the student and the instructor.
| [
{
"created": "Mon, 29 Jan 2024 12:11:30 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Rico-Juan",
"Juan Ramón",
""
],
[
"Sánchez-Cartagena",
"Víctor M.",
""
],
[
"Valero-Mas",
"Jose J.",
""
],
[
"Gallego",
"Antonio Javier",
""
]
] |
2402.03989 | Anton Backhaus | Anton Backhaus, Thorsten Luettel, Hans-Joachim Wuensche | YOLOPoint Joint Keypoint and Object Detection | 12 pages, 5 figures | Proceedings of Advanced Concepts for Intelligent Vision Systems,
14124, 112-123 (2023) | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Intelligent vehicles of the future must be capable of understanding and
navigating safely through their surroundings. Camera-based vehicle systems can
use keypoints as well as objects as low- and high-level landmarks for
GNSS-independent SLAM and visual odometry. To this end we propose YOLOPoint, a
convolutional neural network model that simultaneously detects keypoints and
objects in an image by combining YOLOv5 and SuperPoint to create a single
forward-pass network that is both real-time capable and accurate. By using a
shared backbone and a light-weight network structure, YOLOPoint is able to
perform competitively on both the HPatches and KITTI benchmarks.
| [
{
"created": "Tue, 6 Feb 2024 13:31:45 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Backhaus",
"Anton",
""
],
[
"Luettel",
"Thorsten",
""
],
[
"Wuensche",
"Hans-Joachim",
""
]
] |
2402.04082 | Bayode Ogunleye | Hemlata Sharma, Hitesh Harsora, Bayode Ogunleye | An Optimal House Price Prediction Algorithm: XGBoost | 16 pages, Journal of Analytics | Analytics, 3(1), 30-45 (2024) | 10.3390/analytics3010003 | null | cs.LG cs.AI stat.AP stat.ME | http://creativecommons.org/licenses/by/4.0/ | An accurate prediction of house prices is a fundamental requirement for
various sectors including real estate and mortgage lending. It is widely
recognized that a property value is not solely determined by its physical
attributes but is significantly influenced by its surrounding neighbourhood.
Meeting the diverse housing needs of individuals while balancing budget
constraints is a primary concern for real estate developers. To this end, we
addressed the house price prediction problem as a regression task and thus
employed various machine learning techniques capable of expressing the
significance of independent variables. We made use of the housing dataset of
Ames City in Iowa, USA to compare support vector regressor, random forest
regressor, XGBoost, multilayer perceptron and multiple linear regression
algorithms for house price prediction. Afterwards, we identified the key
factors that influence housing costs. Our results show that XGBoost is the best
performing model for house price prediction.
| [
{
"created": "Tue, 6 Feb 2024 15:36:06 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Sharma",
"Hemlata",
""
],
[
"Harsora",
"Hitesh",
""
],
[
"Ogunleye",
"Bayode",
""
]
] |
2402.04088 | Bayode Ogunleye | Bayode Ogunleye, Babitha Dharmaraj | The Use of a Large Language Model for Cyberbullying Detection | 14 pages, Journal of Analytics | Analytics 2 (2023), no. 3: 694-707 | 10.3390/analytics2030038 | null | cs.CL cs.AI cs.LG stat.AP | http://creativecommons.org/licenses/by/4.0/ | The dominance of social media has added to the channels of bullying for
perpetrators. Unfortunately, cyberbullying (CB) is the most prevalent
phenomenon in today's cyber world, and is a severe threat to the mental and
physical health of citizens. This opens the need to develop a robust system to
prevent bullying content from online forums, blogs, and social media platforms
to manage the impact in our society. Several machine learning (ML) algorithms
have been proposed for this purpose. However, their performances are not
consistent due to high class imbalance and generalisation issues. In recent
years, large language models (LLMs) like BERT and RoBERTa have achieved
state-of-the-art (SOTA) results in several natural language processing (NLP)
tasks. Unfortunately, the LLMs have not been applied extensively for CB
detection. In our paper, we explored the use of these models for cyberbullying
(CB) detection. We have prepared a new dataset (D2) from existing studies
(Formspring and Twitter). Our experimental results for dataset D1 and D2 showed
that RoBERTa outperformed other models.
| [
{
"created": "Tue, 6 Feb 2024 15:46:31 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Ogunleye",
"Bayode",
""
],
[
"Dharmaraj",
"Babitha",
""
]
] |
2402.04103 | Bayode Ogunleye | Jeen Mary John, Olamilekan Shobayo, Bayode Ogunleye | An Exploration of Clustering Algorithms for Customer Segmentation in the
UK Retail Market | 15 pages, Journal of Analytics | Analytics, 2(4), 809-823 (2023) | 10.3390/analytics2040042 | null | cs.LG cs.AI stat.AP stat.CO | http://creativecommons.org/licenses/by/4.0/ | Recently, people's awareness of online purchases has risen significantly. This
has given rise to online retail platforms and the need for a better
understanding of customer purchasing behaviour. Retail companies are pressed
with the need to deal with a high volume of customer purchases, which requires
sophisticated approaches to perform more accurate and efficient customer
segmentation. Customer segmentation is a marketing analytical tool that aids
customer-centric service and thus enhances profitability. In this paper, we aim
to develop a customer segmentation model to improve decision-making processes
in the retail market industry. To achieve this, we employed a UK-based online
retail dataset obtained from the UCI machine learning repository. The retail
dataset consists of 541,909 customer records and eight features. Our study
adopted the RFM (recency, frequency, and monetary) framework to quantify
customer values. Thereafter, we compared several state-of-the-art (SOTA)
clustering algorithms, namely, K-means clustering, the Gaussian mixture model
(GMM), density-based spatial clustering of applications with noise (DBSCAN),
agglomerative clustering, and balanced iterative reducing and clustering using
hierarchies (BIRCH). The results showed the GMM outperformed other approaches,
with a Silhouette Score of 0.80.
| [
{
"created": "Tue, 6 Feb 2024 15:58:14 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"John",
"Jeen Mary",
""
],
[
"Shobayo",
"Olamilekan",
""
],
[
"Ogunleye",
"Bayode",
""
]
] |
2402.04465 | Jos\'e Miguel Buenaposada | Antonio Fern\'andez-Baldera, Jos\'e M. Buenaposada, Luis Baumela | BAdaCost: Multi-class Boosting with Costs | null | Pattern Recognition. Volume 79, July 2018, Pages 467-479 | 10.1016/j.patcog.2018.02.022 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present BAdaCost, a multi-class cost-sensitive classification algorithm.
It combines a set of cost-sensitive multi-class weak learners to obtain a
strong classification rule within the Boosting framework. To derive the
algorithm we introduce CMEL, a Cost-sensitive Multi-class Exponential Loss that
generalizes the losses optimized in various classification algorithms such as
AdaBoost, SAMME, Cost-sensitive AdaBoost and PIBoost, hence unifying them under
a common theoretical framework. In the experiments performed we prove that
BAdaCost achieves significant gains in performance when compared to previous
multi-class cost-sensitive approaches. The advantages of the proposed algorithm
in asymmetric multi-class classification are also evaluated in practical
multi-view face and car detection problems.
| [
{
"created": "Tue, 6 Feb 2024 23:18:29 GMT",
"version": "v1"
}
] | 2024-02-08 | [
[
"Fernández-Baldera",
"Antonio",
""
],
[
"Buenaposada",
"José M.",
""
],
[
"Baumela",
"Luis",
""
]
] |
2402.04482 | Jos\'e Miguel Buenaposada | Iago Su\'arez, Ghesn Sfeir, Jos\'e M. Buenaposada, Luis Baumela | BEBLID: Boosted efficient binary local image descriptor | null | Pattern Recognition Letters. Volume 133, May 2020, Pages 366-372 | 10.1016/j.patrec.2020.04.005 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Efficient matching of local image features is a fundamental task in many
computer vision applications. However, the real-time performance of top
matching algorithms is compromised in computationally limited devices, such as
mobile phones or drones, due to the simplicity of their hardware and their
finite energy supply. In this paper we introduce BEBLID, an efficient learned
binary image descriptor. It improves our previous real-valued descriptor,
BELID, making it both more efficient for matching and more accurate. To this
end we use AdaBoost with an improved weak-learner training scheme that produces
better local descriptions. Further, we binarize our descriptor by forcing all
weak-learners to have the same weight in the strong learner combination and
train it in an unbalanced data set to address the asymmetries arising in
matching and retrieval tasks. In our experiments BEBLID achieves an accuracy
close to SIFT and better computational efficiency than ORB, the fastest
algorithm in the literature.
| [
{
"created": "Wed, 7 Feb 2024 00:14:32 GMT",
"version": "v1"
}
] | 2024-02-08 | [
[
"Suárez",
"Iago",
""
],
[
"Sfeir",
"Ghesn",
""
],
[
"Buenaposada",
"José M.",
""
],
[
"Baumela",
"Luis",
""
]
] |
2402.04505 | EPTCS | Kin Ian Lo, Mehrnoosh Sadrzadeh, Shane Mansfield | Developments in Sheaf-Theoretic Models of Natural Language Ambiguities | In Proceedings DCM 2023, arXiv:2409.19298 | EPTCS 408, 2024, pp. 62-72 | 10.4204/EPTCS.408.4 | null | cs.CL quant-ph | http://creativecommons.org/licenses/by/4.0/ | Sheaves are mathematical objects consisting of a base which constitutes a
topological space and the data associated with each open set thereof, e.g.
continuous functions defined on the open sets. Sheaves have originally been
used in algebraic topology and logic. Recently, they have also modelled events
such as physical experiments and natural language disambiguation processes. We
extend the latter models from lexical ambiguities to discourse ambiguities
arising from anaphora. To begin, we calculated a new measure of contextuality
for a dataset of basic anaphoric discourses, resulting in a higher proportion
of contextual models (82.9%) compared to previous work, which yielded only
3.17% contextual models. Then, we show how an extension of the natural language
processing challenge, known as the Winograd Schema, which involves anaphoric
ambiguities can be modelled on the Bell-CHSH scenario with a contextual
fraction of 0.096.
| [
{
"created": "Wed, 7 Feb 2024 01:18:55 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Oct 2024 09:54:00 GMT",
"version": "v2"
}
] | 2024-10-02 | [
[
"Lo",
"Kin Ian",
""
],
[
"Sadrzadeh",
"Mehrnoosh",
""
],
[
"Mansfield",
"Shane",
""
]
] |
2402.04519 | Shiyu Hu | Xin Zhao and Shiyu Hu and Yipei Wang and Jing Zhang and Yimin Hu and
Rongshuai Liu and Haibin Ling and Yin Li and Renshu Li and Kun Liu and
Jiadong Li | BioDrone: A Bionic Drone-based Single Object Tracking Benchmark for
Robust Vision | This paper is published in IJCV (refer to DOI). Please cite the
published IJCV version | Int J Comput Vis (2023) | 10.1007/s11263-023-01937-0 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Single object tracking (SOT) is a fundamental problem in computer vision,
with a wide range of applications, including autonomous driving, augmented
reality, and robot navigation. The robustness of SOT faces two main challenges:
tiny target and fast motion. These challenges are especially manifested in
videos captured by unmanned aerial vehicles (UAV), where the target is usually
far away from the camera and often with significant motion relative to the
camera. To evaluate the robustness of SOT methods, we propose BioDrone -- the
first bionic drone-based visual benchmark for SOT. Unlike existing UAV
datasets, BioDrone features videos captured from a flapping-wing UAV system
with a major camera shake due to its aerodynamics. BioDrone hence highlights
the tracking of tiny targets with drastic changes between consecutive frames,
providing a new robust vision benchmark for SOT. To date, BioDrone offers the
largest UAV-based SOT benchmark with high-quality fine-grained manual
annotations and automatically generates frame-level labels, designed for robust
vision analyses. Leveraging our proposed BioDrone, we conduct a systematic
evaluation of existing SOT methods, comparing the performance of 20
representative models and studying novel means of optimizing a SOTA method
(KeepTrack) for robust SOT. Our evaluation leads to new baselines and
insights for robust SOT. Moving forward, we hope that BioDrone will not only
serve as a high-quality benchmark for robust SOT, but also invite future
research into robust computer vision. The database, toolkits, evaluation
server, and baseline results are available at http://biodrone.aitestunion.com.
| [
{
"created": "Wed, 7 Feb 2024 01:57:56 GMT",
"version": "v1"
}
] | 2024-02-08 | [
[
"Zhao",
"Xin",
""
],
[
"Hu",
"Shiyu",
""
],
[
"Wang",
"Yipei",
""
],
[
"Zhang",
"Jing",
""
],
[
"Hu",
"Yimin",
""
],
[
"Liu",
"Rongshuai",
""
],
[
"Ling",
"Haibin",
""
],
[
"Li",
"Yin",
""
],
[
"Li",
"Renshu",
""
],
[
"Liu",
"Kun",
""
],
[
"Li",
"Jiadong",
""
]
] |
2402.04539 | Guojian Wang | Guojian Wang, Faguo Wu, Xiao Zhang, Jianxiang Liu | Learning Diverse Policies with Soft Self-Generated Guidance | 23 pages, 19 figures | International Journal of Intelligent Systems, Volume 2023 | 10.1155/2023/4705291 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning (RL) with sparse and deceptive rewards is challenging
because non-zero rewards are rarely obtained. Hence, the gradient calculated by
the agent can be stochastic and without valid information. Recent studies that
utilize memory buffers of previous experiences can lead to a more efficient
learning process. However, existing methods often require these experiences to
be successful and may overly exploit them, which can cause the agent to adopt
suboptimal behaviors. This paper develops an approach that uses diverse past
trajectories for faster and more efficient online RL, even if these
trajectories are suboptimal or not highly rewarded. The proposed algorithm
combines a policy improvement step with an additional exploration step using
offline demonstration data. The main contribution of this paper is that by
regarding diverse past trajectories as guidance, instead of imitating them, our
method directs its policy to follow and expand past trajectories while still
being able to learn without rewards and approach optimality. Furthermore, a
novel diversity measurement is introduced to maintain the team's diversity and
regulate exploration. The proposed algorithm is evaluated on discrete and
continuous control tasks with sparse and deceptive rewards. Compared with the
existing RL methods, the experimental results indicate that our proposed
algorithm is significantly better than the baseline methods regarding diverse
exploration and avoiding local optima.
| [
{
"created": "Wed, 7 Feb 2024 02:53:50 GMT",
"version": "v1"
}
] | 2024-02-08 | [
[
"Wang",
"Guojian",
""
],
[
"Wu",
"Faguo",
""
],
[
"Zhang",
"Xiao",
""
],
[
"Liu",
"Jianxiang",
""
]
] |
2402.04597 | Francisco Chicano | Javier Ferrer, Francisco Chicano, Jos\'e Antonio Ortega Toro | CMSA algorithm for solving the prioritized pairwise test data generation
problem in software product lines | Preprint of the submitted version of the article in Journal of
Heuristics | J. Heuristics 27(1-2): 229-249 (2021) | 10.1007/s10732-020-09462-w | null | cs.AI cs.SE | http://creativecommons.org/licenses/by/4.0/ | In Software Product Lines (SPLs) it may be difficult or even impossible to
test all the products of the family because of the large number of valid
feature combinations that may exist. Thus, we want to find a minimal subset of
the product family that allows us to test all these possible combinations
(pairwise). Furthermore, when testing a single product is a great effort, it is
desirable to first test products composed of a set of priority features. This
problem is called Prioritized Pairwise Test Data Generation Problem.
State-of-the-art algorithms based on Integer Linear Programming for this
problem are fast enough for small and medium instances. However, there
exist some real instances that are too large to be computed with these
algorithms in a reasonable time because of the exponential growth of the number
of candidate solutions. Also, these heuristics do not always lead to the best
solutions. In this work we propose a new approach based on a hybrid
metaheuristic algorithm called Construct, Merge, Solve & Adapt. We compare this
matheuristic with four algorithms: a Hybrid algorithm based on Integer Linear
Programming (HILP), a Hybrid algorithm based on Integer Nonlinear Programming
(HINLP), the Parallel Prioritized Genetic Solver (PPGS), and a greedy algorithm
called prioritized-ICPL. The analysis reveals that CMSA results in
statistically significantly better quality solutions in most instances and for
most levels of weighted coverage, although it requires more execution time.
| [
{
"created": "Wed, 7 Feb 2024 05:43:57 GMT",
"version": "v1"
}
] | 2024-02-08 | [
[
"Ferrer",
"Javier",
""
],
[
"Chicano",
"Francisco",
""
],
[
"Toro",
"José Antonio Ortega",
""
]
] |
2402.04841 | Jianyuan Guo | Jianyuan Guo, Zhiwei Hao, Chengcheng Wang, Yehui Tang, Han Wu, Han Hu,
Kai Han, Chang Xu | Data-efficient Large Vision Models through Sequential Autoregression | 15 pages | ICML 2024 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training general-purpose vision models on purely sequential visual data,
eschewing linguistic inputs, has heralded a new frontier in visual
understanding. These models are intended to not only comprehend but also
seamlessly transit to out-of-domain tasks. However, current endeavors are
hamstrung by an over-reliance on colossal models, exemplified by models with
upwards of 3B parameters, and the necessity for an extensive corpus of visual
data, often comprising a staggering 400B tokens. In this paper, we delve into
the development of an efficient, autoregression-based vision model,
innovatively architected to operate on a limited dataset. We meticulously
demonstrate how this model achieves proficiency in a spectrum of visual tasks
spanning both high-level and low-level semantic understanding during the
testing phase. Our empirical evaluations underscore the model's agility in
adapting to various tasks, heralding a significant reduction in the parameter
footprint, and a marked decrease in training data requirements, thereby paving
the way for more sustainable and accessible advancements in the field of
generalist vision models. The code is available at
https://github.com/ggjy/DeLVM.
| [
{
"created": "Wed, 7 Feb 2024 13:41:53 GMT",
"version": "v1"
}
] | 2024-06-07 | [
[
"Guo",
"Jianyuan",
""
],
[
"Hao",
"Zhiwei",
""
],
[
"Wang",
"Chengcheng",
""
],
[
"Tang",
"Yehui",
""
],
[
"Wu",
"Han",
""
],
[
"Hu",
"Han",
""
],
[
"Han",
"Kai",
""
],
[
"Xu",
"Chang",
""
]
] |
2402.04938 | Luis Costero | Jennifer Hern\'andez-B\'ecares, Luis Costero, Pedro Pablo
G\'omez-Mart\'in | An approach to automated videogame beta testing | null | Entertainment Computing, Elsevier. 18. pp 79 to 92. (2017) | 10.1016/j.entcom.2016.08.002 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Videogames developed in the 1970s and 1980s were modest programs created in a
couple of months by a single person, who played the roles of designer, artist
and programmer. Since then, videogames have evolved to become a multi-million
dollar industry. Today, AAA game development involves hundreds of people
working together over several years. Management and engineering requirements
have changed at the same pace. Although many of the processes have been adapted
over time, this is not quite true for quality assurance tasks, which are still
done mainly manually by human beta testers due to the specific peculiarities of
videogames. This paper presents an approach to automate this beta testing.
| [
{
"created": "Wed, 7 Feb 2024 15:16:21 GMT",
"version": "v1"
}
] | 2024-02-08 | [
[
"Hernández-Bécares",
"Jennifer",
""
],
[
"Costero",
"Luis",
""
],
[
"Gómez-Martín",
"Pedro Pablo",
""
]
] |
2402.04979 | Thomas P\"ollabauer | Thomas P\"ollabauer, Fabian R\"ucker, Andreas Franek, Felix
Gorschl\"uter | Detection and Pose Estimation of flat, Texture-less Industry Objects on
HoloLens using synthetic Training | Scandinavian Conference on Image Analysis 2023 | In Scandinavian Conference on Image Analysis 2023 (pp. 569-585).
Cham: Springer Nature Switzerland | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current state-of-the-art 6d pose estimation is too compute intensive to be
deployed on edge devices, such as Microsoft HoloLens (2) or Apple iPad, both
used for an increasing number of augmented reality applications. The quality of
AR is greatly dependent on its capabilities to detect and overlay geometry
within the scene. We propose a synthetically trained client-server-based
augmented reality application, demonstrating state-of-the-art object pose
estimation of metallic and texture-less industry objects on edge devices.
Synthetic data enables training without real photographs, i.e. for
yet-to-be-manufactured objects. Our qualitative evaluation on an AR-assisted
sorting task, and quantitative evaluation on both renderings, as well as
real-world data recorded on HoloLens 2, sheds light on its real-world
applicability.
| [
{
"created": "Wed, 7 Feb 2024 15:57:28 GMT",
"version": "v1"
}
] | 2024-02-08 | [
[
"Pöllabauer",
"Thomas",
""
],
[
"Rücker",
"Fabian",
""
],
[
"Franek",
"Andreas",
""
],
[
"Gorschlüter",
"Felix",
""
]
] |
2402.05149 | Janaka Chathuranga | Janaka Chathuranga Brahmanage, Jiajing Ling, Akshat Kumar | FlowPG: Action-constrained Policy Gradient with Normalizing Flows | null | Thirty-seventh Conference on Neural Information Processing
Systems. 2023 | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Action-constrained reinforcement learning (ACRL) is a popular approach for
solving safety-critical and resource-allocation related decision making
problems. A major challenge in ACRL is to ensure agent taking a valid action
satisfying constraints in each RL step. Commonly used approach of using a
projection layer on top of the policy network requires solving an optimization
program which can result in longer training time, slow convergence, and zero
gradient problem. To address this, first we use a normalizing flow model to
learn an invertible, differentiable mapping between the feasible action space
and the support of a simple distribution on a latent variable, such as
Gaussian. Second, learning the flow model requires sampling from the feasible
action space, which is also challenging. We develop multiple methods, based on
Hamiltonian Monte-Carlo and probabilistic sentential decision diagrams for such
action sampling for convex and non-convex constraints. Third, we integrate the
learned normalizing flow with the DDPG algorithm. By design, a well-trained
normalizing flow will transform policy output into a valid action without
requiring an optimization solver. Empirically, our approach results in
significantly fewer constraint violations (up to an order of magnitude for
several instances) and is multiple times faster on a variety of continuous
control tasks.
| [
{
"created": "Wed, 7 Feb 2024 11:11:46 GMT",
"version": "v1"
}
] | 2024-02-09 | [
[
"Brahmanage",
"Janaka Chathuranga",
""
],
[
"Ling",
"Jiajing",
""
],
[
"Kumar",
"Akshat",
""
]
] |