id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---
2405.10957 | Lucas Böttcher | Lucas Böttcher and Gregory Wheeler | Statistical Mechanics and Artificial Neural Networks: Principles,
Models, and Applications | 45 pages, 12 figures. arXiv admin note: text overlap with
arXiv:2208.13219 | Order, Disorder and Criticality, pp. 117-161 (2024) | 10.1142/9789819800827_0003 | null | cond-mat.dis-nn cond-mat.stat-mech cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The field of neuroscience and the development of artificial neural networks
(ANNs) have mutually influenced each other, drawing from and contributing to
many concepts initially developed in statistical mechanics. Notably, Hopfield
networks and Boltzmann machines are versions of the Ising model, a model
extensively studied in statistical mechanics for over a century. In the first
part of this chapter, we provide an overview of the principles, models, and
applications of ANNs, highlighting their connections to statistical mechanics
and statistical learning theory.
Artificial neural networks can be seen as high-dimensional mathematical
functions, and understanding the geometric properties of their loss landscapes
(i.e., the high-dimensional space on which one wishes to find extrema or
saddles) can provide valuable insights into their optimization behavior,
generalization abilities, and overall performance. Visualizing these functions
can help us design better optimization methods and improve their generalization
abilities. Thus, the second part of this chapter focuses on quantifying
geometric properties and visualizing loss functions associated with deep ANNs.
| [
{
"created": "Fri, 5 Apr 2024 13:54:58 GMT",
"version": "v1"
}
] | 2024-10-17 | [
[
"Böttcher",
"Lucas",
""
],
[
"Wheeler",
"Gregory",
""
]
] |
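To make the Ising connection in the abstract above concrete: a Hopfield network with Hebbian weights minimizes an energy of exactly the Ising form E(s) = -1/2 Σ_ij w_ij s_i s_j. The sketch below is illustrative (all sizes and names are our own, not code from the chapter).

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian rule: W = (1/N) * sum_p x_p x_p^T, with zero self-couplings."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def ising_energy(w, s):
    """Ising/Hopfield energy E(s) = -1/2 s^T W s for spins s in {-1, +1}."""
    return -0.5 * s @ w @ s

def recall(w, s, sweeps=10, seed=0):
    """Asynchronous spin updates; each accepted flip never raises the energy."""
    s = s.copy()
    rng = np.random.default_rng(seed)
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if w[i] @ s >= 0 else -1
    return s

# Store one pattern, then recover it from a corrupted probe.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = hebbian_weights(pattern[None, :])
noisy = pattern.copy()
noisy[:2] *= -1                      # flip two spins
print(recall(W, noisy))              # converges back to `pattern`
print(ising_energy(W, pattern))      # stored pattern sits in an energy minimum
```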
2405.11011 | Jose Aizpurua | Jone Ugarte-Valdivielso, Jose I. Aizpurua, Manex Barrenetxea-Iñarra | Uncertainty Distribution Assessment of Jiles-Atherton Parameter
Estimation for Inrush Current Studies | 11 pages, 13 figures | IEEE Transactions on Power Delivery | 10.1109/TPWRD.2024.3398790 | null | eess.SY cs.AI cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transformers are one of the key assets in AC distribution grids and renewable
power integration. During transformer energization, inrush currents appear,
which lead to transformer degradation and can cause grid instability events.
These inrush currents are a consequence of the transformer's magnetic core
saturation during its connection to the grid. Transformer cores are normally
modelled by the Jiles-Atherton (JA) model which contains five parameters. These
parameters can be estimated by metaheuristic-based search algorithms. The
parameter initialization of these algorithms plays an important role in the
algorithm convergence. The most popular strategy used for JA parameter
initialization is a random uniform distribution. However, techniques such as
parameter initialization by Probability Density Functions (PDFs) have been shown to improve accuracy over random methods. In this context, this research work
presents a framework to assess the impact of different parameter initialization
strategies on the performance of the JA parameter estimation for inrush current
studies. Depending on available data and expert knowledge, uncertainty levels
are modelled with different PDFs. Moreover, three different
metaheuristic-search algorithms are employed on two different core materials
and their accuracy and computational time are compared. Results show an
improvement in the accuracy and computational time of the metaheuristic-based
algorithms when PDF parameter initialization is used.
| [
{
"created": "Fri, 17 May 2024 15:20:26 GMT",
"version": "v1"
}
] | 2024-05-21 | [
[
"Ugarte-Valdivielso",
"Jone",
""
],
[
"Aizpurua",
"Jose I.",
""
],
[
"Barrenetxea-Iñarra",
"Manex",
""
]
] |
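As an illustration of the initialization strategies this abstract compares, here is a hedged sketch of uniform versus PDF-based (truncated-normal) initialization of the five Jiles-Atherton parameters. The parameter bounds and means below are placeholders, not values from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Placeholder search bounds for the five JA parameters.
bounds = {"Ms": (1e5, 2e6), "a": (1.0, 1e3), "k": (1.0, 1e3),
          "c": (0.0, 1.0), "alpha": (1e-6, 1e-3)}

def init_uniform(n):
    """Baseline: random-uniform initial population for a metaheuristic."""
    return {p: rng.uniform(lo, hi, n) for p, (lo, hi) in bounds.items()}

def init_truncnorm(n, means, rel_sigma=0.2):
    """PDF-based initialization centered on expert-supplied means."""
    pop = {}
    for p, (lo, hi) in bounds.items():
        mu, sigma = means[p], rel_sigma * means[p]
        a, b = (lo - mu) / sigma, (hi - mu) / sigma  # standardized bounds
        pop[p] = stats.truncnorm.rvs(a, b, loc=mu, scale=sigma,
                                     size=n, random_state=rng)
    return pop

means = {"Ms": 1.6e6, "a": 110.0, "k": 40.0, "c": 0.2, "alpha": 1e-4}
pop = init_truncnorm(50, means)
print({p: float(v.mean()) for p, v in pop.items()})
```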
2405.11206 | Thanh Nguyen Xuan | Thanh Nguyen, Tung M. Luu, Tri Ton, and Chang D. Yoo | Towards Robust Policy: Enhancing Offline Reinforcement Learning with
Adversarial Attacks and Defenses | null | International Conference on Pattern Recognition and Artificial
Intelligence (ICPRAI) 2024 | null | null | cs.LG cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | Offline reinforcement learning (RL) addresses the challenge of expensive and
high-risk data exploration inherent in RL by pre-training policies on vast
amounts of offline data, enabling direct deployment or fine-tuning in
real-world environments. However, this training paradigm can compromise policy
robustness, leading to degraded performance in practical conditions due to
observation perturbations or intentional attacks. While adversarial attacks and
defenses have been extensively studied in deep learning, their application in
offline RL is limited. This paper proposes a framework to enhance the
robustness of offline RL models by leveraging advanced adversarial attacks and
defenses. The framework attacks the actor and critic components by perturbing
observations during training and using adversarial defenses as regularization
to enhance the learned policy. Four attacks and two defenses are introduced and
evaluated on the D4RL benchmark. The results show the vulnerability of both the
actor and critic to attacks and the effectiveness of the defenses in improving
policy robustness. This framework holds promise for enhancing the reliability
of offline RL models in practical scenarios.
| [
{
"created": "Sat, 18 May 2024 07:23:44 GMT",
"version": "v1"
}
] | 2024-08-13 | [
[
"Nguyen",
"Thanh",
""
],
[
"Luu",
"Tung M.",
""
],
[
"Ton",
"Tri",
""
],
[
"Yoo",
"Chang D.",
""
]
] |
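One concrete instance of the observation attacks this abstract describes is an FGSM-style perturbation that lowers the critic's value estimate. The sketch below is a generic stand-in (toy network, illustrative epsilon), not the paper's implementation.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Toy Q(s, a) network standing in for an offline RL critic."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + act_dim, 64),
                                 nn.ReLU(), nn.Linear(64, 1))

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

def fgsm_obs_attack(critic, obs, act, eps=0.05):
    """Perturb observations within an L-inf ball to minimize Q(s, a)."""
    obs = obs.clone().requires_grad_(True)
    critic(obs, act).sum().backward()
    # Step against the gradient: the adversary wants a low estimated value.
    return (obs - eps * obs.grad.sign()).detach()

critic = Critic(obs_dim=4, act_dim=2)
obs, act = torch.randn(8, 4), torch.randn(8, 2)
adv_obs = fgsm_obs_attack(critic, obs, act)
print((adv_obs - obs).abs().max())   # bounded by eps
```

Used as regularization, the policy or critic would additionally be trained to behave consistently on `obs` and `adv_obs`.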
2405.11212 | Claudiu Creanga | Claudiu Creanga, Liviu Petrisor Dinu | Automated Text Identification Using CNN and Training Dynamics | null | Vol-3496, 2023, 4-8 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We used Data Maps to model and characterize the AuTexTification dataset. This provides insights into the behaviour of individual samples during training across epochs (training dynamics). We characterized the samples across three dimensions: confidence, variability and correctness. This reveals the presence of three regions: easy-to-learn, ambiguous and hard-to-learn examples. We used a classic CNN architecture and found that training the model only on a subset of ambiguous examples improves the model's out-of-distribution generalization.
| [
{
"created": "Sat, 18 May 2024 07:37:17 GMT",
"version": "v1"
}
] | 2024-05-21 | [
[
"Creanga",
"Claudiu",
""
],
[
"Dinu",
"Liviu Petrisor",
""
]
] |
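The three Data Maps dimensions have simple definitions in terms of per-epoch training dynamics. A minimal sketch, assuming you have logged the gold-label probability and prediction for each sample at every epoch:

```python
import numpy as np

def data_map_measures(gold_probs, preds, labels):
    """gold_probs, preds: (epochs, samples); labels: (samples,).
    Returns per-sample confidence, variability, correctness."""
    confidence = gold_probs.mean(axis=0)           # mean P(gold) across epochs
    variability = gold_probs.std(axis=0)           # spread across epochs
    correctness = (preds == labels).mean(axis=0)   # fraction of correct epochs
    return confidence, variability, correctness

rng = np.random.default_rng(0)
gold_probs = rng.uniform(size=(5, 3))              # 5 epochs, 3 samples
preds = rng.integers(0, 2, size=(5, 3))
labels = np.array([0, 1, 1])
conf, var, corr = data_map_measures(gold_probs, preds, labels)
# Easy-to-learn: high confidence, low variability; ambiguous: high
# variability; hard-to-learn: low confidence, low variability.
print(conf, var, corr)
```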
2405.11295 | Sudhakar Singh | Nand Lal Yadav, Satyendra Singh, Rajesh Kumar, Sudhakar Singh | Medical Image Analysis for Detection, Treatment and Planning of Disease
using Artificial Intelligence Approaches | 10 pages, 3 figures | International Journal of Microsystems and IoT, Vol. 1, Issue 5,
pp. 278-287, 2023 | 10.5281/zenodo.10057577 | null | eess.IV cs.CV cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | X-ray is one of the most prevalent imaging modalities for examination and diagnosis of the human body. An X-ray captures the actual anatomical structure of an organ, whether disease is present or absent. Segmentation of diseased regions in chest X-ray images is essential for diagnosis and treatment. In this paper, a framework for the segmentation of X-ray images using artificial intelligence techniques is discussed. Data are pre-processed and cleaned, followed by segmentation of the X-ray images using the SegNet and Residual U-Net approaches. Finally, the segmentation is evaluated using well-known metrics such as Loss, Dice Coefficient, Jaccard Coefficient, Precision, Recall, Binary Accuracy, and Validation Accuracy. The experimental results reveal that the proposed approach performs best across these metrics with a batch size of 16 and 50 epochs. The validation accuracy, precision, and recall of the SegNet and Residual U-Net models are 0.9815, 0.9699, 0.9574 and 0.9901, 0.9864, 0.9750, respectively.
| [
{
"created": "Sat, 18 May 2024 13:43:43 GMT",
"version": "v1"
}
] | 2024-05-21 | [
[
"Yadav",
"Nand Lal",
""
],
[
"Singh",
"Satyendra",
""
],
[
"Kumar",
"Rajesh",
""
],
[
"Singh",
"Sudhakar",
""
]
] |
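For reference, the two region-overlap metrics named above are one-liners on binary masks; a minimal sketch:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def jaccard(pred, target, eps=1e-7):
    """Jaccard index: |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
target = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(dice(pred, target), jaccard(pred, target))
```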
2405.11298 | Jack Vice | Jack Vice, Natalie Ruiz-Sanchez, Pamela K. Douglas, Gita Sukthankar | Visual Episodic Memory-based Exploration | FLAIRS 2023, 7 pages, 11 figures | The International FLAIRS Conference Proceedings. Vol. 36. 2023 | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In humans, intrinsic motivation is an important mechanism for open-ended
cognitive development; in robots, it has been shown to be valuable for
exploration. An important aspect of human cognitive development is
$\textit{episodic memory}$ which enables both the recollection of events from
the past and the projection of subjective future. This paper explores the use
of visual episodic memory as a source of intrinsic motivation for robotic
exploration problems. Using a convolutional recurrent neural network
autoencoder, the agent learns an efficient representation for spatiotemporal
features such that accurate sequence prediction can only happen once
spatiotemporal features have been learned. Structural similarity between ground
truth and autoencoder generated images is used as an intrinsic motivation
signal to guide exploration. Our proposed episodic memory model also implicitly
accounts for the agent's actions, motivating the robot to seek new interactive
experiences rather than just areas that are visually dissimilar. When guiding
robotic exploration, our proposed method outperforms the Curiosity-driven
Variational Autoencoder (CVAE) at finding dynamic anomalies.
| [
{
"created": "Sat, 18 May 2024 13:58:47 GMT",
"version": "v1"
}
] | 2024-05-21 | [
[
"Vice",
"Jack",
""
],
[
"Ruiz-Sanchez",
"Natalie",
""
],
[
"Douglas",
"Pamela K.",
""
],
[
"Sukthankar",
"Gita",
""
]
] |
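The SSIM-based intrinsic motivation signal described above can be sketched directly: reward is high where the memory model's prediction disagrees with what is actually observed. The reward shaping below is our assumption of the general idea, not the paper's exact formula.

```python
import numpy as np
from skimage.metrics import structural_similarity

def intrinsic_reward(predicted_frame, observed_frame):
    """High reward where the episodic-memory model predicts poorly (low
    SSIM), steering the agent toward experiences it cannot anticipate."""
    ssim = structural_similarity(predicted_frame, observed_frame,
                                 data_range=1.0)
    return 1.0 - ssim

rng = np.random.default_rng(0)
obs = rng.uniform(size=(64, 64)).astype(np.float32)
pred = np.clip(obs + 0.1 * rng.normal(size=obs.shape), 0, 1).astype(np.float32)
print(intrinsic_reward(pred, obs))
```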
2405.11498 | Conor O'Sullivan Mr | Conor O'Sullivan, Seamus Coveney, Xavier Monteys, Soumyabrata Dev | The Effectiveness of Edge Detection Evaluation Metrics for Automated
Coastline Detection | null | 2023 Photonics & Electromagnetics Research Symposium (PIERS) | 10.1109/PIERS59004.2023.10221292 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | We analyse the effectiveness of RMSE, PSNR, SSIM and FOM for evaluating edge
detection algorithms used for automated coastline detection. Typically, the
accuracy of detected coastlines is assessed visually. This can be impractical
on a large scale leading to the need for objective evaluation metrics. Hence,
we conduct an experiment to find reliable metrics. We apply Canny edge
detection to 95 coastline satellite images across 49 testing locations. We vary
the hysteresis thresholds and compare metric values to a visual analysis of
detected edges. We found that FOM was the most reliable metric for selecting
the best threshold. It could select a better threshold 92.6% of the time and
the best threshold 66.3% of the time. This compares to RMSE, PSNR and SSIM, which could select the best threshold 6.3%, 6.3% and 11.6% of the time, respectively. We provide a reason for these results by reformulating RMSE, PSNR
and SSIM in terms of confusion matrix measures. This suggests these metrics not
only fail for this experiment but are not useful for evaluating edge detection
in general.
| [
{
"created": "Sun, 19 May 2024 09:51:10 GMT",
"version": "v1"
}
] | 2024-05-21 | [
[
"O'Sullivan",
"Conor",
""
],
[
"Coveney",
"Seamus",
""
],
[
"Monteys",
"Xavier",
""
],
[
"Dev",
"Soumyabrata",
""
]
] |
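FOM here is Pratt's Figure of Merit, which scores each detected edge pixel by its distance to the nearest ground-truth edge pixel. A minimal sketch (the alpha constant is the conventional 1/9):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def pratt_fom(detected, ideal, alpha=1.0 / 9.0):
    """Pratt's FOM between binary edge maps; 1.0 means perfect agreement."""
    dist = distance_transform_edt(~ideal)      # distance to nearest ideal edge
    score = (1.0 / (1.0 + alpha * dist[detected] ** 2)).sum()
    return score / max(detected.sum(), ideal.sum())

ideal = np.zeros((16, 16), dtype=bool)
ideal[8, :] = True                             # ground-truth coastline
detected = np.zeros_like(ideal)
detected[9, :] = True                          # detection off by one pixel
print(pratt_fom(detected, ideal))
```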
2405.11637 | Ghazaleh Mahmoudi | Ghazaleh Mahmoudi, Babak Behkamkia, Sauleh Eetemadi | Zero-Shot Stance Detection using Contextual Data Generation with LLMs | 5 pages, AAAI-2024 Workshop on Public Sector LLMs | AAAI-2024 Workshop on Public Sector LLMs: Algorithmic and
Sociotechnical Design | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Stance detection, the classification of attitudes expressed in a text towards
a specific topic, is vital for applications like fake news detection and
opinion mining. However, the scarcity of labeled data remains a challenge for
this task. To address this problem, we propose Dynamic Model Adaptation with
Contextual Data Generation (DyMoAdapt) that combines Few-Shot Learning and
Large Language Models. In this approach, we aim to fine-tune an existing model
at test time. We achieve this by generating new topic-specific data using
GPT-3. This method could enhance performance by allowing the adaptation of the
model to new topics. However, the results did not improve as expected.
Furthermore, we introduce the Multi Generated Topic VAST (MGT-VAST) dataset,
which extends VAST using GPT-3. In this dataset, each context is associated
with multiple topics, allowing the model to understand the relationship between
contexts and various potential topics.
| [
{
"created": "Sun, 19 May 2024 17:58:26 GMT",
"version": "v1"
}
] | 2024-05-21 | [
[
"Mahmoudi",
"Ghazaleh",
""
],
[
"Behkamkia",
"Babak",
""
],
[
"Eetemadi",
"Sauleh",
""
]
] |
2405.11647 | Li Jiang | Li Jiang, Yusen Wu, Junwu Xiong, Jingqing Ruan, Yichuan Ding, Qingpei
Guo, Zujie Wen, Jun Zhou, Xiaotie Deng | Hummer: Towards Limited Competitive Preference Dataset | null | COLM 2024 | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Preference datasets are essential for incorporating human preferences into
pre-trained language models, playing a key role in the success of Reinforcement
Learning from Human Feedback. However, these datasets often demonstrate
conflicting alignment objectives, leading to increased vulnerability to
jailbreak attacks and challenges in adapting downstream tasks to prioritize
specific alignment objectives without negatively impacting others. In this
work, we introduce a novel statistical metric, Alignment Dimension Conflict, to
quantify the degree of conflict within preference datasets. We then present
\texttt{Hummer} and its fine-grained variant, \texttt{Hummer-F}, as innovative
pairwise preference datasets with reduced-conflict alignment objectives.
\texttt{Hummer} is built based on UltraFeedback and is enhanced by AI feedback
from GPT-4, making it the first preference dataset aimed at reducing the
competition between alignment objectives. Furthermore, we develop reward
models, HummerRM and HummerRM-F, which employ a hybrid sampling approach to
balance diverse alignment objectives effectively. This sampling method
positions HummerRM as an ideal model for domain-specific further fine-tuning
and reducing vulnerabilities to attacks.
| [
{
"created": "Sun, 19 May 2024 18:57:25 GMT",
"version": "v1"
},
{
"created": "Tue, 21 May 2024 02:01:42 GMT",
"version": "v2"
},
{
"created": "Tue, 6 Aug 2024 14:12:26 GMT",
"version": "v3"
}
] | 2024-08-07 | [
[
"Jiang",
"Li",
""
],
[
"Wu",
"Yusen",
""
],
[
"Xiong",
"Junwu",
""
],
[
"Ruan",
"Jingqing",
""
],
[
"Ding",
"Yichuan",
""
],
[
"Guo",
"Qingpei",
""
],
[
"Wen",
"Zujie",
""
],
[
"Zhou",
"Jun",
""
],
[
"Deng",
"Xiaotie",
""
]
] |
2405.11677 | Christiaan Viviers | Christiaan G.A. Viviers, Lena Filatova, Maurice Termeer, Peter H.N. de
With, Fons van der Sommen | Advancing 6-DoF Instrument Pose Estimation in Variable X-Ray Imaging
Geometries | Early author version of paper. Refer to the full paper at
https://ieeexplore.ieee.org/document/10478293 | IEEE Transactions on Image Processing (2024) (Volume: 33) Page(s):
2462 - 2476 | 10.1109/TIP.2024.3378469 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Accurate 6-DoF pose estimation of surgical instruments during minimally
invasive surgeries can substantially improve treatment strategies and eventual
surgical outcome. Existing deep learning methods have achieved accurate
results, but they require custom approaches for each object and laborious setup
and training environments often stretching to extensive simulations, whilst
lacking real-time computation. We propose a general-purpose approach of data
acquisition for 6-DoF pose estimation tasks in X-ray systems, a novel and
general purpose YOLOv5-6D pose architecture for accurate and fast object pose
estimation and a complete method for surgical screw pose estimation under
acquisition geometry consideration from a monocular cone-beam X-ray image. The
proposed YOLOv5-6D pose model achieves competitive results on public benchmarks
whilst being considerably faster at 42 FPS on GPU. In addition, the method
generalizes across varying X-ray acquisition geometry and semantic image
complexity to enable accurate pose estimation over different domains. Finally,
the proposed approach is tested for bone-screw pose estimation for
computer-aided guidance during spine surgeries. The model achieves 92.41% on the 0.1 ADD-S metric, demonstrating a promising approach for enhancing surgical
precision and patient outcomes. The code for YOLOv5-6D is publicly available at
https://github.com/cviviers/YOLOv5-6D-Pose
| [
{
"created": "Sun, 19 May 2024 21:35:12 GMT",
"version": "v1"
}
] | 2024-05-21 | [
[
"Viviers",
"Christiaan G. A.",
""
],
[
"Filatova",
"Lena",
""
],
[
"Termeer",
"Maurice",
""
],
[
"de With",
"Peter H. N.",
""
],
[
"van der Sommen",
"Fons",
""
]
] |
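The ADD-S metric referenced above averages the distance from each predicted-pose model point to its closest ground-truth-pose point, which makes it robust to object symmetry (relevant for screws). A hedged sketch with synthetic points:

```python
import numpy as np
from scipy.spatial import cKDTree

def add_s(model_points, R_pred, t_pred, R_gt, t_gt):
    """ADD-S: mean closest-point distance between the model under the
    predicted and ground-truth 6-DoF poses."""
    pred = model_points @ R_pred.T + t_pred
    gt = model_points @ R_gt.T + t_gt
    dists, _ = cKDTree(gt).query(pred)   # nearest GT point per pred point
    return dists.mean()

# A pose is commonly counted correct at "0.1 ADD-S" when the score is
# below 10% of the object diameter (illustrative check, not paper code).
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
R = np.eye(3)
score = add_s(pts, R, np.zeros(3), R, np.full(3, 0.01))
diameter = np.linalg.norm(pts.max(0) - pts.min(0))
print(score, score < 0.1 * diameter)
```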
2405.11865 | Constantine Lignos | Andrew Rueda, Elena Álvarez Mellado, Constantine Lignos | CoNLL#: Fine-grained Error Analysis and a Corrected Test Set for
CoNLL-03 English | Accepted to LREC-COLING 2024 | Proceedings of the 2024 Joint International Conference on
Computational Linguistics, Language Resources and Evaluation (LREC-COLING
2024). 3718-3728 | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern named entity recognition systems have steadily improved performance in
the age of larger and more powerful neural models. However, over the past
several years, the state-of-the-art has seemingly hit another plateau on the
benchmark CoNLL-03 English dataset. In this paper, we perform a deep dive into
the test outputs of the highest-performing NER models, conducting a
fine-grained evaluation of their performance by introducing new document-level
annotations on the test set. We go beyond F1 scores by categorizing errors in
order to interpret the true state of the art for NER and guide future work. We
review previous attempts at correcting the various flaws of the test set and
introduce CoNLL#, a new corrected version of the test set that addresses its
systematic and most prevalent errors, allowing for low-noise, interpretable
error analysis.
| [
{
"created": "Mon, 20 May 2024 08:16:34 GMT",
"version": "v1"
}
] | 2024-05-21 | [
[
"Rueda",
"Andrew",
""
],
[
"Mellado",
"Elena Álvarez",
""
],
[
"Lignos",
"Constantine",
""
]
] |
2405.11903 | Sushmita Sarker | Sushmita Sarker, Prithul Sarker, Gunner Stone, Ryan Gorman, Alireza
Tavakkoli, George Bebis and Javad Sattarvand | A comprehensive overview of deep learning techniques for 3D point cloud
classification and semantic segmentation | Published in Springer Nature (Machine Vision and Applications) | Machine Vision and Applications 35, 67 (2024) | 10.1007/s00138-024-01543-1 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Point cloud analysis has a wide range of applications in many areas such as
computer vision, robotic manipulation, and autonomous driving. While deep
learning has achieved remarkable success on image-based tasks, there are many
unique challenges faced by deep neural networks in processing massive,
unordered, irregular and noisy 3D points. To stimulate future research, this
paper analyzes recent progress in deep learning methods employed for point
cloud processing and presents challenges and potential directions to advance
this field. It serves as a comprehensive review of two major tasks in 3D point cloud processing, namely 3D shape classification and semantic segmentation.
| [
{
"created": "Mon, 20 May 2024 09:33:27 GMT",
"version": "v1"
}
] | 2024-05-21 | [
[
"Sarker",
"Sushmita",
""
],
[
"Sarker",
"Prithul",
""
],
[
"Stone",
"Gunner",
""
],
[
"Gorman",
"Ryan",
""
],
[
"Tavakkoli",
"Alireza",
""
],
[
"Bebis",
"George",
""
],
[
"Sattarvand",
"Javad",
""
]
] |
2405.11978 | Moises Diaz | Antonio Parziale, Moises Diaz, Miguel A. Ferrer, Angelo Marcelli | SM-DTW: Stability Modulated Dynamic Time Warping for signature
verification | null | Pattern Recognition Letters, Volume: 121, Pages 113-122 (2019) | 10.1016/j.patrec.2018.07.029 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Building upon findings in computational models of handwriting learning and execution, we introduce the concept of stability to explain the differences between the actual movements performed during multiple executions of the subject's signature, and conjecture that the most stable parts of the signature
should play a paramount role in evaluating the similarity between a questioned
signature and the reference ones during signature verification. We then
introduce the Stability Modulated Dynamic Time Warping algorithm for
incorporating the stability regions, i.e. the most similar parts between two
signatures, into the distance measure between a pair of signatures computed by
the Dynamic Time Warping for signature verification. Experiments were conducted
on two datasets largely adopted for performance evaluation. Experimental
results show that the proposed algorithm improves the performance of the
baseline system and compares favourably with other top performing signature
verification systems.
| [
{
"created": "Mon, 20 May 2024 12:18:15 GMT",
"version": "v1"
}
] | 2024-05-21 | [
[
"Parziale",
"Antonio",
""
],
[
"Diaz",
"Moises",
""
],
[
"Ferrer",
"Miguel A.",
""
],
[
"Marcelli",
"Angelo",
""
]
] |
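The core idea of SM-DTW, weighting the DTW local cost by how stable each region of the reference signature is, can be sketched as follows. The exact modulation used by the paper may differ; here stable frames simply contribute more to the distance.

```python
import numpy as np

def sm_dtw(x, y, stability, gamma=1.0):
    """DTW where the local cost at (i, j) is scaled by a stability weight
    for frame i of the reference signature x."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (1.0 + gamma * stability[i - 1]) * abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

ref = np.sin(np.linspace(0, 3, 60))            # 1-D stand-in for a signature
probe = np.sin(np.linspace(0, 3, 55) + 0.05)
stability = np.full_like(ref, 0.5)             # placeholder stability profile
print(sm_dtw(ref, probe, stability))
```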
2405.11983 | Silvia García-Méndez | Silvia García-Méndez, Francisco de Arriba-Pérez and María del Carmen Somoza-López | A review on the use of large language models as virtual tutors | null | Science & Education (2024), 1-16 | 10.1007/s11191-024-00530-2 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Transformer architectures contribute to managing long-term dependencies for
Natural Language Processing, representing one of the most recent changes in the
field. These architectures are the basis of the innovative, cutting-edge Large
Language Models (LLMs) that have produced a huge buzz in several fields and
industrial sectors, among which education stands out. Accordingly, these
generative Artificial Intelligence-based solutions have directed the change in
techniques and the evolution in educational methods and contents, along with
network infrastructure, towards high-quality learning. Given the popularity of
LLMs, this review seeks to provide a comprehensive overview of those solutions
designed specifically to generate and evaluate educational materials and which
involve students and teachers in their design or experimental plan. To the best
of our knowledge, this is the first review of educational applications (e.g.,
student assessment) of LLMs. As expected, the most common role of these systems
is as virtual tutors for automatic question generation. Moreover, the most
popular models are GPT-3 and BERT. However, due to the continuous launch of new
generative models, new works are expected to be published shortly.
| [
{
"created": "Mon, 20 May 2024 12:33:42 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Sep 2024 10:01:39 GMT",
"version": "v2"
}
] | 2024-09-06 | [
[
"García-Méndez",
"Silvia",
""
],
[
"de Arriba-Pérez",
"Francisco",
""
],
[
"Somoza-López",
"María del Carmen",
""
]
] |
2405.12206 | Tong Zeng | Tong Zeng, Daniel E. Acuna | Modeling citation worthiness by using attention-based bidirectional long
short-term memory networks and interpretable models | null | Scientometrics 124, 399-428 (2020) | 10.1007/s11192-020-03421-9 | null | cs.CL cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Scientists learn early on how to cite scientific sources to support their
claims. Sometimes, however, scientists have challenges determining where a
citation should be situated -- or, even worse, fail to cite a source
altogether. Automatically detecting sentences that need a citation (i.e.,
citation worthiness) could solve both of these issues, leading to more robust
and well-constructed scientific arguments. Previous researchers have applied
machine learning to this task but have used small datasets and models that do
not take advantage of recent algorithmic developments such as attention
mechanisms in deep learning. We hypothesize that we can develop highly accurate deep learning architectures that learn from large supervised datasets
constructed from open access publications. In this work, we propose a
Bidirectional Long Short-Term Memory (BiLSTM) network with attention mechanism
and contextual information to detect sentences that need citations. We also
produce a new, large dataset (PMOA-CITE) based on PubMed Open Access Subset,
which is orders of magnitude larger than previous datasets. Our experiments
show that our architecture achieves state-of-the-art performance on the
standard ACL-ARC dataset ($F_{1}=0.507$) and exhibits high performance
($F_{1}=0.856$) on the new PMOA-CITE. Moreover, we show that it can transfer
learning across these datasets. We further use interpretable models to
illuminate how specific language is used to promote and inhibit citations. We
discover that sections and surrounding sentences are crucial for our improved
predictions. We further examined purported mispredictions of the model, and
uncovered systematic human mistakes in citation behavior and source data. This
opens the door for our model to check documents during pre-submission and
pre-archival procedures. We make this new dataset, the code, and a web-based
tool available to the community.
| [
{
"created": "Mon, 20 May 2024 17:45:36 GMT",
"version": "v1"
}
] | 2024-05-21 | [
[
"Zeng",
"Tong",
""
],
[
"Acuna",
"Daniel E.",
""
]
] |
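A minimal sketch of the architecture family named above, a BiLSTM with additive attention pooling for sentence classification; all layer sizes are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    """BiLSTM sentence classifier with attention over token states."""
    def __init__(self, vocab_size, emb_dim=100, hidden=128, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # one attention score per token
        self.out = nn.Linear(2 * hidden, classes)

    def forward(self, tokens):
        h, _ = self.lstm(self.emb(tokens))               # (B, T, 2H)
        weights = torch.softmax(self.attn(h), dim=1)     # (B, T, 1)
        context = (weights * h).sum(dim=1)               # attention pooling
        return self.out(context)                         # cite / don't cite

model = BiLSTMAttention(vocab_size=5000)
logits = model(torch.randint(0, 5000, (4, 30)))          # 4 sentences
print(logits.shape)                                      # torch.Size([4, 2])
```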
2405.12266 | Daniel Commey | Daniel Commey, Benjamin Appiah, Bill K. Frimpong, Isaac Osei, Ebenezer
N. A. Hammond, Garth V. Crosby | EGAN: Evolutional GAN for Ransomware Evasion | null | 2023 IEEE 48th Conference on Local Computer Networks (LCN),
Daytona Beach, FL, USA, 2023, pp. 1-9 | 10.1109/LCN58197.2023.10223320 | null | cs.CR cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adversarial Training is a proven defense strategy against adversarial
malware. However, generating adversarial malware samples for this type of
training presents a challenge because the resulting adversarial malware needs
to remain evasive and functional. This work proposes an attack framework, EGAN,
to address this limitation. EGAN leverages an Evolution Strategy and Generative
Adversarial Network to select a sequence of attack actions that can mutate a
Ransomware file while preserving its original functionality. We tested this
framework on popular AI-powered commercial antivirus systems listed on
VirusTotal and demonstrated that our framework is capable of bypassing the
majority of these systems. Moreover, we evaluated whether the EGAN attack
framework can evade other commercial non-AI antivirus solutions. Our results
indicate that the adversarial ransomware generated can increase the probability
of evading some of them.
| [
{
"created": "Mon, 20 May 2024 17:52:40 GMT",
"version": "v1"
}
] | 2024-05-22 | [
[
"Commey",
"Daniel",
""
],
[
"Appiah",
"Benjamin",
""
],
[
"Frimpong",
"Bill K.",
""
],
[
"Osei",
"Isaac",
""
],
[
"Hammond",
"Ebenezer N. A.",
""
],
[
"Crosby",
"Garth V.",
""
]
] |
2405.12313 | Md. Toukir Ahmed | Md. Toukir Ahmed, Ocean Monjur, Mohammed Kamruzzaman | Deep learning-based hyperspectral image reconstruction for quality
assessment of agro-product | Under review | Journal of Food Engineering, Volume 382 , December 2024, 112223 | 10.1016/j.jfoodeng.2024.112223 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Hyperspectral imaging (HSI) has recently emerged as a promising tool for many
agricultural applications; however, the technology cannot be directly used in a
real-time system due to the extensive time needed to process large volumes of
data. Consequently, the development of a simple, compact, and cost-effective
imaging system is not possible with the current HSI systems. Therefore, the
overall goal of this study was to reconstruct hyperspectral images from RGB
images through deep learning for agricultural applications. Specifically, this
study used Hyperspectral Convolutional Neural Network - Dense (HSCNN-D) to
reconstruct hyperspectral images from RGB images for predicting soluble solid
content (SSC) in sweet potatoes. The algorithm accurately reconstructed the
hyperspectral images from RGB images, with the resulting spectra closely
matching the ground-truth. The partial least squares regression (PLSR) model
based on reconstructed spectra outperformed the model using the full spectral
range, demonstrating its potential for SSC prediction in sweet potatoes. These
findings highlight the potential of deep learning-based hyperspectral image
reconstruction as a low-cost, efficient tool for various agricultural uses.
| [
{
"created": "Mon, 20 May 2024 18:15:20 GMT",
"version": "v1"
}
] | 2024-08-01 | [
[
"Ahmed",
"Md. Toukir",
""
],
[
"Monjur",
"Ocean",
""
],
[
"Kamruzzaman",
"Mohammed",
""
]
] |
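The downstream regression step described, PLSR from spectra to soluble solid content, is straightforward with scikit-learn. The data below are synthetic stand-ins; band count and component count are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: rows are (reconstructed) spectra, target is SSC.
rng = np.random.default_rng(0)
spectra = rng.uniform(size=(120, 204))                  # 204 bands (assumed)
ssc = spectra[:, 50:60].mean(axis=1) * 20 + rng.normal(0, 0.2, 120)

X_tr, X_te, y_tr, y_te = train_test_split(spectra, ssc, random_state=0)
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
print("R^2 on held-out spectra:", round(pls.score(X_te, y_te), 3))
```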
2405.12556 | Moises Diaz | Marcos Faundez, Moises Diaz, Miguel Angel Ferrer | Online Signature Recognition: A Biologically Inspired Feature Vector
Splitting Approach | null | Cognitive Computation,vol:16,Pages 265 to 277 (2024) | 10.1007/s12559-023-10205-9 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This research introduces an innovative approach to explore the cognitive and
biologically inspired underpinnings of feature vector splitting for analyzing
the significance of different attributes in e-security biometric signature
recognition applications. Departing from traditional methods of concatenating
features into an extended set, we employ multiple splitting strategies,
aligning with cognitive principles, to preserve control over the relative
importance of each feature subset. Our methodology is applied to three diverse
databases (MCYT100, MCYT300, and SVC) using two classifiers (vector quantization
and dynamic time warping with one and five training samples). Experimentation
demonstrates that the fusion of pressure data with spatial coordinates (x and
y) consistently enhances performance. However, the inclusion of pen-tip angles
in the same feature set yields mixed results, with performance improvements
observed in select cases. This work delves into the cognitive aspects of
feature fusion, shedding light on the cognitive relevance of feature vector
splitting in e-security biometric applications.
| [
{
"created": "Tue, 21 May 2024 07:51:01 GMT",
"version": "v1"
}
] | 2024-05-22 | [
[
"Faundez",
"Marcos",
""
],
[
"Diaz",
"Moises",
""
],
[
"Ferrer",
"Miguel Angel",
""
]
] |
2405.12628 | Vincenzo Suriani | Vincenzo Suriani, Emanuele Musumeci, Daniele Nardi, Domenico Daniele
Bloisi | Play Everywhere: A Temporal Logic based Game Environment Independent
Approach for Playing Soccer with Robots | RoboCup 2023: Robot World Cup XXVI Best Paper | Lecture Notes in Computer Science (LNAI, volume 14140), RoboCup 2023: Robot World Cup XXVI | 10.1007/978-3-031-55015-7_1 | null | cs.RO cs.AI | http://creativecommons.org/licenses/by/4.0/ | Robots playing soccer often rely on hard-coded behaviors that struggle to generalize when the game environment changes. In this paper, we propose a
temporal logic based approach that allows robots' behaviors and goals to adapt
to the semantics of the environment. In particular, we present a hierarchical
representation of soccer in which the robot selects the level of operation
based on the perceived semantic characteristics of the environment, thus
modifying dynamically the set of rules and goals to apply. The proposed
approach enables the robot to operate in unstructured environments, just as it
happens when humans go from soccer played on an official field to soccer played
on a street. Three different use cases set in different scenarios are presented
to demonstrate the effectiveness of the proposed approach.
| [
{
"created": "Tue, 21 May 2024 09:30:47 GMT",
"version": "v1"
}
] | 2024-05-22 | [
[
"Suriani",
"Vincenzo",
""
],
[
"Musumeci",
"Emanuele",
""
],
[
"Nardi",
"Daniele",
""
],
[
"Bloisi",
"Domenico Daniele",
""
]
] |
2405.12695 | Moises Diaz | Moises Diaz, Miguel A. Ferrer, Gennaro Vessio | Explainable offline automatic signature verifier to support forensic
handwriting examiners | null | Neural Computing and Applications, Volume 36, pages 2411 to 2427
(2024) | 10.1007/s00521-023-09192-7 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Signature verification is a critical task in many applications, including
forensic science, legal judgments, and financial markets. However, current
signature verification systems are often difficult to explain, which can limit
their acceptance in these applications. In this paper, we propose a novel
explainable offline automatic signature verifier (ASV) to support forensic
handwriting examiners. Our ASV is based on a universal background model (UBM)
constructed from offline signature images. It allows us to assign a questioned
signature to the UBM and to a reference set of known signatures using simple
distance measures. This makes it possible to explain the verifier's decision in
a way that is understandable to non-experts. We evaluated our ASV on publicly available databases and found that it achieves competitive performance with state-of-the-art ASVs, even when challenging 1-versus-1 comparisons are considered. Our results demonstrate that it is possible to develop an
explainable ASV that is also competitive in terms of performance. We believe
that our ASV has the potential to improve the acceptance of signature
verification in critical applications such as forensic science and legal
judgments.
| [
{
"created": "Tue, 21 May 2024 11:38:45 GMT",
"version": "v1"
}
] | 2024-05-22 | [
[
"Diaz",
"Moises",
""
],
[
"Ferrer",
"Miguel A.",
""
],
[
"Vessio",
"Gennaro",
""
]
] |
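The distance-based decision the abstract describes can be sketched as a ratio of the questioned signature's proximity to the reference set versus its proximity to the UBM; scores below 1 then read as "closer to the claimed writer than to the background population". This is our plausible rendering of the idea, not the paper's formulation.

```python
import numpy as np

def verification_score(questioned, references, ubm, k=5):
    """Mean distance to the k nearest references divided by mean distance
    to the k nearest UBM signatures; < 1 favors genuineness."""
    d_ref = np.sort(np.linalg.norm(references - questioned, axis=1))[:k]
    d_ubm = np.sort(np.linalg.norm(ubm - questioned, axis=1))[:k]
    return d_ref.mean() / (d_ubm.mean() + 1e-12)

rng = np.random.default_rng(0)
refs = rng.normal(0, 1, size=(10, 64))   # known genuine signatures (features)
ubm = rng.normal(3, 1, size=(500, 64))   # background population
genuine = rng.normal(0, 1, size=64)
print(verification_score(genuine, refs, ubm))   # well below 1 here
```

Because the verdict reduces to two interpretable distances, the explanation to a forensic examiner can be given in terms of how far the questioned signature sits from each population.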
2405.12755 | Satvik Golechha | Satvik Golechha | Progress Measures for Grokking on Real-world Tasks | 5 pages | ICML 2024 Workshop on High-dimensional Learning Dynamics (HiLD) | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Grokking, a phenomenon where machine learning models generalize long after
overfitting, has been primarily observed and studied in algorithmic tasks. This
paper explores grokking in real-world datasets using deep neural networks for
classification under the cross-entropy loss. We challenge the prevalent
hypothesis that the $L_2$ norm of weights is the primary cause of grokking by
demonstrating that grokking can occur outside the expected range of weight
norms. To better understand grokking, we introduce three new progress measures:
activation sparsity, absolute weight entropy, and approximate local circuit
complexity. These measures are conceptually related to generalization and
demonstrate a stronger correlation with grokking in real-world datasets
compared to weight norms. Our findings suggest that while weight norms might
usually correlate with grokking and our progress measures, they are not
causative, and our proposed measures provide a better understanding of the
dynamics of grokking.
| [
{
"created": "Tue, 21 May 2024 13:06:41 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Jun 2024 07:39:05 GMT",
"version": "v2"
}
] | 2024-06-21 | [
[
"Golechha",
"Satvik",
""
]
] |
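Two of the proposed progress measures admit short formalizations; the definitions below are plausible renderings for illustration, and the paper's exact computations (including approximate local circuit complexity) may differ.

```python
import numpy as np

def activation_sparsity(activations, tol=1e-6):
    """Fraction of (near-)zero activations, e.g. post-ReLU outputs."""
    return float((np.abs(activations) <= tol).mean())

def absolute_weight_entropy(weights, bins=100):
    """Shannon entropy of the histogram of |w| over all parameters."""
    hist, _ = np.histogram(np.abs(weights).ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
acts = np.maximum(rng.normal(size=10_000), 0)   # ReLU-like activations
w = rng.normal(scale=0.1, size=5_000)
print(activation_sparsity(acts), absolute_weight_entropy(w))
```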
2405.12926 | Manh Khoi Duong | Manh Khoi Duong, Stefan Conrad | Trusting Fair Data: Leveraging Quality in Fairness-Driven Data Removal
Techniques | The Version of Record of this contribution is published in Springer
LNCS 14912 and is available online at
https://doi.org/10.1007/978-3-031-68323-7_33 | Lecture Notes in Computer Science, Vol. 14912 (2024), pp. 375-380.
Springer | 10.1007/978-3-031-68323-7_33 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we deal with bias mitigation techniques that remove specific
data points from the training set to aim for a fair representation of the
population in that set. Machine learning models are trained on these
pre-processed datasets, and their predictions are expected to be fair. However,
such approaches may exclude relevant data, making the attained subsets less
trustworthy for further usage. To enhance the trustworthiness of prior methods,
we propose additional requirements and objectives that the subsets must fulfill
in addition to fairness: (1) group coverage, and (2) minimal data loss. While
removing entire groups may improve the measured fairness, this practice is very
problematic as failing to represent every group cannot be considered fair. In
our second concern, we advocate for the retention of data while minimizing
discrimination. By introducing a multi-objective optimization problem that
considers fairness and data loss, we propose a methodology to find
Pareto-optimal solutions that balance these objectives. By identifying such
solutions, users can make informed decisions about the trade-off between
fairness and data quality and select the most suitable subset for their
application. Our method is distributed as a Python package via PyPI under the
name FairDo (https://github.com/mkduong-ai/fairdo).
| [
{
"created": "Tue, 21 May 2024 16:51:28 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Jun 2024 14:22:14 GMT",
"version": "v2"
},
{
"created": "Thu, 19 Sep 2024 11:31:09 GMT",
"version": "v3"
}
] | 2024-09-24 | [
[
"Duong",
"Manh Khoi",
""
],
[
"Conrad",
"Stefan",
""
]
] |
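Identifying Pareto-optimal subsets for the two stated objectives is a standard dominance filter; a minimal sketch (generic, not the FairDo API):

```python
import numpy as np

def pareto_front(objectives):
    """Indices of Pareto-optimal rows, all objectives to be minimized,
    e.g. columns = (measured discrimination, fraction of data removed)."""
    keep = np.ones(len(objectives), dtype=bool)
    for i in range(len(objectives)):
        dominated_by = (np.all(objectives <= objectives[i], axis=1) &
                        np.any(objectives < objectives[i], axis=1))
        if dominated_by.any():
            keep[i] = False
    return np.flatnonzero(keep)

# Candidate subsets scored by (discrimination, data loss).
scores = np.array([[0.02, 0.30], [0.05, 0.10], [0.01, 0.45], [0.06, 0.40]])
print(pareto_front(scores))   # -> [0 1 2]; the last candidate is dominated
```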
2405.13038 | Aditya Bhattacharya | Aditya Bhattacharya, Simone Stumpf, Katrien Verbert | An Explanatory Model Steering System for Collaboration between Domain
Experts and AI | Demo paper accepted for ACM UMAP 2024 | Adjunct Proceedings of the 32nd ACM Conference on User Modeling,
Adaptation and Personalization (UMAP Adjunct '24), July 1--4, 2024, Cagliari,
Italy | 10.1145/3631700.3664886 | null | cs.HC cs.AI | http://creativecommons.org/licenses/by/4.0/ | With the increasing adoption of Artificial Intelligence (AI) systems in
high-stakes domains, such as healthcare, effective collaboration between domain
experts and AI is imperative. To facilitate effective collaboration between
domain experts and AI systems, we introduce an Explanatory Model Steering
system that allows domain experts to steer prediction models using their domain
knowledge. The system includes an explanation dashboard that combines different
types of data-centric and model-centric explanations and allows prediction
models to be steered through manual and automated data configuration
approaches. It allows domain experts to apply their prior knowledge for
configuring the underlying training data and refining prediction models.
Additionally, our model steering system has been evaluated for a
healthcare-focused scenario with 174 healthcare experts through three extensive
user studies. Our findings highlight the importance of involving domain experts
during model steering, ultimately leading to improved human-AI collaboration.
| [
{
"created": "Fri, 17 May 2024 07:27:48 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Bhattacharya",
"Aditya",
""
],
[
"Stumpf",
"Simone",
""
],
[
"Verbert",
"Katrien",
""
]
] |
2405.13049 | Fanfan Wang | Fanfan Wang, Heqing Ma, Jianfei Yu, Rui Xia, Erik Cambria | SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations | Accepted to the 18th International Workshop on Semantic Evaluation
(SemEval-2024). 12 pages, 3 figures, 4 Tables | https://aclanthology.org/2024.semeval-1.277/ | null | null | cs.CL cs.AI cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to understand emotions is an essential component of human-like
artificial intelligence, as emotions greatly influence human cognition,
decision making, and social interactions. In addition to emotion recognition in
conversations, the task of identifying the potential causes behind an
individual's emotional state in conversations, is of great importance in many
application scenarios. We organize SemEval-2024 Task 3, named Multimodal
Emotion Cause Analysis in Conversations, which aims at extracting all pairs of
emotions and their corresponding causes from conversations. Under different
modality settings, it consists of two subtasks: Textual Emotion-Cause Pair
Extraction in Conversations (TECPE) and Multimodal Emotion-Cause Pair
Extraction in Conversations (MECPE). The shared task has attracted 143
registrations and 216 successful submissions. In this paper, we introduce the
task, dataset and evaluation settings, summarize the systems of the top teams,
and discuss the findings of the participants.
| [
{
"created": "Sun, 19 May 2024 09:59:00 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Jun 2024 03:12:01 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Jul 2024 07:32:28 GMT",
"version": "v3"
}
] | 2024-07-09 | [
[
"Wang",
"Fanfan",
""
],
[
"Ma",
"Heqing",
""
],
[
"Yu",
"Jianfei",
""
],
[
"Xia",
"Rui",
""
],
[
"Cambria",
"Erik",
""
]
] |
2405.13135 | Tong Zeng | Tong Zeng, Daniel Acuna | Dataset Mention Extraction in Scientific Articles Using Bi-LSTM-CRF
Model | null | Rich Search and Discovery for Research Datasets, 2020, 158-165 | 10.5281/zenodo.4402304 | null | cs.CL cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Datasets are critical for scientific research, playing an important role in
replication, reproducibility, and efficiency. Researchers have recently shown
that datasets are becoming more important for science to function properly,
even serving as artifacts of study themselves. However, citing datasets is not
a common or standard practice in spite of recent efforts by data repositories
and funding agencies. This greatly affects our ability to track their usage and
importance. A potential solution to this problem is to automatically extract
dataset mentions from scientific articles. In this work, we propose to achieve
such extraction by using a neural network based on a Bi-LSTM-CRF architecture.
Our method achieves F1 = 0.885 in social science articles released as part of
the Rich Context Dataset. We discuss the limitations of the current datasets
and propose modifications to the model to be done in the future.
| [
{
"created": "Tue, 21 May 2024 18:12:37 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Zeng",
"Tong",
""
],
[
"Acuna",
"Daniel",
""
]
] |
2405.13197 | Zhanchao Huang | Zhanchao Huang, Wenjun Hong, Hua Su | Global-Local Detail Guided Transformer for Sea Ice Recognition in
Optical Remote Sensing Images | 5 pages, 5 figures | IEEE IGARSS 2024 | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The recognition of sea ice is of great significance for reflecting climate
change and ensuring the safety of ship navigation. Recently, many deep learning
based methods have been proposed and applied to segment and recognize sea ice
regions. However, the diverse scales of sea ice areas, the zigzag and fine edge
contours, and the difficulty in distinguishing different types of sea ice pose
challenges to existing sea ice recognition models. In this paper, a
Global-Local Detail Guided Transformer (GDGT) method is proposed for sea ice
recognition in optical remote sensing images. In GDGT, a global-local feature
fusion mechanism is designed to fuse global structural correlation features
and local spatial detail features. Furthermore, a detail-guided decoder is
developed to retain more high-resolution detail information during feature
reconstruction for improving the performance of sea ice recognition.
Experiments on the produced sea ice dataset demonstrated the effectiveness and
advancement of GDGT.
| [
{
"created": "Tue, 21 May 2024 21:02:20 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Huang",
"Zhanchao",
""
],
[
"Hong",
"Wenjun",
""
],
[
"Su",
"Hua",
""
]
] |
2405.13229 | Rochana Obadage | Obadage Rochana Rumalshan, Pramuka Weerasinghe, Mohamed Shaheer,
Prabhath Gunathilake, Erunika Dayaratna | Transfer Learning Approach for Railway Technical Map (RTM) Component
Identification | 9 pages, 8 figures | Lecture Notes in Networks and Systems: 465 (2022) 479-488 | 10.1007/978-981-19-2397-5_44 | null | cs.CV cs.AI cs.DL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The enduring popularity of railway transportation makes it necessary to maintain efficient railway management systems around the globe. At present, there exists a large collection of Computer-Aided Designed Railway Technical Maps (RTMs), but they are available only in Portable Document Format (PDF). Using Deep Learning and Optical Character Recognition techniques, this research work proposes a generic system to digitize the relevant map component data from a given input image and create a formatted text file per image. Of the YOLOv3, SSD and Faster-RCNN object detection models used, Faster-RCNN yields the highest mean Average Precision (mAP) and the highest F1 score, with values of 0.68 and 0.76 respectively. Further, the results show that OCR accuracy improves when the text-containing image is passed through a sophisticated pre-processing pipeline to remove distortions.
| [
{
"created": "Tue, 21 May 2024 22:35:08 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Rumalshan",
"Obadage Rochana",
""
],
[
"Weerasinghe",
"Pramuka",
""
],
[
"Shaheer",
"Mohamed",
""
],
[
"Gunathilake",
"Prabhath",
""
],
[
"Dayaratna",
"Erunika",
""
]
] |
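A generic example of the kind of pre-processing pipeline the abstract says helps OCR: grayscale conversion, denoising, and adaptive thresholding with OpenCV. The specific steps and parameters of the paper's pipeline are not given here, so these are assumptions.

```python
import cv2
import numpy as np

def preprocess_for_ocr(image_bgr):
    """Distortion-reduction ahead of OCR: grayscale, denoise, binarize."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    denoised = cv2.GaussianBlur(gray, (3, 3), 0)
    binary = cv2.adaptiveThreshold(denoised, 255,
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 15)
    return binary

# Synthetic page standing in for a rendered RTM PDF (placeholder text).
page = np.full((200, 300, 3), 230, dtype=np.uint8)
cv2.putText(page, "RTM-42", (40, 100), cv2.FONT_HERSHEY_SIMPLEX,
            1, (0, 0, 0), 2)
print(preprocess_for_ocr(page).shape)
```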
2405.13237 | Noor Nakhaei | Noor Nakhaei, Chrysostomos Marasinou, Akinyinka Omigbodun, Nina
Capiro, Bo Li, Anne Hoyt, and William Hsu | Spatial Matching of 2D Mammography Images and Specimen Radiographs:
Towards Improved Characterization of Suspicious Microcalcifications | null | Medical Imaging 2021: Computer-Aided Diagnosis (Vol. 11597, pp.
511-516). SPIE | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Accurate characterization of suspicious microcalcifications is critical to
determine whether these calcifications are associated with invasive disease.
Our overarching objective is to enable the joint characterization of
microcalcifications and surrounding breast tissue using mammography images and
digital histopathology images. Towards this goal, we investigate a template
matching-based approach that utilizes microcalcifications as landmarks to match
radiographs taken of biopsy core specimens to groups of calcifications that are
visible on mammography. Our approach achieved a high negative predictive value
(0.98) but modest precision (0.66) and recall (0.58) in identifying the
mammographic region where microcalcifications were taken during a core needle
biopsy.
| [
{
"created": "Tue, 21 May 2024 22:51:06 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Nakhaei",
"Noor",
""
],
[
"Marasinou",
"Chrysostomos",
""
],
[
"Omigbodun",
"Akinyinka",
""
],
[
"Capiro",
"Nina",
""
],
[
"Li",
"Bo",
""
],
[
"Hoyt",
"Anne",
""
],
[
"Hsu",
"William",
""
]
] |
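Template matching with microcalcifications as landmarks can be illustrated with OpenCV's normalized cross-correlation; the data below are synthetic stand-ins for a mammogram patch and a specimen-radiograph template.

```python
import cv2
import numpy as np

# Synthetic "mammogram" with one bright microcalcification-like landmark.
mammo = np.zeros((256, 256), dtype=np.uint8)
mammo[100:106, 150:156] = 255
template = mammo[88:118, 138:168].copy()   # crop around the landmark

# Normalized cross-correlation; the peak marks the best-matching region.
res = cv2.matchTemplate(mammo, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(res)
print(max_loc, round(float(max_val), 3))   # top-left corner of best match
```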
2405.13438 | Moises Diaz | Moises Diaz, Miguel Angel Ferrer, Donato Impedovo, Giuseppe Pirlo,
Gennaro Vessio | Dynamically enhanced static handwriting representation for Parkinson's
disease detection | null | Pattern Recognition Letters, vol. 128, pp. 204-210 (2019) | 10.1016/j.patrec.2019.08.018 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Computer aided diagnosis systems can provide non-invasive, low-cost tools to
support clinicians. These systems have the potential to assist the diagnosis
and monitoring of neurodegenerative disorders, in particular Parkinson's
disease (PD). Handwriting plays a special role in the context of PD assessment.
In this paper, the discriminating power of "dynamically enhanced" static images
of handwriting is investigated. The enhanced images are synthetically generated
by exploiting simultaneously the static and dynamic properties of handwriting.
Specifically, we propose a static representation that embeds dynamic
information based on: (i) drawing the points of the samples, instead of linking
them, so as to retain temporal/velocity information; and (ii) adding pen-ups
for the same purpose. To evaluate the effectiveness of the new handwriting
representation, a fair comparison between this approach and state-of-the-art
methods based on static and dynamic handwriting is conducted on the same
dataset, i.e. PaHaW. The classification workflow employs transfer learning to
extract meaningful features from multiple representations of the input data. An
ensemble of different classifiers is used to achieve the final predictions.
Dynamically enhanced static handwriting is able to outperform the results
obtained by using static and dynamic handwriting separately.
| [
{
"created": "Wed, 22 May 2024 08:28:42 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Diaz",
"Moises",
""
],
[
"Ferrer",
"Miguel Angel",
""
],
[
"Impedovo",
"Donato",
""
],
[
"Pirlo",
"Giuseppe",
""
],
[
"Vessio",
"Gennaro",
""
]
] |
2405.13555 | Moises Diaz | Moises Diaz, Miguel A. Ferrer, Donato Impedovo, Muhammad Imran Malik,
Giuseppe Pirlo, and Rejean Plamondon | A Perspective Analysis of Handwritten Signature Technology | null | ACM Computing Surveys (CSUR), vol.51, no 6, pp. 117:1-117:39
(2018) | 10.1145/3274658 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Handwritten signatures are biometric traits at the center of debate in the
scientific community. Over the last 40 years, the interest in signature studies
has grown steadily, having as its main reference the application of automatic
signature verification, as previously published reviews in 1989, 2000, and 2008
bear witness. Ever since, and over the last 10 years, the application of
handwritten signature technology has strongly evolved, and much research has
focused on the possibility of applying systems based on handwritten signature
analysis and processing to a multitude of new fields. After several years of
haphazard growth of this research area, it is time to assess its current
developments for their applicability in order to draw a structured way forward.
This perspective reports a systematic review of the last 10 years of the
literature on handwritten signatures with respect to the new scenario, focusing
on the most promising domains of research and trying to elicit possible future
research directions in this subject.
| [
{
"created": "Wed, 22 May 2024 11:41:19 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Diaz",
"Moises",
""
],
[
"Ferrer",
"Miguel A.",
""
],
[
"Impedovo",
"Donato",
""
],
[
"Malik",
"Muhammad Imran",
""
],
[
"Pirlo",
"Giuseppe",
""
],
[
"Plamondon",
"Rejean",
""
]
] |
2405.13557 | Emanuele Aiello | Luca Savant Aira, Antonio Montanaro, Emanuele Aiello, Diego Valsesia,
Enrico Magli | MotionCraft: Physics-based Zero-Shot Video Generation | null | NeurIPS 2024 | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Generating videos with realistic and physically plausible motion is one of
the main recent challenges in computer vision. While diffusion models are
achieving compelling results in image generation, video diffusion models are
limited by heavy training and huge models, resulting in videos that are still
biased to the training dataset. In this work we propose MotionCraft, a new
zero-shot video generator to craft physics-based and realistic videos.
MotionCraft is able to warp the noise latent space of an image diffusion model,
such as Stable Diffusion, by applying an optical flow derived from a physics
simulation. We show that warping the noise latent space results in coherent
application of the desired motion while allowing the model to generate missing
elements consistent with the scene evolution, which would otherwise result in
artefacts or missing content if the flow was applied in the pixel space. We
compare our method with the state-of-the-art Text2Video-Zero reporting
qualitative and quantitative improvements, demonstrating the effectiveness of
our approach to generate videos with finely-prescribed complex motion dynamics.
Project page: https://mezzelfo.github.io/MotionCraft/
| [
{
"created": "Wed, 22 May 2024 11:44:57 GMT",
"version": "v1"
}
] | 2024-10-01 | [
[
"Aira",
"Luca Savant",
""
],
[
"Montanaro",
"Antonio",
""
],
[
"Aiello",
"Emanuele",
""
],
[
"Valsesia",
"Diego",
""
],
[
"Magli",
"Enrico",
""
]
] |
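The central operation, warping the noise latent space with a physics-derived optical flow, can be sketched with grid_sample. Shapes follow Stable Diffusion's 4-channel latents; the flow handling is our assumption of the general mechanism, and MotionCraft's actual procedure may differ.

```python
import torch
import torch.nn.functional as F

def warp_latents(latents, flow):
    """Backward-warp latents (N, C, H, W) with a per-pixel (dx, dy) flow
    field (N, 2, H, W) using bilinear sampling."""
    n, _, h, w = latents.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)  # (1, 2, H, W)
    coords = base + flow                                      # sample sources
    # Normalize pixel coordinates to [-1, 1] as grid_sample expects.
    coords[:, 0] = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords[:, 1] = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = coords.permute(0, 2, 3, 1)                         # (N, H, W, 2)
    return F.grid_sample(latents, grid, align_corners=True)

z = torch.randn(1, 4, 64, 64)                 # SD-sized noise latent
flow = torch.zeros(1, 2, 64, 64)
flow[:, 0] = 2.0                              # uniform 2-px shift along x
print(warp_latents(z, flow).shape)            # torch.Size([1, 4, 64, 64])
```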
2405.13606 | Anastasija Nikiforova | Anastasija Nikiforova, Martin Lnenicka, Petar Milić, Mariusz Luterek and Manuel Pedro Rodríguez Bolívar | From the evolution of public data ecosystems to the evolving horizons of
the forward-looking intelligent public data ecosystem empowered by emerging
technologies | null | In: Janssen, M, J. Crompvoets, J. Ramon Gil-Garcia, H. Lee, I
Lindgren, A Nikiforova, G. Viale Pereira. Electronic Government. EGOV 2024.
Lecture Notes in Computer Science, Springer, Cham | null | null | cs.CY cs.AI cs.ET cs.HC cs.IR | http://creativecommons.org/licenses/by/4.0/ | Public data ecosystems (PDEs) represent complex socio-technical systems
crucial for optimizing data use in the public sector and outside it.
Recognizing their multifaceted nature, previous research pro-posed a
six-generation Evolutionary Model of Public Data Ecosystems (EMPDE). Designed
as a result of a systematic literature review on the topic spanning three
decade, this model, while theoretically robust, necessitates empirical
validation to enhance its practical applicability. This study addresses this
gap by validating the theoretical model through a real-life examination in five
European countries - Latvia, Serbia, Czech Republic, Spain, and Poland. This
empirical validation provides insights into PDEs dynamics and variations of
implementations across contexts, particularly focusing on the sixth, forward-looking PDE generation, named "Intelligent Public Data Generation", which
represents a paradigm shift driven by emerging technologies such as cloud
computing, Artificial Intelligence, Natural Language Processing tools,
Generative AI, and Large Language Models (LLM) with potential to contribute to
both automation and augmentation of business processes within these ecosystems.
By transcending their traditional status as a mere component, evolving into
both an actor and a stakeholder simultaneously, these technologies catalyze
innovation and progress, enhancing PDE management strategies to align with
societal, regulatory, and technical imperatives in the digital era.
| [
{
"created": "Wed, 22 May 2024 12:58:02 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Nikiforova",
"Anastasija",
""
],
[
"Lnenicka",
"Martin",
""
],
[
"Milić",
"Petar",
""
],
[
"Luterek",
"Mariusz",
""
],
[
"Bolívar",
"Manuel Pedro Rodríguez",
""
]
] |
2405.13786 | Aurora Ramirez | Aurora Ramírez and Mario Berrios and José Raúl Romero and Robert Feldt | Towards Explainable Test Case Prioritisation with Learning-to-Rank
Models | 3rd International Workshop on Artificial Intelligence in Software
Testing (AIST) - International Conference on Software Testing and Validation
(ICST) | Proc. 2023 IEEE International Conference on Software Testing,
Verification and Validation Workshops (ICSTW), pp. 66-69 | 10.1109/ICSTW58534.2023.00023 | null | cs.SE cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Test case prioritisation (TCP) is a critical task in regression testing to
ensure quality as software evolves. Machine learning has become a common way to
achieve it. In particular, learning-to-rank (LTR) algorithms provide an
effective method of ordering and prioritising test cases. However, their use
poses a challenge in terms of explainability, both globally at the model level
and locally for particular results. Here, we present and discuss scenarios that
require different explanations and how the particularities of TCP (multiple
builds over time, test case and test suite variations, etc.) could influence
them. We include a preliminary experiment to analyse the similarity of
explanations, showing that they do not only vary depending on test
case-specific predictions, but also on the relative ranks.
| [
{
"created": "Wed, 22 May 2024 16:11:45 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Ramírez",
"Aurora",
""
],
[
"Berrios",
"Mario",
""
],
[
"Romero",
"José Raúl",
""
],
[
"Feldt",
"Robert",
""
]
] |
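A minimal sketch of the learning-to-rank setup that such TCP work builds on,
using xgboost's XGBRanker on synthetic data (the features, group sizes, and
relevance labels below are hypothetical stand-ins, not the authors' pipeline):

    import numpy as np
    from xgboost import XGBRanker

    rng = np.random.default_rng(0)
    # Hypothetical per-test-case features: duration, recent failures, code churn.
    X = rng.normal(size=(60, 3))
    y = rng.integers(0, 3, size=60)        # relevance: 0 = pass, 2 = recent failure
    group = [20, 20, 20]                   # three CI builds with 20 test cases each

    ranker = XGBRanker(objective="rank:pairwise", n_estimators=50)
    ranker.fit(X, y, group=group)

    # Prioritise the test cases of a new build: higher score = run earlier.
    scores = ranker.predict(X[:20])
    order = np.argsort(-scores)
    print("suggested execution order:", order)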
2405.13843 | Md. Toukir Ahmed | Md. Toukir Ahmed, Md Wadud Ahmed, Ocean Monjur, Jason Lee Emmert,
Girish Chowdhary, Mohammed Kamruzzaman | Hyperspectral Image Reconstruction for Predicting Chick Embryo Mortality
Towards Advancing Egg and Hatchery Industry | Under review | Smart Agricultural Technology,Volume 9 , December 2024 | 10.1016/j.atech.2024.100533 | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As the demand for food surges and the agricultural sector undergoes a
transformative shift towards sustainability and efficiency, the need for
precise and proactive measures to ensure the health and welfare of livestock
becomes paramount. In the context of the broader agricultural landscape
outlined, the application of Hyperspectral Imaging (HSI) takes on profound
significance. HSI has emerged as a cutting-edge, non-destructive technique for
fast and accurate egg quality analysis, including the detection of chick embryo
mortality. However, the high cost and operational complexity compared to
conventional RGB imaging are significant bottlenecks in the widespread adoption
of HSI technology. To overcome these hurdles and unlock the full potential of
HSI, a promising solution is hyperspectral image reconstruction from standard
RGB images. This study aims to reconstruct hyperspectral images from RGB images
for non-destructive early prediction of chick embryo mortality. Firstly, the
performance of different image reconstruction algorithms, such as HRNET, MST++,
Restormer, and EDSR, was compared for reconstructing the hyperspectral images of
the eggs in the early incubation period. Later, the reconstructed spectra were
used to differentiate live from dead chick-producing eggs using the XGBoost and
Random Forest classification methods. Among the reconstruction methods, HRNET
showed impressive reconstruction performance with MRAE of 0.0955, RMSE of
0.0159, and PSNR of 36.79 dB. This study shows that harnessing imaging
technology integrated with smart sensors and data analytics has the potential
to improve automation, enhance biosecurity, and optimize resource management
towards sustainable Agriculture 4.0.
| [
{
"created": "Wed, 22 May 2024 17:12:15 GMT",
"version": "v1"
}
] | 2024-08-29 | [
[
"Ahmed",
"Md. Toukir",
""
],
[
"Ahmed",
"Md Wadud",
""
],
[
"Monjur",
"Ocean",
""
],
[
"Emmert",
"Jason Lee",
""
],
[
"Chowdhary",
"Girish",
""
],
[
"Kamruzzaman",
"Mohammed",
""
]
] |
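A minimal sketch of the downstream classification step (live vs. dead embryo)
on reconstructed spectra; synthetic vectors stand in for the real reconstructed
hyperspectral data, and this is not the authors' code:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(1)
    # Each row: one egg's reconstructed spectrum (band count is an assumption).
    spectra = rng.normal(size=(200, 204))
    labels = rng.integers(0, 2, size=200)  # 0 = dead, 1 = live (synthetic)

    X_tr, X_te, y_tr, y_te = train_test_split(spectra, labels, test_size=0.3,
                                              random_state=42)
    clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))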
2405.14206 | Guotao Liang | Guotao Liang, Baoquan Zhang, Yaowei Wang, Xutao Li, Yunming Ye,
Huaibin Wang, Chuyao Luo, Kola Ye, linfeng Luo | LG-VQ: Language-Guided Codebook Learning | Accepted by NeurIPS 2024 | NeurIPS 2024 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Vector quantization (VQ) is a key technique in high-resolution and
high-fidelity image synthesis, which aims to learn a codebook to encode an
image with a sequence of discrete codes and then generate an image in an
auto-regressive manner. Although existing methods have shown superior
performance, most methods prefer to learn a single-modal codebook (\emph{e.g.},
image), resulting in suboptimal performance when the codebook is applied to
multi-modal downstream tasks (\emph{e.g.}, text-to-image, image captioning) due
to the existence of modal gaps. In this paper, we propose a novel
language-guided codebook learning framework, called LG-VQ, which aims to learn
a codebook that can be aligned with the text to improve the performance of
multi-modal downstream tasks. Specifically, we first introduce pre-trained text
semantics as prior knowledge, then design two novel alignment modules
(\emph{i.e.}, Semantic Alignment Module, and Relationship Alignment Module) to
transfer such prior knowledge into codes for achieving codebook text alignment.
In particular, our LG-VQ method is model-agnostic, which can be easily
integrated into existing VQ models. Experimental results show that our method
achieves superior performance on reconstruction and various multi-modal
downstream tasks.
| [
{
"created": "Thu, 23 May 2024 06:04:40 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Oct 2024 04:30:30 GMT",
"version": "v2"
}
] | 2024-10-10 | [
[
"Liang",
"Guotao",
""
],
[
"Zhang",
"Baoquan",
""
],
[
"Wang",
"Yaowei",
""
],
[
"Li",
"Xutao",
""
],
[
"Ye",
"Yunming",
""
],
[
"Wang",
"Huaibin",
""
],
[
"Luo",
"Chuyao",
""
],
[
"Ye",
"Kola",
""
],
[
"Luo",
"linfeng",
""
]
] |
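The vector-quantization step that LG-VQ builds on can be sketched as a generic
nearest-neighbour codebook lookup in PyTorch; this illustrates plain VQ, not
the LG-VQ model or its semantic/relationship alignment modules:

    import torch

    codebook = torch.randn(512, 64)            # 512 codes of dimension 64
    z = torch.randn(8, 64)                     # encoder outputs for 8 patches

    # Squared distances between each latent and every codebook entry.
    d = (z.pow(2).sum(1, keepdim=True)
         - 2 * z @ codebook.t()
         + codebook.pow(2).sum(1))
    codes = d.argmin(dim=1)                    # discrete code indices
    z_q = codebook[codes]                      # quantized latents

    # Straight-through estimator so gradients still reach the encoder.
    z_q = z + (z_q - z).detach()
    print(codes)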
2405.14265 | Jerome Arjonilla | Brahim Driss, J\'er\^ome Arjonilla, Hui Wang, Abdallah Saffidine,
Tristan Cazenave | Deep Reinforcement Learning for 5*5 Multiplayer Go | Accepted in EvoApps at Evostar2023 | International Conference on the Applications of Evolutionary
Computation (Part of EvoStar), 2023, 753--764 | 10.1007/978-3-031-30229-9_48 | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | In recent years, much progress has been made in computer Go and most of the
results have been obtained thanks to search algorithms (Monte Carlo Tree
Search) and Deep Reinforcement Learning (DRL). In this paper, we propose to use
and analyze the latest algorithms that use search and DRL (AlphaZero and
Descent algorithms) to automatically learn to play an extended version of the
game of Go with more than two players. We show that using search and DRL we
were able to improve the level of play, even though there are more than two
players.
| [
{
"created": "Thu, 23 May 2024 07:44:24 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Driss",
"Brahim",
""
],
[
"Arjonilla",
"Jérôme",
""
],
[
"Wang",
"Hui",
""
],
[
"Saffidine",
"Abdallah",
""
],
[
"Cazenave",
"Tristan",
""
]
] |
2405.14307 | Weigang Lu | Weigang Lu, Ziyu Guan, Wei Zhao, and Yaming Yang | AdaGMLP: AdaBoosting GNN-to-MLP Knowledge Distillation | Accepted by KDD 2024 | KDD 2024 | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Neural Networks (GNNs) have revolutionized graph-based machine
learning, but their heavy computational demands pose challenges for
latency-sensitive edge devices in practical industrial applications. In
response, a new wave of methods, collectively known as GNN-to-MLP Knowledge
Distillation, has emerged. They aim to transfer GNN-learned knowledge to a more
efficient MLP student, which offers faster, resource-efficient inference while
maintaining competitive performance compared to GNNs. However, these methods
face significant challenges in situations with insufficient training data and
incomplete test data, limiting their applicability in real-world applications.
To address these challenges, we propose AdaGMLP, an AdaBoosting GNN-to-MLP
Knowledge Distillation framework. It leverages an ensemble of diverse MLP
students trained on different subsets of labeled nodes, addressing the issue of
insufficient training data. Additionally, it incorporates a Node Alignment
technique for robust predictions on test data with missing or incomplete
features. Our experiments on seven benchmark datasets with different settings
demonstrate that AdaGMLP outperforms existing G2M methods, making it suitable
for a wide range of latency-sensitive real-world applications. We have
submitted our code to the GitHub repository
(https://github.com/WeigangLu/AdaGMLP-KDD24).
| [
{
"created": "Thu, 23 May 2024 08:28:44 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Lu",
"Weigang",
""
],
[
"Guan",
"Ziyu",
""
],
[
"Zhao",
"Wei",
""
],
[
"Yang",
"Yaming",
""
]
] |
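The GNN-to-MLP distillation idea can be conveyed with a plain single-student
sketch: an MLP is trained to match a (pre-trained) GNN teacher's soft
predictions from raw node features alone. This is generic knowledge
distillation, not AdaGMLP's AdaBoosting ensemble or node-alignment technique,
and the data here are synthetic:

    import torch
    import torch.nn.functional as F

    n, d, c = 100, 16, 4
    x = torch.randn(n, d)                       # node features (synthetic)
    teacher_logits = torch.randn(n, c)          # assumed GNN teacher outputs

    student = torch.nn.Sequential(
        torch.nn.Linear(d, 64), torch.nn.ReLU(), torch.nn.Linear(64, c))
    opt = torch.optim.Adam(student.parameters(), lr=1e-2)

    for _ in range(100):
        log_p_student = F.log_softmax(student(x), dim=1)
        p_teacher = F.softmax(teacher_logits, dim=1)
        loss = F.kl_div(log_p_student, p_teacher, reduction="batchmean")
        opt.zero_grad(); loss.backward(); opt.step()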
2405.14334 | Yitao Peng | Yitao Peng, Lianghua He, Die Hu | Hierarchical Salient Patch Identification for Interpretable Fundus
Disease Localization | null | IEEE BIBM 2024 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the widespread application of deep learning technology in medical image
analysis, the effective explanation of model predictions and improvement of
diagnostic accuracy have become urgent problems that need to be solved.
Attribution methods have become key tools to help doctors better understand the
diagnostic basis of models, and are used to explain and localize diseases in
medical images. However, previous methods suffer from inaccurate and incomplete
localization problems for fundus diseases with complex and diverse structures.
To solve these problems, we propose a weakly supervised interpretable fundus
disease localization method called hierarchical salient patch identification
(HSPI) that can achieve interpretable disease localization using only
image-level labels and a neural network classifier (NNC). First, we propose
salient patch identification (SPI), which divides the image into several
patches and optimizes consistency loss to identify which patch in the input
image is most important for the network's prediction, in order to locate the
disease. Second, we propose a hierarchical identification strategy that forces SPI
to analyze the importance of different areas to the neural network classifier's
prediction in order to comprehensively locate disease areas. Conditional peak focusing
is then introduced to ensure that the mask vector can accurately locate the
disease area. Finally, we propose patch selection based on multi-sized
intersections to filter out incorrectly or additionally identified non-disease
regions. We conduct disease localization experiments on fundus image datasets
and achieve the best performance on multiple evaluation metrics compared to
previous interpretable attribution methods. Additional ablation studies are
conducted to verify the effectiveness of each method.
| [
{
"created": "Thu, 23 May 2024 09:07:21 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Aug 2024 13:46:18 GMT",
"version": "v2"
}
] | 2024-08-22 | [
[
"Peng",
"Yitao",
""
],
[
"He",
"Lianghua",
""
],
[
"Hu",
"Die",
""
]
] |
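The core notion of scoring how much each image patch matters to a classifier's
prediction can be illustrated with a simple occlusion probe. This sketch is not
the HSPI method (which optimizes a consistency loss hierarchically); it only
conveys the underlying intuition, and `model` is a hypothetical classifier:

    import torch

    def patch_importance(model, image, label, patch=32):
        """Drop in predicted probability when each patch is zeroed out."""
        model.eval()
        with torch.no_grad():
            base = torch.softmax(model(image[None]), dim=1)[0, label]
            _, h, w = image.shape
            scores = torch.zeros(h // patch, w // patch)
            for i in range(h // patch):
                for j in range(w // patch):
                    occluded = image.clone()
                    occluded[:, i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0
                    p = torch.softmax(model(occluded[None]), dim=1)[0, label]
                    scores[i, j] = base - p   # large drop = salient patch
        return scores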
2405.14346 | Jerome Arjonilla | J\'er\^ome Arjonilla, Abdallah Saffidine, Tristan Cazenave | Mixture of Public and Private Distributions in Imperfect Information
Games | Accepted in CoG 2023 | 2023 IEEE Conference on Games (CoG) | 10.1109/CoG57401.2023.10333169 | null | cs.AI cs.GT | http://creativecommons.org/licenses/by-sa/4.0/ | In imperfect information games (e.g. Bridge, Skat, Poker), one of the
fundamental considerations is to infer the missing information while at the
same time avoiding the disclosure of private information. Disregarding the
issue of protecting private information can lead to highly exploitable
performance. Yet, excessive attention to it leads to hesitations that are no
longer consistent with our private information. In our work, we show that to
improve performance, one must choose whether to use a player's private
information. We extend our work by proposing a new belief distribution
depending on the amount of private and public information desired. We
empirically demonstrate an increase in performance and, with the aim of further
improving performance, the new distribution should be used according to the
position in the game. Our experiments have been done on multiple benchmarks and
in multiple determinization-based algorithms (PIMC and IS-MCTS).
| [
{
"created": "Thu, 23 May 2024 09:18:25 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Arjonilla",
"Jérôme",
""
],
[
"Saffidine",
"Abdallah",
""
],
[
"Cazenave",
"Tristan",
""
]
] |
2405.14409 | Moises Diaz | Moises Diaz, Miguel A. Ferrer, Soodamani Ramalingam and Richard Guest | Investigating the Common Authorship of Signatures by Off-Line Automatic
Signature Verification Without the Use of Reference Signatures | null | IEEE Transactions on Information Forensics and Security, vol.15,
no.1, pp. 487 to 499 (2019) | 10.1109/TIFS.2019.2924195 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In automatic signature verification, questioned specimens are usually
compared with reference signatures. In writer-dependent schemes, a number of
reference signatures are required to build up the individual signer model while
a writer-independent system requires a set of reference signatures from several
signers to develop the model of the system. This paper addresses the problem of
automatic signature verification when no reference signatures are available.
The scenario we explore consists of a set of signatures, which could be signed
by the same author or by multiple signers. As such, we discuss three methods
which estimate automatically the common authorship of a set of off-line
signatures. The first method develops a score similarity matrix, worked out
with the assistance of duplicated signatures; the second uses a
feature-distance matrix for each pair of signatures; and the last method
introduces pre-classification based on the complexity of each signature.
Publicly available signatures were used in the experiments, which gave
encouraging results. As a baseline for the performance obtained by our
approaches, we carried out a visual Turing Test where forensic and non-forensic
human volunteers, carrying out the same task, performed less well than the
automatic schemes.
| [
{
"created": "Thu, 23 May 2024 10:30:48 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Diaz",
"Moises",
""
],
[
"Ferrer",
"Miguel A.",
""
],
[
"Ramalingam",
"Soodamani",
""
],
[
"Guest",
"Richard",
""
]
] |
2405.14437 | Alejo Lopez-Avila | Alejo Lopez-Avila, V\'ictor Su\'arez-Paniagua | Combining Denoising Autoencoders with Contrastive Learning to fine-tune
Transformer Models | 1 figure, 7 tables, 12 pages | emnlp main, 2023, pages 2021 to 2032 | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Recently, using large pretrained Transformer models for transfer learning
tasks has evolved to the point where they have become one of the flagship
trends in the Natural Language Processing (NLP) community, giving rise to
various outlooks such as prompt-based, adapters or combinations with
unsupervised approaches, among many others. This work proposes a three-phase
technique to adjust a base model for a classification task. First, we adapt the
model's signal to the data distribution by performing further training with a
Denoising Autoencoder (DAE). Second, we adjust the representation space of the
output to the corresponding classes by clustering through a Contrastive
Learning (CL) method. In addition, we introduce a new data augmentation
approach for Supervised Contrastive Learning to correct the unbalanced
datasets. Third, we apply fine-tuning to delimit the predefined categories.
These different phases provide relevant and complementary knowledge to the
model to learn the final task. We supply extensive experimental results on
several datasets to demonstrate these claims. Moreover, we include an ablation
study and compare the proposed method against other ways of combining these
techniques.
| [
{
"created": "Thu, 23 May 2024 11:08:35 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Lopez-Avila",
"Alejo",
""
],
[
"Suárez-Paniagua",
"Víctor",
""
]
] |
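Phase one, further training with a denoising autoencoder, can be sketched
generically: corrupt an input representation with noise and train a
reconstruction loss. The sketch below operates on sentence-embedding vectors
with dropout noise; it is a simplified stand-in, not the paper's exact DAE or
its contrastive phase:

    import torch

    enc = torch.nn.Linear(768, 256)          # stand-ins for an adaptation
    dec = torch.nn.Linear(256, 768)          # head on the base model (assumption)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

    x = torch.randn(32, 768)                 # sentence embeddings (synthetic)
    for _ in range(200):
        noisy = torch.nn.functional.dropout(x, p=0.3)   # corrupt the input
        recon = dec(torch.relu(enc(noisy)))
        loss = torch.nn.functional.mse_loss(recon, x)   # reconstruct the original
        opt.zero_grad(); loss.backward(); opt.step()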
2405.14445 | Lena Schmidt | Lena Schmidt, Kaitlyn Hair, Sergio Graziozi, Fiona Campbell, Claudia
Kapp, Alireza Khanteymoori, Dawn Craig, Mark Engelbert, James Thomas | Exploring the use of a Large Language Model for data extraction in
systematic reviews: a rapid feasibility study | Conference proceedings, peer-reviewed and presented at the 3rd
Workshop on Augmented Intelligence for Technology-Assisted Reviews Systems,
Glasgow, 2024 | Proceedings of the 3rd Workshop on Augmented Intelligence for
Technology-Assisted Reviews Systems, 2024 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper describes a rapid feasibility study of using GPT-4, a large
language model (LLM), to (semi)automate data extraction in systematic reviews.
Despite the recent surge of interest in LLMs there is still a lack of
understanding of how to design LLM-based automation tools and how to robustly
evaluate their performance. During the 2023 Evidence Synthesis Hackathon we
conducted two feasibility studies. The first aimed to automatically extract study
characteristics from human clinical, animal, and social science domain studies.
We used two studies from each category for prompt development and ten for
evaluation. In the second, we used the LLM to predict Participants, Interventions,
Controls and Outcomes (PICOs) labelled within 100 abstracts in the EBM-NLP
dataset. Overall, results indicated an accuracy of around 80%, with some
variability between domains (82% for human clinical, 80% for animal, and 72%
for studies of human social sciences). Causal inference methods and study
design were the data extraction items with the most errors. In the PICO study,
participants and intervention/control showed high accuracy (>80%), outcomes
were more challenging. Evaluation was done manually; scoring methods such as
BLEU and ROUGE showed limited value. We observed variability in the LLM's
predictions and changes in response quality. This paper presents a template for
future evaluations of LLMs in the context of data extraction for systematic
review automation. Our results show that there might be value in using LLMs,
for example as second or third reviewers. However, caution is advised when
integrating models such as GPT-4 into tools. Further research on stability and
reliability in practical settings is warranted for each type of data that is
processed by the LLM.
| [
{
"created": "Thu, 23 May 2024 11:24:23 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Schmidt",
"Lena",
""
],
[
"Hair",
"Kaitlyn",
""
],
[
"Graziozi",
"Sergio",
""
],
[
"Campbell",
"Fiona",
""
],
[
"Kapp",
"Claudia",
""
],
[
"Khanteymoori",
"Alireza",
""
],
[
"Craig",
"Dawn",
""
],
[
"Engelbert",
"Mark",
""
],
[
"Thomas",
"James",
""
]
] |
2405.14626 | Pedro Neto | Laura Duarte, Pedro Neto | Event-based dataset for the detection and classification of
manufacturing assembly tasks | null | Data in Brief, Volume 54, 2024, 110340, ISSN 2352-3409 | 10.1016/j.dib.2024.110340 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The featured dataset, the Event-based Dataset of Assembly Tasks (EDAT24),
showcases a selection of manufacturing primitive tasks (idle, pick, place, and
screw), which are basic actions performed by human operators in any
manufacturing assembly. The data were captured using a DAVIS240C event camera,
an asynchronous vision sensor that registers events when changes in light
intensity value occur. Events are a lightweight data format for conveying
visual information and are well-suited for real-time detection and analysis of
human motion. Each manufacturing primitive has 100 recorded samples of
DAVIS240C data, including events and greyscale frames, for a total of 400
samples. In the dataset, the user interacts with objects from the open-source
CT-Benchmark in front of the static DAVIS event camera. All data are made
available in raw form (.aedat) and in pre-processed form (.npy). Custom-built
Python code is made available together with the dataset to help researchers
add new manufacturing primitives or extend the dataset with more samples.
| [
{
"created": "Thu, 23 May 2024 14:32:52 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Duarte",
"Laura",
""
],
[
"Neto",
"Pedro",
""
]
] |
2405.14796 | Mohamed Debbagh | Mohamed Debbagh, Yixue Liu, Zhouzhou Zheng, Xintong Jiang, Shangpeng
Sun, Mark Lefsrud | Generative Plant Growth Simulation from Sequence-Informed Environmental
Conditions | null | Artificial Neural Networks in Pattern Recognition. ANNPR 2024.
Lecture Notes in Computer Science(), vol. 15154, Springer, Cham, 2024, pp.
308-319 | 10.1007/978-3-031-71602-7_26 | null | cs.CV cs.AI q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A plant growth simulation can be characterized as a reconstructed visual
representation of a plant or plant system. The phenotypic characteristics and
plant structures are controlled by the scene environment and other contextual
attributes. Considering the temporal dependencies and compounding effects of
various factors on growth trajectories, we formulate a probabilistic approach
to the simulation task by solving a frame synthesis and pattern recognition
problem. We introduce a sequence-informed plant growth simulation framework
(SI-PGS) that employs a conditional generative model to implicitly learn a
distribution of possible plant representations within a dynamic scene from a
fusion of low-dimensional temporal sensor and context data. Methods such as
controlled latent sampling and recurrent output connections are used to improve
coherence in the plant structures between frames of prediction. In this work,
we demonstrate that SI-PGS is able to capture temporal dependencies and
continuously generate realistic frames of plant growth.
| [
{
"created": "Thu, 23 May 2024 17:06:46 GMT",
"version": "v1"
},
{
"created": "Mon, 27 May 2024 14:35:49 GMT",
"version": "v2"
},
{
"created": "Wed, 10 Jul 2024 01:49:45 GMT",
"version": "v3"
}
] | 2024-09-23 | [
[
"Debbagh",
"Mohamed",
""
],
[
"Liu",
"Yixue",
""
],
[
"Zheng",
"Zhouzhou",
""
],
[
"Jiang",
"Xintong",
""
],
[
"Sun",
"Shangpeng",
""
],
[
"Lefsrud",
"Mark",
""
]
] |
2405.14879 | Noel Conruyt | Ouassine Younes (LISI, Computer Science Department), Zahir Jihad
(LISI, Computer Science Department), Conruyt No\"el (LIM), Kayal Mohsen
(ENTROPIE (Nouvelle-Cal\'edonie)), A. Martin Philippe (LIM), Chenin Eric
(UMMISCO), Bigot Lionel (ENTROPIE (R\'eunion)), Vignes Lebbe Regine (ISYEB) | Automatic Coral Detection with YOLO: A Deep Learning Approach for
Efficient and Accurate Coral Reef Monitoring | null | ECAI 2023 International Workshops, Sep 2023, Krak{\'o}w, Poland.
pp.170-177 | 10.1007/978-3-031-50485-3_16 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Coral reefs are vital ecosystems that are under increasing threat due to
local human impacts and climate change. Efficient and accurate monitoring of
coral reefs is crucial for their conservation and management. In this paper, we
present an automatic coral detection system utilizing the You Only Look Once
(YOLO) deep learning model, which is specifically tailored for underwater
imagery analysis. To train and evaluate our system, we employ a dataset
consisting of 400 original underwater images. We increased the number of
annotated images to 580 through image manipulation using data augmentation
techniques, which can improve the model's performance by providing more diverse
examples for training. The dataset is carefully collected from underwater
videos that capture various coral reef environments, species, and lighting
conditions. Our system leverages the YOLOv5 algorithm's real-time object
detection capabilities, enabling efficient and accurate coral detection. We
used YOLOv5 to extract discriminating features from the annotated dataset,
enabling the system to generalize to previously unseen underwater
images. The successful implementation of the automatic coral detection system
with YOLOv5 on our original image dataset highlights the potential of advanced
computer vision techniques for coral reef research and conservation. Further
research will focus on refining the algorithm to handle challenging underwater
image conditions, and expanding the dataset to incorporate a wider range of
coral species and spatio-temporal variations.
| [
{
"created": "Wed, 3 Apr 2024 08:00:46 GMT",
"version": "v1"
}
] | 2024-05-27 | [
[
"Younes",
"Ouassine",
"",
"LISI, Computer Science Department"
],
[
"Jihad",
"Zahir",
"",
"LISI, Computer Science Department"
],
[
"Noël",
"Conruyt",
"",
"LIM"
],
[
"Mohsen",
"Kayal",
"",
"ENTROPIE"
],
[
"Philippe",
"A. Martin",
"",
"LIM"
],
[
"Eric",
"Chenin",
"",
"UMMISCO"
],
[
"Lionel",
"Bigot",
"",
"ENTROPIE"
],
[
"Regine",
"Vignes Lebbe",
"",
"ISYEB"
]
] |
2405.14900 | Kendall Schmidt | Kendall Schmidt (American College of Radiology, USA), Benjamin Bearce
(The Massachusetts General Hospital, USA and University of Colorado, USA),
Ken Chang (The Massachusetts General Hospital), Laura Coombs (American
College of Radiology, USA), Keyvan Farahani (National Institutes of Health
National Cancer Institute, USA), Marawan Elbatele (Computer Vision and
Robotics Institute, University of Girona, Spain), Kaouther Mouhebe (Computer
Vision and Robotics Institute, University of Girona, Spain), Robert Marti
(Computer Vision and Robotics Institute, University of Girona, Spain),
Ruipeng Zhang (Cooperative Medianet Innovation Center, Shanghai Jiao Tong
University, China and Shanghai AI Laboratory, China), Yao Zhang (Shanghai AI
Laboratory, China), Yanfeng Wang (Cooperative Medianet Innovation Center,
Shanghai Jiao Tong University, China and Shanghai AI Laboratory, China),
Yaojun Hu (Real Doctor AI Research Centre, Zhejiang University, China),
Haochao Ying (Real Doctor AI Research Centre, Zhejiang University, China and
School of Public Health, Zhejiang University, China), Yuyang Xu (Real Doctor
AI Research Centre, Zhejiang University, China and College of Computer
Science and Technology, Zhejiang University, China), Conrad Testagrose
(University of North Florida College of Computing Jacksonville, USA), Mutlu
Demirer (Mayo Clinic Florida Radiology, USA), Vikash Gupta (Mayo Clinic
Florida Radiology, USA), \"Unal Ak\"unal (Division of Medical Image
Computing, German Cancer Research Center, Heidelberg, Germany), Markus
Bujotzek (Division of Medical Image Computing, German Cancer Research Center,
Heidelberg, Germany), Klaus H. Maier-Hein (Division of Medical Image
Computing, German Cancer Research Center, Heidelberg, Germany), Yi Qin
(Electronic and Computer Engineering, Hong Kong University of Science and
Technology, China), Xiaomeng Li (Electronic and Computer Engineering, Hong
Kong University of Science and Technology, China), Jayashree Kalpathy-Cramer
(The Massachusetts General Hospital, USA and University of Colorado, USA),
Holger R. Roth (NVIDIA, USA) | Fair Evaluation of Federated Learning Algorithms for Automated Breast
Density Classification: The Results of the 2022 ACR-NCI-NVIDIA Federated
Learning Challenge | 16 pages, 9 figures | Medical Image Analysis Volume 95, July 2024, 103206 | 10.1016/j.media.2024.103206 | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | The correct interpretation of breast density is important in the assessment
of breast cancer risk. AI has been shown capable of accurately predicting
breast density, however, due to the differences in imaging characteristics
across mammography systems, models built using data from one system do not
generalize well to other systems. Though federated learning (FL) has emerged as
a way to improve the generalizability of AI without the need to share data, the
best way to preserve features from all training data during FL is an active
area of research. To explore FL methodology, the breast density classification
FL challenge was hosted in partnership with the American College of Radiology,
Harvard Medical School's Mass General Brigham, University of Colorado, NVIDIA,
and the National Institutes of Health National Cancer Institute. Challenge
participants were able to submit docker containers capable of implementing FL
on three simulated medical facilities, each containing a unique large
mammography dataset. The breast density FL challenge ran from June 15 to
September 5, 2022, attracting seven finalists from around the world. The
winning FL submission reached a linear kappa score of 0.653 on the challenge
test data and 0.413 on an external testing dataset, scoring comparably to a
model trained on the same data in a central location.
| [
{
"created": "Wed, 22 May 2024 19:54:09 GMT",
"version": "v1"
}
] | 2024-05-27 | [
[
"Schmidt",
"Kendall",
"",
"American College of Radiology, USA"
],
[
"Bearce",
"Benjamin",
"",
"The Massachusetts General Hospital, USA and University of Colorado, USA"
],
[
"Chang",
"Ken",
"",
"The Massachusetts General Hospital"
],
[
"Coombs",
"Laura",
"",
"American\n College of Radiology, USA"
],
[
"Farahani",
"Keyvan",
"",
"National Institutes of Health\n National Cancer Institute, USA"
],
[
"Elbatele",
"Marawan",
"",
"Computer Vision and\n Robotics Institute, University of Girona, Spain"
],
[
"Mouhebe",
"Kaouther",
"",
"Computer\n Vision and Robotics Institute, University of Girona, Spain"
],
[
"Marti",
"Robert",
"",
"Computer Vision and Robotics Institute, University of Girona, Spain"
],
[
"Zhang",
"Ruipeng",
"",
"Cooperative Medianet Innovation Center, Shanghai Jiao Tong\n University, China and Shanghai AI Laboratory, China"
],
[
"Zhang",
"Yao",
"",
"Shanghai AI\n Laboratory, China"
],
[
"Wang",
"Yanfeng",
"",
"Cooperative Medianet Innovation Center,\n Shanghai Jiao Tong University, China and Shanghai AI Laboratory, China"
],
[
"Hu",
"Yaojun",
"",
"Real Doctor AI Research Centre, Zhejiang University, China"
],
[
"Ying",
"Haochao",
"",
"Real Doctor AI Research Centre, Zhejiang University, China and\n School of Public Health, Zhejiang University, China"
],
[
"Xu",
"Yuyang",
"",
"Real Doctor\n AI Research Centre, Zhejiang University, China and College of Computer\n Science and Technology, Zhejiang University, China"
],
[
"Testagrose",
"Conrad",
"",
"University of North Florida College of Computing Jacksonville, USA"
],
[
"Demirer",
"Mutlu",
"",
"Mayo Clinic Florida Radiology, USA"
],
[
"Gupta",
"Vikash",
"",
"Mayo Clinic\n Florida Radiology, USA"
],
[
"Akünal",
"Ünal",
"",
"Division of Medical Image\n Computing, German Cancer Research Center, Heidelberg, Germany"
],
[
"Bujotzek",
"Markus",
"",
"Division of Medical Image Computing, German Cancer Research Center,\n Heidelberg, Germany"
],
[
"Maier-Hein",
"Klaus H.",
"",
"Division of Medical Image\n Computing, German Cancer Research Center, Heidelberg, Germany"
],
[
"Qin",
"Yi",
"",
"Electronic and Computer Engineering, Hong Kong University of Science and\n Technology, China"
],
[
"Li",
"Xiaomeng",
"",
"Electronic and Computer Engineering, Hong\n Kong University of Science and Technology, China"
],
[
"Kalpathy-Cramer",
"Jayashree",
"",
"The Massachusetts General Hospital, USA and University of Colorado, USA"
],
[
"Roth",
"Holger R.",
"",
"NVIDIA, USA"
]
] |
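The federated-learning setting of the challenge rests on aggregating locally
trained weights without sharing data. A minimal FedAvg-style aggregation sketch
(an illustration of the mechanics only, not any finalist's algorithm):

    import numpy as np

    def fed_avg(client_weights, client_sizes):
        """Weighted average of model parameters from simulated sites."""
        total = sum(client_sizes)
        return [
            sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
            for i in range(len(client_weights[0]))
        ]

    # Three sites with different dataset sizes; each 'model' is a list of arrays.
    sites = [[np.random.randn(4, 4), np.random.randn(4)] for _ in range(3)]
    global_model = fed_avg(sites, client_sizes=[1000, 500, 2000])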
2405.14986 | Amin Ahmadi Kasani | Amin Ahmadi Kasani, Hedieh Sajedi | Hand bone age estimation using divide and conquer strategy and
lightweight convolutional neural networks | null | Engineering Applications of Artificial Intelligence, Volume 120,
2023, 105935, ISSN 0952-1976 | 10.1016/j.engappai.2023.105935 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Estimating the Bone Age of children is very important for diagnosing growth
defects, and related diseases, and estimating the final height that children
reach after maturity. For this reason, it is widely used in different
countries. Traditional methods for estimating bone age are performed by
comparing atlas images and radiographic images of the left hand, which is
time-consuming and error-prone. To estimate bone age using deep neural network
models, a lot of research has been done, our effort has been to improve the
accuracy and speed of this process by using the introduced approach. After
creating and analyzing our initial model, we focused on preprocessing and made
the inputs smaller, and increased their quality. we selected small regions of
hand radiographs and estimated the age of the bone only according to these
regions. by doing this we improved bone age estimation accuracy even further
than what was achieved in related works, without increasing the required
computational resource. We reached a Mean Absolute Error (MAE) of 3.90 months
in the range of 0-20 years and an MAE of 3.84 months in the range of 1-18 years
on the RSNA test set.
| [
{
"created": "Thu, 23 May 2024 18:39:33 GMT",
"version": "v1"
}
] | 2024-05-27 | [
[
"Kasani",
"Amin Ahmadi",
""
],
[
"Sajedi",
"Hedieh",
""
]
] |
2405.15292 | Jokin Alcibar | Jokin Alcibar, Jose I. Aizpurua, Ekhi Zugasti | Towards a Probabilistic Fusion Approach for Robust Battery Prognostics | null | PHM Society European Conference, 8(1), 13 | 10.36001/phme.2024.v8i1.4143 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Batteries are a key enabling technology for the decarbonization of transport
and energy sectors. The safe and reliable operation of batteries is crucial for
battery-powered systems. In this direction, the development of accurate and
robust battery state-of-health prognostics models can unlock the potential of
autonomous systems for complex, remote and reliable operations. The combination
of Neural Networks, Bayesian modelling concepts and ensemble learning
strategies forms a valuable prognostics framework for combining uncertainty in a
robust and accurate manner. Accordingly, this paper introduces a Bayesian
ensemble learning approach to predict the capacity depletion of lithium-ion
batteries. The approach accurately predicts the capacity fade and quantifies
the uncertainty associated with battery design and degradation processes. The
proposed Bayesian ensemble methodology employs a stacking technique,
integrating multiple Bayesian neural networks (BNNs) as base learners, which
have been trained on data diversity. The proposed method has been validated
using a battery aging dataset collected by the NASA Ames Prognostics Center of
Excellence. Obtained results demonstrate the improved accuracy and robustness
of the proposed probabilistic fusion approach with respect to (i) a single BNN
model and (ii) a classical stacking strategy based on different BNNs.
| [
{
"created": "Fri, 24 May 2024 07:26:36 GMT",
"version": "v1"
}
] | 2024-07-16 | [
[
"Alcibar",
"Jokin",
""
],
[
"Aizpurua",
"Jose I.",
""
],
[
"Zugasti",
"Ekhi",
""
]
] |
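The stacking mechanics behind such an ensemble can be sketched with classical
scikit-learn regressors standing in for the Bayesian neural networks; the
capacity-fade data below are synthetic, and this is not the paper's
probabilistic model:

    import numpy as np
    from sklearn.ensemble import StackingRegressor, RandomForestRegressor
    from sklearn.linear_model import Ridge
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    cycles = rng.uniform(0, 200, size=(300, 1))              # charge cycles
    capacity = 2.0 - 0.004 * cycles[:, 0] + rng.normal(0, 0.02, 300)  # fade

    stack = StackingRegressor(
        estimators=[("mlp", MLPRegressor(max_iter=2000)),
                    ("rf", RandomForestRegressor())],
        final_estimator=Ridge())
    stack.fit(cycles, capacity)
    print(stack.predict([[150.0]]))                          # predicted capacity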
2405.15512 | Marc Oedingen | Marc Oedingen, Raphael C. Engelhardt, Robin Denz, Maximilian Hammer,
Wolfgang Konen | ChatGPT Code Detection: Techniques for Uncovering the Source of Code | Accepted for publication in MDPI AI Journal | AI. 2024; 5(3):1066-1094 | 10.3390/ai5030053 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | In recent times, large language models (LLMs) have made significant strides
in generating computer code, blurring the lines between code created by humans
and code produced by artificial intelligence (AI). As these technologies evolve
rapidly, it is crucial to explore how they influence code generation,
especially given the risk of misuse in areas like higher education. This paper
explores this issue by using advanced classification techniques to
differentiate between code written by humans and that generated by ChatGPT, a
type of LLM. We employ a new approach that combines powerful embedding features
(black-box) with supervised learning algorithms - including Deep Neural
Networks, Random Forests, and Extreme Gradient Boosting - to achieve this
differentiation with an impressive accuracy of 98%. For the successful
combinations, we also examine their model calibration, showing that some of the
models are extremely well calibrated. Additionally, we present white-box
features and an interpretable Bayes classifier to elucidate critical
differences between the code sources, enhancing the explainability and
transparency of our approach. Both approaches work well but provide at most
85-88% accuracy. We also show that untrained humans solve the same task no
better than random guessing. This study is crucial in understanding and
mitigating the potential risks associated with using AI in code generation,
particularly in the context of higher education, software development, and
competitive programming.
| [
{
"created": "Fri, 24 May 2024 12:56:18 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Jul 2024 10:23:01 GMT",
"version": "v2"
}
] | 2024-07-04 | [
[
"Oedingen",
"Marc",
""
],
[
"Engelhardt",
"Raphael C.",
""
],
[
"Denz",
"Robin",
""
],
[
"Hammer",
"Maximilian",
""
],
[
"Konen",
"Wolfgang",
""
]
] |
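The pipeline of embedding a code snippet and then classifying the embedding can
be sketched with TF-IDF character n-grams standing in for the paper's black-box
embedding features; the snippets and labels are toy data, not the study's
corpus:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.pipeline import make_pipeline

    snippets = ["for i in range(10): print(i)",
                "def add(a, b):\n    return a + b",
                "result = [x * 2 for x in data]",
                "while True:\n    pass"]
    labels = [1, 0, 1, 0]   # 1 = AI-generated, 0 = human (toy labels)

    clf = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
        RandomForestClassifier(n_estimators=100))
    clf.fit(snippets, labels)
    print(clf.predict(["x = sum(v for v in values)"]))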
2405.15550 | Moises Diaz | Shahid Ismail, Moises Diaz, Cristina Carmona-Duarte, Jose Manuel
Vilar, Miguel A. Ferrer | CowScreeningDB: A public benchmark dataset for lameness detection in
dairy cows | null | Computers and Electronics in Agriculture, vol.216, pp.108500, 2024 | 10.1016/j.compag.2023.108500 | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Lameness is one of the costliest pathological problems affecting dairy
animals. It is usually assessed by trained veterinary clinicians who observe
features such as gait symmetry or gait parameters as step counts in real-time.
With the development of artificial intelligence, various modular systems have
been proposed to minimize subjectivity in lameness assessment. However, the
major limitation in their development is the unavailability of a public dataset
which is currently either commercial or privately held. To tackle this
limitation, we have introduced CowScreeningDB which was created using sensory
data. This dataset was sourced from 43 cows at a dairy located in Gran Canaria,
Spain. It consists of a multi-sensor dataset built on data collected using an
Apple Watch 6 during the normal daily routine of a dairy cow. The
collection environment, sampling technique, information regarding the sensors,
and the applications used for data conversion and storage make the dataset a
transparent one. This transparency of data can thus be used for further
development of techniques for lameness detection in dairy cows which can be
objectively compared. Aside from the public sharing of the dataset, we have
also shared a machine-learning technique which classifies the cows as healthy
or lame using the raw sensory data. This validates the major objective,
which is to establish the relationship between sensor data and lameness.
| [
{
"created": "Fri, 24 May 2024 13:36:00 GMT",
"version": "v1"
}
] | 2024-05-27 | [
[
"Ismail",
"Shahid",
""
],
[
"Diaz",
"Moises",
""
],
[
"Carmona-Duarte",
"Cristina",
""
],
[
"Vilar",
"Jose Manuel",
""
],
[
"Ferrer",
"Miguel A.",
""
]
] |
2405.15561 | Andreas Bucher | Andreas Bucher, Birgit Schenk, Mateusz Dolata, Gerhard Schwabe | When Generative AI Meets Workplace Learning: Creating A Realistic &
Motivating Learning Experience With A Generative PCA | null | ECIS 2024 | null | null | cs.HC cs.AI | http://creativecommons.org/licenses/by/4.0/ | Workplace learning is used to train employees systematically, e.g., via
e-learning or in 1:1 training. However, this is often deemed ineffective and
costly. Whereas pure e-learning lacks the possibility of conversational
exercise and personal contact, 1:1 training with human instructors involves a
high level of personnel and organizational costs. Hence, pedagogical
conversational agents (PCAs), based on generative AI, seem to compensate for
the disadvantages of both forms. Following Action Design Research, this paper
describes an organizational communication training with a Generative PCA
(GenPCA). The evaluation shows promising results: the agent was perceived
positively among employees and contributed to an improvement in self-determined
learning. However, the integration of such an agent is not without limitations.
We conclude with suggestions concerning the didactical methods, which are
supported by a GenPCA, and possible improvements of such an agent for workplace
learning.
| [
{
"created": "Fri, 24 May 2024 13:49:18 GMT",
"version": "v1"
}
] | 2024-05-27 | [
[
"Bucher",
"Andreas",
""
],
[
"Schenk",
"Birgit",
""
],
[
"Dolata",
"Mateusz",
""
],
[
"Schwabe",
"Gerhard",
""
]
] |
2405.15564 | Rui Miao | Rui Miao, Kaixiong Zhou, Yili Wang, Ninghao Liu, Ying Wang, Xin Wang | Rethinking Independent Cross-Entropy Loss For Graph-Structured Data | 20 pages, 4 figures | ICML 2024 | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph neural networks (GNNs) have exhibited prominent performance in learning
graph-structured data. Considering the node classification task, based on the
i.i.d. assumption among node labels, traditional supervised learning simply sums
up cross-entropy losses of the independent training nodes and applies the
average loss to optimize GNNs' weights. But unlike other data formats,
the nodes are naturally connected. It is found that the independent
distribution modeling of node labels restricts GNNs' capability to generalize
over the entire graph and defend adversarial attacks. In this work, we propose
a new framework, termed joint-cluster supervised learning, to model the joint
distribution of each node with its corresponding cluster. We learn the joint
distribution of node and cluster labels conditioned on their representations,
and train GNNs with the obtained joint loss. In this way, the data-label
reference signals extracted from the local cluster explicitly strengthen the
discrimination ability on the target node. The extensive experiments
demonstrate that our joint-cluster supervised learning can effectively bolster
GNNs' node classification accuracy. Furthermore, benefiting from the
reference signals, which may be free from malicious interference, our learning
paradigm significantly protects the node classification from being affected by
adversarial attacks.
| [
{
"created": "Fri, 24 May 2024 13:52:41 GMT",
"version": "v1"
},
{
"created": "Mon, 27 May 2024 01:42:32 GMT",
"version": "v2"
}
] | 2024-05-28 | [
[
"Miao",
"Rui",
""
],
[
"Zhou",
"Kaixiong",
""
],
[
"Wang",
"Yili",
""
],
[
"Liu",
"Ninghao",
""
],
[
"Wang",
"Ying",
""
],
[
"Wang",
"Xin",
""
]
] |
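One schematic way to picture the joint-cluster objective: instead of a
per-node cross-entropy over C classes, predict a joint label over node-class x
cluster-class pairs. The sketch below is one possible reading of that idea on
synthetic tensors, not the paper's exact formulation:

    import torch
    import torch.nn.functional as F

    C = 4                                   # number of classes
    node_repr = torch.randn(32, 16)         # node embeddings (e.g., from a GNN)
    clus_repr = torch.randn(32, 16)         # embedding of each node's cluster
    y_node = torch.randint(0, C, (32,))
    y_clus = torch.randint(0, C, (32,))     # cluster's (majority) label

    head = torch.nn.Linear(32, C * C)       # joint distribution over C*C pairs
    logits = head(torch.cat([node_repr, clus_repr], dim=1))
    joint_target = y_node * C + y_clus      # index of the (node, cluster) pair
    loss = F.cross_entropy(logits, joint_target)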
2405.15642 | David Lindsay Dr. | David Lindsay, Sian Lindsay | Effective Confidence Region Prediction Using Probability Forecasters | 10 pages, originally posted in 2005 | Artificial Intelligence in Medicine 2005 | 10.1007/11527770_66 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Confidence region prediction is a practically useful extension to the
commonly studied pattern recognition problem. Instead of predicting a single
label, the constraint is relaxed to allow prediction of a subset of labels
given a desired confidence level 1-delta. Ideally, effective region predictions
should be (1) well calibrated - predictive regions at confidence level 1-delta
should err with relative frequency at most delta and (2) be as narrow (or
certain) as possible. We present a simple technique to generate confidence
region predictions from conditional probability estimates (probability
forecasts). We use this 'conversion' technique to generate confidence region
predictions from probability forecasts output by standard machine learning
algorithms when tested on 15 multi-class datasets. Our results show that
approximately 44% of experiments demonstrate well-calibrated confidence region
predictions, with the K-Nearest Neighbour algorithm tending to perform
consistently well across all data. Our results illustrate the practical
benefits of effective confidence region prediction with respect to medical
diagnostics, where guarantees of capturing the true disease label can be given.
| [
{
"created": "Fri, 24 May 2024 15:33:08 GMT",
"version": "v1"
}
] | 2024-05-27 | [
[
"Lindsay",
"David",
""
],
[
"Lindsay",
"Sian",
""
]
] |
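The 'conversion' from probability forecasts to region predictions admits a very
direct reading: include labels in decreasing order of predicted probability
until the accumulated mass reaches 1-delta. The sketch below implements that
reading; whether it matches the authors' exact procedure is an assumption:

    import numpy as np

    def confidence_region(probs, delta=0.05):
        """Smallest label set whose cumulative forecast mass is >= 1 - delta."""
        order = np.argsort(probs)[::-1]          # labels, most probable first
        cum = np.cumsum(probs[order])
        k = int(np.searchsorted(cum, 1.0 - delta)) + 1
        return order[:k]

    probs = np.array([0.55, 0.30, 0.10, 0.05])   # forecast over 4 labels
    print(confidence_region(probs, delta=0.10))  # -> region {0, 1, 2}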
2405.15664 | Nicolai Steinke | Nicolai Steinke, Daniel G\"ohring, Ra\`ul Rojas | GroundGrid:LiDAR Point Cloud Ground Segmentation and Terrain Estimation | This letter has been accepted for publication in IEEE Robotics and
Automation Letters | IEEE Robotics and Automation Letters, vol. 9, no. 1, pp. 420-426,
Jan. 2024 | 10.1109/LRA.2023.3333233 | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The precise point cloud ground segmentation is a crucial prerequisite of
virtually all perception tasks for LiDAR sensors in autonomous vehicles.
Especially the clustering and extraction of objects from a point cloud usually
relies on an accurate removal of ground points. The correct estimation of the
surrounding terrain is important for aspects of the drivability of a surface,
path planning, and obstacle prediction. In this article, we propose our system
GroundGrid which relies on 2D elevation maps to solve the terrain estimation
and point cloud ground segmentation problems. We evaluate the ground
segmentation and terrain estimation performance of GroundGrid and compare it to
other state-of-the-art methods using the SemanticKITTI dataset and a novel
evaluation method relying on airborne LiDAR scanning. The results show that
GroundGrid is capable of outperforming other state-of-the-art systems with an
average IoU of 94.78% while maintaining a high run-time performance of 171Hz.
The source code is available at https://github.com/dcmlr/groundgrid
| [
{
"created": "Fri, 24 May 2024 16:02:44 GMT",
"version": "v1"
}
] | 2024-05-27 | [
[
"Steinke",
"Nicolai",
""
],
[
"Göhring",
"Daniel",
""
],
[
"Rojas",
"Raùl",
""
]
] |
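The elevation-map idea behind such systems can be conveyed in a few lines of
numpy: rasterize points into 2D cells, take each cell's minimum height as the
local terrain estimate, and label points close to it as ground. This is a
bare-bones sketch, not the published GroundGrid system:

    import numpy as np

    def ground_segment(points, cell=0.5, tol=0.2):
        """points: (N, 3) array of x, y, z. Returns a boolean ground mask."""
        ij = np.floor(points[:, :2] / cell).astype(int)
        keys = ij[:, 0] * 100000 + ij[:, 1]          # flatten the 2D cell index
        ground_z = {}
        for k, z in zip(keys, points[:, 2]):         # per-cell minimum height
            ground_z[k] = min(ground_z.get(k, np.inf), z)
        terrain = np.array([ground_z[k] for k in keys])
        return points[:, 2] - terrain < tol          # near the terrain = ground

    pts = np.random.rand(1000, 3) * [20, 20, 2]
    print(ground_segment(pts).sum(), "points labelled ground")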
2405.16000 | Homayoon Beigi | Sanjay Natesan and Homayoon Beigi | Carnatic Raga Identification System using Rigorous Time-Delay Neural
Network | 7 pages, 2 tables, 3 figures | Recognition Technologies, Inc. Technical Report (2024),
RTI-20240524-01 | 10.13140/RG.2.2.17517.40164 | RTI-20240524-01 | cs.SD cs.AI cs.LG cs.MM eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large scale machine learning-based Raga identification continues to be a
nontrivial issue in the computational aspects behind Carnatic music. Each raga
consists of many unique and intrinsic melodic patterns that can be used to
easily identify them from others. These ragas can also then be used to cluster
songs within the same raga, as well as identify songs in other closely related
ragas. In this case, the input sound is analyzed using a combination of steps
including using a Discrete Fourier transformation and using Triangular
Filtering to create custom bins of possible notes, extracting features from the
presence of particular notes or lack thereof. Using a combination of Neural
Networks including 1D Convolutional Neural Networks conventionally known as
Time-Delay Neural Networks) and Long Short-Term Memory (LSTM), which are a form
of Recurrent Neural Networks, the backbone of the classification strategy to
build the model can be created. In addition, to help with variations in shruti,
a long-time attention-based mechanism will be implemented to determine the
relative changes in frequency rather than the absolute differences. This will
provide a much more meaningful data point when training audio clips in
different shrutis. To evaluate the accuracy of the classifier, a dataset of 676
recordings is used. The songs are distributed across the list of ragas. The
goal of this program is to be able to effectively and efficiently label a much
wider range of audio clips in more shrutis, ragas, and with more background
noise.
| [
{
"created": "Sat, 25 May 2024 01:31:58 GMT",
"version": "v1"
}
] | 2024-05-29 | [
[
"Natesan",
"Sanjay",
""
],
[
"Beigi",
"Homayoon",
""
]
] |
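The described front end — DFT magnitudes pooled into note bins by triangular
filters — can be sketched as follows; the tonic frequency and semitone bin
layout are illustrative assumptions, not the paper's configuration:

    import numpy as np

    sr, tonic = 16000, 220.0                      # sample rate, assumed tonic (Hz)
    signal = np.sin(2 * np.pi * 261.6 * np.arange(sr) / sr)  # 1 s test tone

    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)

    def note_energies(spectrum, freqs, tonic):
        """Triangular filter centred on each of 12 semitones above the tonic."""
        centres = tonic * 2 ** (np.arange(12) / 12)
        energies = []
        for c in centres:
            width = c * (2 ** (1 / 12) - 1)       # roughly one semitone wide
            tri = np.clip(1 - np.abs(freqs - c) / width, 0, None)
            energies.append((tri * spectrum).sum())
        return np.array(energies)

    print(note_energies(spectrum, freqs, tonic).round(1))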
2405.16234 | Junyu Xiong | Shiyu Xia, Junyu Xiong, Haoyu Dong, Jianbo Zhao, Yuzhang Tian, Mengyu
Zhou, Yeye He, Shi Han, Dongmei Zhang | Vision Language Models for Spreadsheet Understanding: Challenges and
Opportunities | null | Proceedings of the 3rd Workshop on Advances in Language and Vision
Research (ALVR), Pages 116-128, August 2024 | 10.18653/v1/2024.alvr-1.10 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores capabilities of Vision Language Models on spreadsheet
comprehension. We propose three self-supervised challenges with corresponding
evaluation metrics to comprehensively evaluate VLMs on Optical Character
Recognition (OCR), spatial perception, and visual format recognition.
Additionally, we utilize the spreadsheet table detection task to assess the
overall performance of VLMs by integrating these challenges. To probe VLMs more
finely, we propose three spreadsheet-to-image settings: column width
adjustment, style change, and address augmentation. We propose variants of
prompts to address the above tasks in different settings. Notably, to leverage
the strengths of VLMs in understanding text rather than two-dimensional
positioning, we propose to decode cell values on the four boundaries of the
table in spreadsheet boundary detection. Our findings reveal that VLMs
demonstrate promising OCR capabilities but produce unsatisfactory results due
to cell omission and misalignment, and they notably exhibit insufficient
spatial and format recognition skills, motivating future work to enhance VLMs'
spreadsheet data comprehension capabilities using our methods to generate
extensive spreadsheet-image pairs in various settings.
| [
{
"created": "Sat, 25 May 2024 13:51:48 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Aug 2024 03:30:15 GMT",
"version": "v2"
}
] | 2024-09-27 | [
[
"Xia",
"Shiyu",
""
],
[
"Xiong",
"Junyu",
""
],
[
"Dong",
"Haoyu",
""
],
[
"Zhao",
"Jianbo",
""
],
[
"Tian",
"Yuzhang",
""
],
[
"Zhou",
"Mengyu",
""
],
[
"He",
"Yeye",
""
],
[
"Han",
"Shi",
""
],
[
"Zhang",
"Dongmei",
""
]
] |
2405.16237 | Philippe Weier | Philippe Weier, Alexander Rath, \'Elie Michel, Iliyan Georgiev,
Philipp Slusallek, Tamy Boubekeur | N-BVH: Neural ray queries with bounding volume hierarchies | 10 pages | SIGGRAPH Conference Papers '24, July 27-August 1, 2024, Denver,
CO, USA | 10.1145/3641519.3657464 | null | cs.GR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural representations have shown spectacular ability to compress complex
signals in a fraction of the raw data size. In 3D computer graphics, the bulk
of a scene's memory usage is due to polygons and textures, making them ideal
candidates for neural compression. Here, the main challenge lies in finding
good trade-offs between efficient compression and cheap inference while
minimizing training time. In the context of rendering, we adopt a ray-centric
approach to this problem and devise N-BVH, a neural compression architecture
designed to answer arbitrary ray queries in 3D. Our compact model is learned
from the input geometry and substituted for it whenever a ray intersection is
queried by a path-tracing engine. While prior neural compression methods have
focused on point queries, ours proposes neural ray queries that integrate
seamlessly into standard ray-tracing pipelines. At the core of our method, we
employ an adaptive BVH-driven probing scheme to optimize the parameters of a
multi-resolution hash grid, focusing its neural capacity on the sparse 3D
occupancy swept by the original surfaces. As a result, our N-BVH can serve
accurate ray queries from a representation that is more than an order of
magnitude more compact, providing faithful approximations of visibility, depth,
and appearance attributes. The flexibility of our method allows us to combine
and overlap neural and non-neural entities within the same 3D scene and extends
to appearance level of detail.
| [
{
"created": "Sat, 25 May 2024 13:54:34 GMT",
"version": "v1"
}
] | 2024-05-28 | [
[
"Weier",
"Philippe",
""
],
[
"Rath",
"Alexander",
""
],
[
"Michel",
"Élie",
""
],
[
"Georgiev",
"Iliyan",
""
],
[
"Slusallek",
"Philipp",
""
],
[
"Boubekeur",
"Tamy",
""
]
] |
2405.16422 | Hao Wang | Hao Wang, Jianwei Li, Zhengyu Li | AI-Generated Text Detection and Classification Based on BERT Deep
Learning Algorithm | null | CONF-MPCS 2024 | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | AI-generated text detection plays an increasingly important role in various
fields. In this study, we developed an efficient AI-generated text detection
model based on the BERT algorithm, which provides new ideas and methods for
solving related problems. In the data preprocessing stage, a series of steps
were taken to process the text, including operations such as converting to
lowercase, word splitting, removing stop words, stemming extraction, removing
digits, and eliminating redundant spaces, to ensure data quality and accuracy.
By dividing the dataset into a training set and a test set in the ratio of 60%
and 40%, and observing the changes in the accuracy and loss values during the
training process, we found that the model performed well during the training
process. The accuracy increases steadily from the initial 94.78% to 99.72%,
while the loss value decreases from 0.261 to 0.021 and converges gradually,
which indicates that the BERT model is able to detect AI-generated text with
high accuracy and the prediction results are gradually approaching the real
classification results. Further analysis of the results of the training and
test sets reveals that in terms of loss value, the average loss of the training
set is 0.0565, while the average loss of the test set is 0.0917, showing a
slightly higher loss value. As for the accuracy, the average accuracy of the
training set reaches 98.1%, while the average accuracy of the test set is
97.71%, which is not much different from each other, indicating that the model
has good generalisation ability. In conclusion, the AI-generated text detection
model based on the BERT algorithm proposed in this study shows high accuracy
and stability in experiments, providing an effective solution for related
fields.
| [
{
"created": "Sun, 26 May 2024 04:26:07 GMT",
"version": "v1"
}
] | 2024-10-15 | [
[
"Wang",
"Hao",
""
],
[
"Li",
"Jianwei",
""
],
[
"Li",
"Zhengyu",
""
]
] |
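The preprocessing recipe enumerated in the abstract maps directly onto standard
NLTK calls; the sketch below reproduces those steps and the 60/40 split on a
toy corpus (a generic reading, not the authors' code):

    import re
    from nltk.corpus import stopwords          # needs nltk.download("stopwords")
    from nltk.stem import PorterStemmer
    from nltk.tokenize import word_tokenize    # needs nltk.download("punkt")
    from sklearn.model_selection import train_test_split

    stop, stem = set(stopwords.words("english")), PorterStemmer()

    def preprocess(text):
        text = re.sub(r"\d+", "", text.lower())             # lowercase, drop digits
        tokens = [stem.stem(t) for t in word_tokenize(text)
                  if t.isalpha() and t not in stop]         # stopwords, stemming
        return " ".join(tokens)                             # squeeze extra spaces

    texts = ["This text was written by a human in 2023.",
             "As an AI language model, I generated this text."]
    labels = [0, 1]
    X_train, X_test, y_train, y_test = train_test_split(
        [preprocess(t) for t in texts], labels, train_size=0.6)  # 60/40 split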
2405.16631 | Qiang Sheng | Qiong Nan, Qiang Sheng, Juan Cao, Beizhe Hu, Danding Wang, Jintao Li | Let Silence Speak: Enhancing Fake News Detection with Generated Comments
from Large Language Models | 11 pages, 5 figures, 8 tables | CIKM 2024 | 10.1145/3627673.3679519 | null | cs.CL cs.CY cs.SI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Fake news detection plays a crucial role in protecting social media users and
maintaining a healthy news ecosystem. Among existing works, comment-based fake
news detection methods are empirically shown as promising because comments
could reflect users' opinions, stances, and emotions and deepen models'
understanding of fake news. Unfortunately, due to exposure bias and users'
different willingness to comment, it is not easy to obtain diverse comments in
reality, especially for early detection scenarios. Without obtaining the
comments from the ``silent'' users, the perceived opinions may be incomplete,
subsequently affecting news veracity judgment. In this paper, we explore the
possibility of finding an alternative source of comments to guarantee the
availability of diverse comments, especially those from silent users.
Specifically, we propose to adopt large language models (LLMs) as a user
simulator and comment generator, and design GenFEND, a generated
feedback-enhanced detection framework, which generates comments by prompting
LLMs with diverse user profiles and aggregating generated comments from
multiple subpopulation groups. Experiments demonstrate the effectiveness of
GenFEND and further analysis shows that the generated comments cover more
diverse users and could even be more effective than actual comments.
| [
{
"created": "Sun, 26 May 2024 17:09:23 GMT",
"version": "v1"
}
] | 2024-09-23 | [
[
"Nan",
"Qiong",
""
],
[
"Sheng",
"Qiang",
""
],
[
"Cao",
"Juan",
""
],
[
"Hu",
"Beizhe",
""
],
[
"Wang",
"Danding",
""
],
[
"Li",
"Jintao",
""
]
] |
2405.16693 | Konrad Kulakowski | Micha{\l} Strada and Sebastian Ernst and Jacek Szybowski and Konrad
Ku{\l}akowski | Detection of decision-making manipulation in the pairwise comparisons
method | 19 pages, 5 figures, 2 tables | Strada, M.; Ernst, S.; Szybowski, J.; Ku{\l}akowski, K. Detection
of Decision-Making Manipulation in the Pairwise Comparison Method. Appl. Sci.
2024, 14, 8946 | 10.3390/app14198946 | null | cs.AI cs.DM | http://creativecommons.org/licenses/by/4.0/ | Most decision-making models, including the pairwise comparison method, assume
the decision-makers honesty. However, it is easy to imagine a situation where a
decision-maker tries to manipulate the ranking results. This paper presents
three simple manipulation methods in the pairwise comparison method. We then
try to detect these methods using appropriately constructed neural networks.
Experimental results on generated data accompany the proposed solutions,
showing a considerable level of manipulation detection.
| [
{
"created": "Sun, 26 May 2024 20:58:12 GMT",
"version": "v1"
}
] | 2024-10-11 | [
[
"Strada",
"Michał",
""
],
[
"Ernst",
"Sebastian",
""
],
[
"Szybowski",
"Jacek",
""
],
[
"Kułakowski",
"Konrad",
""
]
] |
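For context on the data such a detector consumes: a reciprocal
pairwise-comparison matrix and Saaty's consistency index, a classical signal
that manipulated judgments may disturb, can be computed as below. This is
standard background, not the paper's neural detector:

    import numpy as np

    # Reciprocal pairwise-comparison matrix for three alternatives.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    lam_max = eigvals.real.max()                 # principal eigenvalue
    n = A.shape[0]
    ci = (lam_max - n) / (n - 1)                 # Saaty's consistency index

    w = np.abs(eigvecs[:, eigvals.real.argmax()].real)
    print("priorities:", w / w.sum(), "CI:", round(ci, 4))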
2405.16711 | Christine Lee | Christine P Lee, Min Kyung Lee, Bilge Mutlu | The AI-DEC: A Card-based Design Method for User-centered AI Explanations | null | Designing Interactive Systems Conference, 2024, (DIS '24) | 10.1145/3643834.3661576 | null | cs.HC cs.AI | http://creativecommons.org/licenses/by/4.0/ | Increasing evidence suggests that many deployed AI systems do not
sufficiently support end-user interaction and information needs. Engaging
end-users in the design of these systems can reveal user needs and
expectations, yet effective ways of engaging end-users in the AI explanation
design remain under-explored. To address this gap, we developed a design
method, called AI-DEC, that defines four dimensions of AI explanations that are
critical for the integration of AI systems -- communication content, modality,
frequency, and direction -- and offers design examples for end-users to design
AI explanations that meet their needs. We evaluated this method through
co-design sessions with workers in healthcare, finance, and management
industries who regularly use AI systems in their daily work. Findings indicate
that the AI-DEC effectively supported workers in designing explanations that
accommodated diverse levels of performance and autonomy needs, which varied
depending on the AI system's workplace role and worker values. We discuss the
implications of using the AI-DEC for the user-centered design of AI
explanations in real-world systems.
| [
{
"created": "Sun, 26 May 2024 22:18:38 GMT",
"version": "v1"
}
] | 2024-05-28 | [
[
"Lee",
"Christine P",
""
],
[
"Lee",
"Min Kyung",
""
],
[
"Mutlu",
"Bilge",
""
]
] |
2405.16959 | Cristina Carmona-Duarte | Tiziana D'Alessandro, Cristina Carmona-Duarte, Claudio De Stefano,
Moises Diaz, Miguel A. Ferrer, Francesco Fontanella | A Machine Learning Approach to Analyze the Effects of Alzheimer's
Disease on Handwriting through Lognormal Features | null | IGS 2023. Lecture Notes in Computer Science, vol 14285. Springer
(2023) | 10.1007/978-3-031-45461-5_8 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Alzheimer's disease is one of the most severe illnesses among the
neurodegenerative ones, and it causes a progressive decline in cognitive
abilities that, in the worst cases, becomes severe enough to interfere with
daily life. Currently, there is no cure, so an early diagnosis is strongly
needed to try and slow its progression through medical treatments. Handwriting
analysis is considered a potential tool for detecting and understanding certain
neurological conditions, including Alzheimer's disease. While handwriting
analysis alone cannot provide a definitive diagnosis of Alzheimer's, it may
offer some insights and be used for a comprehensive assessment. The
Sigma-lognormal model is conceived for movement analysis and can also be
applied to handwriting. This model returns a set of lognormal parameters as
output, which forms the basis for the computation of novel and significant
features. This paper presents a machine learning approach applied to
handwriting features extracted through the sigma-lognormal model. The aim is to
develop a support system to help doctors in the diagnosis and study of
Alzheimer's disease, to evaluate the effectiveness of the extracted features,
and finally to study the relations among them.
| [
{
"created": "Mon, 27 May 2024 08:54:11 GMT",
"version": "v1"
}
] | 2024-05-28 | [
[
"D'Alessandro",
"Tiziana",
""
],
[
"Carmona-Duarte",
"Cristina",
""
],
[
"De Stefano",
"Claudio",
""
],
[
"Diaz",
"Moises",
""
],
[
"Ferrer",
"Miguel A.",
""
],
[
"Fontanella",
"Francesco",
""
]
] |
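As an aid to the sigma-lognormal discussion above, a small numpy sketch of the model's lognormal speed profile, with a complex movement formed as a sum of overlapping strokes. Only the speed magnitude is shown (the full model also includes angular parameters), and the parameter values here are made up.

import numpy as np

def lognormal_velocity(t, D, t0, mu, sigma):
    """Speed profile of one sigma-lognormal stroke:
    v(t) = D / (sigma * sqrt(2*pi) * (t - t0))
           * exp(-(ln(t - t0) - mu)**2 / (2 * sigma**2)), for t > t0."""
    v = np.zeros_like(t)
    m = t > t0
    dt = t[m] - t0
    v[m] = (D / (sigma * np.sqrt(2 * np.pi) * dt)
            * np.exp(-(np.log(dt) - mu) ** 2 / (2 * sigma ** 2)))
    return v

t = np.linspace(0, 1.5, 300)
# A complex movement is modeled as a sum of overlapping strokes.
v_total = (lognormal_velocity(t, D=1.0, t0=0.05, mu=-1.6, sigma=0.30)
           + lognormal_velocity(t, D=0.6, t0=0.40, mu=-1.8, sigma=0.25))
print("peak speed:", round(float(v_total.max()), 3))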
2405.17110 | Shujun Yang | Shujun Yang, Yu Zhang, Yao Ding, Danfeng Hong | Superpixelwise Low-rank Approximation based Partial Label Learning for
Hyperspectral Image Classification | 0 | IEEE Geoscience and Remote Sensing
Letters, 2023 | 10.1109/LGRS.2023.3279985 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Insufficient prior knowledge of a captured hyperspectral image (HSI) scene
may lead the experts or the automatic labeling systems to offer incorrect
labels or ambiguous labels (i.e., assigning each training sample to a group of
candidate labels, among which only one of them is valid; this is also known as
partial label learning) during the labeling process. Accordingly, how to learn
from such data with ambiguous labels is a problem of great practical
importance. In this paper, we propose a novel superpixelwise low-rank
approximation (LRA)-based partial label learning method, namely SLAP, which is
the first to take into account partial label learning in HSI classification.
SLAP is mainly composed of two phases: disambiguating the training labels and
acquiring the predictive model. Specifically, in the first phase, we propose a
superpixelwise LRA-based model, preparing the affinity graph for the subsequent
label propagation process while extracting the discriminative representation to
enhance the following classification task of the second phase. Then to
disambiguate the training labels, label propagation propagates the labeling
information via the affinity graph of training pixels. In the second phase, we
take advantage of the resulting disambiguated training labels and the
discriminative representations to enhance the classification performance. The
extensive experiments validate the advantage of the proposed SLAP method over
state-of-the-art methods.
| [
{
"created": "Mon, 27 May 2024 12:26:49 GMT",
"version": "v1"
}
] | 2024-05-28 | [
[
"Yang",
"Shujun",
""
],
[
"Zhang",
"Yu",
""
],
[
"Ding",
"Yao",
""
],
[
"Hong",
"Danfeng",
""
]
] |
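A generic sketch of the graph-based label propagation step that the SLAP abstract above relies on for disambiguating candidate labels. The affinity graph here is a plain Gaussian kernel; the paper instead builds it from a superpixelwise low-rank approximation, so treat this only as background on the propagation itself.

import numpy as np

def label_propagation(X, Y, alpha=0.9, iters=50, gamma=1.0):
    # X: (n, d) features; Y: (n, c) candidate-label indicator (rows of 0/1).
    # Gaussian-kernel affinity with symmetric normalization S = D^-1/2 W D^-1/2.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-gamma * sq)
    np.fill_diagonal(W, 0.0)
    d = W.sum(1)
    S = W / np.sqrt(np.outer(d, d))
    F = Y.astype(float)
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y  # propagate, then clamp toward Y
    F *= Y  # keep mass only on each sample's candidate labels
    return F.argmax(1)  # disambiguated label per sample

X = np.random.rand(6, 3)
Y = np.array([[1,1,0],[1,0,0],[0,1,1],[0,1,0],[1,0,1],[0,0,1]])
print(label_propagation(X, Y))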
2405.17182 | Rapha\"el Romero | Rapha\"el Romero, Maarten Buyl, Tijl De Bie, Jefrey Lijffijt | Exploring the Performance of Continuous-Time Dynamic Link Prediction
Algorithms | null | Appl. Sci. 2024, 14(8), 3516 | 10.3390/app14083516 | null | cs.SI cs.AI | http://creativecommons.org/licenses/by/4.0/ | Dynamic Link Prediction (DLP) addresses the prediction of future links in
evolving networks. However, accurately portraying the performance of DLP
algorithms poses challenges that might impede progress in the field.
Importantly, common evaluation pipelines usually calculate ranking or binary
classification metrics, where the scores of observed interactions (positives)
are compared with those of randomly generated ones (negatives). However, a
single metric is not sufficient to fully capture the differences between DLP
algorithms, and is prone to overly optimistic performance evaluation. Instead,
an in-depth evaluation should reflect performance variations across different
nodes, edges, and time segments. In this work, we contribute tools to perform
such a comprehensive evaluation. (1) We propose Birth-Death diagrams, a simple
but powerful visualization technique that illustrates the effect of time-based
train-test splitting on the difficulty of DLP on a given dataset. (2) We
describe an exhaustive taxonomy of negative sampling methods that can be used
at evaluation time. (3) We carry out an empirical study of the effect of the
different negative sampling strategies. Our comparison between heuristics and
state-of-the-art memory-based methods on various real-world datasets confirms a
strong effect of using different negative sampling strategies on the test Area
Under the Curve (AUC). Moreover, we conduct a visual exploration of the
prediction, with additional insights on which different types of errors are
prominent over time.
| [
{
"created": "Mon, 27 May 2024 14:03:28 GMT",
"version": "v1"
}
] | 2024-05-28 | [
[
"Romero",
"Raphaël",
""
],
[
"Buyl",
"Maarten",
""
],
[
"De Bie",
"Tijl",
""
],
[
"Lijffijt",
"Jefrey",
""
]
] |
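To make the negative sampling discussion above concrete, a tiny sketch of one common strategy (uniform destination corruption) for building negatives from observed interactions. The paper's taxonomy covers several further strategies; the event format used here is an assumption.

import random

def sample_negatives(events, num_nodes, k=1, seed=0):
    """For each observed interaction (src, dst, t), draw k negatives by
    replacing the destination with a uniformly random node (a common
    baseline strategy; the paper's taxonomy covers several others)."""
    rng = random.Random(seed)
    observed = {(s, d) for s, d, _ in events}
    negatives = []
    for s, d, t in events:
        for _ in range(k):
            nd = rng.randrange(num_nodes)
            while nd == d or (s, nd) in observed:
                nd = rng.randrange(num_nodes)
            negatives.append((s, nd, t))
    return negatives

events = [(0, 1, 10.0), (1, 2, 11.5), (0, 2, 12.0)]
print(sample_negatives(events, num_nodes=5, k=2))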
2405.17253 | Rapha\"el Romero | Rapha\"el Romero, Jefrey Lijffijt, Riccardo Rastelli, Marco Corneli,
Tijl De Bie | Gaussian Embedding of Temporal Networks | null | IEEE Access ( Volume: 11, 2023) Page(s): 117971 - 117983 | 10.1109/ACCESS.2023.3324213 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Representing the nodes of continuous-time temporal graphs in a
low-dimensional latent space has wide-ranging applications, from prediction to
visualization. Yet, analyzing continuous-time relational data with timestamped
interactions introduces unique challenges due to its sparsity. Merely embedding
nodes as trajectories in the latent space overlooks this sparsity, emphasizing
the need to quantify uncertainty around the latent positions. In this paper, we
propose TGNE (\textbf{T}emporal \textbf{G}aussian \textbf{N}etwork
\textbf{E}mbedding), an innovative method that bridges two distinct strands of
literature: the statistical analysis of networks via Latent Space Models
(LSM)\cite{Hoff2002} and temporal graph machine learning. TGNE embeds nodes as
piece-wise linear trajectories of Gaussian distributions in the latent space,
capturing both structural information and uncertainty around the trajectories.
We evaluate TGNE's effectiveness in reconstructing the original graph and
modelling uncertainty. The results demonstrate that TGNE generates competitive
time-varying embedding locations compared to common baselines for
reconstructing unobserved edge interactions based on observed edges.
Furthermore, the uncertainty estimates align with the time-varying degree
distribution in the network, providing valuable insights into the temporal
dynamics of the graph. To facilitate reproducibility, we provide an open-source
implementation of TGNE at \url{https://github.com/aida-ugent/tgne}.
| [
{
"created": "Mon, 27 May 2024 15:07:57 GMT",
"version": "v1"
}
] | 2024-05-28 | [
[
"Romero",
"Raphaël",
""
],
[
"Lijffijt",
"Jefrey",
""
],
[
"Rastelli",
"Riccardo",
""
],
[
"Corneli",
"Marco",
""
],
[
"De Bie",
"Tijl",
""
]
] |
2405.17278 | Shaoan Wang | Shaoan Wang, Zhanhua Xin, Yaoqing Hu, Dongyue Li, Mingzhu Zhu, Junzhi
Yu | EF-Calib: Spatiotemporal Calibration of Event- and Frame-Based Cameras
Using Continuous-Time Trajectories | Accepted by IEEE Robotics and Automation Letters | IEEE Robotics and Automation Letters, 2024 | 10.1109/LRA.2024.3474475 | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | Event camera, a bio-inspired asynchronous triggered camera, offers promising
prospects for fusion with frame-based cameras owing to its low latency and high
dynamic range. However, calibrating stereo vision systems that incorporate both
event and frame-based cameras remains a significant challenge. In this letter,
we present EF-Calib, a spatiotemporal calibration framework for event- and
frame-based cameras using continuous-time trajectories. A novel calibration
pattern applicable to both camera types and the corresponding event recognition
algorithm is proposed. Leveraging the asynchronous nature of events, a
differentiable piece-wise B-spline to represent the camera pose continuously is
introduced, enabling calibration for intrinsic parameters, extrinsic
parameters, and time offset, with analytical Jacobians provided. Various
experiments are carried out to evaluate the calibration performance of
EF-Calib, including calibration experiments for intrinsic parameters, extrinsic
parameters, and time offset. Experimental results show that EF-Calib achieves
the most accurate intrinsic parameters compared to the current SOTA,
extrinsic-parameter accuracy close to the frame-based results, and
accurate time offset estimation. EF-Calib provides a convenient and accurate
toolbox for calibrating the system that fuses events and frames. The code of
this paper will also be open-sourced at: https://github.com/wsakobe/EF-Calib.
| [
{
"created": "Mon, 27 May 2024 15:40:24 GMT",
"version": "v1"
},
{
"created": "Wed, 25 Sep 2024 03:59:55 GMT",
"version": "v2"
}
] | 2024-10-07 | [
[
"Wang",
"Shaoan",
""
],
[
"Xin",
"Zhanhua",
""
],
[
"Hu",
"Yaoqing",
""
],
[
"Li",
"Dongyue",
""
],
[
"Zhu",
"Mingzhu",
""
],
[
"Yu",
"Junzhi",
""
]
] |
2405.17280 | Silvia Garc\'ia-M\'endez | Silvia Garc\'ia-M\'endez, Milagros Fern\'andez-Gavilanes, Enrique
Costa-Montenegro, Jonathan Juncal-Mart\'inez, F. Javier Gonz\'alez-Casta\~no | A Library for Automatic Natural Language Generation of Spanish Texts | null | Expert Systems with Applications, 120, 372-386 | 10.1016/j.eswa.2018.11.036 | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this article we present a novel system for natural language generation
(NLG) of Spanish sentences from a minimum set of meaningful words (such as
nouns, verbs and adjectives) which, unlike other state-of-the-art solutions,
performs the NLG task in a fully automatic way, exploiting both knowledge-based
and statistical approaches. Relying on its linguistic knowledge of vocabulary
and grammar, the system is able to generate complete, coherent and correctly
spelled sentences from the main word sets presented by the user. The system,
which was designed to be integrable, portable and efficient, can be easily
adapted to other languages by design and can feasibly be integrated in a wide
range of digital devices. During its development we also created a
supplementary lexicon for Spanish, aLexiS, with wide coverage and high
precision, as well as syntactic trees from a freely available definite-clause
grammar. The resulting NLG library has been evaluated both automatically and
manually (annotation). The system can potentially be used in different
application domains such as augmentative communication and automatic generation
of administrative reports or news.
| [
{
"created": "Mon, 27 May 2024 15:44:06 GMT",
"version": "v1"
}
] | 2024-05-28 | [
[
"García-Méndez",
"Silvia",
""
],
[
"Fernández-Gavilanes",
"Milagros",
""
],
[
"Costa-Montenegro",
"Enrique",
""
],
[
"Juncal-Martínez",
"Jonathan",
""
],
[
"González-Castaño",
"F. Javier",
""
]
] |
2405.17369 | Amin Ahmadi Kasani | Amin Ahmadi Kasani, Hedieh Sajedi | Predict joint angle of body parts based on sequence pattern recognition | null | 2022 16th International Conference on Ubiquitous Information
Management and Communication (IMCOM) | 10.1109/IMCOM53663.2022.9721801 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The way organs are positioned and moved in the workplace can cause pain and
physical harm. Therefore, ergonomists use ergonomic risk assessments based on
visual observation of the workplace, or review pictures and videos taken in the
workplace. Sometimes the workers in the photos are not fully visible. Some
parts of the workers' bodies may lie outside the camera's field of view, be
obscured by objects, or be hidden by self-occlusion; this is the main problem
in 2D human posture recognition. It is difficult to predict the position of body
parts when they are not visible in the image, and geometric mathematical
methods are not entirely suitable for this purpose. Therefore, we created a
dataset with artificial images of a 3D human model, specifically for painful
postures, and real human photos from different viewpoints. Each image we
captured was based on a predefined joint angle for each 3D model or human
model. We created various images, including images where some body parts are
not visible. Nevertheless, the joint angle is estimated beforehand, so we could
study the case by converting the input images into the sequence of joint
connections between predefined body parts and extracting the desired joint
angle with a convolutional neural network. In the end, we obtained root mean
square error (RMSE) of 12.89 and mean absolute error (MAE) of 4.7 on the test
dataset.
| [
{
"created": "Mon, 27 May 2024 17:24:11 GMT",
"version": "v1"
}
] | 2024-05-28 | [
[
"Kasani",
"Amin Ahmadi",
""
],
[
"Sajedi",
"Hedieh",
""
]
] |
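The abstract above reports RMSE and MAE on predicted joint angles; for reference, a minimal numpy sketch of both metrics on made-up angle values (not the paper's data).

import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# Toy joint-angle predictions in degrees (illustrative numbers only).
true_angles = [30.0, 95.0, 142.0, 61.0]
pred_angles = [28.5, 99.0, 150.0, 58.0]
print("RMSE:", rmse(true_angles, pred_angles),
      "MAE:", mae(true_angles, pred_angles))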
2405.17569 | Marcelo Matheus Gauy | Marcelo Matheus Gauy, Larissa Cristina Berti, Arnaldo C\^andido Jr,
Augusto Camargo Neto, Alfredo Goldman, Anna Sara Shafferman Levin, Marcus
Martins, Beatriz Raposo de Medeiros, Marcelo Queiroz, Ester Cerdeira Sabino,
Flaviane Romani Fernandes Svartman and Marcelo Finger | Discriminant audio properties in deep learning based respiratory
insufficiency detection in Brazilian Portuguese | 5 pages, 2 figures, 1 table. Published in Artificial Intelligence in
Medicine (AIME) 2023 | Artificial Intellingence in Medicine Proceedings 2023, page
271-275 | 10.1007/978-3-031-34344-5_32 | null | cs.LG cs.AI cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | This work investigates Artificial Intelligence (AI) systems that detect
respiratory insufficiency (RI) by analyzing speech audios, thus treating speech
as a RI biomarker. Previous works collected RI data (P1) from COVID-19 patients
during the first phase of the pandemic and trained modern AI models, such as
CNNs and Transformers, which achieved $96.5\%$ accuracy, showing the
feasibility of RI detection via AI. Here, we collect RI patient data (P2) with
several causes besides COVID-19, aiming at extending AI-based RI detection. We
also collected control data from hospital patients without RI. We show that the
considered models, when trained on P1, do not generalize to P2, indicating that
COVID-19 RI has features that may not be found in all RI types.
| [
{
"created": "Mon, 27 May 2024 18:04:49 GMT",
"version": "v1"
}
] | 2024-05-29 | [
[
"Gauy",
"Marcelo Matheus",
""
],
[
"Berti",
"Larissa Cristina",
""
],
[
"Cândido",
"Arnaldo",
"Jr"
],
[
"Neto",
"Augusto Camargo",
""
],
[
"Goldman",
"Alfredo",
""
],
[
"Levin",
"Anna Sara Shafferman",
""
],
[
"Martins",
"Marcus",
""
],
[
"de Medeiros",
"Beatriz Raposo",
""
],
[
"Queiroz",
"Marcelo",
""
],
[
"Sabino",
"Ester Cerdeira",
""
],
[
"Svartman",
"Flaviane Romani Fernandes",
""
],
[
"Finger",
"Marcelo",
""
]
] |
2405.17817 | Vida Adeli | Vida Adeli, Soroush Mehraban, Irene Ballester, Yasamin Zarghami,
Andrea Sabo, Andrea Iaboni, Babak Taati | Benchmarking Skeleton-based Motion Encoder Models for Clinical
Applications: Estimating Parkinson's Disease Severity in Walking Sequences | null | IEEE International Conference on Automatic Face and Gesture
Recognition (FG 2024) | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This study investigates the application of general human motion encoders
trained on large-scale human motion datasets for analyzing gait patterns in PD
patients. Although these models have learned a wealth of human biomechanical
knowledge, their effectiveness in analyzing pathological movements, such as
parkinsonian gait, has yet to be fully validated. We propose a comparative
framework and evaluate six pre-trained state-of-the-art human motion encoder
models on their ability to predict the Movement Disorder Society - Unified
Parkinson's Disease Rating Scale (MDS-UPDRS-III) gait scores from motion
capture data. We compare these against a traditional gait feature-based
predictive model in a recently released large public PD dataset, including PD
patients on and off medication. The feature-based model currently shows higher
weighted average accuracy, precision, recall, and F1-score. Motion encoder
models with closely comparable results demonstrate promise for scalability and
efficiency in clinical settings. This potential is underscored by the enhanced
performance of the encoder model upon fine-tuning on PD training set. Four of
the six human motion models examined provided prediction scores that were
significantly different between on- and off-medication states. This finding
reveals the sensitivity of motion encoder models to nuanced clinical changes.
It also underscores the necessity for continued customization of these models
to better capture disease-specific features, thereby reducing the reliance on
labor-intensive feature engineering. Lastly, we establish a benchmark for the
analysis of skeleton-based motion encoder models in clinical settings. To the
best of our knowledge, this is the first study to provide a benchmark that
enables state-of-the-art models to be tested and compete in a clinical context.
Codes and benchmark leaderboard are available at code.
| [
{
"created": "Tue, 28 May 2024 04:29:10 GMT",
"version": "v1"
},
{
"created": "Thu, 30 May 2024 13:40:23 GMT",
"version": "v2"
}
] | 2024-05-31 | [
[
"Adeli",
"Vida",
""
],
[
"Mehraban",
"Soroush",
""
],
[
"Ballester",
"Irene",
""
],
[
"Zarghami",
"Yasamin",
""
],
[
"Sabo",
"Andrea",
""
],
[
"Iaboni",
"Andrea",
""
],
[
"Taati",
"Babak",
""
]
] |
2405.17874 | Dwane Van Der Sluis | D. van der Sluis | NUTS, NARS, and Speech | 10 pages, 3 figures | Artificial General Intelligence: 16th International Conference,
AGI 2023, Stockholm, Sweden, June 16-19, 2023, Proceedings Jun 2023 Pages
307-316 | 10.1007/978-3-031-33469-6_31 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | To investigate whether "Intelligence is the capacity of an
information-processing system to adapt to its environment while operating with
insufficient knowledge and resources", we look at utilising the non axiomatic
reasoning system (NARS) for speech recognition. This article presents NUTS:
raNdom dimensionality redUction non axiomaTic reasoning few Shot learner for
perception. NUTS consists of naive dimensionality reduction, some
pre-processing, and then non axiomatic reasoning (NARS). With only 2 training
examples NUTS performs similarly to the Whisper Tiny model for discrete word
identification.
| [
{
"created": "Tue, 28 May 2024 06:51:42 GMT",
"version": "v1"
}
] | 2024-05-29 | [
[
"van der Sluis",
"D.",
""
]
] |
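The NUTS abstract above mentions naive random dimensionality reduction; a plausible minimal form is a fixed Gaussian random projection, sketched below in numpy. How NUTS performs this step exactly is not specified in the abstract, so the details are assumptions.

import numpy as np

def random_projection(X, out_dim, seed=0):
    """Naive random dimensionality reduction: project features onto a fixed
    Gaussian random matrix (Johnson-Lindenstrauss style). This is a guess at
    the generic technique, not the paper's code."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], out_dim)) / np.sqrt(out_dim)
    return X @ R

# e.g. 2 audio feature vectors of dimension 4000 reduced to 64 dimensions
X = np.random.rand(2, 4000)
print(random_projection(X, 64).shape)  # (2, 64)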
2405.17886 | Moises Diaz | Jiri Mekyska, Katarina Safarova, Tomas Urbanek, Jirina Bednarova,
Vojtech Zvoncak, Jana Marie Havigerova, Lukas Cunek, Zoltan Galaz, Jan Mucha,
Christine Klauszova, Marcos Faundez-Zanuy, Miguel A. Ferrer and Moises Diaz | Graphomotor and Handwriting Disabilities Rating Scale (GHDRS):towards
complex and objective assessment | null | Australian Journalof Learning Difficulties, Routledge, 1-34,2024 | 10.1080/19404158.2024.2326686 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Graphomotor and handwriting disabilities (GD and HD, respectively) could
significantly reduce children's quality of life. Effective remediation depends
on proper diagnosis; however, current approaches to diagnosis and assessment of
GD and HD have several limitations and knowledge gaps, e.g. they are
subjective, they do not facilitate identification of specific manifestations,
etc. The aim of this work is to introduce a new scale (GHDRS: Graphomotor and
Handwriting Disabilities Rating Scale) that will enable experts to perform
objective and complex computer-aided diagnosis and assessment of GD and HD. The
scale supports quantification of 17 manifestations associated with the
process/product of drawing/handwriting. The whole methodology of GHDRS design
is made maximally transparent so that it could be adapted for other languages.
| [
{
"created": "Tue, 28 May 2024 07:09:42 GMT",
"version": "v1"
}
] | 2024-05-29 | [
[
"Mekyska",
"Jiri",
""
],
[
"Safarova",
"Katarina",
""
],
[
"Urbanek",
"Tomas",
""
],
[
"Bednarova",
"Jirina",
""
],
[
"Zvoncak",
"Vojtech",
""
],
[
"Havigerova",
"Jana Marie",
""
],
[
"Cunek",
"Lukas",
""
],
[
"Galaz",
"Zoltan",
""
],
[
"Mucha",
"Jan",
""
],
[
"Klauszova",
"Christine",
""
],
[
"Faundez-Zanuy",
"Marcos",
""
],
[
"Ferrer",
"Miguel A.",
""
],
[
"Diaz",
"Moises",
""
]
] |
2405.17910 | Damien Pellier | \'Etienne Fournier, Christine Jeoffrion, Belal Hmedan, Damien Pellier,
Humbert Fiorino, Aur\'elie Landry | Human-Cobot collaboration's impact on success, time completion, errors,
workload, gestures and acceptability during an assembly task | null | Applied Ergonomics, Volume 119, September 2024, 104306 | 10.1016/j.apergo.2024.104306 | null | cs.AI cs.HC cs.RO | http://creativecommons.org/licenses/by/4.0/ | Industry 5.0 promotes collaborative robots (cobots). This research
studies the impacts of cobot collaboration using an experimental setup. 120
participants completed a simple and a complex assembly task. 50% collaborated
with another human (H/H) and 50% with a cobot (H/C). The workload and the
acceptability of the cobotic collaboration were measured. Working with a cobot
decreases the effect of the task complexity on the human workload and on the
output quality. However, it increases the completion time and the number of
gestures (while decreasing their frequency). The H/C pairs have a higher
chance of success, but they take more time and more gestures to complete the
task. The results of this research could help developers and stakeholders to
understand the impacts of implementing a cobot in production chains.
| [
{
"created": "Tue, 28 May 2024 07:30:28 GMT",
"version": "v1"
}
] | 2024-05-29 | [
[
"Fournier",
"Étienne",
""
],
[
"Jeoffrion",
"Christine",
""
],
[
"Hmedan",
"Belal",
""
],
[
"Pellier",
"Damien",
""
],
[
"Fiorino",
"Humbert",
""
],
[
"Landry",
"Aurélie",
""
]
] |
2405.17940 | Hongbin Lin | Hongbin Lin, Bin Li, Chun Wai Wong, Juan Rojas, Xiangyu Chu, and Kwok
Wai Samuel Au | World Models for General Surgical Grasping | null | Robotics: Science and Systems 2024 | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by/4.0/ | Intelligent vision control systems for surgical robots should adapt to
unknown and diverse objects while being robust to system disturbances. Previous
methods did not meet these requirements, mainly due to relying on pose
estimation and feature tracking. We propose a world-model-based deep
reinforcement learning framework "Grasp Anything for Surgery" (GAS), that
learns a pixel-level visuomotor policy for surgical grasping, enhancing both
generality and robustness. In particular, a novel method is proposed to
estimate the values and uncertainties of depth pixels for a rigid-link object's
inaccurate region based on the empirical prior of the object's size; both depth
and mask images of task objects are encoded to a single compact 3-channel image
(size: 64x64x3) by dynamically zooming in the mask regions, minimizing the
information loss. The learned controller's effectiveness is extensively
evaluated in simulation and in a real robot. Our learned visuomotor policy
handles: i) unseen objects, including 5 types of target grasping objects and a
robot gripper, in unstructured real-world surgery environments, and ii)
disturbances in perception and control. Note that we are the first work to
achieve a unified surgical control system that grasps diverse surgical objects
using different robot grippers on real robots in complex surgery scenes
(average success rate: 69%). Our system also demonstrates significant
robustness across 6 conditions including background variation, target
disturbance, camera pose variation, kinematic control error, image noise, and
re-grasping after the gripped target object drops from the gripper. Videos and
codes can be found on our project page: https://linhongbin.github.io/gas/.
| [
{
"created": "Tue, 28 May 2024 08:11:12 GMT",
"version": "v1"
}
] | 2024-05-29 | [
[
"Lin",
"Hongbin",
""
],
[
"Li",
"Bin",
""
],
[
"Wong",
"Chun Wai",
""
],
[
"Rojas",
"Juan",
""
],
[
"Chu",
"Xiangyu",
""
],
[
"Au",
"Kwok Wai Samuel",
""
]
] |
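A rough numpy sketch of the observation encoding idea in the GAS abstract above: dynamically zoom into the mask region and stack depth and mask into a compact 64x64x3 image. The channel layout, normalization, and nearest-neighbor resize are assumptions, not the paper's exact pipeline.

import numpy as np

def nn_resize(img, size=64):
    # Nearest-neighbor resize to size x size (keeps the sketch dependency-free).
    ys = np.linspace(0, img.shape[0] - 1, size).astype(int)
    xs = np.linspace(0, img.shape[1] - 1, size).astype(int)
    return img[np.ix_(ys, xs)]

def encode_observation(depth, mask):
    """Crop the bounding box of the mask (dynamic zoom), then stack depth and
    mask into a compact 3-channel 64x64 image, echoing the abstract's
    64x64x3 encoding."""
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    d = nn_resize(depth[y0:y1, x0:x1])
    m = nn_resize(mask[y0:y1, x0:x1].astype(float))
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)  # normalize depth to [0, 1]
    return np.stack([d, m, d * m], axis=-1)          # (64, 64, 3)

depth = np.random.rand(480, 640)
mask = np.zeros((480, 640), bool)
mask[200:260, 300:380] = True
print(encode_observation(depth, mask).shape)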
2405.18064 | Dr Peter J. Bentley | Peter J Bentley, Soo Ling Lim, Rajat Mathur, Sid Narang | Automated Real-World Sustainability Data Generation from Images of
Buildings | 6 pages | The 4th International Conference on Electrical, Computer,
Communications and Mechatronics Engineering (ICECCME) 2024 | null | null | cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | When data on building features is unavailable, the task of determining how to
improve that building in terms of carbon emissions becomes infeasible. We show
that from only a set of images, a Large Language Model with appropriate prompt
engineering and domain knowledge can successfully estimate a range of building
features relevant for sustainability calculations. We compare our novel
image-to-data method with a ground truth comprising real building data for 47
apartments and achieve accuracy better than a human performing the same task.
We also demonstrate that the method can generate tailored recommendations to
the owner on how best to improve their properties and discuss methods to scale
the approach.
| [
{
"created": "Tue, 28 May 2024 11:24:20 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Aug 2024 13:41:34 GMT",
"version": "v2"
}
] | 2024-08-29 | [
[
"Bentley",
"Peter J",
""
],
[
"Lim",
"Soo Ling",
""
],
[
"Mathur",
"Rajat",
""
],
[
"Narang",
"Sid",
""
]
] |
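A hedged sketch of the image-to-data idea in the abstract above: assembling a prompt that asks an LLM (the API call itself is omitted) to estimate sustainability-relevant building features from photos. The feature list and wording are illustrative assumptions, not the authors' prompt engineering.

def building_features_prompt(image_descriptions):
    # Hypothetical feature list; the paper's actual features may differ.
    features = ["window glazing type", "wall construction", "roof type",
                "approximate floor area", "heating system"]
    return (
        "You are a building-energy surveyor. From the attached photos "
        "(described below), estimate the following features and state your "
        "confidence for each: " + "; ".join(features) + ".\n\nPhotos:\n"
        + "\n".join(f"- {d}" for d in image_descriptions)
    )

print(building_features_prompt(["front facade, double-hung windows",
                                "boiler closet with gas combi boiler"]))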
2405.18334 | Renzhi Wu | Renzhi Wu and Pramod Chunduri and Dristi J Shah and Ashmitha Julius
Aravind and Ali Payani and Xu Chu and Joy Arulraj and Kexin Rong | SketchQL Demonstration: Zero-shot Video Moment Querying with Sketches | null | Published on International Conference on Very Large Databases 2024 | null | null | cs.DB cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we will present SketchQL, a video database management system
(VDBMS) for retrieving video moments with a sketch-based query interface. This
novel interface allows users to specify object trajectory events with simple
mouse drag-and-drop operations. Users can use trajectories of single objects as
building blocks to compose complex events. Using a pre-trained model that
encodes trajectory similarity, SketchQL achieves zero-shot video moments
retrieval by performing similarity searches over the video to identify clips
that are the most similar to the visual query. In this demonstration, we
introduce the graphic user interface of SketchQL and detail its functionalities
and interaction mechanisms. We also demonstrate the end-to-end usage of
SketchQL from query composition to video moments retrieval using real-world
scenarios.
| [
{
"created": "Tue, 28 May 2024 16:28:51 GMT",
"version": "v1"
},
{
"created": "Sat, 22 Jun 2024 03:47:32 GMT",
"version": "v2"
},
{
"created": "Mon, 1 Jul 2024 02:10:50 GMT",
"version": "v3"
}
] | 2024-07-02 | [
[
"Wu",
"Renzhi",
""
],
[
"Chunduri",
"Pramod",
""
],
[
"Shah",
"Dristi J",
""
],
[
"Aravind",
"Ashmitha Julius",
""
],
[
"Payani",
"Ali",
""
],
[
"Chu",
"Xu",
""
],
[
"Arulraj",
"Joy",
""
],
[
"Rong",
"Kexin",
""
]
] |
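To illustrate the zero-shot retrieval mechanism in the SketchQL abstract above, a toy sketch that slides a window over a tracked object trajectory and scores each clip by cosine similarity to the sketched query. The encode() stand-in replaces SketchQL's pre-trained trajectory-similarity model, so this shows only the control flow, not the system.

import numpy as np

def encode(traj):
    # Stand-in for the pre-trained trajectory encoder: here just a
    # normalized, flattened trajectory (the real encoder is learned).
    t = np.asarray(traj, float)
    t = (t - t.mean(0)) / (t.std(0) + 1e-8)
    return t.flatten()

def retrieve(query_traj, video_traj, window, top_k=2):
    """Slide a window over a video's object trajectory, score each clip by
    cosine similarity to the sketched query, and return the best offsets."""
    q = encode(query_traj)
    scores = []
    for s in range(len(video_traj) - window + 1):
        c = encode(video_traj[s:s + window])
        sim = float(q @ c / (np.linalg.norm(q) * np.linalg.norm(c) + 1e-8))
        scores.append((sim, s))
    return sorted(scores, reverse=True)[:top_k]

query = [(0, 0), (1, 1), (2, 2), (3, 3)]                  # sketched motion
video = [(5, 5), (5, 5), (6, 6), (7, 7), (8, 8), (8, 9)]  # tracked motion
print(retrieve(query, video, window=4))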
2405.18335 | Silvia Garc\'ia-M\'endez | Silvia Garc\'ia M\'endez, F\'atima Leal, Benedita Malheiro, Juan
Carlos Burguillo Rial | Interpretable classification of wiki-review streams | null | (2023) IEEE Access | 10.1109/ACCESS.2023.3342472 | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Wiki articles are created and maintained by a crowd of editors, producing a
continuous stream of reviews. Reviews can take the form of additions, reverts,
or both. This crowdsourcing model is exposed to manipulation since neither
reviews nor editors are automatically screened and purged. To protect articles
against vandalism or damage, the stream of reviews can be mined to classify
reviews and profile editors in real-time. The goal of this work is to
anticipate and explain which reviews to revert. This way, editors are informed
why their edits will be reverted. The proposed method employs stream-based
processing, updating the profiling and classification models on each incoming
event. The profiling uses side and content-based features employing Natural
Language Processing, and editor profiles are incrementally updated based on
their reviews. Since the proposed method relies on self-explainable
classification algorithms, it is possible to understand why a review has been
classified as a revert or a non-revert. In addition, this work contributes an
algorithm for generating synthetic data for class balancing, making the final
classification fairer. The proposed online method was tested with a real data
set from Wikivoyage, which was balanced through the aforementioned synthetic
data generation. The results attained near-90 % values for all evaluation
metrics (accuracy, precision, recall, and F-measure).
| [
{
"created": "Tue, 28 May 2024 16:28:58 GMT",
"version": "v1"
}
] | 2024-05-29 | [
[
"Méndez",
"Silvia García",
""
],
[
"Leal",
"Fátima",
""
],
[
"Malheiro",
"Benedita",
""
],
[
"Rial",
"Juan Carlos Burguillo",
""
]
] |
2405.18346 | Anjanava Biswas | Anjanava Biswas, Wrick Talukdar | Intelligent Clinical Documentation: Harnessing Generative AI for
Patient-Centric Clinical Note Generation | 15 pages, 7 figures | International Journal of Innovative Science and Research
Technology: Vol. 9 (2024): No. 5, 994-1008 | 10.38124/ijisrt/IJISRT24MAY1483 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Comprehensive clinical documentation is crucial for effective healthcare
delivery, yet it poses a significant burden on healthcare professionals,
leading to burnout, increased medical errors, and compromised patient safety.
This paper explores the potential of generative AI (Artificial Intelligence) to
streamline the clinical documentation process, specifically focusing on
generating SOAP (Subjective, Objective, Assessment, Plan) and BIRP (Behavior,
Intervention, Response, Plan) notes. We present a case study demonstrating the
application of natural language processing (NLP) and automatic speech
recognition (ASR) technologies to transcribe patient-clinician interactions,
coupled with advanced prompting techniques to generate draft clinical notes
using large language models (LLMs). The study highlights the benefits of this
approach, including time savings, improved documentation quality, and enhanced
patient-centered care. Additionally, we discuss ethical considerations, such as
maintaining patient confidentiality and addressing model biases, underscoring
the need for responsible deployment of generative AI in healthcare settings.
The findings suggest that generative AI has the potential to revolutionize
clinical documentation practices, alleviating administrative burdens and
enabling healthcare professionals to focus more on direct patient care.
| [
{
"created": "Tue, 28 May 2024 16:43:41 GMT",
"version": "v1"
}
] | 2024-05-29 | [
[
"Biswas",
"Anjanava",
""
],
[
"Talukdar",
"Wrick",
""
]
] |
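A minimal sketch of the note-drafting step described above: assembling an LLM prompt from an ASR transcript to draft a SOAP note. The wording is an illustrative assumption, the paper's prompting techniques are more advanced, and the LLM call itself is omitted.

def soap_prompt(transcript: str) -> str:
    """Assemble a drafting prompt for an LLM from an ASR transcript
    (illustrative wording, not the paper's actual prompt)."""
    return (
        "You are a clinical documentation assistant. From the following "
        "patient-clinician transcript, draft a SOAP note with the sections "
        "Subjective, Objective, Assessment, and Plan. Do not invent findings "
        "that are not stated in the transcript.\n\nTranscript:\n" + transcript
    )

transcript = ("Patient reports three days of sore throat and mild fever. "
              "Temperature 38.1 C. Throat erythematous, no exudate.")
print(soap_prompt(transcript))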
2405.18350 | Silvia Garc\'ia-M\'endez | Silvia Garc\'ia M\'endez, Milagros Fern\'andez Gavilanes, Enrique
Costa Montenegro, Jonathan Juncal Mart\'inez, Francisco Javier Gonz\'alez
Casta\~no, Ehud Reiter | A System for Automatic English Text Expansion | null | (2019) IEEE Access, 7, 123320-123333 | 10.1109/ACCESS.2019.2937505 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present an automatic text expansion system to generate English sentences,
which performs automatic Natural Language Generation (NLG) by combining
linguistic rules with statistical approaches. Here, "automatic" means that the
system can generate coherent and correct sentences from a minimum set of words.
From its inception, the design is modular and adaptable to other languages.
This adaptability is one of its greatest advantages. For English, we have
created the highly precise aLexiE lexicon with wide coverage, which represents
a contribution on its own. We have evaluated the resulting NLG library in an
Augmentative and Alternative Communication (AAC) proof of concept, both
directly (by regenerating corpus sentences) and manually (from annotations)
using a popular corpus in the NLG field. We performed a second analysis by
comparing the quality of text expansion in English to Spanish, using an ad-hoc
Spanish-English parallel corpus. The system might also be applied to other
domains such as report and news generation.
| [
{
"created": "Tue, 28 May 2024 16:48:05 GMT",
"version": "v1"
}
] | 2024-05-29 | [
[
"Méndez",
"Silvia García",
""
],
[
"Gavilanes",
"Milagros Fernández",
""
],
[
"Montenegro",
"Enrique Costa",
""
],
[
"Martínez",
"Jonathan Juncal",
""
],
[
"Castaño",
"Francisco Javier González",
""
],
[
"Reiter",
"Ehud",
""
]
] |
2405.18387 | Ioanna Gogou | Ioanna Gogou, Dimitrios Koutsomitropoulos | A Review and Implementation of Object Detection Models and Optimizations
for Real-time Medical Mask Detection during the COVID-19 Pandemic | null | 2022 International Conference on INnovations in Intelligent
SysTems and Applications (INISTA), Biarritz, France, 2022, pp. 1-6 | 10.1109/INISTA55318.2022.9894232 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Convolutional Neural Networks (CNN) are commonly used for the problem of
object detection thanks to their increased accuracy. Nevertheless, the
performance of CNN-based detection models is ambiguous when detection speed is
considered. To the best of our knowledge, there has not been sufficient
evaluation of the available methods in terms of the speed/accuracy trade-off in
related literature. This work assesses the most fundamental object detection
models on the Common Objects in Context (COCO) dataset with respect to this
trade-off, their memory consumption, and computational and storage cost. Next,
we select a highly efficient model called YOLOv5 to train on the topical and
unexplored dataset of human faces with medical masks, the Properly-Wearing
Masked Faces Dataset (PWMFD), and analyze the benefits of specific optimization
techniques for real-time medical mask detection: transfer learning, data
augmentations, and a Squeeze-and-Excitation attention mechanism. Using our
findings in the context of the COVID-19 pandemic, we propose an optimized model
based on YOLOv5s using transfer learning for the detection of correctly and
incorrectly worn medical masks that surpassed the state-of-the-art model
SE-YOLOv3 by more than two times in speed (69 frames per second) on the PWMFD dataset
while maintaining the same level of mean Average Precision (67%).
| [
{
"created": "Tue, 28 May 2024 17:27:24 GMT",
"version": "v1"
}
] | 2024-05-29 | [
[
"Gogou",
"Ioanna",
""
],
[
"Koutsomitropoulos",
"Dimitrios",
""
]
] |
2405.18511 | Wentian Xu | Wentian Xu, Matthew Moffat, Thalia Seale, Ziyun Liang, Felix Wagner,
Daniel Whitehouse, David Menon, Virginia Newcombe, Natalie Voets, Abhirup
Banerjee, Konstantinos Kamnitsas | Feasibility and benefits of joint learning from MRI databases with
different brain diseases and modalities for segmentation | Accepted to MIDL 2024 | Proceedings of Machine Learning Research, MIDL 2024 | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Models for segmentation of brain lesions in multi-modal MRI are commonly
trained for a specific pathology using a single database with a predefined set
of MRI modalities, determined by a protocol for the specific disease. This work
explores the following open questions: Is it feasible to train a model using
multiple databases that contain varying sets of MRI modalities and annotations
for different brain pathologies? Will this joint learning benefit performance
on the sets of modalities and pathologies available during training? Will it
enable analysis of new databases with different sets of modalities and
pathologies? We develop and compare different methods and show that promising
results can be achieved with appropriate, simple and practical alterations to
the model and training framework. We experiment with 7 databases containing 5
types of brain pathologies and different sets of MRI modalities. Results
demonstrate, for the first time, that joint training on multi-modal MRI
databases with different brain pathologies and sets of modalities is feasible
and offers practical benefits. It enables a single model to segment pathologies
encountered during training in diverse sets of modalities, while facilitating
segmentation of new types of pathologies such as via follow-up fine-tuning. The
insights this study provides into the potential and limitations of this
paradigm should prove useful for guiding future advances in the direction. Code
and pretrained models: https://github.com/WenTXuL/MultiUnet
| [
{
"created": "Tue, 28 May 2024 18:28:10 GMT",
"version": "v1"
}
] | 2024-05-30 | [
[
"Xu",
"Wentian",
""
],
[
"Moffat",
"Matthew",
""
],
[
"Seale",
"Thalia",
""
],
[
"Liang",
"Ziyun",
""
],
[
"Wagner",
"Felix",
""
],
[
"Whitehouse",
"Daniel",
""
],
[
"Menon",
"David",
""
],
[
"Newcombe",
"Virginia",
""
],
[
"Voets",
"Natalie",
""
],
[
"Banerjee",
"Abhirup",
""
],
[
"Kamnitsas",
"Konstantinos",
""
]
] |
2405.18636 | Jiawei Zhang | Jiawei Zhang | ChatGPT as the Marketplace of Ideas: Should Truth-Seeking Be the Goal of
AI Content Governance? | 27 pages, 3 figures | Stanford Law & Policy Review Online 35 (2024) 11-37 | null | null | cs.AI cs.CY cs.ET cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As one of the most enduring metaphors within legal discourse, the marketplace
of ideas has wielded considerable influence over the jurisprudential landscape
for decades. A century after the inception of this theory, ChatGPT emerged as a
revolutionary technological advancement in the twenty-first century. This
research finds that ChatGPT effectively manifests the marketplace metaphor. It
not only instantiates the promises envisaged by generations of legal scholars
but also lays bare the perils discerned through sustained academic critique.
Specifically, the workings of ChatGPT and the marketplace of ideas theory
exhibit at least four common features: arena, means, objectives, and flaws.
These shared attributes are sufficient to render ChatGPT historically the most
qualified engine for actualizing the marketplace of ideas theory.
The comparison of the marketplace theory and ChatGPT merely marks a starting
point. A more meaningful undertaking entails reevaluating and reframing both
internal and external AI policies by referring to the accumulated experience,
insights, and suggestions researchers have raised to fix the marketplace
theory. Here, a pivotal issue is: should truth-seeking be set as the goal of AI
content governance? Given the unattainability of the absolute truth-seeking
goal, I argue against adopting zero-risk policies. Instead, a more judicious
approach would be to embrace a knowledge-based alternative wherein large
language models (LLMs) are trained to generate competing and divergent
viewpoints based on sufficient justifications. This research also argues that
so-called AI content risks are not created by AI companies but are inherent in
the entire information ecosystem. Thus, the burden of managing these risks
should be distributed among different social actors, rather than being solely
shouldered by chatbot companies.
| [
{
"created": "Tue, 28 May 2024 22:38:24 GMT",
"version": "v1"
}
] | 2024-05-30 | [
[
"Zhang",
"Jiawei",
""
]
] |
2405.18742 | Dan Ventura | Reed Perkins and Dan Ventura | Musical Phrase Segmentation via Grammatical Induction | Extended version of a paper appearing in the proceedings of IJCAI
2024 that includes additional material in an appendix. Please cite the IJCAI
version | Proceedings of the International Joint Conference on Artificial
Intelligence, 2024 | null | null | cs.AI cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | We outline a solution to the challenge of musical phrase segmentation that
uses grammatical induction algorithms, a class of algorithms which infer a
context-free grammar from an input sequence. We analyze the performance of five
grammatical induction algorithms on three datasets using various musical
viewpoint combinations. Our experiments show that the LONGESTFIRST algorithm
achieves the best F1 scores across all three datasets and that input encodings
that include the duration viewpoint result in the best performance.
| [
{
"created": "Wed, 29 May 2024 04:04:36 GMT",
"version": "v1"
}
] | 2024-05-30 | [
[
"Perkins",
"Reed",
""
],
[
"Ventura",
"Dan",
""
]
] |
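To make the grammatical induction idea above concrete, a toy Python sketch that infers rules by repeatedly replacing the most frequent digram (Re-Pair flavor). The paper's five algorithms, including LONGESTFIRST, use different selection rules; this only illustrates how repeats become grammar rules that can hint at phrase boundaries.

from collections import Counter

def induce_grammar(seq):
    """Toy grammatical induction by repeated digram substitution. Each
    repeated pair of symbols becomes a new non-terminal rule."""
    rules, next_id = {}, 0
    seq = list(seq)
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        pair, count = pairs.most_common(1)[0] if pairs else (None, 0)
        if count < 2:
            break
        nt = f"R{next_id}"
        next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

# Notes of a toy phrase; repeats become rules, hinting at phrase structure.
print(induce_grammar(list("abcabcxabc")))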
2405.18823 | Hallah Butt | Hallah Shahid Butt, Benjamin Sch\"afer | Why Reinforcement Learning in Energy Systems Needs Explanations | null | ExEn Workshop 2024 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | With economic development, the complexity of infrastructure has increased
drastically. Similarly, with the shift from fossil fuels to renewable sources
of energy, there is a dire need for systems that not only predict and
forecast accurately but also help in understanding how the predictions are
made. Artificial intelligence and machine learning techniques have helped in
finding well-performing solutions to different problems in the energy sector.
However, the uptake of state-of-the-art techniques like reinforcement learning
is, perhaps surprisingly, not convincing. This paper discusses the application
of reinforcement learning techniques in energy systems and how explanations of
these models can be helpful.
| [
{
"created": "Wed, 29 May 2024 07:09:00 GMT",
"version": "v1"
}
] | 2024-05-30 | [
[
"Butt",
"Hallah Shahid",
""
],
[
"Schäfer",
"Benjamin",
""
]
] |
2405.18845 | Silvia Garc\'ia-M\'endez | Silvia Garc\'ia M\'endez, F\'atima Leal, Benedita Malheiro, Juan
Carlos Burguillo Rial, Bruno Veloso, Adriana E. Chis, Horacio Gonz\'alez
V\'elez | Simulation, Modelling and Classification of Wiki Contributors: Spotting
The Good, The Bad, and The Ugly | null | Simulation Modelling Practice and Theory, 120, 102616 (2022) | 10.1016/j.simpat.2022.102616 | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Data crowdsourcing is a data acquisition process where groups of voluntary
contributors feed platforms with highly relevant data ranging from news,
comments, and media to knowledge and classifications. It typically processes
user-generated data streams to provide and refine popular services such as
wikis, collaborative maps, e-commerce sites, and social networks. Nevertheless,
this modus operandi raises severe concerns regarding ill-intentioned data
manipulation in adversarial environments. This paper presents a simulation,
modelling, and classification approach to automatically identify human and
non-human (bots) as well as benign and malign contributors by using data
fabrication to balance classes within experimental data sets, data stream
modelling to build and update contributor profiles and, finally, autonomic data
stream classification. By employing WikiVoyage - a free worldwide wiki travel
guide open to contribution from the general public - as a testbed, our approach
proves to significantly boost the confidence and quality of the classifier by
using a class-balanced data stream, comprising both real and synthetic data.
Our empirical results show that the proposed method distinguishes between
benign and malign bots as well as human contributors with a classification
accuracy of up to 92%.
| [
{
"created": "Wed, 29 May 2024 07:56:08 GMT",
"version": "v1"
}
] | 2024-05-30 | [
[
"Méndez",
"Silvia García",
""
],
[
"Leal",
"Fátima",
""
],
[
"Malheiro",
"Benedita",
""
],
[
"Rial",
"Juan Carlos Burguillo",
""
],
[
"Veloso",
"Bruno",
""
],
[
"Chis",
"Adriana E.",
""
],
[
"Vélez",
"Horacio González",
""
]
] |
2405.18872 | Qizhou Chen | Qizhou Chen, Qing Shao | Single image super-resolution based on trainable feature matching
attention network | 35pages, 12 figures | Pattern Recognition, 2024 | 10.1016/j.patcog.2024.110289 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Convolutional Neural Networks (CNNs) have been widely employed for image
Super-Resolution (SR) in recent years. Various techniques enhance SR
performance by altering CNN structures or incorporating improved self-attention
mechanisms. Interestingly, these advancements share a common trait. Instead of
explicitly learning high-frequency details, they learn an implicit feature
processing mode that utilizes weighted sums of a feature map's own elements for
reconstruction, akin to convolution and non-local attention. In contrast, early
dictionary-based approaches learn feature decompositions explicitly to match
and rebuild Low-Resolution (LR) features. Building on this analysis, we
introduce Trainable Feature Matching (TFM) to amalgamate this explicit feature
learning into CNNs, augmenting their representation capabilities. Within TFM,
trainable feature sets are integrated to explicitly learn features from
training images through feature matching. Furthermore, we integrate non-local
and channel attention into our proposed Trainable Feature Matching Attention
Network (TFMAN) to further enhance SR performance. To alleviate the
computational demands of non-local operations, we propose a streamlined variant
called Same-size-divided Region-level Non-Local (SRNL). SRNL conducts non-local
computations in parallel on blocks uniformly divided from the input feature
map. The efficacy of TFM and SRNL is validated through ablation studies and
module explorations. We employ a recurrent convolutional network as the
backbone of our TFMAN to optimize parameter utilization. Comprehensive
experiments on benchmark datasets demonstrate that TFMAN achieves superior
results in most comparisons while using fewer parameters. The code is available
at https://github.com/qizhou000/tfman.
| [
{
"created": "Wed, 29 May 2024 08:31:54 GMT",
"version": "v1"
}
] | 2024-05-30 | [
[
"Chen",
"Qizhou",
""
],
[
"Shao",
"Qing",
""
]
] |
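A small PyTorch sketch of the block-divided non-local idea (SRNL) from the abstract above: split the feature map into uniform, same-size blocks and run attention inside each block in parallel. The query/key/value projections are omitted (identity) for brevity, so this shows the regional division rather than the trained module.

import torch

def srnl(x, block=8):
    """Same-size-divided Region-level Non-Local sketch: per-block attention
    over a feature map x of shape (b, c, h, w), h and w divisible by block."""
    b, c, h, w = x.shape
    # (b, c, h, w) -> (b * num_blocks, block*block, c)
    xb = x.unfold(2, block, block).unfold(3, block, block)
    xb = xb.permute(0, 2, 3, 4, 5, 1).reshape(-1, block * block, c)
    attn = torch.softmax(xb @ xb.transpose(1, 2) / c ** 0.5, dim=-1)
    yb = attn @ xb  # non-local aggregation within each block
    yb = yb.reshape(b, h // block, w // block, block, block, c)
    y = yb.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)
    return x + y    # residual connection

x = torch.randn(1, 16, 32, 32)
print(srnl(x).shape)  # torch.Size([1, 16, 32, 32])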
2405.18924 | Moises Diaz | Miguel A. Ferrer, Abhijit Das, Moises Diaz, Aythami Morales, Cristina
Carmona-Duarte, Umapada Pal | MDIW-13: a New Multi-Lingual and Multi-Script Database and Benchmark for
Script Identification | null | Cognitive Computation, Volume 16, pages 131 to 157,(2024) | 10.1007/s12559-023-10193-w | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Script identification plays a vital role in applications that involve
handwriting and document analysis within a multi-script and multi-lingual
environment. Moreover, it exhibits a profound connection with human cognition.
This paper provides a new database for benchmarking script identification
algorithms, which contains both printed and handwritten documents collected
from a wide variety of scripts, such as Arabic, Bengali (Bangla), Gujarati,
Gurmukhi, Devanagari, Japanese, Kannada, Malayalam, Oriya, Roman, Tamil,
Telugu, and Thai. The dataset consists of 1,135 documents scanned from local
newspaper and handwritten letters as well as notes from different native
writers. Further, these documents are segmented into lines and words,
comprising a total of 13,979 and 86,655 lines and words, respectively, in the
dataset. Easy-to-go benchmarks are proposed with handcrafted and deep learning
methods. The benchmark includes results at the document, line, and word levels
with printed and handwritten documents. Results of script identification
independent of the document/line/word level and independent of the
printed/handwritten letters are also given. The new multi-lingual database is
expected to foster new script identifiers, present various challenges,
including identifying handwritten and printed samples, and serve as a foundation
for future research in script identification based on the reported results of
the three benchmarks.
| [
{
"created": "Wed, 29 May 2024 09:29:09 GMT",
"version": "v1"
}
] | 2024-05-30 | [
[
"Ferrer",
"Miguel A.",
""
],
[
"Das",
"Abhijit",
""
],
[
"Diaz",
"Moises",
""
],
[
"Morales",
"Aythami",
""
],
[
"Carmona-Duarte",
"Cristina",
""
],
[
"Pal",
"Umapada",
""
]
] |
2405.19081 | Moises Diaz | Jose J. Quintana, Miguel A. Ferrer, Moises Diaz, Jose J. Feo, Adam
Wolniakowski and Konstantsin Miatliuk | Uniform vs. Lognormal Kinematics in Robots: Perceptual Preferences for
Robotic Movements | null | Applied Sciences Volume 12 Issue 23 (2022) | 10.3390/app122312045 | null | cs.RO cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Collaborative robots or cobots interact with humans in a common work
environment. In cobots, one under-investigated but important issue is related
to their movement and how it is perceived by humans. This paper tries to
analyze whether humans prefer a robot moving in a human or in a robotic
fashion. To this end, the present work lays out what differentiates the
movement performed by an industrial robotic arm from that performed by a human
one. The main difference lies in the fact that the robotic movement has a
trapezoidal speed profile, while for the human arm, the speed profile is
bell-shaped and during complex movements, it can be considered as a sum of
superimposed bell-shaped movements. Based on the lognormality principle, a
procedure was developed for a robotic arm to perform human-like movements. Both
speed profiles were implemented in two industrial robots, namely, an ABB IRB
120 and a Universal Robot UR3. Three tests were used to study the subjects'
preference when seeing both movements, and another analyzed the same preference when
interacting with the robot by touching its ends with their fingers.
| [
{
"created": "Wed, 29 May 2024 13:36:47 GMT",
"version": "v1"
}
] | 2024-05-30 | [
[
"Quintana",
"Jose J.",
""
],
[
"Ferrer",
"Miguel A.",
""
],
[
"Diaz",
"Moises",
""
],
[
"Feo",
"Jose J.",
""
],
[
"Wolniakowski",
"Adam",
""
],
[
"Miatliuk",
"Konstantsin",
""
]
] |
2405.19224 | Anna Breger | Anna Breger, Clemens Karner, Ian Selby, Janek Gr\"ohl, S\"oren
Dittmer, Edward Lilley, Judith Babar, Jake Beckford, Thomas R Else, Timothy J
Sadler, Shahab Shahipasand, Arthikkaa Thavakumar, Michael Roberts,
Carola-Bibiane Sch\"onlieb | A study on the adequacy of common IQA measures for medical images | null | Springer Lecture Notes in Electrical Engineering, MICAD conference
(2024) | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Image quality assessment (IQA) is standard practice in the development stage
of novel machine learning algorithms that operate on images. The most commonly
used IQA measures have been developed and tested for natural images, but not in
the medical setting. Reported inconsistencies arising in medical images are not
surprising, as they have different properties than natural images. In this
study, we test the applicability of common IQA measures for medical image data
by comparing their assessment to manually rated chest X-ray (5 experts) and
photoacoustic image data (2 experts). Moreover, we include supplementary
studies on grayscale natural images and accelerated brain MRI data. The results
of all experiments show a similar outcome in line with previous findings for
medical images: PSNR and SSIM in the default setting are in the lower range of
the result list and HaarPSI outperforms the other tested measures in the
overall performance. Also among the top performers in our medical experiments
are the full-reference measures FSIM, LPIPS and MS-SSIM. Generally, the results
on natural images yield considerably higher correlations, suggesting that
additional employment of tailored IQA measures for medical imaging algorithms
is needed.
| [
{
"created": "Wed, 29 May 2024 16:04:03 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Aug 2024 12:05:44 GMT",
"version": "v2"
},
{
"created": "Sun, 6 Oct 2024 16:06:54 GMT",
"version": "v3"
}
] | 2024-10-16 | [
[
"Breger",
"Anna",
""
],
[
"Karner",
"Clemens",
""
],
[
"Selby",
"Ian",
""
],
[
"Gröhl",
"Janek",
""
],
[
"Dittmer",
"Sören",
""
],
[
"Lilley",
"Edward",
""
],
[
"Babar",
"Judith",
""
],
[
"Beckford",
"Jake",
""
],
[
"Else",
"Thomas R",
""
],
[
"Sadler",
"Timothy J",
""
],
[
"Shahipasand",
"Shahab",
""
],
[
"Thavakumar",
"Arthikkaa",
""
],
[
"Roberts",
"Michael",
""
],
[
"Schönlieb",
"Carola-Bibiane",
""
]
] |
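For reference alongside the IQA study above, a minimal scikit-image sketch computing PSNR and SSIM in their default settings on synthetic images; for real evaluations, substitute the domain's images and the correct data range.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Compare a reference image with a degraded version using the two measures
# the study found weakest on medical data in their default settings.
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
deg = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)

print("PSNR:", peak_signal_noise_ratio(ref, deg, data_range=1.0))
print("SSIM:", structural_similarity(ref, deg, data_range=1.0))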
2405.19255 | Haowen Xu | Jose Tupayachi, Haowen Xu, Olufemi A. Omitaomu, Mustafa Can Camur,
Aliza Sharmin, Xueping Li | Towards Next-Generation Urban Decision Support Systems through
AI-Powered Construction of Scientific Ontology using Large Language Models --
A Case in Optimizing Intermodal Freight Transportation | null | Smart Cities, 2024, 7(5), 2392-2421 | 10.3390/smartcities7050094 | null | cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The incorporation of Artificial Intelligence (AI) models into various
optimization systems is on the rise. Yet, addressing complex urban and
environmental management problems normally requires in-depth domain science and
informatics expertise. This expertise is essential for deriving data- and
simulation-driven insights for informed decision support. In this context, we
investigate the potential of leveraging the pre-trained Large Language Models
(LLMs). By adopting ChatGPT API as the reasoning core, we outline an integrated
workflow that encompasses natural language processing, methontology-based
prompt tuning, and transformers. This workflow automates the creation of
scenario-based ontology using existing research articles and technical manuals
of urban datasets and simulations. The outcomes of our methodology are
knowledge graphs in widely adopted ontology languages (e.g., OWL, RDF, SPARQL).
These facilitate the development of urban decision support systems by enhancing
the data and metadata modeling, the integration of complex datasets, the
coupling of multi-domain simulation models, and the formulation of
decision-making metrics and workflow. The feasibility of our methodology is
evaluated through a comparative analysis that juxtaposes our AI-generated
ontology with the well-known Pizza Ontology employed in tutorials for popular
ontology software (e.g., prot\'eg\'e). We close with a real-world case study of
optimizing the complex urban system of multi-modal freight transportation by
generating ontologies of various domain data and simulations to support
informed decision-making.
| [
{
"created": "Wed, 29 May 2024 16:40:31 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Aug 2024 21:03:04 GMT",
"version": "v2"
},
{
"created": "Fri, 6 Sep 2024 20:04:22 GMT",
"version": "v3"
}
] | 2024-09-10 | [
[
"Tupayachi",
"Jose",
""
],
[
"Xu",
"Haowen",
""
],
[
"Omitaomu",
"Olufemi A.",
""
],
[
"Camur",
"Mustafa Can",
""
],
[
"Sharmin",
"Aliza",
""
],
[
"Li",
"Xueping",
""
]
] |
2405.19331 | Simon Giebenhain | Simon Giebenhain, Tobias Kirschstein, Martin R\"unz, Lourdes Agapito,
Matthias Nie{\ss}ner | NPGA: Neural Parametric Gaussian Avatars | Project Page: see https://simongiebenhain.github.io/NPGA/ ; Youtube
Video: see https://youtu.be/t0S0OK7WnA4 | SIGGRAPH Asia 2024 Conference Papers (SA Conference Papers '24),
December 3-6, 2024, Tokyo, Japan | 10.1145/3680528.3687689 | null | cs.CV cs.AI cs.GR | http://creativecommons.org/licenses/by/4.0/ | The creation of high-fidelity, digital versions of human heads is an
important stepping stone in the process of further integrating virtual
components into our everyday lives. Constructing such avatars is a challenging
research problem, due to a high demand for photo-realism and real-time
rendering performance. In this work, we propose Neural Parametric Gaussian
Avatars (NPGA), a data-driven approach to create high-fidelity, controllable
avatars from multi-view video recordings. We build our method around 3D
Gaussian splatting for its highly efficient rendering and to inherit the
topological flexibility of point clouds. In contrast to previous work, we
condition our avatars' dynamics on the rich expression space of neural
parametric head models (NPHM), instead of mesh-based 3DMMs. To this end, we
distill the backward deformation field of our underlying NPHM into forward
deformations which are compatible with rasterization-based rendering. All
remaining fine-scale, expression-dependent details are learned from the
multi-view videos. For increased representational capacity of our avatars, we
propose per-Gaussian latent features that condition each primitive's dynamic
behavior. To regularize this increased dynamic expressivity, we propose
Laplacian terms on the latent features and predicted dynamics. We evaluate our
method on the public NeRSemble dataset, demonstrating that NPGA significantly
outperforms the previous state-of-the-art avatars on the self-reenactment task
by 2.6 PSNR. Furthermore, we demonstrate accurate animation capabilities from
real-world monocular videos.
| [
{
"created": "Wed, 29 May 2024 17:58:09 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Sep 2024 17:41:21 GMT",
"version": "v2"
}
] | 2024-09-16 | [
[
"Giebenhain",
"Simon",
""
],
[
"Kirschstein",
"Tobias",
""
],
[
"Rünz",
"Martin",
""
],
[
"Agapito",
"Lourdes",
""
],
[
"Nießner",
"Matthias",
""
]
] |
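The Laplacian term on per-Gaussian latent features can be sketched as a smoothness penalty over a k-nearest-neighbor graph; a minimal PyTorch version, assuming precomputed neighbor indices and uniform weights (not the paper's exact regularizer):

import torch

def laplacian_penalty(latents: torch.Tensor, nbr_idx: torch.Tensor) -> torch.Tensor:
    """latents: (N, D) per-Gaussian features; nbr_idx: (N, K) neighbor indices."""
    nbr_mean = latents[nbr_idx].mean(dim=1)   # gather (N, K, D), average to (N, D)
    return ((latents - nbr_mean) ** 2).sum(dim=1).mean()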
2405.19442 | Rongjun Qin | Ningli Xu, Rongjun Qin | Large-scale DSM registration via motion averaging | 9 Figures | ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial
Information Sciences. X-1-2024 | 10.5194/isprs-annals-x-1-2024-275-2024 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating wide-area digital surface models (DSMs) requires registering a
large number of individual, partially overlapping DSMs. This presents a
challenging problem for a typical registration algorithm, since considering a
large number of observations from these multiple DSMs can easily cause memory
overflow. Sequential registration algorithms, although they can significantly
reduce the computation, are especially vulnerable to pairs with small overlap,
leading to large error accumulation. In this work, we propose a novel solution
that casts the DSM registration task as a motion averaging problem: pairs of
overlapping DSMs are registered to build a scene graph, with
edges representing relative poses between DSMs. Specifically, based on the grid
structure of the large DSM, the pair-wise registration is performed using a
novel nearest neighbor search method. We show that the scene graph can be
optimized via an extremely fast motion averaging algorithm with O(N) complexity
(N refers to the number of images). Evaluation on high-resolution
satellite-derived DSMs demonstrates significant improvements in computation and
accuracy.
| [
{
"created": "Wed, 29 May 2024 18:40:11 GMT",
"version": "v1"
},
{
"created": "Sun, 2 Jun 2024 04:16:01 GMT",
"version": "v2"
}
] | 2024-06-04 | [
[
"Xu",
"Ningli",
""
],
[
"Qin",
"Rongjun",
""
]
] |
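For intuition, the motion-averaging formulation can be reduced to translations only, where it becomes a sparse linear least-squares problem over the scene graph. The sketch below assumes relative 3-D offsets on the edges and fixes tile 0 as the gauge; rotations and the authors' O(N) solver are omitted:

import numpy as np

def average_translations(n_tiles, edges):
    """edges: list of (i, j, t_ij), t_ij the relative offset t_j - t_i."""
    A = np.zeros((3 * len(edges) + 3, 3 * n_tiles))
    b = np.zeros(3 * len(edges) + 3)
    for row, (i, j, t_ij) in enumerate(edges):
        for d in range(3):
            A[3 * row + d, 3 * j + d] = 1.0
            A[3 * row + d, 3 * i + d] = -1.0
            b[3 * row + d] = t_ij[d]
    A[-3:, :3] = np.eye(3)  # gauge fix: pin tile 0 at the origin
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x.reshape(n_tiles, 3)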
2405.19479 | Harini Suresh | Harini Suresh, Emily Tseng, Meg Young, Mary L. Gray, Emma Pierson,
Karen Levy | Participation in the age of foundation models | 13 pages, 2 figures. Appeared at FAccT '24 | In The 2024 ACM Conference on Fairness, Accountability, and
Transparency (FAccT '24), June 3-6, 2024, Rio de Janeiro, Brazil. ACM, New
York, NY, USA, 13 pages | 10.1145/3630106.3658992 | null | cs.CY cs.AI cs.HC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Growing interest and investment in the capabilities of foundation models have
positioned such systems to impact a wide array of public services. Alongside
these opportunities is the risk that these systems reify existing power
imbalances and cause disproportionate harm to marginalized communities.
Participatory approaches hold promise to instead lend agency and
decision-making power to marginalized stakeholders. But existing approaches in
participatory AI/ML are typically deeply grounded in context - how do we apply
these approaches to foundation models, which are, by design, disconnected from
context? Our paper interrogates this question.
First, we examine existing attempts at incorporating participation into
foundation models. We highlight the tension between participation and scale,
demonstrating that it is intractable for impacted communities to meaningfully
shape a foundation model that is intended to be universally applicable. In
response, we develop a blueprint for participatory foundation models that
identifies more local, application-oriented opportunities for meaningful
participation. In addition to the "foundation" layer, our framework proposes
the "subfloor'' layer, in which stakeholders develop shared technical
infrastructure, norms and governance for a grounded domain, and the "surface''
layer, in which affected communities shape the use of a foundation model for a
specific downstream task. The intermediate "subfloor'' layer scopes the range
of potential harms to consider, and affords communities more concrete avenues
for deliberation and intervention. At the same time, it avoids duplicative
effort by scaling input across relevant use cases. Through three case studies
in clinical care, financial services, and journalism, we illustrate how this
multi-layer model can create more meaningful opportunities for participation
than solely intervening at the foundation layer.
| [
{
"created": "Wed, 29 May 2024 19:53:23 GMT",
"version": "v1"
}
] | 2024-05-31 | [
[
"Suresh",
"Harini",
""
],
[
"Tseng",
"Emily",
""
],
[
"Young",
"Meg",
""
],
[
"Gray",
"Mary L.",
""
],
[
"Pierson",
"Emma",
""
],
[
"Levy",
"Karen",
""
]
] |
2405.19808 | Herman Cappelen | Herman Cappelen and Josh Dever | AI with Alien Content and Alien Metasemantics | 20 pages, book chapter | in Ernie Lepore and Luvell Anderson (Eds), The Oxford Handbook of
Applied Philosophy of Language, Oxford Handbooks (2024) | 10.1093/oxfordhb/9780192844118.013.47 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | AlphaGo plays chess and Go in a creative and novel way. It is natural for us
to attribute contents to it, such as that it does not view being several pawns
behind as bad if it has more board space. The framework introduced in
Cappelen and Dever (2021) provides a way of thinking about the semantics and
the metasemantics of AI content: does AlphaGo entertain contents like this, and
if so, in virtue of what does a given state of the program mean that particular
content? One salient question Cappelen and Dever didn't consider was the
possibility of alien content. Alien content is content that is not or cannot be
expressed by human beings. It's highly plausible that AlphaGo, or any other
sophisticated AI system, expresses alien contents. That this is so, moreover,
is plausibly a metasemantic fact: a fact that has to do with how AI comes to
entertain content in the first place, one that will heed the vastly different
etiology of AI and human content. This chapter explores the question of alien
content in AI from a semantic and metasemantic perspective. It lays out the
logical space of possible responses to the semantic and metasemantic questions
alien content poses, considers whether and how we humans could communicate with
entities who express alien content, and points out that getting clear about
such questions might be important for more 'applied' issues in the philosophy
of AI, such as existential risk and XAI.
| [
{
"created": "Thu, 30 May 2024 08:17:15 GMT",
"version": "v1"
},
{
"created": "Sun, 2 Jun 2024 22:27:50 GMT",
"version": "v2"
}
] | 2024-06-04 | [
[
"Cappelen",
"Herman",
""
],
[
"Dever",
"Josh",
""
]
] |
2405.19837 | Margarida Romero | Margarida Romero (LINE, COMUE UCA, ULaval, Mnemosyne) | Lifelong learning challenges in the era of artificial intelligence: a
computational thinking perspective | null | IRMBAM, Ipag, Jul 2024, Nice, France | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid advancement of artificial intelligence (AI) has brought significant
challenges for education and for the workforce skills required to take advantage of
AI for human-AI collaboration in the workplace. As AI continues to reshape
industries and job markets, the need to define how AI literacy can be
considered in lifelong learning has become increasingly critical (Cetindamar et
al., 2022; Laupichler et al., 2022; Romero et al., 2023). Like any new
technology, AI is the subject of both hopes and fears, and what it entails
today presents major challenges (Cugurullo \& Acheampong, 2023; Villani et al.,
2018). It also raises profound questions about our own humanity. Will the
machine surpass the intelligence of the humans who designed it? What will be
the relationship between so-called AI and our human intelligences? How could
human-AI collaboration be regulated in a way that serves the Sustainable
Development Goals (SDGs)? This paper provides a review of the challenges of
lifelong learning in the era of AI from a computational thinking, critical
thinking, and creative competencies perspective, highlighting the implications
for management and leadership in organizations.
| [
{
"created": "Thu, 30 May 2024 08:46:11 GMT",
"version": "v1"
}
] | 2024-05-31 | [
[
"Romero",
"Margarida",
"",
"LINE, COMUE UCA, ULaval, Mnemosyne"
]
] |
2405.19973 | Yang-Hui He | Yang-Hui He | A Triumvirate of AI Driven Theoretical Discovery | 14 pages, under consideration for Nature Review Physics | nature reviews physics Aug 5, 2024 | 10.1038/s42254-024-00740-1 | null | math.HO cs.AI hep-th physics.hist-ph | http://creativecommons.org/licenses/by/4.0/ | Recent years have seen the dramatic rise of the usage of AI algorithms in
pure mathematics and fundamental sciences such as theoretical physics. This is
perhaps counter-intuitive, since the mathematical sciences require rigorous
definitions, derivations, and proofs, in contrast to the experimental sciences,
which rely on the modelling of data with error bars. In this Perspective, we
categorize the approaches to mathematical discovery as "top-down", "bottom-up"
and "meta-mathematics", as inspired by historical examples. We review some of
the progress over the last few years, comparing and contrasting both the
advances and the shortcomings of each approach. We argue that while the
theorist is in no way in danger of being replaced by AI in the near future, the
hybrid of human expertise and AI algorithms will become an integral part of
theoretical discovery.
| [
{
"created": "Thu, 30 May 2024 11:57:00 GMT",
"version": "v1"
}
] | 2024-08-07 | [
[
"He",
"Yang-Hui",
""
]
] |
2405.20172 | Alaa Nfissi | Alaa Nfissi, Wassim Bouachir, Nizar Bouguila, Brian Mishara | Iterative Feature Boosting for Explainable Speech Emotion Recognition | Published in: 2023 International Conference on Machine Learning and
Applications (ICMLA) | 2023 International Conference on Machine Learning and Applications
(ICMLA), Jacksonville, FL, USA, 2023, pp. 543-549 | 10.1109/ICMLA58977.2023.00081 | null | cs.SD cs.AI cs.CL cs.LG eess.AS | http://creativecommons.org/licenses/by/4.0/ | In speech emotion recognition (SER), using predefined features without
considering their practical importance may lead to high-dimensional datasets,
including redundant and irrelevant information. Consequently, high-dimensional
learning often results in decreasing model accuracy while increasing
computational complexity. Our work underlines the importance of carefully
considering and analyzing features in order to build efficient SER systems. We
present a new supervised SER method based on an efficient feature engineering
approach. We pay particular attention to the explainability of results to
evaluate feature relevance and refine feature sets. This is performed
iteratively through a feature evaluation loop, using Shapley values to boost
feature selection and improve overall framework performance. Our approach thus
balances model performance and transparency.
The proposed method outperforms human-level performance (HLP) and
state-of-the-art machine learning methods in emotion recognition on the TESS
dataset. The source code of this paper is publicly available at
https://github.com/alaaNfissi/Iterative-Feature-Boosting-for-Explainable-Speech-Emotion-Recognition.
| [
{
"created": "Thu, 30 May 2024 15:44:27 GMT",
"version": "v1"
},
{
"created": "Fri, 31 May 2024 01:59:20 GMT",
"version": "v2"
},
{
"created": "Wed, 5 Jun 2024 22:28:13 GMT",
"version": "v3"
}
] | 2024-06-07 | [
[
"Nfissi",
"Alaa",
""
],
[
"Bouachir",
"Wassim",
""
],
[
"Bouguila",
"Nizar",
""
],
[
"Mishara",
"Brian",
""
]
] |
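The Shapley-guided refinement loop can be sketched schematically (this is an illustration, not the authors' framework): retrain, rank features by mean absolute SHAP value, and prune the least relevant while validation accuracy holds. The classifier and pruning schedule are assumptions, and shap's output shape varies across versions:

import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def boost_features(X_tr, y_tr, X_va, y_va, drop_per_iter=5):
    keep = np.arange(X_tr.shape[1])
    best_keep, best_acc = keep, 0.0
    while keep.size > drop_per_iter:
        model = RandomForestClassifier(random_state=0).fit(X_tr[:, keep], y_tr)
        acc = accuracy_score(y_va, model.predict(X_va[:, keep]))
        if acc < best_acc:            # stop once pruning starts to hurt
            break
        best_keep, best_acc = keep, acc
        sv = np.asarray(shap.TreeExplainer(model).shap_values(X_va[:, keep]))
        # collapse all axes but the feature axis; shap's output shape differs
        # across versions, so this line may need adapting
        importance = np.abs(sv).mean(axis=tuple(range(sv.ndim - 1)))
        keep = keep[np.argsort(importance)[drop_per_iter:]]  # drop least relevant
    return best_keep, best_acc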
2405.20501 | Shivendra Agrawal | Shivendra Agrawal, Suresh Nayak, Ashutosh Naik, and Bradley Hayes | ShelfHelp: Empowering Humans to Perform Vision-Independent Manipulation
Tasks with a Socially Assistive Robotic Cane | 8 pages, 14 figures and charts | In AAMAS (pp. 1514-1523) 2023 | 10.5555/3545946.3598805 | null | cs.RO cs.AI cs.CV cs.HC cs.LG | http://creativecommons.org/licenses/by/4.0/ | The ability to shop independently, especially in grocery stores, is important
for maintaining a high quality of life. This can be particularly challenging
for people with visual impairments (PVI). Stores carry thousands of products,
with approximately 30,000 new products introduced each year in the US market
alone, presenting a challenge even for modern computer vision solutions.
Through this work, we present a proof-of-concept socially assistive robotic
system we call ShelfHelp, and propose novel technical solutions for enhancing
instrumented canes traditionally meant for navigation tasks with additional
capability within the domain of shopping. ShelfHelp includes a novel visual
product locator algorithm designed for use in grocery stores and a novel
planner that autonomously issues verbal manipulation guidance commands to guide
the user during product retrieval. Through a human subjects study, we show the
system's success in locating and providing effective manipulation guidance to
retrieve desired products with novice users. We compare two autonomous verbal
guidance modes, both achieving performance comparable to a human-assistance
baseline, and present encouraging findings that validate our system's
efficiency and effectiveness through positive subjective metrics, including
competence, intelligence, and ease of use.
| [
{
"created": "Thu, 30 May 2024 21:42:54 GMT",
"version": "v1"
}
] | 2024-06-03 | [
[
"Agrawal",
"Shivendra",
""
],
[
"Nayak",
"Suresh",
""
],
[
"Naik",
"Ashutosh",
""
],
[
"Hayes",
"Bradley",
""
]
] |
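The verbal manipulation guidance can be caricatured with a toy rule that maps the hand-to-product offset onto a single spoken command along the dominant axis. The thresholds, frame convention, and command vocabulary below are invented for illustration and are not ShelfHelp's planner:

import numpy as np

def guidance_command(hand_xyz, product_xyz, tol=0.04):
    """Return one spoken command from hand and product positions (metres)."""
    d = np.asarray(product_xyz) - np.asarray(hand_xyz)
    axes = [("right", "left", d[0]), ("forward", "back", d[1]), ("up", "down", d[2])]
    pos, neg, v = max(axes, key=lambda a: abs(a[2]))  # dominant axis wins
    if np.linalg.norm(d) < tol:
        return "grasp"
    return f"move {pos if v > 0 else neg} {abs(v) * 100:.0f} centimetres"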
2405.20643 | Nerea Aranjuelo | Nerea Aranjuelo, Siyu Huang, Ignacio Arganda-Carreras, Luis Unzueta,
Oihana Otaegui, Hanspeter Pfister, Donglai Wei | Learning Gaze-aware Compositional GAN | Accepted by ETRA 2024 as Full paper, and as journal paper in
Proceedings of the ACM on Computer Graphics and Interactive Techniques | Proceedings of the ACM on Computer Graphics and Interactive
Techniques, 2024 | 10.1145/3654706 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Gaze-annotated facial data is crucial for training deep neural networks
(DNNs) for gaze estimation. However, obtaining these data is labor-intensive
and requires specialized equipment due to the challenge of accurately
annotating the gaze direction of a subject. In this work, we present a
generative framework to create annotated gaze data by leveraging the benefits
of labeled and unlabeled data sources. We propose a Gaze-aware Compositional
GAN that learns to generate annotated facial images from a limited labeled
dataset. Then we transfer this model to an unlabeled data domain to take
advantage of the diversity it provides. Experiments demonstrate our approach's
effectiveness in generating within-domain image augmentations in the ETH-XGaze
dataset and cross-domain augmentations in the CelebAMask-HQ domain for
gaze estimation DNN training. We also show additional applications of our work,
which include facial image editing and gaze redirection.
| [
{
"created": "Fri, 31 May 2024 07:07:54 GMT",
"version": "v1"
}
] | 2024-06-03 | [
[
"Aranjuelo",
"Nerea",
""
],
[
"Huang",
"Siyu",
""
],
[
"Arganda-Carreras",
"Ignacio",
""
],
[
"Unzueta",
"Luis",
""
],
[
"Otaegui",
"Oihana",
""
],
[
"Pfister",
"Hanspeter",
""
],
[
"Wei",
"Donglai",
""
]
] |
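The gaze-aware conditioning at the heart of the framework can be illustrated with a toy PyTorch generator that concatenates a (pitch, yaw) gaze vector with the noise code; the architecture below is a stand-in, not the paper's compositional GAN:

import torch
import torch.nn as nn

class GazeConditionedGenerator(nn.Module):
    """Toy generator: noise code z plus a 2-D gaze vector -> flat image."""
    def __init__(self, z_dim: int = 128, img_dim: int = 64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + 2, 512), nn.ReLU(),
            nn.Linear(512, img_dim), nn.Tanh(),
        )

    def forward(self, z: torch.Tensor, gaze: torch.Tensor) -> torch.Tensor:
        # gaze: (B, 2) pitch/yaw; conditioning by simple concatenation
        return self.net(torch.cat([z, gaze], dim=1))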
2405.20705 | S\"oren Schleibaum | S\"oren Schleibaum, Lu Feng, Sarit Kraus, J\"org P. M\"uller | ADESSE: Advice Explanations in Complex Repeated Decision-Making
Environments | null | Proceedings of the Thirty-Third International Joint Conference on
Artificial Intelligence (2024) | 10.24963/ijcai.2024/875 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In the evolving landscape of human-centered AI, fostering a synergistic
relationship between humans and AI agents in decision-making processes stands
as a paramount challenge. This work considers a problem setup where an
intelligent agent comprising a neural network-based prediction component and a
deep reinforcement learning component provides advice to a human decision-maker
in complex repeated decision-making environments. Whether the human
decision-maker would follow the agent's advice depends on their beliefs and
trust in the agent and on their understanding of the advice itself. To this
end, we developed an approach named ADESSE to generate explanations about the
adviser agent to improve human trust and decision-making. Computational
experiments on a range of environments with varying model sizes demonstrate the
applicability and scalability of ADESSE. Furthermore, an interactive game-based
user study shows that participants were significantly more satisfied, achieved
a higher reward in the game, and took less time to select an action when
presented with explanations generated by ADESSE. These findings illuminate the
critical role of tailored, human-centered explanations in AI-assisted
decision-making.
| [
{
"created": "Fri, 31 May 2024 08:59:20 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Sep 2024 09:49:54 GMT",
"version": "v2"
}
] | 2024-09-11 | [
[
"Schleibaum",
"Sören",
""
],
[
"Feng",
"Lu",
""
],
[
"Kraus",
"Sarit",
""
],
[
"Müller",
"Jörg P.",
""
]
] |