| column | dtype | notes |
|---|---|---|
| id | string | length 10 |
| submitter | string | length 3 to 52 |
| authors | string | length 6 to 7.24k |
| title | string | length 12 to 217 |
| comments | string | length 1 to 446; contains nulls (⌀) |
| journal-ref | string | length 4 to 297 |
| doi | string | length 12 to 118; contains nulls (⌀) |
| report-no | string | 237 distinct values |
| categories | string | length 5 to 71 |
| license | string | 6 distinct values |
| abstract | string | length 90 to 3.26k |
| versions | list | 1 to 17 items |
| update_date | string | 969 distinct values |
| authors_parsed | sequence | 1 to 451 items |
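The records below follow this schema. As a quick orientation, here is a minimal sketch of how one record of such a dump might be loaded and inspected with the Hugging Face `datasets` library; the repository id is a placeholder assumption, not the actual dataset path.

```python
# Minimal sketch, assuming the dump is hosted on the Hugging Face Hub.
# "user/arxiv-metadata" is a hypothetical repository id.
from datasets import load_dataset

ds = load_dataset("user/arxiv-metadata", split="train")
record = ds[0]

print(record["id"], "-", record["title"])

# "versions" is a list of {"created", "version"} dicts; the last entry
# is the most recent revision of the paper.
latest = record["versions"][-1]
print("latest:", latest["version"], "created:", latest["created"])

# "authors_parsed" stores [last name, first name, suffix] triples.
for last, first, suffix in record["authors_parsed"]:
    print(f"{first} {last} {suffix}".strip())
```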
id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---
2404.12091 | Wu Ran | Wu Ran, Peirong Ma, Zhiquan He, Hao Ren, Hong Lu | Harnessing Joint Rain-/Detail-aware Representations to Eliminate
Intricate Rains | 21 pages, 14 figures | International Conference on Learning Representations 2024 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in image deraining have focused on training powerful models
on mixed multiple datasets comprising diverse rain types and backgrounds.
However, this approach tends to overlook the inherent differences among rainy
images, leading to suboptimal results. To overcome this limitation, we focus on
addressing various rainy images by delving into meaningful representations that
encapsulate both the rain and background components. Leveraging these
representations as instructive guidance, we put forth a Context-based
Instance-level Modulation (CoI-M) mechanism adept at efficiently modulating
CNN- or Transformer-based models. Furthermore, we devise a rain-/detail-aware
contrastive learning strategy to help extract joint rain-/detail-aware
representations. By integrating CoI-M with the rain-/detail-aware Contrastive
learning, we develop CoIC, an innovative and potent algorithm tailored for
training models on mixed datasets. Moreover, CoIC offers insight into modeling
relationships of datasets, quantitatively assessing the impact of rain and
details on restoration, and unveiling distinct behaviors of models given
diverse inputs. Extensive experiments validate the efficacy of CoIC in boosting
the deraining ability of CNN and Transformer models. CoIC also enhances the
deraining prowess remarkably when a real-world dataset is included.
| [
{
"created": "Thu, 18 Apr 2024 11:20:53 GMT",
"version": "v1"
}
] | 2024-04-19 | [
[
"Ran",
"Wu",
""
],
[
"Ma",
"Peirong",
""
],
[
"He",
"Zhiquan",
""
],
[
"Ren",
"Hao",
""
],
[
"Lu",
"Hong",
""
]
] |
2404.12143 | Hilde Weerts | Hilde Weerts, Rapha\"ele Xenidis, Fabien Tarissan, Henrik Palmer
Olsen, Mykola Pechenizkiy | The Neutrality Fallacy: When Algorithmic Fairness Interventions are
(Not) Positive Action | null | 2024 ACM Conference on Fairness, Accountability, and Transparency
(FAccT '24) | 10.1145/3630106.3659025 | null | cs.AI cs.CY | http://creativecommons.org/licenses/by/4.0/ | Various metrics and interventions have been developed to identify and
mitigate unfair outputs of machine learning systems. While individuals and
organizations have an obligation to avoid discrimination, the use of
fairness-aware machine learning interventions has also been described as
amounting to 'algorithmic positive action' under European Union (EU)
non-discrimination law. As the Court of Justice of the European Union has been
strict when it comes to assessing the lawfulness of positive action, this would
impose a significant legal burden on those wishing to implement fair-ml
interventions. In this paper, we propose that algorithmic fairness
interventions often should be interpreted as a means to prevent discrimination,
rather than a measure of positive action. Specifically, we suggest that this
category mistake can often be attributed to neutrality fallacies: faulty
assumptions regarding the neutrality of fairness-aware algorithmic
decision-making. Our findings raise the question of whether a negative
obligation to refrain from discrimination is sufficient in the context of
algorithmic decision-making. Consequently, we suggest moving away from a duty
to 'not do harm' towards a positive obligation to actively 'do no harm' as a
more adequate framework for algorithmic decision-making and fair-ml
interventions.
| [
{
"created": "Thu, 18 Apr 2024 12:44:35 GMT",
"version": "v1"
}
] | 2024-04-19 | [
[
"Weerts",
"Hilde",
""
],
[
"Xenidis",
"Raphaële",
""
],
[
"Tarissan",
"Fabien",
""
],
[
"Olsen",
"Henrik Palmer",
""
],
[
"Pechenizkiy",
"Mykola",
""
]
] |
2404.12240 | Lukas Rottkamp | Lukas Rottkamp, Matthias Schubert | A Time-Inhomogeneous Markov Model for Resource Availability under Sparse
Observations | 11 pages, long version of a paper published at 26th ACM SIGSPATIAL
International Conference on Advances in Geographic Information Systems
(SIGSPATIAL 2018) | Proceedings of the 26th ACM SIGSPATIAL International Conference on
Advances in Geographic Information Systems (pp. 460-463) 2018 | 10.1145/3274895.3274945 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate spatio-temporal information about the current situation is crucial
for smart city applications such as modern routing algorithms. Often, this
information describes the state of stationary resources, e.g. the availability
of parking bays, charging stations, or the number of people waiting for a
vehicle to pick them up near a given location. To exploit this kind of
information, predicting future states of the monitored resources is often
mandatory because a resource might change its state within the time until it is
needed. To train an accurate predictive model, it is often not possible to
obtain a continuous time series on the state of the resource. For example, the
information might be collected from traveling agents visiting the resource with
an irregular frequency. Thus, it is necessary to develop methods which work on
sparse observations for training and prediction. In this paper, we propose
time-inhomogeneous discrete Markov models to allow accurate prediction even
when the frequency of observation is very low. Our new model is able to blend
recent observations with historic data and also provide useful probabilistic
estimates for future states. Since resource availability in a city is
typically time-dependent, our Markov model is time-inhomogeneous and cyclic
within a predefined time interval. To train our model, we propose a modified
Baum-Welch algorithm. Evaluations on real-world datasets of parking bay
availability show that our new method indeed yields good results compared to
methods trained on complete data and to non-cyclic variants.
| [
{
"created": "Thu, 18 Apr 2024 15:00:59 GMT",
"version": "v1"
}
] | 2024-04-19 | [
[
"Rottkamp",
"Lukas",
""
],
[
"Schubert",
"Matthias",
""
]
] |
2404.12292 | Niklas Penzel | Niklas Penzel, Gideon Stein, Joachim Denzler | Reducing Bias in Pre-trained Models by Tuning while Penalizing Change | 12 pages, 12 figures, presented at VISAPP 2024 | Proceedings of the 19th International Joint Conference on Computer
Vision (2024), Imaging and Computer Graphics Theory and Applications - Volume
2: VISAPP, ISBN 978-989-758-679-8, ISSN 2184-4321, SciTePress, pages 90-101 | 10.5220/0012345800003660 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Deep models trained on large amounts of data often incorporate implicit
biases present during training time. If later such a bias is discovered during
inference or deployment, it is often necessary to acquire new data and retrain
the model. This behavior is especially problematic in critical areas such as
autonomous driving or medical decision-making. In these scenarios, new data is
often expensive and hard to come by. In this work, we present a method based on
change penalization that takes a pre-trained model and adapts the weights to
mitigate a previously detected bias. We achieve this by tuning a
zero-initialized copy of a frozen pre-trained network. Our method needs very
few examples, in extreme cases only a single one, that contradict the bias to
increase performance. Additionally, we propose an early stopping criterion to
modify baselines and reduce overfitting. We evaluate our approach on a
well-known bias in skin lesion classification and three other datasets from the
domain shift literature. We find that our approach works especially well with
very few images. Simple fine-tuning combined with our early stopping also leads
to performance benefits for a larger number of tuning samples.
| [
{
"created": "Thu, 18 Apr 2024 16:12:38 GMT",
"version": "v1"
}
] | 2024-04-19 | [
[
"Penzel",
"Niklas",
""
],
[
"Stein",
"Gideon",
""
],
[
"Denzler",
"Joachim",
""
]
] |
2404.12295 | Niklas Penzel | Tristan Piater, Niklas Penzel, Gideon Stein, Joachim Denzler | When Medical Imaging Met Self-Attention: A Love Story That Didn't Quite
Work Out | 10 pages, 2 figures, 5 tables, presented at VISAPP 2024 | Proceedings of the 19th International Joint Conference on Computer
Vision, Imaging and Computer Graphics Theory and Applications - Volume 2:
VISAPP (2024), ISBN 978-989-758-679-8, ISSN 2184-4321, SciTePress, pages
149-158 | 10.5220/0012382600003660 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | A substantial body of research has focused on developing systems that assist
medical professionals during labor-intensive early screening processes, many
based on convolutional deep-learning architectures. Recently, multiple studies
explored the application of so-called self-attention mechanisms in the vision
domain. These studies often report empirical improvements over fully
convolutional approaches on various datasets and tasks. To evaluate this trend
for medical imaging, we extend two widely adopted convolutional architectures
with different self-attention variants on two different medical datasets. With
this, we aim to specifically evaluate the possible advantages of additional
self-attention. We compare our models with similarly sized convolutional and
attention-based baselines and evaluate performance gains statistically.
Additionally, we investigate how including such layers changes the features
learned by these models during the training. Following a hyperparameter search,
and contrary to our expectations, we observe no significant improvement in
balanced accuracy over fully convolutional models. We also find that important
features, such as dermoscopic structures in skin lesion images, are still not
learned by employing self-attention. Finally, analyzing local explanations, we
confirm biased feature usage. We conclude that merely incorporating attention
is insufficient to surpass the performance of existing fully convolutional
methods.
| [
{
"created": "Thu, 18 Apr 2024 16:18:41 GMT",
"version": "v1"
}
] | 2024-04-19 | [
[
"Piater",
"Tristan",
""
],
[
"Penzel",
"Niklas",
""
],
[
"Stein",
"Gideon",
""
],
[
"Denzler",
"Joachim",
""
]
] |
2404.12341 | Yinzhu Jin | Yinzhu Jin, Matthew B. Dwyer, P. Thomas Fletcher | Measuring Feature Dependency of Neural Networks by Collapsing Feature
Dimensions in the Data Manifold | Accepted and published in International Symposium on Biomedical
Imaging (ISBI) 2024: https://ieeexplore.ieee.org/document/10635874 | in 2024 IEEE International Symposium on Biomedical Imaging (ISBI),
Athens, Greece, 2024, pp. 1-5 | 10.1109/ISBI56570.2024.10635874 | null | cs.LG cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper introduces a new technique to measure the feature dependency of
neural network models. The motivation is to better understand a model by
querying whether it is using information from human-understandable features,
e.g., anatomical shape, volume, or image texture. Our method is based on the
principle that if a model is dependent on a feature, then removal of that
feature should significantly harm its performance. A targeted feature is
"removed" by collapsing the dimension in the data distribution that corresponds
to that feature. We perform this by moving data points along the feature
dimension to a baseline feature value while staying on the data manifold, as
estimated by a deep generative model. Then we observe how the model's
performance changes on the modified test data set, with the target feature
dimension removed. We test our method on deep neural network models trained on
synthetic image data with known ground truth, an Alzheimer's disease prediction
task using MRI and hippocampus segmentations from the OASIS-3 dataset, and a
cell nuclei classification task using the Lizard dataset.
| [
{
"created": "Thu, 18 Apr 2024 17:10:18 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Oct 2024 21:43:23 GMT",
"version": "v2"
}
] | 2024-10-10 | [
[
"Jin",
"Yinzhu",
""
],
[
"Dwyer",
"Matthew B.",
""
],
[
"Fletcher",
"P. Thomas",
""
]
] |
2404.12361 | Trevor Chan | Trevor J. Chan, Chamith S. Rajapakse | Learning the Domain Specific Inverse NUFFT for Accelerated Spiral MRI
using Diffusion Models | null | 2024 IEEE International Symposium on Biomedical Imaging (ISBI) | 10.1109/ISBI56570.2024.10635304 | null | cs.AI physics.med-ph | http://creativecommons.org/licenses/by/4.0/ | Deep learning methods for accelerated MRI achieve state-of-the-art results
but largely ignore additional speedups possible with non-Cartesian sampling
trajectories. To address this gap, we created a generative diffusion
model-based reconstruction algorithm for multi-coil highly undersampled spiral
MRI. This model uses conditioning during training as well as frequency-based
guidance to ensure consistency between images and measurements. Evaluated on
retrospective data, we show high quality (structural similarity > 0.87) in
reconstructed images with ultrafast scan times (0.02 seconds for a 2D image).
We use this algorithm to identify a set of optimal variable-density spiral
trajectories and show large improvements in image quality compared to
conventional reconstruction using the non-uniform fast Fourier transform. By
combining efficient spiral sampling trajectories, multicoil imaging, and deep
learning reconstruction, these methods could enable the extremely high
acceleration factors needed for real-time 3D imaging.
| [
{
"created": "Thu, 18 Apr 2024 17:40:23 GMT",
"version": "v1"
},
{
"created": "Fri, 10 May 2024 18:47:01 GMT",
"version": "v2"
}
] | 2024-10-02 | [
[
"Chan",
"Trevor J.",
""
],
[
"Rajapakse",
"Chamith S.",
""
]
] |
2404.12415 | Loganathan Girija Divyanth | Shubhadip Dasgupta, Satwik Pate, Divya Rathore, L.G. Divyanth, Ayan
Das, Anshuman Nayak, Subhadip Dey, Asim Biswas, David C. Weindorf, Bin Li,
Sergio Henrique Godinho Silva, Bruno Teixeira Ribeiro, Sanjay Srivastava,
Somsubhra Chakraborty | Prediction of soil fertility parameters using USB-microscope imagery and
portable X-ray fluorescence spectrometry | Published in 'Soil Advances' | Soil Advances, Volume 2, 2024, 100016 | 10.1016/j.soilad.2024.100016 | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | This study investigated the use of portable X-ray fluorescence (PXRF)
spectrometry and soil image analysis for rapid soil fertility assessment, with
a focus on key indicators such as available boron (B), organic carbon (OC),
available manganese (Mn), available sulfur (S), and the sulfur availability
index (SAI). A total of 1,133 soil samples from diverse agro-climatic zones in
Eastern India were analyzed. The research integrated color and texture features
from microscopic soil images, PXRF data, and auxiliary soil variables (AVs)
using a Random Forest model. Results showed that combining image features (IFs)
with AVs significantly improved prediction accuracy for available B (R2 = 0.80)
and OC (R2 = 0.88). A data fusion approach, incorporating IFs, AVs, and PXRF
data, further enhanced predictions for available Mn and SAI, with R2 values of
0.72 and 0.70, respectively. The study highlights the potential of integrating
these technologies to offer rapid, cost-effective soil testing methods, paving
the way for more advanced predictive models and a deeper understanding of soil
fertility. Future work should explore the application of deep learning models
on a larger dataset, incorporating soils from a wider range of agro-climatic
zones under field conditions.
| [
{
"created": "Wed, 17 Apr 2024 17:57:20 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Sep 2024 05:38:13 GMT",
"version": "v2"
}
] | 2024-09-06 | [
[
"Dasgupta",
"Shubhadip",
""
],
[
"Pate",
"Satwik",
""
],
[
"Rathore",
"Divya",
""
],
[
"Divyanth",
"L. G.",
""
],
[
"Das",
"Ayan",
""
],
[
"Nayak",
"Anshuman",
""
],
[
"Dey",
"Subhadip",
""
],
[
"Biswas",
"Asim",
""
],
[
"Weindorf",
"David C.",
""
],
[
"Li",
"Bin",
""
],
[
"Silva",
"Sergio Henrique Godinho",
""
],
[
"Ribeiro",
"Bruno Teixeira",
""
],
[
"Srivastava",
"Sanjay",
""
],
[
"Chakraborty",
"Somsubhra",
""
]
] |
2404.12489 | Christopher Bryant | Kelvin Wey Han Chan, Christopher Bryant, Li Nguyen, Andrew Caines,
Zheng Yuan | Grammatical Error Correction for Code-Switched Sentences by Learners of
English | null | Proceedings of the 2024 Joint International Conference on
Computational Linguistics | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Code-switching (CSW) is a common phenomenon among multilingual speakers where
multiple languages are used in a single discourse or utterance. Mixed language
utterances may still contain grammatical errors; however, most existing
Grammatical Error Correction (GEC) systems have been trained on monolingual data
and not developed with CSW in mind. In this work, we conduct the first
exploration into the use of GEC systems on CSW text. Through this exploration,
we propose a novel method of generating synthetic CSW GEC datasets by
translating different spans of text within existing GEC corpora. We then
investigate different methods of selecting these spans based on CSW ratio,
switch-point factor and linguistic constraints, and identify how they affect
the performance of GEC systems on CSW text. Our best model achieves an average
increase of 1.57 $F_{0.5}$ across 3 CSW test sets (English-Chinese,
English-Korean and English-Japanese) without affecting the model's performance
on a monolingual dataset. We furthermore discovered that models trained on one
CSW language generalise relatively well to other typologically similar CSW
languages.
| [
{
"created": "Thu, 18 Apr 2024 20:05:30 GMT",
"version": "v1"
},
{
"created": "Mon, 6 May 2024 22:27:36 GMT",
"version": "v2"
}
] | 2024-08-13 | [
[
"Chan",
"Kelvin Wey Han",
""
],
[
"Bryant",
"Christopher",
""
],
[
"Nguyen",
"Li",
""
],
[
"Caines",
"Andrew",
""
],
[
"Yuan",
"Zheng",
""
]
] |
2404.12631 | Solvi Arnold | Solvi Arnold, Reiji Suzuki, Takaya Arita, Kimitoshi Yamazaki | Breaching the Bottleneck: Evolutionary Transition from Reward-Driven
Learning to Reward-Agnostic Domain-Adapted Learning in Neuromodulated Neural
Nets | Camera ready version. 9 pages, 5 figures | ALIFE 2024: Proceedings of the 2024 Artificial Life Conference | 10.1162/isal_a_00725 | null | cs.NE cs.AI | http://creativecommons.org/licenses/by/4.0/ | Advanced biological intelligence learns efficiently from an information-rich
stream of stimulus information, even when feedback on behaviour quality is
sparse or absent. Such learning exploits implicit assumptions about task
domains. We refer to such learning as Domain-Adapted Learning (DAL). In
contrast, AI learning algorithms rely on explicit externally provided measures
of behaviour quality to acquire fit behaviour. This imposes an information
bottleneck that precludes learning from diverse non-reward stimulus
information, limiting learning efficiency. We consider the question of how
biological evolution circumvents this bottleneck to produce DAL. We propose
that species first evolve the ability to learn from reward signals, providing
inefficient (bottlenecked) but broad adaptivity. From there, integration of
non-reward information into the learning process can proceed via gradual
accumulation of biases induced by such information on specific task domains.
This scenario provides a biologically plausible pathway towards
bottleneck-free, domain-adapted learning. Focusing on the second phase of this
scenario, we set up a population of NNs with reward-driven learning modelled as
Reinforcement Learning (A2C), and allow evolution to improve learning
efficiency by integrating non-reward information into the learning process
using a neuromodulatory update mechanism. On a navigation task in continuous 2D
space, evolved DAL agents show a 300-fold increase in learning speed compared
to pure RL agents. Evolution is found to eliminate reliance on reward
information altogether, allowing DAL agents to learn from non-reward
information exclusively, using local neuromodulation-based connection weight
updates only. Code available at github.com/aislab/dal.
| [
{
"created": "Fri, 19 Apr 2024 05:14:47 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Aug 2024 07:04:42 GMT",
"version": "v2"
}
] | 2024-08-05 | [
[
"Arnold",
"Solvi",
""
],
[
"Suzuki",
"Reiji",
""
],
[
"Arita",
"Takaya",
""
],
[
"Yamazaki",
"Kimitoshi",
""
]
] |
2404.12691 | William Brannon | Shayne Longpre, Robert Mahari, Naana Obeng-Marnu, William Brannon,
Tobin South, Katy Gero, Sandy Pentland, Jad Kabbara | Data Authenticity, Consent, & Provenance for AI are all broken: what
will it take to fix them? | ICML 2024 camera-ready version (Spotlight paper). 9 pages, 2 tables | Proceedings of ICML 2024, in PMLR 235:32711-32725. URL:
https://proceedings.mlr.press/v235/longpre24b.html | null | null | cs.AI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | New capabilities in foundation models are owed in large part to massive,
widely-sourced, and under-documented training data collections. Existing
practices in data collection have led to challenges in tracing authenticity,
verifying consent, preserving privacy, addressing representation and bias,
respecting copyright, and overall developing ethical and trustworthy foundation
models. In response, regulation is emphasizing the need for training data
transparency to understand foundation models' limitations. Based on a
large-scale analysis of the foundation model training data landscape and
existing solutions, we identify the missing infrastructure to facilitate
responsible foundation model development practices. We examine the current
shortcomings of common tools for tracing data authenticity, consent, and
documentation, and outline how policymakers, developers, and data creators can
facilitate responsible foundation model development by adopting universal data
provenance standards.
| [
{
"created": "Fri, 19 Apr 2024 07:42:35 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Aug 2024 21:20:12 GMT",
"version": "v2"
}
] | 2024-09-04 | [
[
"Longpre",
"Shayne",
""
],
[
"Mahari",
"Robert",
""
],
[
"Obeng-Marnu",
"Naana",
""
],
[
"Brannon",
"William",
""
],
[
"South",
"Tobin",
""
],
[
"Gero",
"Katy",
""
],
[
"Pentland",
"Sandy",
""
],
[
"Kabbara",
"Jad",
""
]
] |
2404.12718 | Hisashi Shimodaira | Hisashi Shimodaira | Improving Prediction Accuracy of Semantic Segmentation Methods Using
Convolutional Autoencoder Based Pre-processing Layers | The changes from the previous version: References [14] and [17] are
added on page 2372. Summary of results and discussion (6) are added on page
2383. The new version has been reviewed by the AAIML Journal. Reviewer 1: The
manuscript presents a solid contribution and is well written. Reviewer 2:
The work is novel and the results are promising | Advances in Artificial Intelligence and Machine Learning; Research
4 (2) 2369-2386; Published 29-06-2024 | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a method to improve prediction accuracy of semantic
segmentation methods as follows: (1) construct a neural network that has
pre-processing layers based on a convolutional autoencoder ahead of a semantic
segmentation network, and (2) train the entire network initialized by the
weights of the pre-trained autoencoder. We applied this method to the fully
convolutional network (FCN) and experimentally compared its prediction accuracy
on the cityscapes dataset. The Mean IoU of the proposed target model with the
He normal initialization is 18.7% higher than that of FCN with the He normal
initialization. In addition, those of the modified models of the target model
are significantly higher than that of FCN with the He normal initialization.
The accuracy and loss curves during training showed that these results stem
from improved generalization ability. All of these
results provide strong evidence that the proposed method is significantly
effective in improving the prediction accuracy of FCN. The proposed method has
the following features: it is comparatively simple, whereas the effect on
improving the generalization ability and prediction accuracy of FCN is
significant; the increase in the number of parameters by using it is very
small, and that in the computation time is substantially large. In principle,
the proposed method can be applied to other semantic segmentation methods. For
semantic segmentation, at present, there is no effective way to improve the
prediction accuracy of existing methods. None have published a method which is
the same as or similar to our method and none have used such a method in
practice. Therefore, we believe that our method is useful in practice and
worthy of being widely known and used.
| [
{
"created": "Fri, 19 Apr 2024 08:58:53 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Jul 2024 08:33:59 GMT",
"version": "v2"
}
] | 2024-07-10 | [
[
"Shimodaira",
"Hisashi",
""
]
] |
2404.12810 | Marharyta Domnich | Marharyta Domnich, and Raul Vicente | Enhancing Counterfactual Explanation Search with Diffusion Distance and
Directional Coherence | This work has been accepted to be presented to The 2nd World
Conference on eXplainable Artificial Intelligence (xAI 2024), July 17-19,
2024 - Valletta, Malta | In: Longo, L., Lapuschkin, S., Seifert, C. (eds) Explainable
Artificial Intelligence. xAI 2024. Communications in Computer and Information
Science, vol 2155. Springer, Cham | 10.1007/978-3-031-63800-8_4 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | A pressing issue in the adoption of AI models is the increasing demand for
more human-centric explanations of their predictions. To advance towards more
human-centric explanations, understanding how humans produce and select
explanations has been beneficial. In this work, inspired by insights from human
cognition, we propose and test the incorporation of two novel biases to enhance
the search for effective counterfactual explanations. Central to our
methodology is the application of diffusion distance, which emphasizes data
connectivity and actionability in the search for feasible counterfactual
explanations. In particular, diffusion distance gives more weight to those
points that are interconnected by numerous short paths. This
approach brings closely connected points nearer to each other, identifying a
feasible path between them. We also introduce a directional coherence term that
allows the expression of a preference for the alignment between the joint and
marginal directional changes in feature space to reach a counterfactual. This
term enables the generation of counterfactual explanations that align with a
set of marginal predictions based on expectations of how the outcome of the
model varies by changing one feature at a time. We evaluate our method, named
Coherent Directional Counterfactual Explainer (CoDiCE), and the impact of the
two novel biases against existing methods such as DiCE, FACE, Prototypes, and
Growing Spheres. Through a series of ablation experiments on both synthetic and
real datasets with continuous and mixed-type features, we demonstrate the
effectiveness of our method.
| [
{
"created": "Fri, 19 Apr 2024 11:47:17 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Jul 2024 08:00:44 GMT",
"version": "v2"
}
] | 2024-07-26 | [
[
"Domnich",
"Marharyta",
""
],
[
"Vicente",
"Raul",
""
]
] |
2404.12832 | Marharyta Domnich | Dmytro Shvetsov, Joonas Ariva, Marharyta Domnich, Raul Vicente, and
Dmytro Fishman | COIN: Counterfactual inpainting for weakly supervised semantic
segmentation for medical images | This work has been accepted to be presented to The 2nd World
Conference on eXplainable Artificial Intelligence (xAI 2024), July 17-19,
2024 - Valletta, Malta | In: Longo, L., Lapuschkin, S., Seifert, C. (eds) Explainable
Artificial Intelligence. xAI 2024. Communications in Computer and Information
Science, vol 2155. Springer, Cham | 10.1007/978-3-031-63800-8_3 | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Deep learning is dramatically transforming the field of medical imaging and
radiology, enabling the identification of pathologies in medical images,
including computed tomography (CT) and X-ray scans. However, the performance of
deep learning models, particularly in segmentation tasks, is often limited by
the need for extensive annotated datasets. To address this challenge, the
capabilities of weakly supervised semantic segmentation are explored through
the lens of Explainable AI and the generation of counterfactual explanations.
The scope of this research is the development of a novel counterfactual inpainting
approach (COIN) that flips the predicted classification label from abnormal to
normal by using a generative model. For instance, if the classifier deems an
input medical image X as abnormal, indicating the presence of a pathology, the
generative model aims to inpaint the abnormal region, thus reversing the
classifier's original prediction label. The approach enables us to produce
precise segmentations for pathologies without depending on pre-existing
segmentation masks. Crucially, image-level labels are utilized, which are
substantially easier to acquire than detailed segmentation masks. The
effectiveness of the method is demonstrated by segmenting synthetic targets and
actual kidney tumors from CT images acquired from Tartu University Hospital in
Estonia. The findings indicate that COIN greatly surpasses established
attribution methods, such as RISE, ScoreCAM, and LayerCAM, as well as an
alternative counterfactual explanation method introduced by Singla et al. This
evidence suggests that COIN is a promising approach for semantic segmentation
of tumors in CT images, and presents a step forward in making deep learning
applications more accessible and effective in healthcare, where annotated data
is scarce.
| [
{
"created": "Fri, 19 Apr 2024 12:09:49 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Jul 2024 08:09:12 GMT",
"version": "v2"
}
] | 2024-07-26 | [
[
"Shvetsov",
"Dmytro",
""
],
[
"Ariva",
"Joonas",
""
],
[
"Domnich",
"Marharyta",
""
],
[
"Vicente",
"Raul",
""
],
[
"Fishman",
"Dmytro",
""
]
] |
2404.12845 | Aleksei Dorkin | Aleksei Dorkin and Kairit Sirts | TartuNLP @ SIGTYP 2024 Shared Task: Adapting XLM-RoBERTa for Ancient and
Historical Languages | 11 pages, 3 figures | Proceedings of the 6th Workshop on Research in Computational
Linguistic Typology and Multilingual NLP, pp. 120-130, March 2024 | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We present our submission to the unconstrained subtask of the SIGTYP 2024
Shared Task on Word Embedding Evaluation for Ancient and Historical Languages
for morphological annotation, POS-tagging, lemmatization, character- and
word-level gap-filling. We developed a simple, uniform, and computationally
lightweight approach based on the adapters framework using parameter-efficient
fine-tuning. We applied the same adapter-based approach uniformly to all tasks
and 16 languages by fine-tuning stacked language- and task-specific adapters.
Our submission obtained an overall second place out of three submissions, with
the first place in word-level gap-filling. Our results show the feasibility of
adapting language models pre-trained on modern languages to historical and
ancient languages via adapter training.
| [
{
"created": "Fri, 19 Apr 2024 12:26:28 GMT",
"version": "v1"
}
] | 2024-04-22 | [
[
"Dorkin",
"Aleksei",
""
],
[
"Sirts",
"Kairit",
""
]
] |
2404.12886 | Zeyu Ling | Zeyu Ling, Bo Han, Yongkang Wongkan, Han Lin, Mohan Kankanhalli,
Weidong Geng | MCM: Multi-condition Motion Synthesis Framework | null | International Joint Conference on Artificial Intelligence 2024 | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conditional human motion synthesis (HMS) aims to generate human motion
sequences that conform to specific conditions. Text and audio represent the two
predominant modalities employed as HMS control conditions. While existing
research has primarily focused on single conditions, the multi-condition human
motion synthesis remains underexplored. In this study, we propose a
multi-condition HMS framework, termed MCM, based on a dual-branch structure
composed of a main branch and a control branch. This framework effectively
extends the applicability of the diffusion model, which is initially predicated
solely on textual conditions, to auditory conditions. This extension
encompasses both music-to-dance and co-speech HMS while preserving the
intrinsic quality of motion and the capabilities for semantic association
inherent in the original model. Furthermore, we propose the implementation of a
Transformer-based diffusion model, designated as MWNet, as the main branch.
This model adeptly apprehends the spatial intricacies and inter-joint
correlations inherent in motion sequences, facilitated by the integration of
multi-wise self-attention modules. Extensive experiments show that our method
achieves competitive results in single-condition and multi-condition HMS tasks.
| [
{
"created": "Fri, 19 Apr 2024 13:40:25 GMT",
"version": "v1"
}
] | 2024-04-22 | [
[
"Ling",
"Zeyu",
""
],
[
"Han",
"Bo",
""
],
[
"Wongkan",
"Yongkang",
""
],
[
"Lin",
"Han",
""
],
[
"Kankanhalli",
"Mohan",
""
],
[
"Geng",
"Weidong",
""
]
] |
2404.13024 | Ahan Shabanov | Ahan Shabanov, Shrisudhan Govindarajan, Cody Reading, Lily Goli,
Daniel Rebain, Kwang Moo Yi, Andrea Tagliasacchi | BANF: Band-limited Neural Fields for Levels of Detail Reconstruction | Project Page: https://theialab.github.io/banf | Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR), 2024, pp. 20571-20580 | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Largely due to their implicit nature, neural fields lack a direct mechanism
for filtering, as Fourier analysis from discrete signal processing is not
directly applicable to these representations. Effective filtering of neural
fields is critical to enable level-of-detail processing in downstream
applications, and support operations that involve sampling the field on regular
grids (e.g. marching cubes). Existing methods that attempt to decompose neural
fields in the frequency domain either resort to heuristics or require extensive
modifications to the neural field architecture. We show that via a simple
modification, one can obtain neural fields that are low-pass filtered, and in
turn show how this can be exploited to obtain a frequency decomposition of the
entire signal. We demonstrate the validity of our technique by investigating
level-of-detail reconstruction, and showing how coarser representations can be
computed effectively.
| [
{
"created": "Fri, 19 Apr 2024 17:39:50 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Jul 2024 00:29:47 GMT",
"version": "v2"
}
] | 2024-07-22 | [
[
"Shabanov",
"Ahan",
""
],
[
"Govindarajan",
"Shrisudhan",
""
],
[
"Reading",
"Cody",
""
],
[
"Goli",
"Lily",
""
],
[
"Rebain",
"Daniel",
""
],
[
"Yi",
"Kwang Moo",
""
],
[
"Tagliasacchi",
"Andrea",
""
]
] |
2404.13071 | Edward Chang | Edward Y. Chang | Modeling Emotions and Ethics with Large Language Models | 8 pages, 4 figures, 3 tables | IEEE MIPR 2024 | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores the integration of human-like emotions and ethical
considerations into Large Language Models (LLMs). We first model eight
fundamental human emotions, presented as opposing pairs, and employ
collaborative LLMs to reinterpret and express these emotions across a spectrum
of intensity. Our focus extends to embedding a latent ethical dimension within
LLMs, guided by a novel self-supervised learning algorithm with human feedback
(SSHF). This approach enables LLMs to perform self-evaluations and adjustments
concerning ethical guidelines, enhancing their capability to generate content
that is not only emotionally resonant but also ethically aligned. The
methodologies and case studies presented herein illustrate the potential of
LLMs to transcend mere text and image generation, venturing into the realms of
empathetic interaction and principled decision-making, thereby setting a new
precedent in the development of emotionally aware and ethically conscious AI
systems.
| [
{
"created": "Mon, 15 Apr 2024 05:30:26 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Jun 2024 04:36:08 GMT",
"version": "v2"
}
] | 2024-06-26 | [
[
"Chang",
"Edward Y.",
""
]
] |
2404.13077 | James Snyder Jr | Yilin Gao, Sai Kumar Arava, Yancheng Li and James W. Snyder Jr | Improving the Capabilities of Large Language Model Based Marketing
Analytics Copilots With Semantic Search And Fine-Tuning | 16 pages, 5 figures, presented at the 2nd International Conference on
NLP & AI (NLPAI 2024) | International Journal on Cybernetics & Informatics (IJCI), vol.
13, no. 2, pp. 15-31, Apr. 2024 | 10.5121/ijci.2024.130202 | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial intelligence (AI) is widely deployed to solve problems related to
marketing attribution and budget optimization. However, AI models can be quite
complex, and it can be difficult to understand model workings and insights
without extensive implementation teams. In principle, recently developed large
language models (LLMs), like GPT-4, can be deployed to provide marketing
insights, reducing the time and effort required to make critical decisions. In
practice, there are substantial challenges that need to be overcome to reliably
use such models. We focus on domain-specific question-answering, SQL generation
needed for data retrieval, and tabular analysis and show how a combination of
semantic search, prompt engineering, and fine-tuning can be applied to
dramatically improve the ability of LLMs to execute these tasks accurately. We
compare both proprietary models, like GPT-4, and open-source models, like
Llama-2-70b, as well as various embedding methods. These models are tested on
sample use cases specific to marketing mix modeling and attribution.
| [
{
"created": "Tue, 16 Apr 2024 03:39:16 GMT",
"version": "v1"
}
] | 2024-04-23 | [
[
"Gao",
"Yilin",
""
],
[
"Arava",
"Sai Kumar",
""
],
[
"Li",
"Yancheng",
""
],
[
"Snyder",
"James W.",
"Jr"
]
] |
2404.13099 | Mohit Gupta | Avinash Anand, Mohit Gupta, Kritarth Prasad, Navya Singla, Sanjana
Sanjeev, Jatin Kumar, Adarsh Raj Shivam, Rajiv Ratn Shah | Mathify: Evaluating Large Language Models on Mathematical Problem
Solving Tasks | 10 pages, 3 figures, NeurIPS 2023 Workshop on Generative AI for
Education (GAIED) | NeurIPS 2023 Workshop on Generative AI for Education (GAIED) | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | The rapid progress in the field of natural language processing (NLP) systems
and the expansion of large language models (LLMs) have opened up numerous
opportunities in the field of education and instructional methods. These
advancements offer the potential for tailored learning experiences and
immediate feedback, all delivered through accessible and cost-effective
services. One notable application area for this technological advancement is in
the realm of solving mathematical problems. Mathematical problem-solving not
only requires the ability to decipher complex problem statements but also the
skill to perform precise arithmetic calculations at each step of the
problem-solving process. However, the evaluation of the arithmetic capabilities
of large language models remains an area that has received relatively little
attention. In response, we introduce an extensive mathematics dataset called
"MathQuest" sourced from the 11th and 12th standard Mathematics NCERT
textbooks. This dataset encompasses mathematical challenges of varying
complexity and covers a wide range of mathematical concepts. Utilizing this
dataset, we conduct fine-tuning experiments with three prominent LLMs: LLaMA-2,
WizardMath, and MAmmoTH. These fine-tuned models serve as benchmarks for
evaluating their performance on our dataset. Our experiments reveal that among
the three models, MAmmoTH-13B emerges as the most proficient, achieving the
highest level of competence in solving the presented mathematical problems.
Consequently, MAmmoTH-13B establishes itself as a robust and dependable
benchmark for addressing NCERT mathematics problems.
| [
{
"created": "Fri, 19 Apr 2024 08:45:42 GMT",
"version": "v1"
}
] | 2024-04-23 | [
[
"Anand",
"Avinash",
""
],
[
"Gupta",
"Mohit",
""
],
[
"Prasad",
"Kritarth",
""
],
[
"Singla",
"Navya",
""
],
[
"Sanjeev",
"Sanjana",
""
],
[
"Kumar",
"Jatin",
""
],
[
"Shivam",
"Adarsh Raj",
""
],
[
"Shah",
"Rajiv Ratn",
""
]
] |
2404.13108 | Marek Wodzinski | Marek Wodzinski, Niccol\`o Marini, Manfredo Atzori, Henning M\"uller | RegWSI: Whole Slide Image Registration using Combined Deep Feature- and
Intensity-Based Methods: Winner of the ACROBAT 2023 Challenge | null | Computer Methods and Programs in Biomedicine, Vol. 250, 2024 | 10.1016/j.cmpb.2024.108187 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The automatic registration of differently stained whole slide images (WSIs)
is crucial for improving diagnosis and prognosis by fusing complementary
information emerging from different visible structures. It is also useful to
quickly transfer annotations between consecutive or restained slides, thus
significantly reducing the annotation time and associated costs. Nevertheless,
the slide preparation is different for each stain and the tissue undergoes
complex and large deformations. Therefore, a robust, efficient, and accurate
registration method is highly desired by the scientific community and hospitals
specializing in digital pathology. We propose a two-step hybrid method
consisting of (i) a deep learning- and feature-based initial alignment algorithm,
and (ii) intensity-based nonrigid registration using the instance optimization.
The proposed method does not require any fine-tuning to a particular dataset
and can be used directly for any desired tissue type and stain. The method
scored 1st place in the ACROBAT 2023 challenge. We evaluated it using three open
datasets: (i) ANHIR, (ii) ACROBAT, and (iii) HyReCo, and performed several
ablation studies concerning the resolution used for registration and the
initial alignment robustness and stability. The method achieves the most
accurate results for the ACROBAT dataset, the cell-level registration accuracy
for the restained slides from the HyReCo dataset, and is among the best methods
evaluated on the ANHIR dataset. The method does not require any fine-tuning for
new datasets and can be used out-of-the-box for other types of microscopic
images. The method is incorporated into the DeeperHistReg framework, allowing
others to directly use it to register, transform, and save the WSIs at any
desired pyramid level. The proposed method is a significant contribution to
WSI registration, thus advancing the field of digital pathology.
| [
{
"created": "Fri, 19 Apr 2024 16:19:30 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Apr 2024 10:10:52 GMT",
"version": "v2"
}
] | 2024-05-22 | [
[
"Wodzinski",
"Marek",
""
],
[
"Marini",
"Niccolò",
""
],
[
"Atzori",
"Manfredo",
""
],
[
"Müller",
"Henning",
""
]
] |
2404.13353 | Pengzhi Li | Pengzhi Li, Baijuan Li | Generating Daylight-driven Architectural Design via Diffusion Models | Project page: https://zrealli.github.io/DDADesign/ | CVPR 2024 Workshop | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In recent years, the rapid development of large-scale models has opened up new
possibilities for interdisciplinary fields such as architecture. In this paper,
we present a novel daylight-driven AI-aided architectural design method.
Firstly, we formulate a massing-model generation method that quickly produces
architectural massing models from random parameters. Subsequently, we
integrate a daylight-driven facade design strategy, accurately determining
window layouts and applying them to the massing models. Finally, we seamlessly
combine a large-scale language model with a text-to-image model, enhancing the
efficiency of generating visual architectural design renderings. Experimental
results demonstrate that our approach supports architects' creative
inspirations and pioneers novel avenues for architectural design development.
Project page: https://zrealli.github.io/DDADesign/.
| [
{
"created": "Sat, 20 Apr 2024 11:28:14 GMT",
"version": "v1"
}
] | 2024-04-23 | [
[
"Li",
"Pengzhi",
""
],
[
"Li",
"Baijuan",
""
]
] |
2404.13421 | Michael Duchesne | Michael Duchesne, Kaiwen Zhang, Chamseddine Talhi | MultiConfederated Learning: Inclusive Non-IID Data handling with
Decentralized Federated Learning | null | Proceedings of the 39th ACM/SIGAPP Symposium on Applied Computing,
SAC '24, 1587-1595, April 2024. ACM | 10.1145/3605098.3636000 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated Learning (FL) has emerged as a prominent privacy-preserving
technique for enabling use cases like confidential clinical machine learning.
FL operates by aggregating models trained by remote devices which own the
data. Thus, FL enables the training of powerful global models using
crowd-sourced data from a large number of learners, without compromising their
privacy. However, the aggregating server is a single point of failure when
generating the global model. Moreover, the performance of the model suffers
when the data is not independent and identically distributed (non-IID data) on
all remote devices. This leads to vastly different models being aggregated,
which can reduce the performance by as much as 50% in certain scenarios.
In this paper, we seek to address the aforementioned issues while retaining
the benefits of FL. We propose MultiConfederated Learning: a decentralized FL
framework which is designed to handle non-IID data. Unlike traditional FL,
MultiConfederated Learning will maintain multiple models in parallel (instead
of a single global model) to help with convergence when the data is non-IID.
With the help of transfer learning, learners can converge to fewer models. In
order to increase adaptability, learners are allowed to choose which updates to
aggregate from their peers.
| [
{
"created": "Sat, 20 Apr 2024 16:38:26 GMT",
"version": "v1"
}
] | 2024-04-23 | [
[
"Duchesne",
"Michael",
""
],
[
"Zhang",
"Kaiwen",
""
],
[
"Talhi",
"Chamseddine",
""
]
] |
2404.13439 | Sefika Efeoglu | Sefika Efeoglu and Adrian Paschke | Fine-Grained Named Entities for Corona News | Published at SWAT4HCLS 2023: The 14th International Conference on
Semantic Web Applications and Tools for Health Care and Life Sciences | CEUR-WS 2023 | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Information resources such as newspapers have produced unstructured text data
in various languages related to the corona outbreak since December 2019.
Analyzing these unstructured texts is time-consuming without representing them
in a structured format; therefore, representing them in a structured format is
crucial. An information extraction pipeline with essential tasks -- named
entity tagging and relation extraction -- to accomplish this goal might be
applied to these texts. This study proposes a data annotation pipeline to
generate training data from corona news articles, including generic and
domain-specific entities. Named entity recognition models are trained on this
annotated corpus and then evaluated on test sentences manually annotated by
domain experts, assessing the performance of the trained models. The code base and
demonstration are available at https://github.com/sefeoglu/coronanews-ner.git.
| [
{
"created": "Sat, 20 Apr 2024 18:22:49 GMT",
"version": "v1"
}
] | 2024-04-25 | [
[
"Efeoglu",
"Sefika",
""
],
[
"Paschke",
"Adrian",
""
]
] |
2404.13454 | Michael Bidollahkhani | Michael Bidollahkhani, Julian M. Kunkel | Revolutionizing System Reliability: The Role of AI in Predictive
Maintenance Strategies | Accepted, published and presented for the IARIA CLOUDCOMP2024
Conference of Venice, Italy | In Proceedings of the IARIA CloudComputing 2024 Conference (pp.
1-9). Venice, Italy. ISSN: 2308-4294. ISBN: 978-1-68558-156-5 | null | null | cs.AI cs.PF cs.SY eess.SY | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The landscape of maintenance in distributed systems is rapidly evolving with
the integration of Artificial Intelligence (AI). Moreover, as the complexity of
computing continuum systems intensifies, the role of AI in predictive
maintenance (Pd.M.) becomes increasingly pivotal. This paper presents a
comprehensive survey of the current state of Pd.M. in the computing continuum,
with a focus on the combination of scalable AI technologies. Recognizing the
limitations of traditional maintenance practices in the face of increasingly
complex and heterogeneous computing continuum systems, the study explores how
AI, especially machine learning and neural networks, is being used to enhance
Pd.M. strategies. The survey encompasses a thorough review of existing
literature, highlighting key advancements, methodologies, and case studies in
the field. It critically examines the role of AI in improving prediction
accuracy for system failures and in optimizing maintenance schedules, thereby
contributing to reduced downtime and enhanced system longevity. By synthesizing
findings from the latest advancements in the field, the article provides
insights into the effectiveness and challenges of implementing AI-driven
predictive maintenance. It underscores the evolution of maintenance practices
in response to technological advancements and the growing complexity of
computing continuum systems. The conclusions drawn from this survey are
instrumental for practitioners and researchers in understanding the current
landscape and future directions of Pd.M. in distributed systems. It emphasizes
the need for continued research and development in this area, pointing towards
a trend of more intelligent, efficient, and cost-effective maintenance
solutions in the era of AI.
| [
{
"created": "Sat, 20 Apr 2024 19:31:05 GMT",
"version": "v1"
}
] | 2024-04-23 | [
[
"Bidollahkhani",
"Michael",
""
],
[
"Kunkel",
"Julian M.",
""
]
] |
2404.13515 | Yuxuan Zhu | Yuxuan Zhu, Jiachen Liu, Mosharaf Chowdhury, Fan Lai | FedTrans: Efficient Federated Learning via Multi-Model Transformation | null | MLSys (2024) | null | null | cs.LG cs.AI cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated learning (FL) aims to train machine learning (ML) models across
potentially millions of edge client devices. Yet, training and customizing
models for FL clients is notoriously challenging due to the heterogeneity of
client data, device capabilities, and the massive scale of clients, making
individualized model exploration prohibitively expensive. State-of-the-art FL
solutions personalize a globally trained model or concurrently train multiple
models, but they often incur suboptimal model accuracy and huge training costs.
In this paper, we introduce FedTrans, a multi-model FL training framework
that automatically produces and trains high-accuracy, hardware-compatible
models for individual clients at scale. FedTrans begins with a basic global
model, identifies accuracy bottlenecks in model architectures during training,
and then employs model transformation to derive new models for heterogeneous
clients on the fly. It judiciously assigns models to individual clients while
performing soft aggregation on multi-model updates to minimize total training
costs. Our evaluations using realistic settings show that FedTrans improves
individual client model accuracy by 14% - 72% while slashing training costs by
1.6X - 20X over state-of-the-art solutions.
| [
{
"created": "Sun, 21 Apr 2024 03:31:01 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Apr 2024 20:34:32 GMT",
"version": "v2"
}
] | 2024-04-29 | [
[
"Zhu",
"Yuxuan",
""
],
[
"Liu",
"Jiachen",
""
],
[
"Chowdhury",
"Mosharaf",
""
],
[
"Lai",
"Fan",
""
]
] |
2404.13565 | Panfeng Li | Panfeng Li, Qikai Yang, Xieming Geng, Wenjing Zhou, Zhicheng Ding, Yi
Nian | Exploring Diverse Methods in Visual Question Answering | Accepted by 2024 5th International Conference on Electronic
Communication and Artificial Intelligence | Proceedings of the 2024 5th International Conference on Electronic
Communication and Artificial Intelligence (ICECAI), 2024, pp. 681-685 | 10.1109/ICECAI62591.2024.10674838 | null | cs.CV cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | This study explores innovative methods for improving Visual Question
Answering (VQA) using Generative Adversarial Networks (GANs), autoencoders, and
attention mechanisms. Leveraging a balanced VQA dataset, we investigate three
distinct strategies. Firstly, GAN-based approaches aim to generate answer
embeddings conditioned on image and question inputs, showing potential but
struggling with more complex tasks. Secondly, autoencoder-based techniques
focus on learning optimal embeddings for questions and images, achieving
results comparable to the GAN approach owing to better handling of complex questions. Lastly,
attention mechanisms, incorporating Multimodal Compact Bilinear pooling (MCB),
address language priors and attention modeling, albeit with a
complexity-performance trade-off. This study underscores the challenges and
opportunities in VQA and suggests avenues for future research, including
alternative GAN formulations and attentional mechanisms.
| [
{
"created": "Sun, 21 Apr 2024 07:34:44 GMT",
"version": "v1"
},
{
"created": "Tue, 21 May 2024 02:38:35 GMT",
"version": "v2"
}
] | 2024-09-26 | [
[
"Li",
"Panfeng",
""
],
[
"Yang",
"Qikai",
""
],
[
"Geng",
"Xieming",
""
],
[
"Zhou",
"Wenjing",
""
],
[
"Ding",
"Zhicheng",
""
],
[
"Nian",
"Yi",
""
]
] |
2404.13634 | Md Fahim Sikder | Resmi Ramachandranpillai, Md Fahim Sikder, David Bergstr\"om, Fredrik
Heintz | Bt-GAN: Generating Fair Synthetic Healthdata via Bias-transforming
Generative Adversarial Networks | null | Journal of Artificial Intelligence Research, vol. 79, Apr. 2024,
1313-41 | 10.1613/jair.1.15317 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Synthetic data generation offers a promising solution to enhance the
usefulness of Electronic Healthcare Records (EHR) by generating realistic
de-identified data. However, the existing literature primarily focuses on the
quality of synthetic health data, neglecting the crucial aspect of fairness in
downstream predictions. Consequently, models trained on synthetic EHR have
faced criticism for producing biased outcomes in target tasks. These biases can
arise from either spurious correlations between features or the failure of
models to accurately represent sub-groups. To address these concerns, we
present Bias-transforming Generative Adversarial Networks (Bt-GAN), a GAN-based
synthetic data generator specifically designed for the healthcare domain. In
order to tackle spurious correlations (i), we propose an
information-constrained Data Generation Process that enables the generator to
learn a fair deterministic transformation based on a well-defined notion of
algorithmic fairness. To overcome the challenge of capturing exact sub-group
representations (ii), we incentivize the generator to preserve sub-group
densities through score-based weighted sampling. This approach compels the
generator to learn from underrepresented regions of the data manifold. We
conduct extensive experiments using the MIMIC-III database. Our results
demonstrate that Bt-GAN achieves SOTA accuracy while significantly improving
fairness and minimizing bias amplification. We also perform an in-depth
explainability analysis to provide additional evidence supporting the validity
of our study. In conclusion, our research introduces a novel and professional
approach to addressing the limitations of synthetic data generation in the
healthcare domain. By incorporating fairness considerations and leveraging
advanced techniques such as GANs, we pave the way for more reliable and
unbiased predictions in healthcare applications.
| [
{
"created": "Sun, 21 Apr 2024 12:16:38 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Apr 2024 07:06:55 GMT",
"version": "v2"
},
{
"created": "Fri, 26 Apr 2024 05:02:53 GMT",
"version": "v3"
}
] | 2024-04-29 | [
[
"Ramachandranpillai",
"Resmi",
""
],
[
"Sikder",
"Md Fahim",
""
],
[
"Bergström",
"David",
""
],
[
"Heintz",
"Fredrik",
""
]
] |
2404.13667 | Felix Schmitt-Koopmann | Felix M. Schmitt-Koopmann, Elaine M. Huang, Hans-Peter Hutter, Thilo
Stadelmann, Alireza Darvishy | MathNet: A Data-Centric Approach for Printed Mathematical Expression
Recognition | 12 pages, 6 figures | IEEE Access 12 (2024) 76963-76974 | 10.1109/ACCESS.2024.3404834 | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Printed mathematical expression recognition (MER) models are usually trained
and tested using LaTeX-generated mathematical expressions (MEs) as input and
the LaTeX source code as ground truth. As the same ME can be generated by
various different LaTeX source codes, this leads to unwanted variations in the
ground truth data that bias test performance results and hinder efficient
learning. In addition, the use of only one font to generate the MEs heavily
limits the generalization of the reported results to realistic scenarios. We
propose a data-centric approach to overcome this problem, and present
convincing experimental results: Our main contribution is an enhanced LaTeX
normalization to map any LaTeX ME to a canonical form. Based on this process,
we developed an improved version of the benchmark dataset im2latex-100k,
featuring 30 fonts instead of one. Second, we introduce the real-world dataset
realFormula, with MEs extracted from papers. Third, we developed a MER model,
MathNet, based on a convolutional vision transformer, with superior results on
all four test sets (im2latex-100k, im2latexv2, realFormula, and InftyMDB-1),
outperforming the previous state of the art by up to 88.3%.
| [
{
"created": "Sun, 21 Apr 2024 14:03:34 GMT",
"version": "v1"
}
] | 2024-06-10 | [
[
"Schmitt-Koopmann",
"Felix M.",
""
],
[
"Huang",
"Elaine M.",
""
],
[
"Hutter",
"Hans-Peter",
""
],
[
"Stadelmann",
"Thilo",
""
],
[
"Darvishy",
"Alireza",
""
]
] |
2404.13756 | Anthony Bilic | Anthony Bilic, Chen Chen | BC-MRI-SEG: A Breast Cancer MRI Tumor Segmentation Benchmark | null | IEEE International Conference on Healthcare Informatics (IEEE ICHI
2024) | 10.1109/ICHI61247.2024.00107 | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Binary breast cancer tumor segmentation with Magnetic Resonance Imaging (MRI)
data is typically trained and evaluated on private medical data, which makes
comparing deep learning approaches difficult. We propose a benchmark
(BC-MRI-SEG) for binary breast cancer tumor segmentation based on publicly
available MRI datasets. The benchmark consists of four datasets in total, where
two datasets are used for supervised training and evaluation, and two are used
for zero-shot evaluation. Additionally, we compare state-of-the-art (SOTA)
approaches on our benchmark and provide an exhaustive list of available public
breast cancer MRI datasets. The source code has been made available at
https://irulenot.github.io/BC_MRI_SEG_Benchmark.
| [
{
"created": "Sun, 21 Apr 2024 19:42:28 GMT",
"version": "v1"
},
{
"created": "Sun, 2 Jun 2024 16:29:39 GMT",
"version": "v2"
}
] | 2024-08-27 | [
[
"Bilic",
"Anthony",
""
],
[
"Chen",
"Chen",
""
]
] |
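For a binary tumor-segmentation benchmark like the one above, the Dice similarity coefficient is the customary headline metric; a minimal sketch follows, assuming masks arrive as NumPy arrays (the paper's exact evaluation protocol is not stated in the abstract).

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    # Dice similarity coefficient for binary masks: 2|A∩B| / (|A| + |B|).
    # eps guards against empty masks on both sides.
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)
```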
2404.13812 | Qikai Yang | Qikai Yang, Panfeng Li, Xinhe Xu, Zhicheng Ding, Wenjing Zhou, Yi Nian | A Comparative Study on Enhancing Prediction in Social Network
Advertisement through Data Augmentation | Accepted by 2024 4th International Conference on Machine Learning and
Intelligent Systems Engineering (MLISE) | Proceedings of the 2024 4th International Conference on Machine
Learning and Intelligent Systems Engineering (MLISE), 2024, pp. 214-218 | 10.1109/MLISE62164.2024.10674203 | null | cs.SI cs.AI cs.IR cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In the ever-evolving landscape of social network advertising, the volume and
accuracy of data play a critical role in the performance of predictive models.
However, the development of robust predictive algorithms is often hampered by
the limited size and potential bias present in real-world datasets. This study
presents and explores a generative augmentation framework of social network
advertising data. Our framework explores three generative models for data
augmentation - Generative Adversarial Networks (GANs), Variational Autoencoders
(VAEs), and Gaussian Mixture Models (GMMs) - to enrich data availability and
diversity in the context of social network advertising analytics effectiveness.
By performing synthetic extensions of the feature space, we find that through
data augmentation, the performance of various classifiers has been
quantitatively improved. Furthermore, we compare the relative performance gains
brought by each data augmentation technique, giving practitioners a comparative
perspective for selecting the technique best suited to their task. This paper
thus contributes to the literature by showing that synthetic data augmentation
alleviates the limitations imposed by small or imbalanced datasets in the field
of social network advertising.
| [
{
"created": "Mon, 22 Apr 2024 01:16:11 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Apr 2024 02:43:14 GMT",
"version": "v2"
},
{
"created": "Sun, 28 Apr 2024 22:00:53 GMT",
"version": "v3"
}
] | 2024-09-23 | [
[
"Yang",
"Qikai",
""
],
[
"Li",
"Panfeng",
""
],
[
"Xu",
"Xinhe",
""
],
[
"Ding",
"Zhicheng",
""
],
[
"Zhou",
"Wenjing",
""
],
[
"Nian",
"Yi",
""
]
] |
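Of the three generators compared in the record above, the GMM is the simplest to illustrate. The sketch below fits one scikit-learn GaussianMixture per class and samples synthetic rows from it; the component count, sample sizing, and per-class fitting are illustrative assumptions, not the paper's reported configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_augment(X_train, y_train, n_new=500, n_components=5, seed=0):
    # Fit one GMM per class and sample synthetic rows from each,
    # roughly preserving the original class balance.
    X_aug, y_aug = [X_train], [y_train]
    classes, counts = np.unique(y_train, return_counts=True)
    for c, n_c in zip(classes, counts):
        gmm = GaussianMixture(n_components=n_components, random_state=seed)
        gmm.fit(X_train[y_train == c])
        n_samples = max(1, int(n_new * n_c / len(y_train)))
        X_new, _ = gmm.sample(n_samples)
        X_aug.append(X_new)
        y_aug.append(np.full(n_samples, c))
    return np.vstack(X_aug), np.concatenate(y_aug)
```

Classifiers trained on the augmented split versus the original one can then be compared on the same held-out test set.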
2404.13865 | Mohit Gupta | Avinash Anand, Kritarth Prasad, Ujjwal Goel, Mohit Gupta, Naman Lal,
Astha Verma, Rajiv Ratn Shah | Context-Enhanced Language Models for Generating Multi-Paper Citations | 14 pages, 7 figures, 11th International Conference, BDA 2023, Delhi,
India | Big Data and Artificial Intelligence 2023, Delhi, India, December
7, 80 94 | 10.1007/978-3-031-49601-1_6 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Citation text plays a pivotal role in elucidating the connection between
scientific documents, demanding an in-depth comprehension of the cited paper.
Constructing citations is often time-consuming, requiring researchers to delve
into extensive literature and grapple with articulating relevant content. To
address this challenge, the field of citation text generation (CTG) has
emerged. However, while earlier methods have primarily centered on creating
single-sentence citations, practical scenarios frequently necessitate citing
multiple papers within a single paragraph. To bridge this gap, we propose a
method that leverages Large Language Models (LLMs) to generate multi-citation
sentences. Our approach involves a single source paper and a collection of
target papers, culminating in a coherent paragraph containing multi-sentence
citation text. Furthermore, we introduce a curated dataset named MCG-S2ORC,
composed of English-language academic research papers in Computer Science,
showcasing multiple citation instances. In our experiments, we evaluate three
LLMs (LLaMA, Alpaca, and Vicuna) to ascertain the most effective model for this
endeavor. Additionally, we exhibit enhanced performance by integrating
knowledge graphs from target papers into the prompts for generating citation
text. This research underscores the potential of harnessing LLMs for citation
generation, opening a compelling avenue for exploring the intricate connections
between scientific documents.
| [
{
"created": "Mon, 22 Apr 2024 04:30:36 GMT",
"version": "v1"
}
] | 2024-04-23 | [
[
"Anand",
"Avinash",
""
],
[
"Prasad",
"Kritarth",
""
],
[
"Goel",
"Ujjwal",
""
],
[
"Gupta",
"Mohit",
""
],
[
"Lal",
"Naman",
""
],
[
"Verma",
"Astha",
""
],
[
"Shah",
"Rajiv Ratn",
""
]
] |
2404.13880 | Panfeng Li | Zhicheng Ding, Panfeng Li, Qikai Yang, Siyang Li, Qingtian Gong | Regional Style and Color Transfer | Accepted by 2024 5th International Conference on Computer Vision,
Image and Deep Learning | Proceedings of the 2024 5th International Conference on Computer
Vision, Image and Deep Learning (CVIDL), 2024, pp. 593-597 | 10.1109/CVIDL62147.2024.10604182 | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper presents a novel contribution to the field of regional style
transfer. Existing methods often suffer from the drawback of applying style
homogeneously across the entire image, leading to stylistic inconsistencies or
distorted foreground objects when applied to images with foreground elements
such as human figures. To address this limitation, we propose a new approach that
leverages a segmentation network to precisely isolate foreground objects within
the input image. Subsequently, style transfer is applied exclusively to the
background region. The isolated foreground objects are then carefully
reintegrated into the style-transferred background. To enhance the visual
coherence between foreground and background, a color transfer step is employed
on the foreground elements prior to their reincorporation. Finally, we utilize
feathering techniques to achieve a seamless amalgamation of foreground and
background, resulting in a visually unified and aesthetically pleasing final
composition. Extensive evaluations demonstrate that our proposed approach
yields significantly more natural stylistic transformations compared to
conventional methods.
| [
{
"created": "Mon, 22 Apr 2024 05:07:02 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Apr 2024 02:55:29 GMT",
"version": "v2"
},
{
"created": "Wed, 26 Jun 2024 22:43:05 GMT",
"version": "v3"
}
] | 2024-09-17 | [
[
"Ding",
"Zhicheng",
""
],
[
"Li",
"Panfeng",
""
],
[
"Yang",
"Qikai",
""
],
[
"Li",
"Siyang",
""
],
[
"Gong",
"Qingtian",
""
]
] |
2404.13996 | Fabrice Mayran De Chamisso | Fabrice Mayran de Chamisso, Lo\"ic Cotten, Valentine Dhers, Thomas
Lompech, Florian Seywert and Arnaud Susset | Challenges in automatic and selective plant-clearing | null | Proceedings of the IEEE ICRA 2024 Workshop on Field Robotics | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | With the advent of multispectral imagery and AI, there have been numerous
works on automatic plant segmentation for purposes such as counting, picking,
health monitoring, localized pesticide delivery, etc. In this paper, we tackle
the related problem of automatic and selective plant-clearing in a sustainable
forestry context, where an autonomous machine has to detect and avoid specific
plants while clearing any weeds which may compete with the species being
cultivated. Such an autonomous system requires a high level of robustness to
weather conditions, plant variability, terrain and weeds while remaining cheap
and easy to maintain. We notably discuss the lack of robustness of spectral
imagery, investigate the impact of the reference database's size and discuss
issues specific to AI systems operating in uncontrolled environments.
| [
{
"created": "Mon, 22 Apr 2024 09:01:14 GMT",
"version": "v1"
}
] | 2024-04-24 | [
[
"de Chamisso",
"Fabrice Mayran",
""
],
[
"Cotten",
"Loïc",
""
],
[
"Dhers",
"Valentine",
""
],
[
"Lompech",
"Thomas",
""
],
[
"Seywert",
"Florian",
""
],
[
"Susset",
"Arnaud",
""
]
] |
2404.14024 | Alexandre Bittar | Alexandre Bittar, Philip N. Garner | Exploring neural oscillations during speech perception via surrogate
gradient spiking neural networks | null | Frontiers in Neuroscience, Vol. 18 (2024) | 10.3389/fnins.2024.1449181 | null | cs.CL q-bio.NC | http://creativecommons.org/licenses/by-sa/4.0/ | Understanding cognitive processes in the brain demands sophisticated models
capable of replicating neural dynamics at large scales. We present a
physiologically inspired speech recognition architecture, compatible and
scalable with deep learning frameworks, and demonstrate that end-to-end
gradient descent training leads to the emergence of neural oscillations in the
central spiking neural network. Significant cross-frequency couplings,
indicative of these oscillations, are measured within and across network layers
during speech processing, whereas no such interactions are observed when
handling background noise inputs. Furthermore, our findings highlight the
crucial inhibitory role of feedback mechanisms, such as spike frequency
adaptation and recurrent connections, in regulating and synchronising neural
activity to improve recognition performance. Overall, on top of developing our
understanding of synchronisation phenomena notably observed in the human
auditory pathway, our architecture exhibits dynamic and efficient information
processing, with relevance to neuromorphic technology.
| [
{
"created": "Mon, 22 Apr 2024 09:40:07 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Sep 2024 16:20:49 GMT",
"version": "v2"
}
] | 2024-09-26 | [
[
"Bittar",
"Alexandre",
""
],
[
"Garner",
"Philip N.",
""
]
] |
2404.14044 | Jiahao Ma | Jiahao Ma, Miaomiao Liu, David Ahmedt-Aristizaba, Chuong Nguyen | HashPoint: Accelerated Point Searching and Sampling for Neural Rendering | CVPR2024 Highlight | The IEEE/CVF Conference on Computer Vision and Pattern Recognition
2024 | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | In this paper, we address the problem of efficient point searching and
sampling for volume neural rendering. Within this realm, two typical approaches
are employed: rasterization and ray tracing. The rasterization-based methods
enable real-time rendering at the cost of increased memory and lower fidelity.
In contrast, the ray-tracing-based methods yield superior quality but demand
longer rendering time. We address this problem with our HashPoint method, which
combines these two strategies: leveraging rasterization for efficient point
searching and sampling, and ray marching for rendering. Our method optimizes point
searching by rasterizing points within the camera's view, organizing them in a
hash table, and facilitating rapid searches. Notably, we accelerate the
rendering process by adaptive sampling on the primary surface encountered by
the ray. Our approach yields substantial speed-up for a range of
state-of-the-art ray-tracing-based methods, maintaining equivalent or superior
accuracy across synthetic and real test datasets. The code will be available at
https://jiahao-ma.github.io/hashpoint/.
| [
{
"created": "Mon, 22 Apr 2024 09:57:53 GMT",
"version": "v1"
},
{
"created": "Sat, 11 May 2024 13:31:18 GMT",
"version": "v2"
}
] | 2024-05-14 | [
[
"Ma",
"Jiahao",
""
],
[
"Liu",
"Miaomiao",
""
],
[
"Ahmedt-Aristizaba",
"David",
""
],
[
"Nguyen",
"Chuong",
""
]
] |
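The core trick in the HashPoint record above — rasterizing points into screen-space cells stored in a hash table so a ray's footprint can be looked up cheaply — can be sketched in a few lines of plain Python. Cell size, neighbourhood radius, and the dictionary-of-lists layout are illustrative choices rather than the paper's implementation.

```python
from collections import defaultdict

def build_hash_table(points_2d, cell_size):
    # Bucket rasterized (screen-space) points by pixel cell so that a
    # query can be answered with O(1) lookups per cell.
    table = defaultdict(list)
    for idx, (u, v) in enumerate(points_2d):
        table[(int(u // cell_size), int(v // cell_size))].append(idx)
    return table

def query(table, u, v, cell_size, radius=1):
    # Gather point indices from the cell containing (u, v) plus neighbours.
    cu, cv = int(u // cell_size), int(v // cell_size)
    hits = []
    for du in range(-radius, radius + 1):
        for dv in range(-radius, radius + 1):
            hits.extend(table.get((cu + du, cv + dv), []))
    return hits
```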
2404.14050 | Hilde Weerts | Hilde Weerts, Aislinn Kelly-Lyth, Reuben Binns, Jeremias Adams-Prassl | Unlawful Proxy Discrimination: A Framework for Challenging Inherently
Discriminatory Algorithms | null | 2024 ACM Conference on Fairness, Accountability, and Transparency
(FAccT '24) | 10.1145/3630106.3659010 | null | cs.AI cs.CY | http://creativecommons.org/licenses/by/4.0/ | Emerging scholarship suggests that the EU legal concept of direct
discrimination - where a person is given different treatment on grounds of a
protected characteristic - may apply to various algorithmic decision-making
contexts. This has important implications: unlike indirect discrimination,
there is generally no 'objective justification' stage in the direct
discrimination framework, which means that the deployment of directly
discriminatory algorithms will usually be unlawful per se. In this paper, we
focus on the most likely candidate for direct discrimination in the algorithmic
context, termed inherent direct discrimination, where a proxy is inextricably
linked to a protected characteristic. We draw on computer science literature to
suggest that, in the algorithmic context, 'treatment on the grounds of' needs
to be understood in terms of two steps: proxy capacity and proxy use. Only
where both elements can be made out can direct discrimination be said to be 'on
grounds of' a protected characteristic. We analyse the legal conditions of our
proposed proxy capacity and proxy use tests. Based on this analysis, we discuss
technical approaches and metrics that could be developed or applied to identify
inherent direct discrimination in algorithmic decision-making.
| [
{
"created": "Mon, 22 Apr 2024 10:06:17 GMT",
"version": "v1"
}
] | 2024-04-23 | [
[
"Weerts",
"Hilde",
""
],
[
"Kelly-Lyth",
"Aislinn",
""
],
[
"Binns",
"Reuben",
""
],
[
"Adams-Prassl",
"Jeremias",
""
]
] |
2404.14057 | Shir Lissak | Shir Lissak, Yaakov Ophir, Refael Tikochinski, Anat Brunstein Klomek,
Itay Sisso, Eyal Fruchter, Roi Reichart | Bored to Death: Artificial Intelligence Research Reveals the Role of
Boredom in Suicide Behavior | null | www.frontiersin.org/journals/psychiatry/articles/10.3389/fpsyt.2024.1328122 | null | null | cs.CL | http://creativecommons.org/publicdomain/zero/1.0/ | Background: Recent advancements in Artificial Intelligence (AI) contributed
significantly to suicide assessment, however, our theoretical understanding of
this complex behavior is still limited. Objective: This study aimed to harness
AI methodologies to uncover hidden risk factors that trigger or aggravate
suicide behaviors. Method: The primary dataset included 228,052 Facebook
postings by 1,006 users who completed the gold-standard Columbia Suicide
Severity Rating Scale. This dataset was analyzed using a bottom-up research
pipeline without a-priori hypotheses, and its findings were validated using a
top-down analysis of a new dataset. This secondary dataset included responses
by 1,062 participants to the same suicide scale as well as to well-validated
scales measuring depression and boredom. Results: An almost fully automated,
AI-guided research pipeline resulted in four Facebook topics that predicted the
risk of suicide, of which the strongest predictor was boredom. A comprehensive
literature review using APA PsycInfo revealed that boredom is rarely perceived
as a unique risk factor of suicide. A complementing top-down path analysis of
the secondary dataset uncovered an indirect relationship between boredom and
suicide, which was mediated by depression. An equivalent mediated relationship
was observed in the primary Facebook dataset as well. However, here, a direct
relationship between boredom and suicide risk was also observed. Conclusions:
Integrating AI methods allowed the discovery of an under-researched risk factor
of suicide. The study signals boredom as a maladaptive 'ingredient' that might
trigger suicide behaviors, regardless of depression. Further studies are
recommended to direct clinicians' attention to this burdening, and sometimes
existential experience.
| [
{
"created": "Mon, 22 Apr 2024 10:16:02 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Apr 2024 11:50:25 GMT",
"version": "v2"
}
] | 2024-04-29 | [
[
"Lissak",
"Shir",
""
],
[
"Ophir",
"Yaakov",
""
],
[
"Tikochinski",
"Refael",
""
],
[
"Klomek",
"Anat Brunstein",
""
],
[
"Sisso",
"Itay",
""
],
[
"Fruchter",
"Eyal",
""
],
[
"Reichart",
"Roi",
""
]
] |
2404.14183 | Yuxia Wang | Yuxia Wang, Jonibek Mansurov, Petar Ivanov, Jinyan Su, Artem
Shelmanov, Akim Tsvigun, Osama Mohammed Afzal, Tarek Mahmoud, Giovanni
Puccetti, Thomas Arnold, Chenxi Whitehouse, Alham Fikri Aji, Nizar Habash,
Iryna Gurevych, Preslav Nakov | SemEval-2024 Task 8: Multidomain, Multimodel and Multilingual
Machine-Generated Text Detection | 23 pages, 12 tables | Proceedings of the 18th International Workshop on Semantic
Evaluation (SemEval-2024) | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We present the results and the main findings of SemEval-2024 Task 8:
Multigenerator, Multidomain, and Multilingual Machine-Generated Text Detection.
The task featured three subtasks. Subtask A is a binary classification task
determining whether a text is written by a human or generated by a machine.
This subtask has two tracks: a monolingual track focused solely on English
texts and a multilingual track. Subtask B is to detect the exact source of a
text, discerning whether it is written by a human or generated by a specific
LLM. Subtask C aims to identify the changing point within a text, at which the
authorship transitions from human to machine. The task attracted a large number
of participants: subtask A monolingual (126), subtask A multilingual (59),
subtask B (70), and subtask C (30). In this paper, we present the task, analyze
the results, and discuss the system submissions and the methods they used. For
all subtasks, the best systems used LLMs.
| [
{
"created": "Mon, 22 Apr 2024 13:56:07 GMT",
"version": "v1"
}
] | 2024-04-23 | [
[
"Wang",
"Yuxia",
""
],
[
"Mansurov",
"Jonibek",
""
],
[
"Ivanov",
"Petar",
""
],
[
"Su",
"Jinyan",
""
],
[
"Shelmanov",
"Artem",
""
],
[
"Tsvigun",
"Akim",
""
],
[
"Afzal",
"Osama Mohammed",
""
],
[
"Mahmoud",
"Tarek",
""
],
[
"Puccetti",
"Giovanni",
""
],
[
"Arnold",
"Thomas",
""
],
[
"Whitehouse",
"Chenxi",
""
],
[
"Aji",
"Alham Fikri",
""
],
[
"Habash",
"Nizar",
""
],
[
"Gurevych",
"Iryna",
""
],
[
"Nakov",
"Preslav",
""
]
] |
2404.14232 | Anwesha Das | Anwesha Das, Zekun Wu, Iza \v{S}krjanec, and Anna Maria Feit (Saarland
University, Germany) | Shifting Focus with HCEye: Exploring the Dynamics of Visual Highlighting
and Cognitive Load on User Attention and Saliency Prediction | 18 pages, 9 Figures, Conference: ACM Symposium on Eye Tracking
Research & Applications (ETRA); Journal: Proc. ACM Hum.-Comput. Interact.,
Vol. 8, No. ETRA, Article 236. Publication date: May 2024 | Proc. ACM Hum.-Comput. Interact., Vol. 8, No. ETRA, Article 236.
Publication date: May 2024 | 10.1145/3655610 | null | cs.HC cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Visual highlighting can guide user attention in complex interfaces. However,
its effectiveness under limited attentional capacities is underexplored. This
paper examines the joint impact of visual highlighting (permanent and dynamic)
and dual-task-induced cognitive load on gaze behaviour. Our analysis, using
eye-movement data from 27 participants viewing 150 unique webpages, reveals that
while participants' ability to attend to UI elements decreases with increasing
cognitive load, dynamic adaptations (i.e., highlighting) remain
attention-grabbing. The presence of these factors significantly alters what
people attend to and thus what is salient. Accordingly, we show that
state-of-the-art saliency models increase their performance when accounting for
different cognitive loads. Our empirical insights, along with our openly
available dataset, enhance our understanding of attentional processes in UIs
under varying cognitive (and perceptual) loads and open the door for new models
that can predict user attention while multitasking.
| [
{
"created": "Mon, 22 Apr 2024 14:45:30 GMT",
"version": "v1"
},
{
"created": "Wed, 1 May 2024 14:54:30 GMT",
"version": "v2"
},
{
"created": "Thu, 2 May 2024 09:06:35 GMT",
"version": "v3"
}
] | 2024-05-03 | [
[
"Das",
"Anwesha",
"",
"Saarland\n University, Germany"
],
[
"Wu",
"Zekun",
"",
"Saarland\n University, Germany"
],
[
"Škrjanec",
"Iza",
"",
"Saarland\n University, Germany"
],
[
"Feit",
"Anna Maria",
"",
"Saarland\n University, Germany"
]
] |
2404.14357 | Ted Edward Holmberg | Ted Edward Holmberg, Elias Ioup, Mahdi Abdelguerfi | A Stochastic Geo-spatiotemporal Bipartite Network to Optimize GCOOS
Sensor Placement Strategies | 7 pages, 6 figures, 2022 IEEE International Conference on Big Data
(Big Data) | 2022 IEEE International Conference on Big Data (Big Data), Osaka,
Japan, 2022, pp. 3668-3674 | 10.1109/BigData55660.2022.10020928 | null | cs.MA cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes two new measures applicable in a spatial bipartite
network model: coverage and coverage robustness. The bipartite network must
consist of observer nodes, observable nodes, and edges that connect observer
nodes to observable nodes. The coverage and coverage robustness scores evaluate
the effectiveness of the observer node placements. This measure is beneficial
for stochastic data as it may be coupled with Monte Carlo simulations to
identify optimal placements for new observer nodes. In this paper, we construct
a Geo-SpatioTemporal Bipartite Network (GSTBN) within the stochastic and
dynamical environment of the Gulf of Mexico. This GSTBN consists of GCOOS
sensor nodes and HYCOM Region of Interest (RoI) event nodes. The goal is to
identify optimal placements to expand GCOOS to improve the forecasting outcomes
by the HYCOM ocean prediction model.
| [
{
"created": "Mon, 22 Apr 2024 17:12:06 GMT",
"version": "v1"
},
{
"created": "Fri, 27 Sep 2024 18:17:35 GMT",
"version": "v2"
}
] | 2024-10-01 | [
[
"Holmberg",
"Ted Edward",
""
],
[
"Ioup",
"Elias",
""
],
[
"Abdelguerfi",
"Mahdi",
""
]
] |
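The abstract above defines coverage and coverage robustness only informally, so the following is one plausible formalization — a sketch under assumed definitions: coverage as the fraction of observable nodes with at least one incident observer, and robustness as the worst-case coverage after deleting any single observer.

```python
def coverage(edges, observables):
    # Fraction of observable nodes reached by at least one observer,
    # given edges as (observer, observable) pairs.
    covered = {obs for _, obs in edges}
    return len(covered & set(observables)) / len(observables)

def coverage_robustness(edges, observables):
    # Worst-case coverage after removing any single observer — one
    # simple (assumed) notion of placement robustness.
    observers = {o for o, _ in edges}
    worst = 1.0
    for o in observers:
        kept = [(a, b) for a, b in edges if a != o]
        worst = min(worst, coverage(kept, observables))
    return worst
```

Coupled with Monte Carlo draws of RoI event locations, the same functions can score candidate placements for new sensor nodes.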
2404.14388 | Ted Edward Holmberg | Ted Edward Holmberg, Mahdi Abdelguerfi, Elias Ioup | STROOBnet Optimization via GPU-Accelerated Proximal Recurrence
Strategies | 10 pages, 17 figures, 2023 IEEE International Conference on Big Data
(BigData) | 2023 IEEE International Conference on Big Data (BigData),
Sorrento, Italy, 2023, pp. 2920-2929 | 10.1109/BigData59044.2023.10386774 | null | cs.LG cs.CV cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spatiotemporal networks' observational capabilities are crucial for accurate
data gathering and informed decisions across multiple sectors. This study
focuses on the Spatiotemporal Ranged Observer-Observable Bipartite Network
(STROOBnet), linking observational nodes (e.g., surveillance cameras) to events
within defined geographical regions, enabling efficient monitoring. Using data
from Real-Time Crime Camera (RTCC) systems and Calls for Service (CFS) in New
Orleans, where RTCC combats rising crime amidst reduced police presence, we
address the network's initial observational imbalances. Aiming for uniform
observational efficacy, we propose the Proximal Recurrence approach. It
outperformed traditional clustering methods like k-means and DBSCAN by jointly
accounting for event frequency and spatial proximity, enhancing observational
coverage.
| [
{
"created": "Mon, 22 Apr 2024 17:46:29 GMT",
"version": "v1"
},
{
"created": "Fri, 27 Sep 2024 18:56:19 GMT",
"version": "v2"
}
] | 2024-10-01 | [
[
"Holmberg",
"Ted Edward",
""
],
[
"Abdelguerfi",
"Mahdi",
""
],
[
"Ioup",
"Elias",
""
]
] |
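As a point of comparison for the Proximal Recurrence approach, the k-means baseline the record above mentions can be sketched directly with scikit-learn; the Proximal Recurrence method itself is not reproduced here since its details are not in the abstract.

```python
from sklearn.cluster import KMeans

def propose_placements(event_coords, k=10, seed=0):
    # Baseline placement: cluster historical event locations and place
    # one observer at each cluster centre. (The paper's method further
    # weights by event frequency; this is only the comparison point.)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(event_coords)
    return km.cluster_centers_
```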
2404.14450 | Sefika Efeoglu | Sefika Efeoglu | GraphMatcher: A Graph Representation Learning Approach for Ontology
Matching | The 17th International Workshop on Ontology Matching, The 21st
International Semantic Web Conference (ISWC) 2022, 23 October 2022, Hangzhou,
China | The 17th International Workshop on Ontology Matching, The 21st
International Semantic Web Conference (ISWC) 2022, 23 October 2022, Hangzhou,
China | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Ontology matching is defined as finding a relationship or correspondence
between two or more entities in two or more ontologies. To solve the
interoperability problem of the domain ontologies, semantically similar
entities in these ontologies must be found and aligned before merging them.
GraphMatcher, developed in this study, is an ontology matching system using a
graph attention approach to compute higher-level representation of a class
together with its surrounding terms. The GraphMatcher has obtained remarkable
results in the Ontology Alignment Evaluation Initiative (OAEI) 2022 conference
track. Its code is available at
\url{https://github.com/sefeoglu/gat_ontology_matching}.
| [
{
"created": "Sat, 20 Apr 2024 18:30:17 GMT",
"version": "v1"
}
] | 2024-04-24 | [
[
"Efeoglu",
"Sefika",
""
]
] |
2404.14575 | Richard Stromer | Richard Stromer (1), Oskar Triebe (1), Chad Zanocco (1), Ram Rajagopal
(1) ((1) Stanford University) | Designing forecasting software for forecast users: Empowering
non-experts to create and understand their own forecasts | 10 pages | AMCIS 2023 Proceedings 1 | null | null | cs.HC cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Forecasts inform decision-making in nearly every domain. Forecasts are often
produced by experts with rare or hard to acquire skills. In practice, forecasts
are often used by domain experts and managers with little forecasting
expertise. Our study focuses on how to design forecasting software that
empowers non-expert users. We study how users can make use of state-of-the-art
forecasting methods, embed their domain knowledge, and how they build
understanding and trust towards generated forecasts. To do so, we co-designed a
forecasting software prototype using feedback from users and then analyzed
their interactions with our prototype. Our results identified three main
considerations for non-expert users: (1) a safe stepwise approach facilitating
causal understanding and trust; (2) a white box model supporting
human-reasoning-friendly components; (3) the inclusion of domain knowledge.
This paper contributes insights into how non-expert users interact with
forecasting software and recommends ways to design more accessible
forecasting software.
| [
{
"created": "Mon, 22 Apr 2024 20:53:08 GMT",
"version": "v1"
}
] | 2024-04-24 | [
[
"Stromer",
"Richard",
"",
"Stanford University"
],
[
"Triebe",
"Oskar",
"",
"Stanford University"
],
[
"Zanocco",
"Chad",
"",
"Stanford University"
],
[
"Rajagopal",
"Ram",
"",
"Stanford University"
]
] |
2404.14606 | Armando Zhu | Armando Zhu, Keqin Li, Tong Wu, Peng Zhao, Bo Hong | Cross-Task Multi-Branch Vision Transformer for Facial Expression and
Mask Wearing Classification | null | Journal of Computer Technology and Applied Mathematics, vol. 1,
no. 1, Apr. 2024, pp. 46-53, | 10.5281/zenodo.11083875 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With wearing masks becoming a new cultural norm, facial expression
recognition (FER) while taking masks into account has become a significant
challenge. In this paper, we propose a unified multi-branch vision transformer
for facial expression recognition and mask wearing classification tasks. Our
approach extracts shared features for both tasks using a dual-branch
architecture that obtains multi-scale feature representations. Furthermore, we
propose a cross-task fusion phase that processes tokens for each task with
separate branches, while exchanging information using a cross attention module.
Our proposed framework reduces the overall complexity compared with using
separate networks for both tasks by the simple yet effective cross-task fusion
phase. Extensive experiments demonstrate that our proposed model performs
better than or on par with different state-of-the-art methods on both facial
expression recognition and facial mask wearing classification tasks.
| [
{
"created": "Mon, 22 Apr 2024 22:02:19 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Apr 2024 06:34:16 GMT",
"version": "v2"
}
] | 2024-05-01 | [
[
"Zhu",
"Armando",
""
],
[
"Li",
"Keqin",
""
],
[
"Wu",
"Tong",
""
],
[
"Zhao",
"Peng",
""
],
[
"Hong",
"Bo",
""
]
] |
2404.14736 | Katie Seaborn | Katie Seaborn, Jacqueline Urakami, Peter Pennefather, Norihisa P.
Miyake | Qualitative Approaches to Voice UX | null | ACM Computing Surveys (2024) | 10.1145/3658666 | null | cs.HC cs.AI cs.CL cs.CY cs.SD eess.AS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Voice is a natural mode of expression offered by modern computer-based
systems. Qualitative perspectives on voice-based user experiences (voice UX)
offer rich descriptions of complex interactions that numbers alone cannot fully
represent. We conducted a systematic review of the literature on qualitative
approaches to voice UX, capturing the nature of this body of work in a
systematic map and offering a qualitative synthesis of findings. We highlight
the benefits of qualitative methods for voice UX research, identify
opportunities for increasing rigour in methods and outcomes, and distill
patterns of experience across a diversity of devices and modes of qualitative
praxis.
| [
{
"created": "Tue, 23 Apr 2024 04:33:49 GMT",
"version": "v1"
}
] | 2024-04-24 | [
[
"Seaborn",
"Katie",
""
],
[
"Urakami",
"Jacqueline",
""
],
[
"Pennefather",
"Peter",
""
],
[
"Miyake",
"Norihisa P.",
""
]
] |
2404.14771 | Hong Huang | Hong Huang, Yuyi Wang, Luyao Li, Jun Lin | Music Style Transfer With Diffusion Model | 8 pages, 6 figures, ICMC 2023 | International Computer Music Conference (ICMC 2023) pp. 40-47,
October 2023 | null | null | cs.SD cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous studies on music style transfer have mainly focused on one-to-one
style conversion, which is relatively limited. When considering the conversion
between multiple styles, previous methods required designing multiple modes to
disentangle the complex style of the music, resulting in large computational
costs and slow audio generation. The existing music style transfer methods
generate spectrograms with artifacts, leading to significant noise in the
generated audio. To address these issues, this study proposes a music style
transfer framework based on diffusion models (DM) and uses spectrogram-based
methods to achieve multi-to-multi music style transfer. The GuideDiff method is
used to restore spectrograms to high-fidelity audio, accelerating audio
generation speed and reducing noise in the generated audio. Experimental
results show that our model has good performance in multi-mode music style
transfer compared to the baseline and can generate high-quality audio in
real-time on consumer-grade GPUs.
| [
{
"created": "Tue, 23 Apr 2024 06:22:19 GMT",
"version": "v1"
}
] | 2024-04-24 | [
[
"Huang",
"Hong",
""
],
[
"Wang",
"Yuyi",
""
],
[
"Li",
"Luyao",
""
],
[
"Lin",
"Jun",
""
]
] |
2404.14946 | Sen Liu | Sen Liu, Yiwei Guo, Xie Chen and Kai Yu | StoryTTS: A Highly Expressive Text-to-Speech Dataset with Rich Textual
Expressiveness Annotations | Accepted by ICASSP 2024 | IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP), 2024, pp. 11521-11525 | null | null | cs.SD cs.CL eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While acoustic expressiveness has long been studied in expressive
text-to-speech (ETTS), the inherent expressiveness in text lacks sufficient
attention, especially for ETTS of artistic works. In this paper, we introduce
StoryTTS, a highly expressive TTS dataset that contains rich expressiveness from
both acoustic and textual perspectives, built from the recording of a Mandarin storytelling
show. A systematic and comprehensive labeling framework is proposed for textual
expressiveness. We analyze and define speech-related textual expressiveness in
StoryTTS to include five distinct dimensions through linguistics, rhetoric,
etc. Then we employ large language models and prompt them with a few manual
annotation examples for batch annotation. The resulting corpus contains 61
hours of consecutive and highly prosodic speech equipped with accurate text
transcriptions and rich textual expressiveness annotations. Therefore, StoryTTS
can aid future ETTS research to fully mine the abundant intrinsic textual and
acoustic features. Experiments are conducted to validate that TTS models can
generate speech with improved expressiveness when integrating with the
annotated textual labels in StoryTTS.
| [
{
"created": "Tue, 23 Apr 2024 11:41:35 GMT",
"version": "v1"
}
] | 2024-04-24 | [
[
"Liu",
"Sen",
""
],
[
"Guo",
"Yiwei",
""
],
[
"Chen",
"Xie",
""
],
[
"Yu",
"Kai",
""
]
] |
2404.15003 | Aleksei Dorkin | Aleksei Dorkin and Kairit Sirts | Comparison of Current Approaches to Lemmatization: A Case Study in
Estonian | 6 pages, 2 figures | Proceedings of the 24th Nordic Conference on Computational
Linguistics (NoDaLiDa), pp. 280-285, May 2023 | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This study evaluates three different lemmatization approaches to Estonian --
Generative character-level models, Pattern-based word-level classification
models, and rule-based morphological analysis. According to our experiments, a
significantly smaller Generative model consistently outperforms the
Pattern-based classification model based on EstBERT. Additionally, we observe a
relatively small overlap in errors made by all three models, indicating that an
ensemble of different approaches could lead to improvements.
| [
{
"created": "Tue, 23 Apr 2024 13:06:32 GMT",
"version": "v1"
}
] | 2024-04-24 | [
[
"Dorkin",
"Aleksei",
""
],
[
"Sirts",
"Kairit",
""
]
] |
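The closing observation above — that the three lemmatizers make largely non-overlapping errors — suggests a simple majority-vote ensemble. The sketch below is a hypothetical combination rule, not an experiment from the paper.

```python
from collections import Counter

def ensemble_lemma(token, lemmatizers):
    # Majority vote over several lemmatizers; ties fall back to the
    # first (presumed strongest) model's answer.
    votes = [lem(token) for lem in lemmatizers]
    best, n = Counter(votes).most_common(1)[0]
    return best if n > 1 else votes[0]
```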
2404.15010 | Shuofeng Sun | Shuofeng Sun, Yongming Rao, Jiwen Lu, Haibin Yan | X-3D: Explicit 3D Structure Modeling for Point Cloud Recognition | null | The IEEE/CVF Conference on Computer Vision and Pattern Recognition
2024 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Numerous prior studies predominantly emphasize constructing relation vectors
for individual neighborhood points and generating dynamic kernels for each
vector and embedding these into high-dimensional spaces to capture implicit
local structures. However, we contend that such implicit high-dimensional
structure modeling approach inadequately represents the local geometric
structure of point clouds due to the absence of explicit structural
information. Hence, we introduce X-3D, an explicit 3D structure modeling
approach. X-3D functions by capturing the explicit local structural information
within the input 3D space and employing it to produce dynamic kernels with
shared weights for all neighborhood points within the current local region.
This modeling approach introduces effective geometric prior and significantly
diminishes the disparity between the local structure of the embedding space and
the original input point cloud, thereby improving the extraction of local
features. Experiments show that our method can be applied to a variety of
methods and achieves state-of-the-art performance on segmentation,
classification, and detection tasks with lower extra computational cost, such as
\textbf{90.7\%} on ScanObjectNN for classification, \textbf{79.2\%} on S3DIS
6-fold and \textbf{74.3\%} on S3DIS Area 5 for segmentation, \textbf{76.3\%} on
ScanNetV2 for segmentation, and \textbf{64.5\%} mAP, \textbf{46.9\%} mAP on SUN
RGB-D and \textbf{69.0\%} mAP, \textbf{51.1\%} mAP on ScanNetV2 for detection.
Our code is available
at
\href{https://github.com/sunshuofeng/X-3D}{https://github.com/sunshuofeng/X-3D}.
| [
{
"created": "Tue, 23 Apr 2024 13:15:35 GMT",
"version": "v1"
}
] | 2024-04-24 | [
[
"Sun",
"Shuofeng",
""
],
[
"Rao",
"Yongming",
""
],
[
"Lu",
"Jiwen",
""
],
[
"Yan",
"Haibin",
""
]
] |
2404.15129 | Sara Dadjouy | Sara Dadjouy, Hedieh Sajedi | Gallbladder Cancer Detection in Ultrasound Images based on YOLO and
Faster R-CNN | Published in 2024 10th International Conference on Artificial
Intelligence and Robotics (QICAR) | 2024 10th International Conference on Artificial Intelligence and
Robotics (QICAR) (pp. 227-231). IEEE | 10.1109/QICAR61538.2024.10496645 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Medical image analysis is a significant application of artificial
intelligence for disease diagnosis. A crucial step in this process is the
identification of regions of interest within the images. This task can be
automated using object detection algorithms. YOLO and Faster R-CNN are renowned
examples of such algorithms, each with its own strengths and weaknesses. This study
aims to explore the advantages of both techniques to select more accurate
bounding boxes for gallbladder detection from ultrasound images, thereby
enhancing gallbladder cancer classification. A fusion method that leverages the
benefits of both techniques is presented in this study. The proposed method
demonstrated superior classification performance, with an accuracy of 92.62%,
compared to the individual use of Faster R-CNN and YOLOv8, which yielded
accuracies of 90.16% and 82.79%, respectively.
| [
{
"created": "Tue, 23 Apr 2024 15:29:02 GMT",
"version": "v1"
}
] | 2024-04-24 | [
[
"Dadjouy",
"Sara",
""
],
[
"Sajedi",
"Hedieh",
""
]
] |
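The paper's fusion rule is not spelled out in the abstract, so the sketch below shows one generic way to combine a Faster R-CNN box with a YOLO box: confidence-weighted averaging when the detectors agree, falling back to the more confident detector otherwise. The box format, IoU threshold, and weighting are all assumptions.

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def fuse(box_a, score_a, box_b, score_b, iou_thr=0.5):
    # If the detectors agree (high IoU), average coordinates weighted by
    # confidence; otherwise keep the more confident detector's box.
    if iou(box_a, box_b) >= iou_thr:
        w = score_a / (score_a + score_b)
        return tuple(w * a + (1 - w) * b for a, b in zip(box_a, box_b))
    return box_a if score_a >= score_b else box_b
```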
2404.15196 | Yurii Paniv | Yurii Paniv, Dmytro Chaplynskyi, Nikita Trynus, Volodymyr Kyrylov | Setting up the Data Printer with Improved English to Ukrainian Machine
Translation | Published at Proceedings of the Third Ukrainian Natural Language
Processing Workshop (UNLP)@ LREC-COLING 2024 (pp. 41-50) | Proceedings of the Third Ukrainian Natural Language Processing
Workshop (UNLP)@ LREC-COLING 2024 (pp. 41-50) | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | To build large language models for Ukrainian we need to expand our corpora
with large amounts of new algorithmic tasks expressed in natural language.
Examples of task performance expressed in English are abundant, so with a
high-quality translation system our community will be enabled to curate
datasets faster. To aid this goal, we introduce a recipe to build a translation
system using supervised finetuning of a large pretrained language model with a
noisy parallel dataset of 3M pairs of Ukrainian and English sentences followed
by a second phase of training using 17K examples selected by k-fold perplexity
filtering on another dataset of higher quality. Our decoder-only model named
Dragoman beats the performance of previous state-of-the-art encoder-decoder models
on the FLORES devtest set.
| [
{
"created": "Tue, 23 Apr 2024 16:34:34 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Jul 2024 10:06:15 GMT",
"version": "v2"
}
] | 2024-07-15 | [
[
"Paniv",
"Yurii",
""
],
[
"Chaplynskyi",
"Dmytro",
""
],
[
"Trynus",
"Nikita",
""
],
[
"Kyrylov",
"Volodymyr",
""
]
] |
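The k-fold perplexity filtering step described above has a clean generic shape: every example is scored by a model finetuned on the other folds, so nothing is evaluated by a model that saw it in training, and only the lowest-perplexity slice is kept. The sketch below assumes a hypothetical `train_and_score` callback (e.g., finetune a language model, then return per-example perplexities); fold count and keep fraction are illustrative.

```python
import numpy as np

def kfold_perplexity_filter(examples, train_and_score, k=5, keep_frac=0.2, seed=0):
    # train_and_score(train_set, held_out) -> array of per-example
    # perplexities for held_out (user-supplied, hypothetical).
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(examples))
    folds = np.array_split(order, k)
    ppl = np.empty(len(examples))
    for i in range(k):
        held = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        ppl[held] = train_and_score([examples[t] for t in train],
                                    [examples[h] for h in held])
    keep = np.argsort(ppl)[: int(keep_frac * len(examples))]
    return [examples[i] for i in keep]
```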
2404.15276 | Xiangyu Xu | Xiangyu Xu, Lijuan Liu, Shuicheng Yan | SMPLer: Taming Transformers for Monocular 3D Human Shape and Pose
Estimation | Published at TPAMI 2024 | https://www.computer.org/csdl/journal/tp/2024/05/10354384/1SP2qWh8Fq0 | null | null | cs.CV cs.AI cs.GR cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing Transformers for monocular 3D human shape and pose estimation
typically have a quadratic computation and memory complexity with respect to
the feature length, which hinders the exploitation of fine-grained information
in high-resolution features that is beneficial for accurate reconstruction. In
this work, we propose an SMPL-based Transformer framework (SMPLer) to address
this issue. SMPLer incorporates two key ingredients: a decoupled attention
operation and an SMPL-based target representation, which allow effective
utilization of high-resolution features in the Transformer. In addition, based
on these two designs, we also introduce several novel modules including a
multi-scale attention and a joint-aware attention to further boost the
reconstruction performance. Extensive experiments demonstrate the effectiveness
of SMPLer against existing 3D human shape and pose estimation methods both
quantitatively and qualitatively. Notably, the proposed algorithm achieves an
MPJPE of 45.2 mm on the Human3.6M dataset, improving upon Mesh Graphormer by
more than 10% with fewer than one-third of the parameters. Code and pretrained
models are available at https://github.com/xuxy09/SMPLer.
| [
{
"created": "Tue, 23 Apr 2024 17:59:59 GMT",
"version": "v1"
}
] | 2024-04-24 | [
[
"Xu",
"Xiangyu",
""
],
[
"Liu",
"Lijuan",
""
],
[
"Yan",
"Shuicheng",
""
]
] |
2404.15310 | Ruikun Hou | Ruikun Hou, Tim F\"utterer, Babette B\"uhler, Efe Bozkir, Peter
Gerjets, Ulrich Trautwein, Enkelejda Kasneci | Automated Assessment of Encouragement and Warmth in Classrooms
Leveraging Multimodal Emotional Features and ChatGPT | Accepted as a full paper by the 25th International Conference on
Artificial Intelligence in Education (AIED 2024) | Proceedings of the 25th International Conference on Artificial
Intelligence in Education (AIED 2024) | 10.1007/978-3-031-64302-6_5 | null | cs.HC cs.AI cs.CY cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classroom observation protocols standardize the assessment of teaching
effectiveness and facilitate comprehension of classroom interactions. Whereas
these protocols offer teachers specific feedback on their teaching practices,
the manual coding by human raters is resource-intensive and often unreliable.
This has sparked interest in developing AI-driven, cost-effective methods for
automating such holistic coding. Our work explores a multimodal approach to
automatically estimating encouragement and warmth in classrooms, a key
component of the Global Teaching Insights (GTI) study's observation protocol.
To this end, we employed facial and speech emotion recognition with sentiment
analysis to extract interpretable features from video, audio, and transcript
data. The prediction task involved both classification and regression methods.
Additionally, in light of recent large language models' remarkable text
annotation capabilities, we evaluated ChatGPT's zero-shot performance on this
scoring task based on transcripts. We demonstrated our approach on the GTI
dataset, comprising 367 16-minute video segments from 92 authentic lesson
recordings. The inferences of GPT-4 and the best-trained model yielded
correlations of r = .341 and r = .441 with human ratings, respectively.
Combining estimates from both models through averaging, an ensemble approach
achieved a correlation of r = .513, comparable to human inter-rater
reliability. Our model explanation analysis indicated that text sentiment
features were the primary contributors to the trained model's decisions.
Moreover, GPT-4 could deliver logical and concrete reasoning as potential
teacher guidelines. Our findings provide insights into using advanced,
multimodal techniques for automated classroom observation, aiming to foster
teacher training through frequent and valuable feedback.
| [
{
"created": "Mon, 1 Apr 2024 16:58:09 GMT",
"version": "v1"
}
] | 2024-07-04 | [
[
"Hou",
"Ruikun",
""
],
[
"Fütterer",
"Tim",
""
],
[
"Bühler",
"Babette",
""
],
[
"Bozkir",
"Efe",
""
],
[
"Gerjets",
"Peter",
""
],
[
"Trautwein",
"Ulrich",
""
],
[
"Kasneci",
"Enkelejda",
""
]
] |
2404.15324 | Jos\'e L. Risco-Mart\'in | Jos\'e L. Risco-Mart\'in, Ignacio-Iker Prado-Rujas, Javier Campoy,
Mar\'ia S. P\'erez and Katzalin Olcoz | Advanced simulation-based predictive modelling for solar irradiance
sensor farms | null | Journal of Simulation, pp. 1-18, 2024 | 10.1080/17477778.2024.2333775 | null | eess.SP cs.AI cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | As solar power continues to grow and replace traditional energy sources, the
need for reliable forecasting models becomes increasingly important to ensure
the stability and efficiency of the grid. However, the management of these
models still needs to be improved, and new tools and technologies are required
to handle the deployment and control of solar facilities. This work introduces
a novel framework named Cloud-based Analysis and Integration for Data
Efficiency (CAIDE), designed for real-time monitoring, management, and
forecasting of solar irradiance sensor farms. CAIDE is designed to manage
multiple sensor farms simultaneously while improving predictive models in
real-time using well-grounded Modeling and Simulation (M&S) methodologies. The
framework leverages Model Based Systems Engineering (MBSE) and an Internet of
Things (IoT) infrastructure to support the deployment and analysis of solar
plants in dynamic environments. The system can adapt and re-train the model
when given incorrect results, ensuring that forecasts remain accurate and
up-to-date. Furthermore, CAIDE can be executed in sequential, parallel, and
distributed architectures, assuring scalability. The effectiveness of CAIDE is
demonstrated in a complex scenario composed of several solar irradiance sensor
farms connected to a centralized management system. Our results show that CAIDE
is scalable and effective in managing and forecasting solar power production
while improving the accuracy of predictive models in real time. The framework
has important implications for the deployment of solar plants and the future of
renewable energy sources.
| [
{
"created": "Fri, 5 Apr 2024 15:44:51 GMT",
"version": "v1"
}
] | 2024-04-25 | [
[
"Risco-Martín",
"José L.",
""
],
[
"Prado-Rujas",
"Ignacio-Iker",
""
],
[
"Campoy",
"Javier",
""
],
[
"Pérez",
"María S.",
""
],
[
"Olcoz",
"Katzalin",
""
]
] |
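The adapt-and-retrain behaviour described in the CAIDE record above reduces to a monitoring loop in its simplest form. The sketch below is in that spirit only — the `predict`/`retrain` interfaces, window length, and RMSE threshold are all illustrative assumptions, not the framework's API.

```python
import numpy as np

def monitor_and_retrain(model, stream, retrain, rmse_threshold=50.0, window=96):
    # Track squared errors over a sliding window of irradiance readings
    # and re-fit the model when the error drifts past a threshold.
    errors = []
    for x, y_true in stream:
        errors.append((model.predict(x) - y_true) ** 2)
        if len(errors) >= window:
            rmse = float(np.sqrt(np.mean(errors[-window:])))
            if rmse > rmse_threshold:
                model = retrain(model)  # hypothetical callback: re-fit on fresh data
                errors.clear()
    return model
```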
2404.15814 | Hyunsu Kim | Hyunsu Kim, Jongmin Yoon, and Juho Lee | Fast Ensembling with Diffusion Schr\"odinger Bridge | null | ICLR 2024 | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Deep Ensemble (DE) approach is a straightforward technique used to enhance
the performance of deep neural networks by training them from different initial
points, converging towards various local optima. However, a limitation of this
methodology lies in its high computational overhead for inference, arising from
the necessity to store numerous learned parameters and execute individual
forward passes for each parameter during the inference stage. We propose a
novel approach called Diffusion Bridge Network (DBN) to address this challenge.
Based on the theory of the Schr\"odinger bridge, this method directly learns to
simulate an Stochastic Differential Equation (SDE) that connects the output
distribution of a single ensemble member to the output distribution of the
ensembled model, allowing us to obtain ensemble prediction without having to
invoke forward pass through all the ensemble models. By substituting the heavy
ensembles with this lightweight neural network constructing DBN, we achieved
inference with reduced computational cost while maintaining accuracy and
uncertainty scores on benchmark datasets such as CIFAR-10, CIFAR-100, and
TinyImageNet. Our implementation is available at
https://github.com/kim-hyunsu/dbn.
| [
{
"created": "Wed, 24 Apr 2024 11:35:02 GMT",
"version": "v1"
}
] | 2024-04-25 | [
[
"Kim",
"Hyunsu",
""
],
[
"Yoon",
"Jongmin",
""
],
[
"Lee",
"Juho",
""
]
] |
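At inference time, the learned bridge above amounts to integrating an SDE from one output distribution to another. A minimal Euler-Maruyama sketch is shown below, where `drift` stands in for the trained bridge network (hypothetical here); step count and noise scale are illustrative.

```python
import numpy as np

def simulate_sde(x0, drift, sigma, t1=1.0, n_steps=100, seed=0):
    # Euler-Maruyama integration of dX = drift(X, t) dt + sigma dW.
    rng = np.random.default_rng(seed)
    dt = t1 / n_steps
    x = np.asarray(x0, dtype=float).copy()
    for step in range(n_steps):
        t = step * dt
        noise = rng.standard_normal(x.shape) * np.sqrt(dt)
        x = x + drift(x, t) * dt + sigma * noise
    return x
```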
2404.16042 | Romy M\"uller | Romy M\"uller | How explainable AI affects human performance: A systematic review of the
behavioural consequences of saliency maps | null | International Journal of Human-Computer Interaction (2024) 1-32 | 10.1080/10447318.2024.2381929 | null | cs.HC cs.AI | http://creativecommons.org/licenses/by/4.0/ | Saliency maps can explain how deep neural networks classify images. But are
they actually useful for humans? The present systematic review of 68 user
studies found that while saliency maps can enhance human performance, null
effects or even costs are quite common. To investigate what modulates these
effects, the empirical outcomes were organised along several factors related to
the human tasks, AI performance, XAI methods, images to be classified, human
participants and comparison conditions. In image-focused tasks, benefits were
less common than in AI-focused tasks, but the effects depended on the specific
cognitive requirements. Moreover, benefits were usually restricted to incorrect
AI predictions in AI-focused tasks but to correct ones in image-focused tasks.
XAI-related factors had surprisingly little impact. The evidence was limited
for image- and human-related factors and the effects were highly dependent on
the comparison conditions. These findings may support the design of future user
studies.
| [
{
"created": "Wed, 3 Apr 2024 21:46:25 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Apr 2024 04:25:12 GMT",
"version": "v2"
}
] | 2024-08-20 | [
[
"Müller",
"Romy",
""
]
] |
2404.16047 | Nanna Inie | Nanna Inie, Stefania Druga, Peter Zukerman, Emily M. Bender | From "AI" to Probabilistic Automation: How Does Anthropomorphization of
Technical Systems Descriptions Influence Trust? | Accepted to FAccT 2024. arXiv admin note: text overlap with
arXiv:2403.05957 | FAccT 2024 | 10.1145/3630106.3659040 | null | cs.HC cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper investigates the influence of anthropomorphized descriptions of
so-called "AI" (artificial intelligence) systems on people's self-assessment of
trust in the system. Building on prior work, we define four categories of
anthropomorphization (1. Properties of a cognizer, 2. Agency, 3. Biological
metaphors, and 4. Properties of a communicator). We use a survey-based approach
(n=954) to investigate whether participants are likely to trust one of two
(fictitious) "AI" systems by randomly assigning people to see either an
anthropomorphized or a de-anthropomorphized description of the systems. We find
that participants are no more likely to trust anthropomorphized over
de-anthropomorphized product descriptions overall. The type of product or system
in combination with different anthropomorphic categories appears to exert
greater influence on trust than anthropomorphizing language alone, and age is
the only demographic factor that significantly correlates with people's
preference for anthropomorphized or de-anthropomorphized descriptions. When
elaborating on their choices, participants highlight factors such as lesser of
two evils, lower or higher stakes contexts, and human favoritism as driving
motivations when choosing between product A and B, irrespective of whether they
saw an anthropomorphized or a de-anthropomorphized description of the product.
Our results suggest that "anthropomorphism" in "AI" descriptions is an
aggregate concept that may influence different groups differently, and provide
nuance to the discussion of whether anthropomorphization leads to higher trust
and over-reliance by the general public in systems sold as "AI".
| [
{
"created": "Mon, 8 Apr 2024 17:01:09 GMT",
"version": "v1"
}
] | 2024-06-10 | [
[
"Inie",
"Nanna",
""
],
[
"Druga",
"Stefania",
""
],
[
"Zukerman",
"Peter",
""
],
[
"Bender",
"Emily M.",
""
]
] |
2404.16074 | Md Shajalal | Md Shajalal, Alexander Boden, Gunnar Stevens, Delong Du, Dean-Robin
Kern | Explaining AI Decisions: Towards Achieving Human-Centered Explainability
in Smart Home Environments | This is the pre-print version of our accepted paper at the 2nd World
Conference on eXplainable Artificial Intelligence (xAI2024), which will be
held in Valletta, Malta in 17-19 July, 2024 | Explainable Artificial Intelligence. xAI 2024. Communications in
Computer and Information Science, vol 2156. Springer, Cham | 10.1007/978-3-031-63803-9_23 | null | cs.HC cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Smart home systems are gaining popularity as homeowners strive to enhance
their living and working environments while minimizing energy consumption.
However, the adoption of artificial intelligence (AI)-enabled decision-making
models in smart home systems faces challenges due to the complexity and
black-box nature of these systems, leading to concerns about explainability,
trust, transparency, accountability, and fairness. The emerging field of
explainable artificial intelligence (XAI) addresses these issues by providing
explanations for the models' decisions and actions. While state-of-the-art XAI
methods are beneficial for AI developers and practitioners, they may not be
easily understood by general users, particularly household members. This paper
advocates for human-centered XAI methods, emphasizing the importance of
delivering readily comprehensible explanations to enhance user satisfaction and
drive the adoption of smart home systems. We review state-of-the-art XAI
methods and prior studies focusing on human-centered explanations for general
users in the context of smart home applications. Through experiments on two
smart home application scenarios, we demonstrate that explanations generated by
prominent XAI techniques might not be effective in helping users understand and
make decisions. We thus argue for the necessity of a human-centric approach in
representing explanations in smart home systems and highlight relevant
human-computer interaction (HCI) methodologies, including user studies,
prototyping, technology probes analysis, and heuristic evaluation, that can be
employed to generate and present human-centered explanations to users.
| [
{
"created": "Tue, 23 Apr 2024 22:31:42 GMT",
"version": "v1"
}
] | 2024-07-30 | [
[
"Shajalal",
"Md",
""
],
[
"Boden",
"Alexander",
""
],
[
"Stevens",
"Gunnar",
""
],
[
"Du",
"Delong",
""
],
[
"Kern",
"Dean-Robin",
""
]
] |
2404.16104 | David Doukhan | Albert Rilliard, David Doukhan, R\'emi Uro, Simon Devauchelle | Evolution of Voices in French Audiovisual Media Across Genders and Age
in a Diachronic Perspective | 5 pages, 2 figures, keywords: Gender, Diachrony, Vocal Tract
Resonance, Vocal register, Broadcast speech | Radek Skarnitzl & Jan Vol\'in (Eds.), Proceedings of the 20th
International Congress of Phonetic Sciences (ICPhS), Prague 2023, pp.
753-757. Guarant International. ISBN 978-80-908114-2-3 | null | null | eess.AS cs.CL cs.SD | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present a diachronic acoustic analysis of the voice of 1023 speakers from
French media archives. The speakers are spread across 32 categories based on
four periods (years 1955/56, 1975/76, 1995/96, 2015/16), four age groups
(20-35; 36-50; 51-65; >65), and two genders. The fundamental frequency ($F_0$)
and the first four formants (F1-4) were estimated. Procedures used to ensure
the quality of these estimations on heterogeneous data are described. From each
speaker's $F_0$ distribution, the base-$F_0$ value was calculated to estimate
the register. Average vocal tract length was estimated from formant
frequencies. Base-$F_0$ and vocal tract length were fit by linear mixed models
to evaluate how they may have changed across time periods and genders,
corrected for age effects. Results show an effect of the period with a tendency
to lower voices, independently of gender. A lowering of pitch is observed with
age for female but not male speakers.
| [
{
"created": "Wed, 24 Apr 2024 18:00:06 GMT",
"version": "v1"
}
] | 2024-04-26 | [
[
"Rilliard",
"Albert",
""
],
[
"Doukhan",
"David",
""
],
[
"Uro",
"Rémi",
""
],
[
"Devauchelle",
"Simon",
""
]
] |
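The abstract above estimates average vocal tract length from formant frequencies. As a rough illustration, and assuming the standard uniform-tube model (the authors' exact estimator may differ), the inversion can be sketched as:

```python
import numpy as np

def vocal_tract_length(formants_hz, c=35000.0):
    """Estimate vocal tract length (cm) from formant frequencies (Hz).

    Assumes a uniform tube closed at the glottis and open at the lips,
    for which formant k resonates at F_k = (2k - 1) * c / (4 * L).
    `c` is the speed of sound in warm, humid air (cm/s).
    """
    formants_hz = np.asarray(formants_hz, dtype=float)
    k = np.arange(1, len(formants_hz) + 1)
    # Invert the tube model for each formant, then average the estimates.
    lengths = (2 * k - 1) * c / (4.0 * formants_hz)
    return lengths.mean()

# Typical adult formant values (Hz) for a neutral vowel:
print(round(vocal_tract_length([500, 1500, 2500, 3500]), 1))  # 17.5 cm
```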
2404.16218 | Julian Stier | Simon Neumeyer, Julian Stier, Michael Granitzer | Efficient NAS with FaDE on Hierarchical Spaces | null | Advances in Intelligent Data Analysis XXII. IDA 2024. Lecture
Notes in Computer Science, vol 14642. Springer, Cham | 10.1007/978-3-031-58553-1_13 | null | cs.NE cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural architecture search (NAS) is a challenging problem. Hierarchical
search spaces allow for cheap evaluations of neural network sub-modules to
serve as a surrogate for architecture evaluations. Yet, sometimes the hierarchy
is too restrictive or the surrogate fails to generalize. We present FaDE which
uses differentiable architecture search to obtain relative performance
predictions on finite regions of a hierarchical NAS space. The relative nature
of these ranks calls for a memory-less, batch-wise outer search algorithm for
which we use an evolutionary algorithm with pseudo-gradient descent. FaDE is
especially suited to deep hierarchical, i.e. multi-cell, search spaces, which
it can explore at linear instead of exponential cost and therefore
eliminates the need for a proxy search space.
Our experiments show that firstly, FaDE-ranks on finite regions of the search
space correlate with corresponding architecture performances and secondly, the
ranks can empower a pseudo-gradient evolutionary search on the complete neural
architecture search space.
| [
{
"created": "Wed, 24 Apr 2024 21:33:17 GMT",
"version": "v1"
}
] | 2024-04-26 | [
[
"Neumeyer",
"Simon",
""
],
[
"Stier",
"Julian",
""
],
[
"Granitzer",
"Michael",
""
]
] |
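The FaDE abstract describes a memory-less, batch-wise outer search driven only by relative ranks from the differentiable inner search. A minimal sketch of that outer loop follows; the inner search is stubbed by a toy ranking function and the pseudo-gradient update is simplified to rank-based truncation selection (both are assumptions, not the authors' implementation):

```python
import random

def fade_outer_step(population, rank_fn, mutate, n_offspring=8):
    """One memory-less, batch-wise step of an evolutionary outer search.

    `rank_fn` stands in for FaDE's differentiable inner search: it takes a
    batch of architectures from one region of the hierarchical space and
    returns only their *relative* ranks (0 = best). Because ranks are not
    comparable across batches, no fitness values are stored between steps.
    """
    batch = [mutate(random.choice(population)) for _ in range(n_offspring)]
    ranks = rank_fn(batch)  # relative ranks within this batch only
    ranked = sorted(zip(ranks, batch), key=lambda pair: pair[0])
    return [arch for _, arch in ranked[: len(population)]]

# Toy demo on integer "architectures"; smaller value = better rank.
def toy_rank(batch):
    order = sorted(range(len(batch)), key=lambda i: batch[i])
    ranks = [0] * len(batch)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks

pop = [10, 20, 30, 40]
for _ in range(20):
    pop = fade_outer_step(pop, toy_rank, mutate=lambda a: a + random.randint(-3, 3))
print(pop)  # values drift downward as better-ranked offspring survive
```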
2404.16409 | Nicolas Audebert | Aimi Okabayashi (IRISA, OBELIX), Nicolas Audebert (CEDRIC - VERTIGO,
CNAM, LaSTIG, IGN), Simon Donike (IPL), Charlotte Pelletier (OBELIX, IRISA) | Cross-sensor super-resolution of irregularly sampled Sentinel-2 time
series | null | EARTHVISION 2024 IEEE/CVF CVPR Workshop. Large Scale Computer
Vision for Remote Sensing Imagery, Jun 2024, Seattle, United States | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Satellite imaging generally presents a trade-off between the frequency of
acquisitions and the spatial resolution of the images. Super-resolution is
often advanced as a way to get the best of both worlds. In this work, we
investigate multi-image super-resolution of satellite image time series, i.e.
how multiple images of the same area acquired at different dates can help
reconstruct a higher resolution observation. In particular, we extend
state-of-the-art deep single and multi-image super-resolution algorithms, such
as SRDiff and HighRes-net, to deal with irregularly sampled Sentinel-2 time
series. We introduce BreizhSR, a new dataset for 4x super-resolution of
Sentinel-2 time series using very high-resolution SPOT-6 imagery of Brittany, a
French region. We show that using multiple images significantly improves
super-resolution performance, and that a well-designed temporal positional
encoding allows us to perform super-resolution for different times of the
series. In addition, we observe a trade-off between spectral fidelity and
perceptual quality of the reconstructed HR images, questioning future
directions for super-resolution of Earth Observation data.
| [
{
"created": "Thu, 25 Apr 2024 08:36:09 GMT",
"version": "v1"
}
] | 2024-04-26 | [
[
"Okabayashi",
"Aimi",
"",
"IRISA, OBELIX"
],
[
"Audebert",
"Nicolas",
"",
"CEDRIC - VERTIGO,\n CNAM, LaSTIG, IGN"
],
[
"Donike",
"Simon",
"",
"IPL"
],
[
"Pelletier",
"Charlotte",
"",
"OBELIX, IRISA"
]
] |
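The abstract mentions a well-designed temporal positional encoding for irregularly sampled Sentinel-2 acquisitions. The paper's exact design is not given in the abstract; a generic sinusoidal encoding over acquisition day, which handles irregular revisit intervals because it depends only on the timestamp, might look like this:

```python
import numpy as np

def temporal_encoding(days, dim=64, max_period=365.25):
    """Sinusoidal positional encoding of acquisition dates.

    `days` are days elapsed since a reference date; they need not be
    evenly spaced, so irregular Sentinel-2 revisits are handled naturally.
    """
    days = np.asarray(days, dtype=float)[:, None]           # (T, 1)
    periods = max_period ** (np.arange(0, dim, 2) / dim)    # (dim/2,) days
    angles = 2.0 * np.pi * days / periods                   # (T, dim/2)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

# Five images acquired at irregular intervals within a year:
enc = temporal_encoding([0, 5, 37, 120, 301], dim=8)
print(enc.shape)  # (5, 8)
```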
2404.16442 | Andreas Fischer | Zineddine Bettouche, Anas Safi, Andreas Fischer | Contextual Categorization Enhancement through LLMs Latent-Space | null | Fifteenth International Conference on Computational Logics,
Algebras, Programming, Tools, and Benchmarking (COMPUTATION TOOLS 2024),
ISSN: 2308-4170 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Managing the semantic quality of the categorization in large textual
datasets, such as Wikipedia, presents significant challenges in terms of
complexity and cost. In this paper, we propose leveraging transformer models to
distill semantic information from texts in the Wikipedia dataset and its
associated categories into a latent space. We then explore different approaches
based on these encodings to assess and enhance the semantic identity of the
categories. Our graphical approach is powered by Convex Hull, while we utilize
Hierarchical Navigable Small Worlds (HNSWs) for the hierarchical approach. As a
solution to the information loss caused by the dimensionality reduction, we
formulate the following mathematical solution: an exponential decay function
driven by the Euclidean distances between the high-dimensional encodings of the
textual categories. This function represents a filter built around a contextual
category and retrieves items with a certain Reconsideration Probability (RP).
Retrieving high-RP items serves as a tool for database administrators to
improve data groupings by providing recommendations and identifying outliers
within a contextual framework.
| [
{
"created": "Thu, 25 Apr 2024 09:20:51 GMT",
"version": "v1"
}
] | 2024-04-26 | [
[
"Bettouche",
"Zineddine",
""
],
[
"Safi",
"Anas",
""
],
[
"Fischer",
"Andreas",
""
]
] |
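The exponential-decay filter described above, a Reconsideration Probability (RP) driven by Euclidean distances to a contextual category's encoding, can be written down directly; the decay rate is an assumed free parameter, and the paper's exact calibration may differ:

```python
import numpy as np

def reconsideration_probability(center, embeddings, decay=1.0):
    """Exponential-decay filter built around a contextual category.

    `center` is the high-dimensional encoding of the contextual category,
    `embeddings` the encodings of candidate items; RP decays with the
    Euclidean distance between them.
    """
    center = np.asarray(center, dtype=float)
    embeddings = np.asarray(embeddings, dtype=float)
    distances = np.linalg.norm(embeddings - center, axis=1)
    return np.exp(-decay * distances)

category = np.zeros(4)
items = np.array([[0.1, 0, 0, 0], [1, 1, 0, 0], [3, 3, 3, 3]])
print(reconsideration_probability(category, items).round(3))
# High RP for nearby items, near zero for outliers; items above an RP
# threshold are surfaced to database administrators as recommendations.
```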
2404.16547 | Giampiero Salvi | Giampiero Salvi | Developing Acoustic Models for Automatic Speech Recognition in Swedish | 16 pages, 7 figures | European Student Journal of Language and Speech, 1999 | null | null | eess.AS cs.AI cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is concerned with automatic continuous speech recognition using
trainable systems. The aim of this work is to build acoustic models for spoken
Swedish. This is done employing hidden Markov models and using the SpeechDat
database to train their parameters. Acoustic modeling has been worked out at a
phonetic level, allowing general speech recognition applications, even though a
simplified task (digits and natural number recognition) has been considered for
model evaluation. Different kinds of phone models have been tested, including
context independent models and two variations of context dependent models.
Furthermore, many experiments have been done with bigram language models to tune
some of the system parameters. System performance over various speaker subsets
with different sex, age and dialect has also been examined. Results are
compared to previous similar studies showing a remarkable improvement.
| [
{
"created": "Thu, 25 Apr 2024 12:03:14 GMT",
"version": "v1"
}
] | 2024-04-26 | [
[
"Salvi",
"Giampiero",
""
]
] |
2404.16558 | Leandro Di Bella | Leandro Di Bella, Yangxintong Lyu, Adrian Munteanu | DeepKalPose: An Enhanced Deep-Learning Kalman Filter for Temporally
Consistent Monocular Vehicle Pose Estimation | 4 pages, 3 figures, published in IET Electronics Letters | Electronics Letters (ISSN: 0013-5194), year: 2024, volume: 60,
number: 8, start page: ? | 10.1049/ell2.13191 | null | cs.CV cs.AI cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper presents DeepKalPose, a novel approach for enhancing temporal
consistency in monocular vehicle pose estimation applied on video through a
deep-learning-based Kalman Filter. By integrating a Bi-directional Kalman
filter strategy utilizing forward and backward time-series processing, combined
with a learnable motion model to represent complex motion patterns, our method
significantly improves pose accuracy and robustness across various conditions,
particularly for occluded or distant vehicles. Experimental validation on the
KITTI dataset confirms that DeepKalPose outperforms existing methods in both
pose accuracy and temporal consistency.
| [
{
"created": "Thu, 25 Apr 2024 12:15:11 GMT",
"version": "v1"
}
] | 2024-04-26 | [
[
"Di Bella",
"Leandro",
""
],
[
"Lyu",
"Yangxintong",
""
],
[
"Munteanu",
"Adrian",
""
]
] |
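DeepKalPose combines forward and backward time-series passes of a Kalman filter with a learnable motion model. Below is a minimal sketch of the bi-directional filtering skeleton only, using a fixed constant-velocity model on a 1D pose signal and a simple average as the fusion step (all simplifications relative to the paper):

```python
import numpy as np

def kalman_1d(zs, q=1e-3, r=1e-1):
    """Constant-velocity Kalman filter over a noisy 1D pose sequence."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q, R = q * np.eye(2), np.array([[r]])
    x, P = np.array([zs[0], 0.0]), np.eye(2)
    out = []
    for z in zs:
        x, P = F @ x, F @ P @ F.T + Q                    # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)              # update state
        P = (np.eye(2) - K @ H) @ P                      # update covariance
        out.append(x[0])
    return np.array(out)

def bidirectional_filter(zs):
    """Fuse forward and backward passes (here: a simple average)."""
    fwd = kalman_1d(zs)
    bwd = kalman_1d(zs[::-1])[::-1]
    return 0.5 * (fwd + bwd)

truth = np.linspace(0, 10, 50)
smooth = bidirectional_filter(truth + 0.3 * np.random.randn(50))
print(float(np.abs(smooth - truth).mean()))  # well below the noise level
```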
2404.16561 | Haonan Wang | Ruiyang Wang, Haonan Wang, Junfeng Sun, Mingjia Zhao, Meng Liu | Research on geometric figure classification algorithm based on Deep
Learning | 6 pages, 9 figures | Scientific Journal of Intelligent Systems Research, Volume 4 Issue
6, 2022 | null | ISSN: 2664-9640 | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, with the rapid development of computer information
technology, artificial intelligence has been developing at an accelerating
pace. Traditional geometry recognition techniques are relatively outdated and
achieve low recognition rates. Faced with massive information databases,
traditional algorithmic models inevitably suffer from low recognition accuracy
and poor performance. Deep learning theory has gradually become a very
important part of machine learning. Convolutional neural networks (CNNs) reduce
the difficulty of implementing graphics recognition algorithms. In this paper,
leveraging the LeNet-5 architecture's advantages of weight sharing and joint
feature extraction and classification, the proposed geometric pattern
recognition model trains faster on the training data set. By sharing the
feature parameters of the model and applying the cross-entropy loss function
during recognition, the approach improves the generalization of the model and
raises the average recognition accuracy on the test data set.
| [
{
"created": "Thu, 25 Apr 2024 12:18:04 GMT",
"version": "v1"
}
] | 2024-04-26 | [
[
"Wang",
"Ruiyang",
""
],
[
"Wang",
"Haonan",
""
],
[
"Sun",
"Junfeng",
""
],
[
"Zhao",
"Mingjia",
""
],
[
"Liu",
"Meng",
""
]
] |
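The abstract leans on LeNet-5-style weight sharing and a cross-entropy loss. A minimal PyTorch sketch of such a classifier follows; the 32x32 grayscale input size and the four shape classes are assumptions, not details from the paper:

```python
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    """LeNet-5-style CNN: shared convolutional weights extract features,
    fully connected layers classify geometric shapes."""
    def __init__(self, n_classes=4):  # e.g. circle/triangle/square/star
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNet5()
criterion = nn.CrossEntropyLoss()  # the cross-entropy loss from the abstract
x = torch.randn(8, 1, 32, 32)      # batch of 32x32 grayscale shape images
loss = criterion(model(x), torch.randint(0, 4, (8,)))
loss.backward()
```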
2404.16954 | Harit Vishwakarma | Harit Vishwakarma, Heguang Lin, Ramya Korlakai Vinayak | Taming False Positives in Out-of-Distribution Detection with Human
Feedback | Appeared in the 27th International Conference on Artificial
Intelligence and Statistics (AISTATS 2024) | PMLR 238:1486-1494, 2024 | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robustness to out-of-distribution (OOD) samples is crucial for safely
deploying machine learning models in the open world. Recent works have focused
on designing scoring functions to quantify OOD uncertainty. Setting appropriate
thresholds for these scoring functions for OOD detection is challenging as OOD
samples are often unavailable up front. Typically, thresholds are set to
achieve a desired true positive rate (TPR), e.g., $95\%$ TPR. However, this can
lead to very high false positive rates (FPR), ranging from 60 to 96\%, as
observed in the Open-OOD benchmark. In safety-critical real-life applications,
e.g., medical diagnosis, controlling the FPR is essential when dealing with
various OOD samples dynamically. To address these challenges, we propose a
mathematically grounded OOD detection framework that leverages expert feedback
to \emph{safely} update the threshold on the fly. We provide theoretical
results showing that it is guaranteed to meet the FPR constraint at all times
while minimizing the use of human feedback. Another key feature of our
framework is that it can work with any scoring function for OOD uncertainty
quantification. Empirical evaluation of our system on synthetic and benchmark
OOD datasets shows that our method can maintain FPR at most $5\%$ while
maximizing TPR.
| [
{
"created": "Thu, 25 Apr 2024 18:06:47 GMT",
"version": "v1"
}
] | 2024-04-29 | [
[
"Vishwakarma",
"Harit",
""
],
[
"Lin",
"Heguang",
""
],
[
"Vinayak",
"Ramya Korlakai",
""
]
] |
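The failure mode motivating the paper, thresholding at 95% TPR and still incurring a high FPR, is easy to reproduce with overlapping score distributions; the Gaussian scores below are synthetic stand-ins:

```python
import numpy as np

def threshold_at_tpr(id_scores, target_tpr=0.95):
    """Pick the score threshold that keeps `target_tpr` of in-distribution
    samples, assuming higher scores indicate in-distribution data."""
    return np.quantile(id_scores, 1.0 - target_tpr)

rng = np.random.default_rng(0)
id_scores = rng.normal(1.0, 1.0, 10_000)    # in-distribution scores
ood_scores = rng.normal(0.0, 1.0, 10_000)   # OOD scores overlap heavily

tau = threshold_at_tpr(id_scores, 0.95)
fpr = (ood_scores >= tau).mean()            # OOD wrongly kept as ID
print(f"threshold={tau:.2f}, FPR at 95% TPR = {fpr:.1%}")
# With overlapping scores the FPR is large, which is exactly the failure
# mode the paper's feedback-driven threshold updates are designed to fix.
```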
2404.17027 | Sudha Rao | Xiangyu Peng, Jessica Quaye, Sudha Rao, Weijia Xu, Portia Botchway,
Chris Brockett, Nebojsa Jojic, Gabriel DesGarennes, Ken Lobb, Michael Xu,
Jorge Leandro, Claire Jin, Bill Dolan | Player-Driven Emergence in LLM-Driven Game Narrative | Accepted at IEEE Conference on Games 2024 | IEEE Conference on Games 2024 | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore how interaction with large language models (LLMs) can give rise to
emergent behaviors, empowering players to participate in the evolution of game
narratives. Our testbed is a text-adventure game in which players attempt to
solve a mystery under a fixed narrative premise, but can freely interact with
non-player characters generated by GPT-4, a large language model. We recruit 28
gamers to play the game and use GPT-4 to automatically convert the game logs
into a node-graph representing the narrative in the player's gameplay. We find
that through their interactions with the non-deterministic behavior of the LLM,
players are able to discover interesting new emergent nodes that were not a
part of the original narrative but have potential for being fun and engaging.
Players that created the most emergent nodes tended to be those that often
enjoy games that facilitate discovery, exploration and experimentation.
| [
{
"created": "Thu, 25 Apr 2024 20:39:44 GMT",
"version": "v1"
},
{
"created": "Thu, 16 May 2024 21:10:03 GMT",
"version": "v2"
},
{
"created": "Mon, 3 Jun 2024 21:27:14 GMT",
"version": "v3"
}
] | 2024-06-05 | [
[
"Peng",
"Xiangyu",
""
],
[
"Quaye",
"Jessica",
""
],
[
"Rao",
"Sudha",
""
],
[
"Xu",
"Weijia",
""
],
[
"Botchway",
"Portia",
""
],
[
"Brockett",
"Chris",
""
],
[
"Jojic",
"Nebojsa",
""
],
[
"DesGarennes",
"Gabriel",
""
],
[
"Lobb",
"Ken",
""
],
[
"Xu",
"Michael",
""
],
[
"Leandro",
"Jorge",
""
],
[
"Jin",
"Claire",
""
],
[
"Dolan",
"Bill",
""
]
] |
2404.17126 | Hai Siong Tan | Hai Siong Tan, Kuancheng Wang, Rafe Mcbeth | Deep Evidential Learning for Radiotherapy Dose Prediction | 28 pages | Computers in Biology and Medicine, Vol. 182, Nov 2024, 109172 | 10.1016/j.compbiomed.2024.109172 | null | cs.LG cs.AI eess.IV physics.med-ph | http://creativecommons.org/licenses/by/4.0/ | In this work, we present a novel application of an uncertainty-quantification
framework called Deep Evidential Learning in the domain of radiotherapy dose
prediction. Using medical images of the Open Knowledge-Based Planning Challenge
dataset, we found that this model can be effectively harnessed to yield
uncertainty estimates that inherited correlations with prediction errors upon
completion of network training. This was achieved only after reformulating the
original loss function for a stable implementation. We found that (i) epistemic
uncertainty was highly correlated with prediction errors, with various
association indices comparable or stronger than those for Monte-Carlo Dropout
and Deep Ensemble methods, (ii) the median error varied with uncertainty
threshold much more linearly for epistemic uncertainty in Deep Evidential
Learning relative to these other two conventional frameworks, indicative of a
more uniformly calibrated sensitivity to model errors, (iii) relative to
epistemic uncertainty, aleatoric uncertainty demonstrated a more significant
shift in its distribution in response to Gaussian noise added to CT intensity,
compatible with its interpretation as reflecting data noise. Collectively, our
results suggest that Deep Evidential Learning is a promising approach that can
endow deep-learning models in radiotherapy dose prediction with statistical
robustness. Towards enhancing its clinical relevance, we demonstrate how we can
use such a model to construct the predicted Dose-Volume-Histograms' confidence
intervals.
| [
{
"created": "Fri, 26 Apr 2024 02:43:45 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Sep 2024 08:45:43 GMT",
"version": "v2"
}
] | 2024-09-24 | [
[
"Tan",
"Hai Siong",
""
],
[
"Wang",
"Kuancheng",
""
],
[
"Mcbeth",
"Rafe",
""
]
] |
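The abstract does not spell out the uncertainty formulas; assuming the common deep evidential regression parameterization with a Normal-Inverse-Gamma head (an assumption about the paper's setup), the aleatoric/epistemic split can be sketched as:

```python
import numpy as np

def evidential_uncertainties(gamma, nu, alpha, beta):
    """Split predictive uncertainty from Normal-Inverse-Gamma evidence.

    A deep evidential regression head outputs (gamma, nu, alpha, beta)
    per voxel; under the usual parameterization, the prediction is gamma,
    the aleatoric variance is beta / (alpha - 1), and the epistemic
    variance is beta / (nu * (alpha - 1)). Requires alpha > 1.
    """
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (nu * (alpha - 1.0))
    return gamma, aleatoric, epistemic

pred, alea, epis = evidential_uncertainties(
    gamma=np.array([2.0]), nu=np.array([5.0]),
    alpha=np.array([3.0]), beta=np.array([1.0]))
print(pred, alea, epis)  # [2.] [0.5] [0.1]
# High-epistemic voxels can then be flagged, since the paper reports a
# strong correlation between epistemic uncertainty and dose errors.
```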
2404.17148 | Xiongjun Guan | Xiongjun Guan, Yongjie Duan, Jianjiang Feng, Jie Zhou | Direct Regression of Distortion Field from a Single Fingerprint Image | null | 2022 IEEE International Joint Conference on Biometrics (IJCB), Abu
Dhabi, United Arab Emirates, 2022, pp. 1-8 | 10.1109/IJCB54206.2022.10007981 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Skin distortion is a long standing challenge in fingerprint matching, which
causes false non-matches. Previous studies have shown that the recognition rate
can be improved by estimating the distortion field from a distorted fingerprint
and then rectifying it into a normal fingerprint. However, existing
rectification methods are based on a principal component representation of
distortion fields, which is inaccurate and very sensitive to finger pose.
In this paper, we propose a rectification method where a self-reference based
network is utilized to directly estimate the dense distortion field of
distorted fingerprint instead of its low dimensional representation. This
method can output accurate distortion fields of distorted fingerprints with
various finger poses. Considering the limited number and variety of distorted
fingerprints in the existing public dataset, we collected more distorted
fingerprints with diverse finger poses and distortion patterns as a new
database. Experimental results demonstrate that our proposed method achieves
the state-of-the-art rectification performance in terms of distortion field
estimation and rectified fingerprint matching.
| [
{
"created": "Fri, 26 Apr 2024 04:35:42 GMT",
"version": "v1"
}
] | 2024-04-29 | [
[
"Guan",
"Xiongjun",
""
],
[
"Duan",
"Yongjie",
""
],
[
"Feng",
"Jianjiang",
""
],
[
"Zhou",
"Jie",
""
]
] |
2404.17149 | Xiongjun Guan | Xiongjun Guan, Jianjiang Feng, Jie Zhou | Pose-Specific 3D Fingerprint Unfolding | null | 15th Chinese Conference on Biometric Recognition (CCBR), Shanghai,
China, 2021, pp. 185-194 | 10.1007/978-3-030-86608-2_21 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to make 3D fingerprints compatible with traditional 2D flat
fingerprints, a common practice is to unfold the 3D fingerprint into a 2D
rolled fingerprint, which is then matched with the flat fingerprints by
traditional 2D fingerprint recognition algorithms. The problem with this method
is that there may be large elastic deformation between the unfolded rolled
fingerprint and flat fingerprint, which affects the recognition rate. In this
paper, we propose a pose-specific 3D fingerprint unfolding algorithm to unfold
the 3D fingerprint using the same pose as the flat fingerprint. Our experiments
show that the proposed unfolding algorithm improves the compatibility between
3D fingerprint and flat fingerprint and thus leads to higher genuine matching
scores.
| [
{
"created": "Fri, 26 Apr 2024 04:44:23 GMT",
"version": "v1"
}
] | 2024-04-29 | [
[
"Guan",
"Xiongjun",
""
],
[
"Feng",
"Jianjiang",
""
],
[
"Zhou",
"Jie",
""
]
] |
2404.17194 | Hailay Kidu Teklehaymanot | Hailay Teklehaymanot, Dren Fazlija, Niloy Ganguly, Gourab K. Patro,
Wolfgang Nejdl | TIGQA:An Expert Annotated Question Answering Dataset in Tigrinya | 9 pages,3 figures, 7 tables,2 listings | LREC-COLING 2024 | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The absence of explicitly tailored, accessible annotated datasets for
educational purposes presents a notable obstacle for NLP tasks in languages
with limited resources. This study initially explores the feasibility of using
machine translation (MT) to convert an existing dataset into a Tigrinya dataset
in SQuAD format. As a result, we present TIGQA, an expert annotated educational
dataset consisting of 2.68K question-answer pairs covering 122 diverse topics
such as climate, water, and traffic. These pairs are from 537 context
paragraphs in publicly accessible Tigrinya and Biology books. Through
comprehensive analyses, we demonstrate that the TIGQA dataset requires skills
beyond simple word matching, requiring both single-sentence and
multiple-sentence inference abilities. We conduct experiments using
state-of-the-art MRC methods, marking the first exploration of such models on
TIGQA. Additionally, we estimate human performance on the dataset and juxtapose
it with the results obtained from pretrained models. The notable disparities
between human performance and best model performance underscore the potential
for further enhancements to TIGQA through continued research. Our dataset is
freely accessible via the provided link to encourage the research community to
address the challenges in the Tigrinya MRC.
| [
{
"created": "Fri, 26 Apr 2024 07:07:43 GMT",
"version": "v1"
}
] | 2024-04-29 | [
[
"Teklehaymanot",
"Hailay",
""
],
[
"Fazlija",
"Dren",
""
],
[
"Ganguly",
"Niloy",
""
],
[
"Patro",
"Gourab K.",
""
],
[
"Nejdl",
"Wolfgang",
""
]
] |
2404.17273 | Xuri Ge | Xuri Ge, Songpei Xu, Fuhai Chen, Jie Wang, Guoxin Wang, Shan An,
Joemon M. Jose | 3SHNet: Boosting Image-Sentence Retrieval via Visual Semantic-Spatial
Self-Highlighting | Accepted Information Processing and Management (IP&M), 10 pages, 9
figures and 8 tables | Information Processing & Management, Volume 61, Issue 4, July
2024, 103716 | 10.1016/j.ipm.2024.103716 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel visual Semantic-Spatial Self-Highlighting
Network (termed 3SHNet) for high-precision, high-efficiency and
high-generalization image-sentence retrieval. 3SHNet highlights the salient
identification of prominent objects and their spatial locations within the
visual modality, thus allowing the integration of visual semantics-spatial
interactions and maintaining independence between two modalities. This
integration effectively combines object regions with the corresponding semantic
and position layouts derived from segmentation to enhance the visual
representation. This modality independence guarantees efficiency and
generalization. Additionally, 3SHNet utilizes the structured contextual visual
scene information from segmentation to conduct the local (region-based) or
global (grid-based) guidance and achieve accurate hybrid-level retrieval.
Extensive experiments conducted on MS-COCO and Flickr30K benchmarks
substantiate the superior performances, inference efficiency and generalization
of the proposed 3SHNet when juxtaposed with contemporary state-of-the-art
methodologies. Specifically, on the larger MS-COCO 5K test set, we achieve
16.3%, 24.8%, and 18.3% improvements in terms of rSum score, respectively,
compared with the state-of-the-art methods using different image
representations, while maintaining optimal retrieval efficiency. Moreover, our
performance on cross-dataset generalization improves by 18.6%. Data and code
are available at https://github.com/XuriGe1995/3SHNet.
| [
{
"created": "Fri, 26 Apr 2024 09:25:18 GMT",
"version": "v1"
}
] | 2024-04-29 | [
[
"Ge",
"Xuri",
""
],
[
"Xu",
"Songpei",
""
],
[
"Chen",
"Fuhai",
""
],
[
"Wang",
"Jie",
""
],
[
"Wang",
"Guoxin",
""
],
[
"An",
"Shan",
""
],
[
"Jose",
"Joemon M.",
""
]
] |
2404.17357 | Yushen Xu | Yushen Xu, Xiaosong Li, Yuchan Jie and Haishu Tan | Simultaneous Tri-Modal Medical Image Fusion and Super-Resolution using
Conditional Diffusion Model | Accepted by MICCAI 2024 | International Conference on Medical Image Computing and
Computer-Assisted Intervention. Cham: Springer Nature Switzerland, 2024:
635-645 | 10.1007/978-3-031-72104-5_61 | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In clinical practice, tri-modal medical image fusion, compared to the
existing dual-modal technique, can provide a more comprehensive view of the
lesions, aiding physicians in evaluating the disease's shape, location, and
biological activity. However, due to the limitations of imaging equipment and
considerations for patient safety, the quality of medical images is usually
limited, leading to sub-optimal fusion performance, and affecting the depth of
image analysis by the physician. Thus, there is an urgent need for a technology
that can both enhance image resolution and integrate multi-modal information.
Although current image processing methods can effectively address image fusion
and super-resolution individually, solving both problems synchronously remains
extremely challenging. In this paper, we propose TFS-Diff, a model that
simultaneously realizes tri-modal medical image fusion and super-resolution.
Specifically, TFS-Diff is based on the diffusion model's generation through a
random iterative denoising process. We also develop a simple objective
function, the proposed fusion super-resolution loss, which effectively
evaluates the uncertainty in the fusion and ensures the stability of the
optimization process. A channel attention module is further proposed to
effectively integrate key information from different modalities for clinical
diagnosis, avoiding information loss caused by multiple image processing steps.
Extensive experiments on public Harvard datasets show that TFS-Diff
significantly surpasses the existing state-of-the-art methods
in both quantitative and visual evaluations. Code is available at
https://github.com/XylonXu01/TFS-Diff.
| [
{
"created": "Fri, 26 Apr 2024 12:13:41 GMT",
"version": "v1"
},
{
"created": "Mon, 13 May 2024 12:19:52 GMT",
"version": "v2"
},
{
"created": "Sat, 14 Sep 2024 02:26:01 GMT",
"version": "v3"
},
{
"created": "Tue, 15 Oct 2024 01:14:50 GMT",
"version": "v4"
}
] | 2024-10-16 | [
[
"Xu",
"Yushen",
""
],
[
"Li",
"Xiaosong",
""
],
[
"Jie",
"Yuchan",
""
],
[
"Tan",
"Haishu",
""
]
] |
2404.17427 | Moussa Kassem Sbeyti | Moussa Kassem Sbeyti, Michelle Karg, Christian Wirth, Nadja Klein,
Sahin Albayrak | Cost-Sensitive Uncertainty-Based Failure Recognition for Object
Detection | Accepted with an oral presentation at UAI 2024 | The 40th Conference on Uncertainty in Artificial Intelligence,
2024, https://openreview.net/forum?id=HuibNFkaoi | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Object detectors in real-world applications often fail to detect objects due
to varying factors such as weather conditions and noisy input. Therefore, a
process that mitigates false detections is crucial for both safety and
accuracy. While uncertainty-based thresholding shows promise, previous works
demonstrate an imperfect correlation between uncertainty and detection errors.
This hinders ideal thresholding, prompting us to further investigate the
correlation and associated cost with different types of uncertainty. We
therefore propose a cost-sensitive framework for object detection tailored to
user-defined budgets on the two types of errors, missing and false detections.
We derive minimum thresholding requirements to prevent performance degradation
and define metrics to assess the applicability of uncertainty for failure
recognition. Furthermore, we automate and optimize the thresholding process to
maximize the failure recognition rate w.r.t. the specified budget. Evaluation
on three autonomous driving datasets demonstrates that our approach
significantly enhances safety, particularly in challenging scenarios.
Leveraging localization aleatoric uncertainty and softmax-based entropy only,
our method boosts the failure recognition rate by 36-60\% compared to
conventional approaches. Code is available at
https://mos-ks.github.io/publications.
| [
{
"created": "Fri, 26 Apr 2024 14:03:55 GMT",
"version": "v1"
}
] | 2024-06-14 | [
[
"Sbeyti",
"Moussa Kassem",
""
],
[
"Karg",
"Michelle",
""
],
[
"Wirth",
"Christian",
""
],
[
"Klein",
"Nadja",
""
],
[
"Albayrak",
"Sahin",
""
]
] |
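The thresholding step this abstract describes, maximizing failure recognition under a user-defined budget, can be illustrated with a simplified sweep; the paper's actual optimization and budget definitions may differ:

```python
import numpy as np

def best_threshold(unc, is_error, max_false_flag_rate=0.1):
    """Sweep uncertainty thresholds; flag detections with unc >= tau.

    Maximizes the failure recognition rate (recall of erroneous
    detections) subject to a budget on falsely flagged correct
    detections. A simplified stand-in for the paper's optimization.
    """
    best_tau, best_recall = None, -1.0
    for tau in np.unique(unc):
        flagged = unc >= tau
        false_flag = (flagged & ~is_error).mean()
        recall = (flagged & is_error).sum() / max(is_error.sum(), 1)
        if false_flag <= max_false_flag_rate and recall > best_recall:
            best_tau, best_recall = tau, recall
    return best_tau, best_recall

rng = np.random.default_rng(1)
is_error = rng.random(5000) < 0.2                 # 20% failed detections
unc = rng.normal(loc=np.where(is_error, 1.0, 0.0), scale=0.7)
print(best_threshold(unc, is_error, max_false_flag_rate=0.05))
```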
2404.17475 | Van Bach Nguyen | Van Bach Nguyen, J\"org Schl\"otterer, Christin Seifert | CEval: A Benchmark for Evaluating Counterfactual Text Generation | null | INLG 2024 | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Counterfactual text generation aims to minimally change a text, such that it
is classified differently. Judging advancements in method development for
counterfactual text generation is hindered by a non-uniform usage of data sets
and metrics in related work. We propose CEval, a benchmark for comparing
counterfactual text generation methods. CEval unifies counterfactual and text
quality metrics, includes common counterfactual datasets with human
annotations, standard baselines (MICE, GDBA, CREST) and the open-source
language model LLAMA-2. Our experiments found no perfect method for generating
counterfactual text. Methods that excel at counterfactual metrics often produce
lower-quality text while LLMs with simple prompts generate high-quality text
but struggle with counterfactual criteria. By making CEval available as an
open-source Python library, we encourage the community to contribute more
methods and maintain consistent evaluation in future work.
| [
{
"created": "Fri, 26 Apr 2024 15:23:47 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Aug 2024 07:39:59 GMT",
"version": "v2"
}
] | 2024-08-14 | [
[
"Nguyen",
"Van Bach",
""
],
[
"Schlötterer",
"Jörg",
""
],
[
"Seifert",
"Christin",
""
]
] |
2404.17488 | Ingeborg Beckers | Danja Brandt and Martin Tschaikner, Teodor Chiaburu, Henning Schmidt,
Ilona Schrimpf, Alexandra Stadel and Ingeborg E. Beckers, Frank Hau{\ss}er | Low Cost Machine Vision for Insect Classification | null | Arai, K. (eds) Intelligent Systems and Applications. IntelliSys
2023. Lecture Notes in Networks and Systems, vol 824. Springer | 10.1007/978-3-031-47715-7_2 | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Preserving the number and diversity of insects is one of our society's most
important goals in the area of environmental sustainability. A prerequisite for
this is a systematic and up-scaled monitoring in order to detect correlations
and identify countermeasures. Therefore, automated monitoring using live
traps is important, but so far there is no system that provides image data
with sufficiently detailed information for entomological classification.
In this work, we present an imaging method as part of a multisensor system
developed as a low-cost, scalable, open-source system that is adaptable to
classical trap types. The image quality meets the requirements needed for
classification in the taxonomic tree. Therefore, illumination and resolution
have been optimized and motion artefacts have been suppressed. The system is
evaluated exemplarily on a dataset consisting of 16 insect species of the same
as well as different genus, family and order. We demonstrate that standard
CNN-architectures like ResNet50 (pretrained on iNaturalist data) or MobileNet
perform very well for the prediction task after re-training. Smaller
custom-made CNNs also lead to promising results. Classification accuracy of
$>96\%$ has been achieved. Moreover, we show that cropping the insect images is
necessary for classifying species with high inter-class similarity.
| [
{
"created": "Fri, 26 Apr 2024 15:43:24 GMT",
"version": "v1"
}
] | 2024-04-29 | [
[
"Brandt",
"Danja",
""
],
[
"Tschaikner",
"Martin",
""
],
[
"Chiaburu",
"Teodor",
""
],
[
"Schmidt",
"Henning",
""
],
[
"Schrimpf",
"Ilona",
""
],
[
"Stadel",
"Alexandra",
""
],
[
"Beckers",
"Ingeborg E.",
""
],
[
"Haußer",
"Frank",
""
]
] |
2404.17552 | David Doukhan | R\'emi Uro, David Doukhan, Albert Rilliard, La\"etitia Larcher,
Anissa-Claire Adgharouamane, Marie Tahon, Antoine Laurent | A Semi-Automatic Approach to Create Large Gender- and Age-Balanced
Speaker Corpora: Usefulness of Speaker Diarization & Identification | Keywords: semi-automatic processing, corpus creation, diarization,
speaker identification, gender-balanced, age-balanced, speaker corpus,
diachrony | Proceedings of the 13th Conference on Language Resources and
Evaluation (LREC 2022), pages 3271-3280, Marseille, 20-25 June 2022. European
Language Resources Association (ELRA) | null | null | eess.AS cs.CL cs.DL cs.LG cs.SD | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper presents a semi-automatic approach to create a diachronic corpus
of voices balanced for speaker's age, gender, and recording period, according
to 32 categories (2 genders, 4 age ranges and 4 recording periods). Corpora
were selected at the French National Institute of Audiovisual (INA) to obtain
at least 30 speakers per category (a total of 960 speakers; only 874 have been
found so far). For each speaker, speech excerpts were extracted from audiovisual
documents using an automatic pipeline consisting of speech detection,
background music and overlapped speech removal and speaker diarization, used to
present clean speaker segments to human annotators identifying target speakers.
This pipeline proved highly effective, cutting down manual processing by a
factor of ten. Evaluation of the quality of the automatic processing and of the
final output is provided. It shows that the automatic processing compares well
with up-to-date processes, and that the output provides high-quality speech for most
of the selected excerpts. This method shows promise for creating large corpora
of known target speakers.
| [
{
"created": "Fri, 26 Apr 2024 17:30:36 GMT",
"version": "v1"
}
] | 2024-04-29 | [
[
"Uro",
"Rémi",
""
],
[
"Doukhan",
"David",
""
],
[
"Rilliard",
"Albert",
""
],
[
"Larcher",
"Laëtitia",
""
],
[
"Adgharouamane",
"Anissa-Claire",
""
],
[
"Tahon",
"Marie",
""
],
[
"Laurent",
"Antoine",
""
]
] |
2404.17591 | Peibo Li | Peibo Li, Maarten de Rijke, Hao Xue, Shuang Ao, Yang Song and Flora D.
Salim | Large Language Models for Next Point-of-Interest Recommendation | null | In Proceedings of the 47th International ACM SIGIR Conference on
Research and Development in Information Retrieval, SIGIR 2024, Association
for Computing Machinery, New York, NY, USA, 1463-1472 | 10.1145/3626772.3657840 | null | cs.IR cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The next Point of Interest (POI) recommendation task is to predict users'
immediate next POI visit given their historical data. Location-Based Social
Network (LBSN) data, which is often used for the next POI recommendation task,
comes with challenges. One frequently disregarded challenge is how to
effectively use the abundant contextual information present in LBSN data.
Previous methods are limited by their numerical nature and fail to address this
challenge. In this paper, we propose a framework that uses pretrained Large
Language Models (LLMs) to tackle this challenge. Our framework allows us to
preserve heterogeneous LBSN data in its original format, hence avoiding the
loss of contextual information. Furthermore, our framework is capable of
comprehending the inherent meaning of contextual information due to the
inclusion of commonsense knowledge. In experiments, we test our framework on
three real-world LBSN datasets. Our results show that the proposed framework
outperforms the state-of-the-art models in all three datasets. Our analysis
demonstrates the effectiveness of the proposed framework in using contextual
information as well as alleviating the commonly encountered cold-start and
short trajectory problems.
| [
{
"created": "Fri, 19 Apr 2024 13:28:36 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Aug 2024 08:54:15 GMT",
"version": "v2"
}
] | 2024-08-02 | [
[
"Li",
"Peibo",
""
],
[
"de Rijke",
"Maarten",
""
],
[
"Xue",
"Hao",
""
],
[
"Ao",
"Shuang",
""
],
[
"Song",
"Yang",
""
],
[
"Salim",
"Flora D.",
""
]
] |
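The framework above keeps heterogeneous LBSN context in textual form for the LLM rather than as numeric IDs. A hypothetical prompt builder illustrating the idea; field names and template wording are invented for illustration, not taken from the paper:

```python
def build_poi_prompt(history, candidates):
    """Serialize an LBSN check-in trajectory into a text prompt.

    Keeps heterogeneous context (time, venue name, category) in its
    original textual form so the LLM can use commonsense knowledge.
    """
    lines = ["Here is a user's recent check-in history:"]
    for when, name, category in history:
        lines.append(f"- At {when}, visited {name} (category: {category}).")
    lines.append("Candidate next POIs: " + ", ".join(candidates) + ".")
    lines.append("Which POI will the user most likely visit next, and why?")
    return "\n".join(lines)

prompt = build_poi_prompt(
    history=[("Sat 09:10", "Blue Bottle Coffee", "cafe"),
             ("Sat 10:30", "City Gym", "fitness")],
    candidates=["Riverside Park", "Whole Foods", "Cinema 12"],
)
print(prompt)
```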
2404.17593 | Sefika Efeoglu | Sefika Efeoglu | A Continual Relation Extraction Approach for Knowledge Graph
Completeness | Published at TPDL 2022 | TPDL 2022: 26th International Conference on Theory and Practice of
Digital Libraries, 20-23 September 2022, Padua, Italy | null | null | cs.DL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Representing unstructured data in a structured form is most significant for
information system management to analyze and interpret it. To do this, the
unstructured data might be converted into Knowledge Graphs, by leveraging an
information extraction pipeline whose main tasks are named entity recognition
and relation extraction. This thesis aims to develop a novel continual relation
extraction method to identify relations (interconnections) between entities in
a data stream coming from the real world. Domain-specific data of this thesis
is corona news from German and Austrian newspapers.
| [
{
"created": "Sat, 20 Apr 2024 18:15:52 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Efeoglu",
"Sefika",
""
]
] |
2404.17610 | Xiongjun Guan | Xiongjun Guan, Yongjie Duan, Jianjiang Feng, Jie Zhou | Regression of Dense Distortion Field from a Single Fingerprint Image | arXiv admin note: text overlap with arXiv:2404.17148 | IEEE Transactions on Information Forensics and Security, vol. 18,
pp. 4377-4390, 2023 | 10.1109/TIFS.2023.3296310 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Skin distortion is a long-standing challenge in fingerprint matching, which
causes false non-matches. Previous studies have shown that the recognition rate
can be improved by estimating the distortion field from a distorted fingerprint
and then rectifying it into a normal fingerprint. However, existing
rectification methods are based on a principal component representation of
distortion fields, which is inaccurate and very sensitive to finger pose.
In this paper, we propose a rectification method where a self-reference based
network is utilized to directly estimate the dense distortion field of
distorted fingerprint instead of its low dimensional representation. This
method can output accurate distortion fields of distorted fingerprints with
various finger poses and distortion patterns. We conducted experiments on
FVC2004 DB1\_A, expanded Tsinghua Distorted Fingerprint database (with
additional distorted fingerprints in diverse finger poses and distortion
patterns) and a latent fingerprint database. Experimental results demonstrate
that our proposed method achieves the state-of-the-art rectification
performance in terms of distortion field estimation and rectified fingerprint
matching.
| [
{
"created": "Fri, 26 Apr 2024 05:00:51 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Guan",
"Xiongjun",
""
],
[
"Duan",
"Yongjie",
""
],
[
"Feng",
"Jianjiang",
""
],
[
"Zhou",
"Jie",
""
]
] |
2404.17617 | Yuhang Zhang | Tao Liu, Yuhang Zhang, Zhu Feng, Zhiqin Yang, Chen Xu, Dapeng Man, Wu
Yang | Beyond Traditional Threats: A Persistent Backdoor Attack on Federated
Learning | null | Proceedings of the AAAI Conference on Artificial Intelligence.
2024, 38(19): 21359-21367 | null | null | cs.CR cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Backdoors on federated learning will be diluted by subsequent benign updates.
This is reflected in the significant reduction of attack success rate as
iterations increase, ultimately failing. We use a new metric to quantify the
degree of this weakened backdoor effect, called attack persistence. Given that
research to improve this performance has not been widely noted,we propose a
Full Combination Backdoor Attack (FCBA) method. It aggregates more combined
trigger information for a more complete backdoor pattern in the global model.
Trained backdoored global model is more resilient to benign updates, leading to
a higher attack success rate on the test set. We test on three datasets and
evaluate with two models across various settings. FCBA's persistence
outperforms SOTA federated learning backdoor attacks. On GTSRB, postattack 120
rounds, our attack success rate rose over 50% from baseline. The core code of
our method is available at https://github.com/PhD-TaoLiu/FCBA.
| [
{
"created": "Fri, 26 Apr 2024 11:47:36 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Liu",
"Tao",
""
],
[
"Zhang",
"Yuhang",
""
],
[
"Feng",
"Zhu",
""
],
[
"Yang",
"Zhiqin",
""
],
[
"Xu",
"Chen",
""
],
[
"Man",
"Dapeng",
""
],
[
"Yang",
"Wu",
""
]
] |
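FCBA's core idea as stated in the abstract, aggregating combined trigger information, implies enumerating combinations of sub-triggers for poisoning. A sketch with hypothetical patch names (contents and placement are illustrative, not the paper's configuration):

```python
from itertools import combinations

def full_combination_triggers(base_triggers):
    """Enumerate every non-empty combination of local trigger patches.

    Poisoning with all combinations of sub-triggers lets the global model
    learn a more complete, update-resilient backdoor pattern.
    """
    combos = []
    for r in range(1, len(base_triggers) + 1):
        combos.extend(combinations(base_triggers, r))
    return combos

patches = ["top_left", "top_right", "bottom_left", "bottom_right"]
combos = full_combination_triggers(patches)
print(len(combos))   # 2^4 - 1 = 15 poisoned trigger variants
print(combos[:3])    # single-patch variants come first
```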
2404.17820 | Yuchun Wang | Yuchun Wang and Cheng Gong and Jianwei Gong and Peng Jia | Motion planning for off-road autonomous driving based on human-like
cognition and weight adaptation | null | Journal of Field Robotics,2024,1-22 | 10.1002/rob.22345 | null | cs.RO cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Driving in an off-road environment is challenging for autonomous vehicles due
to the complex and varied terrain. To ensure stable and efficient travel, the
vehicle requires consideration and balancing of environmental factors, such as
undulations, roughness, and obstacles, to generate optimal trajectories that
can adapt to changing scenarios. However, traditional motion planners often
utilize a fixed cost function for trajectory optimization, making it difficult
to adapt to different driving strategies in challenging irregular terrains and
uncommon scenarios. To address these issues, we propose an adaptive motion
planner based on human-like cognition and cost evaluation for off-road driving.
First, we construct a multi-layer map describing different features of off-road
terrains, including terrain elevation, roughness, obstacle, and artificial
potential field map. Subsequently, we employ a CNN-LSTM network to learn the
trajectories planned by human drivers in various off-road scenarios. Then,
based on human-like generated trajectories in different environments, we design
a primitive-based trajectory planner that aims to mimic human trajectories and
cost weight selection, generating trajectories that are consistent with the
dynamics of off-road vehicles. Finally, we compute optimal cost weights and
select and extend behavioral primitives to generate highly adaptive, stable,
and efficient trajectories.
We validate the effectiveness of the proposed method through experiments in a
desert off-road environment with complex terrain and varying road conditions.
The experimental results show that the proposed human-like motion planner has
excellent adaptability to different off-road conditions. It shows real-time
operation, greater stability, and more human-like planning ability in diverse
and challenging scenarios.
| [
{
"created": "Sat, 27 Apr 2024 08:00:35 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Wang",
"Yuchun",
""
],
[
"Gong",
"Cheng",
""
],
[
"Gong",
"Jianwei",
""
],
[
"Jia",
"Peng",
""
]
] |
2404.17861 | Yuval Haitman | Yuval Haitman, Oded Bialer | BoostRad: Enhancing Object Detection by Boosting Radar Reflections | WACV2024 | 2024 IEEE/CVF Winter Conference on Applications of Computer Vision
(WACV), Waikoloa, HI, USA, 2024, pp. 1627-1636 | 10.1109/WACV57701.2024.00166 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Automotive radars have an important role in autonomous driving systems. The
main challenge in automotive radar detection is the radar's wide point spread
function (PSF) in the angular domain that causes blurriness and clutter in the
radar image. Numerous studies suggest employing an 'end-to-end' learning
strategy using a Deep Neural Network (DNN) to directly detect objects from
radar images. This approach implicitly addresses the PSF's impact on objects of
interest. In this paper, we propose an alternative approach, which we term
"Boosting Radar Reflections" (BoostRad). In BoostRad, a first DNN is trained to
narrow the PSF for all the reflection points in the scene. The output of the
first DNN is a boosted reflection image with higher resolution and reduced
clutter, resulting in a sharper and cleaner image. Subsequently, a second DNN
is employed to detect objects within the boosted reflection image. We develop a
novel method for training the boosting DNN that incorporates domain knowledge
of radar's PSF characteristics. BoostRad's performance is evaluated using the
RADDet and CARRADA datasets, revealing its superiority over reference methods.
| [
{
"created": "Sat, 27 Apr 2024 10:40:52 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Haitman",
"Yuval",
""
],
[
"Bialer",
"Oded",
""
]
] |
2404.17865 | Hao Sun | Zitong Zhang, Yang Liu, Hao Sun | Vision-based Discovery of Nonlinear Dynamics for 3D Moving Target | 17 pages | IJCAI 2024 | null | null | cs.CV cs.AI nlin.CD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data-driven discovery of governing equations has kindled significant
interests in many science and engineering areas. Existing studies primarily
focus on uncovering equations that govern nonlinear dynamics based on direct
measurement of the system states (e.g., trajectories). Limited efforts have
been placed on distilling governing laws of dynamics directly from videos for
moving targets in a 3D space. To this end, we propose a vision-based approach
to automatically uncover governing equations of nonlinear dynamics for 3D
moving targets via raw videos recorded by a set of cameras. The approach is
composed of three key blocks: (1) a target tracking module that extracts plane
pixel motions of the moving target in each video, (2) a Rodrigues' rotation
formula-based coordinate transformation learning module that reconstructs the
3D coordinates with respect to a predefined reference point, and (3) a
spline-enhanced library-based sparse regressor that uncovers the underlying
governing law of dynamics. This framework is capable of effectively handling
the challenges associated with measurement data, e.g., noise in the video,
imprecise tracking of the target that causes data missing, etc. The efficacy of
our method has been demonstrated through multiple sets of synthetic videos
considering different nonlinear dynamics.
| [
{
"created": "Sat, 27 Apr 2024 11:13:55 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Zhang",
"Zitong",
""
],
[
"Liu",
"Yang",
""
],
[
"Sun",
"Hao",
""
]
] |
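The coordinate transformation module above is built on Rodrigues' rotation formula, which is standard and can be stated directly; the paper learns such a transformation from video, and this sketch shows only the underlying formula:

```python
import numpy as np

def rodrigues_rotate(v, axis, theta):
    """Rotate vector `v` about unit `axis` by angle `theta` (radians)
    using Rodrigues' rotation formula:
        v' = v cos(theta) + (k x v) sin(theta) + k (k . v)(1 - cos(theta))
    """
    k = np.asarray(axis, dtype=float)
    k = k / np.linalg.norm(k)
    v = np.asarray(v, dtype=float)
    return (v * np.cos(theta)
            + np.cross(k, v) * np.sin(theta)
            + k * (k @ v) * (1.0 - np.cos(theta)))

# Rotating the x-axis by 90 degrees about z gives the y-axis:
print(rodrigues_rotate([1, 0, 0], [0, 0, 1], np.pi / 2).round(6))
```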
2404.17877 | Yubo Feng | Yubo Feng, Lishuang Li, Yi Xiang, Xueyang Qin | PromptCL: Improving Event Representation via Prompt Template and
Contrastive Learning | NLPCC 2023 Best Student Paper | Natural Language Processing and Chinese Computing (NLPCC 2023) | 10.1007/978-3-031-44693-1_21 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The representation of events in text plays a significant role in various NLP
tasks. Recent research demonstrates that contrastive learning has the ability
to improve event comprehension capabilities of Pre-trained Language Models
(PLMs) and enhance the performance of event representation learning. However,
the efficacy of event representation learning based on contrastive learning and
PLMs is limited by the short length of event texts. The length of event texts
differs significantly from the text length used in the pre-training of PLMs. As
a result, there is inconsistency in the distribution of text length between
pre-training and event representation learning, which may undermine the
learning process of event representation based on PLMs. In this study, we
present PromptCL, a novel framework for event representation learning that
effectively elicits the capabilities of PLMs to comprehensively capture the
semantics of short event texts. PromptCL utilizes a Prompt template borrowed
from prompt learning to expand the input text during Contrastive Learning. This
helps in enhancing the event representation learning by providing a structured
outline of the event components. Moreover, we propose Subject-Predicate-Object
(SPO) word order and Event-oriented Masked Language Modeling (EventMLM) to
train PLMs to understand the relationships between event components. Our
experimental results demonstrate that PromptCL outperforms state-of-the-art
baselines on event related tasks. Additionally, we conduct a thorough analysis
and demonstrate that using a prompt results in improved generalization
capabilities for event representations. Our code will be available at
https://github.com/YuboFeng2023/PromptCL.
| [
{
"created": "Sat, 27 Apr 2024 12:22:43 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Feng",
"Yubo",
""
],
[
"Li",
"Lishuang",
""
],
[
"Xiang",
"Yi",
""
],
[
"Qin",
"Xueyang",
""
]
] |
2404.17892 | Lindsey Kerbel | Lindsey Kerbel, Beshah Ayalew, Andrej Ivanco | Shared learning of powertrain control policies for vehicle fleets | null | Elsevier Applied Energy Volume 365, 1 July 2024, 123217 | 10.1016/j.apenergy.2024.123217 | null | eess.SY cs.AI cs.LG cs.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Emerging data-driven approaches, such as deep reinforcement learning (DRL),
aim at on-the-field learning of powertrain control policies that optimize fuel
economy and other performance metrics. Indeed, they have shown great potential
in this regard for individual vehicles on specific routes or drive cycles.
However, for fleets of vehicles that must service a distribution of routes, DRL
approaches struggle with learning stability issues that result in high
variances and challenge their practical deployment. In this paper, we present a
novel framework for shared learning among a fleet of vehicles through the use
of a distilled group policy as the knowledge sharing mechanism for the policy
learning computations at each vehicle. We detail the mathematical formulation
that makes this possible. Several scenarios are considered to analyze the
functionality, performance, and computational scalability of the framework with
fleet size. Comparisons of the cumulative performance of fleets using our
proposed shared learning approach with a baseline of individual learning agents
and another state-of-the-art approach with a centralized learner show clear
advantages to our approach. For example, we find a fleet average asymptotic
improvement of 8.5 percent in fuel economy compared to the baseline while also
improving on the metrics of acceleration error and shifting frequency for
fleets serving a distribution of suburban routes. Furthermore, we include
demonstrative results that show how the framework reduces variance within a
fleet and also how it helps individual agents adapt better to new routes.
| [
{
"created": "Sat, 27 Apr 2024 13:01:05 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Kerbel",
"Lindsey",
""
],
[
"Ayalew",
"Beshah",
""
],
[
"Ivanco",
"Andrej",
""
]
] |
2404.17900 | Baihong Lin | Di Wu, Shicai Fan, Xue Zhou, Li Yu, Yuzhong Deng, Jianxiao Zou,
Baihong Lin | Unsupervised Anomaly Detection via Masked Diffusion Posterior Sampling | null | International Joint Conference on Artificial Intelligence 2024 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reconstruction-based methods have been commonly used for unsupervised anomaly
detection, in which a normal image is reconstructed and compared with the given
test image to detect and locate anomalies. Recently, diffusion models have
shown promising applications for anomaly detection due to their powerful
generative ability. However, these models lack strict mathematical support for
normal image reconstruction and unexpectedly suffer from low reconstruction
quality. To address these issues, this paper proposes a novel and
highly-interpretable method named Masked Diffusion Posterior Sampling (MDPS).
In MDPS, the problem of normal image reconstruction is mathematically modeled
as multiple diffusion posterior sampling for normal images based on the devised
masked noisy observation model and the diffusion-based normal image prior under
Bayesian framework. Using a metric designed from pixel-level and
perceptual-level perspectives, MDPS can effectively compute the difference map
between each normal posterior sample and the given test image. Anomaly scores
are obtained by averaging all difference maps for multiple posterior samples.
Exhaustive experiments on MVTec and BTAD datasets demonstrate that MDPS can
achieve state-of-the-art performance in normal image reconstruction quality as
well as anomaly detection and localization.
| [
{
"created": "Sat, 27 Apr 2024 13:13:27 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Wu",
"Di",
""
],
[
"Fan",
"Shicai",
""
],
[
"Zhou",
"Xue",
""
],
[
"Yu",
"Li",
""
],
[
"Deng",
"Yuzhong",
""
],
[
"Zou",
"Jianxiao",
""
],
[
"Lin",
"Baihong",
""
]
] |
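MDPS obtains anomaly scores by averaging difference maps over multiple posterior samples. A simplified pixel-level sketch follows; the paper additionally mixes in a perceptual-level term, which is only stubbed here:

```python
import numpy as np

def anomaly_map(test_img, posterior_samples, perceptual=None):
    """Average pixel-level difference maps over multiple posterior samples.

    `posterior_samples` are normal-image reconstructions drawn by masked
    diffusion posterior sampling; the anomaly score at each pixel is the
    mean absolute difference across samples.
    """
    diffs = [np.abs(test_img - s) for s in posterior_samples]
    score = np.mean(diffs, axis=0)
    if perceptual is not None:
        score = score + perceptual(test_img, posterior_samples)
    return score

rng = np.random.default_rng(0)
normal = rng.random((8, 8))
test = normal.copy()
test[2:4, 2:4] += 0.8                       # inject a local anomaly
samples = [normal + 0.02 * rng.standard_normal((8, 8)) for _ in range(10)]
score = anomaly_map(test, samples)
print(score[2:4, 2:4].mean() > score.mean())  # True: anomaly stands out
```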
2404.17922 | Laksh Nanwani | Laksh Nanwani, Kumaraditya Gupta, Aditya Mathur, Swayam Agrawal, A.H.
Abdul Hafez, K. Madhava Krishna | Open-Set 3D Semantic Instance Maps for Vision Language Navigation --
O3D-SIM | null | Advanced Robotics - Taylor and Francis - 2024 | 10.1080/01691864.2024.2395926 | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-sa/4.0/ | Humans excel at forming mental maps of their surroundings, equipping them to
understand object relationships and navigate based on language queries. Our
previous work SI Maps [1] showed that having instance-level information and the
semantic understanding of an environment helps significantly improve
performance for language-guided tasks. We extend this instance-level approach
to 3D while increasing the pipeline's robustness and improving quantitative and
qualitative results. Our method leverages foundational models for object
recognition, image segmentation, and feature extraction. We propose a
representation that results in a 3D point cloud map with instance-level
embeddings, which bring in the semantic understanding that natural language
commands can query. Quantitatively, the work improves upon the success rate of
language-guided tasks. At the same time, we qualitatively observe the ability
to identify instances more clearly and leverage the foundational models and
language and image-aligned embeddings to identify objects that, otherwise, a
closed-set approach wouldn't be able to identify.
| [
{
"created": "Sat, 27 Apr 2024 14:20:46 GMT",
"version": "v1"
}
] | 2024-08-30 | [
[
"Nanwani",
"Laksh",
""
],
[
"Gupta",
"Kumaraditya",
""
],
[
"Mathur",
"Aditya",
""
],
[
"Agrawal",
"Swayam",
""
],
[
"Hafez",
"A. H. Abdul",
""
],
[
"Krishna",
"K. Madhava",
""
]
] |
2404.18094 | Wenbin Wang | Wenbin Wang, Yang Song, Sanjay Jha | USAT: A Universal Speaker-Adaptive Text-to-Speech Approach | 15 pages, 13 figures. Copyright has been transferred to IEEE | IEEE/ACM Transactions on Audio, Speech and Language Processing,
2024 | 10.1109/TASLP.2024.3393714 | null | cs.SD cs.AI cs.CL eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conventional text-to-speech (TTS) research has predominantly focused on
enhancing the quality of synthesized speech for speakers in the training
dataset. The challenge of synthesizing lifelike speech for unseen,
out-of-dataset speakers, especially those with limited reference data, remains
a significant and unresolved problem. While zero-shot or few-shot
speaker-adaptive TTS approaches have been explored, they have many limitations.
Zero-shot approaches tend to suffer from insufficient generalization
performance to reproduce the voice of speakers with heavy accents. While
few-shot methods can reproduce highly varying accents, they bring a significant
storage burden and the risk of overfitting and catastrophic forgetting. In
addition, prior approaches only provide either zero-shot or few-shot
adaptation, constraining their utility across varied real-world scenarios with
different demands. Besides, most current evaluations of speaker-adaptive TTS
are conducted only on datasets of native speakers, inadvertently neglecting a
vast portion of non-native speakers with diverse accents. Our proposed
framework unifies both zero-shot and few-shot speaker adaptation strategies,
which we term as "instant" and "fine-grained" adaptations based on their
merits. To alleviate the insufficient generalization performance observed in
zero-shot speaker adaptation, we designed two innovative discriminators and
introduced a memory mechanism for the speech decoder. To prevent catastrophic
forgetting and reduce storage implications for few-shot speaker adaptation, we
designed two adapters and a unique adaptation procedure.
| [
{
"created": "Sun, 28 Apr 2024 06:50:55 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Wang",
"Wenbin",
""
],
[
"Song",
"Yang",
""
],
[
"Jha",
"Sanjay",
""
]
] |
2404.18183 | Shuochen Bi | Shuochen Bi, Wenqing Bao | Innovative Application of Artificial Intelligence Technology in Bank
Credit Risk Management | 6 pages, 1 figure, 2 tables | International Journal of Global Economics and Management ISSN:
3005-9690 (Print), ISSN: 3005-8090 (Online) | Volume 2, Number 3, Year 2024 | 10.62051/IJGEM.v2n3.08 | null | q-fin.RM cs.AI | http://creativecommons.org/licenses/by/4.0/ | With the rapid growth of technology, especially the widespread application of
artificial intelligence (AI) technology, the risk management level of
commercial banks is constantly reaching new heights. In the current wave of
digitalization, AI has become a key driving force for the strategic
transformation of financial institutions, especially the banking industry. For
commercial banks, the stability and safety of asset quality are crucial, which
directly relates to the long-term stable growth of the bank. Among them, credit
risk management is particularly core because it involves the flow of a large
amount of funds and the accuracy of credit decisions. Therefore, establishing a
scientific and effective credit risk decision-making mechanism is of great
strategic significance for commercial banks. In this context, the innovative
application of AI technology has brought revolutionary changes to bank credit
risk management. Through deep learning and big data analysis, AI can accurately
evaluate the credit status of borrowers, timely identify potential risks, and
provide banks with more accurate and comprehensive credit decision support. At
the same time, AI can also enable real-time monitoring and early warning,
helping banks intervene before risks occur and reduce losses.
| [
{
"created": "Sun, 28 Apr 2024 13:29:35 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Bi",
"Shuochen",
""
],
[
"Bao",
"Wenqing",
""
]
] |
2404.18206 | Cuiwei Liu | Cuiwei Liu, Youzhi Jiang, Chong Du, and Zhaokui Li | Enhancing Action Recognition from Low-Quality Skeleton Data via
Part-Level Knowledge Distillation | null | published in Signal Processing 2024 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Skeleton-based action recognition is vital for comprehending human-centric
videos and has applications in diverse domains. One of the challenges of
skeleton-based action recognition is dealing with low-quality data, such as
skeletons that have missing or inaccurate joints. This paper addresses the
issue of enhancing action recognition using low-quality skeletons through a
general knowledge distillation framework. The proposed framework employs a
teacher-student model setup, where a teacher model trained on high-quality
skeletons guides the learning of a student model that handles low-quality
skeletons. To bridge the gap between heterogeneous high-quality and lowquality
skeletons, we present a novel part-based skeleton matching strategy, which
exploits shared body parts to facilitate local action pattern learning. An
action-specific part matrix is developed to emphasize critical parts for
different actions, enabling the student model to distill discriminative
part-level knowledge. A novel part-level multi-sample contrastive loss achieves
knowledge transfer from multiple high-quality skeletons to low-quality ones,
which enables the proposed knowledge distillation framework to also train on
low-quality skeletons that lack corresponding high-quality matches.
Comprehensive experiments conducted on the NTU-RGB+D, Penn Action, and SYSU 3D
HOI datasets demonstrate the effectiveness of the proposed knowledge
distillation framework.
| [
{
"created": "Sun, 28 Apr 2024 14:58:54 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Liu",
"Cuiwei",
""
],
[
"Jiang",
"Youzhi",
""
],
[
"Du",
"Chong",
""
],
[
"Li",
"Zhaokui",
""
]
] |
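To make the part-level multi-sample contrastive loss concrete, here is a minimal sketch assuming an InfoNCE-style objective with several high-quality positives per low-quality part embedding; shapes, temperature, and the sampling scheme are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def multi_positive_contrastive(student_part, teacher_pos, teacher_neg, tau=0.1):
    """Pull a low-quality part embedding (D,) toward multiple high-quality
    embeddings of the same part/action (P, D); push it from negatives (N, D)."""
    s = F.normalize(student_part, dim=-1)
    pos = F.normalize(teacher_pos, dim=-1)
    neg = F.normalize(teacher_neg, dim=-1)
    logits = torch.cat([pos @ s, neg @ s]) / tau      # similarities as logits
    log_prob = logits - torch.logsumexp(logits, dim=0)
    return -log_prob[: pos.shape[0]].mean()           # average over positives

loss = multi_positive_contrastive(torch.randn(128),
                                  torch.randn(4, 128),    # 4 positives
                                  torch.randn(32, 128))   # 32 negatives
print(loss.item())
```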
2404.18401 | Lingbo Huang | Lingbo Huang, Yushi Chen, and Xin He | Spectral-Spatial Mamba for Hyperspectral Image Classification | 23 pages | Remote Sens. 2024, 16, 2449 | 10.3390/rs16132449 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, deep learning models have achieved excellent performance in
hyperspectral image (HSI) classification. Among the many deep models,
Transformer has gradually attracted interest for its excellence in modeling the
long-range dependencies of spatial-spectral features in HSI. However,
Transformer has the problem of quadratic computational complexity due to the
self-attention mechanism, which is heavier than other models and thus has
limited adoption in HSI processing. Fortunately, the recently emerging state
space model-based Mamba shows great computational efficiency while achieving
the modeling power of Transformers. Therefore, in this paper, we make a
preliminary attempt to apply the Mamba to HSI classification, leading to the
proposed spectral-spatial Mamba (SS-Mamba). Specifically, the proposed SS-Mamba
mainly consists of a spectral-spatial token generation module and several
stacked spectral-spatial Mamba blocks. First, the token generation module
converts any given HSI cube into sequences of spatial and spectral tokens.
These tokens are then sent to the stacked spectral-spatial Mamba blocks
(SS-MB). Each SS-MB block consists of two basic Mamba blocks and a
spectral-spatial feature enhancement module. The spatial and spectral tokens
are each processed by one of the two basic Mamba blocks. In addition, the
feature enhancement module modulates the spatial and spectral tokens using the
HSI sample's center-region
information. In this way, the spectral and spatial tokens cooperate with each
other and achieve information fusion within each block. Experimental results
on widely used HSI datasets reveal that the proposed model
achieves competitive results compared with the state-of-the-art methods. The
Mamba-based method opens a new window for HSI classification.
| [
{
"created": "Mon, 29 Apr 2024 03:36:05 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Jul 2024 03:42:47 GMT",
"version": "v2"
},
{
"created": "Thu, 1 Aug 2024 09:04:39 GMT",
"version": "v3"
}
] | 2024-08-02 | [
[
"Huang",
"Lingbo",
""
],
[
"Chen",
"Yushi",
""
],
[
"He",
"Xin",
""
]
] |
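A minimal sketch of the spectral-spatial token generation step described above: one token per band (spectral) and one token per pixel (spatial), each obtained by a linear projection. The projection sizes and the exact tokenization are assumptions.

```python
import torch
import torch.nn as nn

class SSTokenizer(nn.Module):
    """Illustrative tokenizer: turns an HSI patch (B bands, S x S pixels)
    into a spectral token sequence (one token per band) and a spatial token
    sequence (one token per pixel)."""
    def __init__(self, bands: int, patch: int, dim: int = 64):
        super().__init__()
        self.spec_proj = nn.Linear(patch * patch, dim)  # each band -> token
        self.spat_proj = nn.Linear(bands, dim)          # each pixel -> token

    def forward(self, cube):                            # cube: (b, B, S, S)
        b, B, S, _ = cube.shape
        flat = cube.reshape(b, B, S * S)
        spectral = self.spec_proj(flat)                 # (b, B, dim)
        spatial = self.spat_proj(flat.transpose(1, 2))  # (b, S*S, dim)
        return spectral, spatial

tok = SSTokenizer(bands=30, patch=9)
spec, spat = tok(torch.randn(2, 30, 9, 9))
print(spec.shape, spat.shape)   # (2, 30, 64) and (2, 81, 64)
```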
2404.18443 | Ran Xu | Ran Xu, Wenqi Shi, Yue Yu, Yuchen Zhuang, Yanqiao Zhu, May D. Wang,
Joyce C. Ho, Chao Zhang, Carl Yang | BMRetriever: Tuning Large Language Models as Better Biomedical Text
Retrievers | Accepted to EMNLP 2024. The model and data are uploaded to
\url{https://github.com/ritaranx/BMRetriever} | EMNLP 2024 | null | null | cs.CL cs.AI cs.IR q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developing effective biomedical retrieval models is important for excelling
at knowledge-intensive biomedical tasks, but it remains challenging due to the
scarcity of publicly annotated biomedical data and computational
resources. We present BMRetriever, a series of dense retrievers for enhancing
biomedical retrieval via unsupervised pre-training on large biomedical corpora,
followed by instruction fine-tuning on a combination of labeled datasets and
synthetic pairs. Experiments on 5 biomedical tasks across 11 datasets verify
BMRetriever's efficacy on various biomedical applications. BMRetriever also
exhibits strong parameter efficiency, with the 410M variant outperforming
baselines up to 11.7 times larger, and the 2B variant matching the performance
of models with over 5B parameters. The training data and model checkpoints are
released at \url{https://huggingface.co/BMRetriever} to ensure transparency,
reproducibility, and application to new domains.
| [
{
"created": "Mon, 29 Apr 2024 05:40:08 GMT",
"version": "v1"
},
{
"created": "Fri, 4 Oct 2024 03:25:34 GMT",
"version": "v2"
}
] | 2024-10-07 | [
[
"Xu",
"Ran",
""
],
[
"Shi",
"Wenqi",
""
],
[
"Yu",
"Yue",
""
],
[
"Zhuang",
"Yuchen",
""
],
[
"Zhu",
"Yanqiao",
""
],
[
"Wang",
"May D.",
""
],
[
"Ho",
"Joyce C.",
""
],
[
"Zhang",
"Chao",
""
],
[
"Yang",
"Carl",
""
]
] |
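For readers unfamiliar with dense retrieval, the sketch below shows the scoring pattern a retriever like BMRetriever follows: embed queries and passages, then rank by inner product. The `embed` function here is a random stand-in, so the printed ranking is meaningless; real usage would load the released checkpoints instead.

```python
import numpy as np

rng = np.random.default_rng(1)

def embed(texts):
    # Stand-in encoder: real usage replaces this with the released model.
    return rng.normal(size=(len(texts), 768))

passages = [
    "Aspirin irreversibly inhibits cyclooxygenase enzymes.",
    "CRISPR-Cas9 enables targeted genome editing.",
    "Beta blockers reduce myocardial oxygen demand.",
]
P = embed(passages)
q = embed(["Which drug inhibits COX enzymes?"])[0]

scores = P @ q                      # inner-product relevance scores
for i in np.argsort(-scores):       # highest score first
    print(f"{scores[i]:8.2f}  {passages[i]}")
```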
2404.18504 | Ingeborg Beckers | Martin Tschaikner and Danja Brandt, Henning Schmidt, Felix
Bie{\ss}mann, Teodor Chiaburu, Ilona Schrimpf, Thomas Schrimpf, Alexandra
Stadel and Frank Hau{\ss}er and Ingeborg Beckers | Multisensor Data Fusion for Automatized Insect Monitoring (KInsecta) | null | Remote Sensing for Agriculture, Ecosystems, and Hydrology XXV,
SPIE 12727 (2023) 1272702 | 10.1117/12.2679927 | null | cs.LG cs.CV eess.SP | http://creativecommons.org/licenses/by-sa/4.0/ | Insect populations are declining globally, making systematic monitoring
essential for conservation. Most classical monitoring methods rely on lethal
traps, which runs counter to insect conservation. This paper presents a
multisensor approach that uses AI-based data fusion for insect classification.
The system is designed as a low-cost setup and consists of a camera module, an
optical wing-beat sensor, and environmental sensors that measure temperature,
irradiance, and time of day as prior information. The system has been tested in
the laboratory and in the field. First tests on a small, highly unbalanced
dataset with 7 species show
promising results for species classification. The multisensor system will
support biodiversity and agriculture studies.
| [
{
"created": "Mon, 29 Apr 2024 08:46:43 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Tschaikner",
"Martin",
""
],
[
"Brandt",
"Danja",
""
],
[
"Schmidt",
"Henning",
""
],
[
"Bießmann",
"Felix",
""
],
[
"Chiaburu",
"Teodor",
""
],
[
"Schrimpf",
"Ilona",
""
],
[
"Schrimpf",
"Thomas",
""
],
[
"Stadel",
"Alexandra",
""
],
[
"Haußer",
"Frank",
""
],
[
"Beckers",
"Ingeborg",
""
]
] |
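A hedged sketch of the late-fusion idea behind such a multisensor system: per-class probabilities from the camera and wing-beat models are combined with an environmental prior. The fusion rule (elementwise product) and all numbers are illustrative assumptions, not the KInsecta pipeline.

```python
import numpy as np

species = ["honeybee", "hoverfly", "bumblebee"]
p_camera = np.array([0.60, 0.30, 0.10])     # camera classifier output
p_wingbeat = np.array([0.50, 0.10, 0.40])   # wing-beat classifier output
prior_env = np.array([0.40, 0.40, 0.20])    # prior from temperature/daytime

# Naive product-of-experts fusion, renormalized to a distribution.
posterior = p_camera * p_wingbeat * prior_env
posterior /= posterior.sum()
print(dict(zip(species, posterior.round(3))))
```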
2404.18876 | Cristiano Bacelar De Oliveira | Cristiano B. de Oliveira, Joao C. Neves, Rafael O. Ribeiro and David
Menotti | A Multilevel Strategy to Improve People Tracking in a Real-World
Scenario | Accepted for presentation at the International Conference on Computer
Vision Theory and Applications (VISAPP) 2024 | Proceedings of the 19th International Joint Conference on Computer
Vision, Imaging and Computer Graphics Theory and Applications - Volume 4:
VISAPP, 2024 | 10.5220/0012460000003660 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Pal\'acio do Planalto, office of the President of Brazil, was invaded by
protesters on January 8, 2023. Surveillance videos taken from inside the
building were subsequently released by the Brazilian Supreme Court for public
scrutiny. We used segments of such footage to create the UFPR-Planalto801
dataset for people tracking and re-identification in a real-world scenario.
The dataset consists of more than 500,000 images. This paper presents a
tracking approach targeting this dataset. The proposed method combines known
state-of-the-art trackers in a multilevel hierarchy to correct ID associations
over the trajectories. We evaluated our
method using IDF1, MOTA, MOTP and HOTA metrics. The results show improvements
for every tracker used in the experiments, with the IDF1 score increasing by up
to 9.5%.
| [
{
"created": "Mon, 29 Apr 2024 17:10:41 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"de Oliveira",
"Cristiano B.",
""
],
[
"Neves",
"Joao C.",
""
],
[
"Ribeiro",
"Rafael O.",
""
],
[
"Menotti",
"David",
""
]
] |
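To illustrate the kind of ID-association correction described above, the sketch below greedily merges tracklets whose mean appearance embeddings are close. The threshold, the features, and the single-level greedy pass are simplifying assumptions; the paper combines full state-of-the-art trackers in a multilevel hierarchy.

```python
import numpy as np

def merge_tracklets(embeddings, threshold=0.8):
    """embeddings: (T, D) mean appearance vector per tracklet.
    Returns a corrected ID for each tracklet."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    ids = list(range(len(e)))
    for i in range(len(e)):
        for j in range(i):
            if e[i] @ e[j] > threshold:   # same person, fragmented track
                ids[i] = ids[j]
                break
    return ids

rng = np.random.default_rng(2)
base = rng.normal(size=(3, 64))                             # 3 identities
frags = base + 0.05 * rng.normal(size=(3, 64))              # noisy fragments
print(merge_tracklets(np.vstack([base, frags])))            # e.g. [0,1,2,0,1,2]
```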
2404.18935 | Sourabh Gothe Mr | Sourabh Vasant Gothe, Vibhav Agarwal, Sourav Ghosh, Jayesh Rajkumar
Vachhani, Pranay Kashyap, Barath Raj Kandur Raja | What's in the Flow? Exploiting Temporal Motion Cues for Unsupervised
Generic Event Boundary Detection | Accepted in WACV-2024. Supplementary at
https://openaccess.thecvf.com/content/WACV2024/supplemental/Gothe_Whats_in_the_WACV_2024_supplemental.pdf | 2024 IEEE/CVF Winter Conference on Applications of Computer Vision
(WACV), Waikoloa, HI, USA, 2024, pp. 6926-6935 | 10.1109/WACV57701.2024.00679 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Generic Event Boundary Detection (GEBD) task aims to recognize generic,
taxonomy-free boundaries that segment a video into meaningful events. Current
methods typically involve a neural model trained on a large volume of data,
demanding substantial computational power and storage space. We explore two
pivotal questions pertaining to GEBD: Can non-parametric algorithms outperform
unsupervised neural methods? Does motion information alone suffice for high
performance? This inquiry drives us to algorithmically harness motion cues for
identifying generic event boundaries in videos. In this work, we propose
FlowGEBD, a non-parametric, unsupervised technique for GEBD. Our approach
entails two algorithms utilizing optical flow: (i) Pixel Tracking and (ii) Flow
Normalization. By conducting thorough experimentation on the challenging
Kinetics-GEBD and TAPOS datasets, our results establish FlowGEBD as the new
state-of-the-art (SOTA) among unsupervised methods. FlowGEBD exceeds the neural
models on the Kinetics-GEBD dataset by obtaining an F1@0.05 score of 0.713 with
an absolute gain of 31.7% compared to the unsupervised baseline and achieves an
average F1 score of 0.623 on the TAPOS validation dataset.
| [
{
"created": "Thu, 15 Feb 2024 14:49:15 GMT",
"version": "v1"
}
] | 2024-05-14 | [
[
"Gothe",
"Sourabh Vasant",
""
],
[
"Agarwal",
"Vibhav",
""
],
[
"Ghosh",
"Sourav",
""
],
[
"Vachhani",
"Jayesh Rajkumar",
""
],
[
"Kashyap",
"Pranay",
""
],
[
"Raja",
"Barath Raj Kandur",
""
]
] |
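A minimal sketch of a non-parametric, flow-based boundary score in the spirit of Flow Normalization: z-normalize the per-frame mean flow magnitude and mark frames where its change spikes. The synthetic signal and the threshold are assumptions; real usage would compute flow from video (e.g. with OpenCV's Farneback method).

```python
import numpy as np

def boundaries_from_flow(mean_flow, z_thresh=2.0):
    """mean_flow: (T,) average optical-flow magnitude per frame.
    Returns frame indices where the normalized motion signal jumps."""
    flow = (mean_flow - mean_flow.mean()) / (mean_flow.std() + 1e-8)
    diff = np.abs(np.diff(flow))                          # motion change
    return np.where(diff > z_thresh * diff.std())[0] + 1  # frame indices

# Synthetic 90-frame signal with motion changes at frames 30 and 60.
signal = np.concatenate([np.full(30, 1.0), np.full(30, 4.0), np.full(30, 1.5)])
signal += 0.05 * np.random.default_rng(3).normal(size=signal.size)
print(boundaries_from_flow(signal))   # expect boundaries near 30 and 60
```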
2404.19043 | Hyunho Lee | Hyunho Lee, Wenwen Li | Improving Interpretability of Deep Active Learning for Flood Inundation
Mapping Through Class Ambiguity Indices Using Multi-spectral Satellite
Imagery | 46 pages, 11 figures, 5 tables | Remote Sensing of Environment, 309, 114213 | 10.1016/j.rse.2024.114213 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Flood inundation mapping is a critical task for responding to the increasing
risk of flooding linked to global warming. Significant advances in deep
learning in recent years have spurred its extensive application, including to
flood inundation mapping. To cope with the time-consuming and labor-intensive
data labeling process in supervised learning, deep active learning strategies
are one of the feasible approaches. However, there remains limited exploration
into the interpretability of how deep active learning strategies operate, with
a specific focus on flood inundation mapping in the field of remote sensing. In
this study, we introduce a novel framework of Interpretable Deep Active
Learning for Flood inundation Mapping (IDAL-FIM), specifically in terms of
class ambiguity of multi-spectral satellite images. In the experiments, we
utilize the Sen1Floods11 dataset and adopt a U-Net with MC-dropout. In
addition, we employ five acquisition functions: random, K-means, BALD, entropy,
and margin. Based on the experimental results, we demonstrate that the two
proposed class ambiguity indices are effective variables for interpreting deep
active learning, as they establish a statistically significant correlation with
the predictive uncertainty of the deep learning model at the tile level. We
then illustrate the behavior of deep active learning in flood inundation
mapping by visualizing two-dimensional density plots and interpreting how the
acquisition strategies operate.
| [
{
"created": "Mon, 29 Apr 2024 18:33:17 GMT",
"version": "v1"
}
] | 2024-05-29 | [
[
"Lee",
"Hyunho",
""
],
[
"Li",
"Wenwen",
""
]
] |
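Two of the acquisition functions named above are easy to state exactly; the sketch below implements entropy and margin over per-sample softmax probabilities, where the highest-scoring tiles would be sent for labeling. BALD would additionally require MC-dropout samples and is omitted.

```python
import numpy as np

def entropy_acq(p):
    """Predictive entropy per sample; p has shape (N, C)."""
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def margin_acq(p):
    """1 minus the gap between the top-2 class probabilities;
    a small gap (ambiguous sample) yields a high acquisition score."""
    top2 = np.sort(p, axis=-1)[..., -2:]
    return 1.0 - (top2[..., 1] - top2[..., 0])

p = np.array([[0.98, 0.02], [0.55, 0.45], [0.70, 0.30]])  # flood vs. non-flood
print("entropy:", entropy_acq(p).round(3))
print("margin :", margin_acq(p).round(3))
```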
2404.19094 | Matteo Merler | Matteo Merler, Katsiaryna Haitsiukevich, Nicola Dainese and Pekka
Marttinen | In-Context Symbolic Regression: Leveraging Large Language Models for
Function Discovery | 18 pages, 11 figures | ACL Student Research Workshop 2024 | 10.18653/v1/2024.acl-srw.49 | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | State of the art Symbolic Regression (SR) methods currently build specialized
models, while the application of Large Language Models (LLMs) remains largely
unexplored. In this work, we introduce the first comprehensive framework that
utilizes LLMs for the task of SR. We propose In-Context Symbolic Regression
(ICSR), an SR method which iteratively refines a functional form with an LLM
and determines its coefficients with an external optimizer. ICSR leverages
LLMs' strong mathematical prior both to propose an initial set of possible
functions given the observations and to refine them based on their errors. Our
findings reveal that LLMs are able to successfully find symbolic equations that
fit the given data, matching or outperforming the overall performance of the
best SR baselines on four popular benchmarks, while yielding simpler equations
with better out-of-distribution generalization.
| [
{
"created": "Mon, 29 Apr 2024 20:19:25 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Jul 2024 15:29:18 GMT",
"version": "v2"
}
] | 2024-09-27 | [
[
"Merler",
"Matteo",
""
],
[
"Haitsiukevich",
"Katsiaryna",
""
],
[
"Dainese",
"Nicola",
""
],
[
"Marttinen",
"Pekka",
""
]
] |
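The division of labor in ICSR (the LLM proposes a functional form, an external optimizer fits its coefficients) can be sketched as below, with a stand-in form a*exp(b*x)+c rather than an actual LLM proposal; the resulting error is what would be fed back to the LLM for refinement.

```python
import numpy as np
from scipy.optimize import curve_fit

def proposed_form(x, a, b, c):
    # Stand-in for a functional form proposed by the LLM.
    return a * np.exp(b * x) + c

# Observations generated from a hidden ground truth plus noise.
rng = np.random.default_rng(4)
x = np.linspace(0, 2, 50)
y = 1.5 * np.exp(0.8 * x) + 0.3 + 0.02 * rng.normal(size=x.size)

# External optimizer step: fit the proposed form's coefficients.
coeffs, _ = curve_fit(proposed_form, x, y, p0=[1.0, 1.0, 0.0])
mse = np.mean((proposed_form(x, *coeffs) - y) ** 2)
print("fitted coefficients:", coeffs.round(3), "MSE:", round(float(mse), 5))
```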
2404.19126 | Christopher Kymn | Christopher J. Kymn, Sonia Mazelet, Annabel Ng, Denis Kleyko, Bruno A.
Olshausen | Compositional Factorization of Visual Scenes with Convolutional Sparse
Coding and Resonator Networks | 9 pages, 5 figures | 2024 Neuro Inspired Computational Elements Conference (NICE) | 10.1109/NICE61972.2024.10549719 | null | cs.CV cs.NE | http://creativecommons.org/licenses/by-sa/4.0/ | We propose a system for visual scene analysis and recognition based on
encoding the sparse, latent feature-representation of an image into a
high-dimensional vector that is subsequently factorized to parse scene content.
The sparse feature representation is learned from image statistics via
convolutional sparse coding, while scene parsing is performed by a resonator
network. The integration of sparse coding with the resonator network increases
the capacity of distributed representations and reduces collisions in the
combinatorial search space during factorization. We find that for this problem
the resonator network is capable of fast and accurate vector factorization, and
we develop a confidence-based metric that assists in tracking the convergence
of the resonator network.
| [
{
"created": "Mon, 29 Apr 2024 22:03:02 GMT",
"version": "v1"
}
] | 2024-07-01 | [
[
"Kymn",
"Christopher J.",
""
],
[
"Mazelet",
"Sonia",
""
],
[
"Ng",
"Annabel",
""
],
[
"Kleyko",
"Denis",
""
],
[
"Olshausen",
"Bruno A.",
""
]
] |
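A toy sketch of the resonator iteration for two bipolar factors: unbind one estimated factor from the composite, clean the result up through the codebook, and alternate. Random codebooks stand in for the convolutional sparse codes used in the paper; dimensions and iteration count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
D, K = 2000, 10                       # vector dimension, codebook size
X = rng.choice([-1, 1], size=(K, D))  # codebook for factor 1
Y = rng.choice([-1, 1], size=(K, D))  # codebook for factor 2
s = X[3] * Y[7]                       # composite to be parsed

# Initialize each estimate as the superposition of its codebook.
x_hat = np.sign(X.sum(axis=0) + 1e-9)
y_hat = np.sign(Y.sum(axis=0) + 1e-9)
for _ in range(20):
    # Unbind the other factor (elementwise product is self-inverse for
    # bipolar vectors), then clean up by projecting through the codebook.
    x_hat = np.sign(X.T @ (X @ (s * y_hat)) + 1e-9)
    y_hat = np.sign(Y.T @ (Y @ (s * x_hat)) + 1e-9)

print("factor 1:", np.argmax(X @ x_hat), "factor 2:", np.argmax(Y @ y_hat))
# expected output: factor 1: 3  factor 2: 7
```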