Dataset schema (each record below lists these fields, in this order):
id: string, length 10
submitter: string, length 3-52
authors: string, length 6-7,240
title: string, length 12-217
comments: string, length 1-446
journal-ref: string, length 4-297
doi: string, length 12-118
report-no: string, 237 distinct values
categories: string, length 5-71
license: string, 6 distinct values
abstract: string, length 90-3,260
versions: list, length 1-17
update_date: string, 969 distinct values
authors_parsed: sequence, length 1-451
2405.20978
Felton Fang
Feiteng Fang, Yuelin Bai, Shiwen Ni, Min Yang, Xiaojun Chen and Ruifeng Xu
Enhancing Noise Robustness of Retrieval-Augmented Language Models with Adaptive Adversarial Training
null
ACL 2024, Main Conference
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) exhibit substantial capabilities yet encounter challenges, including hallucination, outdated knowledge, and untraceable reasoning processes. Retrieval-augmented generation (RAG) has emerged as a promising solution, integrating knowledge from external databases to mitigate these challenges. However, inappropriate retrieved passages can potentially hinder the LLMs' capacity to generate comprehensive and high-quality responses. Prior RAG studies on robustness to retrieval noise often confine themselves to a limited set of noise types, deviating from real-world retrieval environments and limiting practical applicability. In this study, we initially investigate retrieval noises and categorize them into three distinct types, reflecting real-world environments. We analyze the impact of these various retrieval noises on the robustness of LLMs. Subsequently, we propose a novel RAG approach known as Retrieval-augmented Adaptive Adversarial Training (RAAT). RAAT leverages adaptive adversarial training to dynamically adjust the model's training process in response to retrieval noises. Concurrently, it employs multi-task learning to ensure the model's capacity to internally recognize noisy contexts. Extensive experiments demonstrate that the LLaMA-2 7B model trained using RAAT exhibits significant improvements in F1 and EM scores under diverse noise conditions. For reproducibility, we release our code and data at: https://github.com/calubkk/RAAT.
[ { "created": "Fri, 31 May 2024 16:24:53 GMT", "version": "v1" } ]
2024-06-03
[ [ "Fang", "Feiteng", "" ], [ "Bai", "Yuelin", "" ], [ "Ni", "Shiwen", "" ], [ "Yang", "Min", "" ], [ "Chen", "Xiaojun", "" ], [ "Xu", "Ruifeng", "" ] ]
2405.20980
Felix Mujkanovic
Felix Mujkanovic, Ntumba Elie Nsampi, Christian Theobalt, Hans-Peter Seidel, Thomas Leimk\"uhler
Neural Gaussian Scale-Space Fields
15 pages; SIGGRAPH 2024; project page at https://neural-gaussian-scale-space-fields.mpi-inf.mpg.de
ACM Transactions on Graphics, Volume 43, Issue 4, July 2024
10.1145/3658163
null
cs.CV cs.GR cs.LG
http://creativecommons.org/licenses/by/4.0/
Gaussian scale spaces are a cornerstone of signal representation and processing, with applications in filtering, multiscale analysis, anti-aliasing, and many more. However, obtaining such a scale space is costly and cumbersome, in particular for continuous representations such as neural fields. We present an efficient and lightweight method to learn the fully continuous, anisotropic Gaussian scale space of an arbitrary signal. Based on Fourier feature modulation and Lipschitz bounding, our approach is trained self-supervised, i.e., training does not require any manual filtering. Our neural Gaussian scale-space fields faithfully capture multiscale representations across a broad range of modalities, and support a diverse set of applications. These include images, geometry, light-stage data, texture anti-aliasing, and multiscale optimization.
[ { "created": "Fri, 31 May 2024 16:26:08 GMT", "version": "v1" } ]
2024-07-23
[ [ "Mujkanovic", "Felix", "" ], [ "Nsampi", "Ntumba Elie", "" ], [ "Theobalt", "Christian", "" ], [ "Seidel", "Hans-Peter", "" ], [ "Leimkühler", "Thomas", "" ] ]
2405.21003
Amr Alkhatib
Amr Alkhatib, Henrik Bostr\"om, Michalis Vazirgiannis
Explaining Predictions by Characteristic Rules
Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2022
In: Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2022. Lecture Notes in Computer Science(), vol 13713. Springer, Cham (2023)
10.1007/978-3-031-26387-3_24
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Characteristic rules have been advocated for their ability to improve interpretability over discriminative rules within the area of rule learning. However, the former type of rule has not yet been used by techniques for explaining predictions. A novel explanation technique, called CEGA (Characteristic Explanatory General Association rules), is proposed, which employs association rule mining to aggregate multiple explanations generated by any standard local explanation technique into a set of characteristic rules. An empirical investigation is presented, in which CEGA is compared to two state-of-the-art methods, Anchors and GLocalX, for producing local and aggregated explanations in the form of discriminative rules. The results suggest that the proposed approach provides a better trade-off between fidelity and complexity compared to the two state-of-the-art approaches; CEGA and Anchors significantly outperform GLocalX with respect to fidelity, while CEGA and GLocalX significantly outperform Anchors with respect to the number of generated rules. The effects of changing the format of the explanations of CEGA to discriminative rules and of using LIME and SHAP as local explanation techniques instead of Anchors are also investigated. The results show that the characteristic explanatory rules still compete favorably with rules in the standard discriminative format. The results also indicate that using CEGA in combination with either SHAP or Anchors consistently leads to a higher fidelity compared to using LIME as the local explanation technique.
[ { "created": "Fri, 31 May 2024 16:44:40 GMT", "version": "v1" } ]
2024-06-03
[ [ "Alkhatib", "Amr", "" ], [ "Boström", "Henrik", "" ], [ "Vazirgiannis", "Michalis", "" ] ]
2405.21043
Fengdi Che
Fengdi Che, Chenjun Xiao, Jincheng Mei, Bo Dai, Ramki Gummadi, Oscar A Ramirez, Christopher K Harris, A. Rupam Mahmood, Dale Schuurmans
Target Networks and Over-parameterization Stabilize Off-policy Bootstrapping with Function Approximation
null
Proceedings of the 41st International Conference on Machine Learning, 2024
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
We prove that the combination of a target network and over-parameterized linear function approximation establishes a weaker convergence condition for bootstrapped value estimation in certain cases, even with off-policy data. Our condition is naturally satisfied for expected updates over the entire state-action space or learning with a batch of complete trajectories from episodic Markov decision processes. Notably, using only a target network or an over-parameterized model does not provide such a convergence guarantee. Additionally, we extend our results to learning with truncated trajectories, showing that convergence is achievable for all tasks with minor modifications, akin to value truncation for the final states in trajectories. Our primary result focuses on temporal difference estimation for prediction, providing high-probability value estimation error bounds and empirical analysis on Baird's counterexample and a Four-room task. Furthermore, we explore the control setting, demonstrating that similar convergence conditions apply to Q-learning.
[ { "created": "Fri, 31 May 2024 17:36:16 GMT", "version": "v1" }, { "created": "Fri, 4 Oct 2024 18:04:33 GMT", "version": "v2" } ]
2024-10-08
[ [ "Che", "Fengdi", "" ], [ "Xiao", "Chenjun", "" ], [ "Mei", "Jincheng", "" ], [ "Dai", "Bo", "" ], [ "Gummadi", "Ramki", "" ], [ "Ramirez", "Oscar A", "" ], [ "Harris", "Christopher K", "" ], [ "Mahmood", "A. Rupam", "" ], [ "Schuurmans", "Dale", "" ] ]
2406.00123
Mingyuan Meng
Mingyuan Meng, Dagan Feng, Lei Bi, and Jinman Kim
Correlation-aware Coarse-to-fine MLPs for Deformable Medical Image Registration
Accepted at CVPR 2024 as Oral Presentation and Best Paper Candidate
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 9645-9654
null
null
eess.IV cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Deformable image registration is a fundamental step for medical image analysis. Recently, transformers have been used for registration and outperformed Convolutional Neural Networks (CNNs). Transformers can capture long-range dependence among image features, which has been shown to be beneficial for registration. However, due to the high computation/memory loads of self-attention, transformers are typically used at downsampled feature resolutions and cannot capture fine-grained long-range dependence at the full image resolution. This limits deformable registration, which requires precise dense correspondence at each image pixel. Multi-layer Perceptrons (MLPs) without self-attention are efficient in computation/memory usage, making it feasible to capture fine-grained long-range dependence at full resolution. Nevertheless, MLPs have not been extensively explored for image registration and lack the inductive bias crucial for medical registration tasks. In this study, we propose the first correlation-aware MLP-based registration network (CorrMLP) for deformable medical image registration. Our CorrMLP introduces a correlation-aware multi-window MLP block in a novel coarse-to-fine registration architecture, which captures fine-grained multi-range dependence to perform correlation-aware coarse-to-fine registration. Extensive experiments with seven public medical datasets show that our CorrMLP outperforms state-of-the-art deformable registration methods.
[ { "created": "Fri, 31 May 2024 18:25:23 GMT", "version": "v1" }, { "created": "Wed, 12 Jun 2024 12:21:52 GMT", "version": "v2" } ]
2024-06-13
[ [ "Meng", "Mingyuan", "" ], [ "Feng", "Dagan", "" ], [ "Bi", "Lei", "" ], [ "Kim", "Jinman", "" ] ]
2406.00291
Yiyang Zhao
Yiyang Zhao, Linnan Wang, Tian Guo
Multi-Objective Neural Architecture Search by Learning Search Space Partitions
null
Journal of Machine Learning Research 25 (2024) 1-41
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Deploying deep learning models requires taking into consideration neural network metrics such as model size, inference latency, and #FLOPs, aside from inference accuracy. This leads deep learning model designers to leverage multi-objective optimization to design effective deep neural networks across multiple criteria. However, applying multi-objective optimization to neural architecture search (NAS) is nontrivial because NAS tasks usually have a huge search space, along with a non-negligible searching cost. This requires effective multi-objective search algorithms to alleviate the GPU costs. In this work, we implement a novel multi-objective optimizer based on a recently proposed meta-algorithm called LaMOO on NAS tasks. In a nutshell, LaMOO speeds up the search process by learning a model from observed samples to partition the search space and then focusing on promising regions likely to contain a subset of the Pareto frontier. Using LaMOO, we observe an improvement of more than 200% in sample efficiency compared to Bayesian optimization and evolutionary-based multi-objective optimizers on different NAS datasets. For example, when combined with LaMOO, qEHVI achieves a 225% improvement in sample efficiency compared to using qEHVI alone on NasBench201. For real-world tasks, LaMOO achieves 97.36% accuracy with only 1.62M #Params on CIFAR10 in only 600 search samples. On ImageNet, our large model reaches 80.4% top-1 accuracy with only 522M #FLOPs.
[ { "created": "Sat, 1 Jun 2024 03:51:34 GMT", "version": "v1" }, { "created": "Thu, 18 Jul 2024 01:53:35 GMT", "version": "v2" } ]
2024-08-20
[ [ "Zhao", "Yiyang", "" ], [ "Wang", "Linnan", "" ], [ "Guo", "Tian", "" ] ]
2406.00423
Luis Rei
Luis Rei and Dunja Mladeni\'c and Mareike Dorozynski and Franz Rottensteiner and Thomas Schleider and Rapha\"el Troncy and Jorge Sebasti\'an Lozano and Mar Gait\'an Salvatella
Multimodal Metadata Assignment for Cultural Heritage Artifacts
null
Multimedia Systems 29 (2023) 847-869
10.1007/s00530-022-01025-2
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
We develop a multimodal classifier for the cultural heritage domain using a late fusion approach and introduce a novel dataset. The three modalities are Image, Text, and Tabular data. We based the image classifier on a ResNet convolutional neural network architecture and the text classifier on a multilingual transformer architecture (XLM-RoBERTa). Both are trained as multitask classifiers and use the focal loss to handle class imbalance. Tabular data and late fusion are handled by Gradient Tree Boosting. We also show how we leveraged specific data models and taxonomy in a Knowledge Graph to create the dataset and to store classification results. All individual classifiers accurately predict missing properties in the digitized silk artifacts, with the multimodal approach providing the best results.
[ { "created": "Sat, 1 Jun 2024 12:41:03 GMT", "version": "v1" } ]
2024-06-04
[ [ "Rei", "Luis", "" ], [ "Mladenić", "Dunja", "" ], [ "Dorozynski", "Mareike", "" ], [ "Rottensteiner", "Franz", "" ], [ "Schleider", "Thomas", "" ], [ "Troncy", "Raphaël", "" ], [ "Lozano", "Jorge Sebastián", "" ], [ "Salvatella", "Mar Gaitán", "" ] ]
2406.00512
Marcos Faundez-Zanuy
Marcos Faundez-Zanuy, Moises Diaz
On the use of first and second derivative approximations for biometric online signature recognition
Advances in Computational Intelligence. IWANN 2023. pp. 461-472
Lecture Notes in Computer Science, vol 14134, 2023
10.1007/978-3-031-43085-5_36
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper investigates the impact of different approximation methods in feature extraction for pattern recognition applications, specifically focusing on delta and delta-delta parameters. Using the MCYT330 online signature database, our experiments show that an 11-point approximation outperforms a 1-point approximation, resulting in a 1.4% improvement in identification rate, a 36.8% reduction in random forgeries, and a 2.4% reduction in skilled forgeries.
[ { "created": "Sat, 1 Jun 2024 17:36:34 GMT", "version": "v1" } ]
2024-06-04
[ [ "Faundez-Zanuy", "Marcos", "" ], [ "Diaz", "Moises", "" ] ]
2406.00848
Hamza El Housni
Abdelilah Nossair, Hamza El Housni
Eating Smart: Advancing Health Informatics with the Grounding DINO based Dietary Assistant App
The work presented in this paper was part of the proceedings for the First International Conference on Artificial Intelligence (ICATA 2024)
Eating Smart: Advancing Health Informatics with the Grounding DINO-based Dietary Assistant App, International Journal of Scientific and Innovative Studies, June 2024, Volume 3, Number 3, Pages 26-34, Available online at IJSRIS
10.5281/zenodo.11243881
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The Smart Dietary Assistant utilizes Machine Learning to provide personalized dietary advice, focusing on users with conditions like diabetes. This app leverages the Grounding DINO model, which combines a text encoder and image backbone to enhance food item detection without requiring a labeled dataset. With an AP score of 52.5 on the COCO dataset, the model demonstrates high accuracy in real-world scenarios, utilizing attention mechanisms to precisely recognize objects based on user-provided labels and images. Developed using React Native and TypeScript, the app operates seamlessly across multiple platforms and integrates a self-hosted PostgreSQL database, ensuring data integrity and enhancing user privacy. Key functionalities include personalized nutrition profiles, real-time food scanning, and health insights, facilitating informed dietary choices for health management and lifestyle optimization. Future developments aim to integrate wearable technologies for more tailored health recommendations. Keywords: Food Image Recognition, Machine Learning in Nutrition, Zero-Shot Object Detection
[ { "created": "Sun, 2 Jun 2024 19:59:07 GMT", "version": "v1" } ]
2024-06-04
[ [ "Nossair", "Abdelilah", "" ], [ "Housni", "Hamza El", "" ] ]
2406.01026
Xue Mengge
Mengge Xue, Zhenyu Hu, Liqun Liu, Kuo Liao, Shuang Li, Honglin Han, Meng Zhao, Chengguo Yin
Strengthened Symbol Binding Makes Large Language Models Reliable Multiple-Choice Selectors
Accept at ACL2024 Main
ACL 2024
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple-Choice Questions (MCQs) constitute a critical area of research in the study of Large Language Models (LLMs). Previous works have investigated the selection bias problem in MCQs within few-shot scenarios, in which the LLM's performance may be influenced by the presentation of answer choices, leaving the selection bias during Supervised Fine-Tuning (SFT) unexplored. In this paper, we reveal that selection bias persists in the SFT phase, primarily due to the LLM's inadequate Multiple Choice Symbol Binding (MCSB) ability. This limitation implies that the model struggles to associate the answer options with their corresponding symbols (e.g., A/B/C/D) effectively. To enhance the model's MCSB capability, we first incorporate option contents into the loss function and subsequently adjust the weights of the option symbols and contents, guiding the model to understand the option content of the current symbol. Based on this, we introduce an efficient SFT algorithm for MCQs, termed Point-wise Intelligent Feedback (PIF). PIF constructs negative instances by randomly combining incorrect option contents with all candidate symbols, and proposes a point-wise loss to provide feedback on these negative samples to LLMs. Our experimental results demonstrate that PIF significantly reduces the model's selection bias by improving its MCSB capability. Remarkably, PIF exhibits a substantial enhancement in accuracy for MCQs.
[ { "created": "Mon, 3 Jun 2024 06:20:12 GMT", "version": "v1" }, { "created": "Thu, 6 Jun 2024 06:32:45 GMT", "version": "v2" } ]
2024-06-07
[ [ "Xue", "Mengge", "" ], [ "Hu", "Zhenyu", "" ], [ "Liu", "Liqun", "" ], [ "Liao", "Kuo", "" ], [ "Li", "Shuang", "" ], [ "Han", "Honglin", "" ], [ "Zhao", "Meng", "" ], [ "Yin", "Chengguo", "" ] ]
2406.01062
Qilong Zhangli
Qilong Zhangli, Jindong Jiang, Di Liu, Licheng Yu, Xiaoliang Dai, Ankit Ramchandani, Guan Pang, Dimitris N. Metaxas, Praveen Krishnan
Layout Agnostic Scene Text Image Synthesis with Diffusion Models
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 7496-7506
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 7496-7506
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
While diffusion models have significantly advanced the quality of image generation, their capability to accurately and coherently render text within these images remains a substantial challenge. Conventional diffusion-based methods for scene text generation are typically limited by their reliance on an intermediate layout output. This dependency often results in a constrained diversity of text styles and fonts, an inherent limitation stemming from the deterministic nature of the layout generation phase. To address these challenges, this paper introduces SceneTextGen, a novel diffusion-based model specifically designed to circumvent the need for a predefined layout stage. By doing so, SceneTextGen facilitates a more natural and varied representation of text. The novelty of SceneTextGen lies in its integration of three key components: a character-level encoder for capturing detailed typographic properties, coupled with a character-level instance segmentation model and a word-level spotting model to address the issues of unwanted text generation and minor character inaccuracies. We validate the performance of our method by demonstrating improved character recognition rates on generated images across different public visual text datasets, in comparison to both standard diffusion-based methods and text-specific methods.
[ { "created": "Mon, 3 Jun 2024 07:20:34 GMT", "version": "v1" }, { "created": "Tue, 11 Jun 2024 01:17:02 GMT", "version": "v2" }, { "created": "Mon, 8 Jul 2024 02:10:06 GMT", "version": "v3" }, { "created": "Fri, 19 Jul 2024 19:22:24 GMT", "version": "v4" }, { "created": "Sun, 15 Sep 2024 21:46:02 GMT", "version": "v5" } ]
2024-09-17
[ [ "Zhangli", "Qilong", "" ], [ "Jiang", "Jindong", "" ], [ "Liu", "Di", "" ], [ "Yu", "Licheng", "" ], [ "Dai", "Xiaoliang", "" ], [ "Ramchandani", "Ankit", "" ], [ "Pang", "Guan", "" ], [ "Metaxas", "Dimitris N.", "" ], [ "Krishnan", "Praveen", "" ] ]
2406.01096
Anjanava Biswas
Wrick Talukdar, Anjanava Biswas
Synergizing Unsupervised and Supervised Learning: A Hybrid Approach for Accurate Natural Language Task Modeling
null
International Journal of Innovative Science and Research Technology: Vol. 9 (2024): No. 5, 1499-1508
10.38124/ijisrt/IJISRT24MAY2087
null
cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
While supervised learning models have shown remarkable performance in various natural language processing (NLP) tasks, their success heavily relies on the availability of large-scale labeled datasets, which can be costly and time-consuming to obtain. Conversely, unsupervised learning techniques can leverage abundant unlabeled text data to learn rich representations, but they do not directly optimize for specific NLP tasks. This paper presents a novel hybrid approach that synergizes unsupervised and supervised learning to improve the accuracy of NLP task modeling. Our methodology integrates an unsupervised module that learns representations from unlabeled corpora (e.g., language models, word embeddings) and a supervised module that leverages these representations to enhance task-specific models. We evaluate our approach on text classification and named entity recognition (NER), demonstrating consistent performance gains over supervised baselines. For text classification, contextual word embeddings from a language model pretrain a recurrent or transformer-based classifier. For NER, word embeddings initialize a BiLSTM sequence labeler. By synergizing these techniques, our hybrid approach achieves SOTA results on benchmark datasets, paving the way for more data-efficient and robust NLP systems.
[ { "created": "Mon, 3 Jun 2024 08:31:35 GMT", "version": "v1" } ]
2024-06-04
[ [ "Talukdar", "Wrick", "" ], [ "Biswas", "Anjanava", "" ] ]
2406.01233
Viktor Scherbakov
Viktor Shcherbakov, Fedor Krasnov
Multi-word Term Embeddings Improve Lexical Product Retrieval
10 pages, 4 figures
In Proceedings of the Seventh Workshop on e-Commerce and NLP, LREC-COLING 2024, pages 115-124, Torino, Italia. ELRA and ICCL
null
null
cs.IR cs.CL
http://creativecommons.org/licenses/by/4.0/
Product search is uniquely different from search for documents, Internet resources, or vacancies, and therefore requires the development of specialized search systems. The present work describes the H1 embedding model, designed for offline term indexing of product descriptions at e-commerce platforms. The model is compared to other state-of-the-art (SoTA) embedding models within the framework of a hybrid product search system that incorporates the advantages of lexical methods for product retrieval and semantic embedding-based methods. We propose an approach to building semantically rich term vocabularies for search indexes. Compared to other production semantic models, H1 paired with the proposed approach stands out due to its ability to process multi-word product terms as one token. For example, in the search queries "new balance shoes" and "gloria jeans kids wear", the brand entities are represented as single tokens: "new balance" and "gloria jeans". This results in increased precision of the system without affecting the recall. The hybrid search system with the proposed model scores mAP@12 = 56.1% and R@1k = 86.6% on the WANDS public dataset, beating other SoTA analogues.
[ { "created": "Mon, 3 Jun 2024 11:52:52 GMT", "version": "v1" } ]
2024-06-04
[ [ "Shcherbakov", "Viktor", "" ], [ "Krasnov", "Fedor", "" ] ]
2406.01377
Weihao Zeng
Weihao Zeng, Joseph Campbell, Simon Stepputtis, Katia Sycara
Multi-Agent Transfer Learning via Temporal Contrastive Learning
6 pages, 6 figures
2024 IEEE International Conference on Robotics and Automation (ICRA) 2024
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a novel transfer learning framework for deep multi-agent reinforcement learning. The approach automatically combines goal-conditioned policies with temporal contrastive learning to discover meaningful sub-goals. The approach involves pre-training a goal-conditioned agent, finetuning it on the target domain, and using contrastive learning to construct a planning graph that guides the agent via sub-goals. Experiments on multi-agent coordination Overcooked tasks demonstrate improved sample efficiency, the ability to solve sparse-reward and long-horizon problems, and enhanced interpretability compared to baselines. The results highlight the effectiveness of integrating goal-conditioned policies with unsupervised temporal abstraction learning for complex multi-agent transfer learning. Compared to state-of-the-art baselines, our method achieves the same or better performances while requiring only 21.7% of the training samples.
[ { "created": "Mon, 3 Jun 2024 14:42:14 GMT", "version": "v1" } ]
2024-06-04
[ [ "Zeng", "Weihao", "" ], [ "Campbell", "Joseph", "" ], [ "Stepputtis", "Simon", "" ], [ "Sycara", "Katia", "" ] ]
2406.01421
Zihao Zhang
Phillip Fernberg, Zihao Zhang
Problematizing AI Omnipresence in Landscape Architecture
null
Journal of Digital Landscape Architecture, 2024
10.14627/537752069
null
cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
This position paper argues for, and offers, a critical lens through which to examine the current AI frenzy in the landscape architecture profession. In it, the authors propose five archetypes or mental modes that landscape architects might inhabit when thinking about AI. Rather than limiting judgments of AI use to a single axis of acceleration, these archetypes and corresponding narratives exist along a relational spectrum and are permeable, allowing LAs to take on and switch between them according to context. We model these relationships between the archetypes and their contributions to AI advancement using a causal loop diagram (CLD), and with those interactions argue that more nuanced ways of approaching AI might also open new modes of practice in the new digital economy.
[ { "created": "Mon, 3 Jun 2024 15:20:05 GMT", "version": "v1" } ]
2024-06-04
[ [ "Fernberg", "Phillip", "" ], [ "Zhang", "Zihao", "" ] ]
2406.01618
Anjanava Biswas
Anjanava Biswas, Wrick Talukdar
FinEmbedDiff: A Cost-Effective Approach of Classifying Financial Documents with Vector Sampling using Multi-modal Embedding Models
10 pages, 3 figures
International Research Journal of Modernization in Engineering Technology and Science: Vol. 06 (2024): No. 5, 6142-6152
10.56726/IRJMETS57269
null
cs.IR cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Accurate classification of multi-modal financial documents, containing text, tables, charts, and images, is crucial but challenging. Traditional text-based approaches often fail to capture the complex multi-modal nature of these documents. We propose FinEmbedDiff, a cost-effective vector sampling method that leverages pre-trained multi-modal embedding models to classify financial documents. Our approach generates multi-modal embedding vectors for documents, and compares new documents with pre-computed class embeddings using vector similarity measures. Evaluated on a large dataset, FinEmbedDiff achieves competitive classification accuracy compared to state-of-the-art baselines while significantly reducing computational costs. The method exhibits strong generalization capabilities, making it a practical and scalable solution for real-world financial applications.
[ { "created": "Tue, 28 May 2024 16:34:24 GMT", "version": "v1" } ]
2024-06-05
[ [ "Biswas", "Anjanava", "" ], [ "Talukdar", "Wrick", "" ] ]
2406.01624
Alaa Nfissi
Alaa Nfissi, Wassim Bouachir, Nizar Bouguila, Brian Mishara
Unveiling Hidden Factors: Explainable AI for Feature Boosting in Speech Emotion Recognition
Published in: Springer Nature International Journal of Applied Intelligence (2024)
Applied Intelligence (2024), 1-24
10.1007/s10489-024-05536-5
null
eess.AS cs.AI cs.CL cs.LG cs.SD
http://creativecommons.org/licenses/by/4.0/
Speech emotion recognition (SER) has gained significant attention due to its many application fields, such as mental health, education, and human-computer interaction. However, the accuracy of SER systems is hindered by high-dimensional feature sets that may contain irrelevant and redundant information. To overcome this challenge, this study proposes an iterative feature boosting approach for SER that emphasizes feature relevance and explainability to enhance machine learning model performance. Our approach involves meticulous feature selection and analysis to build efficient SER systems. In addressing our main problem through model explainability, we employ a feature evaluation loop with Shapley values to iteratively refine feature sets. This process strikes a balance between model performance and transparency, which enables a comprehensive understanding of the model's predictions. The proposed approach offers several advantages, including the identification and removal of irrelevant and redundant features, leading to a more effective model. Additionally, it promotes explainability, facilitating comprehension of the model's predictions and the identification of crucial features for emotion determination. The effectiveness of the proposed method is validated on the SER benchmarks of the Toronto emotional speech set (TESS), Berlin Database of Emotional Speech (EMO-DB), Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), and Surrey Audio-Visual Expressed Emotion (SAVEE) datasets, outperforming state-of-the-art methods. To the best of our knowledge, this is the first work to incorporate model explainability into an SER framework. The source code of this paper is publicly available at https://github.com/alaaNfissi/Unveiling-Hidden-Factors-Explainable-AI-for-Feature-Boosting-in-Speech-Emotion-Recognition.
[ { "created": "Sat, 1 Jun 2024 00:39:55 GMT", "version": "v1" }, { "created": "Wed, 5 Jun 2024 22:21:55 GMT", "version": "v2" } ]
2024-06-07
[ [ "Nfissi", "Alaa", "" ], [ "Bouachir", "Wassim", "" ], [ "Bouguila", "Nizar", "" ], [ "Mishara", "Brian", "" ] ]
2406.01782
Leopoldo Carlos Agorio Grove
Leopoldo Agorio, Sean Van Alen, Miguel Calvo-Fullana, Santiago Paternain, Juan Andres Bazerque
Multi-agent assignment via state augmented reinforcement learning
12 pages, 3 figures, 6th Annual Conference on Learning for Dynamics and Control
Proceedings of Machine Learning Research, vol. 242, pp. 1-12, 2024. 6th Annual Conference on Learning for Dynamics and Control
null
null
eess.SY cs.AI cs.LG cs.MA cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the conflicting requirements of a multi-agent assignment problem through constrained reinforcement learning, emphasizing the inadequacy of standard regularization techniques for this purpose. Instead, we resort to a state augmentation approach in which the oscillation of dual variables is exploited by agents to alternate between tasks. In addition, we coordinate the actions of the multiple agents acting on their local states through these multipliers, which are gossiped through a communication network, eliminating the need to access other agent states. By these means, we propose a distributed multi-agent assignment protocol with theoretical feasibility guarantees that we corroborate in a monitoring numerical experiment.
[ { "created": "Mon, 3 Jun 2024 20:56:12 GMT", "version": "v1" } ]
2024-06-05
[ [ "Agorio", "Leopoldo", "" ], [ "Van Alen", "Sean", "" ], [ "Calvo-Fullana", "Miguel", "" ], [ "Paternain", "Santiago", "" ], [ "Bazerque", "Juan Andres", "" ] ]
2406.01789
Mario Truss
Mario Truss, Stephan Boehm
AI-based Classification of Customer Support Tickets: State of the Art and Implementation with AutoML
null
Proceedings of the IWEMB 2021/2022: Fifth and Sixth International Workshop on Entrepreneurship, Electronic and Mobile Business
null
null
cs.LG cs.AI cs.CL cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automation of support ticket classification is crucial for improving customer support performance and shortening resolution time for customer inquiries. This research aims to test the applicability of automated machine learning (AutoML) as a technology to train a machine learning model (ML model) that can classify support tickets. The model evaluation conducted in this research shows that AutoML can be used to train ML models with good classification performance. Moreover, this paper fills a research gap by providing new insights into developing AI solutions without a dedicated professional by utilizing AutoML, which makes this technology more accessible for companies without specialized AI departments and staff.
[ { "created": "Mon, 3 Jun 2024 21:13:02 GMT", "version": "v1" } ]
2024-06-05
[ [ "Truss", "Mario", "" ], [ "Boehm", "Stephan", "" ] ]
2406.01956
Panfeng Li
Zhicheng Ding, Panfeng Li, Qikai Yang, Siyang Li
Enhance Image-to-Image Generation with LLaVA-generated Prompts
Accepted by 2024 5th International Conference on Information Science, Parallel and Distributed Systems
Proceedings of the 2024 5th International Conference on Information Science, Parallel and Distributed Systems (ISPDS), 2024, pp. 77-81
10.1109/ISPDS62779.2024.10667513
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This paper presents a novel approach to enhance image-to-image generation by leveraging the multimodal capabilities of the Large Language and Vision Assistant (LLaVA). We propose a framework where LLaVA analyzes input images and generates textual descriptions, hereinafter LLaVA-generated prompts. These prompts, along with the original image, are fed into the image-to-image generation pipeline. This enriched representation guides the generation process towards outputs that exhibit a stronger resemblance to the input image. Extensive experiments demonstrate the effectiveness of LLaVA-generated prompts in promoting image similarity. We observe a significant improvement in the visual coherence between the generated and input images compared to traditional methods. Future work will explore fine-tuning LLaVA prompts for increased control over the creative process. By providing more specific details within the prompts, we aim to achieve a delicate balance between faithfulness to the original image and artistic expression in the generated outputs.
[ { "created": "Tue, 4 Jun 2024 04:31:39 GMT", "version": "v1" }, { "created": "Fri, 20 Sep 2024 23:03:49 GMT", "version": "v2" } ]
2024-09-24
[ [ "Ding", "Zhicheng", "" ], [ "Li", "Panfeng", "" ], [ "Yang", "Qikai", "" ], [ "Li", "Siyang", "" ] ]
2406.02018
Manasi Sharma
Manasi Sharma, Ho Chit Siu, Rohan Paleja, Jaime D. Pe\~na
Why Would You Suggest That? Human Trust in Language Model Responses
null
ICML Humans, Algorithmic Decision-Making and Society: Modeling Interactions and Impact Workshop 2024
null
null
cs.CL cs.AI cs.HC
http://creativecommons.org/licenses/by/4.0/
The emergence of Large Language Models (LLMs) has revealed a growing need for human-AI collaboration, especially in creative decision-making scenarios where trust and reliance are paramount. Through human studies and model evaluations on the open-ended News Headline Generation task from the LaMP benchmark, we analyze how the framing and presence of explanations affect user trust and model performance. Overall, we provide evidence that adding an explanation in the model response to justify its reasoning significantly increases self-reported user trust in the model when the user has the opportunity to compare various responses. Position and faithfulness of these explanations are also important factors. However, these gains disappear when users are shown responses independently, suggesting that humans trust all model responses, including deceptive ones, equitably when they are shown in isolation. Our findings urge future research to delve deeper into the nuanced evaluation of trust in human-machine teaming systems.
[ { "created": "Tue, 4 Jun 2024 06:57:47 GMT", "version": "v1" }, { "created": "Fri, 4 Oct 2024 16:46:00 GMT", "version": "v2" } ]
2024-10-07
[ [ "Sharma", "Manasi", "" ], [ "Siu", "Ho Chit", "" ], [ "Paleja", "Rohan", "" ], [ "Peña", "Jaime D.", "" ] ]
2406.02338
Michele Mastromattei
Michele Mastromattei, Fabio Massimo Zanzotto
Linguistic Fingerprint in Transformer Models: How Language Variation Influences Parameter Selection in Irony Detection
null
Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024
null
null
cs.CL cs.AI
http://creativecommons.org/publicdomain/zero/1.0/
This paper explores the correlation between linguistic diversity, sentiment analysis and transformer model architectures. We aim to investigate how different English variations impact transformer-based models for irony detection. To conduct our study, we used the EPIC corpus to extract five diverse English variation-specific datasets and applied the KEN pruning algorithm on five different architectures. Our results reveal several similarities between optimal subnetworks, which provide insights into the linguistic variations that share strong resemblances and those that exhibit greater dissimilarities. We discovered that optimal subnetworks across models share at least 60% of their parameters, emphasizing the significance of parameter values in capturing and interpreting linguistic variations. This study highlights the inherent structural similarities between models trained on different variants of the same language and also the critical role of parameter values in capturing these nuances.
[ { "created": "Tue, 4 Jun 2024 14:09:36 GMT", "version": "v1" } ]
2024-06-05
[ [ "Mastromattei", "Michele", "" ], [ "Zanzotto", "Fabio Massimo", "" ] ]
2406.02562
Gwantae Kim
Gwantae Kim, Bokyeung Lee, Donghyeon Kim and Hanseok Ko
Gated Low-rank Adaptation for personalized Code-Switching Automatic Speech Recognition on the low-spec devices
Table 2 is revised
ICASSP 2024 Workshop(HSCMA 2024) paper
null
null
eess.AS cs.AI cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
In recent times, there has been growing interest in utilizing personalized large models on low-spec devices, such as mobile and CPU-only devices. However, utilizing a personalized large model on-device is inefficient, and sometimes limited, due to computational cost. To tackle the problem, this paper presents a weights separation method to minimize on-device model weights using parameter-efficient fine-tuning methods. Moreover, some people speak multiple languages within a single utterance, a phenomenon known as code-switching, so a personalized ASR model is necessary to address such cases. However, current multilingual speech recognition models are limited to recognizing a single language within each utterance. To tackle this problem, we propose code-switching speech recognition models that incorporate fine-tuned monolingual and multilingual speech recognition models. Additionally, we introduce gated low-rank adaptation (GLoRA) for parameter-efficient fine-tuning with minimal performance degradation. Our experiments, conducted on Korean-English code-switching datasets, demonstrate that fine-tuning speech recognition models for code-switching surpasses the performance of traditional code-switching speech recognition models trained from scratch. Furthermore, GLoRA enhances parameter-efficient fine-tuning performance compared to conventional LoRA.
[ { "created": "Wed, 24 Apr 2024 01:31:39 GMT", "version": "v1" } ]
2024-06-06
[ [ "Kim", "Gwantae", "" ], [ "Lee", "Bokyeung", "" ], [ "Kim", "Donghyeon", "" ], [ "Ko", "Hanseok", "" ] ]
2406.02579
Louis Ledoux
Louis Ledoux and Marc Casas
An Open-Source Framework for Efficient Numerically-Tailored Computations
6 pages, open-source
International Conference on Field Programmable Logic and Applications 2023
10.1109/FPL60245.2023.00011
null
cs.MS cs.AI cs.AR cs.LG cs.NA math.NA
http://creativecommons.org/licenses/by/4.0/
We present a versatile open-source framework designed to facilitate efficient, numerically-tailored Matrix-Matrix Multiplications (MMMs). The framework offers two primary contributions: first, a fine-tuned, automated pipeline for arithmetic datapath generation, enabling highly customizable systolic MMM kernels; second, seamless integration of the generated kernels into user code, irrespective of the programming language employed, without necessitating modifications. The framework demonstrates a systematic enhancement in accuracy per energy cost across diverse High Performance Computing (HPC) workloads displaying a variety of numerical requirements, such as Artificial Intelligence (AI) inference and Sea Surface Height (SSH) computation. For AI inference, we consider a set of state-of-the-art neural network models, namely ResNet18, ResNet34, ResNet50, DenseNet121, DenseNet161, DenseNet169, and VGG11, in conjunction with two datasets, two computer formats, and 27 distinct intermediate arithmetic datapaths. Our approach consistently reduces energy consumption across all cases, with a notable example being the reduction by factors of $3.3\times$ for IEEE754-32 and $1.4\times$ for Bfloat16 during ImageNet inference with ResNet50. This is accomplished while maintaining accuracies of $82.3\%$ and $86\%$, comparable to those achieved with conventional Floating-Point Units (FPUs). In the context of SSH computation, our method achieves fully-reproducible results using double-precision words, surpassing the accuracy of conventional double- and quad-precision arithmetic in FPUs. Our approach enhances SSH computation accuracy by a minimum of $5\times$ and $27\times$ compared to IEEE754-64 and IEEE754-128, respectively, resulting in $5.6\times$ and $15.1\times$ improvements in accuracy per power cost.
[ { "created": "Wed, 29 May 2024 10:10:53 GMT", "version": "v1" } ]
2024-06-06
[ [ "Ledoux", "Louis", "" ], [ "Casas", "Marc", "" ] ]
2406.02591
Ivan Dubrovsky
Ivan Dubrovsky, Andrei Dmitrenko, Aleksei Dmitrenko, Nikita Serov, Vladimir Vinogradov
Unveiling the Potential of AI for Nanomaterial Morphology Prediction
null
Proceedings of the 41st International Conference on Machine Learning, PMLR 235, 2024, pp. 11957-11978
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Creation of nanomaterials with specific morphology remains a complex experimental process, even though there is a growing demand for these materials in various industry sectors. This study explores the potential of AI to predict the morphology of nanoparticles within the data availability constraints. For that, we first generated a new multi-modal dataset that is double the size of analogous studies. Then, we systematically evaluated performance of classical machine learning and large language models in prediction of nanomaterial shapes and sizes. Finally, we prototyped a text-to-image system, discussed the obtained empirical results, as well as the limitations and promises of existing approaches.
[ { "created": "Fri, 31 May 2024 19:16:07 GMT", "version": "v1" } ]
2024-08-01
[ [ "Dubrovsky", "Ivan", "" ], [ "Dmitrenko", "Andrei", "" ], [ "Dmitrenko", "Aleksei", "" ], [ "Serov", "Nikita", "" ], [ "Vinogradov", "Vladimir", "" ] ]
2406.02921
Zhong Meng
Zhong Meng, Zelin Wu, Rohit Prabhavalkar, Cal Peyser, Weiran Wang, Nanxin Chen, Tara N. Sainath, Bhuvana Ramabhadran
Text Injection for Neural Contextual Biasing
5 pages, 1 figure
Interspeech 2024, Kos Island, Greece
null
null
cs.CL cs.AI cs.LG cs.NE eess.AS
http://creativecommons.org/licenses/by/4.0/
Neural contextual biasing effectively improves automatic speech recognition (ASR) for crucial phrases within a speaker's context, particularly those that are infrequent in the training data. This work proposes contextual text injection (CTI) to enhance contextual ASR. CTI leverages not only the paired speech-text data, but also a much larger corpus of unpaired text to optimize the ASR model and its biasing component. Unpaired text is converted into speech-like representations and used to guide the model's attention towards relevant bias phrases. Moreover, we introduce a contextual text-injected (CTI) minimum word error rate (MWER) training, which minimizes the expected WER caused by contextual biasing when unpaired text is injected into the model. Experiments show that CTI with 100 billion text sentences can achieve up to 43.3% relative WER reduction from a strong neural biasing model. CTI-MWER provides a further relative improvement of 23.5%.
[ { "created": "Wed, 5 Jun 2024 04:20:17 GMT", "version": "v1" }, { "created": "Tue, 11 Jun 2024 04:11:56 GMT", "version": "v2" } ]
2024-06-12
[ [ "Meng", "Zhong", "" ], [ "Wu", "Zelin", "" ], [ "Prabhavalkar", "Rohit", "" ], [ "Peyser", "Cal", "" ], [ "Wang", "Weiran", "" ], [ "Chen", "Nanxin", "" ], [ "Sainath", "Tara N.", "" ], [ "Ramabhadran", "Bhuvana", "" ] ]
2406.02996
Wooseong Jeong
Wooseong Jeong, Kuk-Jin Yoon
Quantifying Task Priority for Multi-Task Optimization
null
CVPR 2024
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of multi-task learning is to learn diverse tasks within a single unified network. As each task has its own unique objective function, conflicts emerge during training, resulting in negative transfer among them. Earlier research identified these conflicting gradients in shared parameters between tasks and attempted to realign them in the same direction. However, we prove that such optimization strategies lead to sub-optimal Pareto solutions due to their inability to accurately determine the individual contributions of each parameter across various tasks. In this paper, we propose the concept of task priority to evaluate parameter contributions across different tasks. To learn task priority, we identify the type of connections related to links between parameters influenced by task-specific losses during backpropagation. The strength of connections is gauged by the magnitude of parameters to determine task priority. Based on these, we present a new method named connection strength-based optimization for multi-task learning which consists of two phases. The first phase learns the task priority within the network, while the second phase modifies the gradients while upholding this priority. This ultimately leads to finding new Pareto optimal solutions for multiple tasks. Through extensive experiments, we show that our approach greatly enhances multi-task performance in comparison to earlier gradient manipulation methods.
[ { "created": "Wed, 5 Jun 2024 06:52:29 GMT", "version": "v1" } ]
2024-06-06
[ [ "Jeong", "Wooseong", "" ], [ "Yoon", "Kuk-Jin", "" ] ]
2406.03030
Ali Malik
Ali Malik, Stephen Mayhew, Chris Piech, Klinton Bicknell
From Tarzan to Tolkien: Controlling the Language Proficiency Level of LLMs for Content Generation
null
In Findings of the Association for Computational Linguistics (ACL 2024)
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
We study the problem of controlling the difficulty level of text generated by Large Language Models (LLMs) for contexts where end-users are not fully proficient, such as language learners. Using a novel framework, we evaluate the effectiveness of several key approaches for this task, including few-shot prompting, supervised finetuning, and reinforcement learning (RL), utilising both GPT-4 and open source alternatives like LLama2-7B and Mistral-7B. Our findings reveal a large performance gap between GPT-4 and the open source models when using prompt-based strategies. However, we show how to bridge this gap with a careful combination of finetuning and RL alignment. Our best model, CALM (CEFR-Aligned Language Model), surpasses the performance of GPT-4 and other strategies, at only a fraction of the cost. We further validate the quality of our results through a small-scale human study.
[ { "created": "Wed, 5 Jun 2024 07:57:17 GMT", "version": "v1" } ]
2024-06-06
[ [ "Malik", "Ali", "" ], [ "Mayhew", "Stephen", "" ], [ "Piech", "Chris", "" ], [ "Bicknell", "Klinton", "" ] ]
2406.03117
Zhixun He
Zhixun He and Mukesh Singhal
VQUNet: Vector Quantization U-Net for Defending Adversarial Attacks by Regularizing Unwanted Noise
8 pages, 6 figures
2024 7th International Conference on Machine Vision and Applications (ICMVA)
10.1145/3653946.3653957
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Deep Neural Networks (DNN) have become a promising paradigm when developing Artificial Intelligence (AI) and Machine Learning (ML) applications. However, DNN applications are vulnerable to fake data that are crafted with adversarial attack algorithms. Under adversarial attacks, the prediction accuracy of DNN applications suffers, making them unreliable. In order to defend against adversarial attacks, we introduce a novel noise-reduction procedure, Vector Quantization U-Net (VQUNet), to reduce adversarial noise and reconstruct data with high fidelity. VQUNet features a discrete latent representation learning through a multi-scale hierarchical structure for both noise reduction and data reconstruction. The empirical experiments show that the proposed VQUNet provides better robustness to the target DNN models, and it outperforms other state-of-the-art noise-reduction-based defense methods under various adversarial attacks for both Fashion-MNIST and CIFAR10 datasets. When there is no adversarial attack, the defense method has less than 1% accuracy degradation for both datasets.
[ { "created": "Wed, 5 Jun 2024 10:10:03 GMT", "version": "v1" } ]
2024-06-06
[ [ "He", "Zhixun", "" ], [ "Singhal", "Mukesh", "" ] ]
2406.03194
Moises Diaz
Moises Diaz, Gioele Crispo, Antonio Parziale, Angelo Marcelli, Miguel A. Ferrer
Writing Order Recovery in Complex and Long Static Handwriting
null
International Journal of Interactive Multimedia and Artificial Intelligence, Volume 7, number 4, Pages 171-184, 2022
10.9781/ijimai.2021.04.003
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
The order in which the trajectory is executed is a powerful source of information for recognizers. However, there is still no general approach for recovering the trajectory of complex and long handwriting from static images. Complex specimens can result in multiple pen-downs and in a high number of trajectory crossings yielding agglomerations of pixels (also known as clusters). While the scientific literature describes a wide range of approaches for recovering the writing order in handwriting, these approaches nevertheless lack a common evaluation metric. In this paper, we introduce a new system to estimate the order recovery of thinned static trajectories, which allows us to effectively resolve the clusters and select the order of the executed pen-downs. We evaluate how knowing the starting points of the pen-downs affects the quality of the recovered writing. Once the stability and sensitivity of the system are analyzed, we describe a series of experiments with three publicly available databases, showing competitive results in all cases. We expect the proposed system, whose code is made publicly available to the research community, to reduce potential confusion when the order of complex trajectories is recovered, which will in turn make the recovered trajectories viable for further applications, such as velocity estimation.
[ { "created": "Wed, 5 Jun 2024 12:23:17 GMT", "version": "v1" } ]
2024-06-06
[ [ "Diaz", "Moises", "" ], [ "Crispo", "Gioele", "" ], [ "Parziale", "Antonio", "" ], [ "Marcelli", "Angelo", "" ], [ "Ferrer", "Miguel A.", "" ] ]
2406.03221
Pierre Nugues
Pierre Nugues
Linking Named Entities in Diderot's \textit{Encyclop\'edie} to Wikidata
6 pages, 3 figures
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pp. 10610--10615
null
null
cs.CL cs.IR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Diderot's \textit{Encyclop\'edie} is a reference work from the XVIIIth century in Europe that aimed at collecting the knowledge of its era. \textit{Wikipedia} has the same ambition with a much greater scope. However, the lack of a digital connection between the two encyclopedias may hinder their comparison and the study of how knowledge has evolved. A key element of \textit{Wikipedia} is Wikidata, which backs the articles with a graph of structured data. In this paper, we describe the annotation of more than 10,300 of the \textit{Encyclop\'edie} entries with Wikidata identifiers, enabling us to connect these entries to the graph. We considered geographic and human entities. The \textit{Encyclop\'edie} does not contain biographic entries as they mostly appear as subentries of locations. We extracted all the geographic entries and completely annotated all the entries containing a description of human entities. This represents more than 2,600 links referring to locations or human entities. In addition, we annotated more than 9,500 entries having geographic content only. We describe the annotation process as well as application examples. This resource is available at https://github.com/pnugues/encyclopedie_1751
[ { "created": "Wed, 5 Jun 2024 13:00:04 GMT", "version": "v1" } ]
2024-06-06
[ [ "Nugues", "Pierre", "" ] ]
2406.03245
Aakash Gautam
Aakash Gautam
Reconfiguring Participatory Design to Resist AI Realism
6 pages, 1 table
Participatory Design Conference 2024
10.1145/3661455.3669867
null
cs.HC cs.AI cs.SI
http://creativecommons.org/licenses/by/4.0/
The growing trend of artificial intelligence (AI) as a solution to social and technical problems reinforces AI Realism -- the belief that AI is an inevitable and natural order. In response, this paper argues that participatory design (PD), with its focus on democratic values and processes, can play a role in questioning and resisting AI Realism. I examine three concerning aspects of AI Realism: the facade of democratization that lacks true empowerment, demands for human adaptability in contrast to AI systems' inflexibility, and the obfuscation of essential human labor enabling the AI system. I propose resisting AI Realism by reconfiguring PD to continue engaging with value-centered visions, increasing its exploration of non-AI alternatives, and making the essential human labor underpinning AI systems visible. I position PD as a means to generate friction against AI Realism and open space for alternative futures centered on human needs and values.
[ { "created": "Wed, 5 Jun 2024 13:21:46 GMT", "version": "v1" }, { "created": "Sat, 8 Jun 2024 18:19:00 GMT", "version": "v2" } ]
2024-06-11
[ [ "Gautam", "Aakash", "" ] ]
2406.03359
Cristhian David Forigua Diaz
Cristhian Forigua, Maria Escobar and Pablo Arbelaez
SuperFormer: Volumetric Transformer Architectures for MRI Super-Resolution
null
7th International Workshop, SASHIMI 2022, Held in Conjunction with MICCAI 2022, Singapore, September 18, 2022, Proceedings
10.1007/978-3-031-16980-9_13
null
eess.IV cs.CV
http://creativecommons.org/licenses/by/4.0/
This paper presents a novel framework for processing volumetric medical information using Visual Transformers (ViTs). First, we extend the state-of-the-art Swin Transformer model to the 3D medical domain. Second, we propose a new approach for processing volumetric information and encoding position in ViTs for 3D applications. We instantiate the proposed framework and present SuperFormer, a volumetric transformer-based approach for Magnetic Resonance Imaging (MRI) Super-Resolution. Our method leverages the 3D information of the MRI domain and uses a local self-attention mechanism with a 3D relative positional encoding to recover anatomical details. In addition, our approach takes advantage of multi-domain information from volume and feature domains and fuses them to reconstruct the High-Resolution MRI. We perform an extensive validation on the Human Connectome Project dataset and demonstrate the superiority of volumetric transformers over 3D CNN-based methods. Our code and pretrained models are available at https://github.com/BCV-Uniandes/SuperFormer.
[ { "created": "Wed, 5 Jun 2024 15:14:29 GMT", "version": "v1" } ]
2024-06-06
[ [ "Forigua", "Cristhian", "" ], [ "Escobar", "Maria", "" ], [ "Arbelaez", "Pablo", "" ] ]
2406.03388
Joaquim Jorge
Alexandre Duarte, Francisco Fernandes, Jo\~ao M. Pereira, Catarina Moreira, Jacinto C. Nascimento, Joaquim Jorge
SelfReDepth: Self-Supervised Real-Time Depth Restoration for Consumer-Grade Sensors
13pp, 5 figures, 1 table
Journal of Real-Time Image Processing 2024
10.1007/s11554-024-01491-z
null
cs.CV cs.AI cs.HC
http://creativecommons.org/licenses/by-sa/4.0/
Depth maps produced by consumer-grade sensors suffer from inaccurate measurements and missing data caused by the system or the scene. Data-driven denoising algorithms can mitigate such problems; however, they require vast amounts of ground-truth depth data. Recent research has tackled this limitation using self-supervised learning techniques, but these require multiple RGB-D sensors. Moreover, most existing approaches focus on denoising single isolated depth maps or specific subjects of interest, highlighting a need for methods that can effectively denoise depth maps in real-time dynamic environments. This paper extends state-of-the-art approaches for denoising data from commodity depth devices, proposing SelfReDepth, a self-supervised deep learning technique that restores depth by denoising and inpainting-based hole-filling of full depth maps captured with RGB-D sensors. The algorithm targets depth data in video streams, combining multiple sequential depth frames with color data to achieve high-quality depth videos with temporal coherence. Finally, SelfReDepth is designed to be compatible with various RGB-D sensors and usable in real time as a pre-processing step before other depth-dependent algorithms. Our results on real-world datasets show that our approach outperforms state-of-the-art denoising and restoration methods while running at over 30 fps on commercial depth cameras, with potential benefits for augmented and mixed-reality applications.
[ { "created": "Wed, 5 Jun 2024 15:38:02 GMT", "version": "v1" } ]
2024-07-04
[ [ "Duarte", "Alexandre", "" ], [ "Fernandes", "Francisco", "" ], [ "Pereira", "João M.", "" ], [ "Moreira", "Catarina", "" ], [ "Nascimento", "Jacinto C.", "" ], [ "Jorge", "Joaquim", "" ] ]
2406.03470
Zekai Xu
Kang You, Zekai Xu, Chen Nie, Zhijie Deng, Qinghai Guo, Xiang Wang and Zhezhi He
SpikeZIP-TF: Conversion is All You Need for Transformer-based SNN
* These authors contributed equally to this work
International Conference on Machine Learning 2024
null
null
cs.NE cs.AI
http://creativecommons.org/licenses/by/4.0/
Spiking neural networks (SNNs) have attracted great attention due to their high efficiency and accuracy. Current ANN-to-SNN conversion methods can produce SNNs whose accuracy is on par with that of ANNs at ultra-low latency (8 time-steps) for CNN architectures on computer vision (CV) tasks. However, while Transformer-based networks have achieved prevailing precision on both CV and natural language processing (NLP) tasks, Transformer-based SNNs still suffer lower accuracy than their ANN counterparts. In this work, we introduce a novel ANN-to-SNN conversion method called SpikeZIP-TF, where the ANN and the SNN are exactly equivalent, thus incurring no accuracy degradation. SpikeZIP-TF achieves 83.82% accuracy on a CV dataset (ImageNet) and 93.79% accuracy on an NLP dataset (SST-2), which are higher than those of SOTA Transformer-based SNNs. The code is available on GitHub: https://github.com/Intelligent-Computing-Research-Group/SpikeZIP_transformer
[ { "created": "Wed, 5 Jun 2024 17:24:07 GMT", "version": "v1" } ]
2024-08-21
[ [ "You", "Kang", "" ], [ "Xu", "Zekai", "" ], [ "Nie", "Chen", "" ], [ "Deng", "Zhijie", "" ], [ "Guo", "Qinghai", "" ], [ "Wang", "Xiang", "" ], [ "He", "Zhezhi", "" ] ]
2406.03512
Nicolas Michael M\"uller
Nicolas M. M\"uller, Nicholas Evans, Hemlata Tak, Philip Sperl, Konstantin B\"ottinger
Harder or Different? Understanding Generalization of Audio Deepfake Detection
null
Interspeech 2024
null
null
cs.SD cs.AI eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent research has highlighted a key issue in speech deepfake detection: models trained on one set of deepfakes perform poorly on others. The question arises: is this due to the continuously improving quality of Text-to-Speech (TTS) models, i.e., are newer deepfakes simply 'harder' to detect? Or is it because deepfakes generated with one model are fundamentally different from those generated with another? We answer this question by decomposing the performance gap between in-domain and out-of-domain test data into 'hardness' and 'difference' components. Experiments performed using ASVspoof databases indicate that the hardness component is practically negligible, with the performance gap attributed primarily to the difference component. This has direct implications for real-world deepfake detection, highlighting that merely increasing model capacity, the currently dominant research trend, may not effectively address the generalization challenge.
[ { "created": "Wed, 5 Jun 2024 10:33:15 GMT", "version": "v1" }, { "created": "Fri, 7 Jun 2024 13:53:07 GMT", "version": "v2" }, { "created": "Wed, 12 Jun 2024 16:54:01 GMT", "version": "v3" } ]
2024-06-13
[ [ "Müller", "Nicolas M.", "" ], [ "Evans", "Nicholas", "" ], [ "Tak", "Hemlata", "" ], [ "Sperl", "Philip", "" ], [ "Böttinger", "Konstantin", "" ] ]
2406.03556
Utsab Saha
Utsab Saha, Sawradip Saha, Shaikh Anowarul Fattah, and Mohammad Saquib
Npix2Cpix: A GAN-Based Image-to-Image Translation Network With Retrieval-Classification Integration for Watermark Retrieval From Historical Document Images
null
IEEE Access 12 (2024) 95857-95870
10.1109/ACCESS.2024.3424662
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The identification and restoration of ancient watermarks have long been a major topic in codicology and history. Classifying historical documents based on watermarks is challenging due to their diversity, noisy samples, multiple representation modes, subtle distinctions between classes, and intra-class variations. This paper proposes a modified U-Net-based conditional generative adversarial network (GAN), named Npix2Cpix, to translate noisy raw historical watermarked images into clean, handwriting-free watermarked images by mapping degraded (noisy) pixels to clean pixels. Using image-to-image translation and adversarial learning, the network creates clutter-free images for watermark restoration and categorization. The generator and discriminator of the proposed GAN are trained using two separate loss functions, each based on the distance between images, to learn the mapping from the input noisy image to the output clean image. After using the proposed GAN to pre-process noisy watermarked images, Siamese-based one-shot learning is employed for watermark classification. Experimental results on a large-scale historical watermark dataset demonstrate that cleaning the noisy watermarked images helps to achieve high one-shot classification accuracy. The qualitative and quantitative evaluations of the retrieved watermarked images highlight the effectiveness of the proposed approach.
[ { "created": "Wed, 5 Jun 2024 18:10:49 GMT", "version": "v1" }, { "created": "Wed, 24 Jul 2024 18:50:51 GMT", "version": "v2" }, { "created": "Mon, 16 Sep 2024 05:14:14 GMT", "version": "v3" } ]
2024-09-17
[ [ "Saha", "Utsab", "" ], [ "Saha", "Sawradip", "" ], [ "Fattah", "Shaikh Anowarul", "" ], [ "Saquib", "Mohammad", "" ] ]
2406.03665
Jihyeon Seong
Jihyeon Seong, Sekwang Oh, Jaesik Choi
Towards Dynamic Trend Filtering through Trend Point Detection with Reinforcement Learning
18 pages, 11 figures
IJCAI 2024
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Trend filtering simplifies complex time series data by applying smoothness to filter out noise while emphasizing proximity to the original data. However, existing trend filtering methods fail to reflect abrupt changes in the trend due to `approximateness,' resulting in constant smoothness. This approximateness uniformly filters out the tail distribution of time series data, characterized by extreme values, including both abrupt changes and noise. In this paper, we propose Trend Point Detection formulated as a Markov Decision Process (MDP), a novel approach to identifying essential points that should be reflected in the trend, departing from approximations. We term these essential points Dynamic Trend Points (DTPs) and extract trends by interpolating them. To identify DTPs, we utilize Reinforcement Learning (RL) within a discrete action space, with a forecasting sum-of-squares loss function as the reward; we refer to this as the Dynamic Trend Filtering network (DTF-net). DTF-net integrates flexible noise filtering, preserving critical original subsequences while removing noise as required for other subsequences. We demonstrate that DTF-net excels at capturing abrupt changes compared to other trend filtering algorithms and enhances forecasting performance, as abrupt changes are predicted rather than smoothed out.
[ { "created": "Thu, 6 Jun 2024 00:50:22 GMT", "version": "v1" } ]
2024-07-12
[ [ "Seong", "Jihyeon", "" ], [ "Oh", "Sekwang", "" ], [ "Choi", "Jaesik", "" ] ]
2406.03859
Moises Diaz
Miguel A. Ferrer, Josep A. Calduch-Giner, Moises D\'iaz, Javier Sosa, Enrique Rosell-Moll, Judith Santana Abril, Graciela Santana Sosa, Tom\'as Bautista Delgado, Cristina Carmona, Juan Antonio Martos-Sitcha, Enric Cabruja, Juan Manuel Afonso, Aurelio Vega, Manuel Lozano, Juan Antonio Montiel-Nelson, Jaume P\'erez-S\'anchez
From operculum and body tail movements to different coupling of physical activity and respiratory frequency in farmed gilthead sea bream and European sea bass. Insights on aquaculture biosensing
null
Computers and Electronics in Agriculture, vol. 175, p. 105531, 2020
10.1016/j.compag.2020.105531
null
cs.CV q-bio.PE
http://creativecommons.org/licenses/by-nc-nd/4.0/
The AEFishBIT tri-axial accelerometer was externally attached to the operculum to assess the divergent activity and respiratory patterns of two marine farmed fish, the gilthead sea bream (Sparus aurata) and European sea bass (Dicentrarchus labrax). Analysis of raw data from exercised fish highlighted the large amplitude of operculum aperture and body tail movements in European sea bass, which were overall more stable at low-medium exercise intensity levels. Cosinor analysis in free-swimming fish (on-board data processing) highlighted a pronounced daily rhythmicity of locomotor activity and respiratory frequency in both gilthead sea bream and European sea bass. Acrophases of activity and respiration were coupled in gilthead sea bream, with feeding time (once daily at 11:00 h) acting as a main synchronizing factor. By contrast, locomotor activity and respiratory frequency were out of phase in European sea bass, with the activity acrophase in the early morning and the respiration acrophase in the afternoon. The daily range of activity and respiration variation was also higher in European sea bass, probably as part of the adaptation of this fish species to act as a fast swimming predator. In any case, lower locomotor activity and enhanced respiration were associated with larger body weight in both fish species. This agrees with the notion that selection for fast growth in farming conditions is accompanied by a lower activity profile, which may favor an efficient feed conversion for growth purposes. Therefore, the use of behavioral monitoring is becoming a reliable and large-scale promising tool for selecting more efficient farmed fish, allowing researchers and farmers to establish stricter criteria of welfare for more sustainable and ethical fish production.
[ { "created": "Thu, 6 Jun 2024 08:46:00 GMT", "version": "v1" } ]
2024-06-07
[ [ "Ferrer", "Miguel A.", "" ], [ "Calduch-Giner", "Josep A.", "" ], [ "Díaz", "Moises", "" ], [ "Sosa", "Javier", "" ], [ "Rosell-Moll", "Enrique", "" ], [ "Abril", "Judith Santana", "" ], [ "Sosa", "Graciela Santana", "" ], [ "Delgado", "Tomás Bautista", "" ], [ "Carmona", "Cristina", "" ], [ "Martos-Sitcha", "Juan Antonio", "" ], [ "Cabruja", "Enric", "" ], [ "Afonso", "Juan Manuel", "" ], [ "Vega", "Aurelio", "" ], [ "Lozano", "Manuel", "" ], [ "Montiel-Nelson", "Juan Antonio", "" ], [ "Pérez-Sánchez", "Jaume", "" ] ]
2406.03881
Matthias Sperber
Matthias Sperber, Ond\v{r}ej Bojar, Barry Haddow, D\'avid Javorsk\'y, Xutai Ma, Matteo Negri, Jan Niehues, Peter Pol\'ak, Elizabeth Salesky, Katsuhito Sudoh, Marco Turchi
Evaluating the IWSLT2023 Speech Translation Tasks: Human Annotations, Automatic Metrics, and Segmentation
LREC-COLING2024 publication (with corrections for Table 3)
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human evaluation is a critical component in machine translation system development and has received much attention in text translation research. However, little prior work exists on the topic of human evaluation for speech translation, which adds additional challenges such as noisy data and segmentation mismatches. We take first steps to fill this gap by conducting a comprehensive human evaluation of the results of several shared tasks from the last International Workshop on Spoken Language Translation (IWSLT 2023). We propose an effective evaluation strategy based on automatic resegmentation and direct assessment with segment context. Our analysis revealed that: 1) the proposed evaluation strategy is robust and its scores correlate well with other types of human judgements; 2) automatic metrics are usually, but not always, well-correlated with direct assessment scores; and 3) COMET is a slightly stronger automatic metric than chrF, despite the segmentation noise introduced by the resegmentation step. We release the collected human-annotated data in order to encourage further investigation.
[ { "created": "Thu, 6 Jun 2024 09:18:42 GMT", "version": "v1" } ]
2024-06-07
[ [ "Sperber", "Matthias", "" ], [ "Bojar", "Ondřej", "" ], [ "Haddow", "Barry", "" ], [ "Javorský", "Dávid", "" ], [ "Ma", "Xutai", "" ], [ "Negri", "Matteo", "" ], [ "Niehues", "Jan", "" ], [ "Polák", "Peter", "" ], [ "Salesky", "Elizabeth", "" ], [ "Sudoh", "Katsuhito", "" ], [ "Turchi", "Marco", "" ] ]
2406.03897
Tzuf Paz-Argaman
Tzuf Paz-Argaman, Itai Mondshine, Asaf Achi Mordechai, and Reut Tsarfaty
HeSum: a Novel Dataset for Abstractive Text Summarization in Hebrew
null
ACL 2024 Findings
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
While large language models (LLMs) excel in various natural language tasks in English, their performance in lower-resourced languages like Hebrew, especially for generative tasks such as abstractive summarization, remains unclear. The high morphological richness of Hebrew adds further challenges due to the ambiguity in sentence comprehension and the complexities in meaning construction. In this paper, we address this resource and evaluation gap by introducing HeSum, a novel benchmark specifically designed for abstractive text summarization in Modern Hebrew. HeSum consists of 10,000 article-summary pairs sourced from Hebrew news websites written by professionals. Linguistic analysis confirms HeSum's high abstractness and unique morphological challenges. We show that HeSum presents distinct difficulties for contemporary state-of-the-art LLMs, establishing it as a valuable testbed for generative language technology in Hebrew, and for the generative challenges of morphologically rich languages (MRLs) in general.
[ { "created": "Thu, 6 Jun 2024 09:36:14 GMT", "version": "v1" }, { "created": "Mon, 10 Jun 2024 05:45:25 GMT", "version": "v2" } ]
2024-06-11
[ [ "Paz-Argaman", "Tzuf", "" ], [ "Mondshine", "Itai", "" ], [ "Mordechai", "Asaf Achi", "" ], [ "Tsarfaty", "Reut", "" ] ]
2406.03901
Adrian Galdran
Adrian Galdran
Polyp and Surgical Instrument Segmentation with Double Encoder-Decoder Networks
null
NMI, Vol. 1 No. 1 (2021): MedAI: Transparency in Medical Image Segmentation
10.5617/nmi.9107
null
eess.IV cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper describes a solution for the MedAI competition, in which participants were required to segment both polyps and surgical instruments from endoscopic images. Our approach relies on a double encoder-decoder neural network which we have previously applied for polyp segmentation, but with a series of enhancements: a more powerful encoder architecture, an improved optimization procedure, and the post-processing of segmentations based on tempered model ensembling. Experimental results show that our method produces segmentations that show a good agreement with manual delineations provided by medical experts.
[ { "created": "Thu, 6 Jun 2024 09:37:46 GMT", "version": "v1" } ]
2024-06-07
[ [ "Galdran", "Adrian", "" ] ]
2406.03984
Sofija Engelson
Sofija Engelson, Jan Ehrhardt, Timo Kepp, Joshua Niemeijer and Heinz Handels
LNQ Challenge 2023: Learning Mediastinal Lymph Node Segmentation with a Probabilistic Lymph Node Atlas
Accepted for publication at the Journal of Machine Learning for Biomedical Imaging (MELBA) https://melba-journal.org/2024:009
Machine.Learning.for.Biomedical.Imaging. 2 (2024)
10.59275/j.melba.2024-009
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The evaluation of lymph node metastases plays a crucial role in achieving precise cancer staging, influencing subsequent decisions regarding treatment options. Lymph node detection poses challenges due to the presence of unclear boundaries and the diverse range of sizes and morphological characteristics, making it a resource-intensive process. As part of the LNQ 2023 MICCAI challenge, we propose using anatomical priors to address the challenges that persist in mediastinal lymph node segmentation, in combination with handling the partial annotation of the challenge training data. The model ensemble using all suggested modifications yields a Dice score of 0.6033 and segments 57% of the ground-truth lymph nodes, compared to 27% when training on CT only. Segmentation accuracy is improved significantly by incorporating a probabilistic lymph node atlas in loss weighting and post-processing. The largest performance gains are achieved by oversampling fully annotated data to account for the partial annotation of the challenge training data, as well as by adding data augmentation to address the high heterogeneity of the CT images and lymph node appearance. Our code is available at https://github.com/MICAI-IMI-UzL/LNQ2023.
[ { "created": "Thu, 6 Jun 2024 11:57:25 GMT", "version": "v1" } ]
2024-06-07
[ [ "Engelson", "Sofija", "" ], [ "Ehrhardt", "Jan", "" ], [ "Kepp", "Timo", "" ], [ "Niemeijer", "Joshua", "" ], [ "Handels", "Heinz", "" ] ]
2406.03986
Ankan Mullick
Ankan Mullick, Sombit Bose, Rounak Saha, Ayan Kumar Bhowmick, Pawan Goyal, Niloy Ganguly, Prasenjit Dey, Ravi Kokku
On The Persona-based Summarization of Domain-Specific Documents
null
ACL 2024 Findings (Association for Computational Linguistics)
null
null
cs.CL cs.IR
http://creativecommons.org/publicdomain/zero/1.0/
In an ever-expanding world of domain-specific knowledge, the increasing complexity of consuming and storing information necessitates the generation of summaries from large information repositories. However, each persona of a domain has different information requirements and hence different summarization needs. For example, in the healthcare domain, a persona-based (such as Doctor, Nurse, Patient etc.) approach is imperative to deliver targeted medical information efficiently. Persona-based summarization of domain-specific information by humans is a high cognitive load task and is generally not preferred. The summaries generated by two different humans have high variability and do not scale in cost and subject-matter expertise as domains and personas grow. Further, AI-generated summaries using generic Large Language Models (LLMs) may not necessarily offer satisfactory accuracy for different domains unless they have been specifically trained on domain-specific data, and they can also be very expensive to use in day-to-day operations. Our contribution in this paper is two-fold: 1) We present an approach to efficiently fine-tune a domain-specific small foundation LLM using a healthcare corpus and also show that we can effectively evaluate the summarization quality using AI-based critiquing. 2) We further show that AI-based critiquing has good concordance with human-based critiquing of the summaries. Hence, such AI-based pipelines to generate domain-specific persona-based summaries can be easily scaled to other domains such as legal, enterprise documents, education etc. in a very efficient and cost-effective manner.
[ { "created": "Thu, 6 Jun 2024 12:00:41 GMT", "version": "v1" } ]
2024-06-10
[ [ "Mullick", "Ankan", "" ], [ "Bose", "Sombit", "" ], [ "Saha", "Rounak", "" ], [ "Bhowmick", "Ayan Kumar", "" ], [ "Goyal", "Pawan", "" ], [ "Ganguly", "Niloy", "" ], [ "Dey", "Prasenjit", "" ], [ "Kokku", "Ravi", "" ] ]
2406.04050
Thomas Schmitt
Thomas H. Schmitt, Maximilian Bundscherer and Tobias Bocklet
Semmeldetector: Application of Machine Learning in Commercial Bakeries
null
2023 International Conference on Machine Learning and Applications (ICMLA), IEEE, 2023, pp. 878-883
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
The Semmeldetector is a machine learning application that utilizes object detection models to detect, classify and count baked goods in images. Our application allows commercial bakers to track unsold baked goods, which allows them to optimize production and increase resource efficiency. We compiled a dataset comprising 1151 images that distinguishes between 18 different types of baked goods to train our detection models. To facilitate model training, we used a Copy-Paste augmentation pipeline to expand our dataset. We trained the state-of-the-art object detection model YOLOv8 on our detection task. We tested the impact of different training data, model scales, and online image augmentation pipelines on model performance. Our overall best performing model achieved an AP@0.5 of 89.1% on our test set. Based on our results, we conclude that machine learning can be a valuable tool even for industries it has rarely served so far, such as bakeries, and even with very limited datasets.
[ { "created": "Thu, 6 Jun 2024 13:17:24 GMT", "version": "v1" } ]
2024-06-07
[ [ "Schmitt", "Thomas H.", "" ], [ "Bundscherer", "Maximilian", "" ], [ "Bocklet", "Tobias", "" ] ]
2406.04101
Yihang Chen
Yihang Chen, Qianyi Wu, Mehrtash Harandi, Jianfei Cai
How Far Can We Compress Instant-NGP-Based NeRF?
Project Page: https://yihangchen-ee.github.io/project_cnc/ Code: https://github.com/yihangchen-ee/cnc/. We further propose a 3DGS compression method HAC, which is based on CNC: https://yihangchen-ee.github.io/project_hac/
CVPR 2024
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, Neural Radiance Field (NeRF) has demonstrated remarkable capabilities in representing 3D scenes. To expedite the rendering process, learnable explicit representations have been introduced for combination with implicit NeRF representation, which, however, results in a large storage space requirement. In this paper, we introduce the Context-based NeRF Compression (CNC) framework, which leverages highly efficient context models to provide a storage-friendly NeRF representation. Specifically, we excavate both level-wise and dimension-wise context dependencies to enable probability prediction for information entropy reduction. Additionally, we exploit hash collision and occupancy grids as strong prior knowledge for better context modeling. To the best of our knowledge, we are the first to construct and exploit context models for NeRF compression. We achieve a size reduction of 100$\times$ and 70$\times$ with improved fidelity against the baseline Instant-NGP on Synthetic-NeRF and Tanks and Temples datasets, respectively. Additionally, we attain 86.7\% and 82.3\% storage size reduction against the SOTA NeRF compression method BiRF. Our code is available here: https://github.com/YihangChen-ee/CNC.
[ { "created": "Thu, 6 Jun 2024 14:16:03 GMT", "version": "v1" } ]
2024-06-07
[ [ "Chen", "Yihang", "" ], [ "Wu", "Qianyi", "" ], [ "Harandi", "Mehrtash", "" ], [ "Cai", "Jianfei", "" ] ]
2406.04109
Adil Soubki
Adil Soubki and Owen Rambow
Intention and Face in Dialog
null
May 2024. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 9143-9153, Torino, Italia. ELRA and ICCL
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The notion of face described by Brown and Levinson (1987) has been studied in great detail, but a critical aspect of the framework, that which focuses on how intentions mediate the planning of turns which impose upon face, has received far less attention. We present an analysis of three computational systems trained for classifying both intention and politeness, focusing on how the former influences the latter. In politeness theory, agents attend to the desire to have their wants appreciated (positive face), and a complementary desire to act unimpeded and maintain freedom (negative face). Similar to speech acts, utterances can perform so-called face acts which can either raise or threaten the positive or negative face of the speaker or hearer. We begin by using an existing corpus to train a model which classifies face acts, achieving a new SoTA in the process. We then observe that every face act has an underlying intention that motivates it and perform additional experiments integrating dialog act annotations to provide these intentions by proxy. Our analysis finds that dialog acts improve performance on face act detection for minority classes and points to a close relationship between aspects of face and intent.
[ { "created": "Thu, 6 Jun 2024 14:26:35 GMT", "version": "v1" } ]
2024-06-07
[ [ "Soubki", "Adil", "" ], [ "Rambow", "Owen", "" ] ]
2406.04624
Vipin Venugopal
Vipin V
Image Processing Based Forest Fire Detection
9 pages
International Journal of Emerging Technology and Advanced Engineering, 2(2), 87-95 (2012)
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
A novel approach to forest fire detection using an image processing technique is proposed. A rule-based color model for fire pixel classification is used. The proposed algorithm uses the RGB and YCbCr color spaces. The advantage of the YCbCr color space is that it separates luminance from chrominance more effectively than the RGB color space. The performance of the proposed algorithm is tested on two sets of images, one of which contains fire, while the other contains fire-like regions. Standard methods are used for evaluating the performance of the algorithm. The proposed method achieves both a higher detection rate and a lower false alarm rate. Since the algorithm is computationally cheap, it can be used for real-time forest fire detection.
[ { "created": "Fri, 7 Jun 2024 04:11:45 GMT", "version": "v1" } ]
2024-06-10
[ [ "V", "Vipin", "" ] ]
2406.04713
Benjamin Miller
Benjamin Kurt Miller, Ricky T. Q. Chen, Anuroop Sriram, Brandon M Wood
FlowMM: Generating Materials with Riemannian Flow Matching
https://github.com/facebookresearch/flowmm
ICML 2024
null
null
cs.LG cond-mat.mtrl-sci cs.AI physics.comp-ph stat.ML
http://creativecommons.org/licenses/by/4.0/
Crystalline materials are a fundamental component in next-generation technologies, yet modeling their distribution presents unique computational challenges. Of the plausible arrangements of atoms in a periodic lattice only a vanishingly small percentage are thermodynamically stable, which is a key indicator of the materials that can be experimentally realized. Two fundamental tasks in this area are to (a) predict the stable crystal structure of a known composition of elements and (b) propose novel compositions along with their stable structures. We present FlowMM, a pair of generative models that achieve state-of-the-art performance on both tasks while being more efficient and more flexible than competing methods. We generalize Riemannian Flow Matching to suit the symmetries inherent to crystals: translation, rotation, permutation, and periodic boundary conditions. Our framework enables the freedom to choose the flow base distributions, drastically simplifying the problem of learning crystal structures compared with diffusion models. In addition to standard benchmarks, we validate FlowMM's generated structures with quantum chemistry calculations, demonstrating that it is about 3x more efficient, in terms of integration steps, at finding stable materials compared to previous open methods.
[ { "created": "Fri, 7 Jun 2024 07:46:23 GMT", "version": "v1" } ]
2024-06-10
[ [ "Miller", "Benjamin Kurt", "" ], [ "Chen", "Ricky T. Q.", "" ], [ "Sriram", "Anuroop", "" ], [ "Wood", "Brandon M", "" ] ]
2406.05443
Asmaa Benchama
Asmaa Benchama, Khalid Zebbara
Novel Approach to Intrusion Detection: Introducing GAN-MSCNN-BILSTM with LIME Predictions
null
Data and Metadata, 2023 Dec. 28
10.56294/dm2023202
null
cs.CR cs.AI cs.NI
http://creativecommons.org/licenses/by/4.0/
This paper introduces an innovative intrusion detection system that harnesses Generative Adversarial Networks (GANs), Multi-Scale Convolutional Neural Networks (MSCNNs), and Bidirectional Long Short-Term Memory (BiLSTM) networks, supplemented by Local Interpretable Model-Agnostic Explanations (LIME) for interpretability. Employing a GAN, the system generates realistic network traffic data, encompassing both normal and attack patterns. This synthesized data is then fed into an MSCNN-BiLSTM architecture for intrusion detection. The MSCNN layer extracts features from the network traffic data at different scales, while the BiLSTM layer captures temporal dependencies within the traffic sequences. Integration of LIME allows for explaining the model's decisions. Evaluation on the Hogzilla dataset, a standard benchmark, showcases an impressive accuracy of 99.16\% for multi-class classification and 99.10\% for binary classification, while ensuring interpretability through LIME. This fusion of deep learning and interpretability presents a promising avenue for enhancing intrusion detection systems by improving transparency and decision support in network security.
[ { "created": "Sat, 8 Jun 2024 11:26:44 GMT", "version": "v1" } ]
2024-06-11
[ [ "Benchama", "Asmaa", "" ], [ "Zebbara", "Khalid", "" ] ]
2406.05506
Lior Limonad
Fabiana Fournier, Lior Limonad, Inna Skarbovsky
Towards a Benchmark for Causal Business Process Reasoning with LLMs
12 pages, 1 figure
NLP4BPM workshop at BPM 2024
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) are increasingly used for boosting organizational efficiency and automating tasks. While not originally designed for complex cognitive processes, recent efforts have extended LLMs to activities such as reasoning, planning, and decision-making. In business processes, such abilities could be invaluable for leveraging the massive corpora LLMs have been trained on to gain a deep understanding of such processes. In this work, we plant the seeds for the development of a benchmark to assess the ability of LLMs to reason about causal and process perspectives of business operations. We refer to this view as Causally-augmented Business Processes (BP^C). The core of the benchmark comprises a set of BP^C-related situations, a set of questions about these situations, and a set of deductive rules employed to systematically resolve the ground-truth answers to these questions. With the power of LLMs, the seed is then instantiated into a larger-scale set of domain-specific situations and questions. Reasoning about BP^C is of crucial importance for process interventions and process improvement. Our benchmark, accessible at https://huggingface.co/datasets/ibm/BPC, can be used in one of two possible modalities: testing the performance of any target LLM and training an LLM to advance its capability to reason about BP^C.
[ { "created": "Sat, 8 Jun 2024 16:10:53 GMT", "version": "v1" }, { "created": "Tue, 16 Jul 2024 15:48:32 GMT", "version": "v2" } ]
2024-08-13
[ [ "Fournier", "Fabiana", "" ], [ "Limonad", "Lior", "" ], [ "Skarbovsky", "Inna", "" ] ]
2406.05535
Junqi Gao
Junqi Gao, Biqing Qi, Yao Li, Zhichang Guo, Dong Li, Yuming Xing, Dazhi Zhang
Perturbation Towards Easy Samples Improves Targeted Adversarial Transferability
null
Advances in Neural Information Processing Systems 36, 2023
null
null
cs.LG cs.AI cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
The transferability of adversarial perturbations provides an effective shortcut for black-box attacks. Targeted perturbations have greater practicality but are more difficult to transfer between models. In this paper, we demonstrate experimentally and theoretically that neural networks trained on the same dataset have more consistent performance in High-Sample-Density Regions (HSDR) of each class than in low-sample-density regions. Therefore, in the targeted setting, adding perturbations towards the HSDR of the target class is more effective in improving transferability. However, density estimation is challenging in high-dimensional scenarios. Further theoretical and experimental verification demonstrates that easy samples with low loss are more likely to be located in HSDR. Perturbations towards such easy samples in the target class avoid the need for density estimation to locate HSDR. Based on these facts, we verify that adding perturbations towards easy samples in the target class improves the targeted adversarial transferability of existing attack methods. We propose a generative targeted attack strategy named Easy Sample Matching Attack (ESMA), which has a higher success rate for targeted attacks and outperforms the SOTA generative method. Moreover, ESMA requires only 5% of the storage space and much less computation time compared to the current SOTA, as ESMA attacks all classes with only one model instead of a separate model for each class. Our code is available at https://github.com/gjq100/ESMA.
[ { "created": "Sat, 8 Jun 2024 17:33:23 GMT", "version": "v1" } ]
2024-06-11
[ [ "Gao", "Junqi", "" ], [ "Qi", "Biqing", "" ], [ "Li", "Yao", "" ], [ "Guo", "Zhichang", "" ], [ "Li", "Dong", "" ], [ "Xing", "Yuming", "" ], [ "Zhang", "Dazhi", "" ] ]