Dataset schema (one record per paper; each record lists these fourteen fields in order):

field            dtype            range
id               stringlengths    10-10
submitter        stringlengths    3-52
authors          stringlengths    6-7.24k
title            stringlengths    12-217
comments         stringlengths    1-446
journal-ref      stringlengths    4-297
doi              stringlengths    12-118
report-no        stringclasses    237 values
categories       stringlengths    5-71
license          stringclasses    6 values
abstract         stringlengths    90-3.26k
versions         listlengths      1-17
update_date      stringclasses    969 values
authors_parsed   sequencelengths  1-451
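As a minimal loading sketch (assuming the records are stored as JSON Lines with one object per line; the file name arxiv-metadata.json is hypothetical, not part of this dataset card), the rows can be iterated and filtered like so:

```python
import json

def iter_records(path):
    """Yield one metadata record (a dict with the fields above) per line."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            yield json.loads(line)

# Example: collect cs.CV records that also carry a journal reference.
cv_published = [
    r for r in iter_records("arxiv-metadata.json")  # hypothetical file name
    if "cs.CV" in r["categories"].split() and r["journal-ref"]
]
for rec in cv_published[:5]:
    print(rec["id"], "-", rec["title"])
```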
id: 2401.03749
submitter: Ziwei Sun
authors: Ziwei Sun, Zexi Hua, Hengchao Li, and Yan Li
title: A Flying Bird Object Detection Method for Surveillance Video
comments: null
journal-ref: in IEEE Transactions on Instrumentation and Measurement, vol. 73, pp. 1-14, 2024
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Aiming at the specific characteristics of flying bird objects in surveillance video, such as the typically non-obvious features in single-frame images, small size in most instances, and asymmetric shapes, this paper proposes a Flying Bird Object Detection method for Surveillance Video (FBOD-SV). Firstly, a new feature aggregation module, the Correlation Attention Feature Aggregation (Co-Attention-FA) module, is designed to aggregate the features of the flying bird object according to the bird object's correlation on multiple consecutive frames of images. Secondly, a Flying Bird Object Detection Network (FBOD-Net) with down-sampling followed by up-sampling is designed, which utilizes a large feature layer that fuses fine spatial information and large receptive field information to detect special multi-scale (mostly small-scale) bird objects. Finally, the SimOTA dynamic label allocation method is applied to One-Category object detection, and the SimOTA-OC dynamic label strategy is proposed to solve the difficult problem of label allocation caused by irregular flying bird objects. In this paper, the performance of the FBOD-SV is validated using experimental datasets of flying bird objects in traction substation surveillance videos. The experimental results show that the FBOD-SV effectively improves the detection performance of flying bird objects in surveillance video.
versions: [ { "created": "Mon, 8 Jan 2024 09:20:46 GMT", "version": "v1" }, { "created": "Sat, 13 Apr 2024 05:56:09 GMT", "version": "v2" }, { "created": "Thu, 29 Aug 2024 08:52:40 GMT", "version": "v3" } ]
update_date: 2024-08-30
authors_parsed: [ [ "Sun", "Ziwei", "" ], [ "Hua", "Zexi", "" ], [ "Li", "Hengchao", "" ], [ "Li", "Yan", "" ] ]
id: 2401.03844
submitter: Bingyin Zhao
authors: Bingyin Zhao, Zhiding Yu, Shiyi Lan, Yutao Cheng, Anima Anandkumar, Yingjie Lao, Jose M. Alvarez
title: Fully Attentional Networks with Self-emerging Token Labeling
comments: null
journal-ref: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 5585-5595
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract: Recent studies indicate that Vision Transformers (ViTs) are robust against out-of-distribution scenarios. In particular, the Fully Attentional Network (FAN) - a family of ViT backbones, has achieved state-of-the-art robustness. In this paper, we revisit the FAN models and improve their pre-training with a self-emerging token labeling (STL) framework. Our method contains a two-stage training framework. Specifically, we first train a FAN token labeler (FAN-TL) to generate semantically meaningful patch token labels, followed by a FAN student model training stage that uses both the token labels and the original class label. With the proposed STL framework, our best model based on FAN-L-Hybrid (77.3M parameters) achieves 84.8% Top-1 accuracy and 42.1% mCE on ImageNet-1K and ImageNet-C, and sets a new state-of-the-art for ImageNet-A (46.1%) and ImageNet-R (56.6%) without using extra data, outperforming the original FAN counterpart by significant margins. The proposed framework also demonstrates significantly enhanced performance on downstream tasks such as semantic segmentation, with up to 1.7% improvement in robustness over the counterpart model. Code is available at https://github.com/NVlabs/STL.
versions: [ { "created": "Mon, 8 Jan 2024 12:14:15 GMT", "version": "v1" } ]
update_date: 2024-01-09
authors_parsed: [ [ "Zhao", "Bingyin", "" ], [ "Yu", "Zhiding", "" ], [ "Lan", "Shiyi", "" ], [ "Cheng", "Yutao", "" ], [ "Anandkumar", "Anima", "" ], [ "Lao", "Yingjie", "" ], [ "Alvarez", "Jose M.", "" ] ]
id: 2401.03922
submitter: Chollette Olisah Dr
authors: Simisola Odimayo, Chollette C. Olisah, and Khadija Mohammed
title: SNeurodCNN: Structure-focused Neurodegeneration Convolutional Neural Network for Modelling and Classification of Alzheimer's Disease
comments: 36 Pages, 10 figures, 4 tables
journal-ref: Scientific Reports, Volume 14, 15270 (2024)
doi: 10.12751/g-node.aa605a/
report-no: null
categories: eess.IV cs.CV cs.LG
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract: Alzheimer's disease (AD), the predominant form of dementia, is a growing global challenge, emphasizing the urgent need for accurate and early diagnosis. Current clinical diagnoses rely on radiologist expert interpretation, which is prone to human error. Deep learning has thus far shown promise for early AD diagnosis. However, existing methods often overlook focal structural atrophy critical for enhanced understanding of the cerebral cortex neurodegeneration. This paper proposes a deep learning framework that includes a novel structure-focused neurodegeneration CNN architecture named SNeurodCNN and an image brightness enhancement preprocessor using gamma correction. The SNeurodCNN architecture takes as input the focal structural atrophy features resulting from segmentation of brain structures captured through magnetic resonance imaging (MRI). As a result, the architecture considers only necessary CNN components, which comprise two downsampling convolutional blocks and two fully connected layers, for achieving the desired classification task, and utilises regularisation techniques to regularise learnable parameters. Leveraging mid-sagittal and para-sagittal brain image viewpoints from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, our framework demonstrated exceptional performance. The para-sagittal viewpoint achieved 97.8% accuracy, 97.0% specificity, and 98.5% sensitivity, while the mid-sagittal viewpoint offered deeper insights with 98.1% accuracy, 97.2% specificity, and 99.0% sensitivity. Model analysis revealed the ability of SNeurodCNN to capture the structural dynamics of mild cognitive impairment (MCI) and AD in the frontal lobe, occipital lobe, cerebellum, temporal, and parietal lobe, suggesting its potential as a brain structural change digi-biomarker for early AD diagnosis. This work can be reproduced using code we made available on GitHub.
versions: [ { "created": "Mon, 8 Jan 2024 14:33:57 GMT", "version": "v1" }, { "created": "Wed, 10 Jan 2024 07:06:42 GMT", "version": "v2" }, { "created": "Fri, 31 May 2024 01:10:42 GMT", "version": "v3" } ]
update_date: 2024-07-12
authors_parsed: [ [ "Odimayo", "Simisola", "" ], [ "Olisah", "Chollette C.", "" ], [ "Mohammed", "Khadija", "" ] ]
id: 2401.03925
submitter: Marcus Vinícius Borela Castro
authors: Marcus Vinicius Borela de Castro and Remis Balaniuk
title: Rastro-DM: data mining with a trail
comments: It was published in the Brazilian Federal Court of Accounts Journal n. 145 in 2021 (https://revista.tcu.gov.br/ojs/index.php/RTCU/article/view/1733)
journal-ref: Revista do TCU (Brazilian Federal Court of Accounts), 145 (2021): 79-106
doi: null
report-no: REVISTATCU_145
categories: cs.DB cs.AI cs.LG
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract: This paper proposes a methodology for documenting data mining (DM) projects, Rastro-DM (Trail Data Mining), with a focus not on the model that is generated, but on the processes behind its construction, in order to leave a trail (Rastro in Portuguese) of planned actions, training completed, results obtained, and lessons learned. The proposed practices are complementary to structuring methodologies of DM, such as CRISP-DM, which establish a methodological and paradigmatic framework for the DM process. The application of best practices and their benefits is illustrated in a project called 'Cladop' that was created for the classification of PDF documents associated with the investigative process of damages to the Brazilian Federal Public Treasury. Building the Rastro-DM kit in the context of a project is a small step that can lead to an institutional leap to be achieved by sharing and using the trail across the enterprise.
versions: [ { "created": "Mon, 8 Jan 2024 14:39:21 GMT", "version": "v1" } ]
update_date: 2024-01-09
authors_parsed: [ [ "de Castro", "Marcus Vinicius Borela", "" ], [ "Balaniuk", "Remis", "" ] ]
id: 2401.04105
submitter: Chen Zhao
authors: Chen Zhao, Shuming Liu, Karttikeya Mangalam, Guocheng Qian, Fatimah Zohra, Abdulmohsen Alghannam, Jitendra Malik, Bernard Ghanem
title: Dr$^2$Net: Dynamic Reversible Dual-Residual Networks for Memory-Efficient Finetuning
comments: null
journal-ref: the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024
doi: null
report-no: null
categories: cs.CV cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Large pretrained models are increasingly crucial in modern computer vision tasks. These models are typically used in downstream tasks by end-to-end finetuning, which is highly memory-intensive for tasks with high-resolution data, e.g., video understanding, small object detection, and point cloud analysis. In this paper, we propose Dynamic Reversible Dual-Residual Networks, or Dr$^2$Net, a novel family of network architectures that acts as a surrogate network to finetune a pretrained model with substantially reduced memory consumption. Dr$^2$Net contains two types of residual connections, one maintaining the residual structure in the pretrained models, and the other making the network reversible. Due to its reversibility, intermediate activations, which can be reconstructed from the output, are cleared from memory during training. We use one coefficient for each of the two types of residual connections, and introduce a dynamic training strategy that seamlessly transitions the pretrained model to a reversible network with much higher numerical precision. We evaluate Dr$^2$Net on various pretrained models and various tasks, and show that it can reach comparable performance to conventional finetuning but with significantly less memory usage.
versions: [ { "created": "Mon, 8 Jan 2024 18:59:31 GMT", "version": "v1" }, { "created": "Sat, 30 Mar 2024 08:06:01 GMT", "version": "v2" } ]
update_date: 2024-04-02
authors_parsed: [ [ "Zhao", "Chen", "" ], [ "Liu", "Shuming", "" ], [ "Mangalam", "Karttikeya", "" ], [ "Qian", "Guocheng", "" ], [ "Zohra", "Fatimah", "" ], [ "Alghannam", "Abdulmohsen", "" ], [ "Malik", "Jitendra", "" ], [ "Ghanem", "Bernard", "" ] ]
id: 2401.04116
submitter: Li Yang
authors: Yang Li and Huaqiang Jiang and Yangkai Wu
title: Semantic Draw Engineering for Text-to-Image Creation
comments: 6 pages, 5 figures
journal-ref: Journal of Advances in Information Science and Technology, Volume 1, Issue 1, 2023, Pages 1-6
doi: null
report-no: null
categories: cs.HC cs.CV
license: http://creativecommons.org/licenses/by-sa/4.0/
abstract: Text-to-image generation is conducted through Generative Adversarial Networks (GANs) or transformer models. However, the current challenge lies in accurately generating images based on textual descriptions, especially in scenarios where the content and theme of the target image are ambiguous. In this paper, we propose a method that utilizes artificial intelligence models for thematic creativity, followed by a classification modeling of the actual painting process. The method involves converting all visual elements into quantifiable data structures before creating images. We evaluate the effectiveness of this approach in terms of semantic accuracy, image reproducibility, and computational efficiency, in comparison with existing image generation algorithms.
versions: [ { "created": "Sat, 23 Dec 2023 05:35:15 GMT", "version": "v1" } ]
update_date: 2024-01-10
authors_parsed: [ [ "Li", "Yang", "" ], [ "Jiang", "Huaqiang", "" ], [ "Wu", "Yangkai", "" ] ]
id: 2401.04192
submitter: José Raúl Romero
authors: Aurora Ramírez and José Raúl Romero and Sebastián Ventura
title: Interactive Multi-Objective Evolutionary Optimization of Software Architectures
comments: 41 pages, 5 figures, journal "Information Sciences"
journal-ref: Information Sciences, vol. 463-464, pp. 92-109, 2018
doi: 10.1016/j.ins.2018.06.034
report-no: null
categories: cs.SE cs.AI cs.NE
license: http://creativecommons.org/licenses/by/4.0/
abstract: While working on a software specification, designers usually need to evaluate different architectural alternatives to be sure that quality criteria are met. Even when these quality aspects could be expressed in terms of multiple software metrics, other qualitative factors cannot be numerically measured, but they are extracted from the engineer's know-how and prior experiences. In fact, detecting not only strong but also weak points in the different solutions seems to fit better with the way humans make their decisions. Putting the human in the loop brings new challenges to the search-based software engineering field, especially for those human-centered activities within the early analysis phase. This paper explores how the interactive evolutionary computation can serve as a basis for integrating the human's judgment into the search process. An interactive approach is proposed to discover software architectures, in which both quantitative and qualitative criteria are applied to guide a multi-objective evolutionary algorithm. The obtained feedback is incorporated into the fitness function using architectural preferences allowing the algorithm to discern between promising and poor solutions. Experimentation with real users has revealed that the proposed interaction mechanism can effectively guide the search towards those regions of the search space that are of real interest to the expert.
versions: [ { "created": "Mon, 8 Jan 2024 19:15:40 GMT", "version": "v1" } ]
update_date: 2024-01-10
authors_parsed: [ [ "Ramírez", "Aurora", "" ], [ "Romero", "José Raúl", "" ], [ "Ventura", "Sebastián", "" ] ]
id: 2401.04206
submitter: Robert Kaufman
authors: Robert Kaufman, Jean Costa, Everlyne Kimani
title: Effects of Multimodal Explanations for Autonomous Driving on Driving Performance, Cognitive Load, Expertise, Confidence, and Trust
comments: 14 pages, published in Scientific Reports
journal-ref: Scientific Reports volume 14, Article number: 13061 (2024)
doi: 10.1038/s41598-024-62052-9
report-no: null
categories: cs.HC cs.AI cs.RO
license: http://creativecommons.org/licenses/by/4.0/
abstract: Advances in autonomous driving provide an opportunity for AI-assisted driving instruction that directly addresses the critical need for human driving improvement. How should an AI instructor convey information to promote learning? In a pre-post experiment (n = 41), we tested the impact of an AI Coach's explanatory communications modeled after performance driving expert instructions. Participants were divided into four groups to assess two dimensions of the AI coach's explanations: information type ('what' and 'why'-type explanations) and presentation modality (auditory and visual). We compare how different explanatory techniques impact driving performance, cognitive load, confidence, expertise, and trust via observational learning. Through interviews, we delineate participant learning processes. Results show AI coaching can effectively teach performance driving skills to novices. We find the type and modality of information influences performance outcomes. Differences in how successfully participants learned are attributed to how information directs attention, mitigates uncertainty, and influences overload experienced by participants. Results suggest efficient, modality-appropriate explanations should be opted for when designing effective HMI communications that can instruct without overwhelming. Further, results support the need to align communications with human learning and cognitive processes. We provide eight design implications for future autonomous vehicle HMI and AI coach design.
versions: [ { "created": "Mon, 8 Jan 2024 19:33:57 GMT", "version": "v1" }, { "created": "Wed, 10 Jan 2024 19:52:42 GMT", "version": "v2" }, { "created": "Fri, 19 Apr 2024 21:06:28 GMT", "version": "v3" }, { "created": "Thu, 13 Jun 2024 17:01:00 GMT", "version": "v4" } ]
update_date: 2024-06-14
authors_parsed: [ [ "Kaufman", "Robert", "" ], [ "Costa", "Jean", "" ], [ "Kimani", "Everlyne", "" ] ]
id: 2401.04290
submitter: Sean Kulinski
authors: Sean Kulinski, Nicholas R. Waytowich, James Z. Hare, David I. Inouye
title: StarCraftImage: A Dataset For Prototyping Spatial Reasoning Methods For Multi-Agent Environments
comments: Published in CVPR '23
journal-ref: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023
doi: null
report-no: null
categories: cs.CV cs.AI cs.MA
license: http://creativecommons.org/licenses/by/4.0/
abstract: Spatial reasoning tasks in multi-agent environments such as event prediction, agent type identification, or missing data imputation are important for multiple applications (e.g., autonomous surveillance over sensor networks and subtasks for reinforcement learning (RL)). StarCraft II game replays encode intelligent (and adversarial) multi-agent behavior and could provide a testbed for these tasks; however, extracting simple and standardized representations for prototyping these tasks is laborious and hinders reproducibility. In contrast, MNIST and CIFAR10, despite their extreme simplicity, have enabled rapid prototyping and reproducibility of ML methods. Following the simplicity of these datasets, we construct a benchmark spatial reasoning dataset based on StarCraft II replays that exhibit complex multi-agent behaviors, while still being as easy to use as MNIST and CIFAR10. Specifically, we carefully summarize a window of 255 consecutive game states to create 3.6 million summary images from 60,000 replays, including all relevant metadata such as game outcome and player races. We develop three formats of decreasing complexity: Hyperspectral images that include one channel for every unit type (similar to multispectral geospatial images), RGB images that mimic CIFAR10, and grayscale images that mimic MNIST. We show how this dataset can be used for prototyping spatial reasoning methods. All datasets, code for extraction, and code for dataset loading can be found at https://starcraftdata.davidinouye.com
versions: [ { "created": "Tue, 9 Jan 2024 00:05:56 GMT", "version": "v1" } ]
update_date: 2024-01-10
authors_parsed: [ [ "Kulinski", "Sean", "" ], [ "Waytowich", "Nicholas R.", "" ], [ "Hare", "James Z.", "" ], [ "Inouye", "David I.", "" ] ]
id: 2401.04422
submitter: Tim Vor Der Brück
authors: Tim vor der Brück and Marc Pouly
title: Estimating Text Similarity based on Semantic Concept Embeddings
comments: null
journal-ref: IARIA Congress Proceedings, 2023
doi: null
report-no: null
categories: cs.CL cs.AI
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract: Due to their ease of use and high accuracy, Word2Vec (W2V) word embeddings enjoy great success in the semantic representation of words, sentences, and whole documents as well as for semantic similarity estimation. However, they have the shortcoming that they are directly extracted from a surface representation, which does not adequately represent human thought processes and also performs poorly for highly ambiguous words. Therefore, we propose Semantic Concept Embeddings (CE) based on the MultiNet Semantic Network (SN) formalism, which addresses both shortcomings. The evaluation on a marketing target group distribution task showed that the accuracy of predicted target groups can be increased by combining traditional word embeddings with semantic CEs.
versions: [ { "created": "Tue, 9 Jan 2024 08:29:46 GMT", "version": "v1" } ]
update_date: 2024-01-10
authors_parsed: [ [ "der Brück", "Tim vor", "" ], [ "Pouly", "Marc", "" ] ]
id: 2401.04478
submitter: Maximilian Schuh
authors: Maximilian G. Schuh, Davide Boldini, Stephan A. Sieber
title: TwinBooster: Synergising Large Language Models with Barlow Twins and Gradient Boosting for Enhanced Molecular Property Prediction
comments: 13(+9) pages(+appendix), 5 figures, 11 tables
journal-ref: J. Chem. Inf. Model. 2024, 64, 12, 4640-4650
doi: 10.1021/acs.jcim.4c00765
report-no: null
categories: q-bio.BM cs.AI cs.CL cs.LG
license: http://creativecommons.org/licenses/by-sa/4.0/
abstract: The success of drug discovery and development relies on the precise prediction of molecular activities and properties. While in silico molecular property prediction has shown remarkable potential, its use has been limited so far to assays for which large amounts of data are available. In this study, we use a fine-tuned large language model to integrate biological assays based on their textual information, coupled with Barlow Twins, a Siamese neural network using a novel self-supervised learning approach. This architecture uses both assay information and molecular fingerprints to extract the true molecular information. TwinBooster enables the prediction of properties of unseen bioassays and molecules by providing state-of-the-art zero-shot learning tasks. Remarkably, our artificial intelligence pipeline shows excellent performance on the FS-Mol benchmark. This breakthrough demonstrates the application of deep learning to critical property prediction tasks where data is typically scarce. By accelerating the early identification of active molecules in drug discovery and development, this method has the potential to help streamline the identification of novel therapeutics.
versions: [ { "created": "Tue, 9 Jan 2024 10:36:20 GMT", "version": "v1" }, { "created": "Tue, 30 Jan 2024 09:29:47 GMT", "version": "v2" } ]
update_date: 2024-09-04
authors_parsed: [ [ "Schuh", "Maximilian G.", "" ], [ "Boldini", "Davide", "" ], [ "Sieber", "Stephan A.", "" ] ]
id: 2401.04680
submitter: Andreas Döpp
authors: Sunny Howard, Peter Norreys and Andreas Döpp
title: CoordGate: Efficiently Computing Spatially-Varying Convolutions in Convolutional Neural Networks
comments: null
journal-ref: BMVC 2023
doi: null
report-no: null
categories: cs.CV eess.IV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Optical imaging systems are inherently limited in their resolution due to the point spread function (PSF), which applies a static, yet spatially-varying, convolution to the image. This degradation can be addressed via Convolutional Neural Networks (CNNs), particularly through deblurring techniques. However, current solutions face certain limitations in efficiently computing spatially-varying convolutions. In this paper we propose CoordGate, a novel lightweight module that uses a multiplicative gate and a coordinate encoding network to enable efficient computation of spatially-varying convolutions in CNNs. CoordGate allows for selective amplification or attenuation of filters based on their spatial position, effectively acting like a locally connected neural network. The effectiveness of the CoordGate solution is demonstrated within the context of U-Nets and applied to the challenging problem of image deblurring. The experimental results show that CoordGate outperforms conventional approaches, offering a more robust and spatially aware solution for CNNs in various computer vision applications.
versions: [ { "created": "Tue, 9 Jan 2024 17:13:58 GMT", "version": "v1" } ]
update_date: 2024-01-10
authors_parsed: [ [ "Howard", "Sunny", "" ], [ "Norreys", "Peter", "" ], [ "Döpp", "Andreas", "" ] ]
id: 2401.04732
submitter: Laurent Boué
authors: Manpreet Singh, Ravdeep Pasricha, Nitish Singh, Ravi Prasad Kondapalli, Manoj R, Kiran R, Laurent Boué
title: A case study of Generative AI in MSX Sales Copilot: Improving seller productivity with a real-time question-answering system for content recommendation
comments: null
journal-ref: Microsoft Journal of Applied Research, Volume 20, 2024
doi: null
report-no: null
categories: cs.IR cs.AI cs.LG
license: http://creativecommons.org/licenses/by/4.0/
abstract: In this paper, we design a real-time question-answering system specifically targeted for helping sellers get relevant material/documentation they can share live with their customers or refer to during a call. Taking the Seismic content repository as a relatively large scale example of a diverse dataset of sales material, we demonstrate how LLM embeddings of sellers' queries can be matched with the relevant content. We achieve this by engineering prompts in an elaborate fashion that makes use of the rich set of meta-features available for documents and sellers. Using a bi-encoder with cross-encoder re-ranker architecture, we show how the solution returns the most relevant content recommendations in just a few seconds even for large datasets. Our recommender system is deployed as an AML endpoint for real-time inferencing and has been integrated into a Copilot interface that is now deployed in the production version of the Dynamics CRM, known as MSX, used daily by Microsoft sellers.
versions: [ { "created": "Thu, 4 Jan 2024 13:32:44 GMT", "version": "v1" } ]
update_date: 2024-01-11
authors_parsed: [ [ "Singh", "Manpreet", "" ], [ "Pasricha", "Ravdeep", "" ], [ "Singh", "Nitish", "" ], [ "Kondapalli", "Ravi Prasad", "" ], [ "R", "Manoj", "" ], [ "R", "Kiran", "" ], [ "Boué", "Laurent", "" ] ]
id: 2401.04740
submitter: Dwith Chenna
authors: Dwith Chenna, Suyash Bhogawar
title: Segment anything model (SAM) for brain extraction in fMRI studies
comments: null
journal-ref: International Journal of Artificial Intelligence In Medicine (IJAIMED), Volume 1, Issue 01, Jan-Dec 2023, pp. 1-8
doi: 10.17605/OSF.IO/35N7E
report-no: null
categories: eess.IV cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract: Brain extraction and removal of skull artifacts from magnetic resonance images (MRI) is an important preprocessing step in neuroimaging analysis. Many tools have been developed to handle human fMRI images, but they can involve manual steps for verifying brain segmentation results, which makes them time-consuming and inefficient. In this study, we use the segment anything model (SAM), a freely available neural network released by Meta [4], which has shown promising results in many generic segmentation applications. We analyze the efficiency of SAM for neuroimaging brain segmentation by removing skull artifacts. The experiments showed promising results, supporting the use of automated segmentation algorithms for neuroimaging without the need to train on a custom medical imaging dataset.
versions: [ { "created": "Tue, 9 Jan 2024 06:25:09 GMT", "version": "v1" } ]
update_date: 2024-01-11
authors_parsed: [ [ "Chenna", "Dwith", "" ], [ "Bhogawar", "Suyash", "" ] ]
id: 2401.04748
submitter: Chollette Olisah Dr
authors: Chollette C. Olisah, Ben Trewhella, Bo Li, Melvyn L. Smith, Benjamin Winstone, E. Charles Whitfield, Felicidad Fernández Fernández, Harriet Duncalfe
title: Convolutional Neural Network Ensemble Learning for Hyperspectral Imaging-based Blackberry Fruit Ripeness Detection in Uncontrolled Farm Environment
comments: 25 pages, 10 figures, 6 tables; submitted to EAAI
journal-ref: Engineering Applications of Artificial Intelligence, June 2024, 107945
doi: 10.1016/j.engappai.2024.107945
report-no: Volume 132,
categories: cs.CV cs.AI cs.LG
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract: Fruit ripeness estimation models have for decades depended on spectral index features or colour-based features, such as mean, standard deviation, skewness, colour moments, and/or histograms for learning traits of fruit ripeness. Recently, few studies have explored the use of deep learning techniques to extract features from images of fruits with visible ripeness cues. However, the blackberry (Rubus fruticosus) fruit does not show obvious and reliable visible traits of ripeness when mature and therefore poses great difficulty to fruit pickers. The mature blackberry, to the human eye, is black before, during, and post-ripening. To address this engineering application challenge, this paper proposes a novel multi-input convolutional neural network (CNN) ensemble classifier for detecting subtle traits of ripeness in blackberry fruits. The multi-input CNN was created from a pre-trained visual geometry group 16-layer deep convolutional network (VGG16) model trained on the ImageNet dataset. The fully connected layers were optimized for learning traits of ripeness of mature blackberry fruits. The resulting model served as the base for building homogeneous ensemble learners that were ensembled using the stack generalization ensemble (SGE) framework. The input to the network is images acquired with a stereo sensor using visible and near-infrared (VIS-NIR) spectral filters at wavelengths of 700 nm and 770 nm. Through experiments, the proposed model achieved 95.1% accuracy on unseen sets and 90.2% accuracy with in-field conditions. Further experiments reveal that machine sensing is highly and positively correlated with human sensing of blackberry fruit skin texture.
versions: [ { "created": "Tue, 9 Jan 2024 12:00:17 GMT", "version": "v1" } ]
update_date: 2024-06-03
authors_parsed: [ [ "Olisah", "Chollette C.", "" ], [ "Trewhella", "Ben", "" ], [ "Li", "Bo", "" ], [ "Smith", "Melvyn L.", "" ], [ "Winstone", "Benjamin", "" ], [ "Whitfield", "E. Charles", "" ], [ "Fernández", "Felicidad Fernández", "" ], [ "Duncalfe", "Harriet", "" ] ]
id: 2401.04853
submitter: Tamara Babaian
authors: Tamara Babaian, Jennifer Xu
title: Entity Recognition from Colloquial Text
comments: null
journal-ref: Decision Support Systems, Volume 179, 2024, 114172, ISSN 0167-9236
doi: 10.1016/j.dss.2024.114172
report-no: null
categories: cs.CL
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract: Extraction of concepts and entities of interest from non-formal texts such as social media posts and informal communication is an important capability for decision support systems in many domains, including healthcare, customer relationship management, and others. Despite the recent advances in training large language models for a variety of natural language processing tasks, the developed models and techniques have mainly focused on formal texts and do not perform as well on colloquial data, which is characterized by a number of distinct challenges. In our research, we focus on the healthcare domain and investigate the problem of symptom recognition from colloquial texts by designing and evaluating several training strategies for BERT-based model fine-tuning. These strategies are distinguished by the choice of the base model, the training corpora, and application of term perturbations in the training data. The best-performing models trained using these strategies outperform the state-of-the-art specialized symptom recognizer by a large margin. Through a series of experiments, we have found specific patterns of model behavior associated with the training strategies we designed. We present design principles for training strategies for effective entity recognition in colloquial texts based on our findings.
versions: [ { "created": "Tue, 9 Jan 2024 23:52:32 GMT", "version": "v1" } ]
update_date: 2024-01-11
authors_parsed: [ [ "Babaian", "Tamara", "" ], [ "Xu", "Jennifer", "" ] ]
id: 2401.04950
submitter: Dionissios Hristopulos Prof.
authors: Dionissios T. Hristopulos
title: Information Flow Rate for Cross-Correlated Stochastic Processes
comments: 16 pages, 5 figures; to appear in IEEE Transactions on Signal Processing
journal-ref: IEEE Transactions on Signal Processing, vol. 72, pp. 839-854, 2024
doi: 10.1109/TSP.2024.3358580
report-no: null
categories: physics.data-an cs.AI cs.IT math.IT
license: http://creativecommons.org/licenses/by/4.0/
abstract: Causal inference seeks to identify cause-and-effect interactions in coupled systems. A recently proposed method by Liang detects causal relations by quantifying the direction and magnitude of information flow between time series. The theoretical formulation of information flow for stochastic dynamical systems provides a general expression and a data-driven statistic for the rate of entropy transfer between different system units. To advance understanding of information flow rate in terms of intuitive concepts and physically meaningful parameters, we investigate statistical properties of the data-driven information flow rate between coupled stochastic processes. We derive relations between the expectation of the information flow rate statistic and properties of the auto- and cross-correlation functions. Thus, we elucidate the dependence of the information flow rate on the analytical properties and characteristic times of the correlation functions. Our analysis provides insight into the influence of the sampling step, the strength of cross-correlations, and the temporal delay of correlations on information flow rate. We support the theoretical results with numerical simulations of correlated Gaussian processes.
versions: [ { "created": "Wed, 10 Jan 2024 06:08:06 GMT", "version": "v1" } ]
update_date: 2024-03-20
authors_parsed: [ [ "Hristopulos", "Dionissios T.", "" ] ]
id: 2401.04980
submitter: Daniel Attard
authors: Daniel Attard and Josef Bajada
title: Autonomous Navigation of Tractor-Trailer Vehicles through Roundabout Intersections
comments: null
journal-ref: TACTFUL 2023
doi: null
report-no: null
categories: cs.RO cs.AI
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract: In recent years, significant advancements have been made in the field of autonomous driving with the aim of increasing safety and efficiency. However, research that focuses on tractor-trailer vehicles is relatively sparse. Due to the physical characteristics and articulated joints, such vehicles require tailored models. While turning, the back wheels of the trailer turn at a tighter radius and the truck often has to deviate from the centre of the lane to accommodate this. Due to the lack of publicly available models, this work develops truck and trailer models using the high-fidelity simulation software CARLA, together with several roundabout scenarios, to establish a baseline dataset for benchmarks. Using a twin-q soft actor-critic algorithm, we train a quasi-end-to-end autonomous driving model which is able to achieve a 73% success rate on different roundabouts.
versions: [ { "created": "Wed, 10 Jan 2024 07:55:11 GMT", "version": "v1" } ]
update_date: 2024-01-11
authors_parsed: [ [ "Attard", "Daniel", "" ], [ "Bajada", "Josef", "" ] ]
id: 2401.05073
submitter: Florin Leon
authors: Florin Leon, Marius Gavrilescu, Sabina-Adriana Floria, Alina-Adriana Minea
title: Hierarchical Classification of Transversal Skills in Job Ads Based on Sentence Embeddings
comments: 19 pages, 6 figures, 6 tables, 43 references
journal-ref: Information, vol. 15, no. 3, article number 151, 18 pag., 2024
doi: 10.3390/info15030151
report-no: null
categories: cs.LG cs.CL
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract: This paper proposes a classification framework aimed at identifying correlations between job ad requirements and transversal skill sets, with a focus on predicting the necessary skills for individual job descriptions using a deep learning model. The approach involves data collection, preprocessing, and labeling using ESCO (European Skills, Competences, and Occupations) taxonomy. Hierarchical classification and multi-label strategies are used for skill identification, while augmentation techniques address data imbalance, enhancing model robustness. A comparison between results obtained with English-specific and multi-language sentence embedding models reveals close accuracy. The experimental case studies detail neural network configurations, hyperparameters, and cross-validation results, highlighting the efficacy of the hierarchical approach and the suitability of the multi-language model for the diverse European job market. Thus, a new approach is proposed for the hierarchical classification of transversal skills from job ads.
versions: [ { "created": "Wed, 10 Jan 2024 11:07:32 GMT", "version": "v1" } ]
update_date: 2024-03-12
authors_parsed: [ [ "Leon", "Florin", "" ], [ "Gavrilescu", "Marius", "" ], [ "Floria", "Sabina-Adriana", "" ], [ "Minea", "Alina-Adriana", "" ] ]
id: 2401.05137
submitter: Gwenole Quellec
authors: Mostafa El Habib Daho, Yihao Li, Rachid Zeghlache, Hugo Le Boité, Pierre Deman, Laurent Borderie, Hugang Ren, Niranchana Mannivanan, Capucine Lepicard, Béatrice Cochener, Aude Couturier, Ramin Tadayoni, Pierre-Henri Conze, Mathieu Lamard, Gwenolé Quellec
title: DISCOVER: 2-D Multiview Summarization of Optical Coherence Tomography Angiography for Automatic Diabetic Retinopathy Diagnosis
comments: null
journal-ref: Artificial Intelligence in Medicine 2024, 102803
doi: 10.1016/j.artmed.2024.102803
report-no: null
categories: eess.IV cs.CV
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract: Diabetic Retinopathy (DR), an ocular complication of diabetes, is a leading cause of blindness worldwide. Traditionally, DR is monitored using Color Fundus Photography (CFP), a widespread 2-D imaging modality. However, DR classifications based on CFP have poor predictive power, resulting in suboptimal DR management. Optical Coherence Tomography Angiography (OCTA) is a recent 3-D imaging modality offering enhanced structural and functional information (blood flow) with a wider field of view. This paper investigates automatic DR severity assessment using 3-D OCTA. A straightforward solution to this task is a 3-D neural network classifier. However, 3-D architectures have numerous parameters and typically require many training samples. A lighter solution consists in using 2-D neural network classifiers processing 2-D en-face (or frontal) projections and/or 2-D cross-sectional slices. Such an approach mimics the way ophthalmologists analyze OCTA acquisitions: 1) en-face flow maps are often used to detect avascular zones and neovascularization, and 2) cross-sectional slices are commonly analyzed to detect macular edemas, for instance. However, arbitrary data reduction or selection might result in information loss. Two complementary strategies are thus proposed to optimally summarize OCTA volumes with 2-D images: 1) a parametric en-face projection optimized through deep learning and 2) a cross-sectional slice selection process controlled through gradient-based attribution. The full summarization and DR classification pipeline is trained from end to end. The automatic 2-D summary can be displayed in a viewer or printed in a report to support the decision. We show that the proposed 2-D summarization and classification pipeline outperforms direct 3-D classification with the advantage of improved interpretability.
versions: [ { "created": "Wed, 10 Jan 2024 13:06:40 GMT", "version": "v1" } ]
update_date: 2024-02-07
authors_parsed: [ [ "Daho", "Mostafa El Habib", "" ], [ "Li", "Yihao", "" ], [ "Zeghlache", "Rachid", "" ], [ "Boité", "Hugo Le", "" ], [ "Deman", "Pierre", "" ], [ "Borderie", "Laurent", "" ], [ "Ren", "Hugang", "" ], [ "Mannivanan", "Niranchana", "" ], [ "Lepicard", "Capucine", "" ], [ "Cochener", "Béatrice", "" ], [ "Couturier", "Aude", "" ], [ "Tadayoni", "Ramin", "" ], [ "Conze", "Pierre-Henri", "" ], [ "Lamard", "Mathieu", "" ], [ "Quellec", "Gwenolé", "" ] ]
id: 2401.05390
submitter: Rebeca Díaz-Redondo
authors: Francisco Troncoso-Pastoriza, Pablo Eguía-Oller, Rebeca P. Díaz-Redondo, Enrique Granada-Álvarez
title: Generation of BIM data based on the automatic detection, identification and localization of lamps in buildings
comments: 12 pages, 19 figures, journal
journal-ref: Sustainable cities and society, 2018, vol. 36, p. 59-70
doi: 10.1016/j.scs.2017.10.015
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: In this paper we introduce a method that supports the detection, identification and localization of lamps in a building, with the main goal of automatically feeding its energy model by means of Building Information Modeling (BIM) methods. The proposed method, thus, provides useful information to apply energy-saving strategies to reduce energy consumption in the building sector through the correct management of the lighting infrastructure. Based on the unique geometry and brightness of lamps and the use of only greyscale images, our methodology is able to obtain accurate results despite its low computational needs, resulting in near-real-time processing. The main novelty is that the focus of the candidate search is not over the entire image but instead only on a limited region that summarizes the specific characteristics of the lamp. The information obtained from our approach was used on the Green Building XML Schema to illustrate the automatic generation of BIM data from the results of the algorithm.
versions: [ { "created": "Mon, 18 Dec 2023 16:54:48 GMT", "version": "v1" } ]
update_date: 2024-01-12
authors_parsed: [ [ "Troncoso-Pastoriza", "Francisco", "" ], [ "Eguía-Oller", "Pablo", "" ], [ "Díaz-Redondo", "Rebeca P.", "" ], [ "Granada-Álvarez", "Enrique", "" ] ]
id: 2401.05395
submitter: Bowei Chen
authors: Ruixin Ding and Bowei Chen and James M. Wilson and Zhi Yan and Yufei Huang
title: SRNI-CAR: A comprehensive dataset for analyzing the Chinese automotive market
comments: null
journal-ref: Proceedings of 2023 IEEE International Conference on Big Data (BigData), pages 3405-3412
doi: null
report-no: null
categories: econ.GN cs.AI cs.CY cs.LG q-fin.EC
license: http://creativecommons.org/licenses/by/4.0/
abstract: The automotive industry plays a critical role in the global economy, and particularly important is the expanding Chinese automobile market due to its immense scale and influence. However, existing automotive sector datasets are limited in their coverage, failing to adequately consider the growing demand for more and diverse variables. This paper aims to bridge this data gap by introducing a comprehensive dataset spanning the years from 2016 to 2022, encompassing sales data, online reviews, and a wealth of information related to the Chinese automotive industry. This dataset serves as a valuable resource, significantly expanding the available data. Its impact extends to various dimensions, including improving forecasting accuracy, expanding the scope of business applications, informing policy development and regulation, and advancing academic research within the automotive sector. To illustrate the dataset's potential applications in both business and academic contexts, we present two application examples. Our developed dataset enhances our understanding of the Chinese automotive market and offers a valuable tool for researchers, policymakers, and industry stakeholders worldwide.
versions: [ { "created": "Tue, 19 Dec 2023 09:32:32 GMT", "version": "v1" } ]
update_date: 2024-01-12
authors_parsed: [ [ "Ding", "Ruixin", "" ], [ "Chen", "Bowei", "" ], [ "Wilson", "James M.", "" ], [ "Yan", "Zhi", "" ], [ "Huang", "Yufei", "" ] ]
id: 2401.05398
submitter: Wenwen Li
authors: Wenwen Li
title: GeoAI in Social Science
comments: Artificial Intelligence; social science; deep learning; convergence; knowledge graph
journal-ref: Handbook of Spatial Analysis in the Social Sciences, 291 (2022)
doi: null
report-no: null
categories: cs.CY cs.AI
license: http://creativecommons.org/licenses/by/4.0/
abstract: GeoAI, or geospatial artificial intelligence, is an exciting new area that leverages artificial intelligence (AI), geospatial big data, and massive computing power to solve problems with high automation and intelligence. This paper reviews the progress of AI in social science research, highlighting important advancements in using GeoAI to fill critical data and knowledge gaps. It also discusses the importance of breaking down data silos, accelerating convergence among GeoAI research methods, as well as moving GeoAI beyond geospatial benefits.
versions: [ { "created": "Tue, 19 Dec 2023 20:23:18 GMT", "version": "v1" } ]
update_date: 2024-01-12
authors_parsed: [ [ "Li", "Wenwen", "" ] ]
id: 2401.05577
submitter: Chenbin Pan
authors: Chenbin Pan, Burhaneddin Yaman, Tommaso Nesti, Abhirup Mallik, Alessandro G Allievi, Senem Velipasalar, Liu Ren
title: VLP: Vision Language Planning for Autonomous Driving
comments: CVPR 2024
journal-ref: CVPR 2024
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Autonomous driving is a complex and challenging task that aims at safe motion planning through scene understanding and reasoning. While vision-only autonomous driving methods have recently achieved notable performance through enhanced scene understanding, several key issues, including lack of reasoning, low generalization performance and long-tail scenarios, still need to be addressed. In this paper, we present VLP, a novel Vision-Language-Planning framework that exploits language models to bridge the gap between linguistic understanding and autonomous driving. VLP enhances autonomous driving systems by strengthening both the source memory foundation and the self-driving car's contextual understanding. VLP achieves state-of-the-art end-to-end planning performance on the challenging NuScenes dataset by achieving 35.9% and 60.5% reduction in terms of average L2 error and collision rates, respectively, compared to the previous best method. Moreover, VLP shows improved performance in challenging long-tail scenarios and strong generalization capabilities when faced with new urban environments.
versions: [ { "created": "Wed, 10 Jan 2024 23:00:40 GMT", "version": "v1" }, { "created": "Sun, 14 Jan 2024 16:47:10 GMT", "version": "v2" }, { "created": "Sat, 9 Mar 2024 20:22:04 GMT", "version": "v3" } ]
update_date: 2024-03-12
authors_parsed: [ [ "Pan", "Chenbin", "" ], [ "Yaman", "Burhaneddin", "" ], [ "Nesti", "Tommaso", "" ], [ "Mallik", "Abhirup", "" ], [ "Allievi", "Alessandro G", "" ], [ "Velipasalar", "Senem", "" ], [ "Ren", "Liu", "" ] ]
id: 2401.05610
submitter: Victoria Magdalena Dax
authors: Victoria M. Dax, Jiachen Li, Kevin Leahy, Mykel J. Kochenderfer
title: Graph Q-Learning for Combinatorial Optimization
comments: null
journal-ref: GLIndA Workshop NeurIPS 2022
doi: null
report-no: null
categories: cs.LG cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Graph-structured data is ubiquitous throughout natural and social sciences, and Graph Neural Networks (GNNs) have recently been shown to be effective at solving prediction and inference problems on graph data. In this paper, we propose and demonstrate that GNNs can be applied to solve Combinatorial Optimization (CO) problems. CO concerns optimizing a function over a discrete solution space that is often intractably large. To learn to solve CO problems, we formulate the optimization process as a sequential decision making problem, where the return is related to how close the candidate solution is to optimality. We use a GNN to learn a policy to iteratively build increasingly promising candidate solutions. We present preliminary evidence that GNNs trained through Q-Learning can solve CO problems with performance approaching state-of-the-art heuristic-based solvers, using only a fraction of the parameters and training time.
versions: [ { "created": "Thu, 11 Jan 2024 01:15:28 GMT", "version": "v1" } ]
update_date: 2024-01-12
authors_parsed: [ [ "Dax", "Victoria M.", "" ], [ "Li", "Jiachen", "" ], [ "Leahy", "Kevin", "" ], [ "Kochenderfer", "Mykel J.", "" ] ]
id: 2401.05698
submitter: Licai Sun
authors: Licai Sun, Zheng Lian, Bin Liu, Jianhua Tao
title: HiCMAE: Hierarchical Contrastive Masked Autoencoder for Self-Supervised Audio-Visual Emotion Recognition
comments: Accepted by Information Fusion. The code is available at https://github.com/sunlicai/HiCMAE
journal-ref: Information Fusion, 2024
doi: 10.1016/j.inffus.2024.102382
report-no: null
categories: cs.CV cs.HC cs.MM cs.SD eess.AS
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Audio-Visual Emotion Recognition (AVER) has garnered increasing attention in recent years for its critical role in creating emotion-aware intelligent machines. Previous efforts in this area are dominated by the supervised learning paradigm. Despite significant progress, supervised learning is meeting its bottleneck due to the longstanding data scarcity issue in AVER. Motivated by recent advances in self-supervised learning, we propose Hierarchical Contrastive Masked Autoencoder (HiCMAE), a novel self-supervised framework that leverages large-scale self-supervised pre-training on vast unlabeled audio-visual data to promote the advancement of AVER. Following prior art in self-supervised audio-visual representation learning, HiCMAE adopts two primary forms of self-supervision for pre-training, namely masked data modeling and contrastive learning. Unlike these methods, which focus exclusively on top-layer representations while neglecting explicit guidance of intermediate layers, HiCMAE develops a three-pronged strategy to foster hierarchical audio-visual feature learning and improve the overall quality of learned representations. To verify the effectiveness of HiCMAE, we conduct extensive experiments on 9 datasets covering both categorical and dimensional AVER tasks. Experimental results show that our method significantly outperforms state-of-the-art supervised and self-supervised audio-visual methods, which indicates that HiCMAE is a powerful audio-visual emotion representation learner. Codes and models will be publicly available at https://github.com/sunlicai/HiCMAE.
versions: [ { "created": "Thu, 11 Jan 2024 07:00:07 GMT", "version": "v1" }, { "created": "Mon, 1 Apr 2024 07:19:40 GMT", "version": "v2" } ]
update_date: 2024-04-02
authors_parsed: [ [ "Sun", "Licai", "" ], [ "Lian", "Zheng", "" ], [ "Liu", "Bin", "" ], [ "Tao", "Jianhua", "" ] ]
id: 2401.05815
submitter: Jan Kaiser
authors: Jan Kaiser, Chenran Xu, Annika Eichler, Andrea Santamaria Garcia
title: Cheetah: Bridging the Gap Between Machine Learning and Particle Accelerator Physics with High-Speed, Differentiable Simulations
comments: 16 pages, 9 figures, 3 tables
journal-ref: Phys. Rev. Accel. Beams 27 (2024) 054601
doi: 10.1103/PhysRevAccelBeams.27.054601
report-no: PUBDB-2023-07854
categories: physics.acc-ph cs.AI cs.LG
license: http://creativecommons.org/licenses/by/4.0/
abstract: Machine learning has emerged as a powerful solution to the modern challenges in accelerator physics. However, the limited availability of beam time, the computational cost of simulations, and the high-dimensionality of optimisation problems pose significant challenges in generating the required data for training state-of-the-art machine learning models. In this work, we introduce Cheetah, a PyTorch-based high-speed differentiable linear-beam dynamics code. Cheetah enables the fast collection of large data sets by reducing computation times by multiple orders of magnitude and facilitates efficient gradient-based optimisation for accelerator tuning and system identification. This positions Cheetah as a user-friendly, readily extensible tool that integrates seamlessly with widely adopted machine learning tools. We showcase the utility of Cheetah through five examples, including reinforcement learning training, gradient-based beamline tuning, gradient-based system identification, physics-informed Bayesian optimisation priors, and modular neural network surrogate modelling of space charge effects. The use of such a high-speed differentiable simulation code will simplify the development of machine learning-based methods for particle accelerators and fast-track their integration into everyday operations of accelerator facilities.
versions: [ { "created": "Thu, 11 Jan 2024 10:30:40 GMT", "version": "v1" } ]
update_date: 2024-05-30
authors_parsed: [ [ "Kaiser", "Jan", "" ], [ "Xu", "Chenran", "" ], [ "Eichler", "Annika", "" ], [ "Garcia", "Andrea Santamaria", "" ] ]
id: 2401.05822
submitter: Andrew Langworthy
authors: Michael Free, Andrew Langworthy, Mary Dimitropoulaki, Simon Thompson
title: Towards Goal-Oriented Agents for Evolving Problems Observed via Conversation
comments: 15 pages, 7 figures
journal-ref: Artificial Intelligence XL. SGAI 2023. Lecture Notes in Computer Science, vol 14381. 142-155
doi: 10.1007/978-3-031-47994-6_11
report-no: null
categories: cs.AI cs.CL
license: http://creativecommons.org/licenses/by/4.0/
abstract: The objective of this work is to train a chatbot capable of solving evolving problems through conversing with a user about a problem the chatbot cannot directly observe. The system consists of a virtual problem (in this case a simple game), a simulated user capable of answering natural language questions that can observe and perform actions on the problem, and a Deep Q-Network (DQN)-based chatbot architecture. The chatbot is trained with the goal of solving the problem through dialogue with the simulated user using reinforcement learning. The contributions of this paper are as follows: a proposed architecture for applying a conversational DQN-based agent to evolving problems, an exploration of the effect of training methods such as curriculum learning on model performance, and an examination of the effect of modified reward functions in the case of increasing environment complexity.
versions: [ { "created": "Thu, 11 Jan 2024 10:38:43 GMT", "version": "v1" } ]
update_date: 2024-01-12
authors_parsed: [ [ "Free", "Michael", "" ], [ "Langworthy", "Andrew", "" ], [ "Dimitropoulaki", "Mary", "" ], [ "Thompson", "Simon", "" ] ]
id: 2401.05971
submitter: Rouwan Wu
authors: Rouwan Wu, Xiaoya Cheng, Juelin Zhu, Xuxiang Liu, Maojun Zhang, Shen Yan
title: UAVD4L: A Large-Scale Dataset for UAV 6-DoF Localization
comments: null
journal-ref: 3DV 2024
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Despite significant progress in global localization of Unmanned Aerial Vehicles (UAVs) in GPS-denied environments, existing methods remain constrained by the availability of datasets. Current datasets often focus on small-scale scenes and lack viewpoint variability, accurate ground truth (GT) pose, and UAV built-in sensor data. To address these limitations, we introduce a large-scale 6-DoF UAV dataset for localization (UAVD4L) and develop a two-stage 6-DoF localization pipeline (UAVLoc), which consists of offline synthetic data generation and online visual localization. Additionally, based on the 6-DoF estimator, we design a hierarchical system for tracking ground targets in 3D space. Experimental results on the new dataset demonstrate the effectiveness of the proposed approach. Code and dataset are available at https://github.com/RingoWRW/UAVD4L
versions: [ { "created": "Thu, 11 Jan 2024 15:19:21 GMT", "version": "v1" } ]
update_date: 2024-01-12
authors_parsed: [ [ "Wu", "Rouwan", "" ], [ "Cheng", "Xiaoya", "" ], [ "Zhu", "Juelin", "" ], [ "Liu", "Xuxiang", "" ], [ "Zhang", "Maojun", "" ], [ "Yan", "Shen", "" ] ]
id: 2401.05994
submitter: Viktor Reshniak
authors: Qian Gong, Jieyang Chen, Ben Whitney, Xin Liang, Viktor Reshniak, Tania Banerjee, Jaemoon Lee, Anand Rangarajan, Lipeng Wan, Nicolas Vidal, Qing Liu, Ana Gainaru, Norbert Podhorszki, Richard Archibald, Sanjay Ranka, Scott Klasky
title: MGARD: A multigrid framework for high-performance, error-controlled data compression and refactoring
comments: 20 pages, 8 figures
journal-ref: SoftwareX, 24(2023), 101590
doi: 10.1016/j.softx.2023.101590
report-no: null
categories: cs.CV cs.NA math.NA
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: We describe MGARD, a software providing MultiGrid Adaptive Reduction for floating-point scientific data on structured and unstructured grids. With exceptional data compression capability and precise error control, MGARD addresses a wide range of requirements, including storage reduction, high-performance I/O, and in-situ data analysis. It features a unified application programming interface (API) that seamlessly operates across diverse computing architectures. MGARD has been optimized with highly-tuned GPU kernels and efficient memory and device management mechanisms, ensuring scalable and rapid operations.
versions: [ { "created": "Thu, 11 Jan 2024 15:52:20 GMT", "version": "v1" } ]
update_date: 2024-01-12
authors_parsed: [ [ "Gong", "Qian", "" ], [ "Chen", "Jieyang", "" ], [ "Whitney", "Ben", "" ], [ "Liang", "Xin", "" ], [ "Reshniak", "Viktor", "" ], [ "Banerjee", "Tania", "" ], [ "Lee", "Jaemoon", "" ], [ "Rangarajan", "Anand", "" ], [ "Wan", "Lipeng", "" ], [ "Vidal", "Nicolas", "" ], [ "Liu", "Qing", "" ], [ "Gainaru", "Ana", "" ], [ "Podhorszki", "Norbert", "" ], [ "Archibald", "Richard", "" ], [ "Ranka", "Sanjay", "" ], [ "Klasky", "Scott", "" ] ]
id: 2401.06019
submitter: Pablo Alonso Pérez
authors: Pablo Alonso, Jon Ander Iñiguez de Gordoa, Juan Diego Ortega, Sara García, Francisco Javier Iriarte, Marcos Nieto
title: Automatic UAV-based Airport Pavement Inspection Using Mixed Real and Virtual Scenarios
comments: 12 pages, 6 figures, published in proceedings of 15th International Conference on Machine Vision (ICMV)
journal-ref: Proc. SPIE 12701, Fifteenth International Conference on Machine Vision (ICMV 2022), 1270118
doi: 10.1117/12.2679734
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract: Runway and taxiway pavements are exposed to high stress during their projected lifetime, which inevitably leads to a decrease in their condition over time. To make sure airport pavements support uninterrupted and resilient operations, it is of utmost importance to monitor their condition and conduct regular inspections. UAV-based inspection is recently gaining importance due to its wide-range monitoring capabilities and reduced cost. In this work, we propose a vision-based approach to automatically identify pavement distress using images captured by UAVs. The proposed method is based on Deep Learning (DL) to segment defects in the image. The DL architecture leverages the low computational capacities of embedded systems in UAVs by using an optimised implementation of EfficientNet feature extraction and Feature Pyramid Network segmentation. To deal with the lack of annotated data for training, we have developed a synthetic dataset generation methodology to extend available distress datasets. We demonstrate that the use of a mixed dataset composed of synthetic and real training images yields better results when testing the training models in real application scenarios.
versions: [ { "created": "Thu, 11 Jan 2024 16:30:07 GMT", "version": "v1" } ]
update_date: 2024-01-12
authors_parsed: [ [ "Alonso", "Pablo", "" ], [ "de Gordoa", "Jon Ander Iñiguez", "" ], [ "Ortega", "Juan Diego", "" ], [ "García", "Sara", "" ], [ "Iriarte", "Francisco Javier", "" ], [ "Nieto", "Marcos", "" ] ]
id: 2401.06148
submitter: Guillaume Jaume
authors: Andrew H. Song, Guillaume Jaume, Drew F.K. Williamson, Ming Y. Lu, Anurag Vaidya, Tiffany R. Miller, Faisal Mahmood
title: Artificial Intelligence for Digital and Computational Pathology
comments: null
journal-ref: Nature Reviews Bioengineering 2023
doi: 10.1038/s44222-023-00096-8
report-no: null
categories: eess.IV cs.AI cs.CV q-bio.QM
license: http://creativecommons.org/licenses/by/4.0/
abstract: Advances in digitizing tissue slides and the fast-paced progress in artificial intelligence, including deep learning, have boosted the field of computational pathology. This field holds tremendous potential to automate clinical diagnosis, predict patient prognosis and response to therapy, and discover new morphological biomarkers from tissue images. Some of these artificial intelligence-based systems are now getting approved to assist clinical diagnosis; however, technical barriers remain for their widespread clinical adoption and integration as a research tool. This Review consolidates recent methodological advances in computational pathology for predicting clinical end points in whole-slide images and highlights how these developments enable the automation of clinical practice and the discovery of new biomarkers. We then provide future perspectives as the field expands into a broader range of clinical and research tasks with increasingly diverse modalities of clinical data.
versions: [ { "created": "Wed, 13 Dec 2023 00:22:52 GMT", "version": "v1" } ]
update_date: 2024-01-17
authors_parsed: [ [ "Song", "Andrew H.", "" ], [ "Jaume", "Guillaume", "" ], [ "Williamson", "Drew F. K.", "" ], [ "Lu", "Ming Y.", "" ], [ "Vaidya", "Anurag", "" ], [ "Miller", "Tiffany R.", "" ], [ "Mahmood", "Faisal", "" ] ]
2401.06210
Hao-Ming Fu
Hao-Ming Fu, Pu-Jen Cheng
Learning Unsupervised Semantic Document Representation for Fine-grained Aspect-based Sentiment Analysis
International ACM SIGIR Conference 2019
SIGIR 2019: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, Pages 1105 to 1108
10.1145/3331184.3331320
null
cs.LG cs.AI cs.CL cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Document representation is the core of many NLP tasks on machine understanding. A general representation learned in an unsupervised manner preserves generality and can be used for various applications. In practice, sentiment analysis (SA) has been a challenging task that is regarded as deeply semantics-related and is often used to assess general representations. Existing methods on unsupervised document representation learning can be separated into two families: sequential ones, which explicitly take the ordering of words into consideration, and non-sequential ones, which do not explicitly do so. However, both of them suffer from their own weaknesses. In this paper, we propose a model that overcomes difficulties encountered by both families of methods. Experiments show that our model outperforms state-of-the-art methods on popular SA datasets and a fine-grained aspect-based SA by a large margin.
[ { "created": "Thu, 11 Jan 2024 18:59:52 GMT", "version": "v1" } ]
2024-01-15
[ [ "Fu", "Hao-Ming", "" ], [ "Cheng", "Pu-Jen", "" ] ]
2401.06407
Jincheng Zhang
Jincheng Zhang, Artur Wolek, and Andrew R. Willis
UAV-Borne Mapping Algorithms for Low-Altitude and High-Speed Drone Applications
null
Sensors 24, no. 7: 2204
10.3390/s24072204
null
cs.RO cs.CV
http://creativecommons.org/licenses/by/4.0/
This article presents an analysis of current state-of-the-art sensors and how these sensors work with several mapping algorithms for UAV (Unmanned Aerial Vehicle) applications, focusing on low-altitude and high-speed scenarios. A new experimental construct is created using highly realistic environments made possible by integrating the AirSim simulator with Google 3D maps models using the Cesium Tiles plugin. Experiments are conducted in this high-realism simulated environment to evaluate the performance of three distinct mapping algorithms: (1) Direct Sparse Odometry (DSO), (2) Stereo DSO (SDSO), and (3) DSO Lite (DSOL). Experimental results evaluate algorithms based on their measured geometric accuracy and computational speed. The results provide valuable insights into the strengths and limitations of each algorithm. Findings quantify compromises in UAV algorithm selection, allowing researchers to find the mapping solution best suited to their application, which often requires a compromise between computational performance and the density and accuracy of geometric map estimates. Results indicate that for UAVs with restrictive computing resources, DSOL is the best option. For systems with payload capacity and modest compute resources, SDSO is the best option. If only one camera is available, DSO is the option to choose for applications that require dense mapping results.
[ { "created": "Fri, 12 Jan 2024 07:04:44 GMT", "version": "v1" }, { "created": "Fri, 29 Mar 2024 18:02:27 GMT", "version": "v2" } ]
2024-04-02
[ [ "Zhang", "Jincheng", "" ], [ "Wolek", "Artur", "" ], [ "Willis", "Andrew R.", "" ] ]
2401.06495
Thibaud Leteno
Thibaud Leteno, Antoine Gourru, Charlotte Laclau, Christophe Gravier
An investigation of structures responsible for gender bias in BERT and DistilBERT
null
21st International Symposium on Intelligent Data Analysis, IDA 2023
10.1007/978-3-031-30047-9_20
null
cs.CL cs.CY cs.LG
http://creativecommons.org/licenses/by/4.0/
In recent years, large Transformer-based Pre-trained Language Models (PLM) have changed the Natural Language Processing (NLP) landscape by pushing the performance boundaries of the state-of-the-art on a wide variety of tasks. However, this performance gain goes along with an increase in complexity, and as a result, the size of such models (up to billions of parameters) represents a constraint for their deployment on embedded devices or short-inference-time tasks. To cope with this situation, compressed models emerged (e.g. DistilBERT), democratizing their usage in a growing number of applications that impact our daily lives. A crucial issue is the fairness of the predictions made by both PLMs and their distilled counterparts. In this paper, we propose an empirical exploration of this problem by formalizing two questions: (1) Can we identify the neural mechanism(s) responsible for gender bias in BERT (and by extension DistilBERT)? (2) Does distillation tend to accentuate or mitigate gender bias (e.g. is DistilBERT more prone to gender bias than its uncompressed version, BERT)? Our findings are the following: (I) one cannot identify a specific layer that produces bias; (II) every attention head uniformly encodes bias, except in the context of underrepresented classes with a high imbalance of the sensitive attribute; (III) this subset of heads is different as we fine-tune the network again; (IV) bias is more homogeneously produced by the heads in the distilled model.
[ { "created": "Fri, 12 Jan 2024 10:42:20 GMT", "version": "v1" } ]
2024-01-15
[ [ "Leteno", "Thibaud", "" ], [ "Gourru", "Antoine", "" ], [ "Laclau", "Charlotte", "" ], [ "Gravier", "Christophe", "" ] ]
2401.06588
Giampiero Salvi
Giampiero Salvi
Dynamic Behaviour of Connectionist Speech Recognition with Strong Latency Constraints
null
Speech Communication Volume 48, Issue 7, July 2006, Pages 802-818
10.1016/j.specom.2005.05.005
null
eess.AS cs.AI cs.CV cs.LG cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes the use of connectionist techniques in phonetic speech recognition with strong latency constraints. The constraints are imposed by the task of deriving the lip movements of a synthetic face in real time from the speech signal, by feeding the phonetic string into an articulatory synthesiser. Particular attention has been paid to analysing the interaction between the time evolution model learnt by the multi-layer perceptrons and the transition model imposed by the Viterbi decoder, in different latency conditions. Two experiments were conducted in which the time dependencies in the language model (LM) were controlled by a parameter. The results show a strong interaction between the three factors involved, namely the neural network topology, the length of time dependencies in the LM and the decoder latency.
[ { "created": "Fri, 12 Jan 2024 14:10:28 GMT", "version": "v1" } ]
2024-01-15
[ [ "Salvi", "Giampiero", "" ] ]
2401.06654
Stefan Bl\"ucher
Stefan Bl\"ucher, Johanna Vielhaben, Nils Strodthoff
Decoupling Pixel Flipping and Occlusion Strategy for Consistent XAI Benchmarks
28 pages, 8 figures
Version published by Transactions on Machine Learning Research in 2024 (TMLR ISSN 2835-8856) https://openreview.net/forum?id=bIiLXdtUVM
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Feature removal is a central building block for eXplainable AI (XAI), both for occlusion-based explanations (Shapley values) as well as their evaluation (pixel flipping, PF). However, occlusion strategies can vary significantly from simple mean replacement up to inpainting with state-of-the-art diffusion models. This ambiguity limits the usefulness of occlusion-based approaches. For example, PF benchmarks lead to contradicting rankings. This is amplified by competing PF measures: Features are either removed starting with most influential first (MIF) or least influential first (LIF). This study proposes two complementary perspectives to resolve this disagreement problem. Firstly, we address the common criticism of occlusion-based XAI, that artificial samples lead to unreliable model evaluations. We propose to measure the reliability by the R(eference)-Out-of-Model-Scope (OMS) score. The R-OMS score enables a systematic comparison of occlusion strategies and resolves the disagreement problem by grouping consistent PF rankings. Secondly, we show that the insightfulness of MIF and LIF is conversely dependent on the R-OMS score. To leverage this, we combine the MIF and LIF measures into the symmetric relevance gain (SRG) measure. This breaks the inherent connection to the underlying occlusion strategy and leads to consistent rankings. This resolves the disagreement problem, which we verify for a set of 40 different occlusion strategies.
[ { "created": "Fri, 12 Jan 2024 16:01:17 GMT", "version": "v1" } ]
2024-08-27
[ [ "Blücher", "Stefan", "" ], [ "Vielhaben", "Johanna", "" ], [ "Strodthoff", "Nils", "" ] ]
2401.06757
Muhammad Naveed Riaz
Muhammad Naveed Riaz, Maciej Wielgosz, Abel Garcia Romera, Antonio M. Lopez
Synthetic Data Generation Framework, Dataset, and Efficient Deep Model for Pedestrian Intention Prediction
null
26th IEEE International Conference on Intelligent Transportation Systems ITSC 2023
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pedestrian intention prediction is crucial for autonomous driving. In particular, knowing if pedestrians are going to cross in front of the ego-vehicle is core to performing safe and comfortable maneuvers. Creating accurate and fast models that predict such intentions from sequential images is challenging. A factor contributing to this is the lack of datasets with diverse crossing and non-crossing (C/NC) scenarios. We address this scarcity by introducing a framework, named ARCANE, which allows programmatically generating synthetic datasets consisting of C/NC video clip samples. As an example, we use ARCANE to generate a large and diverse dataset named PedSynth. We show how PedSynth complements widely used real-world datasets such as JAAD and PIE, thus enabling more accurate models for C/NC prediction. Considering the onboard deployment of C/NC prediction models, we also propose a deep model named PedGNN, which is fast and has a very low memory footprint. PedGNN is based on a GNN-GRU architecture that takes a sequence of pedestrian skeletons as input to predict crossing intentions.
[ { "created": "Fri, 12 Jan 2024 18:44:01 GMT", "version": "v1" }, { "created": "Sat, 15 Jun 2024 13:44:22 GMT", "version": "v2" } ]
2024-06-18
[ [ "Riaz", "Muhammad Naveed", "" ], [ "Wielgosz", "Maciej", "" ], [ "Romera", "Abel Garcia", "" ], [ "Lopez", "Antonio M.", "" ] ]
2401.06787
Mahdi Miraz
Sristy Shidul Nath, Razuan Karim and Mahdi H. Miraz
Deep Learning Based Cyberbullying Detection in Bangla Language
null
Annals of Emerging Technologies in Computing (AETiC), Print ISSN: 2516-0281, Online ISSN: 2516-029X, pp. 50-65, Vol. 8, No. 1, 1st January 2024, Available: http://aetic.theiaer.org/archive/v8/v8n1/p5.html
10.33166/AETiC.2024.01.005
null
cs.CL cs.AI cs.LG cs.SI
http://creativecommons.org/licenses/by/4.0/
The Internet is currently the largest platform for global communication including expressions of opinions, reviews, contents, images, videos and so forth. Moreover, social media has now become a very broad and highly engaging platform due to its immense popularity and swift adoption trend. Increased social networking, however, also has detrimental impacts on society, leading to a range of unwanted phenomena, such as online assault, intimidation, digital bullying, criminality and trolling. Hence, cyberbullying has become a pervasive and worrying problem that poses considerable psychological and emotional harm to people, particularly teens and young adults. In order to lessen its negative effects and provide victims with prompt support, a great deal of research to identify cyberbullying instances at various online platforms is emerging. In comparison to other languages, Bangla (also known as Bengali) has fewer research studies in this domain. This study demonstrates a deep learning strategy for identifying cyberbullying in Bengali, using a dataset of 12282 versatile comments from multiple social media sites. In this study, a two-layer bidirectional long short-term memory (Bi-LSTM) model has been built to identify cyberbullying, using a variety of optimisers as well as 5-fold cross-validation. To evaluate the functionality and efficacy of the proposed system, rigorous assessment and validation procedures have been employed throughout the project. The results of this study reveal that the proposed model's accuracy, using the momentum-based stochastic gradient descent (SGD) optimiser, is 94.46%. It also achieves a higher accuracy of 95.08% and an F1 score of 95.23% using the Adam optimiser, as well as an accuracy of 94.31% under 5-fold cross-validation.
[ { "created": "Sun, 7 Jan 2024 04:58:59 GMT", "version": "v1" } ]
2024-01-17
[ [ "Nath", "Sristy Shidul", "" ], [ "Karim", "Razuan", "" ], [ "Miraz", "Mahdi H.", "" ] ]
2401.07042
Jos\'e Ra\'ul Romero
Rafael Barbudo and Aurora Ram\'irez and Francisco Servant and Jos\'e Ra\'ul Romero
GEML: A Grammar-based Evolutionary Machine Learning Approach for Design-Pattern Detection
27 pages, 18 tables, 10 figures, journal paper
Journal of Systems and Software, Volume 175, May 2021, 110919
10.1016/j.jss.2021.110919
null
cs.SE cs.AI
http://creativecommons.org/licenses/by/4.0/
Design patterns (DPs) are recognised as a good practice in software development. However, the lack of appropriate documentation often hampers traceability, and their benefits are blurred among thousands of lines of code. Automatic methods for DP detection have become relevant but are usually based on the rigid analysis of either software metrics or specific properties of the source code. We propose GEML, a novel detection approach based on evolutionary machine learning using software properties of diverse nature. Firstly, GEML makes use of an evolutionary algorithm to extract those characteristics that better describe the DP, formulated in terms of human-readable rules, whose syntax is conformant with a context-free grammar. Secondly, a rule-based classifier is built to predict whether new code contains a hidden DP implementation. GEML has been validated over five DPs taken from a public repository recurrently adopted by machine learning studies. Then, we increase this number up to 15 diverse DPs, showing its effectiveness and robustness in terms of detection capability. An initial parameter study served to tune a parameter setup whose performance guarantees the general applicability of this approach without the need to adjust complex parameters to a specific pattern. Finally, a demonstration tool is also provided.
[ { "created": "Sat, 13 Jan 2024 11:05:24 GMT", "version": "v1" } ]
2024-01-17
[ [ "Barbudo", "Rafael", "" ], [ "Ramírez", "Aurora", "" ], [ "Servant", "Francisco", "" ], [ "Romero", "José Raúl", "" ] ]
2401.07072
Jos\'e Ra\'ul Romero
Pedro Delgado-P\'erez and Aurora Ram\'irez and Kevin J. Valle-G\'omez and Inmaculada Medina-Bulo and Jos\'e Ra\'ul Romero
InterEvo-TR: Interactive Evolutionary Test Generation With Readability Assessment
17 pages, 10 figures, 5 tables, journal paper
IEEE Transactions on Software Engineering (Volume: 49, Issue: 4, 01 April 2023)
10.1109/TSE.2022.3227418
null
cs.SE cs.AI
http://creativecommons.org/licenses/by/4.0/
Automated test case generation has proven to be useful to reduce the usually high expenses of software testing. However, several studies have also noted the skepticism of testers regarding the comprehension of generated test suites when compared to manually designed ones. This fact suggests that involving testers in the test generation process could be helpful to increase their acceptance of automatically-produced test suites. In this paper, we propose incorporating interactive readability assessments made by a tester into EvoSuite, a widely-known evolutionary test generation tool. Our approach, InterEvo-TR, interacts with the tester at different moments during the search and shows different test cases covering the same coverage target for their subjective evaluation. The design of such an interactive approach involves a schedule of interaction, a method to diversify the selected targets, a plan to save and handle the readability values, and some mechanisms to customize the level of engagement in the revision, among other aspects. To analyze the potential and practicability of our proposal, we conduct a controlled experiment in which 39 participants, including academics, professional developers, and student collaborators, interact with InterEvo-TR. Our results show that the strategy to select and present intermediate results is effective for the purpose of readability assessment. Furthermore, the participants' actions and responses to a questionnaire allowed us to analyze the aspects influencing test code readability and the benefits and limitations of an interactive approach in the context of test case generation, paving the way for future developments based on interactivity.
[ { "created": "Sat, 13 Jan 2024 13:14:29 GMT", "version": "v1" } ]
2024-01-17
[ [ "Delgado-Pérez", "Pedro", "" ], [ "Ramírez", "Aurora", "" ], [ "Valle-Gómez", "Kevin J.", "" ], [ "Medina-Bulo", "Inmaculada", "" ], [ "Romero", "José Raúl", "" ] ]
2401.07124
Farhad Kooban
Sara Shomal Zadeh, Sina Aalipour birgani, Meisam Khorshidi, Farhad Kooban
Concrete Surface Crack Detection with Convolutional-based Deep Learning Models
11 pages, 3 figures, Journal paper
International Journal of Novel Research in Civil Structural and Earth Sciences, Vol. 10, Issue 3, (2023) pp: (25-35)
10.5281/zenodo.10061654
null
cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by/4.0/
Effective crack detection is pivotal for the structural health monitoring and inspection of buildings. This task presents a formidable challenge to computer vision techniques due to the inherently subtle nature of cracks, which often exhibit low-level features that can be easily confounded with background textures, foreign objects, or irregularities in construction. Furthermore, the presence of issues like non-uniform lighting and construction irregularities poses significant hurdles for autonomous crack detection during building inspection and monitoring. Convolutional neural networks (CNNs) have emerged as a promising framework for crack detection, offering high levels of accuracy and precision. Additionally, the ability to adapt pre-trained networks through transfer learning provides a valuable tool for users, eliminating the need for an in-depth understanding of algorithm intricacies. Nevertheless, it is imperative to acknowledge the limitations and considerations when deploying CNNs, particularly in contexts where the outcomes carry immense significance, such as crack detection in buildings. In this paper, our approach to surface crack detection involves the utilization of various deep-learning models. Specifically, we employ fine-tuning techniques on pre-trained deep learning architectures: VGG19, ResNet50, Inception V3, and EfficientNetV2. These models are chosen for their established performance and versatility in image analysis tasks. We compare deep learning models using precision, recall, and F1 scores.
[ { "created": "Sat, 13 Jan 2024 17:31:12 GMT", "version": "v1" } ]
2024-01-17
[ [ "Zadeh", "Sara Shomal", "" ], [ "birgani", "Sina Aalipour", "" ], [ "Khorshidi", "Meisam", "" ], [ "Kooban", "Farhad", "" ] ]
2401.07139
Yi Xiao
Yi Xiao and Qiangqiang Yuan and Qiang Zhang and Liangpei Zhang
Deep Blind Super-Resolution for Satellite Video
Published in IEEE TGRS
IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1-16, 2023, Art no. 5516316
10.1109/TGRS.2023.3291822
null
cs.CV cs.AI eess.IV
http://creativecommons.org/licenses/by/4.0/
Recent efforts have witnessed remarkable progress in Satellite Video Super-Resolution (SVSR). However, most SVSR methods usually assume the degradation is fixed and known, e.g., bicubic downsampling, which makes them vulnerable in real-world scenes with multiple and unknown degradations. To alleviate this issue, blind SR has thus become a research hotspot. Nevertheless, existing approaches are mainly engaged in blur kernel estimation while losing sight of another critical aspect for VSR tasks: temporal compensation, especially compensating for blurry and smooth pixels with vital sharpness from severely degraded satellite videos. Therefore, this paper proposes a practical Blind SVSR algorithm (BSVSR) to explore more sharp cues by considering the pixel-wise blur levels in a coarse-to-fine manner. Specifically, we employ multi-scale deformable convolution to coarsely aggregate the temporal redundancy into adjacent frames by sliding-window progressive fusion. Then the adjacent features are finely merged into the mid-feature using deformable attention, which measures the blur levels of pixels and assigns more weights to the informative pixels, thus inspiring the representation of sharpness. Moreover, we devise a pyramid spatial transformation module to adjust the solution space of the sharp mid-feature, resulting in flexible feature adaptation in multi-level domains. Quantitative and qualitative evaluations on both simulated and real-world satellite videos demonstrate that our BSVSR performs favorably against state-of-the-art non-blind and blind SR models. Code will be available at https://github.com/XY-boy/Blind-Satellite-VSR
[ { "created": "Sat, 13 Jan 2024 18:56:18 GMT", "version": "v1" } ]
2024-01-17
[ [ "Xiao", "Yi", "" ], [ "Yuan", "Qiangqiang", "" ], [ "Zhang", "Qiang", "" ], [ "Zhang", "Liangpei", "" ] ]
2401.07353
Usman Gohar
Usman Gohar, Michael C. Hunter, Agnieszka Marczak-Czajka, Robyn R. Lutz, Myra B. Cohen, Jane Cleland-Huang
Towards Engineering Fair and Equitable Software Systems for Managing Low-Altitude Airspace Authorizations
null
ICSE-SEIS 2024
10.1145/3639475.3640103
null
cs.SE cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Small Unmanned Aircraft Systems (sUAS) have gained widespread adoption across a diverse range of applications. This has introduced operational complexities within shared airspaces and an increase in reported incidents, raising safety concerns. In response, the U.S. Federal Aviation Administration (FAA) is developing a UAS Traffic Management (UTM) system to control access to airspace based on an sUAS's predicted ability to safely complete its mission. However, a fully automated system capable of swiftly approving or denying flight requests can be prone to bias and must consider safety, transparency, and fairness to diverse stakeholders. In this paper, we present an initial study that explores stakeholders' perspectives on factors that should be considered in an automated system. Results indicate flight characteristics and environmental conditions were perceived as most important but pilot and drone capabilities should also be considered. Further, several respondents indicated an aversion to any AI-supported automation, highlighting the need for full transparency in automated decision-making. Results provide a societal perspective on the challenges of automating UTM flight authorization decisions and help frame the ongoing design of a solution acceptable to the broader sUAS community.
[ { "created": "Sun, 14 Jan 2024 19:40:32 GMT", "version": "v1" }, { "created": "Sat, 3 Feb 2024 14:55:07 GMT", "version": "v2" } ]
2024-02-06
[ [ "Gohar", "Usman", "" ], [ "Hunter", "Michael C.", "" ], [ "Marczak-Czajka", "Agnieszka", "" ], [ "Lutz", "Robyn R.", "" ], [ "Cohen", "Myra B.", "" ], [ "Cleland-Huang", "Jane", "" ] ]
2401.07359
Luigi Scorzato
Luigi Scorzato
Reliability and Interpretability in Science and Deep Learning
To appear in Minds and Machines
Minds & Machines 34, 27 (2024)
10.1007/s11023-024-09682-0
null
cs.AI cs.LG physics.hist-ph
http://creativecommons.org/licenses/by/4.0/
In recent years, the question of the reliability of Machine Learning (ML) methods has acquired significant importance, and the analysis of the associated uncertainties has motivated a growing amount of research. However, most of these studies have applied standard error analysis to ML models, and in particular Deep Neural Network (DNN) models, which represent a rather significant departure from standard scientific modelling. It is therefore necessary to integrate the standard error analysis with a deeper epistemological analysis of the possible differences between DNN models and standard scientific modelling and the possible implications of these differences in the assessment of reliability. This article offers several contributions. First, it emphasises the ubiquitous role of model assumptions (both in ML and traditional Science) against the illusion of theory-free science. Secondly, model assumptions are analysed from the point of view of their (epistemic) complexity, which is shown to be language-independent. It is argued that the high epistemic complexity of DNN models hinders the estimate of their reliability and also their prospect of long-term progress. Some potential ways forward are suggested. Thirdly, this article identifies the close relation between a model's epistemic complexity and its interpretability, as introduced in the context of responsible AI. This clarifies in which sense, and to what extent, the lack of understanding of a model (black-box problem) impacts its interpretability in a way that is independent of individual skills. It also clarifies how interpretability is a precondition for assessing the reliability of any model, which cannot be based on statistical analysis alone. This article focuses on the comparison between traditional scientific models and DNN models, but Random Forest and Logistic Regression models are also briefly considered.
[ { "created": "Sun, 14 Jan 2024 20:14:07 GMT", "version": "v1" }, { "created": "Wed, 31 Jan 2024 21:46:10 GMT", "version": "v2" }, { "created": "Wed, 12 Jun 2024 06:18:04 GMT", "version": "v3" } ]
2024-07-01
[ [ "Scorzato", "Luigi", "" ] ]
2401.07489
Hussam Alhussein Dr.
Hussam Alhussein, Mohammed Daqaq
The Principle of Minimum Pressure Gradient: An Alternative Basis for Physics-Informed Learning of Incompressible Fluid Mechanics
null
AIP Advances. 14 (2024) 045112
10.1063/5.0197860
null
physics.flu-dyn cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recent advances in the application of physics-informed learning to the field of fluid mechanics have been predominantly grounded in the Newtonian framework, primarily leveraging the Navier-Stokes equations or one of their various derivatives to train a neural network. Here, we propose an alternative approach based on variational methods. The proposed approach uses the principle of minimum pressure gradient combined with the continuity constraint to train a neural network and predict the flow field in incompressible fluids. We describe the underlying principles of the proposed approach, then use a demonstrative example to illustrate its implementation and show that it reduces the computational time per training epoch when compared to the conventional approach.
[ { "created": "Mon, 15 Jan 2024 06:12:22 GMT", "version": "v1" } ]
2024-04-25
[ [ "Alhussein", "Hussam", "" ], [ "Daqaq", "Mohammed", "" ] ]
2401.07582
Mamoona Shami Ms
Mamoona Birkhez Shami, Gabriel Kiss, Trond Arve Haakonsen, Frank Lindseth
Geo-locating Road Objects using Inverse Haversine Formula with NVIDIA Driveworks
null
Norsk IKT-konferanse for forskning og utdanning. No. 1. (2023)
null
null
cs.RO cs.CV
http://creativecommons.org/licenses/by/4.0/
Geolocation is integral to the seamless functioning of autonomous vehicles and advanced traffic monitoring infrastructures. This paper introduces a methodology to geolocate road objects using a monocular camera, leveraging the NVIDIA DriveWorks platform. We use the Centimeter Positioning Service (CPOS) and the inverse Haversine formula to geolocate road objects accurately. The real-time algorithm processing capability of the NVIDIA DriveWorks platform enables instantaneous object recognition and spatial localization for Advanced Driver Assistance Systems (ADAS) and autonomous driving platforms. We present a measurement pipeline suitable for autonomous driving (AD) platforms and provide detailed guidelines for calibrating cameras using NVIDIA DriveWorks. Experiments were carried out to validate the accuracy of the proposed method for geolocating targets in both controlled and dynamic settings. We show that our approach can locate targets with less than 1 m error when the AD platform is stationary and less than 4 m error at higher speeds (i.e. up to 60 km/h) within a 15 m radius.
[ { "created": "Mon, 15 Jan 2024 10:38:07 GMT", "version": "v1" } ]
2024-01-17
[ [ "Shami", "Mamoona Birkhez", "" ], [ "Kiss", "Gabriel", "" ], [ "Haakonsen", "Trond Arve", "" ], [ "Lindseth", "Frank", "" ] ]
2401.07856
Aydogan Ozcan
Bijie Bai, Ryan Lee, Yuhang Li, Tianyi Gan, Yuntian Wang, Mona Jarrahi, and Aydogan Ozcan
Information hiding cameras: optical concealment of object information into ordinary images
26 Pages, 8 Figures
Science Advances (2024)
10.1126/sciadv.adn9420
null
physics.optics cs.CV physics.app-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data protection methods like cryptography, despite being effective, inadvertently signal the presence of secret communication, thereby drawing undue attention. Here, we introduce an optical information hiding camera integrated with an electronic decoder, optimized jointly through deep learning. This information hiding-decoding system employs a diffractive optical processor as its front-end, which transforms and hides input images in the form of ordinary-looking patterns that deceive/mislead human observers. This information hiding transformation is valid for infinitely many combinations of secret messages, all of which are transformed into ordinary-looking output patterns, achieved all-optically through passive light-matter interactions within the optical processor. By processing these ordinary-looking output images, a jointly-trained electronic decoder neural network accurately reconstructs the original information hidden within the deceptive output pattern. We numerically demonstrated our approach by designing an information hiding diffractive camera along with a jointly-optimized convolutional decoder neural network. The efficacy of this system was demonstrated under various lighting conditions and noise levels, showing its robustness. We further extended this information hiding camera to multi-spectral operation, allowing the concealment and decoding of multiple images at different wavelengths, all performed simultaneously in a single feed-forward operation. The feasibility of our framework was also demonstrated experimentally using THz radiation. This optical encoder-electronic decoder-based co-design provides a novel information hiding camera interface that is both high-speed and energy-efficient, offering an intriguing solution for visual information security.
[ { "created": "Mon, 15 Jan 2024 17:37:27 GMT", "version": "v1" } ]
2024-06-13
[ [ "Bai", "Bijie", "" ], [ "Lee", "Ryan", "" ], [ "Li", "Yuhang", "" ], [ "Gan", "Tianyi", "" ], [ "Wang", "Yuntian", "" ], [ "Jarrahi", "Mona", "" ], [ "Ozcan", "Aydogan", "" ] ]
2401.07931
Paul K. Mandal
Paul K. Mandal, Cole Leo
Vertical Federated Image Segmentation
11 pages, 5 figures
IFIP International Conference on Artificial Intelligence Applications and Innovations (2024) (pp. 54-65)
10.1007/978-3-031-63223-5_5
null
cs.CV cs.AI cs.DC cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
With the popularization of AI solutions for image-based problems, there has been a growing concern for both data privacy and acquisition. In a large number of cases, information is located on separate data silos and it can be difficult for a developer to consolidate all of it in a fashion that is appropriate for machine learning model development. Alongside this, a portion of these localized data regions may not have access to a labelled ground truth. This indicates that they have the capacity to reach conclusions numerically, but are not able to assign classifications amid a lack of pertinent information. Such a determination is often negligible, especially when attempting to develop image-based solutions that often necessitate this capability. With this being the case, we propose an innovative vertical federated learning (VFL) model architecture that can operate under this common set of conditions. This is the first (and currently the only) implementation of a system that can work under the constraints of a VFL environment and perform image segmentation while maintaining nominal accuracies. We achieved this by utilizing an FCN that boasts the ability to operate on federates that lack labelled data and privately share their respective weights with a central server, which hosts the necessary features for classification. Tests were conducted on the CamVid dataset in order to determine the impact of the heavy feature compression required for the transfer of information between federates, as well as to reach nominal conclusions about the overall performance metrics when working under such constraints.
[ { "created": "Mon, 15 Jan 2024 19:47:14 GMT", "version": "v1" }, { "created": "Tue, 19 Mar 2024 17:07:40 GMT", "version": "v2" } ]
2024-09-26
[ [ "Mandal", "Paul K.", "" ], [ "Leo", "Cole", "" ] ]
2401.08003
Enrique Yeguas
Jos\'e M. Alcalde-Llergo, Enrique Yeguas-Bol\'ivar, Andrea Zingoni and Alejandro Fuerte-Jurado
Jewelry Recognition via Encoder-Decoder Models
6 pages, 5 figures, MetroXRAINE 2023 Conference
2023 IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), Milano, Italy, 2023, pp. 116-121
10.1109/MetroXRAINE58569.2023.10405609
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Jewelry recognition is a complex task due to the different styles and designs of accessories. Precise descriptions of the various accessories are something that today can only be produced by experts in the field of jewelry. In this work, we propose an approach for jewelry recognition using computer vision techniques and image captioning, trying to simulate this expert human behavior of analyzing accessories. The proposed methodology consists of using different image captioning models to detect the jewels in an image and generate a natural language description of the accessory. Then, this description is also utilized to classify the accessories at different levels of detail. The generated caption includes details such as the type of jewel, color, material, and design. To demonstrate the effectiveness of the proposed method in accurately recognizing different types of jewels, a dataset consisting of images of accessories belonging to jewelry stores in C\'ordoba (Spain) has been created. After testing the different image captioning architectures designed, the final model achieves a captioning accuracy of 95\%. The proposed methodology has the potential to be used in various applications such as jewelry e-commerce, inventory management or automatic jewel recognition to analyze people's tastes and social status.
[ { "created": "Mon, 15 Jan 2024 23:10:50 GMT", "version": "v1" } ]
2024-04-03
[ [ "Alcalde-Llergo", "José M.", "" ], [ "Yeguas-Bolívar", "Enrique", "" ], [ "Zingoni", "Andrea", "" ], [ "Fuerte-Jurado", "Alejandro", "" ] ]
2401.08008
Enrique Yeguas
Jos\'e M. Alcalde-Llergo, Carlos Garc\'ia-Mart\'inez, Manuel Vaquero-Abell\'an, Pilar Aparicio-Mart\'inez and Enrique Yeguas-Bol\'ivar
Analysing the Needs of Homeless People Using Feature Selection and Mining Association Rules
6 pages, 4 figures, 4 tables, MetroXRAINE 2022
2022 IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), Rome, Italy, 2022, pp. 568-573
10.1109/MetroXRAINE54828.2022.9967612
null
cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
Homelessness is a social and health problem with great repercussions in Europe. Many non-governmental organisations help homeless people by collecting and analysing large amounts of information about them. However, these tasks are not always easy to perform and can hinder the organisations' other duties. The SINTECH project was created to tackle this issue by proposing two different tools: a mobile application to quickly and easily collect data; and software based on artificial intelligence which obtains interesting information from the collected data. The first one has been distributed to some Spanish organisations, which are using it to conduct surveys of homeless people. The second tool implements different feature selection and association rule mining methods. These artificial intelligence techniques have allowed us to identify the most relevant features and some interesting association rules from previously collected homeless data.
[ { "created": "Mon, 15 Jan 2024 23:28:55 GMT", "version": "v1" } ]
2024-01-17
[ [ "Alcalde-Llergo", "José M.", "" ], [ "García-Martínez", "Carlos", "" ], [ "Vaquero-Abellán", "Manuel", "" ], [ "Aparicio-Martínez", "Pilar", "" ], [ "Yeguas-Bolívar", "Enrique", "" ] ]
2401.08099
Hancheng Zuo
Hancheng Zuo and Bernard Tiddeman
Inpainting Normal Maps for Lightstage data
8 pages, 4 figures, CGVC Conference, The Eurographics Association
Computer Graphics and Visual Computing (CGVC), 2023, pp. 45-52
10.2312/cgvc.20231190
null
cs.CV cs.AI cs.GR
http://creativecommons.org/licenses/by/4.0/
This study introduces a novel method for inpainting normal maps using a generative adversarial network (GAN). Normal maps, often derived from a lightstage, are crucial in performance capture but can have obscured areas due to movement (e.g., by arms, hair, or props). Inpainting fills these missing areas with plausible data. Our approach extends previous general image inpainting techniques, employing a bow tie-like generator network and a discriminator network, with alternating training phases. The generator aims to synthesize images aligning with the ground truth and deceive the discriminator, which differentiates between real and processed images. Periodically, the discriminator undergoes retraining to enhance its ability to identify processed images. Importantly, our method adapts to the unique characteristics of normal map data, necessitating modifications to the loss function. We utilize a cosine loss instead of mean squared error loss for generator training. Limited training data availability, even with synthetic datasets, demands significant augmentation, considering the specific nature of the input data. This includes appropriate image flipping and in-plane rotations to accurately alter normal vectors. Throughout training, we monitored key metrics such as average loss, Structural Similarity Index Measure (SSIM), and Peak Signal-to-Noise Ratio (PSNR) for the generator, along with average loss and accuracy for the discriminator. Our findings suggest that the proposed model effectively generates high-quality, realistic inpainted normal maps, suitable for performance capture applications. These results establish a foundation for future research, potentially involving more advanced networks and comparisons with inpainting of source images used to create the normal maps.
[ { "created": "Tue, 16 Jan 2024 03:59:07 GMT", "version": "v1" } ]
2024-01-17
[ [ "Zuo", "Hancheng", "" ], [ "Tiddeman", "Bernard", "" ] ]
2401.08103
Conrad Sanderson
Conrad Sanderson, Emma Schleiger, David Douglas, Petra Kuhnert, Qinghua Lu
Resolving Ethics Trade-offs in Implementing Responsible AI
null
IEEE Conference on Artificial Intelligence, 2024
10.1109/CAI59869.2024.00215
null
cs.CY cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While the operationalisation of high-level AI ethics principles into practical AI/ML systems has made progress, there is still a theory-practice gap in managing tensions between the underlying AI ethics aspects. We cover five approaches for addressing the tensions via trade-offs, ranging from rudimentary to complex. The approaches differ in the types of considered context, scope, methods for measuring contexts, and degree of justification. None of the approaches is likely to be appropriate for all organisations, systems, or applications. To address this, we propose a framework which consists of: (i) proactive identification of tensions, (ii) prioritisation and weighting of ethics aspects, (iii) justification and documentation of trade-off decisions. The proposed framework aims to facilitate the implementation of well-rounded AI/ML systems that are appropriate for potential regulatory requirements.
[ { "created": "Tue, 16 Jan 2024 04:14:23 GMT", "version": "v1" }, { "created": "Thu, 8 Feb 2024 02:12:19 GMT", "version": "v2" }, { "created": "Mon, 1 Apr 2024 06:50:45 GMT", "version": "v3" }, { "created": "Mon, 9 Sep 2024 05:34:48 GMT", "version": "v4" } ]
2024-09-10
[ [ "Sanderson", "Conrad", "" ], [ "Schleiger", "Emma", "" ], [ "Douglas", "David", "" ], [ "Kuhnert", "Petra", "" ], [ "Lu", "Qinghua", "" ] ]
2401.08194
Yuefeng Zhang
Yuefeng Zhang and Kai Lin
End-to-End Optimized Image Compression with the Frequency-Oriented Transform
25 pages, accepted by MVAP
Machine Vision and Applications,Volume 35, article number 27, (2024)
10.1007/s00138-023-01507-x
null
cs.CV cs.AI cs.MM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Image compression constitutes a significant challenge amidst the era of information explosion. Recent studies employing deep learning methods have demonstrated the superior performance of learning-based image compression methods over traditional codecs. However, an inherent challenge associated with these methods lies in their lack of interpretability. Following an analysis of the varying degrees of compression degradation across different frequency bands, we propose an end-to-end optimized image compression model facilitated by the frequency-oriented transform. The proposed end-to-end image compression model consists of four components: spatial sampling, frequency-oriented transform, entropy estimation, and frequency-aware fusion. The frequency-oriented transform separates the original image signal into distinct frequency bands, aligning with human-interpretable concepts. Leveraging the non-overlapping hypothesis, the model enables scalable coding through the selective transmission of arbitrary frequency components. Extensive experiments are conducted to demonstrate that our model outperforms all traditional codecs, including the next-generation standard H.266/VVC, on the MS-SSIM metric. Moreover, visual analysis tasks (i.e., object detection and semantic segmentation) are conducted to verify that the proposed compression method preserves semantic fidelity besides signal-level precision.
[ { "created": "Tue, 16 Jan 2024 08:16:10 GMT", "version": "v1" } ]
2024-05-07
[ [ "Zhang", "Yuefeng", "" ], [ "Lin", "Kai", "" ] ]
2401.08374
Miquel Espl\`a-Gomis
Miquel Espl\`a-Gomis, V\'ictor M. S\'anchez-Cartagena, Juan Antonio P\'erez-Ortiz, Felipe S\'anchez-Mart\'inez
Cross-lingual neural fuzzy matching for exploiting target-language monolingual corpora in computer-aided translation
null
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (pp. 7532-7543)
10.18653/v1/2022.emnlp-main.511
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computer-aided translation (CAT) tools based on translation memories (TMs) play a prominent role in the translation workflow of professional translators. However, the reduced availability of in-domain TMs, as compared to in-domain monolingual corpora, limits the adoption of such tools for a number of translation tasks. In this paper, we introduce a novel neural approach aimed at overcoming this limitation by exploiting not only TMs, but also in-domain target-language (TL) monolingual corpora, and still enabling a similar functionality to that offered by conventional TM-based CAT tools. Our approach relies on cross-lingual sentence embeddings to retrieve translation proposals from TL monolingual corpora, and on a neural model to estimate their post-editing effort. The paper presents an automatic evaluation of these techniques on four language pairs that shows that our approach can successfully exploit monolingual texts in a TM-based CAT environment, increasing the amount of useful translation proposals, and that our neural model for estimating the post-editing effort enables the combination of translation proposals obtained from monolingual corpora and from TMs in the usual way. A human evaluation performed on a single language pair confirms the results of the automatic evaluation and seems to indicate that the translation proposals retrieved with our approach are more useful than what the automatic evaluation shows.
[ { "created": "Tue, 16 Jan 2024 14:00:28 GMT", "version": "v1" } ]
2024-01-17
[ [ "Esplà-Gomis", "Miquel", "" ], [ "Sánchez-Cartagena", "Víctor M.", "" ], [ "Pérez-Ortiz", "Juan Antonio", "" ], [ "Sánchez-Martínez", "Felipe", "" ] ]
2401.08396
Qiao Jin
Qiao Jin, Fangyuan Chen, Yiliang Zhou, Ziyang Xu, Justin M. Cheung, Robert Chen, Ronald M. Summers, Justin F. Rousseau, Peiyun Ni, Marc J Landsman, Sally L. Baxter, Subhi J. Al'Aref, Yijia Li, Alex Chen, Josef A. Brejt, Michael F. Chiang, Yifan Peng, Zhiyong Lu
Hidden flaws behind expert-level accuracy of multimodal GPT-4 vision in medicine
null
npj Digital Medicine, 2024
10.1038/s41746-024-01185-7
null
cs.CV cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
Recent studies indicate that Generative Pre-trained Transformer 4 with Vision (GPT-4V) outperforms human physicians in medical challenge tasks. However, these evaluations primarily focused on the accuracy of multi-choice questions alone. Our study extends the current scope by conducting a comprehensive analysis of GPT-4V's rationales of image comprehension, recall of medical knowledge, and step-by-step multimodal reasoning when solving New England Journal of Medicine (NEJM) Image Challenges - an imaging quiz designed to test the knowledge and diagnostic capabilities of medical professionals. Evaluation results confirmed that GPT-4V performs comparably to human physicians regarding multi-choice accuracy (81.6% vs. 77.8%). GPT-4V also performs well in cases where physicians incorrectly answer, with over 78% accuracy. However, we discovered that GPT-4V frequently presents flawed rationales in cases where it makes the correct final choices (35.5%), most prominently in image comprehension (27.2%). Despite GPT-4V's high accuracy in multi-choice questions, our findings emphasize the necessity for further in-depth evaluations of its rationales before integrating such multimodal AI models into clinical workflows.
[ { "created": "Tue, 16 Jan 2024 14:41:20 GMT", "version": "v1" }, { "created": "Wed, 24 Jan 2024 17:12:51 GMT", "version": "v2" }, { "created": "Mon, 22 Apr 2024 23:04:41 GMT", "version": "v3" }, { "created": "Sat, 31 Aug 2024 23:51:14 GMT", "version": "v4" } ]
2024-09-04
[ [ "Jin", "Qiao", "" ], [ "Chen", "Fangyuan", "" ], [ "Zhou", "Yiliang", "" ], [ "Xu", "Ziyang", "" ], [ "Cheung", "Justin M.", "" ], [ "Chen", "Robert", "" ], [ "Summers", "Ronald M.", "" ], [ "Rousseau", "Justin F.", "" ], [ "Ni", "Peiyun", "" ], [ "Landsman", "Marc J", "" ], [ "Baxter", "Sally L.", "" ], [ "Al'Aref", "Subhi J.", "" ], [ "Li", "Yijia", "" ], [ "Chen", "Alex", "" ], [ "Brejt", "Josef A.", "" ], [ "Chiang", "Michael F.", "" ], [ "Peng", "Yifan", "" ], [ "Lu", "Zhiyong", "" ] ]
2401.08397
Enrico Magliano
Enrico Magliano, Alessio Carpegna, Alessadro Savino, Stefano Di Carlo
A Micro Architectural Events Aware Real-Time Embedded System Fault Injector
null
2024 IEEE 25th Latin American Test Symposium (LATS)
10.1109/LATS62223.2024.10534595
null
cs.AR cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
In contemporary times, the increasing complexity of systems poses significant challenges to the reliability, trustworthiness, and security of SACRES. Key issues include the susceptibility to phenomena such as instantaneous voltage spikes, electromagnetic interference, neutron strikes, and out-of-range temperatures. These factors can induce switch state changes in transistors, resulting in bit-flipping, soft errors, and transient corruption of stored data in memory. The occurrence of soft errors, in turn, may lead to system faults that can propel the system into a hazardous state. Particularly in critical sectors like automotive, avionics, or aerospace, such malfunctions can have real-world implications, potentially causing harm to individuals. This paper introduces a novel fault injector designed to facilitate the monitoring, aggregation, and examination of micro-architectural events. This is achieved by harnessing the microprocessor's PMU and the debugging interface, specifically focusing on ensuring the repeatability of fault injections. The fault injection methodology targets bit-flipping within the memory system, affecting CPU registers and RAM. The outcomes of these fault injections enable a thorough analysis of the impact of soft errors and establish a robust correlation between the identified faults and the essential timing predictability demanded by SACRES.
[ { "created": "Tue, 16 Jan 2024 14:41:20 GMT", "version": "v1" }, { "created": "Tue, 11 Jun 2024 08:44:00 GMT", "version": "v2" } ]
2024-06-21
[ [ "Magliano", "Enrico", "" ], [ "Carpegna", "Alessio", "" ], [ "Savino", "Alessadro", "" ], [ "Di Carlo", "Stefano", "" ] ]
2401.08458
Hyejun Jeong
Hyejun Jeong, Tai-Myoung Chung
Security and Privacy Issues and Solutions in Federated Learning for Digital Healthcare
null
International Conference on Future Data and Security Engineering (2022) 316-331
10.1007/978-981-19-8069-5_21
null
cs.CR cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
The advent of Federated Learning has enabled the creation of a high-performing model as if it had been trained on a considerable amount of data. A multitude of participants and a server cooperatively train a model without the need for data disclosure or collection. The healthcare industry, where security and privacy are paramount, can substantially benefit from this new learning paradigm, as data collection is no longer feasible due to stringent data policies. Nonetheless, unaddressed challenges and insufficient attack mitigation are hampering its adoption. Attack surfaces differ from traditional centralized learning in that the server and clients communicate between each round of training. In this paper, we thus present vulnerabilities, attacks, and defenses based on the widened attack surfaces, as well as suggest promising new research directions toward a more robust FL.
[ { "created": "Tue, 16 Jan 2024 16:07:53 GMT", "version": "v1" } ]
2024-01-17
[ [ "Jeong", "Hyejun", "" ], [ "Chung", "Tai-Myoung", "" ] ]
2401.08518
Philipp Erler
Philipp Erler and Lizeth Fuentes and Pedro Hermosilla and Paul Guerrero and Renato Pajarola and Michael Wimmer
PPSURF: Combining Patches and Point Convolutions for Detailed Surface Reconstruction
Published in Computer Graphics Forum (Jan 2024): https://onlinelibrary.wiley.com/doi/10.1111/cgf.15000
Computer Graphics Forum e15000, 2024
10.1111/cgf.15000
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
3D surface reconstruction from point clouds is a key step in areas such as content creation, archaeology, digital cultural heritage, and engineering. Current approaches either try to optimize a non-data-driven surface representation to fit the points, or learn a data-driven prior over the distribution of commonly occurring surfaces and how they correlate with potentially noisy point clouds. Data-driven methods enable robust handling of noise and typically either focus on a global or a local prior, which trade off robustness to noise on the global end against surface detail preservation on the local end. We propose PPSurf as a method that combines a global prior based on point convolutions and a local prior based on processing local point cloud patches. We show that this approach is robust to noise while recovering surface details more accurately than the current state-of-the-art. Our source code, pre-trained model and dataset are available at: https://github.com/cg-tuwien/ppsurf
[ { "created": "Tue, 16 Jan 2024 17:31:43 GMT", "version": "v1" }, { "created": "Thu, 8 Feb 2024 15:10:39 GMT", "version": "v2" } ]
2024-02-09
[ [ "Erler", "Philipp", "" ], [ "Fuentes", "Lizeth", "" ], [ "Hermosilla", "Pedro", "" ], [ "Guerrero", "Paul", "" ], [ "Pajarola", "Renato", "" ], [ "Wimmer", "Michael", "" ] ]
2401.08537
Dominic Widdows
Emily Gao, Dominic Widdows
Spatial Entity Resolution between Restaurant Locations and Transportation Destinations in Southeast Asia
null
6th International Conference on Geospatial Information Systems Theory, Applications, and Management. GISTAM 2020, Prague, Czech Republic, May 7-9, 2020
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
As a tech company, Grab has expanded from transportation to food delivery, aiming to serve Southeast Asia with hyperlocalized applications. Information about places as transportation destinations can help to improve our knowledge about places as restaurants, so long as the spatial entity resolution problem between these datasets can be solved. In this project, we attempted to recognize identical place entities from databases of Points-of-Interest (POI) and GrabFood restaurants, using their spatial and textual attributes, i.e., latitude, longitude, place name, and street address. Distance metrics were calculated for these attributes and fed to tree-based classifiers. POI-restaurant matching was conducted separately for Singapore, Philippines, Indonesia, and Malaysia. Experimental estimates demonstrate that a matching POI can be found for over 35% of restaurants in these countries. As part of these estimates, test datasets were manually created, and RandomForest, AdaBoost, Gradient Boosting, and XGBoost perform well, with most accuracy, precision, and recall scores close to or higher than 90% for matched vs. unmatched classification. To the authors' knowledge, there are no previous published scientific papers devoted to matching of spatial entities for the Southeast Asia region.
[ { "created": "Tue, 16 Jan 2024 17:59:54 GMT", "version": "v1" } ]
2024-01-17
[ [ "Gao", "Emily", "" ], [ "Widdows", "Dominic", "" ] ]
2401.08714
Enrique Yeguas
Alessia Bisio, Enrique Yeguas-Bol\'ivar, Pilar Aparicio-Mart\'inez, Mar\'ia Dolores Redel-Mac\'ias, Sara Pinzi, Stefano Rossi and Juri Taborri
Training program on sign language: social inclusion through Virtual Reality in ISENSE project
6 pages, 4 figures, MetroXRAINE 2023 Conference, ISENSE european project
2023 IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), Milano, Italy, 2023, pp. 104-109
10.1109/MetroXRAINE58569.2023.10405777
null
cs.HC cs.AI cs.CV cs.GR
http://creativecommons.org/licenses/by/4.0/
Structured hand gestures that incorporate visual motions and signs are used in sign language. Sign language is a valuable means of daily communication for individuals who are deaf or have speech impairments, but it remains rare among hearing people, and few are capable of understanding it. Within the academic context, parents and teachers play a crucial role in supporting deaf students from childhood by facilitating their learning of sign language. In recent years, among all the teaching tools useful for learning sign language, the use of Virtual Reality (VR) has increased, as it has been demonstrated to improve retention, memory and attention during the learning process. The ISENSE project has been created to assist students with deafness during their academic life by proposing different technological tools for teaching sign language to the hearing community in the academic context. As part of the ISENSE project, this work aims to develop an application for Spanish and Italian sign language recognition that exploits the VR environment to quickly and easily create a comprehensive database of signs and Artificial Intelligence (AI)-based software to accurately classify and recognize static and dynamic signs: from letters to sentences.
[ { "created": "Mon, 15 Jan 2024 20:40:46 GMT", "version": "v1" } ]
2024-04-03
[ [ "Bisio", "Alessia", "" ], [ "Yeguas-Bolívar", "Enrique", "" ], [ "Aparicio-Martínez", "Pilar", "" ], [ "Redel-Macías", "María Dolores", "" ], [ "Pinzi", "Sara", "" ], [ "Rossi", "Stefano", "" ], [ "Taborri", "Juri", "" ] ]
2401.08720
Gianmarco Roggiolani
Gianmarco Roggiolani, Federico Magistri, Tiziano Guadagnino, Jens Behley, Cyrill Stachniss
Unsupervised Pre-Training for 3D Leaf Instance Segmentation
8 pages, 7 images, RA-L
IEEE Robotics and Automation Letters (RA-L), vol. 8, pp. 7448-7455, 2023
10.1109/LRA.2023.3320018
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Crops for food, feed, fiber, and fuel are key natural resources for our society. Monitoring plants and measuring their traits is an important task in agriculture often referred to as plant phenotyping. Traditionally, this task is done manually, which is time- and labor-intensive. Robots can automate phenotyping providing reproducible and high-frequency measurements. Today's perception systems use deep learning to interpret these measurements, but require a substantial amount of annotated data to work well. Obtaining such labels is challenging as it often requires background knowledge on the side of the labelers. This paper addresses the problem of reducing the labeling effort required to perform leaf instance segmentation on 3D point clouds, which is a first step toward phenotyping in 3D. Separating all leaves allows us to count them and compute relevant traits such as their areas, lengths, and widths. We propose a novel self-supervised task-specific pre-training approach to initialize the backbone of a network for leaf instance segmentation. We also introduce a novel automatic postprocessing that considers the difficulty of correctly segmenting the points close to the stem, where the petioles of all the leaves overlap. The experiments presented in this paper suggest that our approach boosts the performance across all the investigated scenarios. We also evaluate the embeddings to assess the quality of the fully unsupervised approach and observe higher performance from our domain-specific postprocessing.
[ { "created": "Tue, 16 Jan 2024 08:11:08 GMT", "version": "v1" } ]
2024-01-18
[ [ "Roggiolani", "Gianmarco", "" ], [ "Magistri", "Federico", "" ], [ "Guadagnino", "Tiziano", "" ], [ "Behley", "Jens", "" ], [ "Stachniss", "Cyrill", "" ] ]
2401.08721
Idoia Berges
David Anton, Idoia Berges, Jes\'us Berm\'udez, Alfredo Go\~ni, Arantza Illarramendi
A Telerehabilitation System for the Selection, Evaluation and Remote Management of Therapies
null
Sensors 18(5): 1459 (2018)
10.3390/s18051459
null
cs.HC cs.AI
http://creativecommons.org/licenses/by/4.0/
Telerehabilitation systems that support physical therapy sessions anywhere can help save healthcare costs while also improving the quality of life of the users that need rehabilitation. The main contribution of this paper is to present, as a whole, all the features supported by the innovative Kinect-based Telerehabilitation System (KiReS). In addition to the functionalities provided by current systems, it handles two new ones that could be incorporated into them, in order to give a step forward towards a new generation of telerehabilitation systems. The knowledge extraction functionality handles knowledge about the physical therapy record of patients and treatment protocols described in an ontology, named TRHONT, to select the adequate exercises for the rehabilitation of patients. The teleimmersion functionality provides a convenient, effective and user-friendly experience when performing the telerehabilitation, through two-way real-time multimedia communication. The ontology contains about 2300 classes and 100 properties, and the system allows reliable transmission of Kinect video, depth, audio and skeleton data, being able to adapt to various network conditions. Moreover, the system has been tested with patients who suffered from shoulder disorders or total hip replacement.
[ { "created": "Tue, 16 Jan 2024 08:35:36 GMT", "version": "v1" } ]
2024-01-18
[ [ "Anton", "David", "" ], [ "Berges", "Idoia", "" ], [ "Bermúdez", "Jesús", "" ], [ "Goñi", "Alfredo", "" ], [ "Illarramendi", "Arantza", "" ] ]
2401.08732
Linfeng Ye
Linfeng Ye, Shayan Mohajer Hamidi, Renhao Tan, En-Hui Yang
Bayes Conditional Distribution Estimation for Knowledge Distillation Based on Conditional Mutual Information
32 pages, 19 figures, Published as a conference paper at ICLR 2024
International Conference on Learning Representations 2024 (ICLR)
null
null
cs.LG cs.CV cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
It is believed that in knowledge distillation (KD), the role of the teacher is to provide an estimate for the unknown Bayes conditional probability distribution (BCPD) to be used in the student training process. Conventionally, this estimate is obtained by training the teacher using the maximum log-likelihood (MLL) method. To improve this estimate for KD, in this paper we introduce the concept of conditional mutual information (CMI) into the estimation of BCPD and propose a novel estimator called the maximum CMI (MCMI) method. Specifically, in MCMI estimation, both the log-likelihood and CMI of the teacher are simultaneously maximized when the teacher is trained. Through Eigen-CAM, it is further shown that maximizing the teacher's CMI value allows the teacher to capture more contextual information in an image cluster. Via conducting a thorough set of experiments, we show that by employing a teacher trained via MCMI estimation rather than one trained via MLL estimation in various state-of-the-art KD frameworks, the student's classification accuracy consistently increases, with gains of up to 3.32\%. This suggests that the teacher's BCPD estimate provided by the MCMI method is more accurate than that provided by the MLL method. In addition, we show that such improvements in the student's accuracy are more drastic in zero-shot and few-shot settings. Notably, the student's accuracy increases with a gain of up to 5.72\% when 5\% of the training samples are available to the student (few-shot), and increases from 0\% to as high as 84\% for an omitted class (zero-shot). The code is available at \url{https://github.com/iclr2024mcmi/ICLRMCMI}.
[ { "created": "Tue, 16 Jan 2024 16:01:37 GMT", "version": "v1" }, { "created": "Thu, 7 Mar 2024 22:57:25 GMT", "version": "v2" } ]
2024-03-11
[ [ "Ye", "Linfeng", "" ], [ "Hamidi", "Shayan Mohajer", "" ], [ "Tan", "Renhao", "" ], [ "Yang", "En-Hui", "" ] ]
2401.08840
Sudarshan Devkota
Sudarshan Devkota, Sumanta Pattanaik
Efficient Neural Representation of Volumetric Data using Coordinate-Based Networks
null
Computer Graphics Forum (2023), 42: e14955
10.1111/cgf.14955
null
cs.CV cs.GR
http://creativecommons.org/licenses/by/4.0/
In this paper, we propose an efficient approach for the compression and representation of volumetric data utilizing coordinate-based networks and multi-resolution hash encoding. Efficient compression of volumetric data is crucial for various applications, such as medical imaging and scientific simulations. Our approach enables effective compression by learning a mapping between spatial coordinates and intensity values. We compare different encoding schemes and demonstrate the superiority of multi-resolution hash encoding in terms of compression quality and training efficiency. Furthermore, we leverage optimization-based meta-learning, specifically using the Reptile algorithm, to learn weight initialization for neural representations tailored to volumetric data, enabling faster convergence during optimization. Additionally, we compare our approach with state-of-the-art methods to showcase improved image quality and compression ratios. These findings highlight the potential of coordinate-based networks and multi-resolution hash encoding for an efficient and accurate representation of volumetric data, paving the way for advancements in large-scale data visualization and other applications.
[ { "created": "Tue, 16 Jan 2024 21:33:01 GMT", "version": "v1" } ]
2024-01-18
[ [ "Devkota", "Sudarshan", "" ], [ "Pattanaik", "Sumanta", "" ] ]
2401.08923
Aydogan Ozcan
Jingtian Hu, Kun Liao, Niyazi Ulas Dinc, Carlo Gigli, Bijie Bai, Tianyi Gan, Xurong Li, Hanlong Chen, Xilin Yang, Yuhang Li, Cagatay Isil, Md Sadman Sakib Rahman, Jingxi Li, Xiaoyong Hu, Mona Jarrahi, Demetri Psaltis, and Aydogan Ozcan
Subwavelength Imaging using a Solid-Immersion Diffractive Optical Processor
32 Pages, 9 Figures
eLight (2024)
10.1186/s43593-024-00067-5
null
physics.optics cs.CV physics.app-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phase imaging is widely used in biomedical imaging, sensing, and material characterization, among other fields. However, direct imaging of phase objects with subwavelength resolution remains a challenge. Here, we demonstrate subwavelength imaging of phase and amplitude objects based on all-optical diffractive encoding and decoding. To resolve subwavelength features of an object, the diffractive imager uses a thin, high-index solid-immersion layer to transmit high-frequency information of the object to a spatially-optimized diffractive encoder, which converts/encodes high-frequency information of the input into low-frequency spatial modes for transmission through air. The subsequent diffractive decoder layers (in air) are jointly designed with the encoder using deep-learning-based optimization, and communicate with the encoder layer to create magnified images of input objects at its output, revealing subwavelength features that would otherwise be washed away due to the diffraction limit. We demonstrate that this all-optical collaboration between a diffractive solid-immersion encoder and the following decoder layers in air can resolve subwavelength phase and amplitude features of input objects in a highly compact design. To experimentally demonstrate its proof-of-concept, we used terahertz radiation and developed a fabrication method for creating monolithic multi-layer diffractive processors. Through these monolithically fabricated diffractive encoder-decoder pairs, we demonstrated phase-to-intensity transformations and all-optically reconstructed subwavelength phase features of input objects by directly transforming them into magnified intensity features at the output. This solid-immersion-based diffractive imager, with its compact and cost-effective design, can find wide-ranging applications in bioimaging, endoscopy, sensing and materials characterization.
[ { "created": "Wed, 17 Jan 2024 02:12:57 GMT", "version": "v1" } ]
2024-06-14
[ [ "Hu", "Jingtian", "" ], [ "Liao", "Kun", "" ], [ "Dinc", "Niyazi Ulas", "" ], [ "Gigli", "Carlo", "" ], [ "Bai", "Bijie", "" ], [ "Gan", "Tianyi", "" ], [ "Li", "Xurong", "" ], [ "Chen", "Hanlong", "" ], [ "Yang", "Xilin", "" ], [ "Li", "Yuhang", "" ], [ "Isil", "Cagatay", "" ], [ "Rahman", "Md Sadman Sakib", "" ], [ "Li", "Jingxi", "" ], [ "Hu", "Xiaoyong", "" ], [ "Jarrahi", "Mona", "" ], [ "Psaltis", "Demetri", "" ], [ "Ozcan", "Aydogan", "" ] ]
2401.09008
Sulthan Rafif
Sulthan Rafif, Mochamad Arfan Ravy Wahyu Pratama, Mohammad Faris Azhar, Ahmad Mustafidul Ibad, Lailil Muflikhah, Novanto Yudistira
Hybrid of DiffStride and Spectral Pooling in Convolutional Neural Networks
null
CSIAM Transactions on Applied Mathematics; R. Riad et al, "Learning strides in convolutional neural networks," pp. 1-16, 2022. [Online];
10.1145/3626641.3626930
null
cs.CV cs.AI
http://creativecommons.org/publicdomain/zero/1.0/
Stride determines the distance between adjacent filter positions as the filter moves across the input. A fixed stride can prevent important information contained in the image from being captured, so that it is lost to classification. To address this, previous research applied the DiffStride method, a strided convolution method that can learn its own stride values. Meanwhile, the max pooling downsampling method suffers from severe quantization and a constraining lower bound on the preserved information. Spectral pooling relaxes this lower bound by cutting off the representation in the frequency domain. In this research, a CNN model is proposed that combines the learnable-stride downsampling technique, trained by backpropagation, with the spectral pooling technique. The DiffStride and spectral pooling techniques are expected to retain most of the information contained in the image. In this study, we compare the hybrid method, a combined implementation of spectral pooling and DiffStride, against the baseline method, the DiffStride implementation on ResNet-18. The accuracy of the combination of DiffStride with spectral pooling improves over the DiffStride baseline by 0.0094. This shows that the hybrid method can retain most of the information by cutting off the representation in the frequency domain while learning the stride through backpropagation.
[ { "created": "Wed, 17 Jan 2024 07:06:56 GMT", "version": "v1" } ]
2024-01-18
[ [ "Rafif", "Sulthan", "" ], [ "Pratama", "Mochamad Arfan Ravy Wahyu", "" ], [ "Azhar", "Mohammad Faris", "" ], [ "Ibad", "Ahmad Mustafidul", "" ], [ "Muflikhah", "Lailil", "" ], [ "Yudistira", "Novanto", "" ] ]
2401.09057
Yunze Liu
Yunze Liu, Changxi Chen, Zifan Wang, Li Yi
CrossVideo: Self-supervised Cross-modal Contrastive Learning for Point Cloud Video Understanding
null
ICRA2024
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a novel approach named CrossVideo, which aims to enhance self-supervised cross-modal contrastive learning in the field of point cloud video understanding. Traditional supervised learning methods encounter limitations due to data scarcity and challenges in label acquisition. To address these issues, we propose a self-supervised learning method that leverages the cross-modal relationship between point cloud videos and image videos to acquire meaningful feature representations. Intra-modal and cross-modal contrastive learning techniques are employed to facilitate effective comprehension of point cloud video. We also propose a multi-level contrastive approach for both modalities. Through extensive experiments, we demonstrate that our method significantly surpasses previous state-of-the-art approaches, and we conduct comprehensive ablation studies to validate the effectiveness of our proposed designs.
[ { "created": "Wed, 17 Jan 2024 08:46:47 GMT", "version": "v1" } ]
2024-01-30
[ [ "Liu", "Yunze", "" ], [ "Chen", "Changxi", "" ], [ "Wang", "Zifan", "" ], [ "Yi", "Li", "" ] ]
2401.09109
Johannes Theodoridis
Johannes Theodoridis, Jessica Hofmann, Johannes Maucher, Andreas Schilling
Trapped in texture bias? A large scale comparison of deep instance segmentation
Accepted at ECCV 2022. Code: https://github.com/JohannesTheo/trapped-in-texture-bias
ECCV 2022 17th European Conference, Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part VIII. Springer-Verlag, Berlin, Heidelberg, 609-627
10.1007/978-3-031-20074-8_35
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Do deep learning models for instance segmentation generalize to novel objects in a systematic way? For classification, such behavior has been questioned. In this study, we aim to understand if certain design decisions such as framework, architecture or pre-training contribute to the semantic understanding of instance segmentation. To answer this question, we consider a special case of robustness and compare pre-trained models on a challenging benchmark for object-centric, out-of-distribution texture. We do not introduce another method in this work. Instead, we take a step back and evaluate a broad range of existing literature. This includes Cascade and Mask R-CNN, Swin Transformer, BMask, YOLACT(++), DETR, BCNet, SOTR and SOLOv2. We find that YOLACT++, SOTR and SOLOv2 are significantly more robust to out-of-distribution texture than other frameworks. In addition, we show that deeper and dynamic architectures improve robustness whereas training schedules, data augmentation and pre-training have only a minor impact. In summary we evaluate 68 models on 61 versions of MS COCO for a total of 4148 evaluations.
[ { "created": "Wed, 17 Jan 2024 10:21:08 GMT", "version": "v1" } ]
2024-01-18
[ [ "Theodoridis", "Johannes", "" ], [ "Hofmann", "Jessica", "" ], [ "Maucher", "Johannes", "" ], [ "Schilling", "Andreas", "" ] ]
2401.09245
Jan K\"uchler
Jan K\"uchler (1), Daniel Kr\"oll (1), Sebastian Schoenen (1), Andreas Witte (1) ((1) ControlExpert GmbH, Langenfeld, Germany)
Uncertainty estimates for semantic segmentation: providing enhanced reliability for automated motor claims handling
11 pages, 10 figures, 3 tables
Machine Vision and Applications 35, 66 (2024)
10.1007/s00138-024-01541-3
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Deep neural network models for image segmentation can be a powerful tool for the automation of motor claims handling processes in the insurance industry. A crucial aspect is the reliability of the model outputs when facing adverse conditions, such as low quality photos taken by claimants to document damages. We explore the use of a meta-classification model to empirically assess the precision of segments predicted by a model trained for the semantic segmentation of car body parts. Different sets of features correlated with the quality of a segment are compared, and an AUROC score of 0.915 is achieved for distinguishing between high- and low-quality segments. By removing low-quality segments, the average mIoU of the segmentation output is improved by 16 percentage points and the number of wrongly predicted segments is reduced by 77%.
[ { "created": "Wed, 17 Jan 2024 14:47:26 GMT", "version": "v1" }, { "created": "Fri, 17 May 2024 08:05:18 GMT", "version": "v2" } ]
2024-05-20
[ [ "Küchler", "Jan", "", "ControlExpert GmbH, Langenfeld, Germany" ], [ "Kröll", "Daniel", "", "ControlExpert GmbH, Langenfeld, Germany" ], [ "Schoenen", "Sebastian", "", "ControlExpert GmbH, Langenfeld, Germany" ], [ "Witte", "Andreas", "", "ControlExpert GmbH, Langenfeld, Germany" ] ]
2401.09252
Thiago L. T. da Silveira
Thiago Lopes Trugillo da Silveira, Paulo Gamarra Lessa Pinto, Jeffri Erwin Murrugarra Llerena, Claudio Rosito Jung
3D Scene Geometry Estimation from 360$^\circ$ Imagery: A Survey
Published in ACM Computing Surveys
ACM Comput. Surv. 55, 4, Article 68, 2023
10.1145/3519021
null
cs.CV cs.AI cs.GR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper provides a comprehensive survey on pioneer and state-of-the-art 3D scene geometry estimation methodologies based on single, two, or multiple images captured under the omnidirectional optics. We first revisit the basic concepts of the spherical camera model, and review the most common acquisition technologies and representation formats suitable for omnidirectional (also called 360$^\circ$, spherical or panoramic) images and videos. We then survey monocular layout and depth inference approaches, highlighting the recent advances in learning-based solutions suited for spherical data. The classical stereo matching is then revised on the spherical domain, where methodologies for detecting and describing sparse and dense features become crucial. The stereo matching concepts are then extrapolated for multiple view camera setups, categorizing them among light fields, multi-view stereo, and structure from motion (or visual simultaneous localization and mapping). We also compile and discuss commonly adopted datasets and figures of merit indicated for each purpose and list recent results for completeness. We conclude this paper by pointing out current and future trends.
[ { "created": "Wed, 17 Jan 2024 14:57:27 GMT", "version": "v1" } ]
2024-01-18
[ [ "da Silveira", "Thiago Lopes Trugillo", "" ], [ "Pinto", "Paulo Gamarra Lessa", "" ], [ "Llerena", "Jeffri Erwin Murrugarra", "" ], [ "Jung", "Claudio Rosito", "" ] ]
2401.09428
Eric L. Wisotzky
Eric L. Wisotzky and Jost Triller and Anna Hilsmann and Peter Eisert
Multispectral Stereo-Image Fusion for 3D Hyperspectral Scene Reconstruction
VISAPP 2024 - 19th International Conference on Computer Vision Theory and Applications
In Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 3: VISAPP; ISBN 978-989-758-679-8, SciTePress, pages 88-99, 2024
10.5220/0012354400003660
null
eess.IV cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spectral imaging enables the analysis of optical material properties that are invisible to the human eye. Different spectral capturing setups, e.g., based on filter-wheel, push-broom, line-scanning, or mosaic cameras, have been introduced in the last years to support a wide range of applications in agriculture, medicine, and industrial surveillance. However, these systems often suffer from different disadvantages, such as lack of real-time capability, limited spectral coverage or low spatial resolution. To address these drawbacks, we present a novel approach combining two calibrated multispectral real-time capable snapshot cameras, covering different spectral ranges, into a stereo-system. Therefore, a hyperspectral data-cube can be continuously captured. The combined use of different multispectral snapshot cameras enables both 3D reconstruction and spectral analysis. Both captured images are demosaicked avoiding spatial resolution loss. We fuse the spectral data from one camera into the other to receive a spatially and spectrally high resolution video stream. Experiments demonstrate the feasibility of this approach and the system is investigated with regard to its applicability for surgical assistance monitoring.
[ { "created": "Fri, 15 Dec 2023 13:20:35 GMT", "version": "v1" } ]
2024-10-01
[ [ "Wisotzky", "Eric L.", "" ], [ "Triller", "Jost", "" ], [ "Hilsmann", "Anna", "" ], [ "Eisert", "Peter", "" ] ]
2401.09450
Lars Ole Schwen
Norman Zerbe, Lars Ole Schwen, Christian Gei{\ss}ler, Katja Wiesemann, Tom Bisson, Peter Boor, Rita Carvalho, Michael Franz, Christoph Jansen, Tim-Rasmus Kiehl, Bj\"orn Lindequist, Nora Charlotte Pohlan, Sarah Schmell, Klaus Strohmenger, Falk Zakrzewski, Markus Plass, Michael Takla, Tobias K\"uster, Andr\'e Homeyer, Peter Hufnagl
Joining Forces for Pathology Diagnostics with AI Assistance: The EMPAIA Initiative
null
Journal of Pathology Informatics 2024
10.1016/j.jpi.2024.100387
null
cs.CY cs.AI cs.CV cs.HC
http://creativecommons.org/licenses/by/4.0/
Over the past decade, artificial intelligence (AI) methods in pathology have advanced substantially. However, integration into routine clinical practice has been slow due to numerous challenges, including technical and regulatory hurdles in translating research results into clinical diagnostic products and the lack of standardized interfaces. The open and vendor-neutral EMPAIA initiative addresses these challenges. Here, we provide an overview of EMPAIA's achievements and lessons learned. EMPAIA integrates various stakeholders of the pathology AI ecosystem, i.e., pathologists, computer scientists, and industry. In close collaboration, we developed technical interoperability standards, recommendations for AI testing and product development, and explainability methods. We implemented the modular and open-source EMPAIA platform and successfully integrated 14 AI-based image analysis apps from 8 different vendors, demonstrating how different apps can use a single standardized interface. We prioritized requirements and evaluated the use of AI in real clinical settings with 14 different pathology laboratories in Europe and Asia. In addition to technical developments, we created a forum for all stakeholders to share information and experiences on digital pathology and AI. Commercial, clinical, and academic stakeholders can now adopt EMPAIA's common open-source interfaces, providing a unique opportunity for large-scale standardization and streamlining of processes. Further efforts are needed to effectively and broadly establish AI assistance in routine laboratory use. To this end, a sustainable infrastructure, the non-profit association EMPAIA International, has been established to continue standardization and support broad implementation and advocacy for an AI-assisted digital pathology future.
[ { "created": "Fri, 22 Dec 2023 11:15:16 GMT", "version": "v1" }, { "created": "Tue, 16 Apr 2024 07:35:41 GMT", "version": "v2" } ]
2024-06-03
[ [ "Zerbe", "Norman", "" ], [ "Schwen", "Lars Ole", "" ], [ "Geißler", "Christian", "" ], [ "Wiesemann", "Katja", "" ], [ "Bisson", "Tom", "" ], [ "Boor", "Peter", "" ], [ "Carvalho", "Rita", "" ], [ "Franz", "Michael", "" ], [ "Jansen", "Christoph", "" ], [ "Kiehl", "Tim-Rasmus", "" ], [ "Lindequist", "Björn", "" ], [ "Pohlan", "Nora Charlotte", "" ], [ "Schmell", "Sarah", "" ], [ "Strohmenger", "Klaus", "" ], [ "Zakrzewski", "Falk", "" ], [ "Plass", "Markus", "" ], [ "Takla", "Michael", "" ], [ "Küster", "Tobias", "" ], [ "Homeyer", "André", "" ], [ "Hufnagl", "Peter", "" ] ]
2401.09479
Rahul Vishwakarma
Rahul Vishwakarma, Amin Rezaei
Uncertainty-Aware Hardware Trojan Detection Using Multimodal Deep Learning
2024 Design, Automation and Test in Europe Conference | The European Event for Electronic System Design & Test (accepted)
2024 Design, Automation and Test in Europe Conference | The European Event for Electronic System Design & Test
null
null
cs.CR cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
The risk of hardware Trojans being inserted at various stages of chip production has increased in a zero-trust fabless era. To counter this, various machine learning solutions have been developed for the detection of hardware Trojans. While most of the focus has been on either a statistical or deep learning approach, the limited number of Trojan-infected benchmarks affects the detection accuracy and restricts the possibility of detecting zero-day Trojans. To close the gap, we first employ generative adversarial networks to amplify our data in two alternative representation modalities, graph and tabular, ensuring that the dataset is distributed in a representative manner. Further, we propose a multimodal deep learning approach to detect hardware Trojans and evaluate the results from both early fusion and late fusion strategies. We also estimate the uncertainty quantification metrics of each prediction for risk-aware decision-making. The outcomes not only confirm the efficacy of our proposed hardware Trojan detection method but also open a new door for future studies employing multimodality and uncertainty quantification to address other hardware security challenges.
[ { "created": "Mon, 15 Jan 2024 05:45:51 GMT", "version": "v1" }, { "created": "Tue, 23 Jan 2024 07:04:18 GMT", "version": "v2" } ]
2024-01-24
[ [ "Vishwakarma", "Rahul", "" ], [ "Rezaei", "Amin", "" ] ]
2401.09489
Audrey Der
Audrey Der, Chin-Chia Michael Yeh, Yan Zheng, Junpeng Wang, Zhongfang Zhuang, Liang Wang, Wei Zhang, Eamonn J. Keogh
PUPAE: Intuitive and Actionable Explanations for Time Series Anomalies
9 Page Manuscript, 1 Page Supplementary (Supplement not published in conference proceedings.)
SIAM SDM 2024
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years there has been significant progress in time series anomaly detection. However, after detecting a (perhaps tentative) anomaly, can we explain it? Such explanations would be useful to triage anomalies. For example, in an oil refinery, should we respond to an anomaly by dispatching a hydraulic engineer, or an intern to replace the battery on a sensor? There have been some parallel efforts to explain anomalies; however, many proposed techniques produce explanations that are indirect, and often seem more complex than the anomaly they seek to explain. Our review of the literature/checklists/user-manuals used by frontline practitioners in various domains reveals an interesting near-universal commonality. Most practitioners discuss, explain and report anomalies in the following format: The anomaly would be like normal data A, if not for the corruption B. The reader will appreciate that this is a type of counterfactual explanation. In this work we introduce a domain agnostic counterfactual explanation technique to produce explanations for time series anomalies. As we will show, our method can produce both visual and text-based explanations that are objectively correct, intuitive, and, in many circumstances, directly actionable.
[ { "created": "Tue, 16 Jan 2024 20:13:46 GMT", "version": "v1" } ]
2024-01-19
[ [ "Der", "Audrey", "" ], [ "Yeh", "Chin-Chia Michael", "" ], [ "Zheng", "Yan", "" ], [ "Wang", "Junpeng", "" ], [ "Zhuang", "Zhongfang", "" ], [ "Wang", "Liang", "" ], [ "Zhang", "Wei", "" ], [ "Keogh", "Eamonn J.", "" ] ]
2401.09553
Shreya Rajpal
Shreya Rajpal (1,2), Ricardo Usbeck (1) ((1) Universit\"at Hamburg, Hamburg, Germany,(2) Vellore Institute of Technology, Vellore, Tamil Nadu, India)
BERTologyNavigator: Advanced Question Answering with BERT-based Semantics
Accepted in Scholarly QALD Challenge @ ISWC 2023
Joint Proceedings of Scholarly QALD 2023 and SemREC 2023 co-located with 22nd International Semantic Web Conference ISWC 2023. Athens, Greece, November 6-10, 2023
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
The development and integration of knowledge graphs and language models have significance in artificial intelligence and natural language processing. In this study, we introduce the BERTologyNavigator -- a two-phased system that combines relation extraction techniques and BERT embeddings to navigate the relationships within the DBLP Knowledge Graph (KG). Our approach focuses on extracting one-hop relations and labelled candidate pairs in the first phase. This is followed by employing BERT's CLS embeddings and additional heuristics for relation selection in the second phase. Our system reaches an F1 score of 0.2175 on the DBLP QuAD Final test dataset for Scholarly QALD and an F1 score of 0.98 on the subset of the DBLP QuAD test dataset during the QA phase.
[ { "created": "Wed, 17 Jan 2024 19:11:30 GMT", "version": "v1" } ]
2024-01-19
[ [ "Rajpal", "Shreya", "" ], [ "Usbeck", "Ricardo", "" ] ]
2401.09789
Idoia Berges
Idoia Berges, V\'ictor Julio Ram\'irez-Dur\'an, Arantza Illarramendi
A Semantic Approach for Big Data Exploration in Industry 4.0
Published version of paper: Idoia Berges, V\'ictor Julio Ram\'irez-Dur\'an, Arantza Illarramendi: A Semantic Approach for Big Data Exploration in Industry 4.0. Big Data Res. 25: 100222 (2021). DOI: 10.1016/j.bdr.2021.100222
Big Data Res. 25: 100222 (2021)
10.1016/j.bdr.2021.100222
null
cs.AI cs.DB
http://creativecommons.org/licenses/by-nc-nd/4.0/
The growing trends in automation, Internet of Things, big data and cloud computing technologies have led to the fourth industrial revolution (Industry 4.0), where it is possible to visualize and identify patterns and insights, which results in a better understanding of the data and can improve the manufacturing process. However, the task of data exploration often proves difficult for manufacturing experts because they might be interested in also analyzing data that does not appear in pre-designed visualizations, and therefore they must be assisted by Information Technology experts. In this paper, we present a proposal materialized in a semantic-based visual query system developed for a real Industry 4.0 scenario that allows domain experts to explore and visualize data in a friendly way. The main novelty of the system is the combined use that it makes of captured data that are semantically annotated first, and a 2D customized digital representation of a machine that is also linked with semantic descriptions. Those descriptions are expressed using terms of an ontology, where, among others, the sensors that are used to capture indicators about the performance of a machine that belongs to an Industry 4.0 scenario have been modeled. Moreover, this semantic description makes it possible to: formulate queries at a higher level of abstraction, provide customized graphical visualizations of the results based on the format and nature of the data, and download enriched data enabling further types of analysis.
[ { "created": "Thu, 18 Jan 2024 08:20:19 GMT", "version": "v1" } ]
2024-01-19
[ [ "Berges", "Idoia", "" ], [ "Ramírez-Durán", "Víctor Julio", "" ], [ "Illarramendi", "Arantza", "" ] ]
2401.09798
Kazuhiro Takemoto
Kazuhiro Takemoto
All in How You Ask for It: Simple Black-Box Method for Jailbreak Attacks
12 pages, 4 figures, 3 tables
Appl. Sci. 14, 3558 (2024)
10.3390/app14093558
null
cs.CL cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs), such as ChatGPT, encounter `jailbreak' challenges, wherein safeguards are circumvented to generate ethically harmful prompts. This study introduces a straightforward black-box method for efficiently crafting jailbreak prompts, addressing the significant complexity and computational costs associated with conventional methods. Our technique iteratively transforms harmful prompts into benign expressions directly utilizing the target LLM, predicated on the hypothesis that LLMs can autonomously generate expressions that evade safeguards. Through experiments conducted with ChatGPT (GPT-3.5 and GPT-4) and Gemini-Pro, our method consistently achieved an attack success rate exceeding 80% within an average of five iterations for forbidden questions and proved robust against model updates. The jailbreak prompts generated were not only naturally-worded and succinct but also challenging to defend against. These findings suggest that the creation of effective jailbreak prompts is less complex than previously believed, underscoring the heightened risk posed by black-box jailbreak attacks.
[ { "created": "Thu, 18 Jan 2024 08:36:54 GMT", "version": "v1" }, { "created": "Mon, 22 Jan 2024 06:22:55 GMT", "version": "v2" }, { "created": "Mon, 12 Feb 2024 02:29:28 GMT", "version": "v3" } ]
2024-04-25
[ [ "Takemoto", "Kazuhiro", "" ] ]
2401.09839
Ankan Mullick
Ankan Mullick, Akash Ghosh, G Sai Chaitanya, Samir Ghui, Tapas Nayak, Seung-Cheol Lee, Satadeep Bhattacharjee, Pawan Goyal
MatSciRE: Leveraging Pointer Networks to Automate Entity and Relation Extraction for Material Science Knowledge-base Construction
null
Computational Material Science 2023 (Elsevier)
10.1016/j.commatsci.2023.112659
null
cs.CL cs.CE cs.IR
http://creativecommons.org/publicdomain/zero/1.0/
Material science literature is a rich source of factual information about various categories of entities (like materials and compositions) and various relations between these entities, such as conductivity, voltage, etc. Automatically extracting this information to generate a material science knowledge base is a challenging task. In this paper, we propose MatSciRE (Material Science Relation Extractor), a Pointer Network-based encoder-decoder framework, to jointly extract entities and relations from material science articles as a triplet ($entity1, relation, entity2$). Specifically, we target the battery materials and identify five relations to work on - conductivity, coulombic efficiency, capacity, voltage, and energy. Our proposed approach achieved a much better F1-score (0.771) than a previous attempt using ChemDataExtractor (0.716). The overall graphical framework of MatSciRE is shown in Fig 1. The material information is extracted from material science literature in the form of entity-relation triplets using MatSciRE.
[ { "created": "Thu, 18 Jan 2024 09:54:18 GMT", "version": "v1" } ]
2024-01-19
[ [ "Mullick", "Ankan", "" ], [ "Ghosh", "Akash", "" ], [ "Chaitanya", "G Sai", "" ], [ "Ghui", "Samir", "" ], [ "Nayak", "Tapas", "" ], [ "Lee", "Seung-Cheol", "" ], [ "Bhattacharjee", "Satadeep", "" ], [ "Goyal", "Pawan", "" ] ]
2401.09870
Sao Mai Nguyen
Mehdi Zadem, Sergio Mover, Sao Mai Nguyen
Reconciling Spatial and Temporal Abstractions for Goal Representation
null
ICLR 2024
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Goal representation affects the performance of Hierarchical Reinforcement Learning (HRL) algorithms by decomposing the complex learning problem into easier subtasks. Recent studies show that representations that preserve temporally abstract environment dynamics are successful in solving difficult problems and provide theoretical guarantees for optimality. These methods however cannot scale to tasks where environment dynamics increase in complexity, i.e., where the temporally abstract transition relations depend on a larger number of variables. On the other hand, other efforts have tried to use spatial abstraction to mitigate the previous issues. Their limitations include poor scalability to high-dimensional environments and dependency on prior knowledge. In this paper, we propose a novel three-layer HRL algorithm that introduces, at different levels of the hierarchy, both a spatial and a temporal goal abstraction. We provide a theoretical study of the regret bounds of the learned policies. We evaluate the approach on complex continuous control tasks, demonstrating the effectiveness of spatial and temporal abstractions learned by this approach. Find open-source code at https://github.com/cosynus-lix/STAR.
[ { "created": "Thu, 18 Jan 2024 10:33:30 GMT", "version": "v1" }, { "created": "Sun, 30 Jun 2024 09:02:37 GMT", "version": "v2" } ]
2024-07-02
[ [ "Zadem", "Mehdi", "" ], [ "Mover", "Sergio", "" ], [ "Nguyen", "Sao Mai", "" ] ]
2401.09923
Guanxiong Sun
Guanxiong Sun, Yang Hua, Guosheng Hu, Neil Robertson
MAMBA: Multi-level Aggregation via Memory Bank for Video Object Detection
update code url https://github.com/guanxiongsun/vfe.pytorch
In Proceedings of the AAAI Conference on Artificial Intelligence 2021 (Vol. 35, No. 3, pp. 2620-2627)
10.1609/aaai.v35i3.16365
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
State-of-the-art video object detection methods maintain a memory structure, either a sliding window or a memory queue, to enhance the current frame using attention mechanisms. However, we argue that these memory structures are not efficient or sufficient because of two implied operations: (1) concatenating all features in memory for enhancement, leading to a heavy computational cost; (2) frame-wise memory updating, preventing the memory from capturing more temporal information. In this paper, we propose a multi-level aggregation architecture via memory bank called MAMBA. Specifically, our memory bank employs two novel operations to eliminate the disadvantages of existing methods: (1) light-weight key-set construction which can significantly reduce the computational cost; (2) fine-grained feature-wise updating strategy which enables our method to utilize knowledge from the whole video. To better enhance features from complementary levels, i.e., feature maps and proposals, we further propose a generalized enhancement operation (GEO) to aggregate multi-level features in a unified manner. We conduct extensive evaluations on the challenging ImageNetVID dataset. Compared with existing state-of-the-art methods, our method achieves superior performance in terms of both speed and accuracy. More remarkably, MAMBA achieves mAP of 83.7/84.6% at 12.6/9.1 FPS with ResNet-101. Code is available at https://github.com/guanxiongsun/vfe.pytorch.
[ { "created": "Thu, 18 Jan 2024 12:13:06 GMT", "version": "v1" }, { "created": "Thu, 1 Feb 2024 18:43:06 GMT", "version": "v2" } ]
2024-02-02
[ [ "Sun", "Guanxiong", "" ], [ "Hua", "Yang", "" ], [ "Hu", "Guosheng", "" ], [ "Robertson", "Neil", "" ] ]
2401.09942
Vladimir Somers
Amir M. Mansourian, Vladimir Somers, Christophe De Vleeschouwer, Shohreh Kasaei
Multi-task Learning for Joint Re-identification, Team Affiliation, and Role Classification for Sports Visual Tracking
null
Proceedings of the 6th International Workshop on Multimedia Content Analysis in Sports (MMSports 2023), October 29, 2023, Ottawa, ON, Canada
10.1145/3606038.3616172
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
Effective tracking and re-identification of players is essential for analyzing soccer videos. However, it is a challenging task due to the non-linear motion of players, the similarity in appearance of players from the same team, and frequent occlusions. Therefore, the ability to extract meaningful embeddings to represent players is crucial in developing an effective tracking and re-identification system. In this paper, a multi-purpose part-based person representation method, called PRTreID, is proposed that performs three tasks of role classification, team affiliation, and re-identification, simultaneously. In contrast to available literature, a single network is trained with multi-task supervision to solve all three tasks, jointly. The proposed joint method is computationally efficient due to the shared backbone. Also, the multi-task learning leads to richer and more discriminative representations, as demonstrated by both quantitative and qualitative results. To demonstrate the effectiveness of PRTreID, it is integrated with a state-of-the-art tracking method, using a part-based post-processing module to handle long-term tracking. The proposed tracking method outperforms all existing tracking methods on the challenging SoccerNet tracking dataset.
[ { "created": "Thu, 18 Jan 2024 12:45:14 GMT", "version": "v1" } ]
2024-01-19
[ [ "Mansourian", "Amir M.", "" ], [ "Somers", "Vladimir", "" ], [ "De Vleeschouwer", "Christophe", "" ], [ "Kasaei", "Shohreh", "" ] ]
2401.10129
Marcelo Saval Calvo
Alejandro Gal\'an-Cuenca, Antonio Javier Gallego, Marcelo Saval-Calvo, Antonio Pertusa
Few-shot learning for COVID-19 Chest X-Ray Classification with Imbalanced Data: An Inter vs. Intra Domain Study
Submited to Pattern Analysis and Applications
Pattern Anal Applic 27, 69 (2024)
10.1007/s10044-024-01285-w
null
eess.IV cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Medical image datasets are essential for training models used in computer-aided diagnosis, treatment planning, and medical research. However, some challenges are associated with these datasets, including variability in data distribution, data scarcity, and transfer learning issues when using models pre-trained on generic images. This work studies the effect of these challenges at the intra- and inter-domain level in few-shot learning scenarios with severe data imbalance. For this, we propose a methodology based on Siamese neural networks in which a series of techniques are integrated to mitigate the effects of data scarcity and distribution imbalance. Specifically, different initialization and data augmentation methods are analyzed, and four adaptations to Siamese networks of solutions to deal with imbalanced data are introduced, including data balancing and weighted loss, both separately and combined, and with a different balance of pairing ratios. Moreover, we also assess the inference process considering four classifiers, namely Histogram, $k$NN, SVM, and Random Forest. Evaluation is performed on three chest X-ray datasets with annotated cases of both positive and negative COVID-19 diagnoses. The accuracy of each technique proposed for the Siamese architecture is analyzed separately and their results are compared to those obtained using equivalent methods on a state-of-the-art CNN. We conclude that the introduced techniques offer promising improvements over the baseline in almost all cases, and that the selection of the technique may vary depending on the amount of data available and the level of imbalance.
[ { "created": "Thu, 18 Jan 2024 16:59:27 GMT", "version": "v1" } ]
2024-09-27
[ [ "Galán-Cuenca", "Alejandro", "" ], [ "Gallego", "Antonio Javier", "" ], [ "Saval-Calvo", "Marcelo", "" ], [ "Pertusa", "Antonio", "" ] ]
2401.10178
Zahra Babaiee
Zahra Babaiee, Peyman M. Kiasari, Daniela Rus, Radu Grosu
Neural Echos: Depthwise Convolutional Filters Replicate Biological Receptive Fields
null
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (2024) 8216-8225
null
null
cs.CV cs.AI cs.NE
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this study, we present evidence suggesting that depthwise convolutional kernels are effectively replicating the structural intricacies of the biological receptive fields observed in the mammalian retina. We provide analytics of trained kernels from various state-of-the-art models substantiating this evidence. Inspired by this intriguing discovery, we propose an initialization scheme that draws inspiration from the biological receptive fields. Experimental analysis of the ImageNet dataset with multiple CNN architectures featuring depthwise convolutions reveals a marked enhancement in the accuracy of the learned model when initialized with biologically derived weights. This underscores the potential for biologically inspired computational models to further our understanding of vision processing systems and to improve the efficacy of convolutional networks.
[ { "created": "Thu, 18 Jan 2024 18:06:22 GMT", "version": "v1" } ]
2024-01-19
[ [ "Babaiee", "Zahra", "" ], [ "Kiasari", "Peyman M.", "" ], [ "Rus", "Daniela", "" ], [ "Grosu", "Radu", "" ] ]
2401.10316
Hao-Ming Fu
Chu-Jen Shao, Hao-Ming Fu, Pu-Jen Cheng
Improving One-class Recommendation with Multi-tasking on Various Preference Intensities
RecSys 2020 (ACM Conference on Recommender Systems 2020)
RecSys 2020: Proceedings of the 14th ACM Conference on Recommender Systems, Pages 498 to 502
10.1145/3383313.3412224
null
cs.IR cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the one-class recommendation problem, recommendations must be made based on users' implicit feedback, which is inferred from their actions and inaction. Existing works obtain representations of users and items by encoding positive and negative interactions observed from training data. However, these efforts assume that all positive signals from implicit feedback reflect a fixed preference intensity, which is not realistic. Consequently, representations learned with these methods usually fail to capture informative entity features that reflect various preference intensities. In this paper, we propose a multi-tasking framework taking various preference intensities of each signal from implicit feedback into consideration. Representations of entities are required to satisfy the objective of each subtask simultaneously, making them more robust and generalizable. Furthermore, we incorporate attentive graph convolutional layers to explore high-order relationships in the user-item bipartite graph and dynamically capture the latent tendencies of users toward the items they interact with. Experimental results show that our method performs better than state-of-the-art methods by a large margin on three large-scale real-world benchmark datasets.
[ { "created": "Thu, 18 Jan 2024 18:59:55 GMT", "version": "v1" } ]
2024-01-22
[ [ "Shao", "Chu-Jen", "" ], [ "Fu", "Hao-Ming", "" ], [ "Cheng", "Pu-Jen", "" ] ]
2401.10487
Peiwen Yuan
Peiwen Yuan, Xinglin Wang, Shaoxiong Feng, Boyuan Pan, Yiwei Li, Heda Wang, Xupeng Miao, Kan Li
Generative Dense Retrieval: Memory Can Be a Burden
EACL 2024 main
EACL 2024 main
null
null
cs.IR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative Retrieval (GR), autoregressively decoding relevant document identifiers given a query, has been shown to perform well under the setting of small-scale corpora. By memorizing the document corpus with model parameters, GR implicitly achieves deep interaction between query and document. However, such a memorizing mechanism faces three drawbacks: (1) Poor memory accuracy for fine-grained features of documents; (2) Memory confusion gets worse as the corpus size increases; (3) Huge memory update costs for new documents. To alleviate these problems, we propose the Generative Dense Retrieval (GDR) paradigm. Specifically, GDR first uses the limited memory volume to achieve inter-cluster matching from query to relevant document clusters. A memorizing-free matching mechanism from Dense Retrieval (DR) is then introduced to conduct fine-grained intra-cluster matching from clusters to relevant documents. The coarse-to-fine process maximizes the advantages of GR's deep interaction and DR's scalability. In addition, we design a cluster identifier constructing strategy to facilitate corpus memory and a cluster-adaptive negative sampling strategy to enhance the intra-cluster mapping ability. Empirical results show that GDR obtains an average of 3.0 R@100 improvement on the NQ dataset under multiple settings and has better scalability.
[ { "created": "Fri, 19 Jan 2024 04:24:07 GMT", "version": "v1" } ]
2024-01-22
[ [ "Yuan", "Peiwen", "" ], [ "Wang", "Xinglin", "" ], [ "Feng", "Shaoxiong", "" ], [ "Pan", "Boyuan", "" ], [ "Li", "Yiwei", "" ], [ "Wang", "Heda", "" ], [ "Miao", "Xupeng", "" ], [ "Li", "Kan", "" ] ]
2401.10732
Nam Le
Nam Le, Honglei Zhang, Francesco Cricri, Ramin G. Youvalari, Hamed Rezazadegan Tavakoli, Emre Aksu, Miska M. Hannuksela, Esa Rahtu
Bridging the gap between image coding for machines and humans
null
IEEE International Conference on Image Processing (ICIP), Bordeaux, France, 2022, pp. 3411-3415
10.1109/ICIP46576.2022.9897916
null
eess.IV cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image coding for machines (ICM) aims at reducing the bitrate required to represent an image while minimizing the drop in machine vision analysis accuracy. In many use cases, such as surveillance, it is also important that the visual quality is not drastically deteriorated by the compression process. Recent works on using neural network (NN) based ICM codecs have shown significant coding gains against traditional methods; however, the decompressed images, especially at low bitrates, often contain checkerboard artifacts. We propose an effective decoder finetuning scheme based on adversarial training to significantly enhance the visual quality of ICM codecs, while preserving the machine analysis accuracy, without adding extra bitcost or parameters at the inference phase. The results show complete removal of the checkerboard artifacts at the negligible cost of -1.6% relative change in task performance score. In the cases where some amount of artifacts is tolerable, such as when machine consumption is the primary target, this technique can enhance both pixel-fidelity and feature-fidelity scores without losing task performance.
[ { "created": "Fri, 19 Jan 2024 14:49:56 GMT", "version": "v1" } ]
2024-01-22
[ [ "Le", "Nam", "" ], [ "Zhang", "Honglei", "" ], [ "Cricri", "Francesco", "" ], [ "Youvalari", "Ramin G.", "" ], [ "Tavakoli", "Hamed Rezazadegan", "" ], [ "Aksu", "Emre", "" ], [ "Hannuksela", "Miska M.", "" ], [ "Rahtu", "Esa", "" ] ]
2401.10786
Zuoyue Li
Zuoyue Li, Zhenqiang Li, Zhaopeng Cui, Marc Pollefeys, Martin R. Oswald
Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion
null
CVPR 2024
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Directly generating scenes from satellite imagery offers exciting possibilities for integration into applications like games and map services. However, challenges arise from significant view changes and scene scale. Previous efforts mainly focused on image or video generation, lacking exploration into the adaptability of scene generation for arbitrary views. Existing 3D generation works either operate at the object level or struggle to utilize the geometry obtained from satellite imagery. To overcome these limitations, we propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques. Specifically, our approach first generates texture colors at the point level for a given geometry using a 3D diffusion model, which is then transformed into a scene representation in a feed-forward manner. The representation can be utilized to render arbitrary views, excelling in both single-frame quality and inter-frame consistency. Experiments on two city-scale datasets show that our model demonstrates proficiency in generating photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
[ { "created": "Fri, 19 Jan 2024 16:15:37 GMT", "version": "v1" }, { "created": "Mon, 1 Apr 2024 14:53:00 GMT", "version": "v2" } ]
2024-04-02
[ [ "Li", "Zuoyue", "" ], [ "Li", "Zhenqiang", "" ], [ "Cui", "Zhaopeng", "" ], [ "Pollefeys", "Marc", "" ], [ "Oswald", "Martin R.", "" ] ]
2401.10840
Hong Qian
Junhao Shen and Hong Qian and Wei Zhang and Aimin Zhou
Symbolic Cognitive Diagnosis via Hybrid Optimization for Intelligent Education Systems
null
Published in AAAI 2024
null
null
cs.CY cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cognitive diagnosis assessment is a fundamental and crucial task for student learning. It models the student-exercise interaction, and discovers the students' proficiency levels on each knowledge attribute. In real-world intelligent education systems, generalization and interpretability of cognitive diagnosis methods are of equal importance. However, most existing methods can hardly make the best of both worlds due to the complicated student-exercise interaction. To this end, this paper proposes a symbolic cognitive diagnosis (SCD) framework to simultaneously enhance generalization and interpretability. The SCD framework incorporates the symbolic tree to explicably represent the complicated student-exercise interaction function, and utilizes gradient-based optimization methods to effectively learn the student and exercise parameters. Meanwhile, the accompanying challenge is that we need to bridge the discrete symbolic representation and continuous parameter optimization. To address this challenge, we propose to hybridly optimize the representation and parameters in an alternating manner. To fulfill SCD, the framework alternately learns the symbolic tree by derivative-free genetic programming and learns the student and exercise parameters via gradient-based Adam. The extensive experimental results on various real-world datasets show the superiority of SCD on both generalization and interpretability. The ablation study verifies the efficacy of each ingredient in SCD, and the case study explicitly showcases how the interpretable ability of SCD works.
[ { "created": "Sat, 30 Dec 2023 09:40:10 GMT", "version": "v1" } ]
2024-01-22
[ [ "Shen", "Junhao", "" ], [ "Qian", "Hong", "" ], [ "Zhang", "Wei", "" ], [ "Zhou", "Aimin", "" ] ]
2401.10917
Jos\'e Ra\'ul Romero
Jos\'e de la Torre-L\'opez and Aurora Ram\'irez and Jos\'e Ra\'ul Romero
Artificial intelligence to automate the systematic review of scientific literature
25 pages, 3 figures, 1 table, journal paper
Computing, Volume 105, pages 2171-2194, 2023
10.1007/s00607-023-01181-x
null
cs.IR cs.AI
http://creativecommons.org/licenses/by/4.0/
Artificial intelligence (AI) has acquired notorious relevance in modern computing as it effectively solves complex tasks traditionally done by humans. AI provides methods to represent and infer knowledge, efficiently manipulate texts and learn from vast amounts of data. These characteristics are applicable in many activities that humans find laborious or repetitive, as is the case of the analysis of scientific literature. Manually preparing and writing a systematic literature review (SLR) takes considerable time and effort, since it requires planning a strategy, conducting the literature search and analysis, and reporting the findings. Depending on the area under study, the number of papers retrieved can be in the hundreds or thousands, meaning that filtering the relevant ones and extracting the key information becomes a costly and error-prone process. However, some of the involved tasks are repetitive and, therefore, subject to automation by means of AI. In this paper, we present a survey of AI techniques proposed in the last 15 years to help researchers conduct systematic analyses of scientific literature. We describe the tasks currently supported, the types of algorithms applied, and available tools proposed in 34 primary studies. This survey also provides a historical perspective of the evolution of the field and the role that humans can play in an increasingly automated SLR process.
[ { "created": "Sat, 13 Jan 2024 19:12:49 GMT", "version": "v1" } ]
2024-01-23
[ [ "de la Torre-López", "José", "" ], [ "Ramírez", "Aurora", "" ], [ "Romero", "José Raúl", "" ] ]
2401.10926
Enrique Yeguas
Jos\'e M. Alcalde-Llergo, Enrique Yeguas-Bol\'ivar, Pilar Aparicio-Mart\'inez, Andrea Zingoni, Juri Taborri and Sara Pinzi
A VR Serious Game to Increase Empathy towards Students with Phonological Dyslexia
5 pages, 5 figures, MetroXRAINE 2023
2023 IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE), Milano, Italy, 2023, pp. 184-188
null
null
cs.HC cs.CV cs.GR
http://creativecommons.org/licenses/by/4.0/
Dyslexia is a neurodevelopmental disorder that is estimated to affect about 5-10% of the population. In particular, phonological dyslexia causes problems in connecting the sounds of words with their written forms. This results in difficulties such as slow reading speed, inaccurate reading, and difficulty decoding unfamiliar words. Moreover, dyslexia can also be a challenging and frustrating experience for students as they may feel misunderstood or stigmatized by their peers or educators. For these reasons, the use of compensatory tools and strategies is of crucial importance for dyslexic students to have the same opportunities as non-dyslexic ones. However, people generally underestimate the problem and are not aware of the importance of support methodologies. In the light of this, the main purpose of this paper is to propose a virtual reality (VR) serious game through which teachers, students and, in general, non-dyslexic people could understand some of the issues that students with dyslexia face and the fundamental utility of offering them support. In the game, players must create a potion by following a recipe written in an alphabet that is specifically designed to replicate the reading difficulties experienced by individuals with dyslexia. The task must be solved first without any help and then by receiving supporting tools and strategies, with the idea that players can put themselves in the place of a dyslexic person and understand the real need for support methodologies.
[ { "created": "Mon, 15 Jan 2024 23:47:23 GMT", "version": "v1" } ]
2024-01-23
[ [ "Alcalde-Llergo", "José M.", "" ], [ "Yeguas-Bolívar", "Enrique", "" ], [ "Aparicio-Martínez", "Pilar", "" ], [ "Zingoni", "Andrea", "" ], [ "Taborri", "Juri", "" ], [ "Pinzi", "Sara", "" ] ]
2401.10940
Hamed Mohammadshahi
Majid Ramezani, Hamed Mohammadshahi, Mahshid Daliry, Soroor Rahmani, Amir-Hosein Asghari
RELIANCE: Reliable Ensemble Learning for Information and News Credibility Evaluation
9 pages. Published in: 2024 20th CSI International Symposium on Artificial Intelligence and Signal Processing (AISP), IEEE, Babol, Iran, 21-22 February 2024, pp. 1-9. https://ieeexplore.ieee.org/document/10475305
2024 20th CSI International Symposium on Artificial Intelligence and Signal Processing (AISP) (2024) page 89
10.1109/AISP61396.2024.10475305
null
cs.IR cs.CL cs.LG cs.SI
http://creativecommons.org/licenses/by/4.0/
In the era of information proliferation, discerning the credibility of news content poses an ever-growing challenge. This paper introduces RELIANCE, a pioneering ensemble learning system designed for robust information and fake news credibility evaluation. Comprising five diverse base models, namely Support Vector Machine (SVM), naive Bayes, logistic regression, random forest, and Bidirectional Long Short-Term Memory networks (BiLSTMs), RELIANCE employs an innovative approach to integrate their strengths, harnessing the collective intelligence of the ensemble for enhanced accuracy. Experiments demonstrate the superiority of RELIANCE over individual models, indicating its efficacy in distinguishing between credible and non-credible information sources. RELIANCE also surpasses baseline models in information and news credibility assessment, establishing itself as an effective solution for evaluating the reliability of information sources.
[ { "created": "Wed, 17 Jan 2024 13:11:09 GMT", "version": "v1" }, { "created": "Sat, 20 Apr 2024 17:48:05 GMT", "version": "v2" } ]
2024-04-23
[ [ "Ramezani", "Majid", "" ], [ "Mohammadshahi", "Hamed", "" ], [ "Daliry", "Mahshid", "" ], [ "Rahmani", "Soroor", "" ], [ "Asghari", "Amir-Hosein", "" ] ]
2401.10965
Sascha Ossowski
Marin Lujak, Stefano Giordani, Andrea Omicini, Sascha Ossowski
Decentralizing Coordination in Open Vehicle Fleets for Scalable and Dynamic Task Allocation
null
Complexity, Volume 2020, Article ID 1047369
10.1155/2020/1047369
null
cs.MA cs.AI
http://creativecommons.org/licenses/by/4.0/
One of the major challenges in the coordination of large, open, collaborative, and commercial vehicle fleets is dynamic task allocation. Self-concerned, individually rational vehicle drivers have both local and global objectives, which require coordination using some fair and efficient task allocation method. In this paper, we review the literature on scalable and dynamic task allocation, focusing on deterministic and dynamic two-dimensional linear assignment problems. We focus on multiagent system representations of open vehicle fleets, where dynamically appearing vehicles are represented by software agents that should be allocated to a set of dynamically appearing tasks. We give a comparison and critical analysis of recent research results focusing on centralized, distributed, and decentralized solution approaches. Moreover, we propose mathematical models for dynamic versions of the following assignment problems well known in combinatorial optimization: the assignment problem, the bottleneck assignment problem, the fair matching problem, the dynamic minimum deviation assignment problem, the $\sum_{k}$-assignment problem, the semi-assignment problem, the assignment problem with side constraints, and the assignment problem while recognizing agent qualification; all while considering the main aspect of open vehicle fleets: the random arrival of tasks and of vehicles (agents), which may become available after servicing previous tasks or may participate in the fleet at times based on individual interest.
[ { "created": "Fri, 19 Jan 2024 12:47:27 GMT", "version": "v1" } ]
2024-01-23
[ [ "Lujak", "Marin", "" ], [ "Giordani", "Stefano", "" ], [ "Omicini", "Andrea", "" ], [ "Ossowski", "Sascha", "" ] ]
2401.11052
Luca Foppiano
Luca Foppiano, Guillaume Lambard, Toshiyuki Amagasa, Masashi Ishii
Mining experimental data from Materials Science literature with Large Language Models: an evaluation study
40 pages: 5 figures and 1 table in the body. 32 Tables in the Appendix / Supplementary materials
Science and Technology of Advanced Materials: Methods (2024)
10.1080/27660400.2024.2356506
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This study is dedicated to assessing the capabilities of large language models (LLMs) such as GPT-3.5-Turbo, GPT-4, and GPT-4-Turbo in extracting structured information from scientific documents in materials science. To this end, we primarily focus on two critical tasks of information extraction: (i) named entity recognition (NER) of studied materials and physical properties and (ii) relation extraction (RE) between these entities. Given the evident lack of datasets within Materials Informatics (MI), we carried out the evaluation using SuperMat, based on superconductor research, and MeasEval, a generic measurement evaluation corpus. The performance of LLMs in executing these tasks is benchmarked against traditional models based on the BERT architecture and rule-based approaches (baseline). We introduce a novel methodology for the comparative analysis of intricate material expressions, emphasising the standardisation of chemical formulas to tackle the complexities inherent in materials science information assessment. For NER, LLMs fail to outperform the baseline with zero-shot prompting and exhibit only limited improvement with few-shot prompting. However, a GPT-3.5-Turbo fine-tuned with the appropriate strategy for RE outperforms all models, including the baseline. Without any fine-tuning, GPT-4 and GPT-4-Turbo display remarkable reasoning and relationship extraction capabilities after being provided with merely a couple of examples, surpassing the baseline. Overall, the results suggest that although LLMs demonstrate relevant reasoning skills in connecting concepts, specialised models are currently a better choice for tasks requiring the extraction of complex domain-specific entities like materials. These insights provide initial guidance applicable to other materials science sub-domains in future work.
[ { "created": "Fri, 19 Jan 2024 23:00:31 GMT", "version": "v1" }, { "created": "Tue, 9 Apr 2024 07:32:37 GMT", "version": "v2" }, { "created": "Thu, 30 May 2024 20:28:08 GMT", "version": "v3" } ]
2024-06-03
[ [ "Foppiano", "Luca", "" ], [ "Lambard", "Guillaume", "" ], [ "Amagasa", "Toshiyuki", "" ], [ "Ishii", "Masashi", "" ] ]
2401.11218
Elena Chistova
Elena Chistova
End-to-End Argument Mining over Varying Rhetorical Structures
null
Findings of the Association for Computational Linguistics: ACL 2023, 3376-3391
10.18653/v1/2023.findings-acl.209
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rhetorical Structure Theory implies no single discourse interpretation of a text, and the limitations of RST parsers further exacerbate inconsistent parsing of similar structures. Therefore, it is important to take into account that the same argumentative structure can be found in semantically similar texts with varying rhetorical structures. In this work, the differences between paraphrases within the same argument scheme are evaluated from a rhetorical perspective. The study proposes a deep dependency parsing model to assess the connection between rhetorical and argument structures. The model utilizes rhetorical relations; RST structures of paraphrases serve as training data augmentations. The method allows for end-to-end argumentation analysis using a rhetorical tree instead of a word sequence. It is evaluated on the bilingual Microtexts corpus, and the first results on fully-fledged argument parsing for the Russian version of the corpus are reported. The results suggest that argument mining can benefit from multiple variants of discourse structure.
[ { "created": "Sat, 20 Jan 2024 12:00:40 GMT", "version": "v1" } ]
2024-01-23
[ [ "Chistova", "Elena", "" ] ]
2401.11268
Kamer Ali Yuksel
Golara Javadi, Kamer Ali Yuksel, Yunsu Kim, Thiago Castro Ferreira, Mohamed Al-Badrashiny
Word-Level ASR Quality Estimation for Efficient Corpus Sampling and Post-Editing through Analyzing Attentions of a Reference-Free Metric
null
2024 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2024), Seoul, Korea
null
null
cs.CL cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
In the realm of automatic speech recognition (ASR), the quest for models that not only perform with high accuracy but also offer transparency in their decision-making processes is crucial. The potential of quality estimation (QE) metrics is introduced and evaluated as a novel tool to enhance explainable artificial intelligence (XAI) in ASR systems. Through experiments and analyses, the capabilities of the NoRefER (No Reference Error Rate) metric are explored in identifying word-level errors to aid post-editors in refining ASR hypotheses. The investigation also extends to the utility of NoRefER in the corpus-building process, demonstrating its effectiveness in augmenting datasets with insightful annotations. The diagnostic aspects of NoRefER are examined, revealing its ability to provide valuable insights into model behaviors and decision patterns. This has proven beneficial for prioritizing hypotheses in post-editing workflows and fine-tuning ASR models. The findings suggest that NoRefER is not merely a tool for error detection but also a comprehensive framework for enhancing ASR systems' transparency, efficiency, and effectiveness. To ensure the reproducibility of the results, all source codes of this study are made publicly available.
[ { "created": "Sat, 20 Jan 2024 16:48:55 GMT", "version": "v1" }, { "created": "Fri, 2 Feb 2024 22:54:18 GMT", "version": "v2" } ]
2024-02-06
[ [ "Javadi", "Golara", "" ], [ "Yuksel", "Kamer Ali", "" ], [ "Kim", "Yunsu", "" ], [ "Ferreira", "Thiago Castro", "" ], [ "Al-Badrashiny", "Mohamed", "" ] ]
2401.11448
Jichang Li
Jichang Li, Guanbin Li, Yizhou Yu
Adaptive Betweenness Clustering for Semi-Supervised Domain Adaptation
16 pages, 9 figures, published to IEEE TIP
IEEE Transactions on Image Processing, vol. 32, pp. 5580-5594, October 2023
10.1109/TIP.2023.3319274
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compared to unsupervised domain adaptation, semi-supervised domain adaptation (SSDA) aims to significantly improve the classification performance and generalization capability of the model by leveraging the presence of a small amount of labeled data from the target domain. Several SSDA approaches have been developed to enable semantic-aligned feature confusion between labeled (or pseudo-labeled) samples across domains; nevertheless, owing to the scarcity of semantic label information in the target domain, they have struggled to fully realize their potential. In this study, we propose a novel SSDA approach named Graph-based Adaptive Betweenness Clustering (G-ABC) for achieving categorical domain alignment, which enables cross-domain semantic alignment by mandating semantic transfer from labeled data of both the source and target domains to unlabeled target samples. In particular, a heterogeneous graph is initially constructed to reflect the pairwise relationships between labeled samples from both domains and unlabeled ones of the target domain. Then, to reduce the noisy connectivity in the graph, connectivity refinement is conducted by introducing two strategies, namely Confidence Uncertainty based Node Removal and Prediction Dissimilarity based Edge Pruning. Once the graph has been refined, Adaptive Betweenness Clustering is introduced to facilitate semantic transfer by using across-domain betweenness clustering and within-domain betweenness clustering, thereby propagating semantic label information from labeled samples across domains to unlabeled target data. Extensive experiments on three standard benchmark datasets, namely DomainNet, Office-Home, and Office-31, indicate that our method outperforms previous state-of-the-art SSDA approaches, demonstrating the superiority of the proposed G-ABC algorithm.
[ { "created": "Sun, 21 Jan 2024 09:57:56 GMT", "version": "v1" } ]
2024-01-23
[ [ "Li", "Jichang", "" ], [ "Li", "Guanbin", "" ], [ "Yu", "Yizhou", "" ] ]
2401.11485
Param Hanji
Rafal K. Mantiuk, Param Hanji, Maliha Ashraf, Yuta Asano, Alexandre Chapiro
ColorVideoVDP: A visual difference predictor for image, video and display distortions
28 pages
SIGGRAPH 2024 Technical Papers, Article 129
10.1145/3658144
null
cs.CV cs.GR eess.IV
http://creativecommons.org/licenses/by/4.0/
ColorVideoVDP is a video and image quality metric that models spatial and temporal aspects of vision, for both luminance and color. The metric is built on novel psychophysical models of chromatic spatiotemporal contrast sensitivity and cross-channel contrast masking. It accounts for the viewing conditions and the geometric and photometric characteristics of the display. It was trained to predict common video streaming distortions (e.g. video compression, rescaling, and transmission errors), as well as 8 new distortion types related to AR/VR displays (e.g. light source and waveguide non-uniformities). To address the latter application, we collected our novel XR-Display-Artifact-Video quality dataset (XR-DAVID), comprising 336 distorted videos. Extensive testing on XR-DAVID, as well as on several datasets from the literature, indicates a significant gain in prediction performance compared to existing metrics. ColorVideoVDP opens the door to many novel applications that require the joint automated spatiotemporal assessment of luminance and color distortions, including video streaming, display specification and design, visual comparison of results, and perceptually-guided quality optimization.
[ { "created": "Sun, 21 Jan 2024 13:16:33 GMT", "version": "v1" }, { "created": "Tue, 2 Jul 2024 21:16:38 GMT", "version": "v2" } ]
2024-07-04
[ [ "Mantiuk", "Rafal K.", "" ], [ "Hanji", "Param", "" ], [ "Ashraf", "Maliha", "" ], [ "Asano", "Yuta", "" ], [ "Chapiro", "Alexandre", "" ] ]
2401.11553
Sascha Ossowski
Holger Billhardt, Alberto Fernández, Sascha Ossowski, Javier Palanca, Javier Bajo
Taxi dispatching strategies with compensations
null
Expert Systems with Applications, Volume 122 (2019)
10.1016/j.eswa.2019.01.001
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Urban mobility efficiency is of utmost importance in big cities. Taxi vehicles are key elements in daily traffic activity. The advance of ICT and geo-positioning systems has given rise to new opportunities for improving the efficiency of taxi fleets in terms of waiting times of passengers, cost and time for drivers, traffic density, CO2 emissions, etc., by using more informed, intelligent dispatching. Still, the explicit spatial and temporal components, as well as the scale and, in particular, the dynamicity of the problem of pairing passengers and taxis in big towns, render traditional approaches for solving the standard assignment problem useless for this purpose, and call for intelligent approximation strategies based on domain-specific heuristics. Furthermore, taxi drivers are often autonomous actors and may not agree to participate in assignments that, though globally efficient, may not be sufficiently beneficial for them individually. This paper presents a new heuristic algorithm for assigning taxis to customers that considers taxi reassignments if these may lead to globally better solutions. In addition, as such new assignments may reduce the expected revenues of individual drivers, we propose an economic compensation scheme to make individually rational drivers agree to the proposed modifications of their assigned clients. We carried out a set of experiments in which several commonly used assignment strategies are compared to three different instantiations of our heuristic algorithm. The results indicate that our proposal has the potential to reduce customer waiting times in fleets of autonomous taxis, while also being beneficial from an economic point of view.
[ { "created": "Sun, 21 Jan 2024 17:54:46 GMT", "version": "v1" } ]
2024-01-23
[ [ "Billhardt", "Holger", "" ], [ "Fernández", "Alberto", "" ], [ "Ossowski", "Sascha", "" ], [ "Palanca", "Javier", "" ], [ "Bajo", "Javier", "" ] ]
2401.11609
Maria Lymperaiou
Angeliki Dimitriou, Nikolaos Chaidos, Maria Lymperaiou, Giorgos Stamou
Graph Edits for Counterfactual Explanations: A comparative study
null
The World Conference on eXplainable Artificial Intelligence (XAI 2024)
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Counterfactuals have been established as a popular explainability technique that leverages a set of minimal edits to alter the prediction of a classifier. When considering conceptual counterfactuals on images, the edits requested should correspond to salient concepts present in the input data. At the same time, conceptual distances are defined by knowledge graphs, ensuring the optimality of conceptual edits. In this work, we extend previous endeavors on graph edits as counterfactual explanations by conducting a comparative study that encompasses both supervised and unsupervised Graph Neural Network (GNN) approaches. To this end, we pose the following significant research questions: should we represent input data as graphs, and which is the optimal GNN approach, in terms of performance and time efficiency, for generating minimal and meaningful counterfactual explanations for black-box image classifiers?
[ { "created": "Sun, 21 Jan 2024 22:11:29 GMT", "version": "v1" }, { "created": "Wed, 20 Mar 2024 19:12:28 GMT", "version": "v2" }, { "created": "Thu, 18 Apr 2024 14:29:29 GMT", "version": "v3" } ]
2024-05-06
[ [ "Dimitriou", "Angeliki", "" ], [ "Chaidos", "Nikolaos", "" ], [ "Lymperaiou", "Maria", "" ], [ "Stamou", "Giorgos", "" ] ]