Dataset schema (reconstructed from the viewer summary):

| column | dtype | observed range |
|---|---|---|
| id | string | lengths 10-10 |
| submitter | string | lengths 3-52 |
| authors | string | lengths 6-7.24k |
| title | string | lengths 12-217 |
| comments | string | lengths 1-446 |
| journal-ref | string | lengths 4-297 |
| doi | string | lengths 12-118 |
| report-no | string (categorical) | 237 distinct values |
| categories | string | lengths 5-71 |
| license | string (categorical) | 6 distinct values |
| abstract | string | lengths 90-3.26k |
| versions | list | lengths 1-17 |
| update_date | string (categorical) | 969 distinct values |
| authors_parsed | sequence | lengths 1-451 |
2401.11617
Mennatullah Siam M.S.
Abdul-Hakeem Omotayo, Ashery Mbilinyi, Lukman Ismaila, Houcemeddine Turki, Mahmoud Abdien, Karim Gamal, Idriss Tondji, Yvan Pimi, Naome A. Etori, Marwa M. Matar, Clifford Broni-Bediako, Abigail Oppong, Mai Gamal, Eman Ehab, Gbetondji Dovonon, Zainab Akinjobi, Daniel Ajisafe, Oluwabukola G. Adegboro, Mennatullah Siam
The State of Computer Vision Research in Africa
Community Work of Ro'ya Grassroots, https://ro-ya-cv4africa.github.io/homepage/. Published in JAIR. arXiv admin note: text overlap with arXiv:2305.06773
JAIR 2024
10.1613/jair.1.16653
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Despite significant efforts to democratize artificial intelligence (AI), computer vision, a sub-field of AI, still lags in Africa. A significant factor in this is the limited access to computing resources, datasets, and collaborations. As a result, Africa's contribution to top-tier publications in this field has been only 0.06% over the past decade. Towards improving the computer vision field and making it more accessible and inclusive, this study analyzes 63,000 Scopus-indexed computer vision publications from Africa. We utilize large language models to automatically parse their abstracts and to identify and categorize topics and datasets. This resulted in a list of more than 100 African datasets. Our objective is to provide a comprehensive taxonomy of dataset categories to facilitate better understanding and utilization of these resources. We also analyze collaboration trends of researchers within and outside the continent. Additionally, we conduct a large-scale questionnaire among African computer vision researchers to identify the structural barriers they believe require urgent attention. In conclusion, our study offers a comprehensive overview of the current state of computer vision research in Africa, with the aim of empowering marginalized communities to participate in the design and development of computer vision systems.
[ { "created": "Sun, 21 Jan 2024 22:50:44 GMT", "version": "v1" }, { "created": "Sun, 4 Feb 2024 18:17:27 GMT", "version": "v2" }, { "created": "Fri, 13 Sep 2024 22:49:08 GMT", "version": "v3" } ]
2024-09-17
[ [ "Omotayo", "Abdul-Hakeem", "" ], [ "Mbilinyi", "Ashery", "" ], [ "Ismaila", "Lukman", "" ], [ "Turki", "Houcemeddine", "" ], [ "Abdien", "Mahmoud", "" ], [ "Gamal", "Karim", "" ], [ "Tondji", "Idriss", "" ], [ "Pimi", "Yvan", "" ], [ "Etori", "Naome A.", "" ], [ "Matar", "Marwa M.", "" ], [ "Broni-Bediako", "Clifford", "" ], [ "Oppong", "Abigail", "" ], [ "Gamal", "Mai", "" ], [ "Ehab", "Eman", "" ], [ "Dovonon", "Gbetondji", "" ], [ "Akinjobi", "Zainab", "" ], [ "Ajisafe", "Daniel", "" ], [ "Adegboro", "Oluwabukola G.", "" ], [ "Siam", "Mennatullah", "" ] ]
2401.11645
Aditya Patil
Aditya Patil, Vikas Joshi, Purvi Agrawal, Rupesh Mehta
Streaming Bilingual End-to-End ASR model using Attention over Multiple Softmax
Published in IEEE's Spoken Language Technology (SLT) 2022, 8 pages (6 + 2 for references), 5 figures
2022 IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar, 2023, pp. 252-259
10.1109/SLT54892.2023.10022475
null
eess.AS cs.CL cs.SD
http://creativecommons.org/licenses/by/4.0/
Even with several advancements in multilingual modeling, it is challenging to recognize multiple languages using a single neural model without knowing the input language, and most multilingual models assume the input language is available. In this work, we propose a novel bilingual end-to-end (E2E) modeling approach, where a single neural model can recognize both languages and also support switching between them, without any language input from the user. The proposed model has shared encoder and prediction networks, with language-specific joint networks that are combined via a self-attention mechanism. As the language-specific posteriors are combined, the model produces a single posterior probability over all output symbols, enabling a single beam-search decoding and allowing dynamic switching between the languages. The proposed approach outperforms the conventional bilingual baseline with relative word error rate reductions of 13.3%, 8.23%, and 1.3% on Hindi, English, and code-mixed test sets, respectively.
[ { "created": "Mon, 22 Jan 2024 01:44:42 GMT", "version": "v1" } ]
2024-01-23
[ [ "Patil", "Aditya", "" ], [ "Joshi", "Vikas", "" ], [ "Agrawal", "Purvi", "" ], [ "Mehta", "Rupesh", "" ] ]
2401.11649
Mengmeng Wang
Mengmeng Wang, Jiazheng Xing, Boyuan Jiang, Jun Chen, Jianbiao Mei, Xingxing Zuo, Guang Dai, Jingdong Wang, Yong Liu
M2-CLIP: A Multimodal, Multi-task Adapting Framework for Video Action Recognition
null
AAAI2024
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, the rise of large-scale vision-language pretrained models like CLIP, coupled with the technology of Parameter-Efficient Fine-Tuning (PEFT), has attracted substantial attention in video action recognition. Nevertheless, prevailing approaches tend to prioritize strong supervised performance at the expense of compromising the models' generalization capabilities during transfer. In this paper, we introduce a novel Multimodal, Multi-task CLIP adapting framework named M2-CLIP to address these challenges, preserving both high supervised performance and robust transferability. Firstly, to enhance the individual modality architectures, we introduce multimodal adapters to both the visual and text branches. Specifically, we design a novel visual TED-Adapter, which performs global Temporal Enhancement and local temporal Difference modeling to improve the temporal representation capabilities of the visual encoder. Moreover, we adopt text encoder adapters to strengthen the learning of semantic label information. Secondly, we design a multi-task decoder with a rich set of supervisory signals to adeptly satisfy the need for strong supervised performance and generalization within a multimodal framework. Experimental results validate the efficacy of our approach, demonstrating exceptional performance in supervised learning while maintaining strong generalization in zero-shot scenarios.
[ { "created": "Mon, 22 Jan 2024 02:03:31 GMT", "version": "v1" } ]
2024-01-23
[ [ "Wang", "Mengmeng", "" ], [ "Xing", "Jiazheng", "" ], [ "Jiang", "Boyuan", "" ], [ "Chen", "Jun", "" ], [ "Mei", "Jianbiao", "" ], [ "Zuo", "Xingxing", "" ], [ "Dai", "Guang", "" ], [ "Wang", "Jingdong", "" ], [ "Liu", "Yong", "" ] ]
2401.11673
Xinlin Ren
Chenjie Cao, Xinlin Ren, Yanwei Fu
MVSFormer++: Revealing the Devil in Transformer's Details for Multi-View Stereo
Accepted to ICLR2024
ICLR(International Conference on Learning Representations) 2024
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advancements in learning-based Multi-View Stereo (MVS) methods have prominently featured transformer-based models with attention mechanisms. However, existing approaches have not thoroughly investigated the profound influence of transformers on different MVS modules, resulting in limited depth estimation capabilities. In this paper, we introduce MVSFormer++, a method that prudently maximizes the inherent characteristics of attention to enhance various components of the MVS pipeline. Formally, our approach involves infusing cross-view information into the pre-trained DINOv2 model to facilitate MVS learning. Furthermore, we employ different attention mechanisms for the feature encoder and cost volume regularization, focusing on feature and spatial aggregations respectively. Additionally, we uncover that some design details would substantially impact the performance of transformer modules in MVS, including normalized 3D positional encoding, adaptive attention scaling, and the position of layer normalization. Comprehensive experiments on DTU, Tanks-and-Temples, BlendedMVS, and ETH3D validate the effectiveness of the proposed method. Notably, MVSFormer++ achieves state-of-the-art performance on the challenging DTU and Tanks-and-Temples benchmarks.
[ { "created": "Mon, 22 Jan 2024 03:22:49 GMT", "version": "v1" } ]
2024-01-23
[ [ "Cao", "Chenjie", "" ], [ "Ren", "Xinlin", "" ], [ "Fu", "Yanwei", "" ] ]
2401.11790
Francesc Xavier Gaya Morey
F. Xavier Gaya-Morey, Cristina Manresa-Yee, Jose M. Buades-Rubio
Deep Learning for Computer Vision based Activity Recognition and Fall Detection of the Elderly: a Systematic Review
null
Gaya-Morey, F.X., Manresa-Yee, C. and Buades-Rubio, J.M. Deep learning for computer vision based activity recognition and fall detection of the elderly: a systematic review. Appl Intell 54, 8982-9007 (2024)
10.1007/s10489-024-05645-1
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
As the percentage of elderly people in developed countries increases worldwide, the healthcare of this group is a growing concern, especially where it involves preserving their autonomy. In this direction, many studies are being published on Ambient Assisted Living (AAL) systems, which help reduce the concerns raised by the independent living of the elderly. In this study, a systematic review of the literature is presented on fall detection and Human Activity Recognition (HAR) for the elderly, as the two main tasks to solve in order to guarantee the safety of elderly people living alone. To address the current tendency to perform these two tasks, the review focuses on the use of Deep Learning (DL) based approaches on computer vision data. In addition, different collections of data, such as DL models, datasets, or hardware (e.g. depth or thermal cameras), are gathered from the reviewed studies and provided for reference in future studies. Strengths and weaknesses of existing approaches are also discussed and, based on them, our recommendations for future works are provided.
[ { "created": "Mon, 22 Jan 2024 09:40:52 GMT", "version": "v1" }, { "created": "Wed, 28 Aug 2024 09:09:34 GMT", "version": "v2" }, { "created": "Tue, 3 Sep 2024 07:34:44 GMT", "version": "v3" } ]
2024-09-04
[ [ "Gaya-Morey", "F. Xavier", "" ], [ "Manresa-Yee", "Cristina", "" ], [ "Buades-Rubio", "Jose M.", "" ] ]
2401.11831
Vincent Christlein
Richin Sukesh, Mathias Seuret, Anguelos Nicolaou, Martin Mayr, Vincent Christlein
A Fair Evaluation of Various Deep Learning-Based Document Image Binarization Approaches
DAS 2022
Document Analysis Systems. DAS 2022. Lecture Notes in Computer Science, vol 13237. Springer, Cham
10.1007/978-3-031-06555-2_52
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Binarization of document images is an important pre-processing step in the field of document analysis. Traditional image binarization techniques usually rely on histograms or local statistics to identify a valid threshold to differentiate between different aspects of the image. Deep learning techniques are able to generate binarized versions of the images by learning context-dependent features that are more robust to the degradation typically occurring in document images. In recent years, many deep learning-based methods have been developed for document binarization. But which one to choose? There have been no studies that compare these methods rigorously. Therefore, this work focuses on the evaluation of different deep learning-based methods under the same evaluation protocol. We evaluate them on different Document Image Binarization Contest (DIBCO) datasets and obtain very heterogeneous results. We show that the DE-GAN model performed better than the other models when evaluated on the DIBCO2013 dataset, while DP-LinkNet performed best on the DIBCO2017 dataset. The 2-StageGAN performed best on the DIBCO2018 dataset, while SauvolaNet outperformed the others on the DIBCO2019 challenge. Finally, we make the code, all models, and the evaluation publicly available (https://github.com/RichSu95/Document_Binarization_Collection) to ensure reproducibility and simplify future binarization evaluations.
[ { "created": "Mon, 22 Jan 2024 10:42:51 GMT", "version": "v1" } ]
2024-01-23
[ [ "Sukesh", "Richin", "" ], [ "Seuret", "Mathias", "" ], [ "Nicolaou", "Anguelos", "" ], [ "Mayr", "Martin", "" ], [ "Christlein", "Vincent", "" ] ]
2401.11848
Idoia Berges
V\'ictor Julio Ram\'irez-Dur\'an, Idoia Berges, Arantza Illarramendi
ExtruOnt: An ontology for describing a type of manufacturing machine for Industry 4.0 systems
This is the accepted manuscript. The definitive, peer reviewed and edited version of this article is published in Semantic Web 11(6): 887-909 (2020) https://doi.org/10.3233/sw-200376
Semantic Web 11(6): 887-909 (2020)
10.3233/sw-200376
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Semantically rich descriptions of manufacturing machines, offered in a machine-interpretable code, can provide interesting benefits in Industry 4.0 scenarios. However, such descriptions are evidently lacking. In this paper we present the development effort made to build an ontology, called ExtruOnt, for describing a type of manufacturing machine, more precisely, one that performs an extrusion process (an extruder). Although the scope of the ontology is restricted to a concrete domain, it could be used as a model for the development of other ontologies for describing manufacturing machines in Industry 4.0 scenarios. The terms of the ExtruOnt ontology provide different types of information related to an extruder, which are reflected in distinct modules that constitute the ontology. Thus, it contains classes and properties for expressing descriptions of the components of an extruder, spatial connections, features, and 3D representations of those components, and finally the sensors used to capture indicators about the performance of this type of machine. The ontology development process has been carried out in close collaboration with domain experts.
[ { "created": "Mon, 22 Jan 2024 11:05:54 GMT", "version": "v1" } ]
2024-01-23
[ [ "Ramírez-Durán", "Víctor Julio", "" ], [ "Berges", "Idoia", "" ], [ "Illarramendi", "Arantza", "" ] ]
2401.11865
Idoia Berges
Idoia Berges, Jes\'us Berm\'udez, Arantza Illarramendi
Toward Semantic Interoperability of Electronic Health Records
This is the Accepted Manuscript. The definitive, peer reviewed and edited version of this article is: Idoia Berges, Jes\'us Berm\'udez, Arantza Illarramendi: Toward Semantic Interoperability of Electronic Health Records. IEEE Trans. Inf. Technol. Biomed. 16(3): 424-431 (2012). DOI:10.1109/TITB.2011.2180917. Copyright 2011 IEEE
IEEE Trans. Inf. Technol. Biomed. 16(3): 424-431 (2012)
10.1109/TITB.2011.2180917
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although the goal of achieving semantic interoperability of electronic health records (EHRs) is pursued by many researchers, it has not been accomplished yet. In this paper, we present a proposal that smooths the way toward the achievement of that goal. In particular, our study focuses on medical diagnosis statements. In summary, the main contributions of our ontology-based proposal are the following: first, it includes a canonical ontology whose EHR-related terms focus on semantic aspects. As a result, their descriptions are independent of the languages and technology aspects used in different organizations to represent EHRs. Moreover, those terms are related to their corresponding codes in well-known medical terminologies. Second, it deals with modules that allow obtaining rich ontological representations of EHR information managed by proprietary models of health information systems. The features of one specific module are shown as a reference. Third, it considers the necessary mapping axioms between ontological terms, enhanced with so-called path mappings. This feature smooths out structural differences between heterogeneous EHR representations, allowing proper alignment of information.
[ { "created": "Mon, 22 Jan 2024 11:39:55 GMT", "version": "v1" } ]
2024-01-23
[ [ "Berges", "Idoia", "" ], [ "Bermúdez", "Jesús", "" ], [ "Illarramendi", "Arantza", "" ] ]
2401.11898
EPTCS
Salwa Tabet Gonzalez (University of Strasbourg), Predrag Jani\v{c}i\'c (University of Belgrade), Julien Narboux (University of Strasbourg)
Automated Completion of Statements and Proofs in Synthetic Geometry: an Approach based on Constraint Solving
In Proceedings ADG 2023, arXiv:2401.10725
EPTCS 398, 2024, pp. 21-37
10.4204/EPTCS.398.6
null
cs.AI cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conjecturing and theorem proving are activities at the center of mathematical practice and are difficult to separate. In this paper, we propose a framework for completing incomplete conjectures and incomplete proofs. The framework can turn a conjecture with missing assumptions and an under-specified goal into a proper theorem. It can also help in completing a proof sketch into a human-readable and machine-checkable proof. Our approach is focused on synthetic geometry, and uses coherent logic and constraint solving. The proposed approach is uniform for all three kinds of tasks, flexible, and, to our knowledge, the only one of its kind.
[ { "created": "Mon, 22 Jan 2024 12:49:08 GMT", "version": "v1" } ]
2024-01-25
[ [ "Gonzalez", "Salwa Tabet", "", "University of Strasbourg" ], [ "Janičić", "Predrag", "", "University of Belgrade" ], [ "Narboux", "Julien", "", "University of Strasbourg" ] ]
2401.11900
EPTCS
Zolt\'an Kov\'acs (The Private University College of Education of the Diocese of Linz, Austria), Tom\'as Recio (Escuela Polit\'ecnica Superior, Universidad Antonio de Nebrija, Madrid, Spain), M. Pilar V\'elez (Escuela Polit\'ecnica Superior, Universidad Antonio de Nebrija, Madrid, Spain)
Showing Proofs, Assessing Difficulty with GeoGebra Discovery
In Proceedings ADG 2023, arXiv:2401.10725
EPTCS 398, 2024, pp. 43-52
10.4204/EPTCS.398.8
null
cs.SC cs.AI cs.CG
http://creativecommons.org/licenses/by/4.0/
In our contribution we describe some ongoing improvements to the Automated Reasoning Tools developed in GeoGebra Discovery, providing different examples of the performance of these new features. We describe the new ShowProof command, which outputs both the sequence of steps performed by GeoGebra Discovery to confirm a certain statement and a number intended to grade the difficulty or interest of the assertion. The proposal of this assessment measure, which involves comparing the expression of the thesis (or conclusion) as a combination of the hypotheses, will be developed.
[ { "created": "Mon, 22 Jan 2024 12:50:12 GMT", "version": "v1" } ]
2024-01-25
[ [ "Kovács", "Zoltán", "", "The Private University College of Education of the\n Diocese of Linz, Austria" ], [ "Recio", "Tomás", "", "Escuela Politécnica Superior,\n Universidad Antonio de Nebrija, Madrid, Spain" ], [ "Vélez", "M. Pilar", "", "Escuela\n Politécnica Superior, Universidad Antonio de Nebrija, Madrid, Spain" ] ]
2401.11903
EPTCS
Milan Bankovi\'c (Faculty of Mathematics, University of Belgrade, Serbia)
Automation of Triangle Ruler-and-Compass Constructions Using Constraint Solvers
In Proceedings ADG 2023, arXiv:2401.10725
EPTCS 398, 2024, pp. 62-72
10.4204/EPTCS.398.10
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
In this paper, we present an approach to the automated solving of triangle ruler-and-compass construction problems using finite-domain constraint solvers. The constraint model is described in the MiniZinc modeling language and is based on automated planning. The main benefit of using general constraint solvers for this purpose, instead of developing dedicated tools, is that we can rely on the efficient search that is already implemented within the solver, enabling us to focus on the geometric aspects of the problem. We may also use the solver's built-in optimization capabilities to search for the shortest possible constructions. We evaluate our approach on 74 solvable problems from Wernick's list and compare it to the dedicated triangle construction solver ArgoTriCS. The results show that our approach is comparable to dedicated tools, while requiring much less implementation effort. Our model also often finds shorter constructions, thanks to the optimization capabilities offered by the constraint solvers.
[ { "created": "Mon, 22 Jan 2024 12:50:46 GMT", "version": "v1" } ]
2024-01-23
[ [ "Banković", "Milan", "", "Faculty of Mathematics, University of Belgrade,\n Serbia" ] ]
2401.11905
EPTCS
Pedro Quaresma (University of Coimbra), Pierluigi Graziani (University of Urbino), Stefano M. Nicoletti (University of Twente)
Considerations on Approaches and Metrics in Automated Theorem Generation/Finding in Geometry
In Proceedings ADG 2023, arXiv:2401.10725
EPTCS 398, 2024, pp. 85-100
10.4204/EPTCS.398.12
null
cs.AI cs.LO
http://creativecommons.org/licenses/by/4.0/
The pursuit of properties that can be identified to permit an automated reasoning program to generate and find new and interesting theorems is an interesting research goal (pun intended). The automatic discovery of new theorems is a goal in itself, and it has been addressed in specific areas with different methods. The separation of the "weeds", the uninteresting, trivial facts, from the "wheat", the new and interesting facts, is much harder, but it is also being addressed by different authors using different approaches. In this paper we focus on geometry. We present and discuss different approaches for the automatic discovery of geometric theorems (and properties), and different metrics to find the interesting theorems among all those that were generated. After this description we introduce the first result of this article: an undecidability result proving that it is undecidable whether a given Turing machine that produces theorems is also able to produce interesting theorems. Consequently, we argue that judging whether a theorem prover is able to produce interesting theorems remains a non-deterministic task, at best a task to be addressed by programs based on algorithms guided by heuristic criteria. Therefore, for a human to address this task, two things are necessary: an expert survey that sheds light on what a theorem prover/finder of interesting geometric theorems is, and, to enable this analysis, other surveys that clarify the metrics and approaches related to the interestingness of geometric theorems. In the conclusion of this article we introduce the structure of two such surveys, the second result of this article, and we discuss some future work.
[ { "created": "Mon, 22 Jan 2024 12:51:19 GMT", "version": "v1" } ]
2024-01-23
[ [ "Quaresma", "Pedro", "", "University of Coimbra" ], [ "Graziani", "Pierluigi", "", "University\n of Urbino" ], [ "Nicoletti", "Stefano M.", "", "University of Twente" ] ]
2401.11906
EPTCS
Bel\'en Ari\~no-Morera (Departamento de Econom\'ia Financiera y Contabilidad, Universidad Rey Juan Carlos, Madrid, Spain), Zolt\'an Kov\'acs (The Private University College of Education of the Diocese of Linz, Austria), Tom\'as Recio (Escuela Polit\'ecnica Superior, Universidad Antonio de Nebrija, Madrid, Spain), Piedad Tolmos (Departamento de Econom\'ia Financiera y Contabilidad, Universidad Rey Juan Carlos, Madrid, Spain)
Solving with GeoGebra Discovery an Austrian Mathematics Olympiad problem: Lessons Learned
In Proceedings ADG 2023, arXiv:2401.10725
EPTCS 398, 2024, pp. 101-109
10.4204/EPTCS.398.13
null
cs.SC cs.AI cs.CG
http://creativecommons.org/licenses/by/4.0/
We address, through the automated reasoning tools in GeoGebra Discovery, a problem from a regional phase of the Austrian Mathematics Olympiad 2023. Trying to solve this problem gives rise to four different kinds of feedback: the almost instantaneous, automated solution of the proposed problem; the measure of its complexity, according to some recent proposals; the automated discovery of a generalization of the given assertion, showing that the same statement is true over more general polygons than those mentioned in the problem; and the difficulties associated with the analysis of the surprisingly high number of involved degenerate cases that appear when using the LocusEquation command on this problem. In our communication we will describe and reflect on these diverse issues, highlighting their exemplary role in showing some of the advantages, problems, and current fields of development of GeoGebra Discovery.
[ { "created": "Mon, 22 Jan 2024 12:51:35 GMT", "version": "v1" } ]
2024-01-25
[ [ "Ariño-Morera", "Belén", "", "Departamento de Economía Financiera y\n Contabilidad, Universidad Rey Juan Carlos, Madrid, Spain" ], [ "Kovács", "Zoltán", "", "The Private University College of Education of the Diocese of Linz,\n Austria" ], [ "Recio", "Tomás", "", "Escuela Politécnica Superior, Universidad Antonio\n de Nebrija, Madrid, Spain" ], [ "Tolmos", "Piedad", "", "Departamento de Economía\n Financiera y Contabilidad, Universidad Rey Juan Carlos, Madrid, Spain" ] ]
2401.12108
Jeremias D\"otterl
Jeremias D\"otterl, Ralf Bruns, J\"urgen Dunkel, Sascha Ossowski
On-Time Delivery in Crowdshipping Systems: An Agent-Based Approach Using Streaming Data
null
Frontiers in Artificial Intelligence and Applications. Volume 325: ECAI 2020. Pages 51-58
10.3233/FAIA200075
null
cs.AI cs.LG cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In parcel delivery, the "last mile" from the parcel hub to the customer is costly, especially for time-sensitive delivery tasks that have to be completed within hours after arrival. Recently, crowdshipping has attracted increased attention as a new alternative to traditional delivery modes. In crowdshipping, private citizens ("the crowd") perform short detours in their daily lives to contribute to parcel delivery in exchange for small incentives. However, achieving desirable crowd behavior is challenging, as the crowd is highly dynamic and consists of autonomous, self-interested individuals. Leveraging crowdshipping for time-sensitive deliveries remains an open challenge. In this paper, we present an agent-based approach to on-time parcel delivery with crowds. Our system performs data stream processing on the couriers' smartphone sensor data to predict delivery delays. Whenever a delay is predicted, the system attempts to forge an agreement for transferring the parcel from the current deliverer to a more promising courier nearby. Our experiments show that, through accurate delay predictions and purposeful task transfers, many delays that would otherwise occur can be prevented.
[ { "created": "Mon, 22 Jan 2024 16:45:15 GMT", "version": "v1" } ]
2024-01-23
[ [ "Dötterl", "Jeremias", "" ], [ "Bruns", "Ralf", "" ], [ "Dunkel", "Jürgen", "" ], [ "Ossowski", "Sascha", "" ] ]
2401.12259
Sascha Ossowski
Holger Billhardt, Alberto Fern\'andez, Marin Lujak, Sascha Ossowski
Agreement Technologies for Coordination in Smart Cities
null
Applied Sciences, Volume 8, Issue 5 (2018)
10.3390/app8050816
null
cs.MA cs.AI
http://creativecommons.org/licenses/by/4.0/
Many challenges in today's society can be tackled by distributed open systems. This is particularly true for domains that are commonly perceived under the umbrella of smart cities, such as intelligent transportation, smart energy grids, or participative governance. When designing computer applications for these domains, it is necessary to account for the fact that the elements of such systems, often called software agents, are usually made by different designers and act on behalf of particular stakeholders. Furthermore, it is unknown at design time when such agents will enter or leave the system, and what interests new agents will represent. To instil coordination in such systems is particularly demanding, as usually only part of them can be directly controlled at runtime. Agreement technologies refer to a sandbox of tools and mechanisms for the development of such open multiagent systems, which are based on the notion of agreement. In this paper, we argue that agreement technologies are a suitable means for achieving coordination in smart city domains, and back our claim through examples of several real-world applications.
[ { "created": "Sun, 21 Jan 2024 17:43:08 GMT", "version": "v1" } ]
2024-01-24
[ [ "Billhardt", "Holger", "" ], [ "Fernández", "Alberto", "" ], [ "Lujak", "Marin", "" ], [ "Ossowski", "Sascha", "" ] ]
2401.12322
Sascha Ossowski
Holger Billhardt, Alberto Fern\'andez, Sascha Ossowski
Smart Recommendations for Renting Bikes in Bike Sharing Systems
null
Applied Sciences, Volume 11, Issue 20 (2021)
10.3390/app11209654
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Vehicle-sharing systems -- such as bike-, car-, or motorcycle-sharing systems -- have become increasingly popular in big cities in recent years. On the one hand, they provide a cheaper and environmentally friendlier means of transportation than private cars, and on the other hand, they satisfy the individual mobility demands of citizens better than traditional public transport systems. One of their advantages in this regard is their availability, e.g., the possibility of taking (or leaving) a vehicle almost anywhere in a city. This availability obviously depends on different strategic and operational management decisions and policies, such as the dimension of the fleet or the (re)distribution of vehicles. Agglutination problems -- where, due to usage patterns, available vehicles are concentrated in certain areas, whereas no vehicles are available in others -- are quite common in such systems, and need to be dealt with. Research has been dedicated to this problem, specifying different techniques to reduce imbalanced situations. In this paper, we present and compare strategies for recommending stations to users who wish to rent or return bikes in station-based bike-sharing systems. Our first contribution is a novel recommendation strategy based on queuing theory that recommends stations based on their utility to the user in terms of lower distance and higher probability of finding a bike or slot. Then, we go one step further, defining a strategy that recommends stations by combining the utility of a particular user with the utility of the global system, measured in terms of the improvement in the distribution of bikes and slots with respect to the expected future demand, with the aim of implicitly avoiding or alleviating balancing problems. We present several experiments to evaluate our proposal with real data from the bike sharing system BiciMAD in Madrid.
[ { "created": "Mon, 22 Jan 2024 19:29:33 GMT", "version": "v1" } ]
2024-01-24
[ [ "Billhardt", "Holger", "" ], [ "Fernández", "Alberto", "" ], [ "Ossowski", "Sascha", "" ] ]
2401.12324
Sascha Ossowski
Holger Billhardt, Jos\'e-Antonio Santos, Alberto Fern\'andez, Mar Moreno, Sascha Ossowski, Jos\'e A. Rodr\'iguez
Streamlining Advanced Taxi Assignment Strategies based on Legal Analysis
null
Neurocomputing, Volume 438 (2022)
10.1016/j.neucom.2021.10.085
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
In recent years many novel applications have appeared that promote the provision of services and activities in a collaborative manner. The key idea behind such systems is to take advantage of idle or underused capacities of existing resources, in order to provide improved services that assist people in their daily tasks, with additional functionality, enhanced efficiency, and/or reduced cost. Particularly in the domain of urban transportation, many researchers have put forward novel ideas, which are then implemented and evaluated through prototypes that usually draw upon AI methods and tools. However, such proposals also bring up multiple non-technical issues that need to be identified and addressed adequately if such systems are ever meant to be applied to the real world. While, in practice, legal and ethical aspects related to such AI-based systems are seldom considered in the beginning of the research and development process, we argue that they not only restrict design decisions, but can also help guide them. In this manuscript, we set out from a prototype of a taxi coordination service that mediates between individual (and autonomous) taxis and potential customers. After representing key aspects of its operation in a semi-structured manner, we analyse its viability from the viewpoint of current legal restrictions and constraints, so as to identify additional non-functional requirements as well as options to address them. Then, we go one step ahead, and actually modify the existing prototype to incorporate the previously identified recommendations. Performing experiments with this improved system helps us identify the most adequate option among several legally admissible alternatives.
[ { "created": "Mon, 22 Jan 2024 19:35:28 GMT", "version": "v1" } ]
2024-01-24
[ [ "Billhardt", "Holger", "" ], [ "Santos", "José-Antonio", "" ], [ "Fernández", "Alberto", "" ], [ "Moreno", "Mar", "" ], [ "Ossowski", "Sascha", "" ], [ "Rodríguez", "José A.", "" ] ]
2401.12329
Sascha Ossowski
Holger Billhardt, Alberto Fern\'andez, Pasqual Mart\'i, Javier Prieto Tejedor, Sascha Ossowski
Towards a prioritised use of transportation infrastructures: the case of vehicle-specific dynamic access restrictions to city centres
null
Electronics, Volume 11, Issue 4 (2022)
10.3390/electronics11040576
null
physics.soc-ph cs.AI
http://creativecommons.org/licenses/by/4.0/
One of the main problems that local authorities of large cities have to face is the regulation of urban mobility. They need to provide the means to allow for the efficient movement of people and distribution of goods. However, the provisioning of transportation services needs to take into account general global objectives, like reducing emissions and providing healthier living environments, which may not always be aligned with individual interests. Urban mobility is usually provided through a transport infrastructure that includes all the elements that support mobility. On many occasions, the capacity of the elements of this infrastructure is lower than the actual demand, and thus different transportation activities compete for their use. In this paper, we argue that scarce transport infrastructure elements should be assigned dynamically and in a prioritised manner to the transport activities that have a higher utility from the point of view of society, for example, activities that produce less pollution and provide more value to society. We define a general model for prioritising the use of a particular type of transportation infrastructure element called time-unlimited elements, whose usage time is unknown a priori, and illustrate its dynamics through two use cases: vehicle-specific dynamic access restriction in city centres (i) based on the usage levels of available parking spaces and (ii) to assure sustained admissible air quality levels in the city centre. We carry out several experiments using the SUMO traffic simulation tool to evaluate our proposal.
[ { "created": "Mon, 22 Jan 2024 19:43:54 GMT", "version": "v1" } ]
2024-01-24
[ [ "Billhardt", "Holger", "" ], [ "Fernández", "Alberto", "" ], [ "Martí", "Pasqual", "" ], [ "Tejedor", "Javier Prieto", "" ], [ "Ossowski", "Sascha", "" ] ]
2401.12375
Ikechukwu Onyenwe
Tubo Faustinah Nemieboka, Ikechukwu E. Onyenwe, Doris C. Asogwa
Development of an NLP-driven computer-based test guide for visually impaired students
10 pages, 6 figures
International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE) Vol. 12, Issue 9, September 2023
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
In recent years, advancements in Natural Language Processing (NLP) techniques have revolutionized the accessibility and inclusivity of testing, particularly for visually impaired students (VIS). Computer-Based Testing (CBT) has long shown its relevance for administering exams electronically, making the test process easier, providing quicker and more accurate results, and offering greater flexibility and accessibility for candidates. Yet its benefits have not reached visually impaired students, who cannot access printed documents. Hence, in this paper, we present an NLP-driven Computer-Based Test guide for visually impaired students. It employs pre-trained speech technology models to provide real-time assistance and support to visually impaired students. The system utilizes NLP technologies to convert the text-based questions and the associated options into a machine-readable format. Subsequently, the pre-trained speech technology model processes the converted text, enabling the VIS to comprehend and analyze the content. Furthermore, we validated this pre-trained model by testing its accuracy on sample audio dataset labels (A, B, C, D, E, F, G), compared against the voice recordings obtained from 20 VIS as predicted by the system, to obtain values for precision, recall, and F1-score. These metrics were used to assess the performance of the pre-trained model and indicate that it performs well enough for the evaluated system. The methodology adopted for this system is the Object-Oriented Analysis and Design Methodology (OOADM), in which objects are discussed and built by modeling real-world instances.
[ { "created": "Mon, 22 Jan 2024 21:59:00 GMT", "version": "v1" } ]
2024-01-24
[ [ "Nemieboka", "Tubo Faustinah", "" ], [ "Onyenwe", "Ikechukwu E.", "" ], [ "Asogwa", "Doris C.", "" ] ]
2401.12451
Shun Fang
Shun Fang, Ming Cui, Xing Feng, Yanna Lv
Methods and strategies for improving the novel view synthesis quality of neural radiation field
null
IEEE ACCESS 12 (2024) 50548-50555
10.1109/ACCESS.2024.3382997
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Neural Radiance Field (NeRF) technology can learn a 3D implicit model of a scene from 2D images and synthesize realistic novel view images. This technology has received widespread attention from industry and has good application prospects. In response to the problem that the rendering quality of NeRF images needs to be improved, many researchers have proposed various methods for improving rendering quality over the past three years. The latest relevant papers are classified and reviewed, the technical principles behind the quality improvements are analyzed, and the future direction of quality improvement methods is discussed. This study can help researchers quickly understand the current state and evolutionary context of the technology in this field, which can help inspire the development of more efficient algorithms and promote the application of NeRF technology in related fields.
[ { "created": "Tue, 23 Jan 2024 02:30:16 GMT", "version": "v1" }, { "created": "Thu, 18 Apr 2024 01:37:42 GMT", "version": "v2" } ]
2024-04-19
[ [ "Fang", "Shun", "" ], [ "Cui", "Ming", "" ], [ "Feng", "Xing", "" ], [ "Lv", "Yanna", "" ] ]
2401.12554
Daniel Nichols
Daniel Nichols, Joshua H. Davis, Zhaojun Xie, Arjun Rajaram, Abhinav Bhatele
Can Large Language Models Write Parallel Code?
null
The 33rd International Symposium on High-Performance Parallel and Distributed Computing (HPDC '24), June 3-7, 2024, Pisa, Italy. ACM, New York, NY, USA, 14 pages
10.1145/3625549.3658689
null
cs.DC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models are increasingly becoming a popular tool for software development. Their ability to model and generate source code has been demonstrated in a variety of contexts, including code completion, summarization, translation, and lookup. However, they often struggle to generate code for complex programs. In this paper, we study the capabilities of state-of-the-art language models to generate parallel code. In order to evaluate language models, we create a benchmark, ParEval, consisting of prompts that represent 420 different coding tasks related to scientific and parallel computing. We use ParEval to evaluate the effectiveness of several state-of-the-art open- and closed-source language models on these tasks. We introduce novel metrics for evaluating the performance of generated code, and use them to explore how well each large language model performs for 12 different computational problem types and six different parallel programming models.
[ { "created": "Tue, 23 Jan 2024 08:25:12 GMT", "version": "v1" }, { "created": "Mon, 1 Apr 2024 05:34:36 GMT", "version": "v2" }, { "created": "Tue, 14 May 2024 15:07:58 GMT", "version": "v3" } ]
2024-05-15
[ [ "Nichols", "Daniel", "" ], [ "Davis", "Joshua H.", "" ], [ "Xie", "Zhaojun", "" ], [ "Rajaram", "Arjun", "" ], [ "Bhatele", "Abhinav", "" ] ]
2401.12609
Alexandre Zouaoui
Behnood Rasti (HZDR), Alexandre Zouaoui (Thoth), Julien Mairal (Thoth), Jocelyn Chanussot (Thoth)
Fast Semisupervised Unmixing Using Nonconvex Optimization
null
IEEE TGRS, 2024, 62
10.1109/TGRS.2024.3440663
null
cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we introduce a novel linear model tailored for semisupervised/library-based unmixing. Our model incorporates considerations for library mismatch while enabling the enforcement of the abundance sum-to-one constraint (ASC). Unlike conventional sparse unmixing methods, this model involves nonconvex optimization, presenting significant computational challenges. We demonstrate the efficacy of the Alternating Direction Method of Multipliers (ADMM) in cyclically solving these intricate problems. We propose two semisupervised unmixing approaches, each relying on a distinct prior applied to the new model in addition to the ASC: a sparsity prior and a convexity constraint. Our experimental results validate that enforcing the convexity constraint outperforms the sparsity prior for the endmember library. These results are corroborated across three simulated datasets (accounting for spectral variability and varying pixel purity levels) and the Cuprite dataset. Additionally, our comparison with conventional sparse unmixing methods showcases the considerable advantages of our proposed model, which entails nonconvex optimization. Notably, our implementations of the proposed algorithms, fast semisupervised unmixing (FaSUn) and sparse unmixing using soft-shrinkage (SUnS), prove considerably more efficient than traditional sparse unmixing methods. SUnS and FaSUn were implemented using PyTorch and are provided in a dedicated Python package called Fast Semisupervised Unmixing (FUnmix), which is open source and available at https://github.com/BehnoodRasti/FUnmix
[ { "created": "Tue, 23 Jan 2024 10:07:41 GMT", "version": "v1" }, { "created": "Mon, 30 Sep 2024 09:10:34 GMT", "version": "v2" } ]
2024-10-01
[ [ "Rasti", "Behnood", "", "HZDR" ], [ "Zouaoui", "Alexandre", "", "Thoth" ], [ "Mairal", "Julien", "", "Thoth" ], [ "Chanussot", "Jocelyn", "", "Thoth" ] ]
2401.12708
Andrea Pugnana
Andrea Pugnana and Lorenzo Perini and Jesse Davis and Salvatore Ruggieri
Deep Neural Network Benchmarks for Selective Classification
Published in the Journal of Data-centric Machine Learning Research (DMLR), Vol 1, (17):1-58 (2024)
Journal of Data-centric Machine Learning Research (DMLR), Vol 1, (17):1-58, (2024)
null
null
cs.LG cs.AI stat.ML
http://creativecommons.org/licenses/by/4.0/
With the increasing deployment of machine learning models in many socially sensitive tasks, there is a growing demand for reliable and trustworthy predictions. One way to accomplish these requirements is to allow a model to abstain from making a prediction when there is a high risk of making an error. This requires adding a selection mechanism to the model, which selects those examples for which the model will provide a prediction. The selective classification framework aims to design a mechanism that balances the fraction of rejected predictions (i.e., the proportion of examples for which the model does not make a prediction) versus the improvement in predictive performance on the selected predictions. Multiple selective classification frameworks exist, most of which rely on deep neural network architectures. However, the empirical evaluation of the existing approaches is still limited to partial comparisons among methods and settings, providing practitioners with little insight into their relative merits. We fill this gap by benchmarking 18 baselines on a diverse set of 44 datasets that includes both image and tabular data. Moreover, the benchmark includes a mix of binary and multiclass tasks. We evaluate these approaches using several criteria, including selective error rate, empirical coverage, distribution of the rejected instances' classes, and performance on out-of-distribution instances. The results indicate that there is not a single clear winner among the surveyed baselines, and the best method depends on the users' objectives.
[ { "created": "Tue, 23 Jan 2024 12:15:47 GMT", "version": "v1" }, { "created": "Wed, 18 Sep 2024 07:48:33 GMT", "version": "v2" } ]
2024-09-19
[ [ "Pugnana", "Andrea", "" ], [ "Perini", "Lorenzo", "" ], [ "Davis", "Jesse", "" ], [ "Ruggieri", "Salvatore", "" ] ]
2401.12822
Esmaeel Mohammadi
Esmaeel Mohammadi, Mikkel Stokholm-Bjerregaard, Aviaja Anna Hansen, Per Halkj{\ae}r Nielsen, Daniel Ortiz-Arroyo, Petar Durdevic
Deep Learning Based Simulators for the Phosphorus Removal Process Control in Wastewater Treatment via Deep Reinforcement Learning Algorithms
Journal Paper
Engineering Applications of Artificial Intelligence 133 (2024) 107992
10.1016/j.engappai.2024.107992
null
eess.SY cs.AI cs.LG cs.SY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Phosphorus removal is vital in wastewater treatment to reduce reliance on limited resources. Deep reinforcement learning (DRL) is a machine learning technique that can optimize complex and nonlinear systems, including the processes in wastewater treatment plants, by learning control policies through trial and error. However, applying DRL to chemical and biological processes is challenging due to the need for accurate simulators. This study trained six models to identify the phosphorus removal process and used them to create a simulator for the DRL environment. Although the models achieved high accuracy (>97%), uncertainty and incorrect prediction behavior limited their performance as simulators over longer horizons. Compounding errors in the models' predictions were identified as one of the causes of this problem. This approach for improving process control involves creating simulation environments for DRL algorithms, using data from supervisory control and data acquisition (SCADA) systems with a sufficient historical horizon without complex system modeling or parameter estimation.
[ { "created": "Tue, 23 Jan 2024 14:55:46 GMT", "version": "v1" } ]
2024-03-25
[ [ "Mohammadi", "Esmaeel", "" ], [ "Stokholm-Bjerregaard", "Mikkel", "" ], [ "Hansen", "Aviaja Anna", "" ], [ "Nielsen", "Per Halkjær", "" ], [ "Ortiz-Arroyo", "Daniel", "" ], [ "Durdevic", "Petar", "" ] ]
2401.12851
Alfonso L\'opez Ruiz
Alfonso L\'opez, Carlos Javier Ogayar, Francisco Ram\'on Feito, Joaquim Jo\~ao Sousa
Classification of grapevine varieties using UAV hyperspectral imaging
null
Remote Sensing 16(12), 2103 (2024). https://www.mdpi.com/2072-4292/16/12/2103
10.3390/rs16122103
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The classification of different grapevine varieties is a relevant phenotyping task in Precision Viticulture, since it enables estimating the growth of vineyard rows dedicated to different varieties, among other applications concerning the wine industry. This task can be performed with destructive methods that require time-consuming data collection and analysis in the laboratory. However, Unmanned Aerial Vehicles (UAV) provide a more efficient and less prohibitive approach to collecting hyperspectral data, despite acquiring noisier data. Therefore, the first task is to process these data, correcting and downsampling the large amounts collected. In addition, the hyperspectral signatures of grape varieties are very similar. In this work, a Convolutional Neural Network (CNN) is proposed for classifying seventeen red and white grapevine varieties. Rather than classifying single samples, these are processed together with their neighbourhood. Hence, the extraction of spatial and spectral features is addressed with 1) a spatial attention layer and 2) Inception blocks. The pipeline goes from processing to dataset elaboration, finishing with the training phase. The fitted model is evaluated in terms of response time, accuracy, and data separability, and compared with other state-of-the-art CNNs for classifying hyperspectral data. Our network proved to be much more lightweight, with a reduced number of input bands, a lower number of trainable weights, and therefore reduced training time. Despite this, the evaluated metrics showed much better results for our network (~99% overall accuracy), in comparison with previous works barely achieving 81% OA.
[ { "created": "Tue, 23 Jan 2024 15:35:50 GMT", "version": "v1" } ]
2024-07-30
[ [ "López", "Alfonso", "" ], [ "Ogayar", "Carlos Javier", "" ], [ "Feito", "Francisco Ramón", "" ], [ "Sousa", "Joaquim João", "" ] ]
2401.12866
Jeremias D\"otterl
Ralf Bruns, Jeremias D\"otterl, J\"urgen Dunkel, Sascha Ossowski
Evaluating Collaborative and Autonomous Agents in Data-Stream-Supported Coordination of Mobile Crowdsourcing
null
Sensors 2023, 23(2), 614
10.3390/s23020614
null
cs.AI cs.LG cs.MA
http://creativecommons.org/licenses/by/4.0/
Mobile crowdsourcing refers to systems where the completion of tasks necessarily requires physical movement of crowdworkers in an on-demand workforce. Evidence suggests that in such systems, tasks often get assigned to crowdworkers who struggle to complete those tasks successfully, resulting in high failure rates and low service quality. A promising solution to ensure higher quality of service is to continuously adapt the assignment and respond to failure-causing events by transferring tasks to better-suited workers who use different routes or vehicles. However, implementing task transfers in mobile crowdsourcing is difficult because workers are autonomous and may reject transfer requests. Moreover, task outcomes are uncertain and need to be predicted. In this paper, we propose different mechanisms to achieve outcome prediction and task coordination in mobile crowdsourcing. First, we analyze different data stream learning approaches for the prediction of task outcomes. Second, based on the suggested prediction model, we propose and evaluate two different approaches for task coordination with different degrees of autonomy: an opportunistic approach for crowdshipping with collaborative, but non-autonomous workers, and a market-based model with autonomous workers for crowdsensing.
[ { "created": "Tue, 23 Jan 2024 16:00:45 GMT", "version": "v1" } ]
2024-01-24
[ [ "Bruns", "Ralf", "" ], [ "Dötterl", "Jeremias", "" ], [ "Dunkel", "Jürgen", "" ], [ "Ossowski", "Sascha", "" ] ]
2401.12914
Salwa Mostafa
Salwa Mostafa, Mateus P. Mota, Alvaro Valcarce, and Mehdi Bennis
Emergent Communication Protocol Learning for Task Offloading in Industrial Internet of Things
null
GLOBECOM 2023
null
null
cs.IT cs.AI cs.MA math.IT
http://creativecommons.org/licenses/by/4.0/
In this paper, we leverage a multi-agent reinforcement learning (MARL) framework to jointly learn a computation offloading decision and multichannel access policy with corresponding signaling. Specifically, the base station and industrial Internet of Things mobile devices are reinforcement learning agents that need to cooperate to execute their computation tasks within a deadline constraint. We adopt an emergent communication protocol learning framework to solve this problem. The numerical results illustrate the effectiveness of emergent communication in improving the channel access success rate and the number of successfully computed tasks compared to contention-based, contention-free, and no-communication approaches. Moreover, the proposed task offloading policy outperforms remote and local computation baselines.
[ { "created": "Tue, 23 Jan 2024 17:06:13 GMT", "version": "v1" } ]
2024-01-24
[ [ "Mostafa", "Salwa", "" ], [ "Mota", "Mateus P.", "" ], [ "Valcarce", "Alvaro", "" ], [ "Bennis", "Mehdi", "" ] ]
2401.12985
Kausik Lakkaraju
Kausik Lakkaraju, Aniket Gupta, Biplav Srivastava, Marco Valtorta, Dezhi Wu
The Effect of Human v/s Synthetic Test Data and Round-tripping on Assessment of Sentiment Analysis Systems for Bias
arXiv admin note: text overlap with arXiv:2302.02038
The Fifth IEEE International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications (2023)
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Sentiment Analysis Systems (SASs) are data-driven Artificial Intelligence (AI) systems that output polarity and emotional intensity when given a piece of text as input. Like other AIs, SASs are also known to have unstable behavior when subjected to changes in data, which can make them problematic to trust out of concerns like bias when AI works with humans and the data has protected attributes like gender, race, and age. Recently, an approach was introduced to assess SASs in a black-box setting, without training data or code, and to rate them for bias using synthetic English data. We augment it by introducing two human-generated chatbot datasets and also consider a round-trip setting of translating the data from one language back to the same language through an intermediate language. We find that these settings show SAS performance in a more realistic light. Specifically, we find that rating SASs on the chatbot data showed more bias compared to the synthetic data, and that round-tripping using Spanish and Danish as intermediate languages reduces the bias (by up to 68%) in human-generated data while, in synthetic data, it takes a surprising turn by increasing the bias! Our findings will help researchers and practitioners refine their SAS testing strategies and foster trust as SASs are considered part of more mission-critical applications for global use.
[ { "created": "Mon, 15 Jan 2024 15:27:18 GMT", "version": "v1" } ]
2024-01-30
[ [ "Lakkaraju", "Kausik", "" ], [ "Gupta", "Aniket", "" ], [ "Srivastava", "Biplav", "" ], [ "Valtorta", "Marco", "" ], [ "Wu", "Dezhi", "" ] ]
2401.12997
Yujie Chen
Cunhang Fan, Yujie Chen, Jun Xue, Yonghui Kong, Jianhua Tao, Zhao Lv
Progressive Distillation Based on Masked Generation Feature Method for Knowledge Graph Completion
Accepted by AAAI2024
Proceedings of the AAAI Conference on Artificial Intelligence, 38(8), 8380-8388 (2024). AAAI-24 Technical Tracks 8
10.1609/aaai.v38i8.28680
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
In recent years, knowledge graph completion (KGC) models based on pre-trained language models (PLMs) have shown promising results. However, the large number of parameters and the high computational cost of PLMs pose challenges for their application in downstream tasks. This paper proposes a progressive distillation method based on masked generation features for the KGC task, aiming to significantly reduce the complexity of pre-trained models. Specifically, we perform pre-distillation on the PLM to obtain a high-quality teacher model, and compress the PLM network to obtain multi-grade student models. However, traditional feature distillation suffers from the limitation of having only a single representation of information in the teacher model. To solve this problem, we propose masked generation of teacher-student features, which contain richer representation information. Furthermore, there is a significant gap in representation ability between teacher and student. Therefore, we design a progressive distillation method to distill student models at each grade level, enabling efficient knowledge transfer from teachers to students. The experimental results demonstrate that the model in the pre-distillation stage surpasses the existing state-of-the-art methods. Furthermore, in the progressive distillation stage, the model significantly reduces its parameter count while maintaining a certain level of performance. Specifically, the parameters of the lower-grade student model are reduced by 56.7% compared to the baseline.
[ { "created": "Fri, 19 Jan 2024 07:34:36 GMT", "version": "v1" }, { "created": "Mon, 10 Jun 2024 09:50:54 GMT", "version": "v2" } ]
2024-06-11
[ [ "Fan", "Cunhang", "" ], [ "Chen", "Yujie", "" ], [ "Xue", "Jun", "" ], [ "Kong", "Yonghui", "" ], [ "Tao", "Jianhua", "" ], [ "Lv", "Zhao", "" ] ]
2401.13002
EPTCS
Philip Todd (Saltire Software)
Theorem Discovery Amongst Cyclic Polygons
In Proceedings ADG 2023, arXiv:2401.10725
EPTCS 398, 2024, pp. 153-164
10.4204/EPTCS.398.18
null
cs.CG cs.AI
http://creativecommons.org/licenses/by/4.0/
We examine a class of geometric theorems on cyclic 2n-gons. We prove that if we take n disjoint pairs of sides, each pair separated by an even number of polygon sides, then there is a linear combination of the angles between those sides which is constant. We present a formula for the linear combination, which provides a theorem statement in terms of those angles. We describe a program which uses this result to generate new geometry proof problems and their solutions.
[ { "created": "Mon, 22 Jan 2024 12:52:55 GMT", "version": "v1" } ]
2024-01-25
[ [ "Todd", "Philip", "", "Saltire Software" ] ]
2401.13076
Mingyang Li
Mingyang Li, Yue Ma, and Qinru Qiu
SemanticSLAM: Learning based Semantic Map Construction and Robust Camera Localization
2023 IEEE Symposium Series on Computational Intelligence (SSCI) 6 pages
2023 IEEE Symposium Series on Computational Intelligence (SSCI)
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current techniques in Visual Simultaneous Localization and Mapping (VSLAM) estimate camera displacement by comparing image features of consecutive scenes. These algorithms depend on scene continuity and hence require frequent camera input. However, processing images frequently can lead to significant memory usage and computation overhead. In this study, we introduce SemanticSLAM, an end-to-end visual-inertial odometry system that utilizes semantic features extracted from an RGB-D sensor. This approach enables the creation of a semantic map of the environment and ensures reliable camera localization. SemanticSLAM is scene-agnostic, which means it doesn't require retraining for different environments. It operates effectively in indoor settings, even with infrequent camera input, without prior knowledge. The strength of SemanticSLAM lies in its ability to gradually refine the semantic map and improve pose estimation. This is achieved by a convolutional long-short-term-memory (ConvLSTM) network, trained to correct errors during map construction. Compared to existing VSLAM algorithms, SemanticSLAM improves pose estimation by 17%. The resulting semantic map provides interpretable information about the environment and can be easily applied to various downstream tasks, such as path planning, obstacle avoidance, and robot navigation. The code will be publicly available at https://github.com/Leomingyangli/SemanticSLAM
[ { "created": "Tue, 23 Jan 2024 20:02:02 GMT", "version": "v1" } ]
2024-01-25
[ [ "Li", "Mingyang", "" ], [ "Ma", "Yue", "" ], [ "Qiu", "Qinru", "" ] ]
2401.13157
Baris Coskunuzer
Baris Coskunuzer, Ignacio Segovia-Dominguez, Yuzhou Chen and Yulia R. Gel
Time-Aware Knowledge Representations of Dynamic Objects with Multidimensional Persistence
null
AAAI 2024
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning time-evolving objects such as multivariate time series and dynamic networks requires the development of novel knowledge representation mechanisms and neural network architectures, which allow for capturing implicit time-dependent information contained in the data. Such information is typically not directly observed but plays a key role in the learning task performance. In turn, the lack of a time dimension in knowledge encoding mechanisms for time-dependent data leads to frequent model updates, poor learning performance, and, as a result, subpar decision-making. Here we propose a new approach to a time-aware knowledge representation mechanism that notably focuses on implicit time-dependent topological information along multiple geometric dimensions. In particular, we propose a new approach, named \textit{Temporal MultiPersistence} (TMP), which produces multidimensional topological fingerprints of the data by using the existing single parameter topological summaries. The main idea behind TMP is to merge the two newest directions in topological representation learning, that is, multi-persistence, which simultaneously describes data shape evolution along multiple key parameters, and zigzag persistence, which enables us to extract the most salient data shape information over time. We derive theoretical guarantees of TMP vectorizations and show its utility, in application to forecasting on benchmark traffic flow, Ethereum blockchain, and electrocardiogram datasets, demonstrating competitive performance, especially in scenarios of limited data records. In addition, our TMP method improves the computational efficiency of the state-of-the-art multipersistence summaries by up to 59.5 times.
[ { "created": "Wed, 24 Jan 2024 00:33:53 GMT", "version": "v1" } ]
2024-01-25
[ [ "Coskunuzer", "Baris", "" ], [ "Segovia-Dominguez", "Ignacio", "" ], [ "Chen", "Yuzhou", "" ], [ "Gel", "Yulia R.", "" ] ]
2401.13193
Minsoo Kang
Minsoo Kang, Minkoo Kang, Suhyun Kim
Catch-Up Mix: Catch-Up Class for Struggling Filters in CNN
Published at AAAI2024, Equal contribution of first two authors
Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 2024, 2705-2713
10.1609/aaai.v38i3.28049
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning has made significant advances in computer vision, particularly in image classification tasks. Despite their high accuracy on training data, deep learning models often face challenges related to complexity and overfitting. One notable concern is that the model often relies heavily on a limited subset of filters for making predictions. This dependency can result in compromised generalization and an increased vulnerability to minor variations. While regularization techniques like weight decay, dropout, and data augmentation are commonly used to address this issue, they may not directly tackle the reliance on specific filters. Our observations reveal that the heavy reliance problem gets severe when slow-learning filters are deprived of learning opportunities due to fast-learning filters. Drawing inspiration from image augmentation research that combats over-reliance on specific image regions by removing and replacing parts of images, our idea is to mitigate the problem of over-reliance on strong filters by substituting highly activated features. To this end, we present a novel method called Catch-up Mix, which provides learning opportunities to a wide range of filters during training, focusing on filters that may lag behind. By mixing activation maps with relatively lower norms, Catch-up Mix promotes the development of more diverse representations and reduces reliance on a small subset of filters. Experimental results demonstrate the superiority of our method in various vision classification datasets, providing enhanced robustness.
[ { "created": "Wed, 24 Jan 2024 02:42:50 GMT", "version": "v1" } ]
2024-04-09
[ [ "Kang", "Minsoo", "" ], [ "Kang", "Minkoo", "" ], [ "Kim", "Suhyun", "" ] ]
2401.13298
Lin Hongzhan
Hongzhan Lin, Ziyang Luo, Wei Gao, Jing Ma, Bo Wang, Ruichao Yang
Towards Explainable Harmful Meme Detection through Multimodal Debate between Large Language Models
The first work towards explainable harmful meme detection by harnessing advanced LLMs
The ACM Web Conference 2024
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The age of social media is flooded with Internet memes, necessitating a clear grasp and effective identification of harmful ones. This task presents a significant challenge due to the implicit meaning embedded in memes, which is not explicitly conveyed through the surface text and image. However, existing harmful meme detection methods do not present readable explanations that unveil such implicit meaning to support their detection decisions. In this paper, we propose an explainable approach to detect harmful memes, achieved through reasoning over conflicting rationales from both harmless and harmful positions. Specifically, inspired by the powerful capacity of Large Language Models (LLMs) on text generation and reasoning, we first elicit multimodal debate between LLMs to generate the explanations derived from the contradictory arguments. Then we propose to fine-tune a small language model as the debate judge for harmfulness inference, to facilitate multimodal fusion between the harmfulness rationales and the intrinsic multimodal information within memes. In this way, our model is empowered to perform dialectical reasoning over intricate and implicit harm-indicative patterns, utilizing multimodal explanations originating from both harmless and harmful arguments. Extensive experiments on three public meme datasets demonstrate that our harmful meme detection approach achieves much better performance than state-of-the-art methods and exhibits a superior capacity for explaining the meme harmfulness of the model predictions.
[ { "created": "Wed, 24 Jan 2024 08:37:16 GMT", "version": "v1" } ]
2024-01-25
[ [ "Lin", "Hongzhan", "" ], [ "Luo", "Ziyang", "" ], [ "Gao", "Wei", "" ], [ "Ma", "Jing", "" ], [ "Wang", "Bo", "" ], [ "Yang", "Ruichao", "" ] ]
2401.13311
Rohan Wadhawan
Rohan Wadhawan, Hritik Bansal, Kai-Wei Chang, Nanyun Peng
ConTextual: Evaluating Context-Sensitive Text-Rich Visual Reasoning in Large Multimodal Models
null
PMLR 235:49733-49787, 2024
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many real-world tasks require an agent to reason jointly over text and visual objects (e.g., navigating in public spaces), which we refer to as context-sensitive text-rich visual reasoning. Specifically, these tasks require an understanding of the context in which the text interacts with visual elements within an image. However, there is a lack of existing datasets to benchmark the state-of-the-art multimodal models' capability on context-sensitive text-rich visual reasoning. In this paper, we introduce ConTextual, a novel dataset featuring human-crafted instructions that require context-sensitive reasoning for text-rich images. We conduct experiments to assess the performance of 14 foundation models (GPT-4V, Gemini-Pro-Vision, LLaVA-Next) and establish a human performance baseline. Further, we perform human evaluations of the model responses and observe a significant performance gap of 30.8% between GPT-4V (the current best-performing Large Multimodal Model) and human performance. Our fine-grained analysis reveals that GPT-4V encounters difficulties interpreting time-related data and infographics. However, it demonstrates proficiency in comprehending abstract visual contexts such as memes and quotes. Finally, our qualitative analysis uncovers various factors contributing to poor performance, including lack of precise visual perception and hallucinations. Our dataset, code, and leaderboard can be found on the project page https://con-textual.github.io/
[ { "created": "Wed, 24 Jan 2024 09:07:11 GMT", "version": "v1" }, { "created": "Sun, 16 Jun 2024 00:38:24 GMT", "version": "v2" }, { "created": "Tue, 16 Jul 2024 03:36:29 GMT", "version": "v3" } ]
2024-07-30
[ [ "Wadhawan", "Rohan", "" ], [ "Bansal", "Hritik", "" ], [ "Chang", "Kai-Wei", "" ], [ "Peng", "Nanyun", "" ] ]
2401.13418
Gian Luca Marcialis
Gian Luca Marcialis, Paolo Mastinu, and Fabio Roli
Serial fusion of multi-modal biometric systems
null
IEEE International Workshop on Biometric Measurements and Systems for Security and Medical Applications (BioMS2010), September, 9, 2010, Taranto (Italy), ISBN: 978-1-4244-6302-2
10.1109/BIOMS.2010.5610438
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Serial, or sequential, fusion of multiple biometric matchers has not been thoroughly investigated so far. However, this approach exhibits some advantages with respect to the widely adopted parallel approaches. In this paper, we propose a novel theoretical framework for the assessment of performance of such systems, based on a previous work of the authors. Benefits in terms of performance are theoretically evaluated, as well as estimation errors in the model parameters computation. The model is analyzed in terms of its pros and cons, by means of preliminary experiments performed on the NIST Biometric Score Set 1.
[ { "created": "Wed, 24 Jan 2024 12:30:04 GMT", "version": "v1" } ]
2024-01-25
[ [ "Marcialis", "Gian Luca", "" ], [ "Mastinu", "Paolo", "" ], [ "Roli", "Fabio", "" ] ]
2401.13512
Mat\'u\v{s} Falis
Mat\'u\v{s} Falis, Aryo Pradipta Gema, Hang Dong, Luke Daines, Siddharth Basetti, Michael Holder, Rose S Penfold, Alexandra Birch, Beatrice Alex
Can GPT-3.5 Generate and Code Discharge Summaries?
15 pages; 250 words in abstract; 4,152 words in main body; 4 figures (1 black and white, 3 colour); 4 tables; 34 references; Accepted and published by the Journal of the American Medical Informatics Association
Journal of the American Medical Informatics Association, 2024
10.1093/jamia/ocae132
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Objective: To investigate GPT-3.5 in generating and coding medical documents with ICD-10 codes for data augmentation on low-resource labels. Materials and Methods: Employing GPT-3.5, we generated and coded 9,606 discharge summaries based on lists of ICD-10 code descriptions of patients with infrequent (generation) codes within the MIMIC-IV dataset. Combined with the baseline training set, this formed an augmented training set. Neural coding models were trained on baseline and augmented data and evaluated on a MIMIC-IV test set. We report micro- and macro-F1 scores on the full codeset, generation codes, and their families. Weak Hierarchical Confusion Matrices were employed to determine within-family and outside-of-family coding errors in the latter codesets. The coding performance of GPT-3.5 was evaluated both on prompt-guided self-generated data and real MIMIC-IV data. Clinical professionals evaluated the clinical acceptability of the generated documents. Results: Augmentation slightly hinders the overall performance of the models but improves performance for the generation candidate codes and their families, including one unseen in the baseline training data. Augmented models display lower out-of-family error rates. GPT-3.5 can identify ICD-10 codes by the prompted descriptions, but performs poorly on real data. Evaluators note the correctness of the generated concepts, while the documents suffer in variety, supporting information, and narrative. Discussion and Conclusion: GPT-3.5 alone is unsuitable for ICD-10 coding. Augmentation positively affects generation code families but mainly benefits codes with existing examples. Augmentation reduces out-of-family errors. Discharge summaries generated by GPT-3.5 state prompted concepts correctly but lack variety and authenticity in narratives. They are unsuitable for clinical practice.
[ { "created": "Wed, 24 Jan 2024 15:10:13 GMT", "version": "v1" }, { "created": "Mon, 16 Sep 2024 16:44:11 GMT", "version": "v2" } ]
2024-09-17
[ [ "Falis", "Matúš", "" ], [ "Gema", "Aryo Pradipta", "" ], [ "Dong", "Hang", "" ], [ "Daines", "Luke", "" ], [ "Basetti", "Siddharth", "" ], [ "Holder", "Michael", "" ], [ "Penfold", "Rose S", "" ], [ "Birch", "Alexandra", "" ], [ "Alex", "Beatrice", "" ] ]
2401.13596
Rodrigo Aldana-L\'opez
Rodrigo Aldana-L\'opez, Rosario Arag\"u\'es, Carlos Sag\"u\'es
PLATE: A perception-latency aware estimator
This is the accepted version an already published manuscript. See journal reference for details
ISA Transactions, vol. 142, pp. 716-730, 2023, ISSN 0019-0578
10.1016/j.isatra.2023.08.013
null
eess.SY cs.CV cs.SY math.OC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Target tracking is a popular problem with many potential applications. There has been a lot of effort on improving the quality of the detection of targets using cameras through different techniques. In general, with higher computational effort applied, i.e., a longer perception-latency, a better detection accuracy is obtained. However, it is not always useful to apply the longest perception-latency allowed, particularly when the environment does not require it and when the computational resources are shared with other tasks. In this work, we propose a new Perception-LATency aware Estimator (PLATE), which uses different perception configurations at different moments in time in order to optimize a certain performance measure. This measure takes into account a perception-latency and accuracy trade-off, aiming for a good compromise between quality and resource usage. Compared to other heuristic frame-skipping techniques, PLATE comes with a formal complexity and optimality analysis. The advantages of PLATE are verified by several experiments, including an evaluation over a standard benchmark with real data and using state-of-the-art deep learning object detection methods for the perception stage.
[ { "created": "Wed, 24 Jan 2024 17:04:18 GMT", "version": "v1" } ]
2024-01-25
[ [ "Aldana-López", "Rodrigo", "" ], [ "Aragüés", "Rosario", "" ], [ "Sagüés", "Carlos", "" ] ]
2401.13604
Jeremias D\"otterl
Jeremias D\"otterl, Ralf Bruns, J\"urgen Dunkel, Sascha Ossowski
Stream-based perception for cognitive agents in mobile ecosystems
null
AI Communications, vol. 32, no. 4, pp. 271-286, 2019
10.3233/AIC-190614
null
cs.AI cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cognitive agent abstractions can help to engineer intelligent systems across mobile devices. On smartphones, the data obtained from onboard sensors can give valuable insights into the user's current situation. Unfortunately, today's cognitive agent frameworks cannot cope well with the challenging characteristics of sensor data. Sensor data is located on a low abstraction level and the individual data elements are not meaningful when observed in isolation. In contrast, cognitive agents operate on high-level percepts and lack the means to effectively detect complex spatio-temporal patterns in sequences of multiple percepts. In this paper, we present a stream-based perception approach that enables the agents to perceive meaningful situations in low-level sensor data streams. We present a crowdshipping case study where autonomous, self-interested agents collaborate to deliver parcels to their destinations. We show how situations derived from smartphone sensor data can trigger and guide auctions, which the agents use to reach agreements. Experiments with real smartphone data demonstrate the benefits of stream-based agent perception.
[ { "created": "Wed, 24 Jan 2024 17:14:50 GMT", "version": "v1" } ]
2024-01-25
[ [ "Dötterl", "Jeremias", "" ], [ "Bruns", "Ralf", "" ], [ "Dunkel", "Jürgen", "" ], [ "Ossowski", "Sascha", "" ] ]
2401.13641
Ruben Tolosana
Ivan DeAndres-Tame, Ruben Tolosana, Ruben Vera-Rodriguez, Aythami Morales, Julian Fierrez, Javier Ortega-Garcia
How Good is ChatGPT at Face Biometrics? A First Look into Recognition, Soft Biometrics, and Explainability
null
IEEE Access, February 2024
10.1109/ACCESS.2024.3370437
null
cs.CV cs.AI cs.CY cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Large Language Models (LLMs) such as GPT, developed by OpenAI, have already shown astonishing results, introducing quick changes in our society. This has been intensified by the release of ChatGPT, which allows anyone to interact in a simple conversational way with LLMs, without any experience in the field needed. As a result, ChatGPT has been rapidly applied to many different tasks such as code and song writing, education, virtual assistants, etc., showing impressive results for tasks for which it was not trained (zero-shot learning). The present study aims to explore the ability of ChatGPT, based on the recent GPT-4 multimodal LLM, for the task of face biometrics. In particular, we analyze the ability of ChatGPT to perform tasks such as face verification, soft-biometrics estimation, and explainability of the results. ChatGPT could be very valuable to further increase the explainability and transparency of automatic decisions in human scenarios. Experiments are carried out in order to evaluate the performance and robustness of ChatGPT, using popular public benchmarks and comparing the results with state-of-the-art methods in the field. The results achieved in this study show the potential of LLMs such as ChatGPT for face biometrics, especially to enhance explainability. For reproducibility reasons, we release all the code on GitHub.
[ { "created": "Wed, 24 Jan 2024 18:10:39 GMT", "version": "v1" }, { "created": "Tue, 27 Feb 2024 11:00:35 GMT", "version": "v2" } ]
2024-02-28
[ [ "DeAndres-Tame", "Ivan", "" ], [ "Tolosana", "Ruben", "" ], [ "Vera-Rodriguez", "Ruben", "" ], [ "Morales", "Aythami", "" ], [ "Fierrez", "Julian", "" ], [ "Ortega-Garcia", "Javier", "" ] ]
2401.13693
Isabelle Guyon
Hugo Jair Escalante Balderas, Isabelle Guyon (LISN, TAU), Addison Howard, Walter Reade, Sebastien Treguer (TAU)
Challenge design roadmap
null
AI Competitions and Benchmarks: The Science Behind the Contests, In press
null
null
cs.OH cs.AI cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Challenges can be seen as a type of game that motivates participants to solve serious tasks. As a result, competition organizers must develop effective game rules. However, these rules have multiple objectives beyond making the game enjoyable for participants. These objectives may include solving real-world problems, advancing scientific or technical areas, making scientific discoveries, and educating the public. In many ways, creating a challenge is similar to launching a product. It requires the same level of excitement and rigorous testing, and the goal is to attract ''customers'' in the form of participants. The process begins with a solid plan, such as a competition proposal that will eventually be submitted to an international conference and subjected to peer review. Although peer review does not guarantee quality, it does force organizers to consider the impact of their challenge, identify potential oversights, and generally improve its quality. This chapter provides guidelines for creating a strong plan for a challenge. The material draws on the preparation guidelines from organizations such as Kaggle, ChaLearn and Tailor, as well as the NeurIPS proposal template, which some of the authors contributed to.
[ { "created": "Mon, 15 Jan 2024 10:58:30 GMT", "version": "v1" } ]
2024-01-26
[ [ "Balderas", "Hugo Jair Escalante", "", "LISN, TAU" ], [ "Guyon", "Isabelle", "", "LISN, TAU" ], [ "Howard", "Addison", "", "TAU" ], [ "Reade", "Walter", "", "TAU" ], [ "Treguer", "Sebastien", "", "TAU" ] ]
2401.13700
EPTCS
Vesna Marinkovi\'c (Faculty of Mathematics, University of Belgrade), Tijana \v{S}ukilovi\'c (Faculty of Mathematics, University of Belgrade), Filip Mari\'c (Faculty of Mathematics, University of Belgrade)
Towards Automated Readable Proofs of Ruler and Compass Constructions
In Proceedings ADG 2023, arXiv:2401.10725
EPTCS 398, 2024, pp. 11-20
10.4204/EPTCS.398.5
null
cs.LO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although there are several systems that successfully generate construction steps for ruler and compass construction problems, none of them provides readable synthetic correctness proofs for generated constructions. In the present work, we demonstrate how our triangle construction solver ArgoTriCS can cooperate with automated theorem provers for first-order logic and coherent logic so that it generates construction correctness proofs that are both human-readable and formal (they can be checked by interactive theorem provers such as Coq or Isabelle/HOL). These proofs currently rely on many high-level lemmas, and our goal is to have them all formally shown from the basic axioms of geometry.
[ { "created": "Mon, 22 Jan 2024 12:48:51 GMT", "version": "v1" } ]
2024-01-26
[ [ "Marinković", "Vesna", "", "Faculty of Mathematics, University of Belgrade" ], [ "Šukilović", "Tijana", "", "Faculty of Mathematics, University of Belgrade" ], [ "Marić", "Filip", "", "Faculty of Mathematics, University of Belgrade" ] ]
2401.13703
EPTCS
Amela Hota (The Private University College of Education of the Diocese of Linz, Austria), Zolt\'an Kov\'acs (The Private University College of Education of the Diocese of Linz, Austria), Alexander Vujic (The Private University College of Education of the Diocese of Linz, Austria)
Solving Some Geometry Problems of the N\'aboj 2023 Contest with Automated Deduction in GeoGebra Discovery
In Proceedings ADG 2023, arXiv:2401.10725
EPTCS 398, 2024, pp. 110-123
10.4204/EPTCS.398.14
null
math.HO cs.AI cs.CG cs.SC
http://creativecommons.org/licenses/by/4.0/
In this article, we solve some of the geometry problems of the N\'aboj 2023 competition with the help of a computer, using examples that the software tool GeoGebra Discovery can calculate. In each case, the calculation requires symbolic computations. We analyze the difficulty of feeding the problem into the machine and set further goals to make the problems of this type of contests even more tractable in the future.
[ { "created": "Mon, 22 Jan 2024 12:51:51 GMT", "version": "v1" } ]
2024-01-26
[ [ "Hota", "Amela", "", "The Private University College of Education of the Diocese\n of Linz, Austria" ], [ "Kovács", "Zoltán", "", "The Private University College of\n Education of the Diocese of Linz, Austria" ], [ "Vujic", "Alexander", "", "The Private\n University College of Education of the Diocese of Linz, Austria" ] ]
2401.13704
EPTCS
Ines Ganglmayr (The Private University College of Education of the Diocese of Linz, Austria), Zolt\'an Kov\'acs (The Private University College of Education of the Diocese of Linz, Austria)
Using Java Geometry Expert as Guide in the Preparations for Math Contests
In Proceedings ADG 2023, arXiv:2401.10725
EPTCS 398, 2024, pp. 124-131
10.4204/EPTCS.398.15
null
cs.CY cs.AI cs.CG cs.SC
http://creativecommons.org/licenses/by/4.0/
We give an insight into Java Geometry Expert (JGEX) in use in a school context, focusing on the Austrian school system. JGEX can offer great support in some classroom situations, especially for solving mathematical competition tasks. Also, we discuss some limitations of the program.
[ { "created": "Mon, 22 Jan 2024 12:52:07 GMT", "version": "v1" } ]
2024-01-26
[ [ "Ganglmayr", "Ines", "", "The Private University College of Education of the\n Diocese of Linz, Austria" ], [ "Kovács", "Zoltán", "", "The Private University College\n of Education of the Diocese of Linz, Austria" ] ]
2401.13713
Baris Coskunuzer
Ignacio Segovia-Dominguez, Yuzhou Chen, Cuneyt G. Akcora, Zhiwei Zhen, Murat Kantarcioglu, Yulia R. Gel, Baris Coskunuzer
EMP: Effective Multidimensional Persistence for Graph Representation Learning
arXiv admin note: text overlap with arXiv:2401.13157
LoG 2023
null
null
cs.LG cs.AI cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Topological data analysis (TDA) is gaining prominence across a wide spectrum of machine learning tasks that spans from manifold learning to graph classification. A pivotal technique within TDA is persistent homology (PH), which furnishes an exclusive topological imprint of data by tracing the evolution of latent structures as a scale parameter changes. Present PH tools are confined to analyzing data through a single filter parameter. However, many scenarios necessitate the consideration of multiple relevant parameters to attain finer insights into the data. We address this issue by introducing the Effective Multidimensional Persistence (EMP) framework. This framework empowers the exploration of data by simultaneously varying multiple scale parameters. The framework integrates descriptor functions into the analysis process, yielding a highly expressive data summary. It seamlessly integrates established single PH summaries into multidimensional counterparts like EMP Landscapes, Silhouettes, Images, and Surfaces. These summaries represent data's multidimensional aspects as matrices and arrays, aligning effectively with diverse ML models. We provide theoretical guarantees and stability proofs for EMP summaries. We demonstrate EMP's utility in graph classification tasks, showing its effectiveness. Results reveal that EMP enhances various single PH descriptors, outperforming cutting-edge methods on multiple benchmark datasets.
[ { "created": "Wed, 24 Jan 2024 00:41:51 GMT", "version": "v1" } ]
2024-01-26
[ [ "Segovia-Dominguez", "Ignacio", "" ], [ "Chen", "Yuzhou", "" ], [ "Akcora", "Cuneyt G.", "" ], [ "Zhen", "Zhiwei", "" ], [ "Kantarcioglu", "Murat", "" ], [ "Gel", "Yulia R.", "" ], [ "Coskunuzer", "Baris", "" ] ]
2401.13716
Vibeke Binz Vallevik Mrs
Vibeke Binz Vallevik, Aleksandar Babic, Serena Elizabeth Marshall, Severin Elvatun, Helga Br{\o}gger, Sharmini Alagaratnam, Bj{\o}rn Edwin, Narasimha Raghavan Veeraragavan, Anne Kjersti Befring, Jan Franz Nyg{\aa}rd
Can I trust my fake data -- A comprehensive quality assessment framework for synthetic tabular data in healthcare
null
Int. J. Med. Inform.185 (2024)
10.1016/j.ijmedinf.2024.105413
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Ensuring safe adoption of AI tools in healthcare hinges on access to sufficient data for training, testing and validation. In response to privacy concerns and regulatory requirements, the use of synthetic data (SD) has been suggested. Synthetic data is created by training a generator on real data to produce a dataset with similar statistical properties. Competing metrics with differing taxonomies for quality evaluation have been suggested, resulting in a complex landscape. Optimising quality entails balancing considerations that make the data fit for use, yet relevant dimensions are left out of existing frameworks. We performed a comprehensive literature review on the use of quality evaluation metrics on SD within the scope of tabular healthcare data and SD made using deep generative methods. Based on this and the collective team experiences, we developed a conceptual framework for quality assurance. Its applicability was benchmarked against a practical case from the Dutch National Cancer Registry. We present a conceptual framework for quality assurance of SD for AI applications in healthcare that aligns diverging taxonomies, expands on common quality dimensions to include the dimensions of Fairness and Carbon footprint, and proposes stages necessary to support real-life applications. Building trust in synthetic data by increasing transparency and reducing the safety risk will accelerate the development and uptake of trustworthy AI tools for the benefit of patients. Despite the growing emphasis on algorithmic fairness and carbon footprint, these metrics were scarce in the literature review. The overwhelming focus was on statistical similarity using distance metrics, while sequential logic detection was scarce. A consensus-backed framework that includes all relevant quality dimensions can provide assurance for safe and responsible real-life applications of SD.
[ { "created": "Wed, 24 Jan 2024 08:14:20 GMT", "version": "v1" } ]
2024-04-19
[ [ "Vallevik", "Vibeke Binz", "" ], [ "Babic", "Aleksandar", "" ], [ "Marshall", "Serena Elizabeth", "" ], [ "Elvatun", "Severin", "" ], [ "Brøgger", "Helga", "" ], [ "Alagaratnam", "Sharmini", "" ], [ "Edwin", "Bjørn", "" ], [ "Veeraragavan", "Narasimha Raghavan", "" ], [ "Befring", "Anne Kjersti", "" ], [ "Nygård", "Jan Franz", "" ] ]
2401.13827
Eslam Eldeeb
Eslam Eldeeb, Mohammad Shehab and Hirley Alves
Traffic Learning and Proactive UAV Trajectory Planning for Data Uplink in Markovian IoT Models
null
IEEE Internet of Things Journal
10.1109/JIOT.2023.3339514
null
cs.LG cs.AI cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The age of information (AoI) is used to measure the freshness of the data. In IoT networks, the traditional resource management schemes rely on a message exchange between the devices and the base station (BS) before communication which causes high AoI, high energy consumption, and low reliability. Unmanned aerial vehicles (UAVs) as flying BSs have many advantages in minimizing the AoI, energy-saving, and throughput improvement. In this paper, we present a novel learning-based framework that estimates the traffic arrival of IoT devices based on Markovian events. The learning proceeds to optimize the trajectory of multiple UAVs and their scheduling policy. First, the BS predicts the future traffic of the devices. We compare two traffic predictors: the forward algorithm (FA) and the long short-term memory (LSTM). Afterward, we propose a deep reinforcement learning (DRL) approach to optimize the optimal policy of each UAV. Finally, we manipulate the optimum reward function for the proposed DRL approach. Simulation results show that the proposed algorithm outperforms the random-walk (RW) baseline model regarding the AoI, scheduling accuracy, and transmission power.
[ { "created": "Wed, 24 Jan 2024 21:57:55 GMT", "version": "v1" } ]
2024-01-26
[ [ "Eldeeb", "Eslam", "" ], [ "Shehab", "Mohammad", "" ], [ "Alves", "Hirley", "" ] ]
2401.13945
Tong Niu
Tong Niu, Haoyu Huang, Yu Du, Weihao Zhang, Luping Shi, Rong Zhao
General Automatic Solution Generation of Social Problems
null
Machine Intelligence Research 2024
10.1007/s11633-024-1496-2
null
cs.CY cs.AI cs.CE cs.MA
http://creativecommons.org/licenses/by/4.0/
Given the escalating intricacy and multifaceted nature of contemporary social systems, manually generating solutions to address pertinent social issues has become a formidable task. In response to this challenge, the rapid development of artificial intelligence has spurred the exploration of computational methodologies aimed at automatically generating solutions. However, current methods for auto-generation of solutions mainly concentrate on local social regulations that pertain to specific scenarios. Here, we report an automatic social operating system (ASOS) designed for general social solution generation, which is built upon agent-based models, enabling both global and local analyses and regulations of social problems across spatial and temporal dimensions. ASOS adopts a hypergraph with extensible social semantics for a comprehensive and structured representation of social dynamics. It also incorporates a generalized protocol for standardized hypergraph operations and a symbolic hybrid framework that delivers interpretable solutions, yielding a balance between regulatory efficacy and function viability. To demonstrate the effectiveness of ASOS, we apply it to the domain of averting extreme events within international oil futures markets. By generating a new trading role supplemented by new mechanisms, ASOS can adeptly discern precarious market conditions and make front-running interventions for non-profit purposes. This study demonstrates that ASOS provides an efficient and systematic approach for generating solutions for enhancing our society.
[ { "created": "Thu, 25 Jan 2024 05:00:46 GMT", "version": "v1" } ]
2024-05-21
[ [ "Niu", "Tong", "" ], [ "Huang", "Haoyu", "" ], [ "Du", "Yu", "" ], [ "Zhang", "Weihao", "" ], [ "Shi", "Luping", "" ], [ "Zhao", "Rong", "" ] ]
2401.14067
Saud Althabiti
Saud Althabiti, Mohammad Ammar Alsalka, and Eric Atwell
Ta'keed: The First Generative Fact-Checking System for Arabic Claims
9 pages, conference paper
Vol. 14, No. 01, 2024
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper introduces Ta'keed, an explainable Arabic automatic fact-checking system. While existing research often focuses on classifying claims as "True" or "False," there is limited exploration of generating explanations for claim credibility, particularly in Arabic. Ta'keed addresses this gap by assessing claim truthfulness based on retrieved snippets, utilizing two main components: information retrieval and LLM-based claim verification. We compiled ArFactEx, a gold-labelled test dataset with manually justified references, to evaluate the system. The initial model achieved a promising F1 score of 0.72 in the classification task. Meanwhile, the system's generated explanations are compared with gold-standard explanations syntactically and semantically. The study recommends evaluating with semantic similarity, resulting in an average cosine similarity score of 0.76. Additionally, we explored the impact of varying snippet quantities on claim classification accuracy, revealing a potential correlation, with the model using the top seven hits outperforming others with an F1 score of 0.77.
[ { "created": "Thu, 25 Jan 2024 10:43:00 GMT", "version": "v1" } ]
2024-01-26
[ [ "Althabiti", "Saud", "" ], [ "Alsalka", "Mohammad Ammar", "" ], [ "Atwell", "Eric", "" ] ]
2401.14185
Samuel Pegg
Samuel Pegg, Kai Li, Xiaolin Hu
TDFNet: An Efficient Audio-Visual Speech Separation Model with Top-down Fusion
null
2023 13th International Conference on Information Science and Technology (ICIST), Cairo, Egypt, 2023, pp. 243-252
10.1109/ICIST59754.2023.10367130
null
cs.SD cs.AI eess.AS
http://creativecommons.org/licenses/by/4.0/
Audio-visual speech separation has gained significant traction in recent years due to its potential applications in various fields such as speech recognition, diarization, scene analysis and assistive technologies. Designing a lightweight audio-visual speech separation network is important for low-latency applications, but existing methods often require higher computational costs and more parameters to achieve better separation performance. In this paper, we present an audio-visual speech separation model called Top-Down-Fusion Net (TDFNet), a state-of-the-art (SOTA) model for audio-visual speech separation, which builds upon the architecture of TDANet, an audio-only speech separation method. TDANet serves as the architectural foundation for the auditory and visual networks within TDFNet, offering an efficient model with fewer parameters. On the LRS2-2Mix dataset, TDFNet achieves a performance increase of up to 10\% across all performance metrics compared with the previous SOTA method CTCNet. Remarkably, these results are achieved using fewer parameters and only 28\% of the multiply-accumulate operations (MACs) of CTCNet. In essence, our method presents a highly effective and efficient solution to the challenges of speech separation within the audio-visual domain, making significant strides in harnessing visual information optimally.
[ { "created": "Thu, 25 Jan 2024 13:47:22 GMT", "version": "v1" } ]
2024-01-26
[ [ "Pegg", "Samuel", "" ], [ "Li", "Kai", "" ], [ "Hu", "Xiaolin", "" ] ]
2401.14206
Daniele Perlo
Daniele Perlo and Luca Berton and Alessia Delpiano and Francesca Menchini and Stefano Tibaldi and Marco Grosso and Paolo Fonio
Exploiting Liver CT scans in Colorectal Carcinoma genomics mutation classification
null
2022 IEEE International Conference on Big Data (Big Data)
10.1109/BigData55660.2022.10020613
null
eess.IV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The liver is the organ most frequently involved in distant metastasis in colorectal cancer (CRC) patients, and it becomes necessary to be aware of the mutational status of the lesions to correctly design the best individual treatment. So far, efforts have been made to develop non-invasive and real-time methods that permit the analysis of the whole tumor, using new artificial intelligence tools to analyze the tumor's image obtained by Computed Tomography (CT) scan. In order to address the current medical workflow, which is biopsy analysis-based, we propose the first deep learning-based exploration, to our knowledge, of such a classification approach from patient medical imaging. We propose i) a solid pipeline for managing undersized datasets of available CT scans and ii) a baseline study for genomics mutation diagnosis support for preemptive patient follow-up. Our method is able to identify the CRC RAS mutation family from CT images with a 0.73 F1 score.
[ { "created": "Thu, 25 Jan 2024 14:40:58 GMT", "version": "v1" } ]
2024-01-26
[ [ "Perlo", "Daniele", "" ], [ "Berton", "Luca", "" ], [ "Delpiano", "Alessia", "" ], [ "Menchini", "Francesca", "" ], [ "Tibaldi", "Stefano", "" ], [ "Grosso", "Marco", "" ], [ "Fonio", "Paolo", "" ] ]
2401.14414
Keshav Kumar K Mr
NVSL Narasimham, Keshav Kumar K
Fuzzy Logic-Based System for Brain Tumour Detection and Classification
14 pages, 9 figures
Applications of Fuzzy Theory in Applied Sciences and Computer Applications-2024
10.52305/LWCM6152
null
eess.IV cs.CV math.OC
http://creativecommons.org/licenses/by-sa/4.0/
Brain Tumours (BT) are extremely dangerous and difficult to treat. Currently, doctors must manually examine images and mark out tumour regions to diagnose BT; this process is time-consuming and error-prone. In recent times, experts have proposed automated approaches for detecting BT at an early stage. The poor accuracy and highly incorrect prediction results of these methods motivated this research. In this study, we suggest a fuzzy logic-based system for categorising BT. This study used a dataset of 253 Magnetic Resonance Imaging (MRI) brain images that included tumour and healthy images. The images were first pre-processed. After that, we extract features such as the tumour size and the image's global threshold value. The watershed and region-growing approaches are used to calculate the tumour size. After that, the fuzzy system receives the two features as input. Accuracy, F1-score, precision, and recall are used to assess the results of the fuzzy system when employing both size determination approaches. With the size input variable obtained by the region-growing method, together with the global threshold value, the fuzzy system outperforms the watershed method. The significance of this research lies in its potential to revolutionize brain tumour diagnosis by offering a more accurate and efficient automated classification system. By reducing human intervention and providing reliable results, this approach could assist medical professionals in making timely and precise decisions, leading to improved patient outcomes and potentially saving lives. The advancement of such automated techniques has the potential to pave the way for enhanced medical imaging analysis and, ultimately, better management of brain tumour cases.
[ { "created": "Sun, 21 Jan 2024 01:07:00 GMT", "version": "v1" } ]
2024-07-02
[ [ "Narasimham", "NVSL", "" ], [ "K", "Keshav Kumar", "" ] ]
2401.14417
Lubomir Kralik
Martin Klimo, Lubomir Kralik
Fuzzy Logic Function as a Post-hoc Explanator of the Nonlinear Classifier
null
Fuzzy Logic and Technology, and Aggregation Operators. EUSFLAT AGOP 2023 2023. LNCS, vol. 14069, pp. 431-442. Springer, Cham (2023)
10.1007/978-3-031-39965-7_36
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Pattern recognition systems implemented using deep neural networks achieve better results than linear models. However, their drawback is the black-box property. This property means that someone with no experience utilising nonlinear systems may need help understanding the outcome of the decision. Such a solution is unacceptable to the user responsible for the final decision, who must not only believe in the decision but also understand it. Therefore, recognisers must have an architecture that allows interpreters to interpret the findings. The idea of post-hoc explainable classifiers is to design an interpretable classifier parallel to the black-box classifier, giving the same decisions as the black-box classifier. This paper shows that the explainable classifier achieves matching classification decisions with the black-box classifier on the MNIST and FashionMNIST databases if Zadeh's fuzzy logic function forms the classifier and DeconvNet importance gives the truth values. Since the other tested significance measures achieved lower performance than DeconvNet, it provides the optimal transformation of the feature values to their truth values as inputs to the fuzzy logic function for the databases and recogniser architecture used.
[ { "created": "Mon, 22 Jan 2024 13:58:03 GMT", "version": "v1" } ]
2024-01-29
[ [ "Klimo", "Martin", "" ], [ "Kralik", "Lubomir", "" ] ]
2401.14446
Stephen Casper
Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, Benjamin Bucknall, Andreas Haupt, Kevin Wei, J\'er\'emy Scheurer, Marius Hobbhahn, Lee Sharkey, Satyapriya Krishna, Marvin Von Hagen, Silas Alberti, Alan Chan, Qinyi Sun, Michael Gerovitch, David Bau, Max Tegmark, David Krueger, Dylan Hadfield-Menell
Black-Box Access is Insufficient for Rigorous AI Audits
FAccT 2024
The 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24), June 3-6, 2024, Rio de Janeiro, Brazil
10.1145/3630106.3659037
null
cs.CY cs.AI cs.CR
http://creativecommons.org/licenses/by/4.0/
External audits of AI systems are increasingly recognized as a key mechanism for AI governance. The effectiveness of an audit, however, depends on the degree of access granted to auditors. Recent audits of state-of-the-art AI systems have primarily relied on black-box access, in which auditors can only query the system and observe its outputs. However, white-box access to the system's inner workings (e.g., weights, activations, gradients) allows an auditor to perform stronger attacks, more thoroughly interpret models, and conduct fine-tuning. Meanwhile, outside-the-box access to training and deployment information (e.g., methodology, code, documentation, data, deployment details, findings from internal evaluations) allows auditors to scrutinize the development process and design more targeted evaluations. In this paper, we examine the limitations of black-box audits and the advantages of white- and outside-the-box audits. We also discuss technical, physical, and legal safeguards for performing these audits with minimal security risks. Given that different forms of access can lead to very different levels of evaluation, we conclude that (1) transparency regarding the access and methods used by auditors is necessary to properly interpret audit results, and (2) white- and outside-the-box access allow for substantially more scrutiny than black-box access alone.
[ { "created": "Thu, 25 Jan 2024 18:58:05 GMT", "version": "v1" }, { "created": "Sun, 12 May 2024 03:24:23 GMT", "version": "v2" }, { "created": "Wed, 29 May 2024 13:56:29 GMT", "version": "v3" } ]
2024-06-11
[ [ "Casper", "Stephen", "" ], [ "Ezell", "Carson", "" ], [ "Siegmann", "Charlotte", "" ], [ "Kolt", "Noam", "" ], [ "Curtis", "Taylor Lynn", "" ], [ "Bucknall", "Benjamin", "" ], [ "Haupt", "Andreas", "" ], [ "Wei", "Kevin", "" ], [ "Scheurer", "Jérémy", "" ], [ "Hobbhahn", "Marius", "" ], [ "Sharkey", "Lee", "" ], [ "Krishna", "Satyapriya", "" ], [ "Von Hagen", "Marvin", "" ], [ "Alberti", "Silas", "" ], [ "Chan", "Alan", "" ], [ "Sun", "Qinyi", "" ], [ "Gerovitch", "Michael", "" ], [ "Bau", "David", "" ], [ "Tegmark", "Max", "" ], [ "Krueger", "David", "" ], [ "Hadfield-Menell", "Dylan", "" ] ]
2401.14511
Sascha Ossowski
Joaqu\'in Arias, Mar Moreno-Rebato, Jos\'e A. Rodr\'iguez-Garc\'ia, Sascha Ossowski
Automated legal reasoning with discretion to act using s(LAW)
null
Artificial Intelligence and Law (2023)
10.1007/s10506-023-09376-5
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Automated legal reasoning and its application in smart contracts and automated decisions are increasingly attracting interest. In this context, ethical and legal concerns make it necessary for automated reasoners to justify in human-understandable terms the advice given. Logic Programming, especially Answer Set Programming, has a rich semantics and has been used to express complex knowledge very concisely. However, modelling discretionality to act and other vague concepts such as ambiguity cannot be expressed in top-down execution models based on Prolog, and in bottom-up execution models based on ASP the justifications are incomplete and/or not scalable. We propose to use s(CASP), a top-down execution model for predicate ASP, to model vague concepts following a set of patterns. We have implemented a framework, called s(LAW), to model, reason, and justify the applicable legislation, and validate it by translating (and benchmarking) a representative use case, the criteria for the admission of students in the "Comunidad de Madrid".
[ { "created": "Thu, 25 Jan 2024 21:11:08 GMT", "version": "v1" } ]
2024-01-29
[ [ "Arias", "Joaquín", "" ], [ "Moreno-Rebato", "Mar", "" ], [ "Rodríguez-García", "José A.", "" ], [ "Ossowski", "Sascha", "" ] ]
2401.14705
Konrad Klimaszewski
Oleksandr Fedoruk, Konrad Klimaszewski, Aleksander Ogonowski and Micha{\l} Kruk
Additional Look into GAN-based Augmentation for Deep Learning COVID-19 Image Classification
Submitted to Machine Graphics & Vision. Version with updated acknowledgments
Machine Graphics and Vision, 32(3/4), 107-124 (2023)
10.22630/MGV.2023.32.3.6
null
eess.IV cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
The availability of training data is one of the main limitations in deep learning applications for medical imaging. Data augmentation is a popular approach to overcome this problem. A new approach is a Machine Learning based augmentation, in particular usage of Generative Adversarial Networks (GAN). In this case, GANs generate images similar to the original dataset so that the overall training data amount is bigger, which leads to better performance of trained networks. A GAN model consists of two networks, a generator and a discriminator interconnected in a feedback loop which creates a competitive environment. This work is a continuation of the previous research where we trained StyleGAN2-ADA by Nvidia on the limited COVID-19 chest X-ray image dataset. In this paper, we study the dependence of the GAN-based augmentation performance on dataset size with a focus on small samples. Two datasets are considered, one with 1000 images per class (4000 images in total) and the second with 500 images per class (2000 images in total). We train StyleGAN2-ADA with both sets and then, after validating the quality of generated images, we use trained GANs as one of the augmentations approaches in multi-class classification problems. We compare the quality of the GAN-based augmentation approach to two different approaches (classical augmentation and no augmentation at all) by employing transfer learning-based classification of COVID-19 chest X-ray images. The results are quantified using different classification quality metrics and compared to the results from the literature. The GAN-based augmentation approach is found to be comparable with classical augmentation in the case of medium and large datasets but underperforms in the case of smaller datasets. The correlation between the size of the original dataset and the quality of classification is visible independently from the augmentation approach.
[ { "created": "Fri, 26 Jan 2024 08:28:13 GMT", "version": "v1" }, { "created": "Fri, 2 Feb 2024 20:53:01 GMT", "version": "v2" } ]
2024-06-17
[ [ "Fedoruk", "Oleksandr", "" ], [ "Klimaszewski", "Konrad", "" ], [ "Ogonowski", "Aleksander", "" ], [ "Kruk", "Michał", "" ] ]
2401.14811
Joar Skalse
Joar Skalse and Alessandro Abate
On the Limitations of Markovian Rewards to Express Multi-Objective, Risk-Sensitive, and Modal Tasks
null
Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, PMLR 216:1974-1984, 2023
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the expressivity of scalar, Markovian reward functions in Reinforcement Learning (RL), and identify several limitations to what they can express. Specifically, we look at three classes of RL tasks: multi-objective RL, risk-sensitive RL, and modal RL. For each class, we derive necessary and sufficient conditions that describe when a problem in this class can be expressed using a scalar, Markovian reward. Moreover, we find that scalar, Markovian rewards are unable to express most of the instances in each of these three classes. We thereby contribute to a more complete understanding of what standard reward functions can and cannot express. In addition to this, we also call attention to modal problems as a new class of problems, since they have so far not been given any systematic treatment in the RL literature. We also briefly outline some approaches for solving some of the problems we discuss, by means of bespoke RL algorithms.
[ { "created": "Fri, 26 Jan 2024 12:18:29 GMT", "version": "v1" } ]
2024-01-29
[ [ "Skalse", "Joar", "" ], [ "Abate", "Alessandro", "" ] ]
2401.14933
Idoia Berges
Idoia Berges, Jes\'us Berm\'udez, Arantza Illarramendi
SSDOnt: an Ontology for representing Single-Subject Design Studies
This document is the Accepted Manuscript version of a Published Work that appeared in final form in Methods of Information in Medicine 57(01/02) : 55-61 (2018), copyright 2018 Schattauer. To access the final edited and published work see https://doi.org/10.3414/ME17-01-0109
Methods of Information in Medicine 57(01/02) : 55-61 (2018)
10.3414/ME17-01-0109
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Single-Subject Design is used in several areas such as education and biomedicine. However, no suitable formal vocabulary exists for annotating the detailed configuration and the results of this type of research study with the appropriate granularity for looking for information about them. Therefore, the search for those study designs relies heavily on a syntactical search on the abstract, keywords or full text of the publications about the study, which entails some limitations. Objective: To present SSDOnt, a specific-purpose ontology for describing and annotating single-subject design studies, so that complex questions can be asked about them afterwards. Methods: The ontology was developed following the NeOn methodology. Once the requirements of the ontology were defined, a formal model was described in a Description Logic and later implemented in the ontology language OWL 2 DL. Results: We show how the ontology provides a reference model with a suitable terminology for the annotation and searching of single-subject design studies and their main components, such as the phases, the intervention types, the outcomes and the results. Some mappings with terms of related ontologies have been established. We show as proof-of-concept that classes in the ontology can be easily extended to annotate more precise information about specific interventions and outcomes, such as those related to autism. Moreover, we provide examples of some types of queries that can be posed to the ontology. Conclusions: SSDOnt has achieved the purpose of covering the descriptions of the domain of single-subject research studies.
[ { "created": "Fri, 26 Jan 2024 15:11:31 GMT", "version": "v1" } ]
2024-01-29
[ [ "Berges", "Idoia", "" ], [ "Bermúdez", "Jesús", "" ], [ "Illarramendi", "Arantza", "" ] ]
2401.14968
Guadalupe Ortiz
Guadalupe Ortiz, Meftah Zouai, Okba Kazar, Alfonso Garcia-de-Prado, Juan Boubeta-Puig
Atmosphere: Context and situational-aware collaborative IoT architecture for edge-fog-cloud computing
null
Comput. Stand. Interfaces 79: 103550 (2022)
10.1016/j.csi.2021.103550
null
cs.DC cs.AI cs.SE
http://creativecommons.org/licenses/by/4.0/
The Internet of Things (IoT) has grown significantly in popularity, accompanied by increased capacity and lower cost of communications, and overwhelming development of technologies. At the same time, big data and real-time data analysis have taken on great importance and have been accompanied by unprecedented interest in sharing data among citizens, public administrations and other organisms, giving rise to what is known as the Collaborative Internet of Things. This growth in data and infrastructure must be accompanied by a software architecture that allows its exploitation. Although there are various proposals focused on the exploitation of the IoT at edge, fog and/or cloud levels, it is not easy to find a software solution that exploits the three tiers together, taking maximum advantage not only of the analysis of contextual and situational data at each tier, but also of two-way communications between adjacent ones. In this paper, we propose an architecture that solves these deficiencies by proposing novel technologies which are appropriate for managing the resources of each tier: edge, fog and cloud. In addition, the fact that two-way communication is allowed along the three tiers of the architecture considerably enriches the contextual and situational information in each layer, and substantially assists decision making in real time. The paper illustrates the proposed software architecture through a case study of respiratory disease surveillance in hospitals. As a result, the proposed architecture permits efficient communications between the different tiers, responding to the needs of these types of IoT scenarios.
[ { "created": "Fri, 26 Jan 2024 16:01:09 GMT", "version": "v1" } ]
2024-01-29
[ [ "Ortiz", "Guadalupe", "" ], [ "Zouai", "Meftah", "" ], [ "Kazar", "Okba", "" ], [ "Garcia-de-Prado", "Alfonso", "" ], [ "Boubeta-Puig", "Juan", "" ] ]
2401.15018
Ascensi\'on Gallardo-Antol\'in
Kerlos Atia Abdalmalak and Ascensi\'on Gallardo-Antol\'in
Enhancement of a Text-Independent Speaker Verification System by using Feature Combination and Parallel-Structure Classifiers
null
Neural Computing and Applications 29 (2018) 637-651
10.1007/s00521-016-2470-x
null
eess.AS cs.AI cs.LG cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Speaker Verification (SV) systems involve mainly two individual stages: feature extraction and classification. In this paper, we explore these two modules with the aim of improving the performance of a speaker verification system under noisy conditions. On the one hand, the choice of the most appropriate acoustic features is a crucial factor for performing robust speaker verification. The acoustic parameters used in the proposed system are: Mel Frequency Cepstral Coefficients (MFCC), their first and second derivatives (Deltas and Delta-Deltas), Bark Frequency Cepstral Coefficients (BFCC), Perceptual Linear Predictive (PLP), and Relative Spectral Transform - Perceptual Linear Predictive (RASTA-PLP). In this paper, a complete comparison of different combinations of the previous features is discussed. On the other hand, the major weakness of a conventional Support Vector Machine (SVM) classifier is the use of generic traditional kernel functions to compute the distances among data points. However, the kernel function of an SVM has great influence on its performance. In this work, we propose combining two SVM-based classifiers with different kernel functions (a linear kernel and a Gaussian Radial Basis Function (RBF) kernel) with a Logistic Regression (LR) classifier. The combination is carried out by means of a parallel-structure approach, in which different voting rules for taking the final decision are considered. Results show that significant improvement in the performance of the SV system is achieved by using the combined features with the combined classifiers, either with clean speech or in the presence of noise. Finally, to further enhance the system in noisy environments, the inclusion of a multiband noise removal technique as a preprocessing stage is proposed.
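As a rough illustration of the parallel-structure idea described above (two SVMs with different kernels plus a logistic regression combined by voting), a minimal scikit-learn sketch could look as follows; the data and features are placeholders, not the paper's actual MFCC/PLP front end or its specific voting rules.

```python
# Sketch of a parallel-structure classifier: linear-kernel SVM, RBF-kernel SVM,
# and logistic regression combined by majority voting (scikit-learn).
# X would hold acoustic features (e.g., MFCC + deltas); y the speaker labels.
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

ensemble = VotingClassifier(
    estimators=[
        ("svm_linear", make_pipeline(StandardScaler(), SVC(kernel="linear"))),
        ("svm_rbf", make_pipeline(StandardScaler(), SVC(kernel="rbf"))),
        ("logreg", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ],
    voting="hard",  # majority vote; one of several possible voting rules
)
# ensemble.fit(X_train, y_train); ensemble.predict(X_test)
```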
[ { "created": "Fri, 26 Jan 2024 17:19:59 GMT", "version": "v1" } ]
2024-02-06
[ [ "Abdalmalak", "Kerlos Atia", "" ], [ "Gallardo-Antol'in", "Ascensión", "" ] ]
2401.15022
Jan-Philipp Redlich
Jan-Philipp Redlich, Friedrich Feuerhake, Joachim Weis, Nadine S. Schaadt, Sarah Teuber-Hanselmann, Christoph Buck, Sabine Luttmann, Andrea Eberle, Stefan Nikolin, Arno Appenzeller, Andreas Portmann, Andr\'e Homeyer
Applications of artificial intelligence in the analysis of histopathology images of gliomas: a review
null
npj Imaging 2024
10.1038/s44303-024-00020-8
null
eess.IV cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
In recent years, the diagnosis of gliomas has become increasingly complex. Analysis of glioma histopathology images using artificial intelligence (AI) offers new opportunities to support diagnosis and outcome prediction. To give an overview of the current state of research, this review examines 83 publicly available research studies that have proposed AI-based methods for whole-slide histopathology images of human gliomas, covering the diagnostic tasks of subtyping (23/83), grading (27/83), molecular marker prediction (20/83), and survival prediction (29/83). All studies were reviewed with regard to methodological aspects as well as clinical applicability. It was found that the focus of current research is the assessment of hematoxylin and eosin-stained tissue sections of adult-type diffuse gliomas. The majority of studies (52/83) are based on the publicly available glioblastoma and low-grade glioma datasets from The Cancer Genome Atlas (TCGA) and only a few studies employed other datasets in isolation (16/83) or in addition to the TCGA datasets (15/83). Current approaches mostly rely on convolutional neural networks (63/83) for analyzing tissue at 20x magnification (35/83). A new field of research is the integration of clinical data, omics data, or magnetic resonance imaging (29/83). So far, AI-based methods have achieved promising results, but are not yet used in real clinical settings. Future work should focus on the independent validation of methods on larger, multi-site datasets with high-quality and up-to-date clinical and molecular pathology annotations to demonstrate routine applicability.
[ { "created": "Fri, 26 Jan 2024 17:29:01 GMT", "version": "v1" }, { "created": "Mon, 5 Feb 2024 15:36:44 GMT", "version": "v2" }, { "created": "Tue, 9 Jul 2024 13:57:09 GMT", "version": "v3" }, { "created": "Fri, 12 Jul 2024 10:16:55 GMT", "version": "v4" } ]
2024-07-15
[ [ "Redlich", "Jan-Philipp", "" ], [ "Feuerhake", "Friedrich", "" ], [ "Weis", "Joachim", "" ], [ "Schaadt", "Nadine S.", "" ], [ "Teuber-Hanselmann", "Sarah", "" ], [ "Buck", "Christoph", "" ], [ "Luttmann", "Sabine", "" ], [ "Eberle", "Andrea", "" ], [ "Nikolin", "Stefan", "" ], [ "Appenzeller", "Arno", "" ], [ "Portmann", "Andreas", "" ], [ "Homeyer", "André", "" ] ]
2401.15048
Dmytro Zakharov
Dmytro Zakharov, Oleksandr Kuznetsov, Emanuele Frontoni
Unrecognizable Yet Identifiable: Image Distortion with Preserved Embeddings
null
Engineering Applications of Artificial Intelligence, Volume 137, Part B, November 2024, 109164
10.1016/j.engappai.2024.109164
null
cs.CV cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biometric authentication systems play a crucial role in modern security systems. However, balancing the privacy and integrity of stored biometric derivative data with high recognition accuracy is often challenging. Addressing this issue, we introduce an innovative image transformation technique that effectively renders facial images unrecognizable to the eye while maintaining their identifiability by neural network models, which allows the distorted photo version to be stored for further verification. While initially intended for biometric systems, the proposed methodology can be used in various artificial intelligence applications to distort the visual data and keep the derived features close. By experimenting with the widely used datasets LFW and MNIST, we show that it is possible to build a distortion that changes the image content by more than 70% while maintaining the same recognition accuracy. We compare our method with previous state-of-the-art approaches. We publicly release the source code.
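The core trade-off in the abstract (maximize visual change while keeping the embedding close) can be illustrated with a generic gradient-based toy sketch; this is an assumption-laden illustration, not the paper's published transformation, and `embed` stands in for any differentiable face-embedding network.

```python
# Toy sketch: distort an image to change pixels while preserving its embedding.
# `embed` is a placeholder for any differentiable embedding network; this is
# NOT the paper's published method, only the underlying trade-off.
import torch
import torch.nn.functional as F

def distort(image, embed, steps=200, lam=10.0, lr=0.05):
    target = embed(image).detach()            # embedding to preserve
    x = image.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        visual = -F.mse_loss(x, image)        # push pixels away from the original
        drift = F.mse_loss(embed(x), target)  # keep the embedding close
        (visual + lam * drift).backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)                # stay in a valid image range
    return x.detach()
```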
[ { "created": "Fri, 26 Jan 2024 18:20:53 GMT", "version": "v1" }, { "created": "Wed, 28 Aug 2024 09:42:44 GMT", "version": "v2" } ]
2024-08-29
[ [ "Zakharov", "Dmytro", "" ], [ "Kuznetsov", "Oleksandr", "" ], [ "Frontoni", "Emanuele", "" ] ]
2401.15068
Craig Messner
Craig Messner and Tom Lippincott
Pairing Orthographically Variant Literary Words to Standard Equivalents Using Neural Edit Distance Models
Accepted to LaTeCH@EACL2024
Proceedings of the 8th Joint {SIGHUM} Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature (2024)
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel corpus consisting of orthographically variant words found in works of 19th century U.S. literature annotated with their corresponding "standard" word pair. We train a set of neural edit distance models to pair these variants with their standard forms, and compare the performance of these models to the performance of a set of neural edit distance models trained on a corpus of orthographic errors made by L2 English learners. Finally, we analyze the relative performance of these models in the light of different negative training sample generation strategies, and offer concluding remarks on the unique challenge literary orthographic variation poses to string pairing methodologies.
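For readers unfamiliar with edit distance models, the standard weighted edit distance recursion that such neural models parameterize can be sketched as follows; here the operation costs are plain placeholder constants, whereas the paper's models learn character-dependent costs.

```python
# Weighted edit distance via dynamic programming. A neural edit distance model
# replaces the constant costs below with learned, character-dependent ones.
def weighted_edit_distance(a, b, ins=1.0, dele=1.0, sub=1.0):
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + dele
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0.0 if a[i - 1] == b[j - 1] else sub
            d[i][j] = min(d[i - 1][j] + dele,      # delete a[i-1]
                          d[i][j - 1] + ins,       # insert b[j-1]
                          d[i - 1][j - 1] + cost)  # substitute / match
    return d[m][n]

# e.g., scoring an orthographic variant against a candidate standard form:
# weighted_edit_distance("gwine", "going")
```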
[ { "created": "Fri, 26 Jan 2024 18:49:34 GMT", "version": "v1" } ]
2024-05-22
[ [ "Messner", "Craig", "" ], [ "Lippincott", "Tom", "" ] ]
2401.15081
Xiaoming Zhai
Xiaoming Zhai, Matthew Nyaaba, and Wenchao Ma
Can generative AI and ChatGPT outperform humans on cognitive-demanding problem-solving tasks in science?
null
Science & Education, 2024
null
null
cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
This study aimed to examine an assumption that generative artificial intelligence (GAI) tools can overcome the cognitive intensity that humans suffer when solving problems. We compared the performance of ChatGPT and GPT-4 on the 2019 NAEP science assessments with that of students, grouped by the cognitive demands of the items. Fifty-four tasks were coded by experts using a two-dimensional cognitive load framework, including task cognitive complexity and dimensionality. ChatGPT and GPT-4 responses were scored using the scoring keys of NAEP. The analysis of the available data was based on the average student ability scores for students who answered each item correctly and the percentage of students who responded to individual items. Results showed that both ChatGPT and GPT-4 consistently outperformed most students who answered the NAEP science assessments. As the cognitive demand of NAEP tasks increases, statistically higher average student ability scores are required to correctly address the questions. This pattern was observed for students in grades 4, 8, and 12. However, ChatGPT and GPT-4 were not statistically sensitive to the increase in cognitive demands of the tasks, except in grade 4. As the first study to compare GAI and K-12 students on problem-solving in science, this work implies the need for changes to educational objectives to prepare students with the competence to work with GAI tools in the future. Education ought to emphasize the cultivation of advanced cognitive skills rather than depending solely on tasks that demand cognitive intensity. This approach would foster critical thinking, analytical skills, and the application of knowledge in novel contexts. Findings also suggest the need for innovative assessment practices that move away from cognitively intensive tasks toward creativity and analytical skills, to more effectively avoid the negative effects of GAI on testing.
[ { "created": "Sun, 7 Jan 2024 12:36:31 GMT", "version": "v1" } ]
2024-01-31
[ [ "Zhai", "Xiaoming", "" ], [ "Nyaaba", "Matthew", "" ], [ "Ma", "Wenchao", "" ] ]
2401.15324
Cen Mo
Cen Mo, Fuyudi Zhang, Liang Li
Neutrino Reconstruction in TRIDENT Based on Graph Neural Network
null
Intelligent Computers, Algorithms, and Applications. IC 2023. Communications in Computer and Information Science, vol 2036
10.1007/978-981-97-0065-3_20
null
hep-ex cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
TRopIcal DEep-sea Neutrino Telescope (TRIDENT) is a next-generation neutrino telescope to be located in the South China Sea. With a large detector volume and the use of advanced hybrid digital optical modules (hDOMs), TRIDENT aims to discover multiple astrophysical neutrino sources and probe all-flavor neutrino physics. The reconstruction resolution of primary neutrinos is on the critical path to these scientific goals. We have developed a novel reconstruction method based on graph neural network (GNN) for TRIDENT. In this paper, we present the reconstruction performance of the GNN-based approach on both track- and shower-like neutrino events in TRIDENT.
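As a hedged illustration of GNN-based reconstruction on detector hits, a minimal EdgeConv-style message-passing layer in plain PyTorch might look like the sketch below; both the layer and the graph construction are generic assumptions, not TRIDENT's actual network.

```python
# Minimal EdgeConv-style message passing over a graph of detector hits
# (plain PyTorch). Generic sketch only; not TRIDENT's actual architecture.
import torch
import torch.nn as nn

class EdgeConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, x, edge_index):
        # x: (N, in_dim) hit features (position, time, charge, ...)
        # edge_index: (2, E) source/destination indices, e.g., from a kNN graph
        src, dst = edge_index
        msg = self.mlp(torch.cat([x[dst], x[src] - x[dst]], dim=-1))
        out = torch.zeros(x.size(0), msg.size(-1), device=x.device)
        out.index_add_(0, dst, msg)  # sum-aggregate incoming messages per node
        return out
```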
[ { "created": "Sat, 27 Jan 2024 06:57:24 GMT", "version": "v1" } ]
2024-04-23
[ [ "Mo", "Cen", "" ], [ "Zhang", "Fuyudi", "" ], [ "Li", "Liang", "" ] ]
2401.15390
Guadalupe Ortiz
Guadalupe Ortiz, Juan Boubeta-Puig, Javier Criado, David Corral-Plaza, Alfonso Garcia-de-Prado, Inmaculada Medina-Bulo, Luis Iribarne
A microservice architecture for real-time IoT data processing: A reusable Web of things approach for smart ports
null
Comput.Stand.Interfaces 81:103604(2022)
10.1016/j.csi.2021.103604
null
cs.SE cs.AI
http://creativecommons.org/licenses/by/4.0/
Major advances in telecommunications and the Internet of Things have given rise to numerous smart city scenarios in which smart services are provided. What was once a dream for the future has now become reality. However, the need to provide these smart services quickly, efficiently, in an interoperable manner and in real time is a cutting-edge technological challenge. Although some software architectures offer solutions in this area, these are often limited in terms of reusability and independent-module maintenance, requiring system downtime for maintenance or evolution, as well as by a lack of standards regarding the interoperability of their interfaces. In this paper, we propose a fully reusable microservice architecture, standardized through the use of the Web of things paradigm, and with high efficiency in real-time data processing, supported by complex event processing techniques. To illustrate the proposal, we present a fully reusable implementation of the microservices necessary for the deployment of the architecture in the field of air quality monitoring and alerting in smart ports. The performance evaluation of this architecture shows excellent results.
[ { "created": "Sat, 27 Jan 2024 11:40:38 GMT", "version": "v1" } ]
2024-01-31
[ [ "Ortiz", "Guadalupe", "" ], [ "Boubeta-Puig", "Juan", "" ], [ "Criado", "Javier", "" ], [ "Corral-Plaza", "David", "" ], [ "Garcia-de-Prado", "Alfonso", "" ], [ "Medina-Bulo", "Inmaculada", "" ], [ "Iribarne", "Luis", "" ] ]
2401.15400
R\'uben Almeida
R\'uben Almeida, Ricardo Campos, Al\'ipio Jorge, S\'ergio Nunes
Indexing Portuguese NLP Resources with PT-Pump-Up
Demo Track, 3 pages
PROPOR 2024
null
null
cs.CL cs.IR
http://creativecommons.org/licenses/by-nc-nd/4.0/
The recent advances in natural language processing (NLP) are linked to training processes that require vast amounts of corpora. Access to this data is commonly not a trivial process due to resource dispersion and the need to maintain these infrastructures online and up-to-date. New developments in NLP are often compromised due to the scarcity of data or lack of a shared repository that works as an entry point to the community. This is especially true in low and mid-resource languages, such as Portuguese, which lack data and proper resource management infrastructures. In this work, we propose PT-Pump-Up, a set of tools that aim to reduce resource dispersion and improve the accessibility to Portuguese NLP resources. Our proposal is divided into four software components: a) a web platform to list the available resources; b) a client-side Python package to simplify the loading of Portuguese NLP resources; c) an administrative Python package to manage the platform and d) a public GitHub repository to foster future collaboration and contributions. All four components are accessible using: https://linktr.ee/pt_pump_up
[ { "created": "Sat, 27 Jan 2024 12:33:07 GMT", "version": "v1" } ]
2024-01-30
[ [ "Almeida", "Rúben", "" ], [ "Campos", "Ricardo", "" ], [ "Jorge", "Alípio", "" ], [ "Nunes", "Sérgio", "" ] ]
2401.15472
Cristina Carmona-Duarte
Cristina Carmona-Duarte, Miguel A. Ferrer, Antonio Parziale, Angelo Marcelli
Temporal evolution in synthetic handwriting
Published in Pattern Recognition
Pattern Recognition 68, pp. 233-244 (2017)
10.1016/j.patcog.2017.03.019
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
New methods for generating synthetic handwriting images for biometric applications have recently been developed. The temporal evolution of handwriting from childhood to adulthood is usually left unexplored in these works. This paper proposes a novel methodology for including temporal evolution in a handwriting synthesizer by means of simplifying the text trajectory plan and handwriting dynamics. This is achieved through a tailored version of the kinematic theory of rapid human movements and the neuromotor-inspired handwriting synthesizer. The realism of the proposed method has been evaluated by comparing the temporal evolution of real and synthetic samples both quantitatively and subjectively. The quantitative test is based on a visual perception algorithm that compares the letter variability and the number of strokes in the real and synthetic handwriting produced at different ages. In the subjective test, 30 people are asked to evaluate the perceived realism of the evolution of the synthetic handwriting.
[ { "created": "Sat, 27 Jan 2024 17:56:03 GMT", "version": "v1" } ]
2024-01-31
[ [ "Carmona-Duarte", "Cristina", "" ], [ "Ferrer", "Miguel A.", "" ], [ "Parziale", "Antonio", "" ], [ "Marcelli", "Angelo", "" ] ]
2401.15473
Cristina Carmona-Duarte
Miguel A. Ferrer, Moises Diaz, Cristina Carmona-Duarte, Rejean Plamondon
iDeLog: Iterative Dual Spatial and Kinematic Extraction of Sigma-Lognormal Parameters
Accepted version published in IEEE Transactions on Pattern Analysis and Machine Intelligence
IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(1), pp. 114-125, 2020
10.1109/TPAMI.2018.2879312
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
The Kinematic Theory of rapid movements and its associated Sigma-Lognormal model have been extensively used in a large variety of applications. While the physical and biological meaning of the model have been widely tested and validated for rapid movements, some shortcomings have been detected when it is used with continuous long and complex movements. To alleviate such drawbacks, and inspired by the motor equivalence theory and a conceivable visual feedback, this paper proposes a novel framework to extract the Sigma-Lognormal parameters, namely iDeLog. Specifically, iDeLog consists of two steps. The first one, influenced by the motor equivalence model, separately derives an initial action plan defined by a set of virtual points and angles from the trajectory and a sequence of lognormals from the velocity. In the second step, based on a hypothetical visual feedback compatible with an open-loop motor control, the virtual target points of the action plan are iteratively moved to improve the matching between the observed and reconstructed trajectory and velocity. During experiments conducted with handwritten signatures, iDeLog obtained promising results as compared to the previous development of the Sigma-Lognormal.
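For context, the lognormal velocity primitive that the Sigma-Lognormal model superimposes is commonly written as below; the notation follows the standard Kinematic Theory literature and is an assumption here, so consult the paper for the exact parameterization iDeLog extracts.

```latex
% Speed of the i-th stroke: a lognormal impulse response starting at time t_{0i},
% with command amplitude D_i and log-time parameters (\mu_i, \sigma_i).
% The observed speed profile is the temporal overlap of the N strokes.
\[
  |\vec{v}_i(t)| = \frac{D_i}{\sigma_i \sqrt{2\pi}\,(t - t_{0i})}
  \exp\!\left(-\frac{\left(\ln(t - t_{0i}) - \mu_i\right)^2}{2\sigma_i^2}\right),
  \qquad
  \vec{v}(t) = \sum_{i=1}^{N} \vec{v}_i(t).
\]
```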
[ { "created": "Sat, 27 Jan 2024 17:58:42 GMT", "version": "v1" }, { "created": "Wed, 7 Feb 2024 13:06:33 GMT", "version": "v2" } ]
2024-02-08
[ [ "Ferrer", "Miguel A.", "" ], [ "Diaz", "Moises", "" ], [ "Carmona-Duarte", "Cristina", "" ], [ "Plamondon", "Rejean", "" ] ]
2401.15583
Shuai Yuan
Shuai Yuan, Hanlin Qin, Xiang Yan, Naveed AKhtar, Ajmal Mian
SCTransNet: Spatial-channel Cross Transformer Network for Infrared Small Target Detection
null
IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1-15, 2024
10.1109/TGRS.2024.3383649
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Infrared small target detection (IRSTD) has recently benefitted greatly from U-shaped neural models. However, largely overlooking effective global information modeling, existing techniques struggle when the target is highly similar to the background. We present a Spatial-channel Cross Transformer Network (SCTransNet) that leverages spatial-channel cross transformer blocks (SCTBs) on top of long-range skip connections to address the aforementioned challenge. In the proposed SCTBs, the outputs of all encoders interact through a cross transformer to generate mixed features, which are redistributed to all decoders to effectively reinforce semantic differences between the target and clutter at full scales. Specifically, SCTB contains the following two key elements: (a) spatial-embedded single-head channel-cross attention (SSCA) for exchanging local spatial features and full-level global channel information to eliminate ambiguity among the encoders and facilitate high-level semantic associations of the images, and (b) a complementary feed-forward network (CFN) for enhancing the feature discriminability via a multi-scale strategy and cross-spatial-channel information interaction to promote beneficial information transfer. Our SCTransNet effectively encodes the semantic differences between targets and backgrounds to boost its internal representation for detecting small infrared targets accurately. Extensive experiments on three public datasets, NUDT-SIRST, NUAA-SIRST, and IRSTD-1k, demonstrate that the proposed SCTransNet outperforms existing IRSTD methods. Our code will be made public at https://github.com/xdFai.
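A generic single-head channel-wise cross-attention block (queries from one feature map, keys/values from another, attention computed across channels) can be sketched as follows; this is an illustrative simplification under our own assumptions, not the paper's exact SSCA module.

```python
# Generic single-head channel cross-attention: attention weights are computed
# across channels, with queries from one feature map and keys/values from
# another. Illustrative only; not the paper's exact SSCA module.
import torch
import torch.nn as nn

class ChannelCrossAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, feat_a, feat_b):
        b, c, h, w = feat_a.shape
        q = self.q(feat_a).flatten(2)   # (B, C, HW) queries
        k = self.k(feat_b).flatten(2)   # (B, C, HW) keys
        v = self.v(feat_b).flatten(2)   # (B, C, HW) values
        attn = torch.softmax(q @ k.transpose(1, 2) / (h * w) ** 0.5, dim=-1)  # (B, C, C)
        return (attn @ v).view(b, c, h, w)  # channel-mixed features
```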
[ { "created": "Sun, 28 Jan 2024 06:41:15 GMT", "version": "v1" }, { "created": "Thu, 1 Feb 2024 02:29:54 GMT", "version": "v2" }, { "created": "Tue, 30 Apr 2024 09:40:01 GMT", "version": "v3" } ]
2024-05-01
[ [ "Yuan", "Shuai", "" ], [ "Qin", "Hanlin", "" ], [ "Yan", "Xiang", "" ], [ "AKhtar", "Naveed", "" ], [ "Mian", "Ajmal", "" ] ]
2401.15635
Dan Zhang
Dan Zhang and Yangliao Geng and Wenwen Gong and Zhongang Qi and Zhiyu Chen and Xing Tang and Ying Shan and Yuxiao Dong and Jie Tang
RecDCL: Dual Contrastive Learning for Recommendation
Accepted to WWW 2024
Proceedings of TheWebConf 2024 (WWW '24), May 13--17, 2024, Singapore
10.1145/3589334.3645533
null
cs.IR cs.CL
http://creativecommons.org/licenses/by/4.0/
Self-supervised learning (SSL) has recently achieved great success in mining the user-item interactions for collaborative filtering. As a major paradigm, contrastive learning (CL) based SSL helps address data sparsity in Web platforms by contrasting the embeddings between raw and augmented data. However, existing CL-based methods mostly focus on contrasting in a batch-wise way, failing to exploit potential regularity in the feature dimension. This leads to redundant solutions during the representation learning of users and items. In this work, we investigate how to employ both batch-wise CL (BCL) and feature-wise CL (FCL) for recommendation. We theoretically analyze the relation between BCL and FCL, and find that combining BCL and FCL helps eliminate redundant solutions but never misses an optimal solution. We propose a dual contrastive learning recommendation framework -- RecDCL. In RecDCL, the FCL objective is designed to eliminate redundant solutions on user-item positive pairs and to optimize the uniform distributions within users and items using a polynomial kernel for driving the representations to be orthogonal; The BCL objective is utilized to generate contrastive embeddings on output vectors for enhancing the robustness of the representations. Extensive experiments on four widely-used benchmarks and one industry dataset demonstrate that RecDCL can consistently outperform the state-of-the-art GNNs-based and SSL-based models (with an improvement of up to 5.65\% in terms of Recall@20). The source code is publicly available (https://github.com/THUDM/RecDCL).
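The feature-wise contrastive objective described above belongs to the same family as cross-correlation-based losses such as Barlow Twins; a minimal sketch of that family follows, which is not necessarily RecDCL's exact objective (the paper additionally uses a polynomial-kernel uniformity term).

```python
# Feature-wise contrastive loss in the Barlow Twins family: align positive
# user-item pairs on the diagonal of a cross-correlation matrix and
# decorrelate feature dimensions off-diagonal. Not RecDCL's exact objective.
import torch

def feature_wise_cl(z_user, z_item, lam=5e-3, eps=1e-8):
    # z_user, z_item: (batch, dim) embeddings of positive user-item pairs
    z_user = (z_user - z_user.mean(0)) / (z_user.std(0) + eps)
    z_item = (z_item - z_item.mean(0)) / (z_item.std(0) + eps)
    c = (z_user.T @ z_item) / z_user.size(0)          # (dim, dim) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()    # pull positives together
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # redundancy term
    return on_diag + lam * off_diag
```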
[ { "created": "Sun, 28 Jan 2024 11:51:09 GMT", "version": "v1" }, { "created": "Mon, 19 Feb 2024 03:09:40 GMT", "version": "v2" } ]
2024-02-20
[ [ "Zhang", "Dan", "" ], [ "Geng", "Yangliao", "" ], [ "Gong", "Wenwen", "" ], [ "Qi", "Zhongang", "" ], [ "Chen", "Zhiyu", "" ], [ "Tang", "Xing", "" ], [ "Shan", "Ying", "" ], [ "Dong", "Yuxiao", "" ], [ "Tang", "Jie", "" ] ]
2401.15944
Yejun Lee
Jeongho Min, Yejun Lee, Dongyoung Kim, Jaejun Yoo
Bridging the Domain Gap: A Simple Domain Matching Method for Reference-based Image Super-Resolution in Remote Sensing
Accepted to IEEE GRSL 2023
Volume: 21, Year: 2023, Page: 1-5
10.1109/LGRS.2023.3336680
Article Sequence Number: 8000105, Print ISSN: 1545-598X
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Recently, reference-based image super-resolution (RefSR) has shown excellent performance in image super-resolution (SR) tasks. The main idea of RefSR is to utilize additional information from the reference (Ref) image to recover the high-frequency components in low-resolution (LR) images. By transferring relevant textures through feature matching, RefSR models outperform existing single image super-resolution (SISR) models. However, their performance significantly declines when a domain gap between Ref and LR images exists, which often occurs in real-world scenarios, such as satellite imaging. In this letter, we introduce a Domain Matching (DM) module that can be seamlessly integrated with existing RefSR models to enhance their performance in a plug-and-play manner. To the best of our knowledge, we are the first to explore Domain Matching-based RefSR in remote sensing image processing. Our analysis reveals that domain gaps often occur between images from different satellites, and our model effectively addresses these challenges where existing models struggle. Our experiments demonstrate that the proposed DM module improves SR performance both qualitatively and quantitatively for remote sensing super-resolution tasks.
[ { "created": "Mon, 29 Jan 2024 08:10:00 GMT", "version": "v1" } ]
2024-01-30
[ [ "Min", "Jeongho", "" ], [ "Lee", "Yejun", "" ], [ "Kim", "Dongyoung", "" ], [ "Yoo", "Jaejun", "" ] ]
2401.15990
Jiejiang Yu
Huadeng Wang, Jiejiang Yu, Bingbing Li, Xipeng Pan, Zhenbing Liu, Rushi Lan, Xiaonan Luo
Gland Segmentation Via Dual Encoders and Boundary-Enhanced Attention
Published in: ICASSP 2024
ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Korea, Republic of, 2024, pp. 2345-2349,
10.1109/ICASSP48485.2024.10447267
null
eess.IV cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate and automated gland segmentation on pathological images can assist pathologists in diagnosing the malignancy of colorectal adenocarcinoma. However, due to the variety of gland shapes, the severe deformation of malignant glands, and overlapping adhesions between glands, gland segmentation has always been very challenging. To address these problems, we propose a DEA model. This model consists of two branches: the backbone encoding and decoding network and the local semantic extraction network. The backbone encoding and decoding network extracts high-level semantic features, uses the proposed feature decoder to restore feature space information, and then enhances the boundary features of the gland through boundary-enhanced attention. The local semantic extraction network uses the pre-trained DeepLabv3+ as a local semantic-guided encoder to realize the extraction of edge features. Experimental results on two public datasets, GlaS and CRAG, confirm that the performance of our method is better than that of other gland segmentation methods.
[ { "created": "Mon, 29 Jan 2024 09:20:08 GMT", "version": "v1" }, { "created": "Thu, 9 May 2024 14:05:56 GMT", "version": "v2" } ]
2024-05-10
[ [ "Wang", "Huadeng", "" ], [ "Yu", "Jiejiang", "" ], [ "Li", "Bingbing", "" ], [ "Pan", "Xipeng", "" ], [ "Liu", "Zhenbing", "" ], [ "Lan", "Rushi", "" ], [ "Luo", "Xiaonan", "" ] ]
2401.16086
V\'ictor M. S\'anchez-Cartagena
V\'ictor M. S\'anchez-Cartagena, Miquel Espl\`a-Gomis, Juan Antonio P\'erez-Ortiz, Felipe S\'anchez-Mart\'inez
Non-Fluent Synthetic Target-Language Data Improve Neural Machine Translation
arXiv admin note: text overlap with arXiv:2109.03645
IEEE Transactions on Pattern Analysis and Machine Intelligence ( Volume: 46, Issue: 2, February 2024)
10.1109/TPAMI.2023.3333949
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When the parallel sentences available to train a neural machine translation system are scarce, a common practice is to generate new synthetic training samples from them. A number of approaches have been proposed to produce synthetic parallel sentences that are similar to those in the parallel data available. These approaches work under the assumption that non-fluent target-side synthetic training samples can be harmful and may deteriorate translation performance. Even so, in this paper we demonstrate that synthetic training samples with non-fluent target sentences can improve translation performance if they are used in a multilingual machine translation framework as if they were sentences in another language. We conducted experiments on ten low-resource and four high-resource translation tasks and found that this simple approach consistently improves translation performance as compared to state-of-the-art methods for generating synthetic training samples similar to those found in corpora. Furthermore, this improvement is independent of the size of the original training corpus, and the resulting systems are much more robust against domain shift and produce fewer hallucinations.
[ { "created": "Mon, 29 Jan 2024 11:52:45 GMT", "version": "v1" } ]
2024-01-30
[ [ "Sánchez-Cartagena", "Víctor M.", "" ], [ "Esplà-Gomis", "Miquel", "" ], [ "Pérez-Ortiz", "Juan Antonio", "" ], [ "Sánchez-Martínez", "Felipe", "" ] ]
2401.16173
Qing Shuai
Qing Shuai, Zhiyuan Yu, Zhize Zhou, Lixin Fan, Haijun Yang, Can Yang, Xiaowei Zhou
Reconstructing Close Human Interactions from Multiple Views
SIGGRAPH Asia 2023
ACM Transactions on Graphics 2023
10.1145/3618336
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This paper addresses the challenging task of reconstructing the poses of multiple individuals engaged in close interactions, captured by multiple calibrated cameras. The difficulty arises from the noisy or false 2D keypoint detections due to inter-person occlusion, the heavy ambiguity in associating keypoints to individuals due to the close interactions, and the scarcity of training data as collecting and annotating motion data in crowded scenes is resource-intensive. We introduce a novel system to address these challenges. Our system integrates a learning-based pose estimation component and its corresponding training and inference strategies. The pose estimation component takes multi-view 2D keypoint heatmaps as input and reconstructs the pose of each individual using a 3D conditional volumetric network. As the network doesn't need images as input, we can leverage known camera parameters from test scenes and a large quantity of existing motion capture data to synthesize massive training data that mimics the real data distribution in test scenes. Extensive experiments demonstrate that our approach significantly surpasses previous approaches in terms of pose accuracy and is generalizable across various camera setups and population sizes. The code is available on our project page: https://github.com/zju3dv/CloseMoCap.
[ { "created": "Mon, 29 Jan 2024 14:08:02 GMT", "version": "v1" } ]
2024-01-30
[ [ "Shuai", "Qing", "" ], [ "Yu", "Zhiyuan", "" ], [ "Zhou", "Zhize", "" ], [ "Fan", "Lixin", "" ], [ "Yang", "Haijun", "" ], [ "Yang", "Can", "" ], [ "Zhou", "Xiaowei", "" ] ]
2401.16232
Dmytro Zakharov
Oleksandr Kuznetsov, Dmytro Zakharov, Emanuele Frontoni, Andrea Maranesi, Serhii Bohucharskyi
Cross-Database Liveness Detection: Insights from Comparative Biometric Analysis
Presented at SCIA 2023, Lviv, Ukraine, Nov. 2023
Proceedings of the 2nd International Workshop on Social Communication and Information Activity in Digital Humanities (SCIA 2023), in CEUR Workshop Proceedings, vol. 3608, 2023, pp. 250-263
null
null
cs.CV cs.CR
http://creativecommons.org/licenses/by/4.0/
In an era where biometric security serves as a cornerstone of modern identity verification systems, ensuring the authenticity of biometric samples is paramount. Liveness detection, the capability to differentiate between genuine and spoofed biometric samples, stands at the forefront of this challenge. This research presents a comprehensive evaluation of liveness detection models, with a particular focus on their performance in cross-database scenarios, a test paradigm notorious for its complexity and real-world relevance. Our study began by assessing models on individual datasets, revealing the nuances in their performance metrics. Examining metrics such as the Half Total Error Rate, False Acceptance Rate, and False Rejection Rate, we gained valuable insights into the models' strengths and weaknesses. Crucially, our exploration of cross-database testing provided a unique perspective, highlighting the gap between training on one dataset and deploying on another. Comparative analysis with existing methodologies, ranging from convolutional networks to more intricate strategies, enriched our understanding of the current landscape. The variance in performance, even among state-of-the-art models, underscored the inherent challenges in this domain. In essence, this paper serves as both a repository of findings and a call for more nuanced, data-diverse, and adaptable approaches in biometric liveness detection, offering a blueprint for navigating the evolving landscape of biometric security.
[ { "created": "Mon, 29 Jan 2024 15:32:18 GMT", "version": "v1" } ]
2024-01-30
[ [ "Kuznetsov", "Oleksandr", "" ], [ "Zakharov", "Dmytro", "" ], [ "Frontoni", "Emanuele", "" ], [ "Maranesi", "Andrea", "" ], [ "Bohucharskyi", "Serhii", "" ] ]
2401.16329
Cristina Carmona-Duarte
Miguel A. Ferrer, Moises Diaz, Cristina Carmona-Duarte, Jose J. Quintana Hernandez, Rejean Plamondon
Synthesis of 3D on-air signatures with the Sigma-Lognormal model
Accepted version. Published in Knowledge-Based Systems
Knowledge-Based Systems, Vol. 265,2023
10.1016/j.knosys.2023.110365
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Signature synthesis is a computation technique that generates artificial specimens which can support decision making in automatic signature verification. A lot of work has been dedicated to this subject, which centres on synthesizing dynamic and static two-dimensional handwriting on canvas. This paper proposes a framework to generate synthetic 3D on-air signatures exploiting the lognormality principle, which mimics the complex neuromotor control processes at play as the fingertip moves. Addressing the usual cases involving the development of artificial individuals and duplicated samples, this paper contributes to the synthesis of: (1) the trajectory and velocity of entirely 3D new signatures; (2) kinematic information when only the 3D trajectory of the signature is known, and (3) duplicate samples of 3D real signatures. Validation was conducted by generating synthetic 3D signature databases mimicking real ones and showing that automatic signature verifications of genuine and skilled forgeries report performances similar to those of real and synthetic databases. We also observed that training 3D automatic signature verifiers with duplicates can reduce errors. We further demonstrated that our proposal is also valid for synthesizing 3D air writing and gestures. Finally, a perception test confirmed the human likeness of the generated specimens. The databases generated are publicly available, only for research purposes, at .
[ { "created": "Mon, 29 Jan 2024 17:35:19 GMT", "version": "v1" } ]
2024-01-31
[ [ "Ferrer", "Miguel A.", "" ], [ "Diaz", "Moises", "" ], [ "Carmona-Duarte", "Cristina", "" ], [ "Hernandez", "Jose J. Quintana", "" ], [ "Plamondon", "Rejean", "" ] ]
2401.16363
Ninon Burgos
Ravi Hassanaly, Camille Brianceau, Ma\"elys Solal, Olivier Colliot, Ninon Burgos
Evaluation of pseudo-healthy image reconstruction for anomaly detection with deep generative models: Application to brain FDG PET
Accepted for publication at the Journal of Machine Learning for Biomedical Imaging (MELBA) https://melba-journal.org/2024:003
Machine.Learning.for.Biomedical.Imaging. 2 (2024)
10.59275/j.melba.2024-b87a
null
eess.IV cs.CV
http://creativecommons.org/licenses/by/4.0/
Over the past years, pseudo-healthy reconstruction for unsupervised anomaly detection has gained in popularity. This approach has the great advantage of not requiring tedious pixel-wise data annotation and offers the possibility of generalizing to any kind of anomaly, including those corresponding to rare diseases. By training a deep generative model with only images from healthy subjects, the model will learn to reconstruct pseudo-healthy images. This pseudo-healthy reconstruction is then compared to the input to detect and localize anomalies. The evaluation of such methods often relies on a ground-truth lesion mask that is available for test data, which may not exist depending on the application. We propose an evaluation procedure based on the simulation of realistic abnormal images to validate pseudo-healthy reconstruction methods when no ground truth is available. This allows us to extensively test generative models on different kinds of anomalies and measure their performance using pairs of normal and abnormal images corresponding to the same subject. It can be used as a preliminary automatic step to validate the capacity of a generative model to reconstruct pseudo-healthy images, before a more advanced validation step that would require clinicians' expertise. We apply this framework to the reconstruction of 3D brain FDG PET using a convolutional variational autoencoder, with the aim of detecting as early as possible the neurodegeneration markers that are specific to dementias such as Alzheimer's disease.
[ { "created": "Mon, 29 Jan 2024 18:02:22 GMT", "version": "v1" } ]
2024-01-30
[ [ "Hassanaly", "Ravi", "" ], [ "Brianceau", "Camille", "" ], [ "Solal", "Maëlys", "" ], [ "Colliot", "Olivier", "" ], [ "Burgos", "Ninon", "" ] ]
2401.16448
Weimin Fu
Weimin Fu, Kaichen Yang, Raj Gautam Dutta, Xiaolong Guo, Gang Qu
LLM4SecHW: Leveraging Domain Specific Large Language Model for Hardware Debugging
6 pages. 1 figure
2023 Asian Hardware Oriented Security and Trust Symposium (AsianHOST), Tianjin, China, 2023, pp. 1-6
10.1109/AsianHOST59942.2023.10409307
null
cs.AR cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
This paper presents LLM4SecHW, a novel framework for hardware debugging that leverages a domain-specific large language model (LLM). Despite the success of LLMs in automating various software development tasks, their application in the hardware security domain has been limited due to the constraints of commercial LLMs and the scarcity of domain-specific data. To address these challenges, we propose a unique approach to compile a dataset of open-source hardware design defects and their remediation steps, utilizing version control data. This dataset provides a substantial foundation for training machine learning models for hardware. LLM4SecHW employs fine-tuning of medium-sized LLMs based on this dataset, enabling the identification and rectification of bugs in hardware designs. This pioneering approach offers a reference workflow for the application of fine-tuned domain-specific LLMs in other research areas. We evaluate the performance of our proposed system on various open-source hardware designs, demonstrating its efficacy in accurately identifying and correcting defects. Our work brings a new perspective on automating the quality control process in hardware design.
[ { "created": "Sun, 28 Jan 2024 19:45:25 GMT", "version": "v1" } ]
2024-01-31
[ [ "Fu", "Weimin", "" ], [ "Yang", "Kaichen", "" ], [ "Dutta", "Raj Gautam", "" ], [ "Guo", "Xiaolong", "" ], [ "Qu", "Gang", "" ] ]
2401.16519
Cristina Carmona-Duarte
Miguel A. Ferrer, Moises Diaz, Jose J. Quintana, Cristina Carmona-Duarte
Extending the kinematic theory of rapid movements with new primitives
Accepted version: published in Pattern Recognition Letters [ISSN 0167-8655], v. 167, pp. 181-188 (March 2023)
Pattern Recognition Letters, 167, 181-188,2023
10.1016/j.patrec.2023.02.021
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
The Kinematic Theory of rapid movements and its associated Sigma-Lognormal model describe 2D spatiotemporal trajectories. The model is constructed mainly as a temporal overlap of curves between virtual target points. Specifically, it uses an arc and a lognormal as primitives for the representation of the trajectory and velocity, respectively. This paper proposes extending this model into what we call the Kinematic Theory Transform, which establishes a mathematical framework that allows further primitives to be used. Mainly, we evaluate Euler curves to link virtual target points and Gaussian, Beta, Gamma, Double-bounded lognormal, and Generalized Extreme Value functions to model the bell-shaped velocity profile. Using these primitives, we report reconstruction results with spatiotemporal trajectories executed by human beings, animals, and anthropomorphic robots.
[ { "created": "Mon, 29 Jan 2024 19:45:12 GMT", "version": "v1" } ]
2024-01-31
[ [ "Ferrer", "Miguel A.", "" ], [ "Diaz", "Moises", "" ], [ "Quintana", "Jose J.", "" ], [ "Carmona-Duarte", "Cristina", "" ] ]
2401.16640
Nicholas Kluge Corr\^ea
Nicholas Kluge Corr\^ea, Sophia Falk, Shiza Fatimah, Aniket Sen, Nythamar de Oliveira
TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese
21 pages, 5 figures
Machine Learning With Applications, 16, 100558
10.1016/j.mlwa.2024.100558
null
cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Large language models (LLMs) have significantly advanced natural language processing, but this progress has not been equal across languages. While most LLMs are trained in high-resource languages like English, multilingual models generally underperform monolingual ones. Additionally, aspects of their multilingual foundation sometimes impose restrictions on the models they produce, such as high computational demands and restrictive licensing regimes. In this study, we document the development of open-foundation models tailored for use in low-resource settings, their limitations, and their benefits. This is the TeenyTinyLlama pair: two compact models for Brazilian Portuguese text generation. We release them under the permissive Apache 2.0 license on GitHub and Hugging Face for community use and further development. See https://github.com/Nkluge-correa/TeenyTinyLlama
[ { "created": "Tue, 30 Jan 2024 00:25:54 GMT", "version": "v1" }, { "created": "Tue, 9 Apr 2024 14:35:02 GMT", "version": "v2" }, { "created": "Fri, 17 May 2024 12:36:21 GMT", "version": "v3" } ]
2024-05-20
[ [ "Corrêa", "Nicholas Kluge", "" ], [ "Falk", "Sophia", "" ], [ "Fatimah", "Shiza", "" ], [ "Sen", "Aniket", "" ], [ "de Oliveira", "Nythamar", "" ] ]
2401.16688
Vin\'icius Yu Okubo
Vin\'icius Yu Okubo, Kotaro Shimizu, B. S. Shivaram, Hae Yong Kim
Characterization of Magnetic Labyrinthine Structures Through Junctions and Terminals Detection Using Template Matching and CNN
12 pages, 7 figures, published in IEEE Access
IEEE Access, vol. 12, pp. 92419 - 92430, 2024
10.1109/ACCESS.2024.3422259
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Defects influence diverse properties of materials, shaping their structural, mechanical, and electronic characteristics. Among the variety of materials with unique defects, magnets exhibit diverse nano- to micro-scale defects and have been intensively studied in materials science. Specifically, defects in magnetic labyrinthine patterns, called junctions and terminals, are ubiquitous and serve as points of interest. While detecting and characterizing such defects is crucial for understanding magnets, systematically investigating large-scale images containing over a thousand closely packed junctions and terminals remains a formidable challenge. This study introduces a new technique called TM-CNN (Template Matching - Convolutional Neural Network) designed to detect a multitude of small objects in images, such as the defects in magnetic labyrinthine patterns. TM-CNN was used to identify 641,649 such structures in 444 experimental images, and the results were explored to deepen understanding of magnetic materials. It employs a two-stage detection approach combining template matching, used in initial detection, with a convolutional neural network, used to eliminate incorrect identifications. To train a CNN classifier, it is necessary to annotate a large number of training images. This difficulty prevents the use of CNNs in many practical applications. TM-CNN significantly reduces the manual workload for creating training images by automatically making most of the annotations and leaving only a small number of corrections to human reviewers. In testing, TM-CNN achieved an impressive F1 score of 0.991, far outperforming traditional template matching and CNN-based object detection algorithms.
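The two-stage pipeline described above (template matching to propose candidates, a CNN to filter false positives) can be sketched with OpenCV; the threshold, patch size, and `cnn` classifier below are placeholders rather than the paper's tuned settings.

```python
# Sketch of the two-stage TM-CNN idea: OpenCV template matching proposes
# candidate junctions/terminals, and a trained CNN classifier rejects false
# positives. Threshold, patch size, and `cnn` are placeholders.
import cv2
import numpy as np

def detect(image, template, cnn, thresh=0.6, patch=24):
    # image, template: single-channel (grayscale) arrays of the same dtype
    res = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(res >= thresh)             # stage 1: candidate locations
    keep = []
    for y, x in zip(ys, xs):
        crop = image[y:y + patch, x:x + patch]   # window around the candidate
        if crop.shape != (patch, patch):
            continue                             # skip border candidates
        if cnn(crop) > 0.5:                      # stage 2: CNN keeps true hits
            keep.append((x, y))
    return keep
```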
[ { "created": "Tue, 30 Jan 2024 02:23:07 GMT", "version": "v1" }, { "created": "Fri, 17 May 2024 02:16:42 GMT", "version": "v2" }, { "created": "Thu, 18 Jul 2024 23:04:14 GMT", "version": "v3" } ]
2024-07-22
[ [ "Okubo", "Vinícius Yu", "" ], [ "Shimizu", "Kotaro", "" ], [ "Shivaram", "B. S.", "" ], [ "Kim", "Hae Yong", "" ] ]
2401.16779
Aydogan Ozcan
Jingxi Li, Yuhang Li, Tianyi Gan, Che-Yung Shen, Mona Jarrahi, Aydogan Ozcan
All-optical complex field imaging using diffractive processors
25 Pages, 6 Figures
Light: Science & Applications (2024)
10.1038/s41377-024-01482-6
null
physics.optics cs.CV physics.app-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Complex field imaging, which captures both the amplitude and phase information of input optical fields or objects, can offer rich structural insights into samples, such as their absorption and refractive index distributions. However, conventional image sensors are intensity-based and inherently lack the capability to directly measure the phase distribution of a field. This limitation can be overcome using interferometric or holographic methods, often supplemented by iterative phase retrieval algorithms, leading to a considerable increase in hardware complexity and computational demand. Here, we present a complex field imager design that enables snapshot imaging of both the amplitude and quantitative phase information of input fields using an intensity-based sensor array without any digital processing. Our design utilizes successive deep learning-optimized diffractive surfaces that are structured to collectively modulate the input complex field, forming two independent imaging channels that perform amplitude-to-amplitude and phase-to-intensity transformations between the input and output planes within a compact optical design, axially spanning ~100 wavelengths. The intensity distributions of the output fields at these two channels on the sensor plane directly correspond to the amplitude and quantitative phase profiles of the input complex field, eliminating the need for any digital image reconstruction algorithms. We experimentally validated the efficacy of our complex field diffractive imager designs through 3D-printed prototypes operating at the terahertz spectrum, with the output amplitude and phase channel images closely aligning with our numerical simulations. We envision that this complex field imager will have various applications in security, biomedical imaging, sensing and material science, among others.
[ { "created": "Tue, 30 Jan 2024 06:39:54 GMT", "version": "v1" } ]
2024-05-30
[ [ "Li", "Jingxi", "" ], [ "Li", "Yuhang", "" ], [ "Gan", "Tianyi", "" ], [ "Shen", "Che-Yung", "" ], [ "Jarrahi", "Mona", "" ], [ "Ozcan", "Aydogan", "" ] ]
2401.16886
Ming Kang
Ming Kang, Chee-Ming Ting, Fung Fung Ting, Rapha\"el Phan
CAFCT-Net: A CNN-Transformer Hybrid Network with Contextual and Attentional Feature Fusion for Liver Tumor Segmentation
null
In ICIP (2024) 2970--2974
10.1109/ICIP51287.2024.10647768
null
cs.CV eess.SP stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Medical image semantic segmentation techniques can help identify tumors automatically from computed tomography (CT) scans. In this paper, we propose a Contextual and Attentional feature Fusions enhanced Convolutional Neural Network (CNN) and Transformer hybrid network (CAFCT-Net) for liver tumor segmentation. We incorporate three novel modules in the CAFCT-Net architecture: Attentional Feature Fusion (AFF), Atrous Spatial Pyramid Pooling (ASPP) of DeepLabv3, and Attention Gates (AGs) to improve contextual information related to tumor boundaries for accurate segmentation. Experimental results show that the proposed model achieves a mean Intersection over Union (IoU) of 76.54% and a Dice coefficient of 84.29% on the Liver Tumor Segmentation Benchmark (LiTS) dataset, outperforming pure CNN or Transformer methods, e.g., Attention U-Net and PVTFormer.
[ { "created": "Tue, 30 Jan 2024 10:42:11 GMT", "version": "v1" }, { "created": "Fri, 4 Oct 2024 18:16:26 GMT", "version": "v2" } ]
2024-10-08
[ [ "Kang", "Ming", "" ], [ "Ting", "Chee-Ming", "" ], [ "Ting", "Fung Fung", "" ], [ "Phan", "Raphaël", "" ] ]
2401.17026
Cristina Carmona-Duarte
Miguel A. Ferrer, Sukalpa Chanda, Moises Diaz, Chayan Kr. Banerjee, Anirban Majumdar, Cristina Carmona-Duarte, Parikshit Acharya, Umapada Pal
Static and Dynamic Synthesis of Bengali and Devanagari Signatures
Accepted version. Published in IEEE Transactions on Cybernetics [ISSN 2168-2267], v. 48(10), pp. 2896-2907
IEEE Transactions on Cybernetics, v. 48(10), p. 2896-2907, 2018
10.1109/TCYB.2017.2751740
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Developing an automatic signature verification system is challenging and demands a large number of training samples. This is why synthetic handwriting generation is an emerging topic in document image analysis. Some handwriting synthesizers use the motor equivalence model, a well-established hypothesis from neuroscience that analyses how a human being accomplishes movement. Specifically, a motor equivalence model divides human actions into two steps: 1) the effector-independent step at the cognitive level and 2) the effector-dependent step at the motor level. In fact, recent work reports the successful application to Western scripts of a handwriting synthesizer based on this theory. This paper aims to adapt this scheme for the generation of synthetic signatures in two Indic scripts, Bengali (Bangla) and Devanagari (Hindi). For this purpose, we use two different online and offline databases for both Bengali and Devanagari signatures. This paper reports an effective synthesizer for static and dynamic signatures written in the Devanagari or Bengali scripts. We obtain promising results with artificially generated signatures in terms of appearance and performance when we compare the results with those for real signatures.
[ { "created": "Tue, 30 Jan 2024 14:01:30 GMT", "version": "v1" } ]
2024-01-31
[ [ "Ferrer", "Miguel A.", "" ], [ "Chanda", "Sukalpa", "" ], [ "Diaz", "Moises", "" ], [ "Banerjee", "Chayan Kr.", "" ], [ "Majumdar", "Anirban", "" ], [ "Carmona-Duarte", "Cristina", "" ], [ "Acharya", "Parikshit", "" ], [ "Pal", "Umapada", "" ] ]
2401.17056
Bruno Berenguel-Baeta
Bruno Berenguel-Baeta, Manuel Guerrero-Viu, Alejandro de Nova, Jesus Bermudez-Cameo, Alejandro Perez-Yus, Jose J. Guerrero
Floor extraction and door detection for visually impaired guidance
null
International Conference on Control, Automation, Robotics and Vision 2020, pp. 1222-1229
10.1109/ICARCV50220.2020.9305464
null
cs.RO cs.CV
http://creativecommons.org/licenses/by/4.0/
Finding obstacle-free paths in unknown environments is a major navigation issue for visually impaired people and autonomous robots. Previous works focus on obstacle avoidance; however, they do not have a general view of the environment they are moving in. New devices based on computer vision systems can help impaired people to overcome the difficulties of navigating in unknown environments in safe conditions. In this work, we propose a combination of sensors and algorithms that can lead to the building of a navigation system for visually impaired people. Building on traditional systems that use RGB-D cameras for obstacle avoidance, we include and combine the information from a fish-eye camera, which gives a better understanding of the user's surroundings. The combination gives robustness and reliability to the system as well as a wide field of view that allows much information to be obtained from the environment. This combination of sensors is inspired by human vision, where the center of the retina (fovea) provides more accurate information than the periphery, while humans also have a wide overall field of view. The proposed system is mounted on a wearable device that provides the obstacle-free zones of the scene, allowing the planning of trajectories for people guidance.
[ { "created": "Tue, 30 Jan 2024 14:38:43 GMT", "version": "v1" } ]
2024-01-31
[ [ "Berenguel-Baeta", "Bruno", "" ], [ "Guerrero-Viu", "Manuel", "" ], [ "de Nova", "Alejandro", "" ], [ "Bermudez-Cameo", "Jesus", "" ], [ "Perez-Yus", "Alejandro", "" ], [ "Guerrero", "Jose J.", "" ] ]
2401.17058
Bruno Berenguel-Baeta
Bruno Berenguel-Baeta and Jesus Bermudez-Cameo and Jose J. Guerrero
Atlanta Scaled layouts from non-central panoramas
null
Pattern Recognition, Volume 129, Page 108740, year 2022
10.1016/j.patcog.2022.108740
null
cs.CV cs.RO
http://creativecommons.org/licenses/by/4.0/
In this work we present a novel approach for 3D layout recovery of indoor environments using a non-central acquisition system. From a non-central panorama, full and scaled 3D lines can be independently recovered by geometric reasoning, without geometric or scale assumptions. However, their sensitivity to noise and complex geometric modeling has led to these panoramas being little investigated. Our new pipeline aims to extract the boundaries of the structural lines of an indoor environment with a neural network and exploit the properties of non-central projection systems in a new geometric processing step to recover a scaled 3D layout. The results of our experiments show that we improve state-of-the-art methods for layout reconstruction and line extraction in non-central projection systems. We completely solve the problem in Manhattan and Atlanta environments, handling occlusions and retrieving the metric scale of the room without extra measurements. To the best of the authors' knowledge, our approach is the first work using deep learning on non-central panoramas and recovering scaled layouts from single panoramas.
[ { "created": "Tue, 30 Jan 2024 14:39:38 GMT", "version": "v1" } ]
2024-01-31
[ [ "Berenguel-Baeta", "Bruno", "" ], [ "Bermudez-Cameo", "Jesus", "" ], [ "Guerrero", "Jose J.", "" ] ]
2401.17061
Bruno Berenguel-Baeta
Bruno Berenguel-Baeta and Jesus Bermudez-Cameo and Jose J. Guerrero
OmniSCV: An Omnidirectional Synthetic Image Generator for Computer Vision
null
Sensors 2020, vol. 20, pp. 2066
10.3390/s20072066
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Omnidirectional and 360{\deg} images are becoming widespread in industry and in consumer society, causing omnidirectional computer vision to gain attention. Their wide field of view allows the gathering of a great amount of information about the environment from only an image. However, the distortion of these images requires the development of specific algorithms for their treatment and interpretation. Moreover, a high number of images is essential for the correct training of computer vision algorithms based on learning. In this paper, we present a tool for generating datasets of omnidirectional images with semantic and depth information. These images are synthesized from a set of captures that are acquired in a realistic virtual environment for Unreal Engine 4 through an interface plugin. We gather a variety of well-known projection models such as equirectangular and cylindrical panoramas, different fish-eye lenses, catadioptric systems, and empiric models. Furthermore, we include in our tool photorealistic non-central-projection systems as non-central panoramas and non-central catadioptric systems. As far as we know, this is the first reported tool for generating photorealistic non-central images in the literature. Moreover, since the omnidirectional images are made virtually, we provide pixel-wise information about semantics and depth as well as perfect knowledge of the calibration parameters of the cameras. This allows the creation of ground-truth information with pixel precision for training learning algorithms and testing 3D vision approaches. To validate the proposed tool, different computer vision algorithms are tested as line extractions from dioptric and catadioptric central images, 3D Layout recovery and SLAM using equirectangular panoramas, and 3D reconstruction from non-central panoramas.
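As an illustration of one of the projection models the tool gathers, the standard equirectangular pixel-to-ray mapping is sketched below; the axis convention used is one common choice and is an assumption here, not necessarily OmniSCV's internal one.

```python
# Standard equirectangular pixel-to-ray mapping: a (u, v) pixel of a WxH
# panorama maps to a unit direction via longitude/latitude. The axis
# convention here is one common choice, not necessarily OmniSCV's.
import numpy as np

def equirect_ray(u, v, width, height):
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi      # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi     # latitude in (-pi/2, pi/2]
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.array([x, y, z])                         # unit-norm ray direction
```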
[ { "created": "Tue, 30 Jan 2024 14:40:19 GMT", "version": "v1" } ]
2024-01-31
[ [ "Berenguel-Baeta", "Bruno", "" ], [ "Bermudez-Cameo", "Jesus", "" ], [ "Guerrero", "Jose J.", "" ] ]
2401.17075
Bruno Berenguel-Baeta
Bruno Berenguel-Baeta, Jesus Bermudez-Cameo, Jose J. Guerrero
Non-central panorama indoor dataset
null
Data in Brief 2022, Volume 43, pp. 108375
10.1016/j.dib.2022.108375
null
cs.DB cs.CV
http://creativecommons.org/licenses/by/4.0/
Omnidirectional images are one of the main sources of information for learning-based scene understanding algorithms. However, annotated datasets of omnidirectional images cannot keep pace with the development of these learning-based algorithms. Among the different kinds of panoramas, and in contrast to standard central ones, non-central panoramas provide geometric information in the distortion of the image, from which we can retrieve 3D information about the environment [2]. However, due to the lack of commercial non-central devices, until now there was no dataset of this kind of panorama. In this data paper, we present the first dataset of non-central panoramas for indoor scene understanding. The dataset is composed of 2574 RGB non-central panoramas taken in around 650 different rooms. Each panorama comes with an associated depth map and annotations for obtaining the layout of the room from the image: a structural edge map, the list of corners in the image, the 3D corners of the room, and the camera pose. The images are taken from photorealistic virtual environments and are automatically annotated pixel-wise.
[ { "created": "Tue, 30 Jan 2024 14:56:59 GMT", "version": "v1" } ]
2024-01-31
[ [ "Berenguel-Baeta", "Bruno", "" ], [ "Bermudez-Cameo", "Jesus", "" ], [ "Guerrero", "Jose J.", "" ] ]
2401.17185
Qingyu Xiao
Qingyu Xiao, Zulfiqar Zaidi and Matthew Gombolay
Multi-Camera Asynchronous Ball Localization and Trajectory Prediction with Factor Graphs and Human Poses
Accepted by ICRA 2024
2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024, pp. 13695-13702
10.1109/ICRA57147.2024.10610631
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rapid and precise localization and prediction of a ball are critical for developing agile robots in ball sports, particularly in sports like tennis characterized by high-speed ball movements and powerful spins. The Magnus effect induced by spin adds complexity to trajectory prediction during flight and to bounce dynamics upon contact with the ground. In this study, we introduce an innovative approach that combines a multi-camera system with factor graphs for real-time and asynchronous 3D tennis ball localization. Additionally, we estimate hidden states like velocity and spin for trajectory prediction. Furthermore, to enhance spin inference early in the ball's flight, where limited observations are available, we integrate human pose data using a temporal convolutional network (TCN) to compute spin priors within the factor graph. This refinement provides more accurate spin priors at the beginning of the factor graph, leading to improved early-stage hidden state inference for prediction. Our results show that the trained TCN can predict spin priors with an RMSE of 5.27 Hz. Integrating the TCN into the factor graph reduces the prediction error of landing positions by over 63.6% compared to a baseline method that utilizes an adaptive extended Kalman filter.
[ { "created": "Tue, 30 Jan 2024 17:13:29 GMT", "version": "v1" } ]
2024-09-26
[ [ "Xiao", "Qingyu", "" ], [ "Zaidi", "Zulfiqar", "" ], [ "Gombolay", "Matthew", "" ] ]
2401.17319
Ehsan Hallaji
Ehsan Hallaji and Roozbeh Razavi-Far and Mehrdad Saif and Boyu Wang and Qiang Yang
Decentralized Federated Learning: A Survey on Security and Privacy
Accepted for publication in IEEE Transactions on Big Data
IEEE Transactions on Big Data, vol. 10, no. 2, pp. 194-213, 2024
10.1109/TBDATA.2024.3362191
null
cs.CR cs.AI cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Federated learning has been rapidly evolving and gaining popularity in recent years due to its privacy-preserving features, among other advantages. Nevertheless, the exchange of model updates and gradients in this architecture provides new attack surfaces for malicious users of the network, which may jeopardize model performance as well as user and data privacy. For this reason, one of the main motivations for decentralized federated learning is to eliminate server-related threats by removing the server from the network and compensating for it through technologies such as blockchain. However, this advantage comes at the cost of challenging the system with new privacy threats. Thus, performing a thorough security analysis in this new paradigm is necessary. This survey studies possible variations of threats and adversaries in decentralized federated learning and overviews the potential defense mechanisms. Trustability and verifiability of decentralized federated learning are also considered in this study.
[ { "created": "Thu, 25 Jan 2024 23:35:47 GMT", "version": "v1" } ]
2024-03-20
[ [ "Hallaji", "Ehsan", "" ], [ "Razavi-Far", "Roozbeh", "" ], [ "Saif", "Mehrdad", "" ], [ "Wang", "Boyu", "" ], [ "Yang", "Qiang", "" ] ]
2401.17320
Cristina Carmona-Duarte
C. Carmona-Duarte, M.A.Ferrer, R. Plamondon, A. Gomez-Rodellar, P. Gomez-Vilda
Sigma-lognormal modeling of speech
Published in Open Access
Cognitive Computation, 13(2). pp. 488-503, 2021
10.1007/s12559-020-09803-8
null
q-bio.NC cs.CV cs.SD eess.AS
http://creativecommons.org/licenses/by-nc-nd/4.0/
Human movement studies and analyses have been fundamental in many scientific domains, ranging from neuroscience to education, pattern recognition to robotics, health care to sports, and beyond. Previous speech motor models were proposed to understand how speech movement is produced and how the resulting speech varies when some parameters are changed. However, the inverse approach, in which the muscular response parameters and the subject's age are derived from real continuous speech, is not possible with such models. Instead, in the handwriting field, the kinematic theory of rapid human movements and its associated Sigma-lognormal model have been applied successfully to obtain the muscular response parameters. This work presents a speech-kinematics-based model that can be used to study, analyze, and reconstruct complex speech kinematics in a simplified manner. A method based on the kinematic theory of rapid human movements and its associated Sigma-lognormal model is applied to describe and parameterize the asymptotic impulse response of the neuromuscular networks involved in speech as a response to a neuromotor command. The method used to transform formants into a movement observation is also presented. Experiments carried out with the (English) VTR TIMIT database and the (German) Saarbrucken Voice Database, including people of different ages, with and without laryngeal pathologies, corroborate the link between the extracted parameters and aging, on the one hand, and the proportion between the first and second formants required in applying the kinematic theory of rapid human movements, on the other. The results should drive innovative developments in the modeling and understanding of speech kinematics.
[ { "created": "Sat, 27 Jan 2024 18:00:20 GMT", "version": "v1" } ]
2024-02-01
[ [ "Carmona-Duarte", "C.", "" ], [ "Ferrer", "M. A.", "" ], [ "Plamondon", "R.", "" ], [ "Gomez-Rodellar", "A.", "" ], [ "Gomez-Vilda", "P.", "" ] ]
2401.17511
Adarsa Sivaprasad
Adarsa Sivaprasad and Ehud Reiter
Linguistically Communicating Uncertainty in Patient-Facing Risk Prediction Models
null
https://aclanthology.org/2024.uncertainlp-1.13
null
null
cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
This paper addresses the unique challenges associated with uncertainty quantification in AI models when applied to patient-facing contexts within healthcare. Unlike traditional eXplainable Artificial Intelligence (XAI) methods tailored to model developers or domain experts, patient-facing settings require additional consideration of communication in natural language, its presentation, and the evaluation of its understandability. We identify the challenges of communicating model performance, confidence, reasoning, and unknown knowns in natural language in the context of risk prediction. We propose a design aimed at addressing these challenges, focusing on the specific application of in-vitro fertilisation outcome prediction.
[ { "created": "Wed, 31 Jan 2024 00:08:44 GMT", "version": "v1" } ]
2024-08-06
[ [ "Sivaprasad", "Adarsa", "" ], [ "Reiter", "Ehud", "" ] ]
2401.17536
Ying Su
Ying Su, Jipeng Zhang, Yangqiu Song, Tong Zhang
PipeNet: Question Answering with Semantic Pruning over Knowledge Graphs
8 pages, 4 figures, accepted to *SEM 2024
https://aclanthology.org/2024.starsem-1.29
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
It is well acknowledged that incorporating explicit knowledge graphs (KGs) can benefit question answering. Existing approaches typically follow a grounding-reasoning pipeline in which entity nodes are first grounded for the query (question and candidate answers), and then a reasoning module reasons over the matched multi-hop subgraph for answer prediction. Although the pipeline largely alleviates the issue of extracting essential information from giant KGs, efficiency is still an open challenge when scaling up the number of hops in grounding the subgraphs. In this paper, we aim to find semantically related entity nodes in the subgraph to improve the efficiency of graph reasoning with KGs. We propose a grounding-pruning-reasoning pipeline to prune noisy nodes, markedly reducing computation cost and memory usage while also obtaining a decent subgraph representation. In detail, the pruning module first scores concept nodes based on the dependency distance between matched spans and then prunes the nodes according to their score ranks. To facilitate the evaluation of pruned subgraphs, we also propose a graph attention network (GAT) based module to reason with the subgraph data. Experimental results on CommonsenseQA and OpenBookQA demonstrate the effectiveness of our method.
[ { "created": "Wed, 31 Jan 2024 01:37:33 GMT", "version": "v1" }, { "created": "Fri, 17 May 2024 01:06:46 GMT", "version": "v2" } ]
2024-07-24
[ [ "Su", "Ying", "" ], [ "Zhang", "Jipeng", "" ], [ "Song", "Yangqiu", "" ], [ "Zhang", "Tong", "" ] ]
2401.17548
Lifan Zhao
Lifan Zhao, Yanyan Shen
Rethinking Channel Dependence for Multivariate Time Series Forecasting: Learning from Leading Indicators
Accepted to ICLR 2024. Code is at https://github.com/SJTU-DMTai/LIFT
The Twelfth International Conference on Learning Representations, 2024
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, channel-independent methods have achieved state-of-the-art performance in multivariate time series (MTS) forecasting. Despite reducing overfitting risks, these methods miss potential opportunities to utilize channel dependence for accurate predictions. We argue that there exist locally stationary lead-lag relationships between variates, i.e., some lagged variates may follow the leading indicators within a short time period. Exploiting such channel dependence is beneficial since leading indicators offer advance information that can be used to reduce the forecasting difficulty of the lagged variates. In this paper, we propose a new method named LIFT that first efficiently estimates leading indicators and their leading steps at each time step and then judiciously allows the lagged variates to utilize the advance information from leading indicators. LIFT serves as a plugin that can be seamlessly combined with arbitrary time series forecasting methods. Extensive experiments on six real-world datasets demonstrate that LIFT improves the state-of-the-art methods by 5.5% in average forecasting performance. Our code is available at https://github.com/SJTU-Quant/LIFT.
[ { "created": "Wed, 31 Jan 2024 02:26:09 GMT", "version": "v1" }, { "created": "Fri, 23 Feb 2024 06:38:39 GMT", "version": "v2" }, { "created": "Sun, 24 Mar 2024 13:29:40 GMT", "version": "v3" }, { "created": "Sun, 7 Apr 2024 02:44:18 GMT", "version": "v4" }, { "created": "Sat, 13 Apr 2024 04:26:56 GMT", "version": "v5" }, { "created": "Tue, 13 Aug 2024 05:31:22 GMT", "version": "v6" } ]
2024-08-14
[ [ "Zhao", "Lifan", "" ], [ "Shen", "Yanyan", "" ] ]
2401.17626
Andr\'e Silva
Benoit Baudry, Khashayar Etemadi, Sen Fang, Yogya Gamage, Yi Liu, Yuxin Liu, Martin Monperrus, Javier Ron, Andr\'e Silva, Deepika Tiwari
Generative AI to Generate Test Data Generators
null
IEEE Software, 2024
10.1109/MS.2024.3418570
null
cs.SE cs.AI cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Generating fake data is an essential dimension of modern software testing, as demonstrated by the number and significance of data-faking libraries. Yet, developers of faking libraries cannot keep up with the wide range of data to be generated for different natural languages and domains. In this paper, we assess the ability of generative AI to generate test data in different domains. We design three types of prompts for Large Language Models (LLMs), which perform test data generation tasks at different levels of integrability: 1) raw test data generation, 2) synthesizing programs in a specific language that generate useful test data, and 3) producing programs that use state-of-the-art faker libraries. We evaluate our approach by prompting LLMs to generate test data for 11 domains. The results show that LLMs can successfully generate realistic test data generators in a wide range of domains at all three levels of integrability.
[ { "created": "Wed, 31 Jan 2024 06:58:26 GMT", "version": "v1" }, { "created": "Fri, 14 Jun 2024 14:49:12 GMT", "version": "v2" } ]
2024-06-26
[ [ "Baudry", "Benoit", "" ], [ "Etemadi", "Khashayar", "" ], [ "Fang", "Sen", "" ], [ "Gamage", "Yogya", "" ], [ "Liu", "Yi", "" ], [ "Liu", "Yuxin", "" ], [ "Monperrus", "Martin", "" ], [ "Ron", "Javier", "" ], [ "Silva", "André", "" ], [ "Tiwari", "Deepika", "" ] ]
2401.17642
Hanyu Zhou
Hanyu Zhou, Yi Chang, Haoyue Liu, Wending Yan, Yuxing Duan, Zhiwei Shi, Luxin Yan
Exploring the Common Appearance-Boundary Adaptation for Nighttime Optical Flow
null
International Conference on Learning Representations (ICLR), 2024
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the challenging task of nighttime optical flow, which suffers from weakened texture and amplified noise. These degradations weaken discriminative visual features, thus causing invalid motion feature matching. Typically, existing methods employ domain adaptation to transfer knowledge from an auxiliary domain to the nighttime domain in either the input visual space or the output motion space. However, this direct adaptation is ineffective, since there exists a large domain gap due to the intrinsically heterogeneous nature of the feature representations between the auxiliary and nighttime domains. To overcome this issue, we explore a common latent space as the intermediate bridge to reinforce the feature alignment between the auxiliary and nighttime domains. In this work, we exploit two auxiliary domains, daytime and event, and propose a novel common appearance-boundary adaptation framework for nighttime optical flow. In appearance adaptation, we employ intrinsic image decomposition to embed the auxiliary daytime image and the nighttime image into a reflectance-aligned common space. We discover that the motion distributions of the two reflectance maps are very similar, which enables us to consistently transfer motion appearance knowledge from the daytime to the nighttime domain. In boundary adaptation, we theoretically derive the motion correlation formula between the nighttime image and accumulated events within a spatiotemporal gradient-aligned common space. We find that the correlations of the two spatiotemporal gradient maps exhibit a significant discrepancy, which enables us to contrastively transfer boundary knowledge from the event to the nighttime domain. Moreover, appearance adaptation and boundary adaptation are complementary to each other, since they jointly transfer global motion and local boundary knowledge to the nighttime domain.
[ { "created": "Wed, 31 Jan 2024 07:51:52 GMT", "version": "v1" } ]
2024-02-01
[ [ "Zhou", "Hanyu", "" ], [ "Chang", "Yi", "" ], [ "Liu", "Haoyue", "" ], [ "Yan", "Wending", "" ], [ "Duan", "Yuxing", "" ], [ "Shi", "Zhiwei", "" ], [ "Yan", "Luxin", "" ] ]
2401.17661
Idoia Berges
V\'ictor Julio Ram\'irez-Dur\'an, Idoia Berges, Arantza Illarramendi
Towards the implementation of Industry 4.0: A methodology-based approach oriented to the customer life cycle
Accepted version of paper: V\'ictor Julio Ram\'irez-Dur\'an, Idoia Berges, Arantza Illarramendi: Towards the implementation of Industry 4.0: A methodology-based approach oriented to the customer life cycle. Comput. Ind. 126: 103403 (2021). DOI: 10.1016/j.compind.2021.103403
Comput. Ind. 126: 103403 (2021)
10.1016/j.compind.2021.103403
null
cs.SE cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Many different worldwide initiatives are promoting the transformation from machine-dominant manufacturing to digital manufacturing. Thus, to achieve a successful transformation to the Industry 4.0 standard, manufacturing enterprises are required to implement a clear roadmap. However, Small and Medium Manufacturing Enterprises (SMEs) encounter many barriers and difficulties (economical, technical, cultural, etc.) in the implementation of Industry 4.0. Although several works deal with the incorporation of Industry 4.0 technologies in the area of the product and supply chain life cycles, which SMEs could use as a reference, this is not the case for the customer life cycle. Thus, we present two contributions that can help the software engineers of those SMEs to incorporate Industry 4.0 technologies in the context of the customer life cycle. The first contribution is a methodology that can help those software engineers in the task of creating new software services, aligned with Industry 4.0, that change how customers interact with enterprises and the experiences they have while interacting with them. The methodology details a set of stages that are divided into phases, which in turn are made up of activities. It places special emphasis on the incorporation of semantic descriptions and 3D visualization in the implementation of those new services. The second contribution is a system developed for a real manufacturing scenario, using the proposed methodology, which allows one to observe the possibilities that this kind of system can offer to SMEs in two phases of the customer life cycle: Discover & Shop, and Use & Service.
[ { "created": "Wed, 31 Jan 2024 08:31:08 GMT", "version": "v1" } ]
2024-02-01
[ [ "Ramírez-Durán", "Víctor Julio", "" ], [ "Berges", "Idoia", "" ], [ "Illarramendi", "Arantza", "" ] ]
2402.00015
Jerome White
Chandan Agrawal, Ashish Papanai, Jerome White
Maintaining User Trust Through Multistage Uncertainty Aware Inference
null
Presented at Deployable AI Workshop at AAAI-2024
null
null
cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes and evaluates a multistage approach to AI deployment. Each stage involves a more accurate method of inference, yet engaging each comes with an increasing cost. In outlining the architecture, we present a method for quantifying model uncertainty that facilitates confident deferral decisions. The architecture is currently under active deployment to thousands of cotton farmers across India. The broader idea, however, is applicable to a growing sector of AI deployments in challenging low-resource settings.
[ { "created": "Thu, 28 Dec 2023 14:14:31 GMT", "version": "v1" }, { "created": "Mon, 15 Apr 2024 07:06:54 GMT", "version": "v2" } ]
2024-04-16
[ [ "Agrawal", "Chandan", "" ], [ "Papanai", "Ashish", "" ], [ "White", "Jerome", "" ] ]
2402.00029
Necdet Gurkan
Necdet Gurkan, Jordan W. Suchow
Exploring Public Opinion on Responsible AI Through The Lens of Cultural Consensus Theory
null
Proceedings of the 57th Hawaii International Conference on System Sciences, 713-723 (2024)
null
null
cs.CY cs.AI
http://creativecommons.org/licenses/by/4.0/
As the societal implications of Artificial Intelligence (AI) continue to grow, the pursuit of responsible AI necessitates public engagement in its development and governance processes. This involvement is crucial for capturing diverse perspectives and promoting equitable practices and outcomes. We applied Cultural Consensus Theory (CCT) to a nationally representative survey dataset on various aspects of AI to discern beliefs and attitudes about responsible AI in the United States. Our results offer valuable insights by identifying shared and contrasting views on responsible AI. Furthermore, these findings serve as critical reference points for developers and policymakers, enabling them to more effectively consider individual variances and group-level cultural perspectives when making significant decisions and addressing the public's concerns.
[ { "created": "Sat, 6 Jan 2024 20:57:35 GMT", "version": "v1" } ]
2024-02-02
[ [ "Gurkan", "Necdet", "" ], [ "Suchow", "Jordan W.", "" ] ]