Dataset schema (one record per paper; string fields show the observed min-max length, class fields show the count of distinct values):

id: string, 10 chars
submitter: string, 3-52 chars
authors: string, 6-7.24k chars
title: string, 12-217 chars
comments: string, 1-446 chars
journal-ref: string, 4-297 chars
doi: string, 12-118 chars
report-no: string, 237 distinct values
categories: string, 5-71 chars
license: string, 6 distinct values
abstract: string, 90-3.26k chars
versions: list, 1-17 entries
update_date: string, 969 distinct values
authors_parsed: sequence, 1-451 entries
id: 2402.11571
submitter: Eric Nichols
authors: Zining Wang and Paul Reisert and Eric Nichols and Randy Gomez
title: Ain't Misbehavin' -- Using LLMs to Generate Expressive Robot Behavior in Conversations with the Tabletop Robot Haru
comments: Accepted as Late Breaking Report (LBR) at the 19th Annual ACM/IEEE International Conference on Human Robot Interaction (HRI '24)
journal-ref: Companion of HRI '24, March 11-14, 2024, Boulder, CO, USA
doi: 10.1145/3610978.3640562
report-no: null
categories: cs.RO cs.AI cs.CL
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Social robots aim to establish long-term bonds with humans through engaging conversation. However, traditional conversational approaches, reliant on scripted interactions, often fall short in maintaining engaging conversations. This paper addresses this limitation by integrating large language models (LLMs) into social robots to achieve more dynamic and expressive conversations. We introduce a fully-automated conversation system that leverages LLMs to generate robot responses with expressive behaviors, congruent with the robot's personality. We incorporate robot behavior with two modalities: 1) a text-to-speech (TTS) engine capable of various delivery styles, and 2) a library of physical actions for the robot. We develop a custom, state-of-the-art emotion recognition model to dynamically select the robot's tone of voice and utilize emojis from LLM output as cues for generating robot actions. A demo of our system is available here. To illuminate design and implementation issues, we conduct a pilot study where volunteers chat with a social robot using our proposed system, and we analyze their feedback, conducting a rigorous error analysis of chat transcripts. Feedback was overwhelmingly positive, with participants commenting on the robot's empathy, helpfulness, naturalness, and entertainment. Most negative feedback was due to automatic speech recognition (ASR) errors which had limited impact on conversations. However, we observed a small class of errors, such as the LLM repeating itself or hallucinating fictitious information and human responses, that have the potential to derail conversations, raising important issues for LLM application.
[ { "created": "Sun, 18 Feb 2024 12:35:52 GMT", "version": "v1" } ]
2024-02-20
[ [ "Wang", "Zining", "" ], [ "Reisert", "Paul", "" ], [ "Nichols", "Eric", "" ], [ "Gomez", "Randy", "" ] ]
id: 2402.11670
submitter: Lars Nieradzik
authors: Lars Nieradzik, Henrike Stephani, Jördis Sieburg-Rockel, Stephanie Helmling, Andrea Olbrich, Janis Keuper
title: Challenging the Black Box: A Comprehensive Evaluation of Attribution Maps of CNN Applications in Agriculture and Forestry
comments: null
journal-ref: Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 2: VISAPP, 2024, pp. 483-492
doi: null
report-no: null
categories: cs.CV cs.LG
license: http://creativecommons.org/licenses/by/4.0/
abstract:
In this study, we explore the explainability of neural networks in agriculture and forestry, specifically in fertilizer treatment classification and wood identification. The opaque nature of these models, often considered 'black boxes', is addressed through an extensive evaluation of state-of-the-art Attribution Maps (AMs), also known as class activation maps (CAMs) or saliency maps. Our comprehensive qualitative and quantitative analysis of these AMs uncovers critical practical limitations. Findings reveal that AMs frequently fail to consistently highlight crucial features and often misalign with the features considered important by domain experts. These discrepancies raise substantial questions about the utility of AMs in understanding the decision-making process of neural networks. Our study provides critical insights into the trustworthiness and practicality of AMs within the agriculture and forestry sectors, thus facilitating a better understanding of neural networks in these application areas.
[ { "created": "Sun, 18 Feb 2024 18:16:43 GMT", "version": "v1" } ]
2024-02-20
[ [ "Nieradzik", "Lars", "" ], [ "Stephani", "Henrike", "" ], [ "Sieburg-Rockel", "Jördis", "" ], [ "Helmling", "Stephanie", "" ], [ "Olbrich", "Andrea", "" ], [ "Keuper", "Janis", "" ] ]
id: 2402.11680
submitter: Till Beemelmanns
authors: Till Beemelmanns, Yuchen Tao, Bastian Lampe, Lennart Reiher, Raphael van Kempen, Timo Woopen, and Lutz Eckstein
title: 3D Point Cloud Compression with Recurrent Neural Network and Image Compression Methods
comments: Code: https://github.com/ika-rwth-aachen/Point-Cloud-Compression
journal-ref: 2022 IEEE Intelligent Vehicles Symposium (IV)
doi: 10.1109/IV51971.2022.9827270
report-no: null
categories: cs.CV cs.AI eess.IV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Storing and transmitting LiDAR point cloud data is essential for many AV applications, such as training data collection, remote control, cloud services or SLAM. However, due to the sparsity and unordered structure of the data, it is difficult to compress point cloud data to a low volume. Transforming the raw point cloud data into a dense 2D matrix structure is a promising way for applying compression algorithms. We propose a new lossless and calibrated 3D-to-2D transformation which allows compression algorithms to efficiently exploit spatial correlations within the 2D representation. To compress the structured representation, we use common image compression methods and also a self-supervised deep compression approach using a recurrent neural network. We also rearrange the LiDAR's intensity measurements to a dense 2D representation and propose a new metric to evaluate the compression performance of the intensity. Compared to approaches that are based on generic octree point cloud compression or based on raw point cloud data compression, our approach achieves the best quantitative and visual performance. Source code and dataset are available at https://github.com/ika-rwth-aachen/Point-Cloud-Compression.
[ { "created": "Sun, 18 Feb 2024 19:08:19 GMT", "version": "v1" } ]
2024-02-20
[ [ "Beemelmanns", "Till", "" ], [ "Tao", "Yuchen", "" ], [ "Lampe", "Bastian", "" ], [ "Reiher", "Lennart", "" ], [ "van Kempen", "Raphael", "" ], [ "Woopen", "Timo", "" ], [ "Eckstein", "Lutz", "" ] ]
id: 2402.11895
submitter: Sugat Chaturvedi
authors: Rochana Chaturvedi, Sugat Chaturvedi and Elena Zheleva
title: Bridging or Breaking: Impact of Intergroup Interactions on Religious Polarization
comments: null
journal-ref: In Proceedings of the ACM Web Conference 2024 (WWW '24), May 13-17, 2024, Singapore, Singapore. ACM, New York, NY, USA, 12 pages
doi: 10.1145/3589334.3645675
report-no: null
categories: cs.SI cs.CL physics.soc-ph
license: http://creativecommons.org/licenses/by/4.0/
abstract:
While exposure to diverse viewpoints may reduce polarization, it can also have a backfire effect and exacerbate polarization when the discussion is adversarial. Here, we examine the question whether intergroup interactions around important events affect polarization between majority and minority groups in social networks. We compile data on the religious identity of nearly 700,000 Indian Twitter users engaging in COVID-19-related discourse during 2020. We introduce a new measure for an individual's group conformity based on contextualized embeddings of tweet text, which helps us assess polarization between religious groups. We then use a meta-learning framework to examine heterogeneous treatment effects of intergroup interactions on an individual's group conformity in the light of communal, political, and socio-economic events. We find that for political and social events, intergroup interactions reduce polarization. This decline is weaker for individuals at the extreme who already exhibit high conformity to their group. In contrast, during communal events, intergroup interactions can increase group conformity. Finally, we decompose the differential effects across religious groups in terms of emotions and topics of discussion. The results show that the dynamics of religious polarization are sensitive to the context and have important implications for understanding the role of intergroup interactions.
[ { "created": "Mon, 19 Feb 2024 07:21:09 GMT", "version": "v1" }, { "created": "Tue, 20 Feb 2024 04:00:15 GMT", "version": "v2" }, { "created": "Sun, 10 Mar 2024 05:38:20 GMT", "version": "v3" } ]
2024-08-21
[ [ "Chaturvedi", "Rochana", "" ], [ "Chaturvedi", "Sugat", "" ], [ "Zheleva", "Elena", "" ] ]
id: 2402.11929
submitter: Chong Zeng
authors: Chong Zeng and Yue Dong and Pieter Peers and Youkang Kong and Hongzhi Wu and Xin Tong
title: DiLightNet: Fine-grained Lighting Control for Diffusion-based Image Generation
comments: Accepted to SIGGRAPH 2024. Project page: https://dilightnet.github.io/
journal-ref: ACM SIGGRAPH 2024 Conference Proceedings
doi: 10.1145/3641519.3657396
report-no: null
categories: cs.CV cs.GR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
This paper presents a novel method for exerting fine-grained lighting control during text-driven diffusion-based image generation. While existing diffusion models already have the ability to generate images under any lighting condition, without additional guidance these models tend to correlate image content and lighting. Moreover, text prompts lack the necessary expressional power to describe detailed lighting setups. To provide the content creator with fine-grained control over the lighting during image generation, we augment the text-prompt with detailed lighting information in the form of radiance hints, i.e., visualizations of the scene geometry with a homogeneous canonical material under the target lighting. However, the scene geometry needed to produce the radiance hints is unknown. Our key observation is that we only need to guide the diffusion process, hence exact radiance hints are not necessary; we only need to point the diffusion model in the right direction. Based on this observation, we introduce a three stage method for controlling the lighting during image generation. In the first stage, we leverage a standard pretrained diffusion model to generate a provisional image under uncontrolled lighting. Next, in the second stage, we resynthesize and refine the foreground object in the generated image by passing the target lighting to a refined diffusion model, named DiLightNet, using radiance hints computed on a coarse shape of the foreground object inferred from the provisional image. To retain the texture details, we multiply the radiance hints with a neural encoding of the provisional synthesized image before passing it to DiLightNet. Finally, in the third stage, we resynthesize the background to be consistent with the lighting on the foreground object. We demonstrate and validate our lighting controlled diffusion model on a variety of text prompts and lighting conditions.
[ { "created": "Mon, 19 Feb 2024 08:17:21 GMT", "version": "v1" }, { "created": "Tue, 28 May 2024 03:55:20 GMT", "version": "v2" } ]
2024-05-29
[ [ "Zeng", "Chong", "" ], [ "Dong", "Yue", "" ], [ "Peers", "Pieter", "" ], [ "Kong", "Youkang", "" ], [ "Wu", "Hongzhi", "" ], [ "Tong", "Xin", "" ] ]
id: 2402.12041
submitter: Ciaran Eising
authors: Daniel Jakab, Brian Michael Deegan, Sushil Sharma, Eoin Martino Grua, Jonathan Horgan, Enda Ward, Pepijn Van De Ven, Anthony Scanlan, Ciarán Eising
title: Surround-View Fisheye Optics in Computer Vision and Simulation: Survey and Challenges
comments: 23 pages, 19 figures, 2 tables
journal-ref: IEEE Transactions on Intelligent Transportation Systems, 2024
doi: 10.1109/TITS.2024.3368136
report-no: null
categories: cs.CV eess.IV
license: http://creativecommons.org/licenses/by/4.0/
abstract:
In this paper, we provide a survey on automotive surround-view fisheye optics, with an emphasis on the impact of optical artifacts on computer vision tasks in autonomous driving and ADAS. The automotive industry has advanced in applying state-of-the-art computer vision to enhance road safety and provide automated driving functionality. When using camera systems on vehicles, there is a particular need for a wide field of view to capture the entire vehicle's surroundings, in areas such as low-speed maneuvering, automated parking, and cocoon sensing. However, one crucial challenge in surround-view cameras is the strong optical aberrations of the fisheye camera, which is an area that has received little attention in the literature. Additionally, a comprehensive dataset is needed for testing safety-critical scenarios in vehicle automation. The industry has turned to simulation as a cost-effective strategy for creating synthetic datasets with surround-view camera imagery. We examine different simulation methods (such as model-driven and data-driven simulations) and discuss the simulators' ability (or lack thereof) to model real-world optical performance. Overall, this paper highlights the optical aberrations in automotive fisheye datasets, and the limitations of optical reality in simulated fisheye datasets, with a focus on computer vision in surround-view optical systems.
[ { "created": "Mon, 19 Feb 2024 10:56:28 GMT", "version": "v1" }, { "created": "Wed, 21 Feb 2024 14:48:28 GMT", "version": "v2" } ]
2024-03-12
[ [ "Jakab", "Daniel", "" ], [ "Deegan", "Brian Michael", "" ], [ "Sharma", "Sushil", "" ], [ "Grua", "Eoin Martino", "" ], [ "Horgan", "Jonathan", "" ], [ "Ward", "Enda", "" ], [ "Van De Ven", "Pepijn", "" ], [ "Scanlan", "Anthony", "" ], [ "Eising", "Ciarán", "" ] ]
id: 2402.12074
submitter: Yongquan He
authors: Yongquan He and Peng Zhang and Luchen Liu and Qi Liang and Wenyuan Zhang and Chuang Zhang
title: HIP Network: Historical Information Passing Network for Extrapolation Reasoning on Temporal Knowledge Graph
comments: 7 pages, 3 figures
journal-ref: IJCAI (2021) 1915-1921
doi: 10.24963/IJCAI.2021/264
report-no: null
categories: cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
In recent years, temporal knowledge graph (TKG) reasoning has received significant attention. Most existing methods assume that all timestamps and corresponding graphs are available during training, which makes it difficult to predict future events. To address this issue, recent works learn to infer future events based on historical information. However, these methods do not comprehensively consider the latent patterns behind temporal changes, to pass historical information selectively, update representations appropriately and predict events accurately. In this paper, we propose the Historical Information Passing (HIP) network to predict future events. HIP network passes information from temporal, structural and repetitive perspectives, which are used to model the temporal evolution of events, the interactions of events at the same time step, and the known events respectively. In particular, our method considers the updating of relation representations and adopts three scoring functions corresponding to the above dimensions. Experimental results on five benchmark datasets show the superiority of HIP network, and the significant improvements on Hits@1 prove that our method can more accurately predict what is going to happen.
[ { "created": "Mon, 19 Feb 2024 11:50:30 GMT", "version": "v1" } ]
2024-02-22
[ [ "He", "Yongquan", "" ], [ "Zhang", "Peng", "" ], [ "Liu", "Luchen", "" ], [ "Liang", "Qi", "" ], [ "Zhang", "Wenyuan", "" ], [ "Zhang", "Chuang", "" ] ]
id: 2402.12193
submitter: Yuxia Wang
authors: Yuxia Wang, Zenan Zhai, Haonan Li, Xudong Han, Lizhi Lin, Zhenxuan Zhang, Jingru Zhao, Preslav Nakov, Timothy Baldwin
title: A Chinese Dataset for Evaluating the Safeguards in Large Language Models
comments: 14 pages
journal-ref: ACL2024-Findings
doi: null
report-no: null
categories: cs.CL
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Many studies have demonstrated that large language models (LLMs) can produce harmful responses, exposing users to unexpected risks when LLMs are deployed. Previous studies have proposed comprehensive taxonomies of the risks posed by LLMs, as well as corresponding prompts that can be used to examine the safety mechanisms of LLMs. However, the focus has been almost exclusively on English, and little has been explored for other languages. Here we aim to bridge this gap. We first introduce a dataset for the safety evaluation of Chinese LLMs, and then extend it to two other scenarios that can be used to better identify false negative and false positive examples in terms of risky prompt rejections. We further present a set of fine-grained safety assessment criteria for each risk type, facilitating both manual annotation and automatic evaluation in terms of LLM response harmfulness. Our experiments on five LLMs show that region-specific risks are the prevalent type of risk, presenting the major issue with all Chinese LLMs we experimented with. Our data is available at https://github.com/Libr-AI/do-not-answer. Warning: this paper contains example data that may be offensive, harmful, or biased.
[ { "created": "Mon, 19 Feb 2024 14:56:18 GMT", "version": "v1" }, { "created": "Sun, 26 May 2024 17:15:44 GMT", "version": "v2" }, { "created": "Sun, 4 Aug 2024 08:56:33 GMT", "version": "v3" } ]
2024-08-06
[ [ "Wang", "Yuxia", "" ], [ "Zhai", "Zenan", "" ], [ "Li", "Haonan", "" ], [ "Han", "Xudong", "" ], [ "Lin", "Lizhi", "" ], [ "Zhang", "Zhenxuan", "" ], [ "Zhao", "Jingru", "" ], [ "Nakov", "Preslav", "" ], [ "Baldwin", "Timothy", "" ] ]
id: 2402.12202
submitter: Yu Yang
authors: Chengyi Ju and Jiannong Cao and Yu Yang and Zhen-Qun Yang and Ho Man Lee
title: Heterogeneity-aware Cross-school Electives Recommendation: a Hybrid Federated Approach
comments: null
journal-ref: 2023 IEEE International Conference on Data Mining Workshops (ICDMW)
doi: 10.1109/ICDMW60847.2023.00191
report-no: null
categories: cs.IR cs.AI
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract:
In the era of modern education, addressing cross-school learner diversity is crucial, especially in personalized recommender systems for elective course selection. However, privacy concerns often limit cross-school data sharing, which hinders existing methods' ability to model sparse data and address heterogeneity effectively, ultimately leading to suboptimal recommendations. In response, we propose HFRec, a heterogeneity-aware hybrid federated recommender system designed for cross-school elective course recommendations. The proposed model constructs heterogeneous graphs for each school, incorporating various interactions and historical behaviors between students to integrate context and content information. We design an attention mechanism to capture heterogeneity-aware representations. Moreover, under a federated scheme, we train individual school-based models with adaptive learning settings to recommend tailored electives. Our HFRec model demonstrates its effectiveness in providing personalized elective recommendations while maintaining privacy, as it outperforms state-of-the-art models on both open-source and real-world datasets.
[ { "created": "Mon, 19 Feb 2024 15:06:04 GMT", "version": "v1" } ]
2024-02-20
[ [ "Ju", "Chengyi", "" ], [ "Cao", "Jiannong", "" ], [ "Yang", "Yu", "" ], [ "Yang", "Zhen-Qun", "" ], [ "Lee", "Ho Man", "" ] ]
id: 2402.12320
submitter: Ganesh Sapkota
authors: Ganesh Sapkota, Sanjay Madria
title: Landmark Stereo Dataset for Landmark Recognition and Moving Node Localization in a Non-GPS Battlefield Environment
comments: null
journal-ref: 2023 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), St. Louis, MO, USA, 2023, pp. 1-11
doi: 10.1109/AIPR60534.2023.10440690
report-no: null
categories: cs.CV cs.LG
license: http://creativecommons.org/licenses/by/4.0/
abstract:
In this paper, we have proposed a new strategy of using landmark anchor nodes instead of radio-based anchor nodes to obtain the virtual coordinates (landmarkID, DISTANCE) of moving troops or defense forces, which will help in tracking and maneuvering the troops along a safe path within a GPS-denied battlefield environment. The proposed strategy implements landmark recognition using the YOLOv5 model and landmark distance estimation using an efficient stereo matching algorithm. We consider a moving node carrying a low-power mobile device equipped with a calibrated stereo vision camera that captures stereo images of a scene containing landmarks within the battlefield region; the landmark locations are stored in an offline server residing within the device itself. We created a custom landmark image dataset called MSTLandmarkv1 with 34 landmark classes and another landmark stereo dataset of those 34 landmark instances called MSTLandmarkStereov1. We trained the YOLOv5 model with the MSTLandmarkv1 dataset and achieved 0.95 mAP @ 0.5 IoU and 0.767 mAP @ [0.5:0.95] IoU. We calculated the distance from a node to each landmark utilizing the bounding box coordinates and the depth map generated by the improved SGM algorithm using MSTLandmarkStereov1. The tuple of landmark IDs obtained from the detection result and the distances calculated by the SGM algorithm are stored as the virtual coordinates of a node. In future work, we will use these virtual coordinates to obtain the location of a node using an efficient trilateration algorithm and optimize the node position using an appropriate optimization method.
[ { "created": "Mon, 19 Feb 2024 17:49:23 GMT", "version": "v1" } ]
2024-04-09
[ [ "Sapkota", "Ganesh", "" ], [ "Madria", "Sanjay", "" ] ]
id: 2402.12372
submitter: Mario Sänger
authors: Mario Sänger, Samuele Garda, Xing David Wang, Leon Weber-Genzel, Pia Droop, Benedikt Fuchs, Alan Akbik, Ulf Leser
title: HunFlair2 in a cross-corpus evaluation of biomedical named entity recognition and normalization tools
comments: null
journal-ref: Bioinformatics, Volume 40, Number 10, 2024, btae564, Oxford University Press
doi: 10.1093/bioinformatics/btae564
report-no: null
categories: cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
With the exponential growth of the life science literature, biomedical text mining (BTM) has become an essential technology for accelerating the extraction of insights from publications. Identifying named entities (e.g., diseases, drugs, or genes) in texts and their linkage to reference knowledge bases are crucial steps in BTM pipelines to enable information aggregation from different documents. However, tools for these two steps are rarely applied in the same context in which they were developed. Instead, they are applied in the wild, i.e., on application-dependent text collections different from those used for the tools' training, varying, e.g., in focus, genre, style, and text type. This raises the question of whether the reported performance of BTM tools can be trusted for downstream applications. Here, we report on the results of a carefully designed cross-corpus benchmark for named entity extraction, where tools were applied systematically to corpora not used during their training. Based on a survey of 28 published systems, we selected five for an in-depth analysis on three publicly available corpora encompassing four different entity types. Comparison between tools results in a mixed picture and shows that, in a cross-corpus setting, the performance is significantly lower than the one reported in an in-corpus setting. HunFlair2 showed the best performance on average, being closely followed by PubTator. Our results indicate that users of BTM tools should expect diminishing performances when applying them in the wild compared to original publications and show that further research is necessary to make BTM tools more robust.
[ { "created": "Mon, 19 Feb 2024 18:58:18 GMT", "version": "v1" }, { "created": "Tue, 20 Feb 2024 13:10:27 GMT", "version": "v2" } ]
2024-10-15
[ [ "Sänger", "Mario", "" ], [ "Garda", "Samuele", "" ], [ "Wang", "Xing David", "" ], [ "Weber-Genzel", "Leon", "" ], [ "Droop", "Pia", "" ], [ "Fuchs", "Benedikt", "" ], [ "Akbik", "Alan", "" ], [ "Leser", "Ulf", "" ] ]
id: 2402.12390
submitter: José Alberto Benítez-Andrades Ph.D.
authors: José Alberto Benítez-Andrades, Alejandro Rodríguez-González, Carmen Benavides, Leticia Sánchez-Valdeón and Isaías García
title: A Semantic Social Network Analysis Tool for Sensitivity Analysis and What-If Scenario Testing in Alcohol Consumption Studies
comments: null
journal-ref: Int. J. Environ. Res. Public Health 2018, 15(11), 2420
doi: 10.3390/ijerph15112420
report-no: null
categories: cs.SI cs.AI
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract:
Social Network Analysis (SNA) is a set of techniques developed in the field of social and behavioral sciences research, in order to characterize and study the social relationships that are established among a set of individuals. When building a social network for performing an SNA analysis, an initial process of data gathering is achieved in order to extract the characteristics of the individuals and their relationships. This is usually done by completing a questionnaire containing different types of questions that will be later used to obtain the SNA measures needed to perform the study. There are, then, a great number of different possible network generating questions and also many possibilities for mapping the responses to the corresponding characteristics and relationships. Many variations may be introduced into these questions (the way they are posed, the weights given to each of the responses, etc.) that may have an effect on the resulting networks. All these different variations are difficult to achieve manually, because the process is time-consuming and error prone. The tool described in this paper uses semantic knowledge representation techniques in order to facilitate this kind of sensitivity studies. The base of the tool is a conceptual structure, called "ontology" that is able to represent the different concepts and their definitions. The tool is compared to other similar ones, and the advantages of the approach are highlighted, giving some particular examples from an ongoing SNA study about alcohol consumption habits in adolescents.
[ { "created": "Wed, 14 Feb 2024 16:17:04 GMT", "version": "v1" } ]
2024-02-21
[ [ "Benítez-Andrades", "José Alberto", "" ], [ "Rodríguez-González", "Alejandro", "" ], [ "Benavides", "Carmen", "" ], [ "Sánchez-Valdeón", "Leticia", "" ], [ "García", "Isaías", "" ] ]
id: 2402.12407
submitter: Shashwat Khandelwal
authors: Shashwat Khandelwal, Ziaul Choudhury, Shashwat Shrivastava and Suresh Purini
title: Accelerating Local Laplacian Filters on FPGAs
comments: 6 pages, 5 figures, 2 tables
journal-ref: null
doi: 10.1109/FPL50879.2020.00028
report-no: null
categories: eess.IV cs.CV cs.GR eess.SP
license: http://creativecommons.org/licenses/by-sa/4.0/
abstract:
Images when processed using various enhancement techniques often lead to edge degradation and other unwanted artifacts such as halos. These artifacts pose a major problem for photographic applications where they can denude the quality of an image. There is a plethora of edge-aware techniques proposed in the field of image processing. However, these require the application of complex optimization or post-processing methods. Local Laplacian Filtering is an edge-aware image processing technique that involves the construction of simple Gaussian and Laplacian pyramids. This technique can be successfully applied for detail smoothing, detail enhancement, tone mapping and inverse tone mapping of an image while keeping it artifact-free. The problem though with this approach is that it is computationally expensive. Hence, parallelization schemes using multi-core CPUs and GPUs have been proposed. As is well known, they are not power-efficient, and a well-designed hardware architecture on an FPGA can do better on the performance per watt metric. In this paper, we propose a hardware accelerator, which exploits fully the available parallelism in the Local Laplacian Filtering algorithm, while minimizing the utilization of on-chip FPGA resources. On Virtex-7 FPGA, we obtain a 7.5x speed-up to process a 1 MB image when compared to an optimized baseline CPU implementation. To the best of our knowledge, we are not aware of any other hardware accelerators proposed in the research literature for the Local Laplacian Filtering problem.
[ { "created": "Sun, 18 Feb 2024 10:49:23 GMT", "version": "v1" } ]
2024-02-21
[ [ "Khandelwal", "Shashwat", "" ], [ "Choudhury", "Ziaul", "" ], [ "Shrivastava", "Shashwat", "" ], [ "Purini", "Suresh", "" ] ]
id: 2402.12522
submitter: Teng Wu
authors: Teng Wu, Bruno Vallet, Marc Pierrot-Deseilligny, Ewelina Rupnik
title: An evaluation of Deep Learning based stereo dense matching dataset shift from aerial images and a large scale stereo dataset
comments: null
journal-ref: International Journal of Applied Earth Observation and Geoinformation, 128(2024)
doi: 10.1016/j.jag.2024.103715
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Dense matching is crucial for 3D scene reconstruction since it enables the recovery of scene 3D geometry from image acquisition. Deep Learning (DL)-based methods have shown effectiveness in the special case of epipolar stereo disparity estimation in the computer vision community. DL-based methods depend heavily on the quality and quantity of training datasets. However, generating ground-truth disparity maps for real scenes remains a challenging task in the photogrammetry community. To address this challenge, we propose a method for generating ground-truth disparity maps directly from Light Detection and Ranging (LiDAR) data and images, producing a large and diverse dataset covering six aerial datasets across four different areas, including two areas with images at different resolutions. We also introduce a LiDAR-to-image co-registration refinement to the framework that takes special precautions regarding occlusions and refrains from disparity interpolation to avoid precision loss. We evaluate 11 dense matching methods across datasets with diverse scene types, image resolutions, and geometric configurations to investigate dataset shift in depth: GANet performs best with identical training and testing data, PSMNet shows the strongest robustness across different datasets, and we propose a best strategy for training with a limited dataset. We also provide the dataset and trained models; more information can be found at https://github.com/whuwuteng/Aerial_Stereo_Dataset.
[ { "created": "Mon, 19 Feb 2024 20:33:46 GMT", "version": "v1" } ]
2024-02-21
[ [ "Wu", "Teng", "" ], [ "Vallet", "Bruno", "" ], [ "Pierrot-Deseilligny", "Marc", "" ], [ "Rupnik", "Ewelina", "" ] ]
id: 2402.12646
submitter: Sevil Zanjani Miyandoab
authors: Ehsan Rokhsatyazdi, Shahryar Rahnamayan, Sevil Zanjani Miyandoab, Azam Asilian Bidgoli, H.R. Tizhoosh
title: Training Artificial Neural Networks by Coordinate Search Algorithm
comments: 7 pages, 9 figures
journal-ref: 2023 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1540-1546. IEEE, 2023
doi: 10.1109/SSCI52147.2023.10371958
report-no: null
categories: cs.LG cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Training Artificial Neural Networks poses a challenging and critical problem in machine learning. Despite the effectiveness of gradient-based learning methods, such as Stochastic Gradient Descent (SGD), in training neural networks, they do have several limitations. For instance, they require differentiable activation functions and cannot optimize a model based on several independent non-differentiable loss functions simultaneously; the F1-score, for example, which is normally used only during testing, can also be optimized during training when a gradient-free optimization algorithm is utilized. Furthermore, gradient-free training remains feasible even with a small training dataset. To address these concerns, we propose an efficient version of the gradient-free Coordinate Search (CS) algorithm, an instance of General Pattern Search methods, for training neural networks. The proposed algorithm can be used with non-differentiable activation functions and tailored to multi-objective/multi-loss problems. Finding the optimal values for the weights of ANNs is a large-scale optimization problem. Therefore, instead of finding the optimal value for each variable, which is the common technique in classical CS, we accelerate optimization and convergence by bundling the weights. In fact, this strategy is a form of dimension reduction for optimization problems. Based on the experimental results, the proposed method, in some cases, outperforms the gradient-based approach, particularly in situations with insufficient labeled training data. The performance plots demonstrate a high convergence rate, highlighting the capability of our suggested method to find a reasonable solution with fewer function calls. As of now, the only practical and efficient way of training ANNs with hundreds of thousands of weights is gradient-based algorithms such as SGD or Adam. In this paper we introduce an alternative method for training ANNs.
[ { "created": "Tue, 20 Feb 2024 01:47:25 GMT", "version": "v1" } ]
2024-02-21
[ [ "Rokhsatyazdi", "Ehsan", "" ], [ "Rahnamayan", "Shahryar", "" ], [ "Miyandoab", "Sevil Zanjani", "" ], [ "Bidgoli", "Azam Asilian", "" ], [ "Tizhoosh", "H. R.", "" ] ]
id: 2402.12754
submitter: Wentian Zhang
authors: Haozhe Liu, Wentian Zhang, Feng Liu, Haoqian Wu, Linlin Shen
title: Fingerprint Presentation Attack Detector Using Global-Local Model
comments: This paper was accepted by IEEE Transactions on Cybernetics. Current version is updated with minor revisions on introduction and related works
journal-ref: IEEE Transactions on Cybernetics, vol. 52, no. 11, pp. 12315-12328, November 2022
doi: 10.1109/TCYB.2021.3081764
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
The vulnerability of automated fingerprint recognition systems (AFRSs) to presentation attacks (PAs) promotes the vigorous development of PA detection (PAD) technology. However, PAD methods have been limited by information loss and poor generalization ability, and thus struggle with new PA materials and fingerprint sensors. This paper thus proposes a global-local model-based PAD (RTK-PAD) method to overcome those limitations to some extent. The proposed method consists of three modules: 1) the global module; 2) the local module; and 3) the rethinking module. By adopting the cut-out-based global module, a global spoofness score predicted from nonlocal features of the entire fingerprint image can be achieved, while the texture in-painting-based local module yields a local spoofness score predicted from fingerprint patches. The two modules are not independent but connected through our proposed rethinking module, which localizes two discriminative patches for the local module based on the global spoofness score. Finally, the fused spoofness score, obtained by averaging the global and local spoofness scores, is used for PAD. Our experimental results evaluated on LivDet 2017 show that the proposed RTK-PAD can achieve an average classification error (ACE) of 2.28% and a true detection rate (TDR) of 91.19% when the false detection rate (FDR) equals 1.0%, significantly outperforming the state-of-the-art methods by $\sim$10% in terms of TDR (91.19% versus 80.74%).
[ { "created": "Tue, 20 Feb 2024 06:47:12 GMT", "version": "v1" } ]
2024-02-21
[ [ "Liu", "Haozhe", "" ], [ "Zhang", "Wentian", "" ], [ "Liu", "Feng", "" ], [ "Wu", "Haoqian", "" ], [ "Shen", "Linlin", "" ] ]
id: 2402.12862
submitter: Wen Wu
authors: Wen Wu, Bo Li, Chao Zhang, Chung-Cheng Chiu, Qiujia Li, Junwen Bai, Tara N. Sainath, Philip C. Woodland
title: Handling Ambiguity in Emotion: From Out-of-Domain Detection to Distribution Estimation
comments: null
journal-ref: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2024
doi: 10.18653/v1/2024.acl-long.114
report-no: null
categories: cs.CL
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract:
The subjective perception of emotion leads to inconsistent labels from human annotators. Typically, utterances lacking majority-agreed labels are excluded when training an emotion classifier, which causes problems when ambiguous emotional expressions are encountered during testing. This paper investigates three methods to handle ambiguous emotion. First, we show that incorporating utterances without majority-agreed labels as an additional class in the classifier reduces the classification performance of the other emotion classes. Then, we propose detecting utterances with ambiguous emotions as out-of-domain samples by quantifying the uncertainty in emotion classification using evidential deep learning. This approach retains the classification accuracy while effectively detecting ambiguous emotional expressions. Furthermore, to obtain fine-grained distinctions among ambiguous emotions, we propose representing emotion as a distribution instead of a single class label. The task is thus re-framed from classification to distribution estimation, where every individual annotation is taken into account, not just the majority opinion. The evidential uncertainty measure is extended to quantify the uncertainty in emotion distribution estimation. Experimental results on the IEMOCAP and CREMA-D datasets demonstrate the superior capability of the proposed method in terms of majority class prediction, emotion distribution estimation, and uncertainty estimation.
[ { "created": "Tue, 20 Feb 2024 09:53:38 GMT", "version": "v1" } ]
2024-10-14
[ [ "Wu", "Wen", "" ], [ "Li", "Bo", "" ], [ "Zhang", "Chao", "" ], [ "Chiu", "Chung-Cheng", "" ], [ "Li", "Qiujia", "" ], [ "Bai", "Junwen", "" ], [ "Sainath", "Tara N.", "" ], [ "Woodland", "Philip C.", "" ] ]
id: 2402.12923
submitter: Anju Rani
authors: Anju Rani, Daniel Ortiz-Arroyo, Petar Durdevic
title: Advancements in Point Cloud-Based 3D Defect Detection and Classification for Industrial Systems: A Comprehensive Survey
comments: 27 pages, 13 figures, 3 tables, review paper
journal-ref: Information Fusion, 2024
doi: 10.1016/j.inffus.2024.102575
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract:
In recent years, 3D point clouds (PCs) have gained significant attention due to their diverse applications across various fields, such as computer vision (CV), condition monitoring (CM), virtual reality, robotics, autonomous driving, etc. Deep learning (DL) has proven effective in leveraging 3D PCs to address various challenges encountered in 2D vision. However, applying deep neural networks (DNNs) to process 3D PCs presents unique challenges. This paper provides an in-depth review of recent advancements in DL-based industrial CM using 3D PCs, with a specific focus on defect shape classification and segmentation within industrial applications. Recognizing the crucial role of these aspects in industrial maintenance, the paper offers insightful observations on the strengths and limitations of the reviewed DL-based PC processing methods. This knowledge synthesis aims to contribute to understanding and enhancing CM processes, particularly within the framework of remaining useful life (RUL), in industrial systems.
[ { "created": "Tue, 20 Feb 2024 11:18:40 GMT", "version": "v1" }, { "created": "Tue, 23 Jul 2024 09:34:45 GMT", "version": "v2" } ]
2024-07-24
[ [ "Rani", "Anju", "" ], [ "Ortiz-Arroyo", "Daniel", "" ], [ "Durdevic", "Petar", "" ] ]
id: 2402.12950
submitter: Zimeng Xiao
authors: Jinjing Shi, Zimeng Xiao, Heyuan Shi, Yu Jiang and Xuelong Li
title: QuanTest: Entanglement-Guided Testing of Quantum Neural Network Systems
comments: This paper has been accepted by TOSEM 2024
journal-ref: ACM Transactions on Software Engineering and Methodology, 2024
doi: 10.1145/3688840
report-no: null
categories: cs.SE cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Quantum Neural Network (QNN) combines the Deep Learning (DL) principle with the fundamental theory of quantum mechanics to achieve machine learning tasks with quantum acceleration. Recently, QNN systems have been found to manifest robustness issues similar to classical DL systems, and there is an urgent need for ways to test their correctness and security. However, QNN systems differ significantly from traditional quantum software and classical DL systems, posing critical challenges for QNN testing. These challenges include the inapplicability of traditional quantum software testing methods to QNN systems due to differences in programming paradigms and decision logic representations, the dependence of quantum test sample generation on perturbation operators, and the absence of effective information in quantum neurons. In this paper, we propose QuanTest, a quantum entanglement-guided adversarial testing framework to uncover potential erroneous behaviors in QNN systems. We design a quantum entanglement adequacy criterion to quantify the entanglement acquired by the input quantum states from the QNN system, along with two similarity metrics to measure the proximity of generated quantum adversarial examples to the original inputs. Subsequently, QuanTest formulates the problem of generating test inputs that maximize the quantum entanglement adequacy and capture incorrect behaviors of the QNN system as a joint optimization problem, and solves it in a gradient-based manner to generate quantum adversarial examples. Experimental results demonstrate that QuanTest possesses the capability to capture erroneous behaviors in QNN systems. The entanglement-guided approach also proves effective in adversarial testing, generating more adversarial examples.
[ { "created": "Tue, 20 Feb 2024 12:11:28 GMT", "version": "v1" }, { "created": "Mon, 26 Aug 2024 08:02:40 GMT", "version": "v2" } ]
2024-08-27
[ [ "Shi", "Jinjing", "" ], [ "Xiao", "Zimeng", "" ], [ "Shi", "Heyuan", "" ], [ "Jiang", "Yu", "" ], [ "Li", "Xuelong", "" ] ]
id: 2402.13195
submitter: Collin Hague
authors: Collin Hague, Nick Kakavitsas, Jincheng Zhang, Chris Beam, Andrew Willis, Artur Wolek
title: Design and Flight Demonstration of a Quadrotor for Urban Mapping and Target Tracking Research
comments: 7 pages, 10 figures, to be presented at IEEE SoutheastCon 2024
journal-ref: SoutheastCon 2024, pp. 559-564
doi: 10.1109/SoutheastCon52093.2024.10500131
report-no: null
categories: cs.RO cs.CV cs.SY eess.SY
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
This paper describes the hardware design and flight demonstration of a small quadrotor with imaging sensors for urban mapping, hazard avoidance, and target tracking research. The vehicle is equipped with five cameras, including two pairs of fisheye stereo cameras that enable a nearly omnidirectional view and a two-axis gimbaled camera. An onboard NVIDIA Jetson Orin Nano computer running the Robot Operating System software is used for data collection. An autonomous tracking behavior was implemented to coordinate the motion of the quadrotor and gimbaled camera to track a moving GPS coordinate. The data collection system was demonstrated through a flight test that tracked a moving GPS-tagged vehicle through a series of roads and parking lots. A map of the environment was reconstructed from the collected images using the Direct Sparse Odometry (DSO) algorithm. The performance of the quadrotor was also characterized by acoustic noise, communication range, battery voltage in hover, and maximum speed tests.
[ { "created": "Tue, 20 Feb 2024 18:06:00 GMT", "version": "v1" }, { "created": "Fri, 15 Mar 2024 18:15:18 GMT", "version": "v2" } ]
2024-05-03
[ [ "Hague", "Collin", "" ], [ "Kakavitsas", "Nick", "" ], [ "Zhang", "Jincheng", "" ], [ "Beam", "Chris", "" ], [ "Willis", "Andrew", "" ], [ "Wolek", "Artur", "" ] ]
id: 2402.13219
submitter: Ammar Abbas M.Sc.
authors: Ammar N. Abbas, Chidera W. Amazu, Joseph Mietkiewicz, Houda Briwa, Andres Alonzo Perez, Gabriele Baldissone, Micaela Demichela, Georgios G. Chasparis, John D. Kelleher, and Maria Chiara Leva
title: Analyzing Operator States and the Impact of AI-Enhanced Decision Support in Control Rooms: A Human-in-the-Loop Specialized Reinforcement Learning Framework for Intervention Strategies
comments: null
journal-ref: International Journal of Human-Computer Interaction, 2024
doi: null
report-no: null
categories: cs.AI cs.HC cs.LG cs.MA cs.SY eess.SY
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
In complex industrial and chemical process control rooms, effective decision-making is crucial for safety and efficiency. The experiments in this paper evaluate the impact and applications of an AI-based decision support system integrated into an improved human-machine interface, using dynamic influence diagrams, a hidden Markov model, and deep reinforcement learning. The enhanced support system aims to reduce operator workload, improve situational awareness, and provide different intervention strategies to the operator, adapted to the current state of both the system and human performance. Such a system can be particularly useful in cases of information overload, when many alarms and inputs are presented within the same time window, or for junior operators during training. A comprehensive cross-data analysis was conducted, involving 47 participants and a diverse range of data sources such as smartwatch metrics, eye-tracking data, process logs, and responses from questionnaires. The results offer interesting insights regarding the effectiveness of the approach in aiding decision-making, decreasing perceived workload, and increasing situational awareness for the scenarios considered. Additionally, the results allow comparison of the styles of information gathering that individual participants adopted when using the system. These findings are particularly relevant for predicting the overall performance of an individual participant and their capacity to successfully handle a plant upset, and the alarms connected to it, using process and human-machine interaction logs in real time. These predictions enable the development of more effective intervention strategies.
[ { "created": "Tue, 20 Feb 2024 18:31:27 GMT", "version": "v1" } ]
2024-08-09
[ [ "Abbas", "Ammar N.", "" ], [ "Amazu", "Chidera W.", "" ], [ "Mietkiewicz", "Joseph", "" ], [ "Briwa", "Houda", "" ], [ "Perez", "Andres Alonzo", "" ], [ "Baldissone", "Gabriele", "" ], [ "Demichela", "Micaela", "" ], [ "Chasparis", "Georgios G.", "" ], [ "Kelleher", "John D.", "" ], [ "Leva", "Maria Chiara", "" ] ]
id: 2402.13287
submitter: Jose Manuel Camacho Rodriguez
authors: William N. Caballero, Jose Manuel Camacho, Tahir Ekin, Roi Naveiro
title: Manipulating hidden-Markov-model inferences by corrupting batch data
comments: 42 pages, 8 figures, 11 tables
journal-ref: Caballero, W. N., Camacho, J. M., Ekin, T., & Naveiro, R. (2024). Manipulating hidden-Markov-model inferences by corrupting batch data. Computers & Operations Research, 162, 106478
doi: 10.1016/j.cor.2023.106478
report-no: null
categories: cs.CR cs.AI cs.LG
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract:
Time-series models typically assume untainted and legitimate streams of data. However, a self-interested adversary may have incentive to corrupt this data, thereby altering a decision maker's inference. Within the broader field of adversarial machine learning, this research provides a novel, probabilistic perspective toward the manipulation of hidden Markov model inferences via corrupted data. In particular, we provision a suite of corruption problems for filtering, smoothing, and decoding inferences leveraging an adversarial risk analysis approach. Multiple stochastic programming models are set forth that incorporate realistic uncertainties and varied attacker objectives. Three general solution methods are developed by alternatively viewing the problem from frequentist and Bayesian perspectives. The efficacy of each method is illustrated via extensive, empirical testing. The developed methods are characterized by their solution quality and computational effort, resulting in a stratification of techniques across varying problem-instance architectures. This research highlights the weaknesses of hidden Markov models under adversarial activity, thereby motivating the need for robustification techniques to ensure their security.
[ { "created": "Mon, 19 Feb 2024 12:22:22 GMT", "version": "v1" } ]
2024-02-22
[ [ "Caballero", "William N.", "" ], [ "Camacho", "Jose Manuel", "" ], [ "Ekin", "Tahir", "" ], [ "Naveiro", "Roi", "" ] ]
id: 2402.13290
submitter: Goonmeet Bajaj
authors: Goonmeet Bajaj, Srinivasan Parthasarathy, Valerie L. Shalin, Amit Sheth
title: Grounding from an AI and Cognitive Science Lens
comments: null
journal-ref: IEEE Intelligent Systems, 2024
doi: 10.1109/MIS.2024.3366669
report-no: null
categories: cs.AI
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Grounding is a challenging problem, requiring a formal definition and different levels of abstraction. This article explores grounding from both cognitive science and machine learning perspectives. It identifies the subtleties of grounding, its significance for collaborative agents, and similarities and differences in grounding approaches in both communities. The article examines the potential of neuro-symbolic approaches tailored for grounding tasks, showcasing how they can more comprehensively address grounding. Finally, we discuss areas for further exploration and development in grounding.
[ { "created": "Mon, 19 Feb 2024 17:44:34 GMT", "version": "v1" } ]
2024-02-22
[ [ "Bajaj", "Goonmeet", "" ], [ "Parthasarathy", "Srinivasan", "" ], [ "Shalin", "Valerie L.", "" ], [ "Sheth", "Amit", "" ] ]
id: 2402.13301
submitter: Manvi Agarwal
authors: Manvi Agarwal (S2A, IDS), Changhong Wang (S2A, IDS), Gaël Richard (S2A, IDS)
title: Structure-informed Positional Encoding for Music Generation
comments: null
journal-ref: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Apr 2024, Seoul, South Korea
doi: null
report-no: null
categories: cs.SD cs.AI eess.AS
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Music generated by deep learning methods often suffers from a lack of coherence and long-term organization. Yet, multi-scale hierarchical structure is a distinctive feature of music signals. To leverage this information, we propose a structure-informed positional encoding framework for music generation with Transformers. We design three variants in terms of absolute, relative and non-stationary positional information. We comprehensively test them on two symbolic music generation tasks: next-timestep prediction and accompaniment generation. As a comparison, we choose multiple baselines from the literature and demonstrate the merits of our methods using several musically-motivated evaluation metrics. In particular, our methods improve the melodic and structural consistency of the generated pieces.
[ { "created": "Tue, 20 Feb 2024 13:41:35 GMT", "version": "v1" }, { "created": "Wed, 28 Feb 2024 12:37:34 GMT", "version": "v2" } ]
2024-02-29
[ [ "Agarwal", "Manvi", "", "S2A, IDS" ], [ "Wang", "Changhong", "", "S2A, IDS" ], [ "Richard", "Gaël", "", "S2A, IDS" ] ]
id: 2402.13306
submitter: Jose Robledo Hernandez
authors: Efren Hernández-Molina, Benjamin Ojeda-Magaña, Jose Guadalupe Robledo-Hernández and Ruben Ruelas
title: Vision System Prototype for Inspection and Monitoring with a Smart Camera
comments: 8 pages, 16 figures, in Spanish language
journal-ref: IEEE Latin America Transactions, vol. 18, no. 09, pp. 1614-1622, September 2020
doi: 10.1109/TLA.2020.9381804
report-no: null
categories: cs.CV eess.IV
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract:
This paper presents the design of an artificial vision system prototype for automatic inspection and monitoring of objects over a conveyor belt and using a Smart camera 2D BOA-INS. The prototype consists of a conveyor belt and an embedded system based on an Arduino Mega card for system control, and it has as main peripherals the smart camera, a direct current motor, a photoelectric sensor, LED illumination and LEDs indicating the status (good or defect) of each evaluated object. The application of the prototype is for educational purposes, so that undergraduate, master and diploma students can simulate a continuous production line, controlled by an embedded system, and perform quality control by monitoring through a visual system and a personal computer. This allows implementing the topics of embedded systems, artificial vision, artificial intelligence, pattern recognition, automatic control, as well as automation of real processes.
[ { "created": "Tue, 20 Feb 2024 18:58:23 GMT", "version": "v1" } ]
2024-02-22
[ [ "Hernández-Molina", "Efren", "" ], [ "Ojeda-Magaña", "Benjamin", "" ], [ "Robledo-Hernández", "Jose Guadalupe", "" ], [ "Ruelas", "Ruben", "" ] ]
id: 2402.13368
submitter: Md Rifat Arefin
authors: Md Rifat Arefin, Yan Zhang, Aristide Baratin, Francesco Locatello, Irina Rish, Dianbo Liu, Kenji Kawaguchi
title: Unsupervised Concept Discovery Mitigates Spurious Correlations
comments: null
journal-ref: ICML 2024
doi: null
report-no: null
categories: cs.LG cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Models prone to spurious correlations in training data often produce brittle predictions and introduce unintended biases. Addressing this challenge typically involves methods relying on prior knowledge and group annotation to remove spurious correlations, which may not be readily available in many applications. In this paper, we establish a novel connection between unsupervised object-centric learning and mitigation of spurious correlations. Instead of directly inferring subgroups with varying correlations with labels, our approach focuses on discovering concepts: discrete ideas that are shared across input samples. Leveraging existing object-centric representation learning, we introduce CoBalT: a concept balancing technique that effectively mitigates spurious correlations without requiring human labeling of subgroups. Evaluations across benchmark datasets for sub-population shifts demonstrate superior or competitive performance compared to state-of-the-art baselines, without the need for group annotation. Code is available at https://github.com/rarefin/CoBalT.
[ { "created": "Tue, 20 Feb 2024 20:48:00 GMT", "version": "v1" }, { "created": "Tue, 16 Jul 2024 17:54:43 GMT", "version": "v2" } ]
2024-07-31
[ [ "Arefin", "Md Rifat", "" ], [ "Zhang", "Yan", "" ], [ "Baratin", "Aristide", "" ], [ "Locatello", "Francesco", "" ], [ "Rish", "Irina", "" ], [ "Liu", "Dianbo", "" ], [ "Kawaguchi", "Kenji", "" ] ]
id: 2402.13432
submitter: Yanis Labrak
authors: Yanis Labrak, Adrien Bazoge, Oumaima El Khettari, Mickael Rouvier, Pacome Constant dit Beaufils, Natalia Grabar, Beatrice Daille, Solen Quiniou, Emmanuel Morin, Pierre-Antoine Gourraud, Richard Dufour
title: DrBenchmark: A Large Language Understanding Evaluation Benchmark for French Biomedical Domain
comments: Accepted at LREC-COLING 2024
journal-ref: Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
doi: null
report-no: null
categories: cs.CL cs.AI cs.LG
license: http://creativecommons.org/publicdomain/zero/1.0/
abstract:
The biomedical domain has sparked a significant interest in the field of Natural Language Processing (NLP), which has seen substantial advancements with pre-trained language models (PLMs). However, comparing these models has proven challenging due to variations in evaluation protocols across different models. A fair solution is to aggregate diverse downstream tasks into a benchmark, allowing for the assessment of intrinsic qualities of PLMs from various perspectives. Although still limited to a few languages, this initiative has been undertaken in the biomedical field, notably for English and Chinese. This limitation hampers the evaluation of the latest French biomedical models, as they are either assessed on a minimal number of tasks with non-standardized protocols or evaluated using general downstream tasks. To bridge this research gap and account for the unique sensitivities of French, we present the first-ever publicly available French biomedical language understanding benchmark called DrBenchmark. It encompasses 20 diversified tasks, including named-entity recognition, part-of-speech tagging, question-answering, semantic textual similarity, and classification. We evaluate 8 state-of-the-art pre-trained masked language models (MLMs) on general and biomedical-specific data, as well as English-specific MLMs to assess their cross-lingual capabilities. Our experiments reveal that no single model excels across all tasks, while generalist models are sometimes still competitive.
[ { "created": "Tue, 20 Feb 2024 23:54:02 GMT", "version": "v1" } ]
2024-06-11
[ [ "Labrak", "Yanis", "" ], [ "Bazoge", "Adrien", "" ], [ "Khettari", "Oumaima El", "" ], [ "Rouvier", "Mickael", "" ], [ "Beaufils", "Pacome Constant dit", "" ], [ "Grabar", "Natalia", "" ], [ "Daille", "Beatrice", "" ], [ "Quiniou", "Solen", "" ], [ "Morin", "Emmanuel", "" ], [ "Gourraud", "Pierre-Antoine", "" ], [ "Dufour", "Richard", "" ] ]
id: 2402.13452
submitter: Vijeta Deshpande
authors: Vijeta Deshpande, Minhwa Lee, Zonghai Yao, Zihao Zhang, Jason Brian Gibbons, Hong Yu
title: LocalTweets to LocalHealth: A Mental Health Surveillance Framework Based on Twitter Data
comments: null
journal-ref: LREC-COLING 2024
doi: null
report-no: null
categories: cs.SI cs.CL cs.LG
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract:
Prior research on Twitter (now X) data has provided positive evidence of its utility in developing supplementary health surveillance systems. In this study, we present a new framework to surveil public health, focusing on mental health (MH) outcomes. We hypothesize that locally posted tweets are indicative of local MH outcomes and collect tweets posted from 765 neighborhoods (census block groups) in the USA. We pair these tweets from each neighborhood with the corresponding MH outcome reported by the Center for Disease Control (CDC) to create a benchmark dataset, LocalTweets. With LocalTweets, we present the first population-level evaluation task for Twitter-based MH surveillance systems. We then develop an efficient and effective method, LocalHealth, for predicting MH outcomes based on LocalTweets. When used with GPT3.5, LocalHealth achieves the highest F1-score and accuracy of 0.7429 and 79.78\%, respectively, a 59\% improvement in F1-score over the GPT3.5 in zero-shot setting. We also utilize LocalHealth to extrapolate CDC's estimates to proxy unreported neighborhoods, achieving an F1-score of 0.7291. Our work suggests that Twitter data can be effectively leveraged to simulate neighborhood-level MH outcomes.
[ { "created": "Wed, 21 Feb 2024 01:11:28 GMT", "version": "v1" }, { "created": "Tue, 26 Mar 2024 17:59:14 GMT", "version": "v2" } ]
2024-03-27
[ [ "Deshpande", "Vijeta", "" ], [ "Lee", "Minhwa", "" ], [ "Yao", "Zonghai", "" ], [ "Zhang", "Zihao", "" ], [ "Gibbons", "Jason Brian", "" ], [ "Yu", "Hong", "" ] ]
2402.13542
Yue Yu
Lingxi Zhang, Yue Yu, Kuan Wang, Chao Zhang
ARL2: Aligning Retrievers for Black-box Large Language Models via Self-guided Adaptive Relevance Labeling
ACL 2024
ACL 2024
null
null
cs.CL cs.AI cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Retrieval-augmented generation enhances large language models (LLMs) by incorporating relevant information from external knowledge sources. This enables LLMs to adapt to specific domains and mitigate hallucinations in knowledge-intensive tasks. However, existing retrievers are often misaligned with LLMs due to their separate training processes and the black-box nature of LLMs. To address this challenge, we propose ARL2, a retriever learning technique that harnesses LLMs as labelers. ARL2 leverages LLMs to annotate and score relevant evidence, enabling the retriever to be learned from robust LLM supervision. Furthermore, ARL2 uses an adaptive self-training strategy for curating high-quality and diverse relevance data, which can effectively reduce the annotation cost. Extensive experiments demonstrate the effectiveness of ARL2, achieving accuracy improvements of 5.4% on NQ and 4.6% on MMLU compared to the state-of-the-art methods. Additionally, ARL2 exhibits robust transfer learning capabilities and strong zero-shot generalization abilities. Our code will be published at \url{https://github.com/zhanglingxi-cs/ARL2}.
[ { "created": "Wed, 21 Feb 2024 05:41:34 GMT", "version": "v1" }, { "created": "Tue, 4 Jun 2024 05:17:24 GMT", "version": "v2" } ]
2024-06-05
[ [ "Zhang", "Lingxi", "" ], [ "Yu", "Yue", "" ], [ "Wang", "Kuan", "" ], [ "Zhang", "Chao", "" ] ]
2402.13573
Nayan Saxena
Ethan Smith, Nayan Saxena, Aninda Saha
ToDo: Token Downsampling for Efficient Generation of High-Resolution Images
null
2024, Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Attention mechanisms have been crucial for image diffusion models; however, their quadratic computational complexity limits the sizes of images we can process within reasonable time and memory constraints. This paper investigates the importance of dense attention in generative image models, which often contain redundant features, making them suitable for sparser attention mechanisms. We propose ToDo, a novel training-free method that relies on token downsampling of key and value tokens to accelerate Stable Diffusion inference by up to 2x for common sizes and up to 4.5x or more for high resolutions like 2048x2048. We demonstrate that our approach outperforms previous methods in balancing efficient throughput and fidelity.
[ { "created": "Wed, 21 Feb 2024 07:10:28 GMT", "version": "v1" }, { "created": "Wed, 28 Feb 2024 18:31:50 GMT", "version": "v2" }, { "created": "Wed, 8 May 2024 05:09:48 GMT", "version": "v3" } ]
2024-05-09
[ [ "Smith", "Ethan", "" ], [ "Saxena", "Nayan", "" ], [ "Saha", "Aninda", "" ] ]
2402.13615
Tomas Veloz
Tomas Veloz, Olha Sobetska
Analyzing the Conjunction Fallacy as a Fact
book chapter
In: Veloz, Khrennikov, Toni, Castillo (eds) Trends and Challenges in Cognitive Modeling. STEAM-H: Springer (2023)
null
null
cs.AI math.PR nlin.AO
http://creativecommons.org/licenses/by/4.0/
Since the seminal paper by Tversky and Kahneman, the conjunction fallacy has been the subject of multiple debates and has become a fundamental challenge for cognitive theories of decision-making. In this article, we take a rather uncommon perspective on this phenomenon. Instead of trying to explain the nature or causes of the conjunction fallacy (intensional definition), we analyze its range of factual possibilities (extensional definition). We show that the majority of research on the conjunction fallacy, according to our sample of reviewed experiments covering the literature between 1983 and 2016, has focused on a narrow part of the a priori factual possibilities, implying that explanations of the conjunction fallacy are fundamentally biased by the short scope of possibilities explored. The latter is a rather curious aspect of the research evolution in the conjunction fallacy, considering that its very nature is motivated by extensional considerations.
[ { "created": "Wed, 21 Feb 2024 08:40:04 GMT", "version": "v1" } ]
2024-02-22
[ [ "Veloz", "Tomas", "" ], [ "Sobetska", "Olha", "" ] ]
2402.13651
Mikolaj Czerkawski
Mikolaj Czerkawski and Carmine Clemente and Craig Michie and Christos Tachtatzis
Robustness of Deep Neural Networks for Micro-Doppler Radar Classification
null
International Radar Symposium 2022
10.23919/IRS54158.2022.9905017
null
cs.CV cs.LG eess.SP
http://creativecommons.org/licenses/by/4.0/
With the great capabilities of deep classifiers for radar data processing come the risks of learning dataset-specific features that do not generalize well. In this work, the robustness of two deep convolutional architectures, trained and tested on the same data, is evaluated. When standard training practice is followed, both classifiers exhibit sensitivity to subtle temporal shifts of the input representation, an augmentation that carries minimal semantic content. Furthermore, the models are extremely susceptible to adversarial examples. Both small temporal shifts and adversarial examples are a result of a model overfitting on features that do not generalize well. As a remedy, it is shown that training on adversarial examples and temporally augmented samples can reduce this effect and lead to models that generalize better. Finally, models operating on the cadence-velocity diagram representation rather than Doppler-time are demonstrated to be naturally more immune to adversarial examples.
[ { "created": "Wed, 21 Feb 2024 09:37:17 GMT", "version": "v1" }, { "created": "Thu, 22 Feb 2024 07:22:51 GMT", "version": "v2" } ]
2024-02-23
[ [ "Czerkawski", "Mikolaj", "" ], [ "Clemente", "Carmine", "" ], [ "Michie", "Craig", "" ], [ "Tachtatzis", "Christos", "" ] ]
2402.13718
Xinrong Zhang
Xinrong Zhang and Yingfa Chen and Shengding Hu and Zihang Xu and Junhao Chen and Moo Khai Hao and Xu Han and Zhen Leng Thai and Shuo Wang and Zhiyuan Liu and Maosong Sun
$\infty$Bench: Extending Long Context Evaluation Beyond 100K Tokens
null
2023.12.15ARR
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Processing and reasoning over long contexts is crucial for many practical applications of Large Language Models (LLMs), such as document comprehension and agent construction. Despite recent strides in making LLMs process contexts with more than 100K tokens, there is currently a lack of a standardized benchmark to evaluate this long-context capability. Existing public benchmarks typically focus on contexts around 10K tokens, limiting the assessment and comparison of LLMs in processing longer contexts. In this paper, we propose $\infty$Bench, the first LLM benchmark featuring an average data length surpassing 100K tokens. $\infty$Bench comprises synthetic and realistic tasks spanning diverse domains, presented in both English and Chinese. The tasks in $\infty$Bench are designed to require a thorough understanding of long dependencies in contexts, so that simply retrieving a limited number of passages from the contexts is not sufficient to solve them. In our experiments, based on $\infty$Bench, we evaluate state-of-the-art proprietary and open-source LLMs tailored for processing long contexts. The results indicate that existing long-context LLMs still require significant advancements to effectively process contexts of 100K+ tokens. We further present three intriguing analyses regarding the behavior of LLMs processing long contexts.
[ { "created": "Wed, 21 Feb 2024 11:30:29 GMT", "version": "v1" }, { "created": "Thu, 22 Feb 2024 03:50:24 GMT", "version": "v2" }, { "created": "Sat, 24 Feb 2024 15:07:55 GMT", "version": "v3" } ]
2024-02-27
[ [ "Zhang", "Xinrong", "" ], [ "Chen", "Yingfa", "" ], [ "Hu", "Shengding", "" ], [ "Xu", "Zihang", "" ], [ "Chen", "Junhao", "" ], [ "Hao", "Moo Khai", "" ], [ "Han", "Xu", "" ], [ "Thai", "Zhen Leng", "" ], [ "Wang", "Shuo", "" ], [ "Liu", "Zhiyuan", "" ], [ "Sun", "Maosong", "" ] ]
2402.13782
Vincent Derkinderen
Vincent Derkinderen, Robin Manhaeve, Pedro Zuidberg Dos Martires, Luc De Raedt
Semirings for Probabilistic and Neuro-Symbolic Logic Programming
null
International Journal of Approximate Reasoning (2024): 109130
10.1016/j.ijar.2024.109130
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The field of probabilistic logic programming (PLP) focuses on integrating probabilistic models into programming languages based on logic. Over the past 30 years, numerous languages and frameworks have been developed for modeling, inference and learning in probabilistic logic programs. While PLP originally focused on discrete probability, more recent approaches have incorporated continuous distributions as well as neural networks, effectively yielding neural-symbolic methods. We provide a unified algebraic perspective on PLP, showing that many, if not most, of the extensions of PLP can be cast within a common algebraic logic programming framework, in which facts are labeled with elements of a semiring and disjunction and conjunction are replaced by addition and multiplication. This holds not only for the PLP variations themselves but also for the underlying execution mechanism, which is based on (algebraic) model counting.
[ { "created": "Wed, 21 Feb 2024 13:06:52 GMT", "version": "v1" } ]
2024-02-22
[ [ "Derkinderen", "Vincent", "" ], [ "Manhaeve", "Robin", "" ], [ "Martires", "Pedro Zuidberg Dos", "" ], [ "De Raedt", "Luc", "" ] ]
2402.13852
Azmine Toushik Wasi
Azmine Toushik Wasi
Neural Control System for Continuous Glucose Monitoring and Maintenance
9 Pages, 4 figures, ICLR 2024 Tiny Papers Track https://openreview.net/forum?id=Te4P3Cn54g
The Second Tiny Papers Track at ICLR 2024
null
null
cs.LG cs.AI cs.NE cs.SY eess.SY stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
Precise glucose level monitoring is critical for people with diabetes to avoid serious complications. While there are several methods for continuous glucose level monitoring, research on maintenance devices is limited. To address this gap, we provide a novel neural control system for continuous glucose monitoring and management that uses differential predictive control. Our approach, led by a sophisticated neural policy and differentiable modeling, constantly adjusts insulin supply in real-time, thereby improving glucose level optimization in the body. This end-to-end method maximizes efficiency, providing personalized care and improved health outcomes, as confirmed by empirical evidence. Code and data are available at: \url{https://github.com/azminewasi/NeuralCGMM}.
[ { "created": "Wed, 21 Feb 2024 14:56:36 GMT", "version": "v1" }, { "created": "Tue, 5 Mar 2024 16:32:24 GMT", "version": "v2" }, { "created": "Fri, 7 Jun 2024 11:16:12 GMT", "version": "v3" } ]
2024-06-10
[ [ "Wasi", "Azmine Toushik", "" ] ]
2402.13897
Lo\"ic Rakotoson
Lo\"ic Rakotoson, Sylvain Massip, Fr\'ejus A. A. Laleye
Science Checker Reloaded: A Bidirectional Paradigm for Transparency and Logical Reasoning
6 pages, 3 figures
INTERNET 2024, The Sixteenth International Conference on Evolving Internet, volume 16, pages 6-11
null
null
cs.IR cs.AI cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Information retrieval is a rapidly evolving field. However, it still faces significant limitations when handling the vast amounts of scientific and industrial information, such as semantic divergence and vocabulary gaps in sparse retrieval, low precision and lack of interpretability in semantic search, or hallucination and outdated information in generative models. In this paper, we introduce a two-block approach to tackle these hurdles for long documents. The first block enhances language understanding in sparse retrieval through query expansion to retrieve relevant documents. The second block deepens the result by providing comprehensive and informative answers to complex questions using only the information spread throughout the long document, enabling bidirectional engagement. At various stages of the pipeline, intermediate results are presented to users to facilitate understanding of the system's reasoning. We believe this bidirectional approach brings significant advancements in terms of transparency, logical thinking, and comprehensive understanding in the field of scientific information retrieval.
[ { "created": "Wed, 21 Feb 2024 16:09:25 GMT", "version": "v1" }, { "created": "Thu, 14 Mar 2024 00:21:09 GMT", "version": "v2" } ]
2024-03-15
[ [ "Rakotoson", "Loïc", "" ], [ "Massip", "Sylvain", "" ], [ "Laleye", "Fréjus A. A.", "" ] ]
2402.13914
Przemyslaw Biecek
Przemyslaw Biecek, Wojciech Samek
Position: Explain to Question not to Justify
null
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:3996-4006, 2024
null
null
cs.AI cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Explainable Artificial Intelligence (XAI) is a young but very promising field of research. Unfortunately, progress in this field is currently slowed down by divergent and incompatible goals. We separate the various threads tangled within the area of XAI into two complementary cultures of human/value-oriented explanations (BLUE XAI) and model/validation-oriented explanations (RED XAI). This position paper argues that the area of RED XAI is currently under-explored, i.e., more methods for explainability are desperately needed to question models (e.g., extracting knowledge from well-performing models as well as spotting and fixing bugs in faulty models), and that the area of RED XAI hides great opportunities and potential for important research necessary to ensure the safety of AI systems. We conclude this paper by presenting promising challenges in this area.
[ { "created": "Wed, 21 Feb 2024 16:30:24 GMT", "version": "v1" }, { "created": "Fri, 28 Jun 2024 08:37:28 GMT", "version": "v2" } ]
2024-07-30
[ [ "Biecek", "Przemyslaw", "" ], [ "Samek", "Wojciech", "" ] ]
2402.14033
Yongquan He
Yongquan He and Zihan Wang and Peng Zhang and Zhaopeng Tu and Zhaochun Ren
VN Network: Embedding Newly Emerging Entities with Virtual Neighbors
10 pages, 5 figures
CIKM (2020) 505-514
10.1145/3340531.3411865
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Embedding entities and relations into continuous vector spaces has attracted a surge of interest in recent years. Most embedding methods assume that all test entities are available during training, which makes it time-consuming to retrain embeddings for newly emerging entities. To address this issue, recent works apply graph neural networks to the existing neighbors of unseen entities. In this paper, we propose a novel framework, namely the Virtual Neighbor (VN) network, to address three key challenges. Firstly, to reduce the neighbor sparsity problem, we introduce the concept of virtual neighbors inferred by rules, and we assign soft labels to these neighbors by solving a rule-constrained problem rather than simply regarding them as unquestionably true. Secondly, many existing methods only use one-hop or two-hop neighbors for aggregation and ignore the distant information that may be helpful. Instead, we identify both logic and symmetric path rules to capture complex patterns. Finally, instead of a one-time injection of rules, we employ an iterative learning scheme between the embedding method and virtual neighbor prediction to capture the interactions between them. Experimental results on two knowledge graph completion tasks demonstrate that our VN network significantly outperforms state-of-the-art baselines. Furthermore, results on Subject/Object-R show that our proposed VN network is highly robust to the neighbor sparsity problem.
[ { "created": "Wed, 21 Feb 2024 03:04:34 GMT", "version": "v1" } ]
2024-02-23
[ [ "He", "Yongquan", "" ], [ "Wang", "Zihan", "" ], [ "Zhang", "Peng", "" ], [ "Tu", "Zhaopeng", "" ], [ "Ren", "Zhaochun", "" ] ]
2402.14147
Tzu-Sheng Kuo
Tzu-Sheng Kuo, Aaron Halfaker, Zirui Cheng, Jiwoo Kim, Meng-Hsin Wu, Tongshuang Wu, Kenneth Holstein, Haiyi Zhu
Wikibench: Community-Driven Data Curation for AI Evaluation on Wikipedia
null
Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI '24)
10.1145/3613904.3642278
null
cs.HC cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
AI tools are increasingly deployed in community contexts. However, datasets used to evaluate AI are typically created by developers and annotators outside a given community, which can yield misleading conclusions about AI performance. How might we empower communities to drive the intentional design and curation of evaluation datasets for AI that impacts them? We investigate this question on Wikipedia, an online community with multiple AI-based content moderation tools deployed. We introduce Wikibench, a system that enables communities to collaboratively curate AI evaluation datasets, while navigating ambiguities and differences in perspective through discussion. A field study on Wikipedia shows that datasets curated using Wikibench can effectively capture community consensus, disagreement, and uncertainty. Furthermore, study participants used Wikibench to shape the overall data curation process, including refining label definitions, determining data inclusion criteria, and authoring data statements. Based on our findings, we propose future directions for systems that support community-driven data curation.
[ { "created": "Wed, 21 Feb 2024 22:10:21 GMT", "version": "v1" } ]
2024-02-23
[ [ "Kuo", "Tzu-Sheng", "" ], [ "Halfaker", "Aaron", "" ], [ "Cheng", "Zirui", "" ], [ "Kim", "Jiwoo", "" ], [ "Wu", "Meng-Hsin", "" ], [ "Wu", "Tongshuang", "" ], [ "Holstein", "Kenneth", "" ], [ "Zhu", "Haiyi", "" ] ]
2402.14340
Duksu Kim
Sangwon Choi, Daejune Choi, Duksu Kim
TIE-KD: Teacher-Independent and Explainable Knowledge Distillation for Monocular Depth Estimation
13 pages, 8 figures, under review for a journal
Image and Vision Computing, 148 (2024), 105110
10.1016/j.imavis.2024.105110
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Monocular depth estimation (MDE) is essential for numerous applications yet is impeded by the substantial computational demands of accurate deep learning models. To mitigate this, we introduce a novel Teacher-Independent Explainable Knowledge Distillation (TIE-KD) framework that streamlines the knowledge transfer from complex teacher models to compact student networks, eliminating the need for architectural similarity. The cornerstone of TIE-KD is the Depth Probability Map (DPM), an explainable feature map that interprets the teacher's output, enabling feature-based knowledge distillation solely from the teacher's response. This approach allows for efficient student learning, leveraging the strengths of feature-based distillation. Extensive evaluation on the KITTI dataset indicates that TIE-KD not only outperforms conventional response-based KD methods but also demonstrates consistent efficacy across diverse teacher and student architectures. The robustness and adaptability of TIE-KD underscore its potential for applications requiring efficient and interpretable models, affirming its practicality for real-world deployment.
[ { "created": "Thu, 22 Feb 2024 07:17:30 GMT", "version": "v1" } ]
2024-07-16
[ [ "Choi", "Sangwon", "" ], [ "Choi", "Daejune", "" ], [ "Kim", "Duksu", "" ] ]
2402.14346
Francesco Malandrino
Francesco Malandrino and Giuseppe Di Giacomo and Marco Levorato and Carla Fabiana Chiasserini
Dependable Distributed Training of Compressed Machine Learning Models
null
IEEE WoWMoM 2024
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The existing work on the distributed training of machine learning (ML) models has consistently overlooked the distribution of the achieved learning quality, focusing instead on its average value. This leads to poor dependability of the resulting ML models, whose performance may be much worse than expected. We fill this gap by proposing DepL, a framework for dependable learning orchestration, able to make high-quality, efficient decisions on (i) the data to leverage for learning, (ii) the models to use and when to switch among them, and (iii) the clusters of nodes, and the resources thereof, to exploit. For concreteness, we consider as possible available models a full DNN and its compressed versions. Unlike previous studies, DepL guarantees that a target learning quality is reached with a target probability, while keeping the training cost at a minimum. We prove that DepL has a constant competitive ratio and polynomial complexity, and show that it outperforms the state-of-the-art by over 27% and closely matches the optimum.
[ { "created": "Thu, 22 Feb 2024 07:24:26 GMT", "version": "v1" } ]
2024-02-23
[ [ "Malandrino", "Francesco", "" ], [ "Di Giacomo", "Giuseppe", "" ], [ "Levorato", "Marco", "" ], [ "Chiasserini", "Carla Fabiana", "" ] ]
2402.14424
Song Tong
Song Tong, Kai Mao, Zhen Huang, Yukun Zhao, Kaiping Peng
Automating psychological hypothesis generation with AI: when large language models meet causal graph
null
Humanities and Social Sciences Communications, (2024) 11:896
10.1057/s41599-024-03407-5
null
cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
Leveraging the synergy between causal knowledge graphs and a large language model (LLM), our study introduces a groundbreaking approach for computational hypothesis generation in psychology. We analyzed 43,312 psychology articles using an LLM to extract causal relation pairs. This analysis produced a specialized causal graph for psychology. Applying link prediction algorithms, we generated 130 potential psychological hypotheses focusing on `well-being', then compared them against research ideas conceived by doctoral scholars and those produced solely by the LLM. Interestingly, our combined approach of an LLM and causal graphs mirrored expert-level insights in terms of novelty, clearly surpassing the LLM-only hypotheses (t(59) = 3.34, p=0.007 and t(59) = 4.32, p<0.001, respectively). This alignment was further corroborated using deep semantic analysis. Our results show that combining LLMs with machine learning techniques such as causal knowledge graphs can revolutionize automated discovery in psychology, extracting novel insights from the extensive literature. This work stands at the crossroads of psychology and artificial intelligence, championing a new, enriched paradigm for data-driven hypothesis generation in psychological research.
[ { "created": "Thu, 22 Feb 2024 10:12:16 GMT", "version": "v1" }, { "created": "Sun, 17 Mar 2024 04:14:27 GMT", "version": "v2" }, { "created": "Tue, 16 Jul 2024 03:12:45 GMT", "version": "v3" } ]
2024-08-19
[ [ "Tong", "Song", "" ], [ "Mao", "Kai", "" ], [ "Huang", "Zhen", "" ], [ "Zhao", "Yukun", "" ], [ "Peng", "Kaiping", "" ] ]
2402.14473
Jiajie Su
Jiajie Su, Chaochao Chen, Zibin Lin, Xi Li, Weiming Liu, and Xiaolin Zheng
Personalized Behavior-Aware Transformer for Multi-Behavior Sequential Recommendation
null
Proceedings of the 31st ACM International Conference on Multimedia. 2023: 6321-6331
null
null
cs.IR cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sequential Recommendation (SR) captures users' dynamic preferences by modeling how users transit among items. However, SR models that utilize only a single type of behavior interaction data encounter performance degradation when the sequences are short. To tackle this problem, we focus on Multi-Behavior Sequential Recommendation (MBSR) in this paper, which aims to leverage time-evolving heterogeneous behavioral dependencies for better exploring users' potential intents on the target behavior. Solving MBSR is challenging. On the one hand, users exhibit diverse multi-behavior patterns due to personal characteristics. On the other hand, there exists comprehensive co-influence between behavior correlations and item collaborations, the intensity of which is deeply affected by temporal factors. To tackle these challenges, we propose a Personalized Behavior-Aware Transformer framework (PBAT) for the MBSR problem, which models personalized patterns and multifaceted sequential collaborations in a novel way to boost recommendation performance. First, PBAT develops a personalized behavior pattern generator in the representation layer, which extracts dynamic and discriminative behavior patterns for sequential learning. Second, PBAT reforms the self-attention layer with a behavior-aware collaboration extractor, which introduces a fused behavior-aware attention mechanism for incorporating both behavioral and temporal impacts into collaborative transitions. We conduct experiments on three benchmark datasets and the results demonstrate the effectiveness and interpretability of our framework. Our implementation code is released at https://github.com/TiliaceaeSU/PBAT.
[ { "created": "Thu, 22 Feb 2024 12:03:21 GMT", "version": "v1" } ]
2024-02-23
[ [ "Su", "Jiajie", "" ], [ "Chen", "Chaochao", "" ], [ "Lin", "Zibin", "" ], [ "Li", "Xi", "" ], [ "Liu", "Weiming", "" ], [ "Zheng", "Xiaolin", "" ] ]
2402.14741
Daniel Capell\'an-Mart\'in Mr.
Daniel Capell\'an-Mart\'in, Abhijeet Parida, Juan J. G\'omez-Valverde, Ramon Sanchez-Jacob, Pooneh Roshanitabrizi, Marius G. Linguraru, Mar\'ia J. Ledesma-Carbayo, Syed M. Anwar
Zero-Shot Pediatric Tuberculosis Detection in Chest X-Rays using Self-Supervised Learning
5 pages, 3 figures, 2 tables. This paper has been accepted at IEEE ISBI 2024
21st IEEE International Symposium on Biomedical Imaging (ISBI 2024), Athens, Greece
10.1109/ISBI56570.2024.10635520
null
cs.CV eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Tuberculosis (TB) remains a significant global health challenge, with pediatric cases posing a major concern. The World Health Organization (WHO) advocates for chest X-rays (CXRs) for TB screening. However, visual interpretation by radiologists can be subjective, time-consuming and prone to error, especially in pediatric TB. Artificial intelligence (AI)-driven computer-aided detection (CAD) tools, especially those utilizing deep learning, show promise in enhancing lung disease detection. However, challenges include data scarcity and lack of generalizability. In this context, we propose a novel self-supervised paradigm leveraging Vision Transformers (ViT) for improved TB detection in CXR, enabling zero-shot pediatric TB detection. We demonstrate improvements in TB detection performance ($\sim$12.7% and $\sim$13.4% top AUC/AUPR gains in adults and children, respectively) when conducting self-supervised pre-training compared to fully-supervised (i.e., non pre-trained) ViT models, achieving top performances of 0.959 AUC and 0.962 AUPR in adult TB detection, and 0.697 AUC and 0.607 AUPR in zero-shot pediatric TB detection. As a result, this work demonstrates that self-supervised learning on adult CXRs effectively extends to challenging downstream tasks such as pediatric TB detection, where data are scarce.
[ { "created": "Thu, 22 Feb 2024 17:55:18 GMT", "version": "v1" } ]
2024-08-29
[ [ "Capellán-Martín", "Daniel", "" ], [ "Parida", "Abhijeet", "" ], [ "Gómez-Valverde", "Juan J.", "" ], [ "Sanchez-Jacob", "Ramon", "" ], [ "Roshanitabrizi", "Pooneh", "" ], [ "Linguraru", "Marius G.", "" ], [ "Ledesma-Carbayo", "María J.", "" ], [ "Anwar", "Syed M.", "" ] ]
2402.14743
\c{S}aziye Bet\"ul \"Ozate\c{s}
\c{S}aziye Bet\"ul \"Ozate\c{s}, Tar{\i}k Emre T{\i}ra\c{s}, Efe Eren Gen\c{c}, Esma Fat{\i}ma Bilgin Ta\c{s}demir
Dependency Annotation of Ottoman Turkish with Multilingual BERT
9 pages, 5 figures. Accepted to LAW-XVIII
\c{S}aziye Bet\"ul \"Ozate\c{s}, Tar{\i}k T{\i}ra\c{s}, Efe Gen\c{c}, and Esma Bilgin Ta\c{s}demir. 2024. Dependency Annotation of Ottoman Turkish with Multilingual BERT. LAW-XVIII, pages 188-196, St. Julians, Malta
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
This study introduces a pretrained large language model-based annotation methodology for the first dependency treebank in Ottoman Turkish. Our experimental results show that our iterative procedure of i) pseudo-annotating data using a multilingual BERT-based parsing model, ii) manually correcting the pseudo-annotations, and iii) fine-tuning the parsing model with the corrected annotations speeds up and simplifies the challenging dependency annotation process. The resulting treebank, which will be part of the Universal Dependencies (UD) project, will facilitate automated analysis of Ottoman Turkish documents, unlocking the linguistic richness embedded in this historical heritage.
[ { "created": "Thu, 22 Feb 2024 17:58:50 GMT", "version": "v1" }, { "created": "Thu, 22 Aug 2024 11:29:42 GMT", "version": "v2" } ]
2024-08-23
[ [ "Özateş", "Şaziye Betül", "" ], [ "Tıraş", "Tarık Emre", "" ], [ "Genç", "Efe Eren", "" ], [ "Taşdemir", "Esma Fatıma Bilgin", "" ] ]
2402.14810
Xueyi Liu
Xueyi Liu, Li Yi
GeneOH Diffusion: Towards Generalizable Hand-Object Interaction Denoising via Denoising Diffusion
Accepted to ICLR 2024. Project website: https://meowuu7.github.io/GeneOH-Diffusion/; Huggingface Demo: https://huggingface.co/spaces/xymeow7/gene-hoi-denoising; Code: https://github.com/Meowuu7/GeneOH-Diffusion
ICLR 2024
null
null
cs.CV cs.AI cs.GR cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
In this work, we tackle the challenging problem of denoising hand-object interactions (HOI). Given an erroneous interaction sequence, the objective is to refine the incorrect hand trajectory to remove interaction artifacts for a perceptually realistic sequence. This challenge involves intricate interaction noise, including unnatural hand poses and incorrect hand-object relations, alongside the necessity for robust generalization to new interactions and diverse noise patterns. We tackle those challenges through a novel approach, GeneOH Diffusion, incorporating two key designs: an innovative contact-centric HOI representation named GeneOH and a new domain-generalizable denoising scheme. The contact-centric representation GeneOH informatively parameterizes the HOI process, facilitating enhanced generalization across various HOI scenarios. The new denoising scheme consists of a canonical denoising model trained to project noisy data samples from a whitened noise space to a clean data manifold and a "denoising via diffusion" strategy which can handle input trajectories with various noise patterns by first diffusing them to align with the whitened noise space and cleaning via the canonical denoiser. Extensive experiments on four benchmarks with significant domain variations demonstrate the superior effectiveness of our method. GeneOH Diffusion also shows promise for various downstream applications. Project website: https://meowuu7.github.io/GeneOH-Diffusion/.
[ { "created": "Thu, 22 Feb 2024 18:59:21 GMT", "version": "v1" } ]
2024-02-23
[ [ "Liu", "Xueyi", "" ], [ "Yi", "Li", "" ] ]
2402.14846
Grgur Kova\v{c}
Grgur Kova\v{c}, R\'emy Portelas, Masataka Sawayama, Peter Ford Dominey, Pierre-Yves Oudeyer
Stick to your Role! Stability of Personal Values Expressed in Large Language Models
The project website and code are available at https://sites.google.com/view/llmvaluestability Published in PLOS ONE ( https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0309114 ), and a shorter version at CogSci 24 ( https://escholarship.org/uc/item/7w4823c6 )
PLOS ONE, August 2024
10.1371/journal.pone.0309114
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The standard way to study Large Language Models (LLMs) with benchmarks or psychology questionnaires is to provide many different queries from similar minimal contexts (e.g. multiple choice questions). However, due to LLMs' highly context-dependent nature, conclusions from such minimal-context evaluations may provide little information about the model's behavior in deployment (where it will be exposed to many new contexts). We argue that context-dependence (specifically, value stability) should be studied as a specific property of LLMs and used as another dimension of LLM comparison (alongside others such as cognitive abilities, knowledge, or model size). We present a case study on the stability of value expression over different contexts (simulated conversations on different topics), as measured using a standard psychology questionnaire (PVQ) and on behavioral downstream tasks. Reusing methods from psychology, we study Rank-order stability on the population (interpersonal) level, and Ipsative stability on the individual (intrapersonal) level. We consider two settings (with and without instructing LLMs to simulate particular personas), two simulated populations, and three downstream tasks. We observe consistent trends in the stability of models and model families - Mixtral, Mistral, GPT-3.5 and Qwen families are more stable than LLaMa-2 and Phi. The consistency of these trends implies that some models exhibit higher value stability than others, and that stability can be estimated with the set of introduced methodological tools. When instructed to simulate particular personas, LLMs exhibit low Rank-order stability, which further diminishes with conversation length. This highlights the need for future research on LLMs that coherently simulate different personas. This paper provides a foundational step in that direction, and, to our knowledge, it is the first study of value stability in LLMs.
[ { "created": "Mon, 19 Feb 2024 14:53:01 GMT", "version": "v1" }, { "created": "Mon, 29 Apr 2024 17:36:18 GMT", "version": "v2" }, { "created": "Tue, 30 Apr 2024 07:09:22 GMT", "version": "v3" }, { "created": "Wed, 28 Aug 2024 14:04:05 GMT", "version": "v4" } ]
2024-08-29
[ [ "Kovač", "Grgur", "" ], [ "Portelas", "Rémy", "" ], [ "Sawayama", "Masataka", "" ], [ "Dominey", "Peter Ford", "" ], [ "Oudeyer", "Pierre-Yves", "" ] ]
2402.14847
Michal Bou\v{s}ka
Michal Bou\v{s}ka, P\v{r}emysl \v{S}\r{u}cha, Anton\'in Nov\'ak, Zden\v{e}k Hanz\'alek
Deep learning-driven scheduling algorithm for a single machine problem minimizing the total tardiness
null
European Journal of Operational Research, Volume 308, Issue 3, 1 August 2023, Pages 990-1006
10.1016/j.ejor.2022.11.034
null
math.OC cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper, we investigate the use of the deep learning method for solving a well-known NP-hard single machine scheduling problem with the objective of minimizing the total tardiness. We propose a deep neural network that acts as a polynomial-time estimator of the criterion value used in a single-pass scheduling algorithm based on Lawler's decomposition and symmetric decomposition proposed by Della Croce et al. Essentially, the neural network guides the algorithm by estimating the best splitting of the problem into subproblems. The paper also describes a new method for generating the training data set, which speeds up the training dataset generation and reduces the average optimality gap of solutions. The experimental results show that our machine learning-driven approach can efficiently generalize information from the training phase to significantly larger instances. Even though the instances used in the training phase have from 75 to 100 jobs, the average optimality gap on instances with up to 800 jobs is 0.26%, which is almost five times less than the gap of the state-of-the-art heuristic.
[ { "created": "Mon, 19 Feb 2024 15:34:09 GMT", "version": "v1" } ]
2024-02-28
[ [ "Bouška", "Michal", "" ], [ "Šůcha", "Přemysl", "" ], [ "Novák", "Antonín", "" ], [ "Hanzálek", "Zdeněk", "" ] ]
2402.14854
Hyolim Jeon
Hyolim Jeon, Dongje Yoo, Daeun Lee, Sejung Son, Seungbae Kim, Jinyoung Han
A Dual-Prompting for Interpretable Mental Health Language Models
null
Proceedings of the Ninth Workshop on Computational Linguistics and Clinical Psychology 2024
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the increasing demand for AI-based mental health monitoring tools, their practical utility for clinicians is limited by the lack of interpretability. The CLPsych 2024 Shared Task (Chim et al., 2024) aims to enhance the interpretability of Large Language Models (LLMs), particularly in mental health analysis, by providing evidence of suicidality through linguistic content. We propose a dual-prompting approach: (i) Knowledge-aware evidence extraction by leveraging the expert identity and a suicide dictionary with a mental health-specific LLM; and (ii) Evidence summarization by employing an LLM-based consistency evaluator. Comprehensive experiments demonstrate the effectiveness of combining domain-specific information, revealing performance improvements and the approach's potential to aid clinicians in assessing mental state progression.
[ { "created": "Tue, 20 Feb 2024 06:18:02 GMT", "version": "v1" } ]
2024-02-26
[ [ "Jeon", "Hyolim", "" ], [ "Yoo", "Dongje", "" ], [ "Lee", "Daeun", "" ], [ "Son", "Sejung", "" ], [ "Kim", "Seungbae", "" ], [ "Han", "Jinyoung", "" ] ]
2402.14881
Chen Qian
Shanker Ram and Chen Qian
A Study on the Vulnerability of Test Questions against ChatGPT-based Cheating
2023 International Conference on Machine Learning and Applications (ICMLA)
2023 International Conference on Machine Learning and Applications (ICMLA)
null
null
cs.CL cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
ChatGPT is a chatbot that can answer text prompts fairly accurately, even performing very well on postgraduate-level questions. Many educators have found that their take-home or remote tests and exams are vulnerable to ChatGPT-based cheating because students may directly use answers provided by tools like ChatGPT. In this paper, we try to provide an answer to an important question: how well can ChatGPT answer test questions, and how can we detect whether the questions of a test can be answered correctly by ChatGPT? We generated ChatGPT's responses to the MedMCQA dataset, which contains over 10,000 medical school entrance exam questions. We analyzed the responses and uncovered certain types of questions that ChatGPT answers more inaccurately than others. In addition, we have created a basic natural language processing model to single out the questions most vulnerable to ChatGPT in a collection of questions or a sample exam. Our tool can be used by test-makers to avoid ChatGPT-vulnerable test questions.
[ { "created": "Wed, 21 Feb 2024 23:51:06 GMT", "version": "v1" } ]
2024-02-26
[ [ "Ram", "Shanker", "" ], [ "Qian", "Chen", "" ] ]
2402.14958
Jakub Kol\'a\v{r}
Jakub Kol\'a\v{r}, Radim \v{S}petl\'ik, Ji\v{r}\'i Matas
EE3P: Event-based Estimation of Periodic Phenomena Properties
9 pages, 55 figures, accepted and presented at CVWW24, published in Proceedings of the 27th Computer Vision Winter Workshop, 2024
Proceedings of the 27th Computer Vision Winter Workshop, February 14-16, 2024, Terme Olimia, Slovenia, pages 66-74, CIP data: COBISS.SI-ID 185271043 ISBN 978-961-96564-0-2
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We introduce a novel method for measuring properties of periodic phenomena with an event camera, a device asynchronously reporting brightness changes at independently operating pixels. The approach assumes that for fast periodic phenomena, in any spatial window where it occurs, a very similar set of events is generated at the time difference corresponding to the frequency of the motion. To estimate the frequency, we compute correlations of spatio-temporal windows in the event space. The period is calculated from the time differences between the peaks of the correlation responses. The method is contactless, eliminating the need for markers, and does not need distinguishable landmarks. We evaluate the proposed method on three instances of periodic phenomena: (i) light flashes, (ii) vibration, and (iii) rotational speed. In all experiments, our method achieves a relative error lower than 0.04%, which is within the error margin of ground truth measurements.
[ { "created": "Thu, 22 Feb 2024 20:37:30 GMT", "version": "v1" } ]
2024-02-26
[ [ "Kolář", "Jakub", "" ], [ "Špetlík", "Radim", "" ], [ "Matas", "Jiří", "" ] ]
2402.15010
Yanis Labrak
Yanis Labrak, Adrien Bazoge, Beatrice Daille, Mickael Rouvier, Richard Dufour
How Important Is Tokenization in French Medical Masked Language Models?
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/publicdomain/zero/1.0/
Subword tokenization has become the prevailing standard in the field of natural language processing (NLP) over recent years, primarily due to the widespread utilization of pre-trained language models. This shift began with Byte-Pair Encoding (BPE) and was later followed by the adoption of SentencePiece and WordPiece. While subword tokenization consistently outperforms character and word-level tokenization, the precise factors contributing to its success remain unclear. Key aspects such as the optimal segmentation granularity for diverse tasks and languages, the influence of data sources on tokenizers, and the role of morphological information in Indo-European languages remain insufficiently explored. This is particularly pertinent for biomedical terminology, characterized by specific rules governing morpheme combinations. Despite the agglutinative nature of biomedical terminology, existing language models do not explicitly incorporate this knowledge, leading to inconsistent tokenization strategies for common terms. In this paper, we seek to delve into the complexities of subword tokenization in the French biomedical domain across a variety of NLP tasks and pinpoint areas where further enhancements can be made. We analyze classical tokenization algorithms, including BPE and SentencePiece, and introduce an original tokenization strategy that integrates morpheme-enriched word segmentation into existing tokenization methods.
[ { "created": "Thu, 22 Feb 2024 23:11:08 GMT", "version": "v1" }, { "created": "Sun, 9 Jun 2024 15:11:31 GMT", "version": "v2" } ]
2024-06-11
[ [ "Labrak", "Yanis", "" ], [ "Bazoge", "Adrien", "" ], [ "Daille", "Beatrice", "" ], [ "Rouvier", "Mickael", "" ], [ "Dufour", "Richard", "" ] ]
2402.15255
Vy Vo
Vy Vo, He Zhao, Trung Le, Edwin V. Bonilla, Dinh Phung
Optimal Transport for Structure Learning Under Missing Data
null
Proceedings of the 41st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Causal discovery in the presence of missing data introduces a chicken-and-egg dilemma. While the goal is to recover the true causal structure, robust imputation requires considering the dependencies or, preferably, causal relations among variables. Merely filling in missing values with existing imputation methods and subsequently applying structure learning on the complete data is empirically shown to be sub-optimal. To address this problem, we propose a score-based algorithm for learning causal structures from missing data based on optimal transport. This optimal transport viewpoint diverges from existing score-based approaches that are dominantly based on expectation maximization. We formulate structure learning as a density fitting problem, where the goal is to find the causal model that induces a distribution of minimum Wasserstein distance with the observed data distribution. Our framework is shown to recover the true causal graphs more effectively than competing methods in most simulations and real-data settings. Empirical evidence also shows the superior scalability of our approach, along with the flexibility to incorporate any off-the-shelf causal discovery methods for complete data.
[ { "created": "Fri, 23 Feb 2024 10:49:04 GMT", "version": "v1" }, { "created": "Sat, 1 Jun 2024 10:57:01 GMT", "version": "v2" } ]
2024-06-04
[ [ "Vo", "Vy", "" ], [ "Zhao", "He", "" ], [ "Le", "Trung", "" ], [ "Bonilla", "Edwin V.", "" ], [ "Phung", "Dinh", "" ] ]
2402.15464
Kaveh Fathian
Kaveh Fathian, Tyler Summers
CLIPPER+: A Fast Maximal Clique Algorithm for Robust Global Registration
null
IEEE ROBOTICS AND AUTOMATION LETTERS, 2024
10.1109/LRA.2024.3368233
null
cs.RO cs.CV
http://creativecommons.org/licenses/by/4.0/
We present CLIPPER+, an algorithm for finding maximal cliques in unweighted graphs for outlier-robust global registration. The registration problem can be formulated as a graph and solved by finding its maximum clique. This formulation leads to extreme robustness to outliers; however, finding the maximum clique is an NP-hard problem, and therefore approximation is required in practice for large-size problems. The performance of an approximation algorithm is evaluated by its computational complexity (the lower the runtime, the better) and solution accuracy (how close the solution is to the maximum clique). Accordingly, the main contribution of CLIPPER+ is outperforming the state-of-the-art in accuracy while maintaining a relatively low runtime. CLIPPER+ builds on prior work (CLIPPER [1] and PMC [2]) and prunes the graph by removing vertices that have a small core number and cannot be a part of the maximum clique. This will result in a smaller graph, on which the maximum clique can be estimated considerably faster. We evaluate the performance of CLIPPER+ on standard graph benchmarks, as well as synthetic and real-world point cloud registration problems. These evaluations demonstrate that CLIPPER+ has the highest accuracy and can register point clouds in scenarios where over $99\%$ of associations are outliers. Our code and evaluation benchmarks are released at https://github.com/ariarobotics/clipperp.
[ { "created": "Fri, 23 Feb 2024 17:50:22 GMT", "version": "v1" } ]
2024-02-26
[ [ "Fathian", "Kaveh", "" ], [ "Summers", "Tyler", "" ] ]
2402.15518
Pedro Reviriego
Gonzalo Mart\'inez, Jos\'e Alberto Hern\'andez, Javier Conde, Pedro Reviriego and Elena Merino
Beware of Words: Evaluating the Lexical Richness of Conversational Large Language Models
null
ACM Transactions on Intelligent Systems and Technology, 2024
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The performance of conversational Large Language Models (LLMs) in general, and of ChatGPT in particular, is currently being evaluated on many different tasks, from logical reasoning or maths to answering questions on a myriad of topics. In contrast, much less attention is being devoted to the study of the linguistic features of the texts generated by these LLMs. This is surprising since LLMs are models for language, and understanding how they use language is important. Indeed, conversational LLMs are poised to have a significant impact on the evolution of languages, as they may eventually dominate the creation of new text. This means that, for example, if conversational LLMs do not use a word, it may become less and less frequent and eventually stop being used altogether. Therefore, evaluating the linguistic features of the text they produce, and how those depend on the model parameters, is the first step toward understanding the potential impact of conversational LLMs on the evolution of languages. In this paper, we consider the evaluation of the lexical richness of the text generated by LLMs and how it depends on the model parameters. A methodology is presented and used to conduct a comprehensive evaluation of lexical richness using ChatGPT as a case study. The results show how lexical richness depends on the version of ChatGPT and some of its parameters, such as the presence penalty, or on the role assigned to the model. The dataset and tools used in our analysis are released under open licenses with the goal of drawing the much-needed attention to the evaluation of the linguistic features of LLM-generated text.
[ { "created": "Sun, 11 Feb 2024 13:41:17 GMT", "version": "v1" } ]
2024-09-10
[ [ "Martínez", "Gonzalo", "" ], [ "Hernández", "José Alberto", "" ], [ "Conde", "Javier", "" ], [ "Reviriego", "Pedro", "" ], [ "Merino", "Elena", "" ] ]
2402.15584
Nikola Zubi\'c
Nikola Zubi\'c, Mathias Gehrig, Davide Scaramuzza
State Space Models for Event Cameras
18 pages, 5 figures, 6 tables, CVPR 2024 Camera Ready paper
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, 2024
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Today, state-of-the-art deep neural networks that process event-camera data first convert a temporal window of events into dense, grid-like input representations. As such, they exhibit poor generalizability when deployed at higher inference frequencies (i.e., smaller temporal windows) than the ones they were trained on. We address this challenge by introducing state-space models (SSMs) with learnable timescale parameters to event-based vision. This design adapts to varying frequencies without the need to retrain the network at different frequencies. Additionally, we investigate two strategies to counteract aliasing effects when deploying the model at higher frequencies. We comprehensively evaluate our approach against existing methods based on RNN and Transformer architectures across various benchmarks, including Gen1 and 1 Mpx event camera datasets. Our results demonstrate that SSM-based models train 33% faster and also exhibit minimal performance degradation when tested at higher frequencies than the training input. Traditional RNN and Transformer models exhibit performance drops of more than 20 mAP, with SSMs having a drop of 3.76 mAP, highlighting the effectiveness of SSMs in event-based vision tasks.
[ { "created": "Fri, 23 Feb 2024 19:51:55 GMT", "version": "v1" }, { "created": "Fri, 5 Apr 2024 17:01:34 GMT", "version": "v2" }, { "created": "Thu, 18 Apr 2024 15:29:14 GMT", "version": "v3" } ]
2024-04-19
[ [ "Zubić", "Nikola", "" ], [ "Gehrig", "Mathias", "" ], [ "Scaramuzza", "Davide", "" ] ]
2402.15666
Shu-Ting Pi
Shu-Ting Pi, Cheng-Ping Hsieh, Qun Liu, Yuying Zhu
Universal Model in Online Customer Service
null
Companion Proceedings of the ACM Web Conference 2023
10.1145/3543873.3587630
null
cs.LG cs.AI cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Building machine learning models can be a time-consuming process that often takes several months to implement in typical business scenarios. To ensure consistent model performance and account for variations in data distribution, regular retraining is necessary. This paper introduces a solution for improving online customer service in e-commerce by presenting a universal model for predicting labels based on customer questions, without requiring training. Our novel approach involves using machine learning techniques to tag customer questions in transcripts and create a repository of questions and corresponding labels. When a customer requests assistance, an information retrieval model searches the repository for similar questions, and statistical analysis is used to predict the corresponding label. By eliminating the need for individual model training and maintenance, our approach reduces both the model development cycle and costs. The repository only requires periodic updating to maintain accuracy.
[ { "created": "Sat, 24 Feb 2024 00:41:16 GMT", "version": "v1" } ]
2024-02-27
[ [ "Pi", "Shu-Ting", "" ], [ "Hsieh", "Cheng-Ping", "" ], [ "Liu", "Qun", "" ], [ "Zhu", "Yuying", "" ] ]
2402.15810
Fanjin Zhang
Fanjin Zhang, Shijie Shi, Yifan Zhu, Bo Chen, Yukuo Cen, Jifan Yu, Yelin Chen, Lulu Wang, Qingfei Zhao, Yuqing Cheng, Tianyi Han, Yuwei An, Dan Zhang, Weng Lam Tam, Kun Cao, Yunhe Pang, Xinyu Guan, Huihui Yuan, Jian Song, Xiaoyan Li, Yuxiao Dong, Jie Tang
OAG-Bench: A Human-Curated Benchmark for Academic Graph Mining
KDD'24, 9 pages, 5 appendix pages
Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '24), August 25--29, 2024, Barcelona, Spain
10.1145/3637528.3672354
null
cs.DL cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rapid proliferation of scientific literature, versatile academic knowledge services increasingly rely on comprehensive academic graph mining. Despite the availability of public academic graphs, benchmarks, and datasets, these resources often fall short in multi-aspect and fine-grained annotations, are constrained to specific task types and domains, or lack underlying real academic graphs. In this paper, we present OAG-Bench, a comprehensive, multi-aspect, and fine-grained human-curated benchmark based on the Open Academic Graph (OAG). OAG-Bench covers 10 tasks, 20 datasets, 70+ baselines, and 120+ experimental results to date. We propose new data annotation strategies for certain tasks and offer a suite of data pre-processing codes, algorithm implementations, and standardized evaluation protocols to facilitate academic graph mining. Extensive experiments reveal that even advanced algorithms like large language models (LLMs) encounter difficulties in addressing key challenges in certain tasks, such as paper source tracing and scholar profiling. We also introduce the Open Academic Graph Challenge (OAG-Challenge) to encourage community input and sharing. We envisage that OAG-Bench can serve as a common ground for the community to evaluate and compare algorithms in academic graph mining, thereby accelerating algorithm development and advancement in this field. OAG-Bench is accessible at https://www.aminer.cn/data/.
[ { "created": "Sat, 24 Feb 2024 13:15:54 GMT", "version": "v1" }, { "created": "Thu, 20 Jun 2024 04:15:12 GMT", "version": "v2" } ]
2024-06-21
[ [ "Zhang", "Fanjin", "" ], [ "Shi", "Shijie", "" ], [ "Zhu", "Yifan", "" ], [ "Chen", "Bo", "" ], [ "Cen", "Yukuo", "" ], [ "Yu", "Jifan", "" ], [ "Chen", "Yelin", "" ], [ "Wang", "Lulu", "" ], [ "Zhao", "Qingfei", "" ], [ "Cheng", "Yuqing", "" ], [ "Han", "Tianyi", "" ], [ "An", "Yuwei", "" ], [ "Zhang", "Dan", "" ], [ "Tam", "Weng Lam", "" ], [ "Cao", "Kun", "" ], [ "Pang", "Yunhe", "" ], [ "Guan", "Xinyu", "" ], [ "Yuan", "Huihui", "" ], [ "Song", "Jian", "" ], [ "Li", "Xiaoyan", "" ], [ "Dong", "Yuxiao", "" ], [ "Tang", "Jie", "" ] ]
2402.15858
Jieming Bian
Yuanzhe Peng, Jieming Bian, Jie Xu
FedMM: Federated Multi-Modal Learning with Modality Heterogeneity in Computational Pathology
null
2024 International Conference on Acoustics, Speech and Signal Processing (ICASSP 2024)
null
null
cs.CV cs.DC
http://creativecommons.org/licenses/by/4.0/
The fusion of complementary multimodal information is crucial in computational pathology for accurate diagnostics. However, existing multimodal learning approaches necessitate access to users' raw data, posing substantial privacy risks. While Federated Learning (FL) serves as a privacy-preserving alternative, it falls short in addressing the challenges posed by heterogeneous (yet possibly overlapping) modality data across various hospitals. To bridge this gap, we propose a Federated Multi-Modal (FedMM) learning framework that federatedly trains multiple single-modal feature extractors to enhance subsequent classification performance, instead of existing FL approaches that aim to train a unified multimodal fusion model. Any participating hospital, even with small-scale datasets or limited devices, can leverage these federated trained extractors to perform local downstream tasks (e.g., classification) while ensuring data privacy. Through comprehensive evaluations of two publicly available datasets, we demonstrate that FedMM notably outperforms two baselines in accuracy and AUC metrics.
[ { "created": "Sat, 24 Feb 2024 16:58:42 GMT", "version": "v1" } ]
2024-02-27
[ [ "Peng", "Yuanzhe", "" ], [ "Bian", "Jieming", "" ], [ "Xu", "Jie", "" ] ]
2402.15987
Masanari Ohi
Masanari Ohi, Masahiro Kaneko, Ryuto Koike, Mengsay Loem, Naoaki Okazaki
Likelihood-based Mitigation of Evaluation Bias in Large Language Models
5 main pages
ACL2024 (findings)
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) are widely used to evaluate natural language generation tasks as automated metrics. However, the likelihood, a measure of an LLM's plausibility for a sentence, can vary due to superficial differences in sentences, such as word order and sentence structure. A likelihood bias may therefore arise if LLMs are used for evaluation: they might overrate sentences with higher likelihoods while underrating those with lower likelihoods. In this paper, we investigate the presence and impact of likelihood bias in LLM-based evaluators. We also propose a method to mitigate the likelihood bias. Our method utilizes highly biased instances as few-shot examples for in-context learning. Our experiments in evaluating the data-to-text and grammatical error correction tasks reveal that several LLMs we test display a likelihood bias. Furthermore, our proposed method successfully mitigates this bias, also significantly improving evaluation performance (in terms of the correlation of models with human scores).
[ { "created": "Sun, 25 Feb 2024 04:52:02 GMT", "version": "v1" }, { "created": "Fri, 1 Mar 2024 06:44:44 GMT", "version": "v2" }, { "created": "Sat, 12 Oct 2024 09:57:43 GMT", "version": "v3" } ]
2024-10-15
[ [ "Ohi", "Masanari", "" ], [ "Kaneko", "Masahiro", "" ], [ "Koike", "Ryuto", "" ], [ "Loem", "Mengsay", "" ], [ "Okazaki", "Naoaki", "" ] ]
2402.15990
Abdul Ali Bangash
Zhimin Zhao, Yihao Chen, Abdul Ali Bangash, Bram Adams, Ahmed E. Hassan
An Empirical Study of Challenges in Machine Learning Asset Management
null
Empirical Software Engineering 2024
10.1007/s10664-024-10474-4
null
cs.SE cs.AI
http://creativecommons.org/licenses/by/4.0/
In machine learning (ML), efficient asset management, including ML models, datasets, algorithms, and tools, is vital for resource optimization, consistent performance, and a streamlined development lifecycle. This enables quicker iterations, adaptability, reduced development-to-deployment time, and reliable outputs. Despite existing research, a significant knowledge gap remains in operational challenges like model versioning, data traceability, and collaboration, which are crucial for the success of ML projects. Our study aims to address this gap by analyzing 15,065 posts from developer forums and platforms, employing a mixed-method approach to classify inquiries, extract challenges using BERTopic, and identify solutions through open card sorting and BERTopic clustering. We uncover 133 topics related to asset management challenges, grouped into 16 macro-topics, with software dependency, model deployment, and model training being the most discussed. We also find 79 solution topics, categorized under 18 macro-topics, highlighting software dependency, feature development, and file management as key solutions. This research underscores the need for further exploration of identified pain points and the importance of collaborative efforts across academia, industry, and the research community.
[ { "created": "Sun, 25 Feb 2024 05:05:52 GMT", "version": "v1" }, { "created": "Wed, 28 Feb 2024 05:58:18 GMT", "version": "v2" } ]
2024-06-19
[ [ "Zhao", "Zhimin", "" ], [ "Chen", "Yihao", "" ], [ "Bangash", "Abdul Ali", "" ], [ "Adams", "Bram", "" ], [ "Hassan", "Ahmed E.", "" ] ]
2402.16012
Bocheng Wang
Mulin Chen, Bocheng Wang, Xuelong Li
Deep Contrastive Graph Learning with Clustering-Oriented Guidance
Accepted at AAAI24
AAAI (2024) Vol. 38, No. 10, pages 11364-11372
10.1609/aaai.v38i10.29016
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph Convolutional Network (GCN) has exhibited remarkable potential in improving graph-based clustering. To handle the general clustering scenario without a prior graph, these models estimate an initial graph beforehand to apply GCN. Throughout the literature, we have witnessed that 1) most models focus on the initial graph while neglecting the original features. Therefore, the discriminability of the learned representation may be corrupted by a low-quality initial graph; 2) the training procedure lacks effective clustering guidance, which may lead to the incorporation of clustering-irrelevant information into the learned graph. To tackle these problems, the Deep Contrastive Graph Learning (DCGL) model is proposed for general data clustering. Specifically, we establish a pseudo-siamese network, which integrates an auto-encoder with GCN to emphasize both the graph structure and the original features. On this basis, feature-level contrastive learning is introduced to enhance the discriminative capacity, and the relationship between samples and centroids is employed as the clustering-oriented guidance. Afterward, a two-branch graph learning mechanism is designed to extract the local and global structural relationships, which are further embedded into a unified graph under the cluster-level contrastive guidance. Experimental results on several benchmark datasets demonstrate the superiority of DCGL against state-of-the-art algorithms.
[ { "created": "Sun, 25 Feb 2024 07:03:37 GMT", "version": "v1" } ]
2024-04-04
[ [ "Chen", "Mulin", "" ], [ "Wang", "Bocheng", "" ], [ "Li", "Xuelong", "" ] ]
2402.16013
Sahal Shaji Mullappilly
Sahal Shaji Mullappilly, Abhishek Singh Gehlot, Rao Muhammad Anwer, Fahad Shahbaz Khan, Hisham Cholakkal
Semi-supervised Open-World Object Detection
Accepted to AAAI 2024 (Main Track)
Proceedings of the AAAI Conference on Artificial Intelligence 2024
10.1609/aaai.v38i5.28227
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
The conventional open-world object detection (OWOD) problem setting first distinguishes known and unknown classes and then incrementally learns the unknown objects as labels are introduced in subsequent tasks. However, the current OWOD formulation heavily relies on an external human oracle for knowledge input during the incremental learning stages. Such run-time reliance makes this formulation less realistic for real-world deployment. To address this, we introduce a more realistic formulation, named semi-supervised open-world detection (SS-OWOD), that reduces the annotation cost by casting the incremental learning stages of OWOD in a semi-supervised manner. We demonstrate that the performance of the state-of-the-art OWOD detector dramatically deteriorates in the proposed SS-OWOD setting. Therefore, we introduce a novel SS-OWOD detector, named SS-OWFormer, that utilizes a feature-alignment scheme to better align the object query representations between the original and augmented images, leveraging the large amount of unlabeled data alongside the few labeled examples. We further introduce a pseudo-labeling scheme for unknown detection that exploits the inherent capability of decoder object queries to capture object-specific information. We demonstrate the effectiveness of our SS-OWOD problem setting and approach for remote sensing object detection, proposing carefully curated splits and baseline performance evaluations. Our experiments on 4 datasets, including MS COCO, PASCAL, Objects365 and DOTA, demonstrate the effectiveness of our approach. Our source code, models and splits are available here - https://github.com/sahalshajim/SS-OWFormer
[ { "created": "Sun, 25 Feb 2024 07:12:51 GMT", "version": "v1" } ]
2024-04-15
[ [ "Mullappilly", "Sahal Shaji", "" ], [ "Gehlot", "Abhishek Singh", "" ], [ "Anwer", "Rao Muhammad", "" ], [ "Khan", "Fahad Shahbaz", "" ], [ "Cholakkal", "Hisham", "" ] ]
2402.16039
Han Li
Zihan Liu, Han Li, Anfan Chen, Renwen Zhang, Yi-Chieh Lee
Understanding Public Perceptions of AI Conversational Agents: A Cross-Cultural Analysis
17 pages, 4 figures, 7 tables
CHI2024
10.1145/3613904.3642840
null
cs.HC cs.CL
http://creativecommons.org/licenses/by/4.0/
Conversational Agents (CAs) have increasingly been integrated into everyday life, sparking significant discussions on social media. While previous research has examined public perceptions of AI in general, there is a notable lack of research focused on CAs, with even fewer investigations into cultural variations in CA perceptions. To address this gap, this study used computational methods to analyze about one million social media discussions surrounding CAs and compared people's discourses and perceptions of CAs in the US and China. We find that Chinese participants tended to view CAs hedonically, perceived voice-based and physically embodied CAs as warmer and more competent, and generally expressed positive emotions. In contrast, US participants saw CAs more functionally, with an ambivalent attitude. Warm perception was a key driver of positive emotions toward CAs in both countries. We discuss practical implications for designing contextually sensitive and user-centric CAs that resonate with various users' preferences and needs.
[ { "created": "Sun, 25 Feb 2024 09:34:22 GMT", "version": "v1" } ]
2024-02-27
[ [ "Liu", "Zihan", "" ], [ "Li", "Han", "" ], [ "Chen", "Anfan", "" ], [ "Zhang", "Renwen", "" ], [ "Lee", "Yi-Chieh", "" ] ]
2402.16086
Feng Lu
Feng Lu, Shuting Dong, Lijun Zhang, Bingxi Liu, Xiangyuan Lan, Dongmei Jiang, Chun Yuan
Deep Homography Estimation for Visual Place Recognition
Accepted by AAAI2024
AAAI 2024
10.1609/aaai.v38i9.28901
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual place recognition (VPR) is a fundamental task for many applications such as robot localization and augmented reality. Recently, hierarchical VPR methods have received considerable attention due to their trade-off between accuracy and efficiency. They usually first use global features to retrieve candidate images, then verify the spatial consistency of matched local features for re-ranking. However, the latter typically relies on the RANSAC algorithm for fitting homography, which is time-consuming and non-differentiable. This forces existing methods to compromise by training the network only for global feature extraction. Here, we propose a transformer-based deep homography estimation (DHE) network that takes the dense feature map extracted by a backbone network as input and fits homography for fast and learnable geometric verification. Moreover, we design a re-projection error of inliers loss to train the DHE network without additional homography labels; it can also be jointly trained with the backbone network to help it extract features that are more suitable for local matching. Extensive experiments on benchmark datasets show that our method outperforms several state-of-the-art methods, and it is more than an order of magnitude faster than mainstream hierarchical VPR methods using RANSAC. The code is released at https://github.com/Lu-Feng/DHE-VPR.
[ { "created": "Sun, 25 Feb 2024 13:22:17 GMT", "version": "v1" }, { "created": "Mon, 18 Mar 2024 09:33:47 GMT", "version": "v2" } ]
2024-04-09
[ [ "Lu", "Feng", "" ], [ "Dong", "Shuting", "" ], [ "Zhang", "Lijun", "" ], [ "Liu", "Bingxi", "" ], [ "Lan", "Xiangyuan", "" ], [ "Jiang", "Dongmei", "" ], [ "Yuan", "Chun", "" ] ]
2402.16139
Antonio San Mart\'in
Antonio San Mart\'in
What Generative Artificial Intelligence Means for Terminological Definitions
37 pages, 1 figure
Proceedings of the 3rd International Conference on Multilingual Digital Terminology Today (MDTT 2024)
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper examines the impact of Generative Artificial Intelligence (GenAI) tools like ChatGPT on the creation and consumption of terminological definitions. From the terminologist's point of view, the strategic use of GenAI tools can streamline the process of crafting definitions, reducing both time and effort, while potentially enhancing quality. GenAI tools enable AI-assisted terminography, notably post-editing terminography, where the machine produces a definition that the terminologist then corrects or refines. However, the potential of GenAI tools to fulfill all the terminological needs of a user, including term definitions, challenges the very existence of terminological definitions and resources as we know them. Unlike terminological definitions, GenAI tools can describe the knowledge activated by a term in a specific context. However, a main drawback of these tools is that their output can contain errors. For this reason, users requiring reliability will likely still resort to terminological resources for definitions. Nevertheless, with the inevitable integration of AI into terminology work, the distinction between human-created and AI-created content will become increasingly blurred.
[ { "created": "Sun, 25 Feb 2024 16:36:51 GMT", "version": "v1" }, { "created": "Fri, 29 Mar 2024 17:51:32 GMT", "version": "v2" }, { "created": "Fri, 19 Apr 2024 16:13:43 GMT", "version": "v3" } ]
2024-07-01
[ [ "Martín", "Antonio San", "" ] ]
2402.16188
Vincent Christlein
Alexander Schmidt, Prathmesh Madhu, Andreas Maier, Vincent Christlein, Ronak Kosti
ARIN: Adaptive Resampling and Instance Normalization for Robust Blind Inpainting of Dunhuang Cave Paintings
null
2022 Eleventh International Conference on Image Processing Theory, Tools and Applications (IPTA), Salzburg, Austria, 2022, pp. 1-6
10.1109/IPTA54936.2022.9784144
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image enhancement algorithms are very useful for real-world computer vision tasks where image resolution is often physically limited by the sensor size. While state-of-the-art deep neural networks show impressive results for image enhancement, they often struggle to enhance real-world images. In this work, we tackle a real-world setting: inpainting of images from the Dunhuang caves. The Dunhuang dataset consists of murals, half of which suffer from corrosion and aging. These murals feature a range of rich content, such as Buddha statues, bodhisattvas, sponsors, architecture, dance, music, and decorative patterns designed by different artists spanning ten centuries, which makes manual restoration challenging. We modify two different existing methods (CAR, HINet) that are based upon state-of-the-art (SOTA) super-resolution and deblurring networks. We show that these can successfully inpaint and enhance the deteriorated cave paintings. We further show that a novel combination of CAR and HINet, resulting in our proposed inpainting network (ARIN), is very robust to external noise, especially Gaussian noise. To this end, we present a quantitative and qualitative comparison of our proposed approach with existing SOTA networks and winners of the Dunhuang challenge. One of the proposed methods (HINet) represents the new state of the art and outperforms the 1st place of the Dunhuang Challenge, while our combination ARIN, which is robust to noise, is comparable to the 1st place. We also present and discuss qualitative results showing the impact of our method for inpainting on Dunhuang cave images.
[ { "created": "Sun, 25 Feb 2024 20:27:20 GMT", "version": "v1" } ]
2024-02-27
[ [ "Schmidt", "Alexander", "" ], [ "Madhu", "Prathmesh", "" ], [ "Maier", "Andreas", "" ], [ "Christlein", "Vincent", "" ], [ "Kosti", "Ronak", "" ] ]
2402.16268
Rishi Bommasani
Rishi Bommasani, Kevin Klyman, Shayne Longpre, Betty Xiong, Sayash Kapoor, Nestor Maslej, Arvind Narayanan, Percy Liang
Foundation Model Transparency Reports
null
Published in AIES 2024
null
null
cs.LG cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
Foundation models are critical digital technologies with sweeping societal impact that necessitates transparency. To codify how foundation model developers should provide transparency about the development and deployment of their models, we propose Foundation Model Transparency Reports, drawing upon the transparency reporting practices in social media. While external documentation of societal harms prompted social media transparency reports, our objective is to institutionalize transparency reporting for foundation models while the industry is still nascent. To design our reports, we identify 6 design principles given the successes and shortcomings of social media transparency reporting. To further schematize our reports, we draw upon the 100 transparency indicators from the Foundation Model Transparency Index. Given these indicators, we measure the extent to which they overlap with the transparency requirements included in six prominent government policies (e.g., the EU AI Act, the US Executive Order on Safe, Secure, and Trustworthy AI). Well-designed transparency reports could reduce compliance costs, in part due to overlapping regulatory requirements across different jurisdictions. We encourage foundation model developers to regularly publish transparency reports, building upon recommendations from the G7 and the White House.
[ { "created": "Mon, 26 Feb 2024 03:09:06 GMT", "version": "v1" } ]
2024-07-19
[ [ "Bommasani", "Rishi", "" ], [ "Klyman", "Kevin", "" ], [ "Longpre", "Shayne", "" ], [ "Xiong", "Betty", "" ], [ "Kapoor", "Sayash", "" ], [ "Maslej", "Nestor", "" ], [ "Narayanan", "Arvind", "" ], [ "Liang", "Percy", "" ] ]
2402.16361
Shiwen Ni
Shiwen Ni, Min Yang, Ruifeng Xu, Chengming Li and Xiping Hu
Layer-wise Regularized Dropout for Neural Language Models
null
LREC-COLING 2024
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dropout is already an indispensable regularization technique among the various pre-trained neural language models that are popular today. To solve the inconsistency between training and inference caused by the randomness of dropout, some studies use consistency training to regularize dropout at the output layer. In this paper, we propose a novel Layer-wise Regularized Dropout (LR-Drop), which is specially designed for Transformer-based language models. Specifically, LR-Drop regularizes each Transformer layer using a layer-wise consistency training strategy. Each training sample passes through the two siamese sub-models sampled by dropout, and then LR-Drop forces the hidden states, multi-head attention matrices, and output distribution of the two siamese sub-models to be consistent. The proposed LR-Drop can be regarded as a "self-distillation" framework, in which each sub-model generated by dropout is the other's "teacher" model and "student" model. Through extensive experiments on 8 natural language understanding datasets, 6 neural machine translation datasets, and 1 abstractive summarization dataset (a total of 15 datasets), we show that LR-Drop achieves superior performance, including state-of-the-art results.
[ { "created": "Mon, 26 Feb 2024 07:31:35 GMT", "version": "v1" } ]
2024-02-27
[ [ "Ni", "Shiwen", "" ], [ "Yang", "Min", "" ], [ "Xu", "Ruifeng", "" ], [ "Li", "Chengming", "" ], [ "Hu", "Xiping", "" ] ]
2402.16364
Tzuf Paz-Argaman
Tzuf Paz-Argaman, Sayali Kulkarni, John Palowitch, Jason Baldridge, and Reut Tsarfaty
Where Do We Go from Here? Multi-scale Allocentric Relational Inference from Natural Spatial Descriptions
null
EACL 2024
null
null
cs.CL cs.LG cs.MM
http://creativecommons.org/licenses/by/4.0/
When communicating routes in natural language, the concept of acquired spatial knowledge is crucial for geographic information retrieval (GIR) and for spatial cognition research. However, NLP navigation studies often overlook the impact of such acquired knowledge on textual descriptions. Current navigation studies concentrate on egocentric local descriptions (e.g., `it will be on your right') that require reasoning over the agent's local perception. These instructions are typically given as a sequence of steps, with each action-step explicitly mentioning and being followed by a landmark that the agent can use to verify they are on the right path (e.g., `turn right and then you will see...'). In contrast, descriptions based on knowledge acquired through a map provide a complete view of the environment and capture its overall structure. These instructions (e.g., `it is south of Central Park and a block north of a police station') are typically non-sequential and contain allocentric relations, with multiple spatial relations and implicit actions, without any explicit verification. This paper introduces the Rendezvous (RVS) task and dataset, which includes 10,404 examples of English geospatial instructions for reaching a target location using map knowledge. Our analysis reveals that RVS exhibits a richer use of spatial allocentric relations and requires resolving more spatial relations simultaneously compared to previous text-based navigation benchmarks.
[ { "created": "Mon, 26 Feb 2024 07:33:28 GMT", "version": "v1" }, { "created": "Sun, 4 Aug 2024 08:36:08 GMT", "version": "v2" } ]
2024-08-06
[ [ "Paz-Argaman", "Tzuf", "" ], [ "Kulkarni", "Sayali", "" ], [ "Palowitch", "John", "" ], [ "Baldridge", "Jason", "" ], [ "Tsarfaty", "Reut", "" ] ]
2402.16389
Shiwen Ni
Shiwen Ni, Minghuan Tan, Yuelin Bai, Fuqiang Niu, Min Yang, Bowen Zhang, Ruifeng Xu, Xiaojun Chen, Chengming Li, Xiping Hu, Ye Li, Jianping Fan
MoZIP: A Multilingual Benchmark to Evaluate Large Language Models in Intellectual Property
null
LREC-COLING 2024
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) have demonstrated impressive performance in various natural language processing (NLP) tasks. However, there is limited understanding of how well LLMs perform in specific domains (e.g., the intellectual property (IP) domain). In this paper, we contribute a new benchmark, the first Multilingual-oriented quiZ on Intellectual Property (MoZIP), for the evaluation of LLMs in the IP domain. The MoZIP benchmark includes three challenging tasks: IP multiple-choice quiz (IPQuiz), IP question answering (IPQA), and patent matching (PatentMatch). In addition, we also develop a new IP-oriented multilingual large language model (called MoZi), which is a BLOOMZ-based model that has been fine-tuned with supervision on multilingual IP-related text data. We evaluate our proposed MoZi model and four well-known LLMs (i.e., BLOOMZ, BELLE, ChatGLM and ChatGPT) on the MoZIP benchmark. Experimental results demonstrate that MoZi outperforms BLOOMZ, BELLE and ChatGLM by a noticeable margin, while it scores lower than ChatGPT. Notably, the performance of current LLMs on the MoZIP benchmark leaves much room for improvement, and even the most powerful ChatGPT does not reach the passing level. Our source code, data, and models are available at \url{https://github.com/AI-for-Science/MoZi}.
[ { "created": "Mon, 26 Feb 2024 08:27:50 GMT", "version": "v1" } ]
2024-02-27
[ [ "Ni", "Shiwen", "" ], [ "Tan", "Minghuan", "" ], [ "Bai", "Yuelin", "" ], [ "Niu", "Fuqiang", "" ], [ "Yang", "Min", "" ], [ "Zhang", "Bowen", "" ], [ "Xu", "Ruifeng", "" ], [ "Chen", "Xiaojun", "" ], [ "Li", "Chengming", "" ], [ "Hu", "Xiping", "" ], [ "Li", "Ye", "" ], [ "Fan", "Jianping", "" ] ]
2402.16420
Lev Kharlashkin
Lev Kharlashkin, Melany Macias, Leo Huovinen, Mika H\"am\"al\"ainen
Predicting Sustainable Development Goals Using Course Descriptions -- from LLMs to Conventional Foundation Models
3 figures, 2 tables
Journal of Data Mining & Digital Humanities, NLP4DH (April 29, 2024) jdmdh:13127
10.46298/jdmdh.13127
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We present our work on predicting United Nations sustainable development goals (SDGs) for university courses. We use an LLM named PaLM 2 to generate training data given a noisy human-authored course description as input. We use this data to train several different smaller language models to predict SDGs for university courses. This work contributes to better university-level adaptation of SDGs. The best performing model in our experiments was BART, with an F1-score of 0.786.
[ { "created": "Mon, 26 Feb 2024 09:19:46 GMT", "version": "v1" }, { "created": "Tue, 23 Apr 2024 12:49:57 GMT", "version": "v2" } ]
2024-08-07
[ [ "Kharlashkin", "Lev", "" ], [ "Macias", "Melany", "" ], [ "Huovinen", "Leo", "" ], [ "Hämäläinen", "Mika", "" ] ]
2402.16514
Luk\'a\v{s} Gajdo\v{s}ech
Katar\'ina Osvaldov\'a, Luk\'a\v{s} Gajdo\v{s}ech, Viktor Kocur, Martin Madaras
Enhancement of 3D Camera Synthetic Training Data with Noise Models
Published in 2024 Proceedings of the 27th Computer Vision Winter Workshop (CVWW). Accepted: 19.1.2024. Published: 16.2.2024. This work was funded by the Horizon-Widera-2021 European Twinning project TERAIS G.A. n. 101079338. Code: https://doi.org/10.5281/zenodo.10581562 Data: https://doi.org/10.5281/zenodo.10581278
Proceedings of the 27th Computer Vision Winter Workshop CVWW (2024) 29-37
10.5281/zenodo.10694437
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The goal of this paper is to assess the impact of noise in 3D camera-captured data by modeling the noise of the imaging process and applying it to synthetic training data. We compiled a dataset of specifically constructed scenes to obtain a noise model. We specifically model lateral noise, affecting the position of captured points in the image plane, and axial noise, affecting the position along the axis perpendicular to the image plane. The estimated models can be used to emulate noise in synthetic training data. The benefit of adding artificial noise is evaluated in an experiment with rendered data for object segmentation. We train a series of neural networks with varying levels of noise in the data and measure their ability to generalize on real data. The results show that using too little or too much noise can hurt the networks' performance, indicating that obtaining a noise model from real scanners is beneficial for synthetic data generation.
[ { "created": "Mon, 26 Feb 2024 11:50:42 GMT", "version": "v1" } ]
2024-02-27
[ [ "Osvaldová", "Katarína", "" ], [ "Gajdošech", "Lukáš", "" ], [ "Kocur", "Viktor", "" ], [ "Madaras", "Martin", "" ] ]
2402.16654
Andrey Savchenko
Pavel Blinov, Konstantin Egorov, Ivan Sviridov, Nikolay Ivanov, Stepan Botman, Evgeniy Tagin, Stepan Kudin, Galina Zubkova, Andrey Savchenko
GigaPevt: Multimodal Medical Assistant
IJCAI 2024, 4 pages, 2 figures, 2 tables
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI) Demo Track, 2024, pp. 8614-8618
10.24963/ijcai.2024/992
null
cs.AI cs.CL cs.HC
http://creativecommons.org/licenses/by/4.0/
Building an intelligent and efficient medical assistant is still a challenging AI problem. The major limitation comes from the scarcity of data modalities, which reduces comprehensive patient perception. This demo paper presents GigaPevt, the first multimodal medical assistant that combines the dialog capabilities of large language models with specialized medical models. Such an approach shows immediate advantages in dialog quality and metric performance, with a 1.18% accuracy improvement in the question-answering task.
[ { "created": "Mon, 26 Feb 2024 15:26:56 GMT", "version": "v1" }, { "created": "Tue, 30 Jul 2024 06:04:31 GMT", "version": "v2" } ]
2024-07-31
[ [ "Blinov", "Pavel", "" ], [ "Egorov", "Konstantin", "" ], [ "Sviridov", "Ivan", "" ], [ "Ivanov", "Nikolay", "" ], [ "Botman", "Stepan", "" ], [ "Tagin", "Evgeniy", "" ], [ "Kudin", "Stepan", "" ], [ "Zubkova", "Galina", "" ], [ "Savchenko", "Andrey", "" ] ]
2402.16871
Sascha Ossowski
Alberto Fern\'andez, Holger Billhardt, Sascha Ossowski, \'Oscar S\'anchez
Bike3S: A Tool for Bike Sharing Systems Simulation
null
Journal of Simulation 14(4), 2020
10.1080/17477778.2020.1718022
null
cs.MA cs.AI
http://creativecommons.org/licenses/by/4.0/
Vehicle sharing systems are becoming increasingly popular. The effectiveness of such systems depends, among other factors, on different strategic and operational management decisions and policies, like the dimension of the fleet or the distribution of vehicles. It is of foremost importance to be able to anticipate and evaluate the potential effects of such strategies before they can be successfully deployed. In this paper we present Bike3S, a simulator for a station-based bike sharing system. The simulator performs semi-realistic simulations of the operation of a bike sharing system and allows for evaluating and testing different management decisions and strategies. In particular, the simulator has been designed to test different station capacities, station distributions, and balancing strategies. The simulator carries out microscopic agent-based simulations, where users of different types can be defined that act according to their individual goals and objectives, which in turn influence the overall dynamics of the whole system.
[ { "created": "Wed, 24 Jan 2024 17:33:40 GMT", "version": "v1" } ]
2024-02-28
[ [ "Fernández", "Alberto", "" ], [ "Billhardt", "Holger", "" ], [ "Ossowski", "Sascha", "" ], [ "Sánchez", "Óscar", "" ] ]
2402.16898
Nguyen Do Hoang Khoi
Nguyen Do, Tanmoy Chowdhury, Chen Ling, Liang Zhao, My T. Thai
MIM-Reasoner: Learning with Theoretical Guarantees for Multiplex Influence Maximization
null
International Conference on Artificial Intelligence and Statistics (AISTATS) 2024
null
null
cs.SI cs.AI cs.LG math.PR stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
Multiplex influence maximization (MIM) asks us to identify a set of seed users so as to maximize the expected number of influenced users in a multiplex network. MIM has been one of the central research topics, especially in today's social networking landscape, where users participate in multiple online social networks (OSNs) and their influence can propagate among several OSNs simultaneously. Although a few combinatorial algorithms for MIM exist, learning-based solutions are desirable due to their ability to generalize to heterogeneous networks and their diversified propagation characteristics. In this paper, we introduce MIM-Reasoner, which couples reinforcement learning with a probabilistic graphical model to effectively capture the complex propagation process within and between layers of a given multiplex network, thereby tackling the most challenging problem in MIM. We establish a theoretical guarantee for MIM-Reasoner and conduct extensive analyses on both synthetic and real-world datasets to validate its performance.
[ { "created": "Sat, 24 Feb 2024 03:48:22 GMT", "version": "v1" }, { "created": "Sun, 10 Mar 2024 07:35:15 GMT", "version": "v2" } ]
2024-03-12
[ [ "Do", "Nguyen", "" ], [ "Chowdhury", "Tanmoy", "" ], [ "Ling", "Chen", "" ], [ "Zhao", "Liang", "" ], [ "Thai", "My T.", "" ] ]
2402.16998
Jerry Ngo
Jerry Ngo, Yoon Kim
What Do Language Models Hear? Probing for Auditory Representations in Language Models
null
2024.acl-long.297
null
null
cs.CL cs.AI cs.LG cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
This work explores whether language models encode meaningfully grounded representations of sounds of objects. We learn a linear probe that retrieves the correct text representation of an object given a snippet of audio related to that object, where the sound representation is given by a pretrained audio model. This probe is trained via a contrastive loss that pushes the language representations and sound representations of an object to be close to one another. After training, the probe is tested on its ability to generalize to objects that were not seen during training. Across different language models and audio models, we find that the probe generalization is above chance in many cases, indicating that despite being trained only on raw text, language models encode grounded knowledge of sounds for some objects.
[ { "created": "Mon, 26 Feb 2024 20:13:58 GMT", "version": "v1" }, { "created": "Fri, 16 Aug 2024 08:13:38 GMT", "version": "v2" } ]
2024-08-19
[ [ "Ngo", "Jerry", "" ], [ "Kim", "Yoon", "" ] ]
2402.17029
Vincent Christlein
Vincent Christlein, David Bernecker, Andreas Maier, Elli Angelopoulou
Offline Writer Identification Using Convolutional Neural Network Activation Features
fixed tab 1b
Pattern Recognition. DAGM 2015. Lecture Notes in Computer Science(), vol 9358. Springer, Cham
10.1007/978-3-319-24947-6_45
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Convolutional neural networks (CNNs) have recently become the state-of-the-art tool for large-scale image classification. In this work we propose the use of activation features from CNNs as local descriptors for writer identification. A global descriptor is then formed by means of GMM supervector encoding, which is further improved by normalization with the KL-Kernel. We evaluate our method on two publicly available datasets: the ICDAR 2013 benchmark database and the CVL dataset. While we perform comparably to the state of the art on CVL, our proposed method yields about 0.21 absolute improvement in terms of mAP on the challenging bilingual ICDAR dataset.
[ { "created": "Mon, 26 Feb 2024 21:16:14 GMT", "version": "v1" } ]
2024-02-28
[ [ "Christlein", "Vincent", "" ], [ "Bernecker", "David", "" ], [ "Maier", "Andreas", "" ], [ "Angelopoulou", "Elli", "" ] ]
2402.17124
Xinran Zhao
Xinran Zhao, Hongming Zhang, Xiaoman Pan, Wenlin Yao, Dong Yu, Tongshuang Wu, Jianshu Chen
Fact-and-Reflection (FaR) Improves Confidence Calibration of Large Language Models
17 pages, 10 figures
Findings of the Association for Computational Linguistics ACL 2024
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
For an LLM to be trustworthy, its confidence level should be well-calibrated with its actual performance. While it is now common knowledge that LLM performance is greatly affected by prompts, the confidence calibration of prompted LLMs has yet to be thoroughly explored. In this paper, we explore how different prompting strategies influence LLM confidence calibration and how it could be improved. We conduct extensive experiments on six prompting methods in the question-answering context, and we observe that, while these methods help improve the expected LLM calibration, they also trigger LLMs to be over-confident when responding to some instances. Inspired by human cognition, we propose Fact-and-Reflection (FaR) prompting, which improves LLM calibration in two steps. First, FaR elicits the known "facts" that are relevant to the input prompt from the LLM. Then it asks the model to "reflect" over them to generate the final answer. Experiments show that FaR prompting achieves significantly better calibration; it lowers the Expected Calibration Error by 23.5% on our multi-purpose QA tasks. Notably, FaR prompting even elicits the capability of verbally expressing concerns in less confident scenarios, which helps trigger retrieval augmentation for solving these harder instances.
[ { "created": "Tue, 27 Feb 2024 01:37:23 GMT", "version": "v1" }, { "created": "Sun, 8 Sep 2024 19:17:32 GMT", "version": "v2" } ]
2024-09-10
[ [ "Zhao", "Xinran", "" ], [ "Zhang", "Hongming", "" ], [ "Pan", "Xiaoman", "" ], [ "Yao", "Wenlin", "" ], [ "Yu", "Dong", "" ], [ "Wu", "Tongshuang", "" ], [ "Chen", "Jianshu", "" ] ]
2402.17256
Pei Wang
Pei Wang, Keqing He, Yejie Wang, Xiaoshuai Song, Yutao Mou, Jingang Wang, Yunsen Xian, Xunliang Cai, Weiran Xu
Beyond the Known: Investigating LLMs Performance on Out-of-Domain Intent Detection
null
LREC-COLING 2024
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Out-of-domain (OOD) intent detection aims to examine whether a user's query falls outside the predefined domain of the system, which is crucial for the proper functioning of task-oriented dialogue (TOD) systems. Previous methods address it by fine-tuning discriminative models. Recently, some studies have been exploring the application of large language models (LLMs), represented by ChatGPT, to various downstream tasks, but their ability on the OOD detection task is still unclear. This paper conducts a comprehensive evaluation of LLMs under various experimental settings and then outlines their strengths and weaknesses. We find that LLMs exhibit strong zero-shot and few-shot capabilities, but are still at a disadvantage compared to models fine-tuned with full resources. Going deeper, through a series of additional analysis experiments, we discuss and summarize the challenges faced by LLMs and provide guidance for future work, including injecting domain knowledge, strengthening knowledge transfer from IND (in-domain) to OOD, and understanding long instructions.
[ { "created": "Tue, 27 Feb 2024 07:02:10 GMT", "version": "v1" }, { "created": "Mon, 4 Mar 2024 06:04:32 GMT", "version": "v2" } ]
2024-03-05
[ [ "Wang", "Pei", "" ], [ "He", "Keqing", "" ], [ "Wang", "Yejie", "" ], [ "Song", "Xiaoshuai", "" ], [ "Mou", "Yutao", "" ], [ "Wang", "Jingang", "" ], [ "Xian", "Yunsen", "" ], [ "Cai", "Xunliang", "" ], [ "Xu", "Weiran", "" ] ]
2402.17372
Matteo Bastico
Matteo Bastico, Etienne Decenci\`ere, Laurent Cort\'e, Yannick Tillier, David Ryckelynck
Coupled Laplacian Eigenmaps for Locally-Aware 3D Rigid Point Cloud Matching
This paper has been accepted at Computer Vision and Patter Recognition (CVPR) 2024
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 3447-3458
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Point cloud matching, a crucial technique in the computer vision, medical, and robotics fields, is primarily concerned with finding correspondences between pairs of point clouds or voxels. In some practical scenarios, emphasizing local differences is crucial for accurately identifying a correct match, thereby enhancing the overall robustness and reliability of the matching process. Commonly used shape descriptors have several limitations and often fail to provide meaningful local insights about the paired geometries. In this work, we propose a new technique, based on graph Laplacian eigenmaps, to match point clouds by taking into account fine local structures. To deal with the order and sign ambiguity of Laplacian eigenmaps, we introduce a new operator, called Coupled Laplacian (https://github.com/matteo-bastico/CoupLap), that allows us to easily generate aligned eigenspaces for multiple registered geometries. We show that the similarity between those aligned high-dimensional spaces provides a locally meaningful score for matching shapes. We first evaluate the performance of the proposed technique in a point-wise manner, focusing on the task of object anomaly localization on the MVTec 3D-AD dataset. Additionally, we define a new medical task, called automatic Bone Side Estimation (BSE), which we address through a global similarity score derived from coupled eigenspaces. To test it, we propose a benchmark collecting bone surface structures from various public datasets. Our matching technique, based on the Coupled Laplacian, outperforms other methods by reaching impressive accuracy on both tasks.
[ { "created": "Tue, 27 Feb 2024 10:10:12 GMT", "version": "v1" }, { "created": "Fri, 26 Jul 2024 14:48:04 GMT", "version": "v2" } ]
2024-07-29
[ [ "Bastico", "Matteo", "" ], [ "Decencière", "Etienne", "" ], [ "Corté", "Laurent", "" ], [ "Tillier", "Yannick", "" ], [ "Ryckelynck", "David", "" ] ]
2402.17386
Emanuel Pfeffer
Emanuel Pfeffer and Michael Wa{\ss}mer and Yee-Ying Cung and Roger Wolf and Ulrich Husemann
A case study of sending graph neural networks back to the test bench for applications in high-energy particle physics
null
Comput Softw Big Sci 8, 13 (2024)
10.1007/s41781-024-00122-3
null
hep-ph cs.AI hep-ex
http://creativecommons.org/licenses/by/4.0/
In high-energy particle collisions, the primary collision products usually decay further, resulting in tree-like, hierarchical structures with a priori unknown multiplicity. At the stable-particle level, all decay products of a collision form permutation-invariant sets of final-state objects. The analogy to mathematical graphs gives rise to the idea that graph neural networks (GNNs), which naturally resemble these properties, should be best suited to address many tasks related to high-energy particle physics. In this paper we describe a benchmark test of a typical GNN against neural networks of the well-established deep fully-connected feed-forward architecture. We aim to make this comparison maximally unbiased in terms of the numbers of nodes, hidden layers, and trainable parameters of the neural networks under study. As a physics case, we use the classification of the final state X produced in association with top quark-antiquark pairs in proton-proton collisions at the Large Hadron Collider at CERN, where X stands for a bottom quark-antiquark pair produced either non-resonantly or through the decay of an intermediately produced Z or Higgs boson.
[ { "created": "Tue, 27 Feb 2024 10:26:25 GMT", "version": "v1" } ]
2024-07-15
[ [ "Pfeffer", "Emanuel", "" ], [ "Waßmer", "Michael", "" ], [ "Cung", "Yee-Ying", "" ], [ "Wolf", "Roger", "" ], [ "Husemann", "Ulrich", "" ] ]
2402.17392
Alexandra Kogan
Vasilii A. Gromov, Alexandra S. Kogan
Spot the bot: Coarse-Grained Partition of Semantic Paths for Bots and Humans
null
Pattern Recognition and Machine Intelligence, 2023. pp. 348--355
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Nowadays, technology is rapidly advancing: bots are writing comments, articles, and reviews. It is therefore crucial to know whether a text was written by a human or by a bot. This paper focuses on comparing the structures of coarse-grained partitions of semantic paths for human-written and bot-generated texts. We compare the clusterizations of datasets of n-grams from literary texts and texts generated by several bots. The hypothesis is that the structures and clusterizations are different. Our research supports the hypothesis. As the semantic structure may differ across languages, we investigate Russian, English, German, and Vietnamese.
[ { "created": "Tue, 27 Feb 2024 10:38:37 GMT", "version": "v1" } ]
2024-03-03
[ [ "Gromov", "Vasilii A.", "" ], [ "Kogan", "Alexandra S.", "" ] ]
2402.17433
Jiaqi Wang
Jiaqi Wang, Zhenxi Song, Zhengyu Ma, Xipeng Qiu, Min Zhang, Zhiguo Zhang
Enhancing EEG-to-Text Decoding through Transferable Representations from Pre-trained Contrastive EEG-Text Masked Autoencoder
8 pages (excluding references), accepted by ACL 2024 Main Conference
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, Volume 1, pages 7278-7292, August 2024, Bangkok, Thailand
10.18653/v1/2024.acl-long.393
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconstructing natural language from non-invasive electroencephalography (EEG) holds great promise as a language decoding technology for brain-computer interfaces (BCIs). However, EEG-based language decoding is still in its nascent stages, facing several technical issues such as: 1) the absence of a hybrid strategy that can effectively integrate cross-modality (between EEG and text) self-learning with intra-modality self-reconstruction of EEG features or textual sequences; 2) the under-utilization of large language models (LLMs) to enhance EEG-based language decoding. To address the above issues, we propose the Contrastive EEG-Text Masked Autoencoder (CET-MAE), a novel model that orchestrates compound self-supervised learning across and within EEG and text through a dedicated multi-stream encoder. Furthermore, we develop a framework called E2T-PTR (EEG-to-Text decoding using Pretrained Transferable Representations), which leverages pre-trained modules alongside the EEG stream from CET-MAE and further enables an LLM (specifically BART) to decode text from EEG sequences. Comprehensive experiments conducted on the popular text-evoked EEG database, ZuCo, demonstrate the superiority of E2T-PTR, which outperforms the state of the art in ROUGE-1 F1 and BLEU-4 scores by 8.34% and 32.21%, respectively. These results indicate significant advancements in the field and underscore the proposed framework's potential to enable more powerful and widespread BCI applications.
[ { "created": "Tue, 27 Feb 2024 11:45:21 GMT", "version": "v1" }, { "created": "Wed, 28 Feb 2024 03:34:00 GMT", "version": "v2" }, { "created": "Mon, 10 Jun 2024 09:51:50 GMT", "version": "v3" } ]
2024-09-27
[ [ "Wang", "Jiaqi", "" ], [ "Song", "Zhenxi", "" ], [ "Ma", "Zhengyu", "" ], [ "Qiu", "Xipeng", "" ], [ "Zhang", "Min", "" ], [ "Zhang", "Zhiguo", "" ] ]
2402.17482
Saja Al Ani
Saja Al Ani, Joanne Cleland, Ahmed Zoha
Automated Classification of Phonetic Segments in Child Speech Using Raw Ultrasound Imaging
null
Proceedings of the 17th International Joint Conference on Biomedical Engineering Systems and Technologies - Volume 1: BIOIMAGING, 2024, pages 326-331
10.5220/0012592700003657
null
cs.SD cs.AI cs.CV eess.AS
http://creativecommons.org/licenses/by/4.0/
Speech sound disorder (SSD) is defined as a persistent impairment in speech sound production leading to reduced speech intelligibility and hindered verbal communication. Early recognition of children with SSD, timely intervention, and referral to speech and language therapists (SLTs) for treatment are crucial. Automated detection of speech impairment is regarded as an efficient method for examining and screening large populations. This study focuses on advancing the automatic diagnosis of SSD in early childhood by proposing a technical solution that integrates ultrasound tongue imaging (UTI) with deep-learning models. The introduced FusionNet model combines UTI data with extracted texture features to classify UTI. The overarching aim is to elevate the accuracy and efficiency of UTI analysis, particularly for classifying speech sounds associated with SSD. This study compared the FusionNet approach with standard deep-learning methodologies, highlighting the clear improvements of the FusionNet model in UTI classification and the potential of multi-learning to improve UTI classification in speech therapy clinics.
[ { "created": "Tue, 27 Feb 2024 13:08:34 GMT", "version": "v1" } ]
2024-02-28
[ [ "Ani", "Saja Al", "" ], [ "Cleland", "Joanne", "" ], [ "Zoha", "Ahmed", "" ] ]
2402.17706
Junzhe Chen
Junzhe Chen, Qiao Yang, Senmao Tian, Shunli Zhang
Adaptive quantization with mixed-precision based on low-cost proxy
Accepted by ICASSP 2024
ICASSP2024
10.1109/ICASSP48485.2024.10447866
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is critical to deploy complicated neural network models on hardware with limited resources. This paper proposes a novel model quantization method, named Low-Cost Proxy-Based Adaptive Mixed-Precision Model Quantization (LCPAQ), which contains three key modules. The hardware-aware module is designed by considering the hardware limitations, while an adaptive mixed-precision quantization module is developed to evaluate quantization sensitivity using the Hessian matrix and Pareto frontier techniques. Integer linear programming is used to fine-tune the quantization across different layers. Then a low-cost proxy neural architecture search module efficiently explores the ideal quantization hyperparameters. Experiments on ImageNet demonstrate that the proposed LCPAQ achieves comparable or superior quantization accuracy to existing mixed-precision models. Notably, LCPAQ requires only 1/200 of the search time of existing methods, which provides a shortcut for practical quantization on resource-limited devices.
[ { "created": "Tue, 27 Feb 2024 17:36:01 GMT", "version": "v1" } ]
2024-04-04
[ [ "Chen", "Junzhe", "" ], [ "Yang", "Qiao", "" ], [ "Tian", "Senmao", "" ], [ "Zhang", "Shunli", "" ] ]
2402.17903
Jingying Wang
Jingying Wang, Haoran Tang, Taylor Kantor, Tandis Soltani, Vitaliy Popov and Xu Wang
Surgment: Segmentation-enabled Semantic Search and Creation of Visual Question and Feedback to Support Video-Based Surgery Learning
null
CHI'2024
10.1145/3613904.3642587
null
cs.HC cs.CV
http://creativecommons.org/licenses/by/4.0/
Videos are prominent learning materials to prepare surgical trainees before they enter the operating room (OR). In this work, we explore techniques to enrich the video-based surgery learning experience. We propose Surgment, a system that helps expert surgeons create exercises with feedback based on surgery recordings. Surgment is powered by a few-shot-learning-based pipeline (SegGPT+SAM) to segment surgery scenes, achieving an accuracy of 92\%. The segmentation pipeline enables functionalities to create visual questions and feedback desired by surgeons from a formative study. Surgment enables surgeons to 1) retrieve frames of interest through sketches, and 2) design exercises that target specific anatomical components and offer visual feedback. In an evaluation study with 11 surgeons, participants applauded the search-by-sketch approach for identifying frames of interest and found the resulting image-based questions and feedback to be of high educational value.
[ { "created": "Tue, 27 Feb 2024 21:42:23 GMT", "version": "v1" } ]
2024-06-27
[ [ "Wang", "Jingying", "" ], [ "Tang", "Haoran", "" ], [ "Kantor", "Taylor", "" ], [ "Soltani", "Tandis", "" ], [ "Popov", "Vitaliy", "" ], [ "Wang", "Xu", "" ] ]
2402.17944
Weijie Xu
Xi Fang, Weijie Xu, Fiona Anting Tan, Jiani Zhang, Ziqing Hu, Yanjun Qi, Scott Nickleach, Diego Socolinsky, Srinivasan Sengamedu, Christos Faloutsos
Large Language Models(LLMs) on Tabular Data: Prediction, Generation, and Understanding -- A Survey
41 pages, 4 figures, 8 tables
TMLR 2024
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Recent breakthroughs in large language modeling have facilitated rigorous exploration of their application in diverse tasks related to tabular data modeling, such as prediction, tabular data synthesis, question answering, and table understanding. Each task presents unique challenges and opportunities. However, there is currently a lack of a comprehensive review that summarizes and compares the key techniques, metrics, datasets, models, and optimization approaches in this research domain. This survey aims to address this gap by consolidating recent progress in these areas, offering a thorough survey and taxonomy of the datasets, metrics, and methodologies utilized. It identifies strengths, limitations, unexplored territories, and gaps in the existing literature, while providing some insights for future research directions in this vital and rapidly evolving field. It also provides references to relevant code and datasets. Through this comprehensive review, we hope to provide interested readers with pertinent references and insightful perspectives, empowering them with the necessary tools and knowledge to effectively navigate and address the prevailing challenges in the field.
[ { "created": "Tue, 27 Feb 2024 23:59:01 GMT", "version": "v1" }, { "created": "Fri, 1 Mar 2024 00:14:42 GMT", "version": "v2" }, { "created": "Mon, 10 Jun 2024 17:41:32 GMT", "version": "v3" }, { "created": "Fri, 21 Jun 2024 19:59:54 GMT", "version": "v4" } ]
2024-06-25
[ [ "Fang", "Xi", "" ], [ "Xu", "Weijie", "" ], [ "Tan", "Fiona Anting", "" ], [ "Zhang", "Jiani", "" ], [ "Hu", "Ziqing", "" ], [ "Qi", "Yanjun", "" ], [ "Nickleach", "Scott", "" ], [ "Socolinsky", "Diego", "" ], [ "Sengamedu", "Srinivasan", "" ], [ "Faloutsos", "Christos", "" ] ]
2402.18109
Qinglin Liu
Qinglin Liu, Xiaoqian Lv, Wei Yu, Changyong Guo, Shengping Zhang
Dual-Context Aggregation for Universal Image Matting
null
Multimed Tools Appl (2023)
10.1007/s11042-023-17517-w
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Natural image matting aims to estimate the alpha matte of the foreground from a given image. Various approaches have been explored to address this problem, such as interactive matting methods that use guidance such as clicks or trimaps, and automatic matting methods tailored to specific objects. However, existing matting methods are designed for specific objects or guidance, neglecting the common requirement of aggregating global and local contexts in image matting. As a result, these methods often encounter challenges in accurately identifying the foreground and generating precise boundaries, which limits their effectiveness in unforeseen scenarios. In this paper, we propose a simple and universal matting framework, named Dual-Context Aggregation Matting (DCAM), which enables robust image matting with arbitrary guidance or without guidance. Specifically, DCAM first adopts a semantic backbone network to extract low-level features and context features from the input image and guidance. Then, we introduce a dual-context aggregation network that incorporates global object aggregators and local appearance aggregators to iteratively refine the extracted context features. By performing both global contour segmentation and local boundary refinement, DCAM exhibits robustness to diverse types of guidance and objects. Finally, we adopt a matting decoder network to fuse the low-level features and the refined context features for alpha matte estimation. Experimental results on five matting datasets demonstrate that the proposed DCAM outperforms state-of-the-art matting methods in both automatic matting and interactive matting tasks, which highlights the strong universality and high performance of DCAM. The source code is available at \url{https://github.com/Windaway/DCAM}.
[ { "created": "Wed, 28 Feb 2024 06:56:24 GMT", "version": "v1" } ]
2024-02-29
[ [ "Liu", "Qinglin", "" ], [ "Lv", "Xiaoqian", "" ], [ "Yu", "Wei", "" ], [ "Guo", "Changyong", "" ], [ "Zhang", "Shengping", "" ] ]
2402.18115
Minghan Li
Minghan Li and Shuai Li and Xindong Zhang and Lei Zhang
UniVS: Unified and Universal Video Segmentation with Prompts as Queries
21 pages, 11 figures, 10 tables, CVPR2024
The IEEE/CVF Conference on Computer Vision and Pattern Recognition 2024
null
null
cs.CV cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Despite the recent advances in unified image segmentation (IS), developing a unified video segmentation (VS) model remains a challenge. This is mainly because generic category-specified VS tasks need to detect all objects and track them across consecutive frames, while prompt-guided VS tasks require re-identifying the target with visual/text prompts throughout the entire video, making it hard to handle the different tasks with the same architecture. We make an attempt to address these issues and present a novel unified VS architecture, namely UniVS, by using prompts as queries. UniVS averages the prompt features of the target from previous frames as its initial query to explicitly decode masks, and introduces a target-wise prompt cross-attention layer in the mask decoder to integrate prompt features in the memory pool. By taking the predicted masks of entities from previous frames as their visual prompts, UniVS converts different VS tasks into prompt-guided target segmentation, eliminating the heuristic inter-frame matching process. Our framework not only unifies the different VS tasks but also naturally achieves universal training and testing, ensuring robust performance across different scenarios. UniVS shows a commendable balance between performance and universality on 10 challenging VS benchmarks, covering video instance, semantic, panoptic, object, and referring segmentation tasks. Code can be found at \url{https://github.com/MinghanLi/UniVS}.
[ { "created": "Wed, 28 Feb 2024 07:05:27 GMT", "version": "v1" }, { "created": "Mon, 10 Jun 2024 10:52:54 GMT", "version": "v2" } ]
2024-06-11
[ [ "Li", "Minghan", "" ], [ "Li", "Shuai", "" ], [ "Zhang", "Xindong", "" ], [ "Zhang", "Lei", "" ] ]
2402.18171
Zihua Liu
Zihua Liu, Songyan Zhang, Zhicheng Wang and Masatoshi Okutomi
Digging Into Normal Incorporated Stereo Matching
null
Proceedings of the 30th ACM International Conference on Multimedia (ACMMM2022), pp.6050-6060, October 2022
10.1145/3503161.3548312
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
Despite the remarkable progress facilitated by learning-based stereo-matching algorithms, disparity estimation in low-texture, occluded, and bordered regions still remains a bottleneck that limits the performance. To tackle these challenges, geometric guidance like plane information is necessary as it provides intuitive guidance about disparity consistency and affinity similarity. In this paper, we propose a normal incorporated joint learning framework consisting of two specific modules named non-local disparity propagation (NDP) and affinity-aware residual learning (ARL). The estimated normal map is first utilized for calculating a non-local affinity matrix and a non-local offset to perform spatial propagation at the disparity level. To enhance geometric consistency, especially in low-texture regions, the estimated normal map is then leveraged to calculate a local affinity matrix, providing the residual learning with information about where the correction should focus and thus improving the residual learning efficiency. Extensive experiments on several public datasets including Scene Flow, KITTI 2015, and Middlebury 2014 validate the effectiveness of our proposed method. By the time we finished this work, our approach ranked 1st for stereo matching across foreground pixels on the KITTI 2015 dataset and 3rd on the Scene Flow dataset among all the published works.
[ { "created": "Wed, 28 Feb 2024 09:01:50 GMT", "version": "v1" } ]
2024-02-29
[ [ "Liu", "Zihua", "" ], [ "Zhang", "Songyan", "" ], [ "Wang", "Zhicheng", "" ], [ "Okutomi", "Masatoshi", "" ] ]
2402.18175
Zhuofeng Wu
Zhuofeng Wu, Yusuke Monno, and Masatoshi Okutomi
Self-Supervised Spatially Variant PSF Estimation for Aberration-Aware Depth-from-Defocus
null
International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024
null
null
cs.CV eess.IV
http://creativecommons.org/publicdomain/zero/1.0/
In this paper, we address the task of aberration-aware depth-from-defocus (DfD), which takes account of spatially variant point spread functions (PSFs) of a real camera. To effectively obtain the spatially variant PSFs of a real camera without requiring any ground-truth PSFs, we propose a novel self-supervised learning method that leverages the pair of real sharp and blurred images, which can be easily captured by changing the aperture setting of the camera. In our PSF estimation, we assume rotationally symmetric PSFs and introduce the polar coordinate system to more accurately learn the PSF estimation network. We also handle the focus breathing phenomenon that occurs in real DfD situations. Experimental results on synthetic and real data demonstrate the effectiveness of our method regarding both the PSF estimation and the depth estimation.
[ { "created": "Wed, 28 Feb 2024 09:07:26 GMT", "version": "v1" } ]
2024-02-29
[ [ "Wu", "Zhuofeng", "" ], [ "Monno", "Yusuke", "" ], [ "Okutomi", "Masatoshi", "" ] ]
2402.18178
Wenjiao Bian
Wenjiao Bian, Yusuke Monno, Masatoshi Okutomi
Reflection Removal Using Recurrent Polarization-to-Polarization Network
null
ICASSP 2024
null
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
This paper addresses reflection removal, which is the task of separating reflection components from a captured image and deriving the image with only transmission components. Considering that the existence of the reflection changes the polarization state of a scene, some existing methods have exploited polarized images for reflection removal. While these methods take polarized images as inputs, they predict the reflection and the transmission directly as non-polarized intensity images. In contrast, we propose a polarization-to-polarization approach that takes polarized images as inputs and predicts "polarized" reflection and transmission images using two sequential networks to facilitate the separation task by utilizing the interrelated polarization information between the reflection and the transmission. We further adopt a recurrent framework, where the predicted reflection and transmission images are used to iteratively refine each other. Experimental results on a public dataset demonstrate that our method outperforms other state-of-the-art methods.
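A conceptual PyTorch sketch of the recurrent polarization-to-polarization loop, assuming two networks that each condition on the other component's current estimate; the single convolutions below are stand-ins for the paper's actual architectures.

```python
import torch
import torch.nn as nn

def recurrent_separation(reflection_net, transmission_net, pol_imgs, iters=3):
    # pol_imgs: (B, 4, H, W) captures at four polarizer angles
    refl = torch.zeros_like(pol_imgs)
    trans = torch.zeros_like(pol_imgs)
    for _ in range(iters):
        # each polarized estimate conditions the other on the next pass
        refl = reflection_net(torch.cat([pol_imgs, trans], dim=1))
        trans = transmission_net(torch.cat([pol_imgs, refl], dim=1))
    return refl, trans

net_r = nn.Conv2d(8, 4, 3, padding=1)   # stand-ins for the two networks
net_t = nn.Conv2d(8, 4, 3, padding=1)
r, t = recurrent_separation(net_r, net_t, torch.rand(1, 4, 32, 32))
print(r.shape, t.shape)
```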
[ { "created": "Wed, 28 Feb 2024 09:08:22 GMT", "version": "v1" } ]
2024-02-29
[ [ "Bian", "Wenjiao", "" ], [ "Monno", "Yusuke", "" ], [ "Okutomi", "Masatoshi", "" ] ]
2402.18181
Zihua Liu
Zihua Liu, Yizhou Li and Masatoshi Okutomi
CFDNet: A Generalizable Foggy Stereo Matching Network with Contrastive Feature Distillation
null
IEEE International Conference on Robotics and Automation (ICRA2024)
null
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
Stereo matching under foggy scenes remains a challenging task since the scattering effect degrades the visibility and results in less distinctive features for dense correspondence matching. While some previous learning-based methods integrated a physical scattering function for simultaneous stereo-matching and dehazing, simply removing fog might not aid depth estimation because the fog itself can provide crucial depth cues. In this work, we introduce a framework based on contrastive feature distillation (CFD). This strategy combines feature distillation from merged clean-fog features with contrastive learning, ensuring balanced dependence on fog depth hints and clean matching features. This framework helps to enhance model generalization across both clean and foggy environments. Comprehensive experiments on synthetic and real-world datasets affirm the superior strength and adaptability of our method.
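A rough sketch of a contrastive feature-distillation objective consistent with the description above, where student fog features are pulled toward fused clean-fog teacher features with an InfoNCE-style loss; this illustrates the idea, not CFDNet's exact formulation.

```python
import torch
import torch.nn.functional as F

def contrastive_distill_loss(student_fog, teacher_clean, teacher_fog, tau=0.07):
    # fuse clean and fog teacher features as the distillation target, keeping
    # a balanced dependence on fog depth hints and clean matching features
    target = F.normalize(0.5 * (teacher_clean + teacher_fog), dim=-1)  # (N, D)
    feat = F.normalize(student_fog, dim=-1)                            # (N, D)
    logits = feat @ target.t() / tau   # (N, N): diagonal entries are positives
    labels = torch.arange(feat.size(0))
    return F.cross_entropy(logits, labels)

s = torch.randn(8, 128)
tc, tf = torch.randn(8, 128), torch.randn(8, 128)
print(contrastive_distill_loss(s, tc, tf).item())
```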
[ { "created": "Wed, 28 Feb 2024 09:12:01 GMT", "version": "v1" }, { "created": "Thu, 29 Feb 2024 07:42:53 GMT", "version": "v2" } ]
2024-03-07
[ [ "Liu", "Zihua", "" ], [ "Li", "Yizhou", "" ], [ "Okutomi", "Masatoshi", "" ] ]
2402.18201
Sen Xu
Sen Xu, Shikui Wei, Tao Ruan, and Lixin Liao
Learning Invariant Inter-pixel Correlations for Superpixel Generation
Accepted by AAAI24
Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 6351-6359 (2024)
10.1609/aaai.v38i6.28454
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep superpixel algorithms have made remarkable strides by substituting hand-crafted features with learnable ones. Nevertheless, we observe that existing deep superpixel methods, serving as mid-level representation operations, remain sensitive to the statistical properties (e.g., color distribution, high-level semantics) embedded within the training dataset. Consequently, learnable features exhibit constrained discriminative capability, resulting in unsatisfactory pixel grouping performance, particularly in untrainable application scenarios. To address this issue, we propose the Content Disentangle Superpixel (CDS) algorithm to selectively separate the invariant inter-pixel correlations and statistical properties, i.e., style noise. Specifically, we first construct auxiliary modalities that are homologous to the original RGB image but have substantial stylistic variations. Then, driven by mutual information, we propose the local-grid correlation alignment across modalities to reduce the distribution discrepancy of adaptively selected features and learn invariant inter-pixel correlations. Afterwards, we perform global-style mutual information minimization to enforce the separation of invariant content and train data styles. The experimental results on four benchmark datasets demonstrate the superiority of our approach to existing state-of-the-art methods, regarding boundary adherence, generalization, and efficiency. Code and pre-trained model are available at https://github.com/rookiie/CDSpixel.
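A toy sketch of aligning local-grid inter-pixel correlations across two stylistically different modalities, one plausible reading of the alignment step described above; the grid size and the L1 criterion are assumptions, not the paper's mutual-information-driven objective.

```python
import torch
import torch.nn.functional as F

def grid_correlation(feats, grid=4):
    # feats: (B, D, H, W) -> per-cell pixel-pixel correlation matrices
    B, D, H, W = feats.shape
    cells = F.unfold(feats, kernel_size=grid, stride=grid)         # (B, D*g*g, L)
    cells = cells.view(B, D, grid * grid, -1).permute(0, 3, 2, 1)  # (B, L, g*g, D)
    cells = F.normalize(cells, dim=-1)
    return cells @ cells.transpose(-1, -2)                         # (B, L, g*g, g*g)

def alignment_loss(feats_rgb, feats_aux):
    # penalize differing inter-pixel correlations between the two modalities
    return F.l1_loss(grid_correlation(feats_rgb), grid_correlation(feats_aux))

a, b = torch.randn(2, 16, 32, 32), torch.randn(2, 16, 32, 32)
print(alignment_loss(a, b).item())
```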
[ { "created": "Wed, 28 Feb 2024 09:46:56 GMT", "version": "v1" }, { "created": "Tue, 9 Apr 2024 07:18:41 GMT", "version": "v2" } ]
2024-04-10
[ [ "Xu", "Sen", "" ], [ "Wei", "Shikui", "" ], [ "Ruan", "Tao", "" ], [ "Liao", "Lixin", "" ] ]
2402.18576
Sales Aribe Jr.
Sales Aribe Jr
Improved Forecasting Using a PSO-RDV Framework to Enhance Artificial Neural Network
9 pages, 4 figures, Published with International Journal of Engineering Trends and Technology (IJETT)
International Journal of Engineering Trends and Technology, vol. 72, no. 1, pp. 11-19, 2024
10.14445/22315381/IJETT-V72I1P102
null
cs.NE cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Decision making and planning have long relied heavily on AI-driven forecasts. The government and the general public are working to minimize the risks while maximizing benefits in the face of potential future public health uncertainties. This study used an improved method of forecasting utilizing the Random Descending Velocity Inertia Weight (RDV IW) technique to improve the convergence of Particle Swarm Optimization (PSO) and the accuracy of the Artificial Neural Network (ANN). The IW technique, inspired by the motion of a golf ball, modified the particles' velocities to follow a parabolically descending structure as they approached the solution point. Simulation results revealed that the proposed forecasting model with the [0.4, 0.9] combination of alpha and alpha_dump exhibits a 6.36% improvement in position error and an 11.75% improvement in computational time compared to the old model, thus improving its convergence. It reached the optimum level in fewer steps, a 12.50% improvement over the old model, since it provides better velocity averages when speed stabilization occurs at the 24th iteration. Meanwhile, the computed p-values for NRMSE (0.04889174), MAE (0.02829063), MAPE (0.02226053), WAPE (0.01701545), and R2 (0.00000021) of the proposed algorithm are less than the set 0.05 level of significance; thus, the values indicate a significant result in terms of accuracy performance. Applying the modified ANN-PSO using the RDV IW technique greatly improved the new HIV/AIDS forecasting model compared with the two models.
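A hedged NumPy sketch of PSO with a parabolically descending inertia weight; the exact RDV-IW update and the alpha/alpha_dump schedule are defined in the paper, so the w(t) used here is only an illustrative stand-in.

```python
import numpy as np

def pso(f, dim=2, n=30, iters=100, w0=0.9, w1=0.4, c1=2.0, c2=2.0):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for t in range(iters):
        # inertia weight descends parabolically from w0 toward w1
        w = w1 + (w0 - w1) * (1 - t / iters) ** 2
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, val = pso(lambda p: np.sum(p ** 2))  # toy sphere objective
print(best, val)
```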
[ { "created": "Wed, 10 Jan 2024 01:15:33 GMT", "version": "v1" } ]
2024-03-01
[ [ "Aribe", "Sales", "Jr" ] ]
2402.18589
Nikola Milo\v{s}evi\'c Dr
Milo\v{s} Ko\v{s}prdi\'c, Adela Ljaji\'c, Bojana Ba\v{s}aragin, Darija Medvecki, Nikola Milo\v{s}evi\'c
Verif.ai: Towards an Open-Source Scientific Generative Question-Answering System with Referenced and Verifiable Answers
Accepted as a short paper at The Sixteenth International Conference on Evolving Internet (INTERNET 2024)
The Sixteenth International Conference on Evolving Internet (INTERNET 2024)
null
null
cs.IR cs.AI cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
In this paper, we present the current progress of the project Verif.ai, an open-source scientific generative question-answering system with referenced and verified answers. The components of the system are (1) an information retrieval system combining semantic and lexical search techniques over scientific papers (PubMed), (2) a fine-tuned generative model (Mistral 7B) taking the top retrieved results and generating answers with references to the papers from which the claim was derived, and (3) a verification engine that cross-checks the generated claim and the abstract or paper from which the claim was derived, verifying whether there may have been any hallucinations in generating the claim. We are reinforcing the generative model by providing the abstract in context, but in addition, an independent set of methods and models verifies the answer and checks for hallucinations. Therefore, we believe that by using our method, we can make scientists more productive, while building trust in the use of generative language models in scientific environments, where hallucinations and misinformation cannot be tolerated.
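A schematic, runnable sketch of the three-stage pipeline described above; retrieve, generate, and verify are dummy stand-ins for the hybrid retriever, the fine-tuned Mistral 7B generator, and the verification engine, not the actual Verif.ai API.

```python
def answer_with_references(question, retrieve, generate, verify, k=10):
    docs = retrieve(question, top_k=k)               # semantic + lexical search
    claims = generate(question, docs)                # [(claim_text, source_doc)]
    checked = [(c, src, verify(c, src)) for c, src in claims]
    flagged = [c for c, _, ok in checked if not ok]  # possible hallucinations
    return checked, flagged

# dummy stand-ins so the sketch runs end to end
docs_db = {"d1": "Aspirin inhibits COX enzymes."}
retrieve = lambda q, top_k: list(docs_db.values())[:top_k]
generate = lambda q, docs: [("Aspirin inhibits COX enzymes.", docs[0])]
verify = lambda claim, src: claim in src             # toy entailment check
print(answer_with_references("How does aspirin work?", retrieve, generate, verify))
```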
[ { "created": "Fri, 9 Feb 2024 10:25:01 GMT", "version": "v1" } ]
2024-04-11
[ [ "Košprdić", "Miloš", "" ], [ "Ljajić", "Adela", "" ], [ "Bašaragin", "Bojana", "" ], [ "Medvecki", "Darija", "" ], [ "Milošević", "Nikola", "" ] ]
2402.18616
Jos\'e Ra\'ul Romero
Aurora Ram\'irez and Jos\'e Ra\'ul Romero and Carlos Garc\'ia-Mart\'inez and Sebasti\'an Ventura
JCLEC-MO: a Java suite for solving many-objective optimization engineering problems
41 pages, 5 figures, journal paper
Engineering Applications of Artificial Intelligence, Volume 81, May 2019, Pages 14-28
10.1016/j.engappai.2019.02.003
null
cs.NE cs.AI
http://creativecommons.org/licenses/by/4.0/
Although metaheuristics have been widely recognized as efficient techniques to solve real-world optimization problems, implementing them from scratch remains difficult for domain-specific experts without programming skills. In this scenario, metaheuristic optimization frameworks are a practical alternative as they provide a variety of algorithms composed of customized elements, as well as experimental support. Recently, many engineering problems have required the optimization of multiple or even many objectives, increasing the interest in appropriate metaheuristic algorithms and frameworks that might integrate new specific requirements while maintaining the generality and reusability principles they were conceived for. Based on this idea, this paper introduces JCLEC-MO, a Java framework for both multi- and many-objective optimization that enables engineers to apply, or adapt, a great number of multi-objective algorithms with little coding effort. A case study is developed and explained to show how JCLEC-MO can be used to address many-objective engineering problems, often requiring the inclusion of domain-specific elements, and to analyze experimental outcomes by means of conveniently connected R utilities.
[ { "created": "Wed, 28 Feb 2024 17:38:01 GMT", "version": "v1" } ]
2024-03-01
[ [ "Ramírez", "Aurora", "" ], [ "Romero", "José Raúl", "" ], [ "García-Martínez", "Carlos", "" ], [ "Ventura", "Sebastián", "" ] ]
2402.18743
Cristian Ramirez-Atencia
Cristian Ramirez-Atencia and Victor Rodriguez-Fernandez and David Camacho
A revision on Multi-Criteria Decision Making methods for Multi-UAV Mission Planning Support
Preprint submitted and accepted in Expert Systems with Applications
Expert Systems with Applications, Volume 160, 2020, 113708
10.1016/j.eswa.2020.113708
null
cs.AI cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the last decade, Unmanned Aerial Vehicles (UAVs) have been extensively used in many commercial applications due to their manageability and risk avoidance. One of the main problems considered is the Mission Planning for multiple UAVs, where a solution plan must be found satisfying the different constraints of the problem. This problem has multiple variables that must be optimized simultaneously, such as the makespan, the cost of the mission, or the risk. Therefore, the problem has many possible optimal solutions, and the operator must select the final solution to be executed among them. In order to reduce the workload of the operator in this decision process, a Decision Support System (DSS) becomes necessary. In this work, a DSS consisting of ranking and filtering systems, which order and reduce the optimal solutions, has been designed. With regard to the ranking system, a wide range of Multi-Criteria Decision Making (MCDM) methods, including some fuzzy MCDM, are compared on a multi-UAV mission planning scenario, in order to study which method could fit best in a multi-UAV decision support system. Expert operators have evaluated the solutions returned, and the results show, on the one hand, that fuzzy methods generally achieve better average scores, and on the other, that all of the tested methods perform better when the preferences of the operators are biased towards a specific variable, and worse when their preferences are balanced. For the filtering system, a similarity function based on the proximity of the solutions has been designed, and on top of that, a threshold is tuned empirically to decide how to filter solutions without losing much of the hypervolume of the space of solutions.
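A minimal NumPy sketch of the ranking-plus-filtering idea: a simple weighted-sum score stands in here for the many MCDM methods the paper compares, and a proximity-based similarity threshold filters near-duplicate plans; all names and parameters are illustrative.

```python
import numpy as np

def rank_and_filter(solutions, weights, sim_threshold=0.95):
    # solutions: (N, M) criteria values (lower is better), weights: (M,)
    span = solutions.max(0) - solutions.min(0) + 1e-9
    norm = (solutions - solutions.min(0)) / span
    scores = norm @ weights                  # weighted-sum MCDM ranking
    order = np.argsort(scores)               # best plans first
    kept = []
    for i in order:
        sim = [1 - np.linalg.norm(solutions[i] - solutions[j]) /
               (np.linalg.norm(solutions[i]) + np.linalg.norm(solutions[j]) + 1e-9)
               for j in kept]
        if not sim or max(sim) < sim_threshold:  # drop overly similar plans
            kept.append(i)
    return kept

plans = np.random.rand(20, 3)                # e.g. makespan, cost, risk
print(rank_and_filter(plans, np.array([0.5, 0.3, 0.2])))
```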
[ { "created": "Wed, 28 Feb 2024 22:54:08 GMT", "version": "v1" } ]
2024-03-01
[ [ "Ramirez-Atencia", "Cristian", "" ], [ "Rodriguez-Fernandez", "Victor", "" ], [ "Camacho", "David", "" ] ]
2402.18817
Binh M. Le
Binh M. Le, Simon S. Woo
Gradient Alignment for Cross-Domain Face Anti-Spoofing
null
The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Recent advancements in domain generalization (DG) for face anti-spoofing (FAS) have garnered considerable attention. Traditional methods have focused on designing learning objectives and additional modules to isolate domain-specific features while retaining domain-invariant characteristics in their representations. However, such approaches often lack guarantees of consistent maintenance of domain-invariant features or the complete removal of domain-specific features. Furthermore, most prior works on DG for FAS do not ensure convergence to a local flat minimum, which has been shown to be advantageous for DG. In this paper, we introduce GAC-FAS, a novel learning objective that encourages the model to converge towards an optimal flat minimum without necessitating additional learning modules. Unlike conventional sharpness-aware minimizers, GAC-FAS identifies ascending points for each domain and regulates the generalization gradient updates at these points to align coherently with empirical risk minimization (ERM) gradient updates. This unique approach specifically guides the model to be robust against domain shifts. We demonstrate the efficacy of GAC-FAS through rigorous testing on challenging cross-domain FAS datasets, where it establishes state-of-the-art performance. The code is available at https://github.com/leminhbinh0209/CVPR24-FAS.
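A simplified PyTorch sketch of per-domain SAM-style ascent followed by an aggregated descent step, a stand-in for the gradient-alignment idea; GAC-FAS's coherence regularization between domain gradients and the ERM gradient is richer than this averaging.

```python
import torch

def gac_step(model, loss_fn, domain_batches, opt, rho=0.05):
    grads = []
    for x, y in domain_batches:
        loss = loss_fn(model(x), y)
        g = torch.autograd.grad(loss, list(model.parameters()))
        scale = rho / (torch.cat([gi.flatten() for gi in g]).norm() + 1e-12)
        with torch.no_grad():  # move weights to the per-domain ascending point
            for p, gi in zip(model.parameters(), g):
                p.add_(gi, alpha=scale.item())
        loss_asc = loss_fn(model(x), y)
        g_asc = torch.autograd.grad(loss_asc, list(model.parameters()))
        with torch.no_grad():  # restore the original weights
            for p, gi in zip(model.parameters(), g):
                p.sub_(gi, alpha=scale.item())
        grads.append(g_asc)
    opt.zero_grad()
    for p, *gs in zip(model.parameters(), *grads):
        p.grad = torch.stack(gs).mean(0)  # aggregate per-domain ascent gradients
    opt.step()

model = torch.nn.Linear(8, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
batches = [(torch.randn(4, 8), torch.randint(0, 2, (4,))) for _ in range(2)]
gac_step(model, torch.nn.functional.cross_entropy, batches, opt)
```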
[ { "created": "Thu, 29 Feb 2024 02:57:44 GMT", "version": "v1" }, { "created": "Tue, 12 Mar 2024 01:54:21 GMT", "version": "v2" } ]
2024-03-13
[ [ "Le", "Binh M.", "" ], [ "Woo", "Simon S.", "" ] ]