categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence) |
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2407.08641 | null | null | http://arxiv.org/pdf/2407.08641v1 | 2024-07-11T16:22:13Z | 2024-07-11T16:22:13Z | How more data can hurt: Instability and regularization in
next-generation reservoir computing | It has been found recently that more data can, counter-intuitively, hurt the performance of deep neural networks. Here, we show that a more extreme version of the phenomenon occurs in data-driven models of dynamical systems. To elucidate the underlying mechanism, we focus on next-generation reservoir computing (NGRC) -- a popular framework for learning dynamics from data. We find that, despite learning a better representation of the flow map with more training data, NGRC can adopt an ill-conditioned "integrator" and lose stability. We link this data-induced instability to the auxiliary dimensions created by the delayed states in NGRC. Based on these findings, we propose simple strategies to mitigate the instability, either by increasing regularization strength in tandem with data size, or by carefully introducing noise during training. Our results highlight the importance of proper regularization in data-driven modeling of dynamical systems. | [
"['Yuanzhao Zhang' 'Sean P. Cornelius']"
] |
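The NGRC recipe in the entry above (arXiv:2407.08641) boils down to a ridge-regularized linear readout over polynomial features of delayed states. The sketch below is a minimal illustration of that pipeline, not the authors' code: the delay depth `k=2`, the quadratic feature set, and the one-step target convention are assumptions.

```python
import numpy as np

def ngrc_features(X, k=2):
    """Delay-embed a trajectory X of shape (T, d) with k lags, then append
    quadratic monomials of the embedded state (a common NGRC feature set)."""
    lin = np.hstack([X[i : len(X) - k + i + 1] for i in range(k)])  # (T-k+1, k*d)
    iu = np.triu_indices(lin.shape[1])
    quad = np.einsum("ti,tj->tij", lin, lin)[:, iu[0], iu[1]]
    return np.hstack([lin, quad])

def fit_readout(feats, targets, ridge=1e-6):
    """Ridge regression for the linear readout W with feats @ W ~= targets."""
    A = feats.T @ feats + ridge * np.eye(feats.shape[1])
    return np.linalg.solve(A, feats.T @ targets)

# usage: predict the next state from the current delayed features; the entry's
# proposed mitigation is to grow `ridge` with the dataset size (or to inject
# small training noise) to avoid the data-induced instability.
# feats = ngrc_features(X, k=2); W = fit_readout(feats[:-1], X[2:])
```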
null | null | 2407.08647 | null | null | http://arxiv.org/pdf/2407.08647v1 | 2024-07-11T16:25:21Z | 2024-07-11T16:25:21Z | From Real to Cloned Singer Identification | Cloned voices of popular singers sound increasingly realistic and have gained popularity over the past few years. They however pose a threat to the industry due to personality rights concerns. As such, methods to identify the original singer in synthetic voices are needed. In this paper, we investigate how singer identification methods could be used for such a task. We present three embedding models that are trained using a singer-level contrastive learning scheme, where positive pairs consist of segments with vocals from the same singers. These segments can be mixtures for the first model, vocals for the second, and both for the third. We demonstrate that all three models are highly capable of identifying real singers. However, their performance deteriorates when classifying cloned versions of singers in our evaluation set. This is especially true for models that use mixtures as an input. These findings highlight the need to understand the biases that exist within singer identification systems, and how they can influence the identification of voice deepfakes in music. | [
"['Dorian Desblancs' 'Gabriel Meseguer-Brocal' 'Romain Hennequin'\n 'Manuel Moussallam']"
] |
null | null | 2407.08649 | null | null | http://arxiv.org/pdf/2407.08649v1 | 2024-07-11T16:28:31Z | 2024-07-11T16:28:31Z | Confidence-based Estimators for Predictive Performance in Model
Monitoring | After a machine learning model has been deployed into production, its predictive performance needs to be monitored. Ideally, such monitoring can be carried out by comparing the model's predictions against ground truth labels. For this to be possible, the ground truth labels must be available relatively soon after inference. However, there are many use cases where ground truth labels are available only after a significant delay, or in the worst case, not at all. In such cases, directly monitoring the model's predictive performance is impossible. Recently, novel methods for estimating the predictive performance of a model when ground truth is unavailable have been developed. Many of these methods leverage model confidence or other uncertainty estimates and are experimentally compared against a naive baseline method, namely Average Confidence (AC), which estimates model accuracy as the average of confidence scores for a given set of predictions. However, until now, the theoretical properties of the AC method have not been properly explored. In this paper, we try to fill this gap by reviewing the AC method and showing that, under certain general assumptions, it is an unbiased and consistent estimator of model accuracy with many desirable properties. We also compare this baseline estimator against some more complex estimators empirically and show that in many cases the AC method is able to beat the others, although the comparative quality of the different estimators is heavily case-dependent. | [
"['Juhani Kivimäki' 'Jakub Białek' 'Jukka K. Nurminen' 'Wojtek Kuberski']"
] |
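The Average Confidence baseline discussed above (arXiv:2407.08649) is simple enough to state in a few lines. A minimal sketch with a toy example; under the entry's assumptions (roughly, a well-calibrated model), the estimate is unbiased for accuracy:

```python
import numpy as np

def average_confidence(probs):
    """AC estimator: predicted accuracy = mean of per-sample max confidence."""
    return probs.max(axis=1).mean()

# toy check against the true accuracy on labeled data
probs = np.array([[0.9, 0.1], [0.6, 0.4], [0.8, 0.2]])
labels = np.array([0, 1, 0])
true_acc = (probs.argmax(axis=1) == labels).mean()   # 0.667
print(average_confidence(probs), true_acc)           # 0.767 vs 0.667
```

The gap in the toy example comes from miscalibration; the entry's point is that when confidences are well calibrated, this naive average is a surprisingly strong estimator.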
null | null | 2407.08654 | null | null | http://arxiv.org/pdf/2407.08654v1 | 2024-07-11T16:37:15Z | 2024-07-11T16:37:15Z | Adaptive Smooth Non-Stationary Bandits | We study a $K$-armed non-stationary bandit model where rewards change smoothly, as captured by Hölder class assumptions on rewards as functions of time. Such smooth changes are parametrized by a Hölder exponent $\beta$ and coefficient $\lambda$. While various sub-cases of this general model have been studied in isolation, we first establish the minimax dynamic regret rate generally for all $K,\beta,\lambda$. Next, we show this optimal dynamic regret can be attained adaptively, without knowledge of $\beta,\lambda$. To contrast, even with parameter knowledge, upper bounds were only previously known for the limited regimes $\beta\leq 1$ and $\beta=2$ (Slivkins, 2014; Krishnamurthy and Gopalan, 2021; Manegueu et al., 2021; Jia et al., 2023). Thus, our work resolves open questions raised by these disparate threads of the literature. We also study the problem of attaining faster gap-dependent regret rates in non-stationary bandits. While such rates are long known to be impossible in general (Garivier and Moulines, 2011), we show that environments admitting a safe arm (Suk and Kpotufe, 2022) allow for much faster rates than the worst-case scaling with $\sqrt{T}$. While previous works in this direction focused on attaining the usual logarithmic regret bounds, as summed over stationary periods, our new gap-dependent rates reveal new optimistic regimes of non-stationarity where even the logarithmic bounds are pessimistic. We show our new gap-dependent rate is tight and that its achievability (i.e., as made possible by a safe arm) has a surprisingly simple and clean characterization within the smooth Hölder class model. | [
"['Joe Suk']"
] |
null | null | 2407.08655 | null | null | http://arxiv.org/pdf/2407.08655v1 | 2024-07-11T16:39:24Z | 2024-07-11T16:39:24Z | SPOCKMIP: Segmentation of Vessels in MRAs with Enhanced Continuity using
Maximum Intensity Projection as Loss | Identification of vessel structures of different sizes in biomedical images is crucial in the diagnosis of many neurodegenerative diseases. However, the sparsity of good-quality annotations of such images makes the task of vessel segmentation challenging. Deep learning offers an efficient way to segment vessels of different sizes by learning their high-level feature representations and the spatial continuity of such features across dimensions. Semi-supervised patch-based approaches have been effective in identifying small vessels of one to two voxels in diameter. This study focuses on improving the segmentation quality by considering the spatial correlation of the features using the Maximum Intensity Projection (MIP) as an additional loss criterion. Two methods are proposed with the incorporation of MIPs of label segmentation on the single (z-axis) and multiple perceivable axes of the 3D volume. The proposed MIP-based methods produce segmentations with improved vessel continuity, which is evident in visual examinations of ROIs. Patch-based training is improved by introducing an additional loss term, MIP loss, to penalise the predicted discontinuity of vessels. A training set of 14 volumes is selected from the StudyForrest dataset comprising 18 7-Tesla 3D Time-of-Flight (ToF) Magnetic Resonance Angiography (MRA) images. The generalisation performance of the method is evaluated using the other unseen volumes in the dataset. It is observed that the proposed method with multi-axes MIP loss produces better quality segmentations with a median Dice of $80.245 \pm 0.129$. Also, the method with single-axis MIP loss produces segmentations with a median Dice of $79.749 \pm 0.109$. Furthermore, a visual comparison of the ROIs in the predicted segmentation reveals a significant improvement in the continuity of the vessels when MIP loss is incorporated into training. | [
"['Chethan Radhakrishna' 'Karthikesh Varma Chintalapati'\n 'Sri Chandana Hudukula Ram Kumar' 'Raviteja Sutrave' 'Hendrik Mattern'\n 'Oliver Speck' 'Andreas Nürnberger' 'Soumick Chatterjee']"
] |
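One way to read the MIP loss described above (arXiv:2407.08655) is as an ordinary segmentation loss applied to maximum-intensity projections of the prediction and the label. This PyTorch sketch is an assumed formulation for illustration; the paper's exact loss weighting and axis handling may differ.

```python
import torch
import torch.nn.functional as F

def mip_loss(pred, target, axes=(2, 3, 4)):
    """Compare maximum-intensity projections of predicted and label volumes.
    pred, target: (B, 1, D, H, W) probabilities in [0, 1].
    axes=(2,) gives the single-axis (z) variant; (2, 3, 4) the multi-axis one."""
    losses = [
        F.binary_cross_entropy(pred.amax(dim=ax), target.amax(dim=ax))
        for ax in axes
    ]
    return sum(losses) / len(axes)

# combined objective (lam is a hypothetical weighting hyperparameter):
# total = F.binary_cross_entropy(pred, target) + lam * mip_loss(pred, target)
```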
null | null | 2407.08659 | null | null | http://arxiv.org/pdf/2407.08659v1 | 2024-07-11T16:46:04Z | 2024-07-11T16:46:04Z | Controlling the Fidelity and Diversity of Deep Generative Models via
Pseudo Density | We introduce an approach to bias deep generative models, such as GANs and diffusion models, towards generating data with either enhanced fidelity or increased diversity. Our approach involves manipulating the distribution of training and generated data through a novel metric for individual samples, named pseudo density, which is based on the nearest-neighbor information from real samples. Our approach offers three distinct techniques to adjust the fidelity and diversity of deep generative models: 1) Per-sample perturbation, enabling precise adjustments for individual samples towards either more common or more unique characteristics; 2) Importance sampling during model inference to enhance either fidelity or diversity in the generated data; 3) Fine-tuning with importance sampling, which guides the generative model to learn an adjusted distribution, thus controlling fidelity and diversity. Furthermore, our fine-tuning method demonstrates the ability to improve the Fréchet Inception Distance (FID) for pre-trained generative models with minimal iterations. | [
"['Shuangqi Li' 'Chen Liu' 'Tong Zhang' 'Hieu Le' 'Sabine Süsstrunk'\n 'Mathieu Salzmann']"
] |
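The abstract above (arXiv:2407.08659) defines pseudo density only as a nearest-neighbor-based score computed from real samples. Below is a plausible minimal version, purely for illustration; the inverse-mean-distance form and `k=5` are assumptions, not the paper's definition.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def pseudo_density(real_feats, query_feats, k=5):
    """Score each query by the inverse mean distance to its k nearest real
    samples: high scores ~ dense/typical regions, low scores ~ rare ones."""
    nn = NearestNeighbors(n_neighbors=k).fit(real_feats)
    dists, _ = nn.kneighbors(query_feats)        # (N, k) distances
    return 1.0 / (dists.mean(axis=1) + 1e-12)

# such a score can drive importance sampling: upweighting high-density samples
# pushes toward fidelity, upweighting low-density ones toward diversity.
```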
null | null | 2407.08668 | null | null | http://arxiv.org/pdf/2407.08668v1 | 2024-07-11T16:57:17Z | 2024-07-11T16:57:17Z | Estimation of spatio-temporal extremes via generative neural networks | Recent methods in modeling spatial extreme events have focused on utilizing parametric max-stable processes and their underlying dependence structure. In this work, we provide a unified approach for analyzing spatial extremes with little available data by estimating the distribution of model parameters or the spatial dependence directly. By employing recent developments in generative neural networks we predict a full sample-based distribution, allowing for direct assessment of uncertainty regarding model parameters or other parameter dependent functionals. We validate our method by fitting several simulated max-stable processes, showing high accuracy of the approach in both parameter estimation and uncertainty quantification. Additional robustness checks highlight the generalization and extrapolation capabilities of the model, while an application to precipitation extremes across Western Germany demonstrates the usability of our approach in real-world scenarios. | [
"['Christopher Bülte' 'Lisa Leimenstoll' 'Melanie Schienle']"
] |
null | null | 2407.08678 | null | null | http://arxiv.org/pdf/2407.08678v1 | 2024-07-11T17:12:42Z | 2024-07-11T17:12:42Z | How to beat a Bayesian adversary | Deep neural networks and other modern machine learning models are often susceptible to adversarial attacks. Indeed, an adversary may often be able to change a model's prediction through a small, directed perturbation of the model's input - an issue in safety-critical applications. Adversarially robust machine learning is usually based on a minmax optimisation problem that minimises the machine learning loss under maximisation-based adversarial attacks. In this work, we study adversaries that determine their attack using a Bayesian statistical approach rather than maximisation. The resulting Bayesian adversarial robustness problem is a relaxation of the usual minmax problem. To solve this problem, we propose Abram - a continuous-time particle system that shall approximate the gradient flow corresponding to the underlying learning problem. We show that Abram approximates a McKean-Vlasov process and justify the use of Abram by giving assumptions under which the McKean-Vlasov process finds the minimiser of the Bayesian adversarial robustness problem. We discuss two ways to discretise Abram and show its suitability in benchmark adversarial deep learning experiments. | [
"['Zihan Ding' 'Kexin Jin' 'Jonas Latz' 'Chenguang Liu']"
] |
null | null | 2407.08681 | null | null | http://arxiv.org/pdf/2407.08681v1 | 2024-07-11T17:14:19Z | 2024-07-11T17:14:19Z | Hardware Neural Control of CartPole and F1TENTH Race Car | Nonlinear model predictive control (NMPC) has proven to be an effective control method, but it is expensive to compute. This work demonstrates the use of hardware FPGA neural network controllers trained to imitate NMPC with supervised learning. We use these Neural Controllers (NCs), implemented on inexpensive embedded FPGA hardware, for high-frequency control of a physical cartpole and an F1TENTH race car. Our results show that the NCs match the control performance of the NMPCs in simulation and outperform them in reality, due to the faster control rate that is afforded by the quick FPGA NC inference. We demonstrate kHz control rates for a physical cartpole and offloading control to the FPGA hardware on the F1TENTH car. Code and hardware implementation for this paper are available at https://github.com/SensorsINI/Neural-Control-Tools. | [
"['Marcin Paluch' 'Florian Bolli' 'Xiang Deng' 'Antonio Rios Navarro'\n 'Chang Gao' 'Tobi Delbruck']"
] |
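The training step behind the entry above (arXiv:2407.08681) is standard behavior cloning: regress the NMPC's commands from logged states, then deploy the small network on the FPGA. A hedged sketch; the 6-dimensional state and single control output are illustrative guesses for a cartpole, not the paper's exact interface.

```python
import torch
import torch.nn as nn

# small MLP policy: state -> control command, trained on (state, action)
# pairs logged from the NMPC running in simulation
net = nn.Sequential(nn.Linear(6, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def train_step(states, nmpc_actions):
    """One supervised imitation step: minimize MSE to the NMPC's actions."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(states), nmpc_actions)
    loss.backward()
    opt.step()
    return loss.item()

# the trained weights would then be quantized and compiled for the FPGA,
# where the tiny network evaluates far faster than an NMPC solve.
```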
null | null | 2407.08689 | null | null | http://arxiv.org/pdf/2407.08689v1 | 2024-07-11T17:28:07Z | 2024-07-11T17:28:07Z | Operationalizing the Blueprint for an AI Bill of Rights: Recommendations
for Practitioners, Researchers, and Policy Makers | As Artificial Intelligence (AI) tools are increasingly employed in diverse real-world applications, there has been significant interest in regulating these tools. To this end, several regulatory frameworks have been introduced by different countries worldwide. For example, the European Union recently passed the AI Act, the White House issued an Executive Order on safe, secure, and trustworthy AI, and the White House Office of Science and Technology Policy issued the Blueprint for an AI Bill of Rights (AI BoR). Many of these frameworks emphasize the need for auditing and improving the trustworthiness of AI tools, underscoring the importance of safety, privacy, explainability, fairness, and human fallback options. Although these regulatory frameworks highlight the necessity of enforcement, practitioners often lack detailed guidance on implementing them. Furthermore, the extensive research on operationalizing each of these aspects is frequently buried in technical papers that are difficult for practitioners to parse. In this write-up, we address this shortcoming by providing an accessible overview of existing literature related to operationalizing regulatory principles. We provide easy-to-understand summaries of state-of-the-art literature and highlight various gaps that exist between regulatory guidelines and existing AI research, including the trade-offs that emerge during operationalization. We hope that this work not only serves as a starting point for practitioners interested in learning more about operationalizing the regulatory guidelines outlined in the Blueprint for an AI BoR but also provides researchers with a list of critical open problems and gaps between regulations and state-of-the-art AI research. Finally, we note that this is a working paper and we invite feedback in line with the purpose of this document as described in the introduction. | [
"['Alex Oesterling' 'Usha Bhalla' 'Suresh Venkatasubramanian'\n 'Himabindu Lakkaraju']"
] |
null | null | 2407.08693 | null | null | http://arxiv.org/pdf/2407.08693v2 | 2024-07-12T19:19:34Z | 2024-07-11T17:31:01Z | Robotic Control via Embodied Chain-of-Thought Reasoning | A key limitation of learned robot control policies is their inability to generalize outside their training data. Recent works on vision-language-action models (VLAs) have shown that the use of large, internet pre-trained vision-language models as the backbone of learned robot policies can substantially improve their robustness and generalization ability. Yet, one of the most exciting capabilities of large vision-language models in other domains is their ability to reason iteratively through complex problems. Can that same capability be brought into robotics to allow policies to improve performance by reasoning about a given task before acting? Naive use of "chain-of-thought" (CoT) style prompting is significantly less effective with standard VLAs because of the relatively simple training examples that are available to them. Additionally, purely semantic reasoning about sub-tasks, as is common in regular CoT, is insufficient for robot policies that need to ground their reasoning in sensory observations and the robot state. To this end, we introduce Embodied Chain-of-Thought Reasoning (ECoT) for VLAs, in which we train VLAs to perform multiple steps of reasoning about plans, sub-tasks, motions, and visually grounded features like object bounding boxes and end effector positions, before predicting the robot action. We design a scalable pipeline for generating synthetic training data for ECoT on large robot datasets. We demonstrate that ECoT increases the absolute success rate of OpenVLA, the current strongest open-source VLA policy, by 28% across challenging generalization tasks, without any additional robot training data. Additionally, ECoT makes it easier for humans to interpret a policy's failures and correct its behavior using natural language. | [
"['Michał Zawalski' 'William Chen' 'Karl Pertsch' 'Oier Mees'\n 'Chelsea Finn' 'Sergey Levine']"
] |
null | null | 2407.08694 | null | null | http://arxiv.org/pdf/2407.08694v1 | 2024-07-11T17:31:12Z | 2024-07-11T17:31:12Z | Cloud Atlas: Efficient Fault Localization for Cloud Systems using
Language Models and Causal Insight | Runtime failures and performance degradation are commonplace in modern cloud systems. For cloud providers, automatically determining the root cause of incidents is paramount to ensuring high reliability and availability, as prompt fault localization can enable faster diagnosis and triage for timely resolution. A compelling solution explored in recent work is causal reasoning using causal graphs to capture relationships between varied cloud system performance metrics. To be effective, however, systems developers must correctly define the causal graph of their system, which is a time-consuming, brittle, and challenging task that increases in difficulty for large and dynamic systems and requires domain expertise. Alternatively, automated data-driven approaches have limited efficacy for cloud systems due to the inherent rarity of incidents. In this work, we present Atlas, a novel approach to automatically synthesizing causal graphs for cloud systems. Atlas leverages large language models (LLMs) to generate causal graphs using system documentation, telemetry, and deployment feedback. Atlas is complementary to data-driven causal discovery techniques, and we further enhance Atlas with a data-driven validation step. We evaluate Atlas across a range of fault localization scenarios and demonstrate that Atlas is capable of generating causal graphs in a scalable and generalizable manner, with performance that far surpasses that of data-driven algorithms and is commensurate to the ground-truth baseline. | [
"['Zhiqiang Xie' 'Yujia Zheng' 'Lizi Ottens' 'Kun Zhang'\n 'Christos Kozyrakis' 'Jonathan Mace']"
] |
null | null | 2407.08699 | null | null | http://arxiv.org/pdf/2407.08699v1 | 2024-07-11T17:32:40Z | 2024-07-11T17:32:40Z | Mitigating Catastrophic Forgetting in Language Transfer via Model
Merging | As open-weight large language models (LLMs) achieve ever more impressive performance across a wide range of tasks in English, practitioners aim to adapt these models to different languages. However, such language adaptation is often accompanied by catastrophic forgetting of the base model's capabilities, severely limiting the usefulness of the resulting model. We address this issue by proposing Branch-and-Merge (BaM), a new adaptation method based on iteratively merging multiple models, each fine-tuned on a subset of the available training data. BaM is based on the insight that this yields lower magnitude but higher quality weight changes, reducing forgetting of the source domain while maintaining learning on the target domain. We demonstrate in an extensive empirical study on Bulgarian and German that BaM can significantly reduce forgetting while matching or even improving target domain performance compared to both standard continued pretraining and instruction finetuning across different model architectures. | [
"['Anton Alexandrov' 'Veselin Raychev' 'Mark Niklas Müller' 'Ce Zhang'\n 'Martin Vechev' 'Kristina Toutanova']"
] |
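The merge step of Branch-and-Merge above (arXiv:2407.08699) can be pictured as parameter averaging of models fine-tuned on different data shards, repeated over iterations. A minimal sketch under that assumption; `finetune` and the sharding loop are hypothetical placeholders, and the paper's merge operator may be more sophisticated than a uniform mean.

```python
import torch

def merge_state_dicts(state_dicts):
    """Uniformly average the parameters of several fine-tuned checkpoints."""
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

# BaM-style loop (sketch): branch the current model, fine-tune one copy per
# data shard, merge the branches back, repeat on the next round of data.
# for shards in data_rounds:                       # hypothetical iterator
#     branches = [finetune(model, shard) for shard in shards]
#     model.load_state_dict(merge_state_dicts([b.state_dict() for b in branches]))
```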
null | null | 2407.08700 | null | null | http://arxiv.org/pdf/2407.08700v1 | 2024-07-11T17:33:38Z | 2024-07-11T17:33:38Z | Flex-TPU: A Flexible TPU with Runtime Reconfigurable Dataflow
Architecture | Tensor processing units (TPUs) are one of the most well-known machine learning (ML) accelerators, utilized at large scale in data centers as well as in tiny ML applications. TPUs offer several improvements and advantages over conventional ML accelerators, like graphics processing units (GPUs), being designed specifically to perform the multiply-accumulate (MAC) operations required in the matrix-matrix and matrix-vector multiplies extensively present throughout the execution of deep neural networks (DNNs). Such improvements include maximizing data reuse and minimizing data transfer by leveraging the temporal dataflow paradigms provided by the systolic array architecture. While this design provides a significant performance benefit, the current implementations are restricted to a single dataflow consisting of either input, output, or weight stationary architectures. This can limit the achievable performance of DNN inference and reduce the utilization of compute units. Therefore, the work herein consists of developing a reconfigurable dataflow TPU, called the Flex-TPU, which can dynamically change the dataflow per layer during run-time. Our experiments thoroughly test the viability of the Flex-TPU, comparing it to conventional TPU designs across multiple well-known ML workloads. The results show that our Flex-TPU design achieves a significant performance increase of up to 2.75x compared to a conventional TPU, with only minor area and power overheads. | [
"['Mohammed Elbtity' 'Peyton Chandarana' 'Ramtin Zand']"
] |
null | null | 2407.08704 | null | null | http://arxiv.org/pdf/2407.08704v1 | 2024-07-11T17:40:39Z | 2024-07-11T17:40:39Z | Towards Efficient Deployment of Hybrid SNNs on Neuromorphic and Edge AI
Hardware | This paper explores the synergistic potential of neuromorphic and edge computing to create a versatile machine learning (ML) system tailored for processing data captured by dynamic vision sensors. We construct and train hybrid models, blending spiking neural networks (SNNs) and artificial neural networks (ANNs) using PyTorch and Lava frameworks. Our hybrid architecture integrates an SNN for temporal feature extraction and an ANN for classification. We delve into the challenges of deploying such hybrid structures on hardware. Specifically, we deploy individual components on Intel's Neuromorphic Processor Loihi (for SNN) and Jetson Nano (for ANN). We also propose an accumulator circuit to transfer data from the spiking to the non-spiking domain. Furthermore, we conduct comprehensive performance analyses of hybrid SNN-ANN models on a heterogeneous system of neuromorphic and edge AI hardware, evaluating accuracy, latency, power, and energy consumption. Our findings demonstrate that the hybrid spiking networks surpass the baseline ANN model across all metrics and outperform the baseline SNN model in accuracy and latency. | [
"['James Seekings' 'Peyton Chandarana' 'Mahsa Ardakani'\n 'MohammadReza Mohammadi' 'Ramtin Zand']"
] |
null | null | 2407.08707 | null | null | http://arxiv.org/pdf/2407.08707v1 | 2024-07-11T17:44:41Z | 2024-07-11T17:44:41Z | Extracting Training Data from Document-Based VQA Models | Vision-Language Models (VLMs) have made remarkable progress in document-based Visual Question Answering (i.e., responding to queries about the contents of an input document provided as an image). In this work, we show these models can memorize responses for training samples and regurgitate them even when the relevant visual information has been removed. This includes Personally Identifiable Information (PII) repeated once in the training set, indicating these models could divulge memorised sensitive information and therefore pose a privacy risk. We quantitatively measure the extractability of information in controlled experiments and differentiate between cases where it arises from generalization capabilities or from memorization. We further investigate the factors that influence memorization across multiple state-of-the-art models and propose an effective heuristic countermeasure that empirically prevents the extractability of PII. | [
"['Francesco Pinto' 'Nathalie Rauschmayr' 'Florian Tramèr' 'Philip Torr'\n 'Federico Tombari']"
] |
null | null | 2407.08708 | null | null | http://arxiv.org/pdf/2407.08708v2 | 2024-07-13T10:44:58Z | 2024-07-11T17:46:21Z | eyeballvul: a future-proof benchmark for vulnerability detection in the
wild | Long contexts of recent LLMs have enabled a new use case: asking models to find security vulnerabilities in entire codebases. To evaluate model performance on this task, we introduce eyeballvul: a benchmark designed to test the vulnerability detection capabilities of language models at scale, that is sourced and updated weekly from the stream of published vulnerabilities in open-source repositories. The benchmark consists of a list of revisions in different repositories, each associated with the list of known vulnerabilities present at that revision. An LLM-based scorer is used to compare the list of possible vulnerabilities returned by a model to the list of known vulnerabilities for each revision. As of July 2024, eyeballvul contains 24,000+ vulnerabilities across 6,000+ revisions and 5,000+ repositories, and is around 55GB in size. | [
"['Timothee Chauvin']"
] |
null | null | 2407.08715 | null | null | http://arxiv.org/pdf/2407.08715v1 | 2024-07-11T17:50:31Z | 2024-07-11T17:50:31Z | Sensor-Aware Classifiers for Energy-Efficient Time Series Applications
on IoT Devices | Time-series data processing is an important component of many real-world applications, such as health monitoring, environmental monitoring, and digital agriculture. These applications collect distinct windows of sensor data (e.g., a few seconds) and process them to assess the environment. Machine learning (ML) models are being employed in time-series applications due to their generalization abilities for classification. State-of-the-art time-series applications wait for the entire sensor data window to become available before processing the data using ML algorithms, resulting in high sensor energy consumption. However, not all situations require processing the full sensor window to make accurate inference. For instance, in activity recognition, sitting and standing activities can be inferred with partial windows. Using this insight, we propose to employ early exit classifiers with partial sensor windows to minimize energy consumption while maintaining accuracy. Specifically, we first utilize multiple early exits with successively increasing amounts of data as they become available in a window. If early exits provide inference with high confidence, we return the label and enter low power mode for sensors. The proposed approach has the potential to enable significant energy savings in time series applications. We utilize neural networks and random forest classifiers to evaluate our approach. Our evaluations with six datasets show that the proposed approach enables up to 50-60% energy savings on average without any impact on accuracy. The energy savings can enable time-series applications in remote locations with limited energy availability. | [
"['Dina Hussein' 'Lubah Nelson' 'Ganapati Bhat']"
] |
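The inference loop described above (arXiv:2407.08715) pairs partial sensor windows with a chain of classifiers and stops at the first confident exit. The sketch below is schematic: the `sensor` object and its `read_until`/`sleep` methods are hypothetical stand-ins for a device API.

```python
def classify_with_early_exit(sensor, exits, threshold=0.9):
    """`exits` is a non-empty list of (window_fraction, classifier) pairs in
    increasing order, e.g. [(0.25, c1), (0.5, c2), (1.0, c3)]; each classifier
    returns (label, confidence) for the data collected so far."""
    for fraction, clf in exits:
        window = sensor.read_until(fraction)   # hypothetical: block until this
        label, confidence = clf(window)        # fraction of the window arrives
        if confidence >= threshold:
            sensor.sleep()                     # confident early: power down now
            return label
    return label                               # fall back to full-window result
```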
null | null | 2407.08722 | null | null | http://arxiv.org/pdf/2407.08722v1 | 2024-07-11T17:55:49Z | 2024-07-11T17:55:49Z | Unifying 3D Representation and Control of Diverse Robots with a Single
Camera | Mirroring the complex structures and diverse functions of natural organisms is a long-standing challenge in robotics. Modern fabrication techniques have dramatically expanded feasible hardware, yet deploying these systems requires control software to translate desired motions into actuator commands. While conventional robots can easily be modeled as rigid links connected via joints, it remains an open challenge to model and control bio-inspired robots that are often multi-material or soft, lack sensing capabilities, and may change their material properties with use. Here, we introduce Neural Jacobian Fields, an architecture that autonomously learns to model and control robots from vision alone. Our approach makes no assumptions about the robot's materials, actuation, or sensing, requires only a single camera for control, and learns to control the robot without expert intervention by observing the execution of random commands. We demonstrate our method on a diverse set of robot manipulators, varying in actuation, materials, fabrication, and cost. Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot. By enabling robot control with a generic camera as the only sensor, we anticipate our work will dramatically broaden the design space of robotic systems and serve as a starting point for lowering the barrier to robotic automation. | [
"['Sizhe Lester Li' 'Annan Zhang' 'Boyuan Chen' 'Hanna Matusik' 'Chao Liu'\n 'Daniela Rus' 'Vincent Sitzmann']"
] |
null | null | 2407.08723 | null | null | http://arxiv.org/pdf/2407.08723v1 | 2024-07-11T17:56:03Z | 2024-07-11T17:56:03Z | Topological Generalization Bounds for Discrete-Time Stochastic
Optimization Algorithms | We present a novel set of rigorous and computationally efficient topology-based complexity notions that exhibit a strong correlation with the generalization gap in modern deep neural networks (DNNs). DNNs show remarkable generalization properties, yet the source of these capabilities remains elusive, defying the established statistical learning theory. Recent studies have revealed that properties of training trajectories can be indicative of generalization. Building on this insight, state-of-the-art methods have leveraged the topology of these trajectories, particularly their fractal dimension, to quantify generalization. Most existing works compute this quantity by assuming continuous- or infinite-time training dynamics, complicating the development of practical estimators capable of accurately predicting generalization without access to test data. In this paper, we respect the discrete-time nature of training trajectories and investigate the underlying topological quantities that can be amenable to topological data analysis tools. This leads to a new family of reliable topological complexity measures that provably bound the generalization error, eliminating the need for restrictive geometric assumptions. These measures are computationally friendly, enabling us to propose simple yet effective algorithms for computing generalization indices. Moreover, our flexible framework can be extended to different domains, tasks, and architectures. Our experimental results demonstrate that our new complexity measures correlate highly with generalization error in industry-standard architectures such as transformers and deep graph networks. Our approach consistently outperforms existing topological bounds across a wide range of datasets, models, and optimizers, highlighting the practical relevance and effectiveness of our complexity measures. | [
"['Rayna Andreeva' 'Benjamin Dupuis' 'Rik Sarkar' 'Tolga Birdal'\n 'Umut Şimşekli']"
] |
null | null | 2407.08729 | null | null | http://arxiv.org/pdf/2407.08729v1 | 2024-07-11T17:58:10Z | 2024-07-11T17:58:10Z | BiEquiFormer: Bi-Equivariant Representations for Global Point Cloud
Registration | The goal of this paper is to address the problem of *global* point cloud registration (PCR), i.e., finding the optimal alignment between point clouds irrespective of the initial poses of the scans. This problem is notoriously challenging for classical optimization methods due to computational constraints. First, we show that state-of-the-art deep learning methods suffer from huge performance degradation when the point clouds are arbitrarily placed in space. We propose that *equivariant deep learning* should be utilized for solving this task and we characterize the specific type of bi-equivariance of PCR. Then, we design BiEquiFormer, a novel and scalable *bi-equivariant* pipeline, i.e., equivariant to the independent transformations of the input point clouds. While a naive approach would process the point clouds independently, we design expressive bi-equivariant layers that fuse the information from both point clouds. This allows us to extract high-quality superpoint correspondences and, in turn, robust point-cloud registration. Extensive comparisons against state-of-the-art methods show that our method achieves comparable performance in the canonical setting and superior performance in the robust setting in both the 3DMatch and the challenging low-overlap 3DLoMatch dataset. | [
"['Stefanos Pertigkiozoglou' 'Evangelos Chatzipantazis' 'Kostas Daniilidis']"
] |
null | null | 2407.08734 | null | null | http://arxiv.org/pdf/2407.08734v1 | 2024-07-11T17:59:00Z | 2024-07-11T17:59:00Z | Transformer Circuit Faithfulness Metrics are not Robust | Mechanistic interpretability work attempts to reverse engineer the learned algorithms present inside neural networks. One focus of this work has been to discover 'circuits' -- subgraphs of the full model that explain behaviour on specific tasks. But how do we measure the performance of such circuits? Prior work has attempted to measure circuit 'faithfulness' -- the degree to which the circuit replicates the performance of the full model. In this work, we survey many considerations for designing experiments that measure circuit faithfulness by ablating portions of the model's computation. Concerningly, we find existing methods are highly sensitive to seemingly insignificant changes in the ablation methodology. We conclude that existing circuit faithfulness scores reflect both the methodological choices of researchers as well as the actual components of the circuit - the task a circuit is required to perform depends on the ablation used to test it. The ultimate goal of mechanistic interpretability work is to understand neural networks, so we emphasize the need for more clarity in the precise claims being made about circuits. We open source a library at https://github.com/UFO-101/auto-circuit that includes highly efficient implementations of a wide range of ablation methodologies and circuit discovery algorithms. | [
"['Joseph Miller' 'Bilal Chughtai' 'William Saunders']"
] |
null | null | 2407.08737 | null | null | http://arxiv.org/pdf/2407.08737v1 | 2024-07-11T17:59:45Z | 2024-07-11T17:59:45Z | Video Diffusion Alignment via Reward Gradients | We have made significant progress towards building foundational video diffusion models. As these models are trained using large-scale unsupervised data, it has become crucial to adapt these models to specific downstream tasks. Adapting these models via supervised fine-tuning requires collecting target datasets of videos, which is challenging and tedious. In this work, we utilize pre-trained reward models that are learned via preferences on top of powerful vision discriminative models to adapt video diffusion models. These models contain dense gradient information with respect to generated RGB pixels, which is critical to efficient learning in complex search spaces, such as videos. We show that backpropagating gradients from these reward models to a video diffusion model can allow for compute and sample efficient alignment of the video diffusion model. We show results across a variety of reward models and video diffusion models, demonstrating that our approach can learn much more efficiently in terms of reward queries and computation than prior gradient-free approaches. Our code, model weights, and more visualizations are available at https://vader-vid.github.io. | [
"['Mihir Prabhudesai' 'Russell Mendonca' 'Zheyang Qin'\n 'Katerina Fragkiadaki' 'Deepak Pathak']"
] |
null | null | 2407.08742 | null | null | http://arxiv.org/pdf/2407.08742v1 | 2024-05-29T01:23:19Z | 2024-05-29T01:23:19Z | Improved Robustness and Hyperparameter Selection in Modern Hopfield
Networks | The modern Hopfield network generalizes the classical Hopfield network by allowing for sharper interaction functions. This increases the capacity of the network as an autoassociative memory, as nearby learned attractors will not interfere with one another. However, the implementation of the network relies on applying large exponents to the dot product of memory vectors and probe vectors. If the dimension of the data is large, the calculation can be very large and result in problems when using floating point numbers in a practical implementation. We describe this problem in detail, modify the original network description to mitigate the problem, and show the modification will not alter the networks' dynamics during update or training. We also show our modification greatly improves hyperparameter selection for the modern Hopfield network, removing the dependence on the interaction vertex and resulting in an optimal region of hyperparameters that does not significantly change with the interaction vertex as it does in the original network. | [
"['Hayden McAlister' 'Anthony Robins' 'Lech Szymanski']"
] |
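The numerical issue in the entry above (arXiv:2407.08742) comes from raising raw dot products to a large interaction exponent. One standard mitigation, shown below, rescales similarities into [-1, 1] before exponentiation; because this only multiplies every energy term by the same positive constant, update decisions are unchanged. This is an illustrative fix in the spirit of the abstract, not necessarily the paper's exact modification.

```python
import numpy as np

def hopfield_update(memories, probe, n=8):
    """One full update sweep of a modern Hopfield network with interaction
    F(x) = x**n over bipolar (+/-1) vectors. Dividing dot products by the
    dimension d keeps them in [-1, 1], so x**n never overflows floats, and
    sum((m @ s / d)**n) = d**-n * sum((m @ s)**n) preserves all comparisons."""
    d = probe.shape[0]
    new = probe.copy()
    for i in range(d):
        plus, minus = probe.copy(), probe.copy()
        plus[i], minus[i] = 1.0, -1.0
        score_plus = np.sum((memories @ plus / d) ** n)
        score_minus = np.sum((memories @ minus / d) ** n)
        new[i] = 1.0 if score_plus >= score_minus else -1.0
    return new
```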
null | null | 2407.08744 | null | null | http://arxiv.org/pdf/2407.08744v1 | 2024-06-03T15:11:54Z | 2024-06-03T15:11:54Z | Toward Efficient Deep Spiking Neuron Networks: A Survey On Compression | With the rapid development of deep learning, Deep Spiking Neural Networks (DSNNs) have emerged as promising due to their unique spike event processing and asynchronous computation. When deployed on neuromorphic chips, DSNNs offer significant power advantages over Deep Artificial Neural Networks (DANNs) and eliminate time- and energy-consuming multiplications due to the binary nature of spikes (0 or 1). Additionally, DSNNs excel in processing temporal information, making them potentially superior for handling temporal data compared to DANNs. However, their deep network structure and numerous parameters result in high computational costs and energy consumption, limiting real-life deployment. To enhance the efficiency of DSNNs, researchers have adapted methods from DANNs, such as pruning, quantization, and knowledge distillation, and developed specific techniques like reducing spike firing and pruning time steps. While previous surveys have covered DSNN algorithms, hardware deployment, and general overviews, focused research on DSNN compression and efficiency has been lacking. This survey addresses this gap by concentrating on efficient DSNNs and their compression methods. It begins with an exploration of DSNNs' biological background and computational units, highlighting differences from DANNs. It then delves into various compression methods, including pruning, quantization, knowledge distillation, and reducing spike firing, and concludes with suggestions for future research directions. | [
"['Hui Xie' 'Ge Yang' 'Wenjuan Gao']"
] |
null | null | 2407.08746 | null | null | http://arxiv.org/pdf/2407.08746v1 | 2024-06-03T17:03:14Z | 2024-06-03T17:03:14Z | Iteration over event space in time-to-first-spike spiking neural
networks for Twitter bot classification | This study proposes a framework that extends existing time-coding time-to-first-spike spiking neural network (SNN) models to allow processing information changing over time. We explain spike propagation through a model with multiple input and output spikes at each neuron, as well as design training rules for end-to-end backpropagation. This strategy enables us to process information changing over time. The model is trained and evaluated on a Twitter bot detection task where the time of events (tweets and retweets) is the primary carrier of information. This task was chosen to evaluate how the proposed SNN deals with spike train data composed of hundreds of events occurring at timescales differing by almost five orders of magnitude. The impact of various parameters on model properties, performance and training-time stability is analyzed. | [
"['Mateusz Pabian' 'Dominik Rzepka' 'Mirosław Pawlak']"
] |
null | null | 2407.08750 | null | null | http://arxiv.org/pdf/2407.08750v1 | 2024-06-26T16:04:49Z | 2024-06-26T16:04:49Z | ROLCH: Regularized Online Learning for Conditional Heteroskedasticity | Large-scale streaming data are common in modern machine learning applications and have led to the development of online learning algorithms. Many fields, such as supply chain management, weather and meteorology, energy markets, and finance, have pivoted towards using probabilistic forecasts, which yields the need not only for accurate learning of the expected value but also for learning the conditional heteroskedasticity. Against this backdrop, we present a methodology for online estimation of regularized linear distributional models for conditional heteroskedasticity. The proposed algorithm is based on a combination of recent developments for the online estimation of LASSO models and the well-known GAMLSS framework. We provide a case study on day-ahead electricity price forecasting, in which we show the competitive performance of the adaptive estimation combined with strongly reduced computational effort. Our algorithms are implemented in a computationally efficient Python package. | [
"['Simon Hirsch' 'Jonathan Berrisch' 'Florian Ziel']"
] |
null | null | 2407.08751 | null | null | http://arxiv.org/pdf/2407.08751v1 | 2024-06-27T13:47:06Z | 2024-06-27T13:47:06Z | Latent Diffusion for Neural Spiking Data | Modern datasets in neuroscience enable unprecedented inquiries into the relationship between complex behaviors and the activity of many simultaneously recorded neurons. While latent variable models can successfully extract low-dimensional embeddings from such recordings, using them to generate realistic spiking data, especially in a behavior-dependent manner, still poses a challenge. Here, we present Latent Diffusion for Neural Spiking data (LDNS), a diffusion-based generative model with a low-dimensional latent space: LDNS employs an autoencoder with structured state-space (S4) layers to project discrete high-dimensional spiking data into continuous time-aligned latents. On these inferred latents, we train expressive (conditional) diffusion models, enabling us to sample neural activity with realistic single-neuron and population spiking statistics. We validate LDNS on synthetic data, accurately recovering latent structure, firing rates, and spiking statistics. Next, we demonstrate its flexibility by generating variable-length data that mimics human cortical activity during attempted speech. We show how to equip LDNS with an expressive observation model that accounts for single-neuron dynamics not mediated by the latent state, further increasing the realism of generated samples. Finally, conditional LDNS trained on motor cortical activity during diverse reaching behaviors can generate realistic spiking data given reach direction or unseen reach trajectories. In summary, LDNS simultaneously enables inference of low-dimensional latents and realistic conditional generation of neural spiking datasets, opening up further possibilities for simulating experimentally testable hypotheses. | [
"['Jaivardhan Kapoor' 'Auguste Schulz' 'Julius Vetter' 'Felix Pei'\n 'Richard Gao' 'Jakob H. Macke']"
] |
null | null | 2407.08758 | null | null | http://arxiv.org/pdf/2407.08758v1 | 2024-03-08T21:22:05Z | 2024-03-08T21:22:05Z | Credit Card Fraud Detection in the Nigerian Financial Sector: A
Comparison of Unsupervised TensorFlow-Based Anomaly Detection Techniques,
Autoencoders and PCA Algorithm | Credit card fraud is a major cause of national concern in the Nigerian financial sector, affecting hundreds of transactions per second and impacting international e-commerce negatively. Despite the rapid spread and adoption of online marketing, millions of Nigerians are prevented from transacting in several countries with local credit cards due to bans and policies directed at restricting credit card fraud. Presently, a myriad of technologies exist to detect fraudulent transactions, a few of which are adopted by Nigerian financial institutions to proactively manage the situation. Fraud detection allows institutions to restrict offenders from networks and, with a centralized banking identity management system such as the Bank Verification Number used by the Central Bank of Nigeria, offenders who may have stolen other identities can be backtraced and their bank accounts frozen. This paper aims to compare the effectiveness of two fraud detection technologies that are projected to work fully independent of human intervention to possibly predict and detect fraudulent credit card transactions. Autoencoders, as an unsupervised TensorFlow-based anomaly detection technique, generally offer greater performance in dimensionality reduction than Principal Component Analysis, and this theory was tested out on Nigerian credit card transaction data. Results demonstrate that autoencoders are better suited to analyzing complex and extensive datasets and offer more reliable results with minimal mislabeling than the PCA algorithm. | [
"['Jennifer Onyeama']"
] |
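The autoencoder side of the comparison above (arXiv:2407.08758) follows the usual reconstruction-error recipe: train on presumed-legitimate transactions, then flag high-error samples. A minimal TensorFlow sketch; the layer sizes, 30-feature input, and 99th-percentile threshold are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

def build_autoencoder(n_features, latent_dim=4):
    """Small dense autoencoder; anomalies are scored by reconstruction error."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_features,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(latent_dim, activation="relu"),   # bottleneck
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(n_features, activation="linear"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# ae = build_autoencoder(n_features=30)
# ae.fit(X_normal, X_normal, epochs=20, batch_size=256)
# errors = np.mean((ae.predict(X_test) - X_test) ** 2, axis=1)
# flagged = errors > np.quantile(errors, 0.99)   # threshold is a design choice
```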
null | null | 2407.08762 | null | null | http://arxiv.org/pdf/2407.08762v1 | 2024-07-09T19:31:49Z | 2024-07-09T19:31:49Z | Commute-Time-Optimised Graphs for GNNs | We explore graph rewiring methods that optimise commute time. Recent graph rewiring approaches facilitate long-range interactions in sparse graphs, making such rewirings commute-time-optimal *on average*. However, when an expert prior exists on which node pairs should or should not interact, a superior rewiring would favour short commute times between these privileged node pairs. We construct two synthetic datasets with known priors reflecting realistic settings, and use these to motivate two bespoke rewiring methods that incorporate the known prior. We investigate the regimes where our rewiring improves test performance on the synthetic datasets. Finally, we perform a case study on a real-world citation graph to investigate the practical implications of our work. | [
"['Igor Sterner' 'Shiye Su' 'Petar Veličković']"
] |
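Commute time, the quantity the rewiring methods above (arXiv:2407.08762) optimise, has a closed form via the pseudoinverse of the graph Laplacian. A small NumPy helper for the standard identity C(u, v) = vol(G) (L+_uu + L+_vv - 2 L+_uv); the entry's rewiring strategies themselves are not reproduced here.

```python
import numpy as np

def commute_times(A):
    """All-pairs commute times from a symmetric weighted adjacency matrix A,
    using C(u, v) = vol(G) * (L+_uu + L+_vv - 2 * L+_uv), where L+ is the
    Moore-Penrose pseudoinverse of the Laplacian and vol(G) = sum of degrees."""
    deg = A.sum(axis=1)
    Lp = np.linalg.pinv(np.diag(deg) - A)      # Laplacian pseudoinverse
    d = np.diag(Lp)
    return deg.sum() * (d[:, None] + d[None, :] - 2 * Lp)

# a prior-aware rewiring can be checked directly: add an edge between two
# privileged nodes and recompute to confirm their commute time drops.
```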
null | null | 2407.08765 | null | null | http://arxiv.org/pdf/2407.08765v1 | 2024-07-11T05:25:45Z | 2024-07-11T05:25:45Z | Approximating G(t)/GI/1 queues with deep learning | In this paper, we apply a supervised machine-learning approach to solve a fundamental problem in queueing theory: estimating the transient distribution of the number in the system for a G(t)/GI/1 queue. We develop a neural network mechanism that provides a fast and accurate predictor of these distributions for moderate horizon lengths and practical settings. It uses a Recurrent Neural Network (RNN) architecture fed with the first several moments of the time-dependent inter-arrival and the stationary service time distributions; we call it the Moment-Based RNN (MBRNN) method. Our empirical study suggests the MBRNN requires only the first four inter-arrival and service time moments. We use simulation to generate a substantial training dataset and present a thorough performance evaluation to examine the accuracy of our method using two different test sets. We show that even under the configuration with the worst performance errors, the mean number of customers over the entire timeline has an error of less than 3%. While simulation modeling can achieve high accuracy, the advantage of the MBRNN over simulation is runtime: the MBRNN analyzes hundreds of systems within a fraction of a second. This paper focuses on the G(t)/GI/1 queue; however, the MBRNN approach demonstrated here can be extended to other queueing systems, as the training data labeling is based on simulations (which can be applied to more complex systems) and the training is based on deep learning, which can capture very complex time sequence tasks. In summary, the MBRNN can potentially revolutionize our ability to perform transient analyses of queueing systems. | [
"['Eliran Sherzer' 'Opher Baron' 'Dmitry Krass' 'Yehezkel Resheff']"
] |
null | null | 2407.08797 | null | null | http://arxiv.org/pdf/2407.08797v1 | 2024-07-11T18:13:38Z | 2024-07-11T18:13:38Z | Deep Inverse Design for High-Level Synthesis | High-level synthesis (HLS) has significantly advanced the automation of digital circuits design, yet the need for expertise and time in pragma tuning remains challenging. Existing solutions for the design space exploration (DSE) adopt either heuristic methods, lacking essential information for further optimization potential, or predictive models, missing sufficient generalization due to the time-consuming nature of HLS and the exponential growth of the design space. To address these challenges, we propose Deep Inverse Design for HLS (DID4HLS), a novel approach that integrates graph neural networks and generative models. DID4HLS iteratively optimizes hardware designs aimed at compute-intensive algorithms by learning conditional distributions of design features from post-HLS data. Compared to four state-of-the-art DSE baselines, our method achieved an average improvement of 42.5% on average distance to reference set (ADRS) compared to the best-performing baselines across six benchmarks, while demonstrating high robustness and efficiency. | [
"['Ping Chang' 'Tosiron Adegbija' 'Yuchao Liao' 'Claudio Talarico' 'Ao Li'\n 'Janet Roveda']"
] |
null | null | 2407.08800 | null | null | http://arxiv.org/pdf/2407.08800v1 | 2024-07-11T18:18:32Z | 2024-07-11T18:18:32Z | Local Clustering for Lung Cancer Image Classification via Sparse
Solution Technique | In this work, we propose to use a local clustering approach based on the sparse solution technique to study medical images, especially the lung cancer image classification task. We view images as the vertices in a weighted graph and the similarity between a pair of images as the edges in the graph. The vertices within the same cluster can be assumed to share similar features and properties, thus making the applications of graph clustering techniques very useful for image classification. Recently, the approach based on the sparse solutions of linear systems for graph clustering has been found to identify clusters more efficiently than traditional clustering methods such as spectral clustering. We propose to use the two newly developed local clustering methods based on sparse solutions of linear systems for image classification. In addition, we employ a box spline-based tight-wavelet-framelet method to clean these images and help build a better adjacency matrix before clustering. The performance of our methods is shown to be very effective in classifying images. Our approach is significantly more efficient than other state-of-the-art approaches while performing favorably or on par with them. Finally, we remark on two image deformation methods that can generate additional artificial image data to increase the number of labeled images. | [
"['Jackson Hamel' 'Ming-Jun Lai' 'Zhaiming Shen' 'Ye Tian']"
] |
null | null | 2407.08803 | null | null | http://arxiv.org/pdf/2407.08803v1 | 2024-07-11T18:23:46Z | 2024-07-11T18:23:46Z | PID Accelerated Temporal Difference Algorithms | Long-horizon tasks, which have a large discount factor, pose a challenge for most conventional reinforcement learning (RL) algorithms. Algorithms such as Value Iteration and Temporal Difference (TD) learning have a slow convergence rate and become inefficient in these tasks. When the transition distributions are given, PID VI was recently introduced to accelerate the convergence of Value Iteration using ideas from control theory. Inspired by this, we introduce PID TD Learning and PID Q-Learning algorithms for the RL setting in which only samples from the environment are available. We give theoretical analysis of their convergence and acceleration compared to their traditional counterparts. We also introduce a method for adapting PID gains in the presence of noise and empirically verify its effectiveness. | [
"['Mark Bedaywi' 'Amin Rakhsha' 'Amir-massoud Farahmand']"
] |
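The PID TD update sketched below is patterned on the abstract above (arXiv:2407.08806) and on the PID Value Iteration idea it cites: the P term is the usual TD error, the I term an exponentially averaged error history, and the D term the recent change in the value estimate. The gain placement and averaging scheme here are assumptions for illustration; setting kp=1, ki=kd=0 recovers vanilla TD(0).

```python
import numpy as np

def pid_td_step(V, z, V_prev, s, r, s_next, gamma, alpha,
                kp=1.0, ki=0.05, kd=0.1, beta=0.95):
    """One tabular PID-accelerated TD(0) update for a transition (s, r, s_next).
    V: value table, z: integral (smoothed error) table, V_prev: last values."""
    delta = r + gamma * V[s_next] - V[s]       # P: the TD error itself
    z[s] = beta * z[s] + (1.0 - beta) * delta  # I: running average of errors
    deriv = V[s] - V_prev[s]                   # D: change in the value estimate
    V_prev[s] = V[s]
    V[s] += alpha * (kp * delta + ki * z[s] + kd * deriv)
    return V, z, V_prev
```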
null | null | 2407.08806 | null | null | http://arxiv.org/pdf/2407.08806v1 | 2024-07-11T18:30:01Z | 2024-07-11T18:30:01Z | HO-FMN: Hyperparameter Optimization for Fast Minimum-Norm Attacks | Gradient-based attacks are a primary tool to evaluate robustness of machine-learning models. However, many attacks tend to provide overly-optimistic evaluations as they use fixed loss functions, optimizers, step-size schedulers, and default hyperparameters. In this work, we tackle these limitations by proposing a parametric variation of the well-known fast minimum-norm attack algorithm, whose loss, optimizer, step-size scheduler, and hyperparameters can be dynamically adjusted. We re-evaluate 12 robust models, showing that our attack finds smaller adversarial perturbations without requiring any additional tuning. This also enables reporting adversarial robustness as a function of the perturbation budget, providing a more complete evaluation than that offered by fixed-budget attacks, while remaining efficient. We release our open-source code at https://github.com/pralab/HO-FMN. | [
"['Raffaele Mura' 'Giuseppe Floris' 'Luca Scionis' 'Giorgio Piras'\n 'Maura Pintor' 'Ambra Demontis' 'Giorgio Giacinto' 'Battista Biggio'\n 'Fabio Roli']"
] |
null | null | 2407.08824 | null | null | http://arxiv.org/pdf/2407.08824v1 | 2024-07-11T19:13:16Z | 2024-07-11T19:13:16Z | Proving that Cryptic Crossword Clue Answers are Correct | Cryptic crossword clues are challenging cognitive tasks, for which new test sets are released on a daily basis by multiple international newspapers. Each cryptic clue contains both the definition of the answer to be placed in the crossword grid (in common with regular crosswords), and 'wordplay' that proves that the answer is correct (i.e. a human solver can be confident that an answer is correct without needing crossing words to confirm it). Using an existing cryptic wordplay proving framework (operating on Python proofs created by an LLM), we show that it is possible to distinguish between correct answers and almost-correct ones based upon whether the wordplay 'works'. | [
"['Martin Andrews' 'Sam Witteveen']"
] |
null | null | 2407.08838 | null | null | http://arxiv.org/pdf/2407.08838v1 | 2024-07-11T19:47:37Z | 2024-07-11T19:47:37Z | Deep Learning for Network Anomaly Detection under Data Contamination:
Evaluating Robustness and Mitigating Performance Degradation | Deep learning (DL) has emerged as a crucial tool in network anomaly detection (NAD) for cybersecurity. While DL models for anomaly detection excel at extracting features and learning patterns from data, they are vulnerable to data contamination -- the inadvertent inclusion of attack-related data in training sets presumed benign. This study evaluates the robustness of six unsupervised DL algorithms against data contamination using our proposed evaluation protocol. Results demonstrate significant performance degradation in state-of-the-art anomaly detection algorithms when exposed to contaminated data, highlighting the critical need for self-protection mechanisms in DL-based NAD models. To mitigate this vulnerability, we propose an enhanced auto-encoder with a constrained latent representation, allowing normal data to cluster more densely around a learnable center in the latent space. Our evaluation reveals that this approach exhibits improved resistance to data contamination compared to existing methods, offering a promising direction for more robust NAD systems. | [
"[\"D'Jeff K. Nkashama\" 'Jordan Masakuna Félicien' 'Arian Soltani'\n 'Jean-Charles Verdier' 'Pierre-Martin Tardif' 'Marc Frappier'\n 'Froduald Kabanza']"
] |
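The proposed mitigation lends itself to a short sketch. Below is a minimal PyTorch auto-encoder whose loss adds a pull toward a learnable center in the latent space, so normal data clusters densely around it; the layer sizes, the weight lam, and the random stand-in data are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ConstrainedAE(nn.Module):
    def __init__(self, in_dim=64, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                 nn.Linear(32, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                 nn.Linear(32, in_dim))
        self.center = nn.Parameter(torch.zeros(latent_dim))  # learnable center

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

model = ConstrainedAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.1  # weight of the latent-compactness term (assumption)

x = torch.randn(256, 64)  # stand-in for presumed-benign network traffic
for _ in range(100):
    x_hat, z = model(x)
    recon = ((x_hat - x) ** 2).mean()                      # reconstruction loss
    compact = ((z - model.center) ** 2).sum(dim=1).mean()  # pull toward center
    loss = recon + lam * compact
    opt.zero_grad(); loss.backward(); opt.step()

# At test time, a large latent distance to the center flags an anomaly.
```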
null | null | 2407.08839 | null | null | http://arxiv.org/pdf/2407.08839v1 | 2024-07-11T19:51:48Z | 2024-07-11T19:51:48Z | A Survey on the Application of Generative Adversarial Networks in
Cybersecurity: Prospective, Direction and Open Research Scopes | With the proliferation of Artificial Intelligence, there has been a massive increase in the amount of data required to be accumulated and disseminated digitally. As the data are available online in digital landscapes with complex and sophisticated infrastructures, it is crucial to implement various defense mechanisms based on cybersecurity. Generative Adversarial Networks (GANs), which are deep learning models, have emerged as powerful solutions for addressing the constantly changing security issues. This survey studies the significance of deep learning models, specifically GANs, in strengthening cybersecurity defenses. Our survey aims to explore the various applications of GANs in areas such as Intrusion Detection Systems (IDS), Mobile and Network Trespass, BotNet Detection, and Malware Detection. The focus is to examine how GANs can be influential tools to strengthen cybersecurity defenses in these domains. Further, the paper discusses the challenges and constraints of using GANs in these areas and suggests future research directions. Overall, the paper highlights the potential of GANs in enhancing cybersecurity measures and addresses the need for further exploration in this field. | [
"['Md Mashrur Arifin' 'Md Shoaib Ahmed' 'Tanmai Kumar Ghosh' 'Jun Zhuang'\n 'Jyh-haw Yeh']"
] |
null | null | 2407.08840 | null | null | http://arxiv.org/pdf/2407.08840v1 | 2024-07-11T19:55:21Z | 2024-07-11T19:55:21Z | Data-driven Model Reduction for Soft Robots via Lagrangian Operator
Inference | Data-driven model reduction methods provide a nonintrusive way of constructing computationally efficient surrogates of high-fidelity models for real-time control of soft robots. This work leverages the Lagrangian nature of the model equations to derive structure-preserving linear reduced-order models via Lagrangian Operator Inference and compares their performance with prominent linear model reduction techniques through an anguilliform swimming soft robot model example with 231,336 degrees of freedom. The case studies demonstrate that preserving the underlying Lagrangian structure leads to learned models with higher predictive accuracy and robustness to unseen inputs. | [
"['Harsh Sharma' 'Iman Adibnazari' 'Jacobo Cervera-Torralba'\n 'Michael T. Tolley' 'Boris Kramer']"
] |
null | null | 2407.08843 | null | null | http://arxiv.org/pdf/2407.08843v1 | 2024-07-11T19:58:19Z | 2024-07-11T19:58:19Z | Inflationary Flows: Calibrated Bayesian Inference with Diffusion-Based
Models | Beyond estimating parameters of interest from data, one of the key goals of statistical inference is to properly quantify uncertainty in these estimates. In Bayesian inference, this uncertainty is provided by the posterior distribution, the computation of which typically involves an intractable high-dimensional integral. Among available approximation methods, sampling-based approaches come with strong theoretical guarantees but scale poorly to large problems, while variational approaches scale well but offer few theoretical guarantees. In particular, variational methods are known to produce overconfident estimates of posterior uncertainty and are typically non-identifiable, with many latent variable configurations generating equivalent predictions. Here, we address these challenges by showing how diffusion-based models (DBMs), which have recently produced state-of-the-art performance in generative modeling tasks, can be repurposed for performing calibrated, identifiable Bayesian inference. By exploiting a previously established connection between the stochastic and probability flow ordinary differential equations (pfODEs) underlying DBMs, we derive a class of models, inflationary flows, that uniquely and deterministically map high-dimensional data to a lower-dimensional Gaussian distribution via ODE integration. This map is both invertible and neighborhood-preserving, with controllable numerical error, with the result that uncertainties in the data are correctly propagated to the latent space. We demonstrate how such maps can be learned via standard DBM training using a novel noise schedule and are effective at both preserving and reducing intrinsic data dimensionality. The result is a class of highly expressive generative models, uniquely defined on a low-dimensional latent space, that afford principled Bayesian inference. | [
"['Daniela de Albuquerque' 'John Pearson']"
] |
null | null | 2407.08868 | null | null | http://arxiv.org/pdf/2407.08868v2 | 2024-07-15T16:47:42Z | 2024-07-11T21:10:03Z | Generalizable Physics-Informed Learning for Stochastic Safety-Critical
Systems | Accurate estimation of long-term risk is critical for safe decision-making, but sampling from rare risk events and long-term trajectories can be prohibitively costly. Risk gradients can be used in many first-order techniques for learning and control, but gradient estimates are difficult to obtain using Monte Carlo (MC) methods because the infinitesimal divisor may significantly amplify sampling noise. Motivated by this gap, we propose an efficient method to evaluate long-term risk probabilities and their gradients using short-term samples without sufficient risk events. We first derive that four types of long-term risk probability are solutions of certain partial differential equations (PDEs). Then, we propose a physics-informed learning technique that integrates data and physics information (the aforementioned PDEs). The physics information helps propagate information beyond the available data and provides provable generalization guarantees, which in turn enables long-term risk to be estimated using short-term samples of safe events. Finally, we demonstrate in simulation that the proposed technique has improved sample efficiency, generalizes well to unseen regions, and adapts to changing system parameters. | [
"['Zhuoyuan Wang' 'Albert Chern' 'Yorie Nakahira']"
] |
null | null | 2407.08886 | null | null | http://arxiv.org/pdf/2407.08886v1 | 2024-07-11T22:42:53Z | 2024-07-11T22:42:53Z | Semi-Supervised Multi-Task Learning Based Framework for Power System
Security Assessment | This paper develops a novel machine learning-based framework using Semi-Supervised Multi-Task Learning (SS-MTL) for power system dynamic security assessment that is accurate, reliable, and aware of topological changes. The learning algorithm underlying the proposed framework integrates conditional masked encoders and employs multi-task learning for classification-aware feature representation, which improves the accuracy and scalability to larger systems. Additionally, this framework incorporates a confidence measure for its predictions, enhancing its reliability and interpretability. A topological similarity index has also been incorporated to add topological awareness to the framework. Various experiments on the IEEE 68-bus system were conducted to validate the proposed method, employing two distinct database generation techniques to generate the required data to train the machine learning algorithm. The results demonstrate that our algorithm outperforms existing state-of-the-art machine learning based techniques for security assessment in terms of accuracy and robustness. Finally, our work underscores the value of employing auto-encoders for security assessment, highlighting improvements in accuracy, reliability, and robustness. All datasets and codes used have been made publicly available to ensure reproducibility and transparency. | [
"[\"Muhy Eddin Za'ter\" 'Amirhossein Sajadi' 'Bri-Mathias Hodge']"
] |
null | null | 2407.08887 | null | null | http://arxiv.org/pdf/2407.08887v1 | 2024-07-11T22:46:18Z | 2024-07-11T22:46:18Z | Automatic Pruning of Fine-tuning Datasets for Transformer-based Language
Models | Transformer-based language models have shown state-of-the-art performance on a variety of natural language understanding tasks. To achieve this performance, these models are first pre-trained on a general corpus and then fine-tuned on downstream tasks. Previous work studied the effect of pruning the training set of the downstream tasks on the performance of the model on its evaluation set. In this work, we propose an automatic dataset pruning method for the training set of fine-tuning tasks. Our method is based on the model's success rate in correctly classifying each training data point. Unlike previous work which relies on user feedback to determine subset size, our method automatically extracts training subsets that are adapted for each pair of model and fine-tuning task. Our method provides multiple subsets for use in dataset pruning that navigate the trade-off between subset size and evaluation accuracy. Our largest subset, which we also refer to as the winning ticket subset, is on average $3\times$ smaller than the original training set of the fine-tuning task. Our experiments on 5 downstream tasks and 2 language models show that, on average, fine-tuning on the winning ticket subsets results in a $0.1\%$ increase in the evaluation performance of the model. | [
"['Mohammadreza Tayaranian' 'Seyyed Hasan Mozafari' 'Brett H. Meyer'\n 'James J. Clark' 'Warren J. Gross']"
] |
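The scoring signal described above can be sketched in a few lines. One plausible reading, which is an assumption here, is that examples the model already classifies correctly in almost every epoch are redundant and can be pruned first; the thresholds below are illustrative, whereas the paper derives its subsets automatically.

```python
import numpy as np

def subsets_by_success_rate(success: np.ndarray, thresholds=(1.0, 0.9, 0.5)):
    """success[i, e] = 1 iff training example i was classified correctly
    at epoch e. Returns nested candidate subsets, one per threshold."""
    rates = success.mean(axis=1)          # per-example success rate
    return {t: np.where(rates < t)[0] for t in thresholds}

# Toy usage: 1000 examples tracked over 5 fine-tuning epochs.
rng = np.random.default_rng(0)
success = rng.integers(0, 2, size=(1000, 5))
for t, idx in subsets_by_success_rate(success).items():
    print(f"threshold {t}: keep {idx.size} of 1000 examples")
```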
null | null | 2407.08888 | null | null | http://arxiv.org/pdf/2407.08888v1 | 2024-07-11T23:04:16Z | 2024-07-11T23:04:16Z | Uncovering Semantics and Topics Utilized by Threat Actors to Deliver
Malicious Attachments and URLs | Recent threat reports highlight that email remains the top vector for delivering malware to endpoints. Despite these statistics, detecting malicious email attachments and URLs often neglects semantic cues, linguistic features, and contextual clues. Our study employs BERTopic unsupervised topic modeling to identify common semantics and themes embedded in emails used to deliver malicious attachments and call-to-action URLs. We preprocess emails by extracting and sanitizing content and employ multilingual embedding models like BGE-M3 for dense representations, which clustering algorithms (HDBSCAN and OPTICS) use to group emails by semantic similarity. Phi3-Mini-4K-Instruct facilitates semantic analysis, and hLDA aids in thematic analysis to understand threat actor patterns. Our research evaluates and compares different clustering algorithms on topic quantity, coherence, and diversity metrics, concluding with insights into the semantics and topics commonly used by threat actors to deliver malicious attachments and URLs, a significant contribution to the field of threat detection. | [
"['Andrey Yakymovych' 'Abhishek Singh']"
] |
null | null | 2407.08890 | null | null | http://arxiv.org/pdf/2407.08890v1 | 2024-07-11T23:16:44Z | 2024-07-11T23:16:44Z | DeepCodeProbe: Towards Understanding What Models Trained on Code Learn | Machine learning models trained on code and related artifacts offer valuable support for software maintenance but suffer from interpretability issues due to their complex internal variables. These concerns are particularly significant in safety-critical applications where the models' decision-making processes must be reliable. The specific features and representations learned by these models remain unclear, adding to the hesitancy in adopting them widely. To address these challenges, we introduce DeepCodeProbe, a probing approach that examines the syntax and representation learning abilities of ML models designed for software maintenance tasks. Our study applies DeepCodeProbe to state-of-the-art models for code clone detection, code summarization, and comment generation. Findings reveal that while small models capture abstract syntactic representations, their ability to fully grasp programming language syntax is limited. Increasing model capacity improves syntax learning but introduces trade-offs such as increased training time and overfitting. DeepCodeProbe also identifies specific code patterns the models learn from their training data. Additionally, we provide best practices for training models on code to enhance performance and interpretability, supported by an open-source replication package for broader application of DeepCodeProbe in interpreting other code-related models. | [
"['Vahid Majdinasab' 'Amin Nikanjam' 'Foutse Khomh']"
] |
null | null | 2407.08892 | null | null | http://arxiv.org/pdf/2407.08892v1 | 2024-07-11T23:34:32Z | 2024-07-11T23:34:32Z | Characterizing Prompt Compression Methods for Long Context Inference | Long context inference presents challenges at the system level with increased compute and memory requirements, as well as from an accuracy perspective in being able to reason over long contexts. Recently, several methods have been proposed to compress the prompt to reduce the context length. However, there has been little work on comparing the different proposed methods across different tasks through a standardized analysis. This has led to conflicting results. To address this, here we perform a comprehensive characterization and evaluation of different prompt compression methods. In particular, we analyze extractive compression, summarization-based abstractive compression, and token pruning methods. Surprisingly, we find that extractive compression often outperforms all the other approaches, and enables up to 10x compression with minimal accuracy degradation. Interestingly, we also find that despite several recent claims, token pruning methods often lag behind extractive compression. We only found marginal improvements on summarization tasks. | [
"['Siddharth Jha' 'Lutfi Eren Erdogan' 'Sehoon Kim' 'Kurt Keutzer'\n 'Amir Gholami']"
] |
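To make the strongest family concrete, here is a toy extractive compressor: it keeps the context sentences most similar to the question and drops the rest. TF-IDF cosine similarity is an illustrative stand-in for the learned relevance models that real extractive methods use.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def compress(context_sentences, question, keep=2):
    vec = TfidfVectorizer().fit(context_sentences + [question])
    scores = cosine_similarity(vec.transform(context_sentences),
                               vec.transform([question])).ravel()
    top = sorted(scores.argsort()[-keep:])   # keep original sentence order
    return " ".join(context_sentences[i] for i in top)

ctx = ["The meeting was moved to Friday.",
       "Budget figures were finalized in March.",
       "Attendees should bring the Q1 report.",
       "Lunch will be provided after the session."]
print(compress(ctx, "When is the meeting and what should attendees bring?"))
```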
null | null | 2407.08898 | null | null | http://arxiv.org/pdf/2407.08898v1 | 2024-07-12T00:07:43Z | 2024-07-12T00:07:43Z | IDAT: A Multi-Modal Dataset and Toolkit for Building and Evaluating
Interactive Task-Solving Agents | Seamless interaction between AI agents and humans using natural language remains a key goal in AI research. This paper addresses the challenges of developing interactive agents capable of understanding and executing grounded natural language instructions through the IGLU competition at NeurIPS. Despite advancements, challenges such as a scarcity of appropriate datasets and the need for effective evaluation platforms persist. We introduce a scalable data collection tool for gathering interactive grounded language instructions within a Minecraft-like environment, resulting in a Multi-Modal dataset with around 9,000 utterances and over 1,000 clarification questions. Additionally, we present a Human-in-the-Loop interactive evaluation platform for qualitative analysis and comparison of agent performance through multi-turn communication with human annotators. We offer to the community these assets referred to as IDAT (IGLU Dataset And Toolkit) which aim to advance the development of intelligent, interactive AI agents and provide essential resources for further research. | [
"['Shrestha Mohanty' 'Negar Arabzadeh' 'Andrea Tupini' 'Yuxuan Sun'\n 'Alexey Skrynnik' 'Artem Zholus' 'Marc-Alexandre Côté' 'Julia Kiseleva']"
] |
null | null | 2407.08910 | null | null | http://arxiv.org/abs/2407.08910v1 | 2024-07-12T01:06:01Z | 2024-07-12T01:06:01Z | PAIL: Performance based Adversarial Imitation Learning Engine for Carbon
Neutral Optimization | Achieving carbon neutrality within industrial operations has become increasingly imperative for sustainable development. It is both a significant challenge and a key opportunity for operational optimization in Industry 4.0. In recent years, Deep Reinforcement Learning (DRL) based methods offer promising enhancements for sequential optimization processes and can be used for reducing carbon emissions. However, existing DRL methods need a pre-defined reward function to assess the impact of each action on the final sustainable development goals (SDG). In many real applications, such a reward function cannot be given in advance. To address the problem, this study proposes a Performance based Adversarial Imitation Learning (PAIL) engine. It is a novel method to acquire optimal operational policies for carbon neutrality without any pre-defined action rewards. Specifically, PAIL employs a Transformer-based policy generator to encode historical information and predict the following actions within a multi-dimensional space. The entire action sequence is iteratively updated by an environmental simulator. Then PAIL uses a discriminator to minimize the discrepancy between generated sequences and real-world samples of high SDG. In parallel, a Q-learning framework based performance estimator is designed to estimate the impact of each action on SDG. Based on these estimations, PAIL refines generated policies with the rewards from both the discriminator and the performance estimator. PAIL is evaluated on multiple real-world application cases and datasets. The experiment results demonstrate the effectiveness of PAIL compared to other state-of-the-art baselines. In addition, PAIL offers meaningful interpretability for optimization toward carbon neutrality. | [
"['Yuyang Ye' 'Lu-An Tang' 'Haoyu Wang' 'Runlong Yu' 'Wenchao Yu' 'Erhu He'\n 'Haifeng Chen' 'Hui Xiong']"
] |
null | null | 2407.08916 | null | null | http://arxiv.org/pdf/2407.08916v1 | 2024-07-12T01:26:33Z | 2024-07-12T01:26:33Z | Transforming Movie Recommendations with Advanced Machine Learning: A
Study of NMF, SVD, and K-Means Clustering | This study develops a robust movie recommendation system using various machine learning techniques, including Non-Negative Matrix Factorization (NMF), Truncated Singular Value Decomposition (SVD), and K-Means clustering. The primary objective is to enhance user experience by providing personalized movie recommendations. The research encompasses data preprocessing, model training, and evaluation, highlighting the efficacy of the employed methods. Results indicate that the proposed system achieves high accuracy and relevance in recommendations, making significant contributions to the field of recommender systems. | [
"['Yubing Yan' 'Camille Moreau' 'Zhuoyue Wang' 'Wenhan Fan' 'Chengqian Fu']"
] |
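The three techniques the study compares are all available in scikit-learn, so a compact sketch is possible. The toy rating matrix, component counts, and cluster count below are assumptions; a real system would also mask unrated entries when fitting.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF, TruncatedSVD

rng = np.random.default_rng(0)
R = rng.integers(0, 6, size=(20, 15)).astype(float)  # users x movies, 0 = unrated

# NMF: nonnegative user/item factors; predicted ratings come from W @ H.
nmf = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
R_nmf = nmf.fit_transform(R) @ nmf.components_

# Truncated SVD: low-rank reconstruction of the rating matrix.
svd = TruncatedSVD(n_components=5, random_state=0)
R_svd = svd.fit_transform(R) @ svd.components_

# K-Means: group users with similar taste; recommend what a cluster likes.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(R)
cluster_mean = np.vstack([R[labels == k].mean(axis=0) for k in range(4)])

user = 0
print("top movies for user 0 (NMF):", np.argsort(-R_nmf[user])[:3])
print("top movies for user 0 (SVD):", np.argsort(-R_svd[user])[:3])
print("top movies for user 0 (cluster):", np.argsort(-cluster_mean[labels[user]])[:3])
```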
null | null | 2407.08922 | null | null | http://arxiv.org/pdf/2407.08922v1 | 2024-07-12T02:05:59Z | 2024-07-12T02:05:59Z | Leveraging large language models for nano synthesis mechanism
explanation: solid foundations or mere conjectures? | With the rapid development of artificial intelligence (AI), large language models (LLMs) such as GPT-4 have garnered significant attention in the scientific community, demonstrating great potential in advancing scientific discovery. This progress raises a critical question: are these LLMs well-aligned with real-world physicochemical principles? Current evaluation strategies largely emphasize fact-based knowledge, such as material property prediction or name recognition, but they often lack an understanding of fundamental physicochemical mechanisms that require logical reasoning. To bridge this gap, our study developed a benchmark consisting of 775 multiple-choice questions focusing on the mechanisms of gold nanoparticle synthesis. By reflecting on existing evaluation metrics, we question whether a direct true-or-false assessment merely suggests conjecture. Hence, we propose a novel evaluation metric, the confidence-based score (c-score), which probes the output logits to derive the precise probability for the correct answer. Based on extensive experiments, our results show that in the context of gold nanoparticle synthesis, LLMs understand the underlying physicochemical mechanisms rather than relying on conjecture. This study underscores the potential of LLMs to grasp intrinsic scientific mechanisms and sets the stage for developing more reliable and effective AI tools across various scientific domains. | [
"['Yingming Pu' 'Liping Huang' 'Tao Lin' 'Hongyu Chen']"
] |
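The proposed c-score can be sketched directly from its description: softmax the logits the model assigns to the answer options and read off the probability mass on the correct one, rather than scoring the argmax as simply right or wrong. The paper's exact normalization may differ; this is the plain reading.

```python
import numpy as np

def c_score(option_logits: np.ndarray, correct_idx: int) -> float:
    z = option_logits - option_logits.max()   # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()           # softmax over the options
    return float(p[correct_idx])

# Toy: logits for options A-D on one question; B (index 1) is correct.
print(c_score(np.array([1.2, 3.4, 0.3, -0.5]), correct_idx=1))  # ~0.85
```

Averaging this per-question probability over the benchmark yields a graded score that separates confident understanding from lucky guesses.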
null | null | 2407.08933 | null | null | http://arxiv.org/pdf/2407.08933v1 | 2024-07-12T02:34:54Z | 2024-07-12T02:34:54Z | Machine Learning in High Volume Media Manufacturing | Errors or failures in a high-volume manufacturing environment can have a significant impact, resulting in the loss of both time and money. Identifying such failures early has been a top priority for manufacturing industries and various rule-based algorithms have been developed over the years. However, catching these failures is time-consuming and such algorithms cannot adapt well to changes in designs, and sometimes variations in everyday behavior. More importantly, the number of units to monitor in a high-volume manufacturing environment is too big for manual monitoring or for a simple program. Here we develop a novel program that combines both rule-based decisions and machine learning models that can not only learn and adapt to such day-to-day variations or long-term design changes, but also can be applied at scale to the high number of manufacturing units in use today. Using the current state-of-the-art technologies, we then deploy this program at scale to handle the needs of ever-increasing demand from the manufacturing environment. | [
"['Siddarth Reddy Karuka' 'Abhinav Sunderrajan' 'Zheng Zheng'\n 'Yong Woon Tiean' 'Ganesh Nagappan' 'Allan Luk']"
] |
null | null | 2407.08934 | null | null | http://arxiv.org/pdf/2407.08934v1 | 2024-07-12T02:39:50Z | 2024-07-12T02:39:50Z | Compositional Structures in Neural Embedding and Interaction
Decompositions | We describe a basic correspondence between linear algebraic structures within vector embeddings in artificial neural networks and conditional independence constraints on the probability distributions modeled by these networks. Our framework aims to shed light on the emergence of structural patterns in data representations, a phenomenon widely acknowledged but arguably still lacking a solid formal grounding. Specifically, we introduce a characterization of compositional structures in terms of "interaction decompositions," and we establish necessary and sufficient conditions for the presence of such structures within the representations of a model. | [
"['Matthew Trager' 'Alessandro Achille' 'Pramuditha Perera' 'Luca Zancato'\n 'Stefano Soatto']"
] |
null | null | 2407.08946 | null | null | http://arxiv.org/pdf/2407.08946v1 | 2024-07-12T03:03:50Z | 2024-07-12T03:03:50Z | Your Diffusion Model is Secretly a Noise Classifier and Benefits from
Contrastive Training | Diffusion models learn to denoise data and the trained denoiser is then used to generate new samples from the data distribution. In this paper, we revisit the diffusion sampling process and identify a fundamental cause of sample quality degradation: the denoiser is poorly estimated in regions that are far Outside Of the training Distribution (OOD), and the sampling process inevitably evaluates in these OOD regions. This can become problematic for all sampling methods, especially when we move to parallel sampling which requires us to initialize and update the entire sample trajectory of dynamics in parallel, leading to many OOD evaluations. To address this problem, we introduce a new self-supervised training objective that differentiates the levels of noise added to a sample, leading to improved OOD denoising performance. The approach is based on our observation that diffusion models implicitly define a log-likelihood ratio that distinguishes distributions with different amounts of noise, and this expression depends on denoiser performance outside the standard training distribution. We show by diverse experiments that the proposed contrastive diffusion training is effective for both sequential and parallel settings, and it improves the performance and speed of parallel samplers significantly. | [
"['Yunshu Wu' 'Yingtao Luo' 'Xianghao Kong' 'Evangelos E. Papalexakis'\n 'Greg Ver Steeg']"
] |
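A stripped-down version of the self-supervised signal: corrupt each sample with one of K discrete noise scales and train a network to identify which scale was used. The paper's objective is a contrastive refinement built on the implicit log-likelihood ratio; the architecture, K, and the noise scales below are illustrative assumptions.

```python
import torch
import torch.nn as nn

K = 5
sigmas = torch.linspace(0.1, 1.0, K)    # discrete noise scales (assumption)
net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, K))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):
    x = torch.randn(128, 16)                       # stand-in for clean data
    t = torch.randint(K, (128,))                   # sampled noise level
    x_noisy = x + sigmas[t].unsqueeze(1) * torch.randn_like(x)
    loss = loss_fn(net(x_noisy), t)                # classify the noise level
    opt.zero_grad(); loss.backward(); opt.step()
```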
null | null | 2407.08947 | null | null | http://arxiv.org/pdf/2407.08947v1 | 2024-07-12T03:07:28Z | 2024-07-12T03:07:28Z | Constructing Concept-based Models to Mitigate Spurious Correlations with
Minimal Human Effort | Enhancing model interpretability can address spurious correlations by revealing how models draw their predictions. Concept Bottleneck Models (CBMs) can provide a principled way of disclosing and guiding model behaviors through human-understandable concepts, albeit at a high cost of human efforts in data annotation. In this paper, we leverage a synergy of multiple foundation models to construct CBMs with nearly no human effort. We discover undesirable biases in CBMs built on pre-trained models and propose a novel framework designed to exploit pre-trained models while being immune to these biases, thereby reducing vulnerability to spurious correlations. Specifically, our method offers a seamless pipeline that adopts foundation models for assessing potential spurious correlations in datasets, annotating concepts for images, and refining the annotations for improved robustness. We evaluate the proposed method on multiple datasets, and the results demonstrate its effectiveness in reducing model reliance on spurious correlations while preserving its interpretability. | [
"['Jeeyung Kim' 'Ze Wang' 'Qiang Qiu']"
] |
null | null | 2407.08953 | null | null | http://arxiv.org/pdf/2407.08953v1 | 2024-07-12T03:16:54Z | 2024-07-12T03:16:54Z | Attribution Methods in Asset Pricing: Do They Account for Risk? | Over the past few decades, machine learning models have been extremely successful. As a result of axiomatic attribution methods, feature contributions have been explained more clearly and rigorously. There are, however, few studies that have examined domain knowledge in conjunction with the axioms. In this study, we examine asset pricing in finance, a field closely related to risk management. Consequently, when applying machine learning models, we must ensure that the attribution methods reflect the underlying risks accurately. In this work, we present and study several axioms derived from asset pricing domain knowledge. It is shown that while Shapley value and Integrated Gradients preserve most axioms, neither can satisfy all axioms. Using extensive analytical and empirical examples, we demonstrate how attribution methods can reflect risks and when they should not be used. | [
"['Dangxing Chen' 'Yuan Gao']"
] |
null | null | 2407.08964 | null | null | http://arxiv.org/pdf/2407.08964v1 | 2024-07-12T03:28:24Z | 2024-07-12T03:28:24Z | Communication-Aware Reinforcement Learning for Cooperative Adaptive
Cruise Control | Cooperative Adaptive Cruise Control (CACC) plays a pivotal role in enhancing traffic efficiency and safety in Connected and Autonomous Vehicles (CAVs). Reinforcement Learning (RL) has proven effective in optimizing complex decision-making processes in CACC, leading to improved system performance and adaptability. Among RL approaches, Multi-Agent Reinforcement Learning (MARL) has shown remarkable potential by enabling coordinated actions among multiple CAVs through Centralized Training with Decentralized Execution (CTDE). However, MARL often faces scalability issues, particularly when CACC vehicles suddenly join or leave the platoon, resulting in performance degradation. To address these challenges, we propose Communication-Aware Reinforcement Learning (CA-RL). CA-RL includes a communication-aware module that extracts and compresses vehicle communication information through forward and backward information transmission modules. This enables efficient cyclic information propagation within the CACC traffic flow, ensuring policy consistency and mitigating the scalability problems of MARL in CACC. Experimental results demonstrate that CA-RL significantly outperforms baseline methods in various traffic scenarios, achieving superior scalability, robustness, and overall system performance while maintaining reliable performance despite changes in the number of participating vehicles. | [
"['Sicong Jiang' 'Seongjin Choi' 'Lijun Sun']"
] |
null | null | 2407.08965 | null | null | http://arxiv.org/pdf/2407.08965v1 | 2024-07-12T03:28:46Z | 2024-07-12T03:28:46Z | Lite-SAM Is Actually What You Need for Segment Everything | This paper introduces Lite-SAM, an efficient end-to-end solution for the SegEvery task designed to reduce computational costs and redundancy. Lite-SAM is composed of four main components: a streamlined CNN-Transformer hybrid encoder (LiteViT), an automated prompt proposal network (AutoPPN), a traditional prompt encoder, and a mask decoder. All these components are integrated within the SAM framework. Our LiteViT, a high-performance lightweight backbone network, has only 1.16M parameters, which is a 23% reduction compared to the lightest existing backbone network Shufflenet. We also introduce AutoPPN, an innovative end-to-end method for generating prompt boxes and points. This is an improvement over traditional grid search sampling methods, and its unique design allows for easy integration into any SAM series algorithm, extending its usability. We have thoroughly benchmarked Lite-SAM across a plethora of both public and private datasets. The evaluation encompassed a broad spectrum of universal metrics, including the number of parameters, SegEvery execution time, and accuracy. The findings reveal that Lite-SAM, operating with a lean 4.2M parameters, significantly outpaces its counterparts, demonstrating performance improvements of 43x, 31x, 20x, 21x, and 1.6x over SAM, MobileSAM, Edge-SAM, EfficientViT-SAM, and MobileSAM-v2 respectively, all the while maintaining competitive accuracy. This underscores Lite-SAM's prowess in achieving an optimal equilibrium between performance and precision, thereby setting a new state-of-the-art (SOTA) benchmark in the domain. | [
"['Jianhai Fu' 'Yuanjie Yu' 'Ningchuan Li' 'Yi Zhang' 'Qichao Chen'\n 'Jianping Xiong' 'Jun Yin' 'Zhiyu Xiang']"
] |
null | null | 2407.08966 | null | null | http://arxiv.org/pdf/2407.08966v1 | 2024-07-12T03:30:53Z | 2024-07-12T03:30:53Z | LAPT: Label-driven Automated Prompt Tuning for OOD Detection with
Vision-Language Models | Out-of-distribution (OOD) detection is crucial for model reliability, as it identifies samples from unknown classes and reduces errors due to unexpected inputs. Vision-Language Models (VLMs) such as CLIP are emerging as powerful tools for OOD detection by integrating multi-modal information. However, the practical application of such systems is challenged by manual prompt engineering, which demands domain expertise and is sensitive to linguistic nuances. In this paper, we introduce Label-driven Automated Prompt Tuning (LAPT), a novel approach to OOD detection that reduces the need for manual prompt engineering. We develop distribution-aware prompts with in-distribution (ID) class names and negative labels mined automatically. Training samples linked to these class labels are collected autonomously via image synthesis and retrieval methods, allowing for prompt learning without manual effort. We utilize a simple cross-entropy loss for prompt optimization, with cross-modal and cross-distribution mixing strategies to reduce image noise and explore the intermediate space between distributions, respectively. The LAPT framework operates autonomously, requiring only ID class names as input and eliminating the need for manual intervention. With extensive experiments, LAPT consistently outperforms manually crafted prompts, setting a new standard for OOD detection. Moreover, LAPT not only enhances the distinction between ID and OOD samples, but also improves the ID classification accuracy and strengthens the generalization robustness to covariate shifts, resulting in outstanding performance in challenging full-spectrum OOD detection tasks. Code is available at https://github.com/YBZh/LAPT. | [
"['Yabin Zhang' 'Wenjie Zhu' 'Chenhang He' 'Lei Zhang']"
] |
null | null | 2407.08970 | null | null | http://arxiv.org/pdf/2407.08970v1 | 2024-07-12T03:40:13Z | 2024-07-12T03:40:13Z | Soft Prompts Go Hard: Steering Visual Language Models with Hidden
Meta-Instructions | We introduce a new type of indirect injection vulnerabilities in language models that operate on images: hidden "meta-instructions" that influence how the model interprets the image and steer the model's outputs to express an adversary-chosen style, sentiment, or point of view. We explain how to create meta-instructions by generating images that act as soft prompts. Unlike jailbreaking attacks and adversarial examples, the outputs resulting from these images are plausible and based on the visual content of the image, yet follow the adversary's (meta-)instructions. We describe the risks of these attacks, including misinformation and spin, evaluate their efficacy for multiple visual language models and adversarial meta-objectives, and demonstrate how they can "unlock" the capabilities of the underlying language models that are unavailable via explicit text instructions. Finally, we discuss defenses against these attacks. | [
"['Tingwei Zhang' 'Collin Zhang' 'John X. Morris' 'Eugene Bagdasaryan'\n 'Vitaly Shmatikov']"
] |
null | null | 2407.08973 | null | null | http://arxiv.org/pdf/2407.08973v1 | 2024-07-12T03:58:04Z | 2024-07-12T03:58:04Z | Integrating White and Black Box Techniques for Interpretable Machine
Learning | In machine learning algorithm design, there exists a trade-off between the interpretability and performance of the algorithm. In general, algorithms which are simpler and easier for humans to comprehend tend to show worse performance than more complex, less transparent algorithms. For example, a random forest classifier is likely to be more accurate than a simple decision tree, but at the expense of interpretability. In this paper, we present an ensemble classifier design which classifies easier inputs using a highly-interpretable classifier (i.e., white box model), and more difficult inputs using a more powerful, but less interpretable classifier (i.e., black box model). | [
"['Eric M. Vernon' 'Naoki Masuyama' 'Yusuke Nojima']"
] |
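The design reduces to a confidence gate, which the short sketch below makes concrete: a shallow decision tree answers whenever its predicted class probability clears a threshold, and a random forest handles the rest. The 0.9 gate and the example dataset are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

white = DecisionTreeClassifier(max_depth=3, random_state=0).fit(Xtr, ytr)   # interpretable
black = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)

conf = white.predict_proba(Xte).max(axis=1)
easy = conf >= 0.9                       # inputs the white box keeps for itself
pred = np.where(easy, white.predict(Xte), black.predict(Xte))

print(f"white-box coverage: {easy.mean():.0%}, ensemble accuracy: {(pred == yte).mean():.3f}")
```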
null | null | 2407.08974 | null | null | http://arxiv.org/pdf/2407.08974v1 | 2024-07-12T04:04:54Z | 2024-07-12T04:04:54Z | Topology-enhanced machine learning model (Top-ML) for anticancer peptide
prediction | Recently, therapeutic peptides have demonstrated great promise for cancer treatment. To explore powerful anticancer peptides, artificial intelligence (AI)-based approaches have been developed to systematically screen potential candidates. However, the lack of efficient featurization of peptides has become a bottleneck for these machine-learning models. In this paper, we propose a topology-enhanced machine learning model (Top-ML) for anticancer peptide prediction. Our Top-ML employs peptide topological features derived from its sequence "connection" information characterized by vector and spectral descriptors. Our Top-ML model has been validated on two widely used AntiCP 2.0 benchmark datasets and has achieved state-of-the-art performance. Our results highlight the potential of leveraging novel topology-based featurization to accelerate the identification of anticancer peptides. | [
"['Joshua Zhi En Tan' 'JunJie Wee' 'Xue Gong' 'Kelin Xia']"
] |
null | null | 2407.08976 | null | null | http://arxiv.org/pdf/2407.08976v1 | 2024-07-12T04:08:01Z | 2024-07-12T04:08:01Z | Computational-Statistical Trade-off in Kernel Two-Sample Testing with
Random Fourier Features | Recent years have seen a surge in methods for two-sample testing, among which the Maximum Mean Discrepancy (MMD) test has emerged as an effective tool for handling complex and high-dimensional data. Despite its success and widespread adoption, the primary limitation of the MMD test has been its quadratic-time complexity, which poses challenges for large-scale analysis. While various approaches have been proposed to expedite the procedure, it has been unclear whether it is possible to attain the same power guarantee as the MMD test at sub-quadratic time cost. To fill this gap, we revisit the approximated MMD test using random Fourier features, and investigate its computational-statistical trade-off. We start by revealing that the approximated MMD test is pointwise consistent in power only when the number of random features approaches infinity. We then consider the uniform power of the test and study the time-power trade-off under the minimax testing framework. Our result shows that, by carefully choosing the number of random features, it is possible to attain the same minimax separation rates as the MMD test within sub-quadratic time. We demonstrate this point under different distributional assumptions such as densities in a Sobolev ball. Our theoretical findings are corroborated by simulation studies. | [
"['Ikjun Choi' 'Ilmun Kim']"
] |
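The approximated statistic itself is short to write down: draw R random Fourier features for a Gaussian kernel, embed both samples, and take the squared distance between the feature means, at O((n+m)R) cost instead of quadratic. The kernel bandwidth and feature count below are assumptions; the paper's analysis concerns how R must scale to retain minimax power.

```python
import numpy as np

def rff_mmd2(X, Y, R=200, sigma=1.0, seed=0):
    """Random-Fourier-feature estimate of MMD^2 for a Gaussian kernel."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], R))  # spectral frequencies
    b = rng.uniform(0, 2 * np.pi, size=R)
    phi = lambda Z: np.sqrt(2.0 / R) * np.cos(Z @ W + b)
    return float(np.sum((phi(X).mean(0) - phi(Y).mean(0)) ** 2))

rng = np.random.default_rng(1)
X1 = rng.normal(size=(500, 3))
X2 = rng.normal(size=(500, 3))           # same distribution as X1
Y = rng.normal(loc=0.3, size=(500, 3))   # shifted distribution
print(rff_mmd2(X1, X2), rff_mmd2(X1, Y)) # small vs. clearly larger value
```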
null | null | 2407.08978 | null | null | http://arxiv.org/pdf/2407.08978v1 | 2024-07-12T04:18:22Z | 2024-07-12T04:18:22Z | Towards Chapter-to-Chapter Context-Aware Literary Translation via Large
Language Models | Discourse phenomena in existing document-level translation datasets are sparse, which has been a fundamental obstacle in the development of context-aware machine translation models. Moreover, most existing document-level corpora and context-aware machine translation methods rely on an unrealistic assumption on sentence-level alignments. To mitigate these issues, we first curate a novel dataset of Chinese-English literature, which consists of 160 books with intricate discourse structures. Then, we propose a more pragmatic and challenging setting for context-aware translation, termed chapter-to-chapter (Ch2Ch) translation, and investigate the performance of commonly-used machine translation models under this setting. Furthermore, we introduce a potential approach of finetuning large language models (LLMs) within the domain of Ch2Ch literary translation, yielding impressive improvements over baselines. Through our comprehensive analysis, we unveil that literary translation under the Ch2Ch setting is challenging in nature, with respect to both model learning methods and translation decoding algorithms. | [
"['Linghao Jin' 'Li An' 'Xuezhe Ma']"
] |
null | null | 2407.08983 | null | null | http://arxiv.org/pdf/2407.08983v1 | 2024-07-12T04:38:28Z | 2024-07-12T04:38:28Z | Towards More Trustworthy and Interpretable LLMs for Code through
Syntax-Grounded Explanations | Trustworthiness and interpretability are inextricably linked concepts for LLMs. The more interpretable an LLM is, the more trustworthy it becomes. However, current techniques for interpreting LLMs when applied to code-related tasks largely focus on accuracy measurements, measures of how models react to change, or individual task performance instead of the fine-grained explanations needed at prediction time for greater interpretability, and hence trust. To improve upon this status quo, this paper introduces ASTrust, an interpretability method for LLMs of code that generates explanations grounded in the relationship between model confidence and syntactic structures of programming languages. ASTrust explains generated code in the context of syntax categories based on Abstract Syntax Trees and aids practitioners in understanding model predictions at both local (individual code snippets) and global (larger datasets of code) levels. By distributing and assigning model confidence scores to well-known syntactic structures that exist within ASTs, our approach moves beyond prior techniques that perform token-level confidence mapping by offering a view of model confidence that directly aligns with programming language concepts with which developers are familiar. To put ASTrust into practice, we developed an automated visualization that illustrates the aggregated model confidence scores superimposed on sequence, heat-map, and graph-based visuals of syntactic structures from ASTs. We examine both the practical benefit that ASTrust can provide through a data science study on 12 popular LLMs on a curated set of GitHub repos and the usefulness of ASTrust through a human study. | [
"['David N. Palacio' 'Daniel Rodriguez-Cardenas' 'Alejandro Velasco'\n 'Dipin Khati' 'Kevin Moran' 'Denys Poshyvanyk']"
] |
null | null | 2407.08987 | null | null | http://arxiv.org/pdf/2407.08987v1 | 2024-07-12T04:44:29Z | 2024-07-12T04:44:29Z | Parameter inference from a non-stationary unknown process | Non-stationary systems are found throughout the world, from climate patterns under the influence of variation in carbon dioxide concentration, to brain dynamics driven by ascending neuromodulation. Accordingly, there is a need for methods to analyze non-stationary processes, and yet most time-series analysis methods that are used in practice, on important problems across science and industry, make the simplifying assumption of stationarity. One important problem in the analysis of non-stationary systems is the problem class that we refer to as Parameter Inference from a Non-stationary Unknown Process (PINUP). Given an observed time series, this involves inferring the parameters that drive non-stationarity of the time series, without requiring knowledge or inference of a mathematical model of the underlying system. Here we review and unify a diverse literature of algorithms for PINUP. We formulate the problem, and categorize the various algorithmic contributions. This synthesis will allow researchers to identify gaps in the literature and will enable systematic comparisons of different methods. We also demonstrate that the most common systems that existing methods are tested on - notably the non-stationary Lorenz process and logistic map - are surprisingly easy to perform well on using simple statistical features like windowed mean and variance, undermining the practice of using good performance on these systems as evidence of algorithmic performance. We then identify more challenging problems that many existing methods perform poorly on and which can be used to drive methodological advances in the field. Our results unify disjoint scientific contributions to analyzing non-stationary systems and suggest new directions for progress on the PINUP problem and the broader study of non-stationary phenomena. | [
"['Kieran S. Owens' 'Ben D. Fulcher']"
] |
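The "surprisingly easy" baseline the abstract mentions fits in a few lines: drive a logistic map with a slowly drifting parameter, then regress that parameter on windowed mean and variance alone. The drift schedule and window length are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

T, w = 20_000, 200
r = np.linspace(3.5, 3.9, T)             # slowly drifting drive parameter
x = np.empty(T); x[0] = 0.5
for t in range(T - 1):
    x[t + 1] = r[t] * x[t] * (1 - x[t])  # logistic map

n = T // w                               # non-overlapping windows
feats = np.array([[x[i*w:(i+1)*w].mean(), x[i*w:(i+1)*w].var()] for i in range(n)])
target = np.array([r[i*w:(i+1)*w].mean() for i in range(n)])

model = LinearRegression().fit(feats[: n // 2], target[: n // 2])
print("R^2 on held-out windows:", model.score(feats[n // 2:], target[n // 2:]))
```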
null | null | 2407.09011 | null | null | http://arxiv.org/pdf/2407.09011v1 | 2024-07-12T06:01:51Z | 2024-07-12T06:01:51Z | One Stone, Four Birds: A Comprehensive Solution for QA System Using
Supervised Contrastive Learning | This paper presents a novel and comprehensive solution to enhance both the robustness and efficiency of question answering (QA) systems through supervised contrastive learning (SCL). Training a high-performance QA system has become straightforward with pre-trained language models, requiring only a small amount of data and simple fine-tuning. However, despite recent advances, existing QA systems still exhibit significant deficiencies in functionality and training efficiency. We address the functionality issue by defining four key tasks: user input intent classification, out-of-domain input detection, new intent discovery, and continual learning. We then leverage a unified SCL-based representation learning method to efficiently build an intra-class compact and inter-class scattered feature space, facilitating both known intent classification and unknown intent detection and discovery. Consequently, with minimal additional tuning on downstream tasks, our approach significantly improves model efficiency and achieves new state-of-the-art performance across all tasks. | [
"['Bo Wang' 'Tsunenori Mine']"
] |
null | null | 2407.09013 | null | null | http://arxiv.org/pdf/2407.09013v1 | 2024-07-12T06:03:38Z | 2024-07-12T06:03:38Z | Procedural Content Generation via Generative Artificial Intelligence | Attempts to utilize machine learning in procedural content generation (PCG) have been made in the past. In this survey paper, we investigate how generative artificial intelligence (AI), which saw a significant increase in interest in the mid-2010s, is being used for PCG. We review applications of generative AI for the creation of various types of content, including terrains, items, and even storylines. While generative AI is effective for PCG, one significant issue it faces is that building high-performance generative AI requires vast amounts of training data. Because content is generally highly customized, domain-specific training data is scarce, and straightforward approaches to generative AI models may not work well. For PCG research to advance further, issues related to limited training data must be overcome. Thus, we also give special consideration to research that addresses the challenges posed by limited training data. | [
"['Xinyu Mao' 'Wanli Yu' 'Kazunori D Yamada' 'Michael R. Zielewski']"
] |
null | null | 2407.09017 | null | null | http://arxiv.org/pdf/2407.09017v1 | 2024-07-12T06:10:01Z | 2024-07-12T06:10:01Z | AI-Driven Guided Response for Security Operation Centers with Microsoft
Copilot for Security | Security operation centers contend with a constant stream of security incidents, ranging from straightforward to highly complex. To address this, we developed Copilot Guided Response (CGR), an industry-scale ML architecture that guides security analysts across three key tasks -- (1) investigation, providing essential historical context by identifying similar incidents; (2) triaging to ascertain the nature of the incident -- whether it is a true positive, false positive, or benign positive; and (3) remediation, recommending tailored containment actions. CGR is integrated into the Microsoft Defender XDR product and deployed worldwide, generating millions of recommendations across thousands of customers. Our extensive evaluation, incorporating internal evaluation, collaboration with security experts, and customer feedback, demonstrates that CGR delivers high-quality recommendations across all three tasks. We provide a comprehensive overview of the CGR architecture, setting a precedent as the first cybersecurity company to openly discuss these capabilities in such depth. Additionally, we release GUIDE, the largest public collection of real-world security incidents, spanning 13M pieces of evidence across 1M annotated incidents. By enabling researchers and practitioners to conduct research on real-world data, GUIDE advances the state of cybersecurity and supports the development of next-generation machine learning systems. | [
"['Scott Freitas' 'Jovan Kalajdjieski' 'Amir Gharib' 'Rob McCann']"
] |
null | null | 2407.09024 | null | null | http://arxiv.org/pdf/2407.09024v1 | 2024-07-12T06:32:36Z | 2024-07-12T06:32:36Z | Aligning Diffusion Behaviors with Q-functions for Efficient Continuous
Control | Drawing upon recent advances in language model alignment, we formulate offline Reinforcement Learning as a two-stage optimization problem: First pretraining expressive generative policies on reward-free behavior datasets, then fine-tuning these policies to align with task-specific annotations like Q-values. This strategy allows us to leverage abundant and diverse behavior data to enhance generalization and enable rapid adaptation to downstream tasks using minimal annotations. In particular, we introduce Efficient Diffusion Alignment (EDA) for solving continuous control problems. EDA utilizes diffusion models for behavior modeling. However, unlike previous approaches, we represent diffusion policies as the derivative of a scalar neural network with respect to action inputs. This representation is critical because it enables direct density calculation for diffusion models, making them compatible with existing LLM alignment theories. During policy fine-tuning, we extend preference-based alignment methods like Direct Preference Optimization (DPO) to align diffusion behaviors with continuous Q-functions. Our evaluation on the D4RL benchmark shows that EDA exceeds all baseline methods in overall performance. Notably, EDA maintains about 95% of performance and still outperforms several baselines given only 1% of Q-labelled data during fine-tuning. | [
"['Huayu Chen' 'Kaiwen Zheng' 'Hang Su' 'Jun Zhu']"
] |
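The shape of the fine-tuning loss can be sketched generically. Below, a DPO-style objective prefers, for each state, whichever of two candidate actions has the higher Q-value; the log-densities are taken as given, whereas in EDA they come from the scalar-network diffusion representation that this sketch deliberately does not implement.

```python
import torch
import torch.nn.functional as F

def q_dpo_loss(logp_a, logp_b, ref_logp_a, ref_logp_b, q_a, q_b, beta=0.1):
    """DPO-style loss with Q-values standing in for human preferences."""
    a_wins = (q_a >= q_b).float()                  # Q picks the winner per sample
    logp_w = a_wins * logp_a + (1 - a_wins) * logp_b
    logp_l = a_wins * logp_b + (1 - a_wins) * logp_a
    ref_w = a_wins * ref_logp_a + (1 - a_wins) * ref_logp_b
    ref_l = a_wins * ref_logp_b + (1 - a_wins) * ref_logp_a
    margin = beta * ((logp_w - ref_w) - (logp_l - ref_l))
    return -F.logsigmoid(margin).mean()

# Toy check with random log-densities and Q-values for 8 state-action pairs.
t = lambda: torch.randn(8)
print(q_dpo_loss(t(), t(), t(), t(), t(), t()))
```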
null | null | 2407.09026 | null | null | http://arxiv.org/pdf/2407.09026v1 | 2024-07-12T06:34:24Z | 2024-07-12T06:34:24Z | HPC: Hierarchical Progressive Coding Framework for Volumetric Video | Volumetric video based on Neural Radiance Field (NeRF) holds vast potential for various 3D applications, but its substantial data volume poses significant challenges for compression and transmission. Current NeRF compression lacks the flexibility to adjust video quality and bitrate within a single model for various network and device capacities. To address these issues, we propose HPC, a novel hierarchical progressive volumetric video coding framework achieving variable bitrate using a single model. Specifically, HPC introduces a hierarchical representation with a multi-resolution residual radiance field to reduce temporal redundancy in long-duration sequences while simultaneously generating various levels of detail. Then, we propose an end-to-end progressive learning approach with a multi-rate-distortion loss function to jointly optimize both hierarchical representation and compression. Our HPC, trained only once, can realize multiple compression levels, while current methods need to train multiple fixed-bitrate models for different rate-distortion (RD) tradeoffs. Extensive experiments demonstrate that HPC achieves flexible quality levels with variable bitrate by a single model and exhibits competitive RD performance, even outperforming fixed-bitrate models across various datasets. | [
"['Zihan Zheng' 'Houqiang Zhong' 'Qiang Hu' 'Xiaoyun Zhang' 'Li Song'\n 'Ya Zhang' 'Yanfeng Wang']"
] |
null | null | 2407.09032 | null | null | http://arxiv.org/pdf/2407.09032v1 | 2024-07-12T06:48:00Z | 2024-07-12T06:48:00Z | DRM Revisited: A Complete Error Analysis | In this work, we address a foundational question in the theoretical analysis of the Deep Ritz Method (DRM) under the over-parameterization regime: Given a target precision level, how can one determine the appropriate number of training samples, the key architectural parameters of the neural networks, the step size for the projected gradient descent optimization procedure, and the requisite number of iterations, such that the output of the gradient descent process closely approximates the true solution of the underlying partial differential equation to the specified precision? | [
"['Yuling Jiao' 'Ruoxuan Li' 'Peiying Wu' 'Jerry Zhijian Yang'\n 'Pingwen Zhang']"
] |
null | null | 2407.09039 | null | null | http://arxiv.org/pdf/2407.09039v1 | 2024-07-12T07:04:06Z | 2024-07-12T07:04:06Z | Overcoming Catastrophic Forgetting in Tabular Data Classification: A
Pseudorehearsal-based approach | Continual learning (CL) poses the important challenge of adapting to evolving data distributions without forgetting previously acquired knowledge while consolidating new knowledge. In this paper, we introduce a new methodology, coined as Tabular-data Rehearsal-based Incremental Lifelong Learning framework (TRIL3), designed to address the phenomenon of catastrophic forgetting in tabular data classification problems. TRIL3 uses the prototype-based incremental generative model XuILVQ to generate synthetic data to preserve old knowledge and the DNDF algorithm, which was modified to run in an incremental way, to learn classification tasks for tabular data, without storing old samples. After different tests to obtain the adequate percentage of synthetic data and to compare TRIL3 with other available CL proposals, we can conclude that TRIL3 outperforms other options in the literature using only 50% of synthetic data. | [
"['Pablo García-Santaclara' 'Bruno Fernández-Castro'\n 'Rebeca P. Díaz-Redondo']"
] |
null | null | 2407.09050 | null | null | http://arxiv.org/pdf/2407.09050v1 | 2024-07-12T07:18:05Z | 2024-07-12T07:18:05Z | Refusing Safe Prompts for Multi-modal Large Language Models | Multimodal large language models (MLLMs) have become the cornerstone of today's generative AI ecosystem, sparking intense competition among tech giants and startups. In particular, an MLLM generates a text response given a prompt consisting of an image and a question. While state-of-the-art MLLMs use safety filters and alignment techniques to refuse unsafe prompts, in this work, we introduce MLLM-Refusal, the first method that induces refusals for safe prompts. In particular, our MLLM-Refusal optimizes a nearly-imperceptible refusal perturbation and adds it to an image, causing target MLLMs to likely refuse a safe prompt containing the perturbed image and a safe question. Specifically, we formulate MLLM-Refusal as a constrained optimization problem and propose an algorithm to solve it. Our method offers competitive advantages for MLLM model providers by potentially disrupting user experiences of competing MLLMs, since competing MLLM's users will receive unexpected refusals when they unwittingly use these perturbed images in their prompts. We evaluate MLLM-Refusal on four MLLMs across four datasets, demonstrating its effectiveness in causing competing MLLMs to refuse safe prompts while not affecting non-competing MLLMs. Furthermore, we explore three potential countermeasures -- adding Gaussian noise, DiffPure, and adversarial training. Our results show that they are insufficient: though they can mitigate MLLM-Refusal's effectiveness, they also sacrifice the accuracy and/or efficiency of the competing MLLM. The code is available at https://github.com/Sadcardation/MLLM-Refusal. | [
"['Zedian Shao' 'Hongbin Liu' 'Yuepeng Hu' 'Neil Zhenqiang Gong']"
] |
null | null | 2407.09055 | null | null | http://arxiv.org/pdf/2407.09055v1 | 2024-07-12T07:22:45Z | 2024-07-12T07:22:45Z | Advanced Graph Clustering Methods: A Comprehensive and In-Depth Analysis | Graph clustering, which aims to divide a graph into several homogeneous groups, is a critical area of study with applications that span various fields such as social network analysis, bioinformatics, and image segmentation. This paper explores both traditional and more recent approaches to graph clustering. Firstly, key concepts and definitions in graph theory are introduced. The background section covers essential topics, including graph Laplacians and the integration of Deep Learning in graph analysis. The paper then delves into traditional clustering methods, including Spectral Clustering and the Leiden algorithm. Following this, state-of-the-art clustering techniques that leverage deep learning are examined. A comprehensive comparison of these methods is made through experiments. The paper concludes with a discussion of the practical applications of graph clustering and potential future research directions. | [
"['Timothé Watteau' 'Aubin Bonnefoy' 'Simon Illouz-Laurent'\n 'Joaquim Jusseau' 'Serge Iovleff']"
] |
null | null | 2407.09061 | null | null | http://arxiv.org/pdf/2407.09061v1 | 2024-07-12T07:29:08Z | 2024-07-12T07:29:08Z | Spectral Self-supervised Feature Selection | Choosing a meaningful subset of features from high-dimensional observations in unsupervised settings can greatly enhance the accuracy of downstream analysis, such as clustering or dimensionality reduction, and provide valuable insights into the sources of heterogeneity in a given dataset. In this paper, we propose a self-supervised graph-based approach for unsupervised feature selection. Our method's core involves computing robust pseudo-labels by applying simple processing steps to the graph Laplacian's eigenvectors. The subset of eigenvectors used for computing pseudo-labels is chosen based on a model stability criterion. We then measure the importance of each feature by training a surrogate model to predict the pseudo-labels from the observations. Our approach is shown to be robust to challenging scenarios, such as the presence of outliers and complex substructures. We demonstrate the effectiveness of our method through experiments on real-world datasets, showing its robustness across multiple domains, particularly its effectiveness on biological datasets. | [
"['Daniel Segal' 'Ofir Lindenbaum' 'Ariel Jaffe']"
] |
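The pipeline is simple enough to sketch end to end: build a kNN graph, take a Laplacian eigenvector, threshold it into pseudo-labels, and rank features by a surrogate model's importances. Using the sign of the Fiedler vector and a random forest surrogate are simplifying assumptions; the paper selects eigenvectors via a stability criterion and applies further processing.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
# Two clusters that differ only along the first 2 of 10 features.
X = np.vstack([rng.normal(0, 1, (100, 10)), rng.normal(0, 1, (100, 10))])
X[100:, :2] += 4.0

A = kneighbors_graph(X, n_neighbors=10, mode="connectivity")
A = 0.5 * (A + A.T)                        # symmetrize the kNN graph
L = laplacian(A, normed=True)
vals, vecs = np.linalg.eigh(L.toarray())
pseudo = (vecs[:, 1] > 0).astype(int)      # sign of the Fiedler vector

surrogate = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, pseudo)
print("feature importances:", np.round(surrogate.feature_importances_, 2))
```

The informative features (here the first two) should dominate the importance ranking, which is exactly the selection signal the method exploits.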
null | null | 2407.09064 | null | null | http://arxiv.org/pdf/2407.09064v1 | 2024-07-12T07:34:10Z | 2024-07-12T07:34:10Z | Multi-Modal Dataset Creation for Federated~Learning with DICOM
Structured Reports | Purpose: Federated training is often hindered by heterogeneous datasets due to divergent data storage options, inconsistent naming schemes, varied annotation procedures, and disparities in label quality. This is particularly evident in the emerging multi-modal learning paradigms, where dataset harmonization including a uniform data representation and filtering options are of paramount importance. Methods: DICOM structured reports enable the standardized linkage of arbitrary information beyond the imaging domain and can be used within Python deep learning pipelines with highdicom. Building on this, we developed an open platform for data integration and interactive filtering capabilities that simplifies the process of assembling multi-modal datasets. Results: In this study, we extend our prior work by showing its applicability to more and divergent data types, as well as streamlining datasets for federated training within an established consortium of eight university hospitals in Germany. We prove its concurrent filtering ability by creating harmonized multi-modal datasets across all locations for predicting the outcome after minimally invasive heart valve replacement. The data includes DICOM data (i.e. computed tomography images, electrocardiography scans) as well as annotations (i.e. calcification segmentations, pointsets and pacemaker dependency), and metadata (i.e. prosthesis and diagnoses). Conclusion: Structured reports bridge the traditional gap between imaging systems and information systems. Utilizing the inherent DICOM reference system arbitrary data types can be queried concurrently to create meaningful cohorts for clinical studies. The graphical interface as well as example structured report templates will be made publicly available. | [
"['Malte Tölle' 'Lukas Burger' 'Halvar Kelm' 'Florian André' 'Peter Bannas'\n 'Gerhard Diller' 'Norbert Frey' 'Philipp Garthe' 'Stefan Groß'\n 'Anja Hennemuth' 'Lars Kaderali' 'Nina Krüger' 'Andreas Leha'\n 'Simon Martin' 'Alexander Meyer' 'Eike Nagel' 'Stefan Orwat'\n 'Clemens Scherer' 'Moritz Seiffert' 'Jan Moritz Seliger' 'Stefan Simm'\n 'Tim Friede' 'Tim Seidler' 'Sandy Engelhardt']"
] |
null | null | 2407.09087 | null | null | http://arxiv.org/pdf/2407.09087v1 | 2024-07-12T08:25:31Z | 2024-07-12T08:25:31Z | On the Role of Discrete Tokenization in Visual Representation Learning | In the realm of self-supervised learning (SSL), masked image modeling (MIM) has gained popularity alongside contrastive learning methods. MIM involves reconstructing masked regions of input images using their unmasked portions. A notable subset of MIM methodologies employs discrete tokens as the reconstruction target, but the theoretical underpinnings of this choice remain underexplored. In this paper, we explore the role of these discrete tokens, aiming to unravel their benefits and limitations. Building upon the connection between MIM and contrastive learning, we provide a comprehensive theoretical understanding on how discrete tokenization affects the model's generalization capabilities. Furthermore, we propose a novel metric named TCAS, which is specifically designed to assess the effectiveness of discrete tokens within the MIM framework. Inspired by this metric, we contribute an innovative tokenizer design and propose a corresponding MIM method named ClusterMIM. It demonstrates superior performance on a variety of benchmark datasets and ViT backbones. Code is available at https://github.com/PKU-ML/ClusterMIM. | [
"['Tianqi Du' 'Yifei Wang' 'Yisen Wang']"
] |
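The discrete-token target this analysis studies is, at its core, a nearest-codebook assignment over patch embeddings. A minimal sketch follows; the random codebook stands in for a trained tokenizer (e.g. a VQ-VAE vocabulary) and the shapes are illustrative.

```python
import torch

def tokenize(patch_embeddings, codebook):
    """Map each patch embedding to the id of its nearest codebook vector,
    i.e. the discrete reconstruction target used by VQ-style MIM methods."""
    dists = torch.cdist(patch_embeddings, codebook)  # (num_patches, vocab)
    return dists.argmin(dim=1)                       # one token id per patch

z = torch.randn(196, 32)         # 14x14 patches, 32-d embeddings (illustrative)
codebook = torch.randn(512, 32)  # 512-entry visual vocabulary (illustrative)
tokens = tokenize(z, codebook)   # targets for the masked-patch prediction loss
```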
null | null | 2407.09093 | null | null | http://arxiv.org/pdf/2407.09093v1 | 2024-07-12T08:42:58Z | 2024-07-12T08:42:58Z | On Exact Bit-level Reversible Transformers Without Changing
Architectures | In the literature, various reversible deep neural network (DNN) models have been proposed to reduce memory consumption or improve data throughput in the training process. However, almost all existing reversible DNNs either are constrained to have special structures or are constructed by modifying the original DNN architectures considerably to enable reversibility. In this work, we propose exact bit-level reversible transformers without changing the architectures in the inference procedure. The basic idea is to first treat each transformer block as the Euler integration approximation for solving an ordinary differential equation (ODE) and then incorporate the technique of bidirectional integration approximation (BDIA) (see [26] for BDIA-based diffusion inversion) into the neural architecture together with activation quantization to make it exactly bit-level reversible, referred to as BDIA-transformer. In the training process, we let a hyper-parameter $\gamma$ in BDIA-transformer randomly take one of the two values $\{0.5, -0.5\}$ per transformer block for averaging two consecutive integration approximations, which regularizes the models and improves the validation accuracy. Light-weight side information per transformer block is stored in the forward process to account for binary quantization loss, enabling exact bit-level reversibility. In the inference procedure, the expectation $\mathbb{E}(\gamma)=0$ is taken, making the resulting architecture of BDIA-transformer identical to the transformer up to activation quantization. An empirical study indicates that BDIA-transformers outperform their original counterparts notably due to the regularization effect of the $\gamma$ parameter. | [
"['Guoqiang Zhang' 'J. P. Lewis' 'W. B. Kleijn']"
] |
null | null | 2407.09096 | null | null | http://arxiv.org/pdf/2407.09096v1 | 2024-07-12T08:48:16Z | 2024-07-12T08:48:16Z | STD-LLM: Understanding Both Spatial and Temporal Properties of
Spatial-Temporal Data with LLMs | Spatial-temporal forecasting and imputation are important for real-world dynamic systems such as intelligent transportation, urban planning, and public health. Most existing methods are tailored for individual forecasting or imputation tasks but are not designed for both. Additionally, they are less effective for zero-shot and few-shot learning. While large language models (LLMs) have exhibited strong pattern recognition and reasoning abilities across various tasks, including few-shot and zero-shot learning, their development in understanding spatial-temporal data has been constrained by insufficient modeling of complex correlations such as the temporal correlations, spatial connectivity, non-pairwise and high-order spatial-temporal correlations within data. In this paper, we propose STD-LLM for understanding both spatial and temporal properties of Spatial-Temporal Data with LLMs, which is capable of implementing both spatial-temporal forecasting and imputation tasks. STD-LLM understands spatial-temporal correlations via explicitly designed spatial and temporal tokenizers as well as virtual nodes. Topology-aware node embeddings are designed for LLMs to comprehend and exploit the topology structure of data. Additionally, to capture the non-pairwise and higher-order correlations, we design a hypergraph learning module for LLMs, which can enhance the overall performance and improve efficiency. Extensive experiments demonstrate that STD-LLM exhibits strong performance and generalization capabilities across the forecasting and imputation tasks on various datasets. Moreover, STD-LLM achieves promising results on both few-shot and zero-shot learning tasks. | [
"['Yiheng Huang' 'Xiaowei Mao' 'Shengnan Guo' 'Yubin Chen' 'Youfang Lin'\n 'Huaiyu Wan']"
] |
null | null | 2407.09104 | null | null | http://arxiv.org/pdf/2407.09104v1 | 2024-07-12T09:10:07Z | 2024-07-12T09:10:07Z | UserBoost: Generating User-specific Synthetic Data for Faster Enrolment
into Behavioural Biometric Systems | Behavioural biometric authentication systems entail an enrolment period that is burdensome for the user. In this work, we explore generating synthetic gestures from a few real user gestures with generative deep learning, with the application of training a simple (i.e. non-deep-learned) authentication model. Specifically, we show that utilising synthetic data alongside real data can reduce the number of real datapoints a user must provide to enrol into a biometric system. To validate our methods, we use the publicly available dataset of WatchAuth, a system proposed in 2022 for authenticating smartwatch payments using the physical gesture of reaching towards a payment terminal. We develop a regularised autoencoder model for generating synthetic user-specific wrist motion data representing these physical gestures, and demonstrate the diversity and fidelity of our synthetic gestures. We show that using synthetic gestures in training can improve classification ability for a real-world system. Through this technique we can reduce the number of gestures required to enrol a user into a WatchAuth-like system by more than 40% without negatively impacting its error rates. | [
"['George Webber' 'Jack Sturgess' 'Ivan Martinovic']"
] |
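A minimal sketch of the generative component described above, assuming fixed-length, flattened wrist-motion windows. The abstract does not specify the regularizer, so the L2 penalty on the latent code here is an assumption, and all layer sizes and the loss weight are illustrative.

```python
import torch
import torch.nn as nn

class GestureAutoencoder(nn.Module):
    """Minimal regularized autoencoder for fixed-length wrist-motion windows
    (e.g. flattened accelerometer frames). Layer sizes are illustrative."""
    def __init__(self, dim_in=300, dim_latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, 128), nn.ReLU(),
                                     nn.Linear(128, dim_latent))
        self.decoder = nn.Sequential(nn.Linear(dim_latent, 128), nn.ReLU(),
                                     nn.Linear(128, dim_in))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def loss_fn(x, x_hat, z, lam=1e-3):
    # Reconstruction loss plus an (assumed) L2 penalty on the latent code.
    return nn.functional.mse_loss(x_hat, x) + lam * z.pow(2).mean()
```

Once trained on a user's few real gestures, synthetic gestures can be produced by decoding small perturbations of the real gestures' latent codes, and the augmented set then trains the simple authentication model.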
null | null | 2407.09105 | null | null | http://arxiv.org/pdf/2407.09105v1 | 2024-07-12T09:10:37Z | 2024-07-12T09:10:37Z | Enhancing Training Efficiency Using Packing with Flash Attention | Padding is often used in tuning LLMs by adding special tokens to shorter training examples to match the length of the longest sequence in each batch. While this ensures uniformity for batch processing, it introduces inefficiencies by including irrelevant padding tokens in the computation and wastes GPU resources. On the other hand, the Hugging Face SFT trainer offers the option to use packing to combine multiple training examples up to the maximum sequence length. This allows for maximal utilization of GPU resources. However, without proper masking of each packed training example, attention will not be computed correctly when using the SFT trainer. We enable and then analyse packing and Flash Attention with proper attention masking of each example and show the benefits of this training paradigm. | [
"['Achintya Kundu' 'Rhui Dih Lee' 'Laura Wynter' 'Raghu Kiran Ganti']"
] |
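The masking requirement above comes down to a block-diagonal causal structure: a token may attend only to earlier tokens of its own packed example. A sketch follows; it builds a plain additive mask for clarity (Flash Attention's varlen kernels instead take cumulative sequence lengths, but they enforce the same structure), and the helper name is ours.

```python
import torch

def packed_attention_mask(seq_lens):
    """Additive attention mask for a row holding several packed examples:
    0 where attention is allowed (causal, within the same example),
    -inf everywhere else (across example boundaries or future tokens)."""
    total = sum(seq_lens)
    mask = torch.full((total, total), float("-inf"))
    start = 0
    for n in seq_lens:
        causal = torch.tril(torch.ones(n, n, dtype=torch.bool))
        mask[start:start + n, start:start + n][causal] = 0.0
        start += n
    return mask

# Two examples of lengths 3 and 2 packed into one row of length 5:
# the first token of example 2 attends only to itself.
print(packed_attention_mask([3, 2]))
```

Position ids likewise need to restart at 0 for each packed example so that positional encodings do not leak sequence boundaries.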
null | null | 2407.09111 | null | null | http://arxiv.org/pdf/2407.09111v1 | 2024-07-12T09:24:34Z | 2024-07-12T09:24:34Z | Inference Optimization of Foundation Models on AI Accelerators | Powerful foundation models, including large language models (LLMs), with Transformer architectures have ushered in a new era of Generative AI across various industries. Industry and the research community have witnessed a large number of new applications based on these foundation models. Such applications include question answering, customer service, image and video generation, and code completion, among others. However, as the number of model parameters reaches hundreds of billions, their deployment incurs prohibitive inference costs and high latency in real-world scenarios. As a result, the demand for cost-effective and fast inference using AI accelerators is ever higher. To this end, our tutorial offers a comprehensive discussion on complementary inference optimization techniques using AI accelerators. Beginning with an overview of basic Transformer architectures and deep learning system frameworks, we dive deep into system optimization techniques for fast and memory-efficient attention computations and discuss how they can be implemented efficiently on AI accelerators. Next, we describe architectural elements that are key for fast transformer inference. Finally, we examine various model compression and fast decoding strategies in the same context. | [
"['Youngsuk Park' 'Kailash Budhathoki' 'Liangfu Chen' 'Jonas Kübler'\n 'Jiaji Huang' 'Matthäus Kleindessner' 'Jun Huan' 'Volkan Cevher'\n 'Yida Wang' 'George Karypis']"
] |
null | null | 2407.09120 | null | null | http://arxiv.org/abs/2407.09120v1 | 2024-07-12T09:35:25Z | 2024-07-12T09:35:25Z | URRL-IMVC: Unified and Robust Representation Learning for Incomplete
Multi-View Clustering | Incomplete multi-view clustering (IMVC) aims to cluster multi-view data that are only partially available. This poses two main challenges: effectively leveraging multi-view information and mitigating the impact of missing views. Prevailing solutions employ cross-view contrastive learning and missing view recovery techniques. However, they either neglect valuable complementary information by focusing only on consensus between views or provide unreliable recovered views due to the absence of supervision. To address these limitations, we propose a novel Unified and Robust Representation Learning for Incomplete Multi-View Clustering (URRL-IMVC). URRL-IMVC directly learns a unified embedding that is robust to view missing conditions by integrating information from multiple views and neighboring samples. Firstly, to overcome the limitations of cross-view contrastive learning, URRL-IMVC incorporates an attention-based auto-encoder framework to fuse multi-view information and generate unified embeddings. Secondly, URRL-IMVC directly enhances the robustness of the unified embedding against view-missing conditions through KNN imputation and data augmentation techniques, eliminating the need for explicit missing view recovery. Finally, incremental improvements are introduced to further enhance the overall performance, such as the Clustering Module and the customization of the Encoder. We extensively evaluate the proposed URRL-IMVC framework on various benchmark datasets, demonstrating its state-of-the-art performance. Furthermore, comprehensive ablation studies are performed to validate the effectiveness of our design. | [
"['Ge Teng' 'Ting Mao' 'Chen Shen' 'Xiang Tian' 'Xuesong Liu' 'Yaowu Chen'\n 'Jieping Ye']"
] |
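URRL-IMVC applies KNN imputation inside its own embedding pipeline; as a generic, hedged illustration of the underlying operation, scikit-learn's KNNImputer fills a missing view's features from the nearest neighbors computed on whatever coordinates both samples observe. The toy matrix below is illustrative.

```python
import numpy as np
from sklearn.impute import KNNImputer

# Rows are samples; columns concatenate the feature blocks of two views.
# NaNs mark a sample whose second view is missing entirely.
X = np.array([[0.1, 0.9, 1.0, 2.0],
              [0.2, 0.8, 1.1, 2.1],
              [0.1, 1.0, np.nan, np.nan]])

# Each missing value is replaced by the mean of that feature over the
# k nearest neighbors, using a NaN-aware Euclidean distance.
X_filled = KNNImputer(n_neighbors=2).fit_transform(X)
print(X_filled)
```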
null | null | 2407.09124 | null | null | http://arxiv.org/pdf/2407.09124v1 | 2024-07-12T09:38:47Z | 2024-07-12T09:38:47Z | Decentralized multi-agent reinforcement learning algorithm using a
cluster-synchronized laser network | Multi-agent reinforcement learning (MARL) studies crucial principles that are applicable to a variety of fields, including wireless networking and autonomous driving. We propose a photonic-based decision-making algorithm to address one of the most fundamental problems in MARL, called the competitive multi-armed bandit (CMAB) problem. Our numerical simulations demonstrate that chaotic oscillations and cluster synchronization of optically coupled lasers, along with our proposed decentralized coupling adjustment, efficiently balance exploration and exploitation while facilitating cooperative decision-making without explicitly sharing information among agents. Our study demonstrates how decentralized reinforcement learning can be achieved by exploiting complex physical processes controlled by simple algorithms. | [
"['Shun Kotoku' 'Takatomo Mihana' 'André Röhm' 'Ryoichi Horisaki']"
] |
null | null | 2407.09127 | null | null | http://arxiv.org/pdf/2407.09127v1 | 2024-07-12T09:46:26Z | 2024-07-12T09:46:26Z | Robustness of Explainable Artificial Intelligence in Industrial Process
Modelling | eXplainable Artificial Intelligence (XAI) aims at providing understandable explanations of black box models. In this paper, we evaluate current XAI methods by scoring them based on ground truth simulations and sensitivity analysis. To this end, we used an Electric Arc Furnace (EAF) model to better understand the limits and robustness characteristics of XAI methods such as SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), as well as Averaged Local Effects (ALE) or Smooth Gradients (SG) in a highly topical setting. These XAI methods were applied to various types of black-box models and then scored based on their correctness compared to the ground-truth sensitivity of the data-generating processes using a novel scoring evaluation methodology over a range of simulated additive noise. The resulting evaluation shows that the capability of the Machine Learning (ML) models to capture the process accurately is, indeed, coupled with the correctness of the explainability of the underlying data-generating process. We furthermore show the differences between XAI methods in their ability to correctly predict the true sensitivity of the modeled industrial process. | [
"['Benedikt Kantz' 'Clemens Staudinger' 'Christoph Feilmayr'\n 'Johannes Wachlmayr' 'Alexander Haberl' 'Stefan Schuster'\n 'Franz Pernkopf']"
] |
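The paper's scoring methodology is novel and not spelled out in the abstract; as an illustrative stand-in for "scoring correctness against ground-truth sensitivity", one common choice is rank correlation between an XAI method's per-feature attributions and the simulator's known sensitivities.

```python
import numpy as np
from scipy.stats import spearmanr

def xai_correctness(attributions, true_sensitivities):
    """Score one explanation: rank correlation between per-feature
    attributions and the known ground-truth sensitivities of the simulator.
    (Illustrative stand-in, not the paper's scoring methodology.)"""
    rho, _ = spearmanr(attributions, true_sensitivities)
    return rho
```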
null | null | 2407.09136 | null | null | http://arxiv.org/pdf/2407.09136v1 | 2024-07-12T10:11:40Z | 2024-07-12T10:11:40Z | Stepwise Verification and Remediation of Student Reasoning Errors with
Large Language Model Tutors | Large language models (LLMs) present an opportunity to scale high-quality personalized education to all. A promising approach to this end is to build dialog tutoring models that scaffold students' problem-solving. However, even though existing LLMs perform well in solving reasoning questions, they struggle to precisely detect students' errors and tailor their feedback to these errors. Inspired by real-world teaching practice where teachers identify student errors and customize their response based on them, we focus on verifying student solutions and show how grounding to such verification improves the overall quality of tutor response generation. We collect a dataset of 1K stepwise math reasoning chains with the first error step annotated by teachers. We show empirically that finding the mistake in a student solution is challenging for current models. We propose and evaluate several verifiers for detecting these errors. Using both automatic and human evaluation, we show that the student solution verifiers steer the generation model towards highly targeted responses to student errors which are more often correct with fewer hallucinations compared to existing baselines. | [
"['Nico Daheim' 'Jakub Macina' 'Manu Kapur' 'Iryna Gurevych'\n 'Mrinmaya Sachan']"
] |
null | null | 2407.09141 | null | null | http://arxiv.org/pdf/2407.09141v1 | 2024-07-12T10:19:02Z | 2024-07-12T10:19:02Z | Accuracy is Not All You Need | When Large Language Models (LLMs) are compressed using techniques such as quantization, the predominant way to demonstrate the validity of such techniques is by measuring the model's accuracy on various benchmarks. If the accuracies of the baseline model and the compressed model are close, it is assumed that there was negligible degradation in quality. However, even when the accuracy of the baseline and compressed model are similar, we observe the phenomenon of flips, wherein answers change from correct to incorrect and vice versa in proportion. We conduct a detailed study of metrics across multiple compression techniques, models and datasets, demonstrating that the behavior of compressed models as visible to end-users is often significantly different from the baseline model, even when accuracy is similar. We further evaluate compressed models qualitatively and quantitatively using MT-Bench and show that compressed models are significantly worse than baseline models in this free-form generative task. Thus, we argue that compression techniques should also be evaluated using distance metrics. We propose two such metrics, KL-Divergence and flips, and show that they are well correlated. | [
"['Abhinav Dutta' 'Sanjeev Krishnan' 'Nipun Kwatra' 'Ramachandran Ramjee']"
] |
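Both proposed distance metrics are straightforward to compute from per-example model outputs; a minimal sketch (function names and array conventions are ours):

```python
import numpy as np

def flips(baseline_correct, compressed_correct):
    """Fraction of examples whose correctness changes between the baseline
    and compressed model (correct->incorrect or incorrect->correct)."""
    b = np.asarray(baseline_correct, dtype=bool)
    c = np.asarray(compressed_correct, dtype=bool)
    return float(np.mean(b != c))

def mean_kl(p_baseline, p_compressed, eps=1e-12):
    """Average KL(baseline || compressed) over per-example output
    distributions (each row sums to 1)."""
    p = np.clip(np.asarray(p_baseline), eps, 1.0)
    q = np.clip(np.asarray(p_compressed), eps, 1.0)
    return float(np.mean(np.sum(p * np.log(p / q), axis=-1)))
```

Note that flips can be large even when the two accuracies match exactly, which is precisely the failure mode accuracy-only evaluation misses.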
null | null | 2407.09150 | null | null | http://arxiv.org/pdf/2407.09150v1 | 2024-07-12T10:32:53Z | 2024-07-12T10:32:53Z | Evaluating the Adversarial Robustness of Semantic Segmentation: Trying
Harder Pays Off | Machine learning models are vulnerable to tiny adversarial input perturbations optimized to cause a very large output error. To measure this vulnerability, we need reliable methods that can find such adversarial perturbations. For image classification models, evaluation methodologies have emerged that have stood the test of time. However, we argue that in the area of semantic segmentation, a good approximation of the sensitivity to adversarial perturbations requires significantly more effort than what is currently considered satisfactory. To support this claim, we re-evaluate a number of well-known robust segmentation models in an extensive empirical study. We propose new attacks and combine them with the strongest attacks available in the literature. We also analyze the sensitivity of the models in fine detail. The results indicate that most of the state-of-the-art models have a dramatically larger sensitivity to adversarial perturbations than previously reported. We also demonstrate a size-bias: small objects are often more easily attacked, even if the large objects are robust, a phenomenon not revealed by current evaluation metrics. Our results also demonstrate that a diverse set of strong attacks is necessary, because different models are often vulnerable to different attacks. | [
"['Levente Halmosi' 'Bálint Mohos' 'Márk Jelasity']"
] |
null | null | 2407.09157 | null | null | http://arxiv.org/pdf/2407.09157v1 | 2024-07-12T10:44:51Z | 2024-07-12T10:44:51Z | Movie Recommendation with Poster Attention via Multi-modal Transformer
Feature Fusion | Pre-trained models learn general representations from large datasets, which can be fine-tuned for specific tasks to significantly reduce training time. Pre-trained models like generative pretrained transformers (GPT), bidirectional encoder representations from transformers (BERT), and vision transformers (ViT) have become a cornerstone of current research in machine learning. This study proposes a multi-modal movie recommendation system by extracting features from the well-designed poster of each movie and its narrative text description. The system uses the BERT model to extract text-modality information, the ViT model to extract poster/image-modality information, and a Transformer architecture for feature fusion across modalities to predict users' preferences. The integration of pre-trained foundational models with smaller downstream datasets captures multi-modal content features in a more comprehensive manner, thereby providing more accurate recommendations. The efficiency of the proof-of-concept model is verified on the standard MovieLens 100K and 1M benchmark datasets. The prediction accuracy of user ratings is enhanced in comparison to the baseline algorithm, thereby demonstrating the potential of this cross-modal algorithm to be applied for movie or video recommendation. | [
"['Linhan Xia' 'Yicheng Yang' 'Ziou Chen' 'Zheng Yang' 'Shengxin Zhu']"
] |
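The two-tower feature-extraction step above maps directly onto the Hugging Face transformers API; a hedged sketch follows. The checkpoints, file name, and the final concatenation are illustrative, the latter standing in for the paper's Transformer fusion block.

```python
import torch
from transformers import AutoTokenizer, AutoModel, ViTImageProcessor, ViTModel
from PIL import Image

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
proc = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
vit = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")

description = "A retired hitman is pulled back into the underworld."
poster = Image.open("poster.jpg")  # illustrative path

with torch.no_grad():
    text_feat = bert(**tok(description, return_tensors="pt",
                           truncation=True)).last_hidden_state[:, 0]  # [CLS]
    img_feat = vit(**proc(images=poster,
                          return_tensors="pt")).last_hidden_state[:, 0]

# Concatenation stands in for the paper's Transformer fusion module;
# the fused vector would feed a rating-prediction head.
fused = torch.cat([text_feat, img_feat], dim=-1)  # shape (1, 768 + 768)
```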
null | null | 2407.09162 | null | null | http://arxiv.org/pdf/2407.09162v1 | 2024-07-12T10:58:01Z | 2024-07-12T10:58:01Z | Exploring State Space and Reasoning by Elimination in Tsetlin Machine | The Tsetlin Machine (TM) has gained significant attention in Machine Learning (ML). By employing logical fundamentals, it facilitates pattern learning and representation, offering an alternative approach for developing comprehensible Artificial Intelligence (AI) with a specific focus on pattern classification in the form of conjunctive clauses. In the domain of Natural Language Processing (NLP), TM is utilised to construct word embedding and describe target words using clauses. To enhance the descriptive capacity of these clauses, we study the concept of Reasoning by Elimination (RbE) in clauses' formulation, which involves incorporating feature negations to provide a more comprehensive representation. In more detail, this paper employs the Tsetlin Machine Auto-Encoder (TM-AE) architecture to generate dense word vectors, aiming at capturing contextual information by extracting feature-dense vectors for a given vocabulary. Thereafter, the principle of RbE is explored to improve descriptivity and optimise the performance of the TM. Specifically, the specificity parameter s and the voting margin parameter T are leveraged to regulate feature distribution in the state space, resulting in a dense representation of information for each clause. In addition, we investigate the state spaces of TM-AE, especially for the forgotten/excluded features. Empirical investigations on artificially generated data, the IMDB dataset, and the 20 Newsgroups dataset showcase the robustness of the TM, with accuracy reaching 90.62% for the IMDB. | [
"['Ahmed K. Kadhim' 'Ole-Christoffer Granmo' 'Lei Jiao' 'Rishad Shafik']"
] |
null | null | 2407.09165 | null | null | http://arxiv.org/pdf/2407.09165v1 | 2024-07-12T10:59:44Z | 2024-07-12T10:59:44Z | Robust Yet Efficient Conformal Prediction Sets | Conformal prediction (CP) can convert any model's output into prediction sets guaranteed to include the true label with any user-specified probability. However, like the model itself, CP is vulnerable to adversarial test examples (evasion) and perturbed calibration data (poisoning). We derive provably robust sets by bounding the worst-case change in conformity scores. Our tighter bounds lead to more efficient sets. We cover both continuous and discrete (sparse) data, and our guarantees work for both evasion and poisoning attacks (on both features and labels). | [
"['Soroush H. Zargarbashi' 'Mohammad Sadegh Akhondzadeh'\n 'Aleksandar Bojchevski']"
] |
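For reference, the vanilla (non-robust) procedure this work hardens is standard split conformal prediction; a minimal sketch under the usual exchangeability assumption, with the common "1 minus true-class probability" conformity score:

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction: returns boolean set membership such that
    sets cover the true label with probability >= 1 - alpha on
    exchangeable data. Score: 1 - softmax probability of the true class."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(scores, min(q_level, 1.0), method="higher")
    # Include every class whose score falls at or below the threshold.
    return 1.0 - test_probs <= qhat
```

The robust variant bounds how much an adversary can move the scores and inflates `qhat` accordingly; the tighter that bound, the smaller (more efficient) the resulting sets.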
null | null | 2407.09167 | null | null | http://arxiv.org/pdf/2407.09167v1 | 2024-07-12T11:01:28Z | 2024-07-12T11:01:28Z | SE(3)-bi-equivariant Transformers for Point Cloud Assembly | Given a pair of point clouds, the goal of assembly is to recover a rigid transformation that aligns one point cloud to the other. This task is challenging because the point clouds may be non-overlapped, and they may have arbitrary initial positions. To address these difficulties, we propose a method, called SE(3)-bi-equivariant transformer (BITR), based on the SE(3)-bi-equivariance prior of the task: it guarantees that when the inputs are rigidly perturbed, the output will transform accordingly. Due to its equivariance property, BITR can not only handle non-overlapped point clouds, but also guarantee robustness against initial positions. Specifically, BITR first extracts features of the inputs using a novel $SE(3) \times SE(3)$-transformer, and then projects the learned feature to the group SE(3) as the output. Moreover, we theoretically show that swap and scale equivariances can be incorporated into BITR, so it further guarantees stable performance under scaling and swapping of the inputs. We experimentally show the effectiveness of BITR in practical tasks. | [
"['Ziming Wang' 'Rebecka Jörnsten']"
] |
null | null | 2407.09173 | null | null | http://arxiv.org/pdf/2407.09173v1 | 2024-07-12T11:12:49Z | 2024-07-12T11:12:49Z | Conformal Inductive Graph Neural Networks | Conformal prediction (CP) transforms any model's output into prediction sets guaranteed to include (cover) the true label. CP requires exchangeability, a relaxation of the i.i.d. assumption, to obtain a valid distribution-free coverage guarantee. This makes it directly applicable to transductive node-classification. However, conventional CP cannot be applied in inductive settings due to the implicit shift in the (calibration) scores caused by message passing with the new nodes. We fix this issue for both cases of node and edge-exchangeable graphs, recovering the standard coverage guarantee without sacrificing statistical efficiency. We further prove that the guarantee holds independently of the prediction time, e.g. upon arrival of a new node/edge or at any subsequent moment. | [
"['Soroush H. Zargarbashi' 'Aleksandar Bojchevski']"
] |
null | null | 2407.09186 | null | null | http://arxiv.org/pdf/2407.09186v1 | 2024-07-12T11:38:41Z | 2024-07-12T11:38:41Z | Variational Inference via Smoothed Particle Hydrodynamics | A new variational inference method, SPH-ParVI, based on smoothed particle hydrodynamics (SPH), is proposed for sampling partially known densities (e.g. up to a constant) or sampling using gradients. SPH-ParVI simulates the flow of a fluid under external effects driven by the target density; transient or steady state of the fluid approximates the target density. The continuum fluid is modelled as an interacting particle system (IPS) via SPH, where each particle carries smoothed properties, interacts and evolves as per the Navier-Stokes equations. This mesh-free, Lagrangian simulation method offers fast, flexible, scalable and deterministic sampling and inference for a class of probabilistic models such as those encountered in Bayesian inference and generative modelling. | [
"['Yongchao Huang']"
] |
null | null | 2407.09212 | null | null | http://arxiv.org/pdf/2407.09212v1 | 2024-07-12T12:20:39Z | 2024-07-12T12:20:39Z | Generating SROI^{-} Ontologies via Knowledge Graph Query Embedding
Learning | Query embedding approaches answer complex logical queries over incomplete knowledge graphs (KGs) by computing and operating on low-dimensional vector representations of entities, relations, and queries. However, current query embedding models heavily rely on excessively parameterized neural networks and cannot explain the knowledge learned from the graph. We propose a novel query embedding method, AConE, which explains the knowledge learned from the graph in the form of SROI^{-} description logic axioms while being more parameter-efficient than most existing approaches. AConE associates queries to a SROI^{-} description logic concept. Every SROI^{-} concept is embedded as a cone in complex vector space, and each SROI^{-} relation is embedded as a transformation that rotates and scales cones. We show theoretically that AConE can learn SROI^{-} axioms, and defines an algebra whose operations correspond one to one to SROI^{-} description logic concept constructs. Our empirical study on multiple query datasets shows that AConE achieves superior results over previous baselines with fewer parameters. Notably on the WN18RR dataset, AConE achieves significant improvement over baseline models. We provide comprehensive analyses showing that the capability to represent axioms positively impacts the results of query answering. | [
"['Yunjie He' 'Daniel Hernandez' 'Mojtaba Nayyeri' 'Bo Xiong'\n 'Yuqicheng Zhu' 'Evgeny Kharlamov' 'Steffen Staab']"
] |
null | null | 2407.09216 | null | null | http://arxiv.org/pdf/2407.09216v1 | 2024-07-12T12:28:08Z | 2024-07-12T12:28:08Z | A Fair Ranking and New Model for Panoptic Scene Graph Generation | In panoptic scene graph generation (PSGG), models retrieve interactions between objects in an image which are grounded by panoptic segmentation masks. Previous evaluations on panoptic scene graphs have been subject to an erroneous evaluation protocol where multiple masks for the same object can lead to multiple relation distributions per mask-mask pair. This can be exploited to increase the final score. We correct this flaw and provide a fair ranking over a wide range of existing PSGG models. The observed scores for existing methods increase by up to 7.4 mR@50 for all two-stage methods, while dropping by up to 19.3 mR@50 for all one-stage methods, highlighting the importance of a correct evaluation. Contrary to recent publications, we show that existing two-stage methods are competitive to one-stage methods. Building on this, we introduce the Decoupled SceneFormer (DSFormer), a novel two-stage model that outperforms all existing scene graph models by a large margin of +11 mR@50 and +10 mNgR@50 on the corrected evaluation, thus setting a new SOTA. As a core design principle, DSFormer encodes subject and object masks directly into feature space. | [
"['Julian Lorenz' 'Alexander Pest' 'Daniel Kienzle' 'Katja Ludwig'\n 'Rainer Lienhart']"
] |
null | null | 2407.09250 | null | null | http://arxiv.org/pdf/2407.09250v1 | 2024-07-12T13:23:54Z | 2024-07-12T13:23:54Z | FedsLLM: Federated Split Learning for Large Language Models over
Communication Networks | Addressing the challenges of deploying large language models in wireless communication networks, this paper combines low-rank adaptation technology (LoRA) with the splitfed learning framework to propose the federated split learning for large language models (FedsLLM) framework. The method introduced in this paper utilizes LoRA technology to reduce processing loads by dividing the network into client subnetworks and server subnetworks. It leverages a federated server to aggregate and update client models. As the training data are transmitted through a wireless network between clients and both main and federated servers, the training delay is determined by the learning accuracy and the allocation of communication bandwidth. This paper models the minimization of the training delay by integrating computation and communication optimization, simplifying the optimization problem into a convex problem to find the optimal solution. Additionally, it presents a lemma that describes the precise solutions to this problem. Simulation results demonstrate that the proposed optimization algorithm reduces delays by an average of 47.63% compared to unoptimized scenarios. | [
"['Kai Zhao' 'Zhaohui Yang' 'Chongwen Huang' 'Xiaoming Chen'\n 'Zhaoyang Zhang']"
] |
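The LoRA parameterization that lets FedsLLM shrink the client-side workload is compact enough to sketch directly: the pretrained weight stays frozen and only a low-rank update is trained (and communicated). A minimal, assumption-laden sketch, with rank and scaling values chosen for illustration:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W plus a trainable low-rank update (alpha/r) * B A.
    Only A and B need to be trained and exchanged with the servers."""
    def __init__(self, base: nn.Linear, r=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                     # freeze W and bias
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # starts at 0
        self.scale = alpha / r

    def forward(self, x):
        # B initialized to zero, so training starts from the base model.
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T
```

In the split setting, the network's layers are additionally partitioned between client and server subnetworks, with activations crossing the wireless link at the cut point.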
null | null | 2407.09251 | null | null | http://arxiv.org/pdf/2407.09251v1 | 2024-07-12T13:30:00Z | 2024-07-12T13:30:00Z | Deep Adversarial Defense Against Multilevel-Lp Attacks | Deep learning models have shown considerable vulnerability to adversarial attacks, particularly as attacker strategies become more sophisticated. While traditional adversarial training (AT) techniques offer some resilience, they often focus on defending against a single type of attack, e.g., the $\ell_\infty$-norm attack, which can fail for other types. This paper introduces a computationally efficient multilevel $\ell_p$ defense, called the Efficient Robust Mode Connectivity (EMRC) method, which aims to enhance a deep learning model's resilience against multiple $\ell_p$-norm attacks. Similar to analytical continuation approaches used in continuous optimization, the method blends two $p$-specific adversarially optimal models, the $\ell_1$- and $\ell_\infty$-norm AT solutions, to provide good adversarial robustness for a range of $p$. We present experiments demonstrating that our approach performs better on various attacks as compared to AT-$\ell_\infty$, E-AT, and MSD, for datasets/architectures including: CIFAR-10, CIFAR-100 / PreResNet110, WideResNet, ViT-Base. | [
"['Ren Wang' 'Yuxuan Li' 'Alfred Hero']"
] |
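The $\ell_p$-norm AT solutions that EMRC blends are themselves produced with standard attacks; as background, here is a minimal PGD $\ell_\infty$ attack of the kind used inside adversarial training (this is the generic attack, not the EMRC method, and the step sizes are conventional CIFAR values):

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent under an l_inf budget: take signed
    gradient steps on the loss, then project back into the eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # project
            x_adv = x_adv.clamp(0, 1)                              # valid pixels
    return x_adv.detach()
```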
null | null | 2407.09271 | null | null | http://arxiv.org/pdf/2407.09271v1 | 2024-07-12T13:57:49Z | 2024-07-12T13:57:49Z | iNeMo: Incremental Neural Mesh Models for Robust Class-Incremental
Learning | Unlike human learning, it is still common practice today for vision tasks to train deep learning models only once, on fixed datasets. A variety of approaches have recently addressed handling continual data streams. However, extending these methods to manage out-of-distribution (OOD) scenarios has not been effectively investigated. On the other hand, it has recently been shown that non-continual neural mesh models exhibit strong performance in generalizing to such OOD scenarios. To leverage this decisive property in a continual learning setting, we propose incremental neural mesh models that can be extended with new meshes over time. In addition, we present a latent space initialization strategy that enables us to allocate feature space for future unseen classes in advance and a positional regularization term that forces the features of the different classes to consistently stay in respective latent space regions. We demonstrate the effectiveness of our method through extensive experiments on the Pascal3D and ObjectNet3D datasets and show that our approach outperforms the baselines for classification by $2-6\%$ in the in-domain setting and by $6-50\%$ in the OOD setting. Our work also presents the first incremental learning approach for pose estimation. Our code and model can be found at https://github.com/Fischer-Tom/iNeMo. | [
"['Tom Fischer' 'Yaoyao Liu' 'Artur Jesslen' 'Noor Ahmed' 'Prakhar Kaushik'\n 'Angtian Wang' 'Alan Yuille' 'Adam Kortylewski' 'Eddy Ilg']"
] |
null | null | 2407.09274 | null | null | http://arxiv.org/pdf/2407.09274v1 | 2024-07-12T14:03:02Z | 2024-07-12T14:03:02Z | Unifying Sequences, Structures, and Descriptions for Any-to-Any Protein
Generation with the Large Multimodal Model HelixProtX | Proteins are fundamental components of biological systems and can be represented through various modalities, including sequences, structures, and textual descriptions. Despite the advances in deep learning and scientific large language models (LLMs) for protein research, current methodologies predominantly focus on limited specialized tasks -- often predicting one protein modality from another. These approaches restrict the understanding and generation of multimodal protein data. In contrast, large multimodal models have demonstrated potential capabilities in generating any-to-any content like text, images, and videos, thus enriching user interactions across various domains. Integrating these multimodal model technologies into protein research offers significant promise by potentially transforming how proteins are studied. To this end, we introduce HelixProtX, a system built upon the large multimodal model, aiming to offer a comprehensive solution to protein research by supporting any-to-any protein modality generation. Unlike existing methods, it allows for the transformation of any input protein modality into any desired protein modality. The experimental results affirm the advanced capabilities of HelixProtX, not only in generating functional descriptions from amino acid sequences but also in executing critical tasks such as designing protein sequences and structures from textual descriptions. Preliminary findings indicate that HelixProtX consistently achieves superior accuracy across a range of protein-related tasks, outperforming existing state-of-the-art models. By integrating multimodal large models into protein research, HelixProtX opens new avenues for understanding protein biology, thereby promising to accelerate scientific discovery. | [
"['Zhiyuan Chen' 'Tianhao Chen' 'Chenggang Xie' 'Yang Xue' 'Xiaonan Zhang'\n 'Jingbo Zhou' 'Xiaomin Fang']"
] |