categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence)
---|---|---|---|---|---|---|---|---|---|---
null | null | 2407.05261 | null | null | http://arxiv.org/pdf/2407.05261v1 | 2024-07-07T05:13:51Z | 2024-07-07T05:13:51Z | Disciplined Geodesically Convex Programming | Convex programming plays a fundamental role in machine learning, data science, and engineering. Testing convexity structure in nonlinear programs relies on verifying the convexity of objectives and constraints. \citet{grant2006disciplined} introduced a framework, Disciplined Convex Programming (DCP), for automating this verification task for a wide range of convex functions that can be decomposed into basic convex functions (atoms) using convexity-preserving compositions and transformations (rules). However, the restriction to Euclidean convexity concepts can limit the applicability of the framework. For instance, many notable instances of statistical estimators and matrix-valued (sub)routines in machine learning applications are Euclidean non-convex, but exhibit geodesic convexity through a more general Riemannian lens. In this work, we extend disciplined programming to this setting by introducing Disciplined Geodesically Convex Programming (DGCP). We determine convexity-preserving compositions and transformations for geodesically convex functions on general Cartan-Hadamard manifolds, as well as for the special case of symmetric positive definite matrices, a common setting in matrix-valued optimization. For the latter, we also define a basic set of atoms. Our paper is accompanied by a Julia package SymbolicAnalysis.jl, which provides functionality for testing and certifying DGCP-compliant expressions. Our library interfaces with manifold optimization software, which allows for directly solving verified geodesically convex programs. | [
"['Andrew Cheng' 'Vaibhav Dixit' 'Melanie Weber']"
] |
null | null | 2407.05262 | null | null | http://arxiv.org/pdf/2407.05262v1 | 2024-07-07T05:17:17Z | 2024-07-07T05:17:17Z | FastSpiker: Enabling Fast Training for Spiking Neural Networks on
Event-based Data through Learning Rate Enhancements for Autonomous Embedded
Systems | Autonomous embedded systems (e.g., robots) typically necessitate intelligent computation with low power/energy processing for completing their tasks. Such requirements can be fulfilled by embodied neuromorphic intelligence with spiking neural networks (SNNs) because of their high learning quality (e.g., accuracy) and sparse computation. Here, the employment of event-based data is preferred to ensure seamless connectivity between input and processing parts. However, state-of-the-art SNNs still face a long training time to achieve high accuracy, thereby incurring high energy consumption and producing a high rate of carbon emission. Toward this end, we propose FastSpiker, a novel methodology that enables fast SNN training on event-based data through learning rate enhancements targeting autonomous embedded systems. In FastSpiker, we first investigate the impact of different learning rate policies and their values, then select the ones that quickly offer high accuracy. Afterward, we explore different settings for the selected learning rate policies to find the appropriate policies through a statistics-based decision. Experimental results show that our FastSpiker offers up to 10.5x faster training time and up to 88.39% lower carbon emission to achieve higher or comparable accuracy to the state-of-the-art on the event-based automotive dataset (i.e., NCARS). In this manner, our FastSpiker methodology paves the way for green and sustainable computing in realizing embodied neuromorphic intelligence for autonomous embedded systems. | [
"['Iqra Bano' 'Rachmad Vidya Wicaksana Putra' 'Alberto Marchisio'\n 'Muhammad Shafique']"
] |
null | null | 2407.05268 | null | null | http://arxiv.org/pdf/2407.05268v1 | 2024-07-07T05:46:01Z | 2024-07-07T05:46:01Z | Federated Knowledge Transfer Fine-tuning Large Server Model with
Resource-Constrained IoT Clients | The training of large models, involving fine-tuning, faces the scarcity of high-quality data. Compared to the solutions based on centralized data centers, updating large models in the Internet of Things (IoT) faces challenges in coordinating knowledge from distributed clients by using their private and heterogeneous data. To tackle such a challenge, we propose KOALA (Federated Knowledge Transfer Fine-tuning Large Server Model with Resource-Constrained IoT Clients) to impel the training of large models in IoT. Since the resources obtained by IoT clients are limited and restricted, it is infeasible to locally execute large models and also update them in a privacy-preserving manner. Therefore, we leverage federated learning and knowledge distillation to update large models through collaboration with their small models, which can run locally at IoT clients to process their private data separately and enable large-small model knowledge transfer through iterative learning between the server and clients. Moreover, to support clients with similar or different computing capacities, KOALA is designed with two kinds of large-small model joint learning modes, namely to be homogeneous or heterogeneous. Experimental results demonstrate that compared to the conventional approach, our method can not only achieve similar training performance but also significantly reduce the need for local storage and computing power resources. | [
"['Shaoyuan Chen' 'Linlin You' 'Rui Liu' 'Shuo Yu' 'Ahmed M. Abdelmoniem']"
] |
null | null | 2407.05285 | null | null | http://arxiv.org/pdf/2407.05285v1 | 2024-07-07T07:06:49Z | 2024-07-07T07:06:49Z | Gradient Diffusion: A Perturbation-Resilient Gradient Leakage Attack | Recent years have witnessed the vulnerability of Federated Learning (FL) against gradient leakage attacks, where the private training data can be recovered from the exchanged gradients, making gradient protection a critical issue for the FL training process. Existing solutions often resort to perturbation-based mechanisms, such as differential privacy, where each participating client injects a specific amount of noise into local gradients before aggregating to the server, and the global distribution variation finally conceals the gradient privacy. However, perturbation is not always the panacea for gradient protection since the robustness heavily relies on the injected noise. This intuition raises an interesting question: \textit{is it possible to deactivate existing protection mechanisms by removing the perturbation inside the gradients?} In this paper, we present the answer: \textit{yes} and propose the Perturbation-resilient Gradient Leakage Attack (PGLA), the first attempt to recover the perturbed gradients, without additional access to the original model structure or third-party data. Specifically, we leverage the inherent diffusion property of gradient perturbation protection and construct a novel diffusion-based denoising model to implement PGLA. Our insight is that capturing the disturbance level of perturbation during the diffusion reverse process can release the gradient denoising capability, which promotes the diffusion model to generate approximate gradients as the original clean version through adaptive sampling steps. Extensive experiments demonstrate that PGLA effectively recovers the protected gradients and exposes the FL training process to the threat of gradient leakage, achieving the best quality in gradient denoising and data recovery compared to existing models. We hope to arouse public attention on PGLA and its defense. | [
"['Xuan Liu' 'Siqi Cai' 'Qihua Zhou' 'Song Guo' 'Ruibin Li' 'Kaiwei Lin']"
] |
null | null | 2407.05286 | null | null | http://arxiv.org/pdf/2407.05286v1 | 2024-07-07T07:07:04Z | 2024-07-07T07:07:04Z | Stability and Generalization for Stochastic Recursive Momentum-based
Algorithms for (Strongly-)Convex One to $K$-Level Stochastic Optimizations | STOchastic Recursive Momentum (STORM)-based algorithms have been widely developed to solve one to $K$-level ($K \geq 3$) stochastic optimization problems. Specifically, they use estimators to mitigate the biased gradient issue and achieve near-optimal convergence results. However, there is relatively little work on understanding their generalization performance, particularly evident during the transition from one to $K$-level optimization contexts. This paper provides a comprehensive generalization analysis of three representative STORM-based algorithms: STORM, COVER, and SVMR, for one, two, and $K$-level stochastic optimizations under both convex and strongly convex settings based on algorithmic stability. Firstly, we define stability for $K$-level optimizations and link it to generalization. Then, we detail the stability results for three prominent STORM-based algorithms. Finally, we derive their excess risk bounds by balancing stability results with optimization errors. Our theoretical results provide strong evidence to complete STORM-based algorithms: (1) Each estimator may decrease their stability due to variance with its estimation target. (2) Every additional level might escalate the generalization error, influenced by the stability and the variance between its cumulative stochastic gradient and the true gradient. (3) Increasing the batch size for the initial computation of estimators presents a favorable trade-off, enhancing the generalization performance. | [
"['Xiaokang Pan' 'Xingyu Li' 'Jin Liu' 'Tao Sun' 'Kai Sun' 'Lixing Chen'\n 'Zhe Qu']"
] |
null | null | 2407.05287 | null | null | http://arxiv.org/pdf/2407.05287v1 | 2024-07-07T07:07:48Z | 2024-07-07T07:07:48Z | Model-agnostic meta-learners for estimating heterogeneous treatment
effects over time | Estimating heterogeneous treatment effects (HTEs) over time is crucial in many disciplines such as personalized medicine. For example, electronic health records are commonly collected over several time periods and then used to personalize treatment decisions. Existing works for this task have mostly focused on model-based learners (i.e., learners that adapt specific machine-learning models). In contrast, model-agnostic learners -- so-called meta-learners -- are largely unexplored. In our paper, we propose several meta-learners that are model-agnostic and thus can be used in combination with arbitrary machine learning models (e.g., transformers) to estimate HTEs over time. Here, our focus is on learners that can be obtained via weighted pseudo-outcome regressions, which allows for efficient estimation by targeting the treatment effect directly. We then provide a comprehensive theoretical analysis that characterizes the different learners and that allows us to offer insights into when specific learners are preferable. Finally, we confirm our theoretical insights through numerical experiments. In sum, while meta-learners are already state-of-the-art for the static setting, we are the first to propose a comprehensive set of meta-learners for estimating HTEs in the time-varying setting. | [
"['Dennis Frauen' 'Konstantin Hess' 'Stefan Feuerriegel']"
] |
null | null | 2407.05302 | null | null | http://arxiv.org/pdf/2407.05302v1 | 2024-07-07T08:37:43Z | 2024-07-07T08:37:43Z | Mamba Hawkes Process | Irregular and asynchronous event sequences are prevalent in many domains, such as social media, finance, and healthcare. Traditional temporal point processes (TPPs), like Hawkes processes, often struggle to model mutual inhibition and nonlinearity effectively. While recent neural network models, including RNNs and Transformers, address some of these issues, they still face challenges with long-term dependencies and computational efficiency. In this paper, we introduce the Mamba Hawkes Process (MHP), which leverages the Mamba state space architecture to capture long-range dependencies and dynamic event interactions. Our results show that MHP outperforms existing models across various datasets. Additionally, we propose the Mamba Hawkes Process Extension (MHP-E), which combines Mamba and Transformer models to enhance predictive capabilities. We present the novel application of the Mamba architecture to Hawkes processes, a flexible and extensible model structure, and a theoretical analysis of the synergy between state space models and Hawkes processes. Experimental results demonstrate the superior performance of both MHP and MHP-E, advancing the field of temporal point process modeling. | [
"['Anningzhe Gao' 'Shan Dai' 'Yan Hu']"
] |
null | null | 2407.05315 | null | null | http://arxiv.org/abs/2407.05315v1 | 2024-07-07T10:08:34Z | 2024-07-07T10:08:34Z | Topological Persistence Guided Knowledge Distillation for Wearable
Sensor Data | Deep learning methods have achieved a lot of success in various applications involving converting wearable sensor data to actionable health insights. A common application area is activity recognition, where deep-learning methods still suffer from limitations such as sensitivity to signal quality, sensor characteristic variations, and variability between subjects. To mitigate these issues, robust features obtained by topological data analysis (TDA) have been suggested as a potential solution. However, there are two significant obstacles to using topological features in deep learning: (1) large computational load to extract topological features using TDA, and (2) different signal representations obtained from deep learning and TDA which makes fusion difficult. In this paper, to enable integration of the strengths of topological methods in deep learning for time-series data, we propose to use two teacher networks, one trained on the raw time-series data, and another trained on persistence images generated by TDA methods. The distilled student model utilizes only the raw time-series data at test-time. This approach addresses both issues. The use of knowledge distillation (KD) with multiple teachers utilizes complementary information, and results in a compact model with strong supervisory features and an integrated richer representation. To assimilate desirable information from different modalities, we design new constraints, including orthogonality imposed on feature correlation maps for improving feature expressiveness and allowing the student to easily learn from the teacher. Also, we apply an annealing strategy in KD for fast saturation and better accommodation from different features, while the knowledge gap between the teachers and student is reduced. Finally, a robust student model is distilled, which uses only the time-series data as an input, while implicitly preserving topological features. | [
"['Eun Som Jeon' 'Hongjun Choi' 'Ankita Shukla' 'Yuan Wang' 'Hyunglae Lee'\n 'Matthew P. Buman' 'Pavan Turaga']"
] |
null | null | 2407.05330 | null | null | http://arxiv.org/pdf/2407.05330v1 | 2024-07-07T11:09:38Z | 2024-07-07T11:09:38Z | Fast Proxy Experiment Design for Causal Effect Identification | Identifying causal effects is a key problem of interest across many disciplines. The two long-standing approaches to estimate causal effects are observational and experimental (randomized) studies. Observational studies can suffer from unmeasured confounding, which may render the causal effects unidentifiable. On the other hand, direct experiments on the target variable may be too costly or even infeasible to conduct. A middle ground between these two approaches is to estimate the causal effect of interest through proxy experiments, which are conducted on variables with a lower cost to intervene on compared to the main target. Akbari et al. [2022] studied this setting and demonstrated that the problem of designing the optimal (minimum-cost) experiment for causal effect identification is NP-complete and provided a naive algorithm that may require solving exponentially many NP-hard problems as a sub-routine in the worst case. In this work, we provide a few reformulations of the problem that allow for designing significantly more efficient algorithms to solve it as witnessed by our extensive simulations. Additionally, we study the closely related problem of designing experiments that enable us to identify a given effect through valid adjustment sets. | [
"['Sepehr Elahi' 'Sina Akbari' 'Jalal Etesami' 'Negar Kiyavash'\n 'Patrick Thiran']"
] |
null | null | 2407.05340 | null | null | http://arxiv.org/pdf/2407.05340v1 | 2024-07-07T12:13:03Z | 2024-07-07T12:13:03Z | Interpreting the Residual Stream of ResNet18 | A mechanistic understanding of the computations learned by deep neural networks (DNNs) is far from complete. In the domain of visual object recognition, prior research has illuminated inner workings of InceptionV1, but DNNs with different architectures have remained largely unexplored. This work investigates ResNet18 with a particular focus on its residual stream, an architectural mechanism which InceptionV1 lacks. We observe that for a given block, channel features of the stream are updated along a spectrum: either the input feature skips to the output, the block feature overwrites the output, or the output is some mixture between the input and block features. Furthermore, we show that many residual stream channels compute scale invariant representations through a mixture of the input's smaller-scale feature with the block's larger-scale feature. This not only mounts evidence for the universality of scale equivariance, but also presents how the residual stream further implements scale invariance. Collectively, our results begin an interpretation of the residual stream in visual object recognition, finding it to be a flexible feature manager and a medium to build scale invariant representations. | [
"['André Longon']"
] |
null | null | 2407.05364 | null | null | http://arxiv.org/pdf/2407.05364v2 | 2024-07-15T08:37:49Z | 2024-07-07T13:32:03Z | PTaRL: Prototype-based Tabular Representation Learning via Space
Calibration | Tabular data have been playing a vital role in diverse real-world fields, such as healthcare, engineering, finance, etc. With the recent success of deep learning, many tabular machine learning (ML) methods based on deep networks (e.g., Transformer, ResNet) have achieved competitive performance on tabular benchmarks. However, existing deep tabular ML methods suffer from representation entanglement and localization, which largely hinders their prediction performance and leads to performance inconsistency on tabular tasks. To overcome these problems, we explore a novel direction of applying prototype learning for tabular ML and propose a prototype-based tabular representation learning framework, PTaRL, for tabular prediction tasks. The core idea of PTaRL is to construct prototype-based projection space (P-Space) and learn the disentangled representation around global data prototypes. Specifically, PTaRL mainly involves two stages: (i) Prototype Generation, that constructs global prototypes as the basis vectors of P-Space for representation, and (ii) Prototype Projection, that projects the data samples into P-Space and keeps the core global data information via Optimal Transport. Then, to further acquire the disentangled representations, we constrain PTaRL with two strategies: (i) to diversify the coordinates towards global prototypes of different representations within P-Space, we bring up a diversification constraint for representation calibration; (ii) to avoid prototype entanglement in P-Space, we introduce a matrix orthogonalization constraint to ensure the independence of global prototypes. Finally, we conduct extensive experiments in PTaRL coupled with state-of-the-art deep tabular ML models on various tabular benchmarks and the results have shown our consistent superiority. | [
"['Hangting Ye' 'Wei Fan' 'Xiaozhuang Song' 'Shun Zheng' 'He Zhao'\n 'Dandan Guo' 'Yi Chang']"
] |
null | null | 2407.05370 | null | null | http://arxiv.org/pdf/2407.05370v1 | 2024-07-07T13:46:22Z | 2024-07-07T13:46:22Z | Learning Label Refinement and Threshold Adjustment for Imbalanced
Semi-Supervised Learning | Semi-supervised learning (SSL) algorithms struggle to perform well when exposed to imbalanced training data. In this scenario, the generated pseudo-labels can exhibit a bias towards the majority class, and models that employ these pseudo-labels can further amplify this bias. Here we investigate pseudo-labeling strategies for imbalanced SSL including pseudo-label refinement and threshold adjustment, through the lens of statistical analysis. We find that existing SSL algorithms which generate pseudo-labels using heuristic strategies or uncalibrated model confidence are unreliable when imbalanced class distributions bias pseudo-labels. To address this, we introduce SEmi-supervised learning with pseudo-label optimization based on VALidation data (SEVAL) to enhance the quality of pseudo-labelling for imbalanced SSL. We propose to learn refinement and thresholding parameters from a partition of the training dataset in a class-balanced way. SEVAL adapts to specific tasks with improved pseudo-labels accuracy and ensures pseudo-labels correctness on a per-class basis. Our experiments show that SEVAL surpasses state-of-the-art SSL methods, delivering more accurate and effective pseudo-labels in various imbalanced SSL situations. SEVAL, with its simplicity and flexibility, can enhance various SSL techniques effectively. The code is publicly available~\footnote{\url{https://github.com/ZerojumpLine/SEVAL}}. | [
"['Zeju Li' 'Ying-Qiu Zheng' 'Chen Chen' 'Saad Jbabdi']"
] |
null | null | 2407.05375 | null | null | http://arxiv.org/pdf/2407.05375v1 | 2024-07-07T13:57:50Z | 2024-07-07T13:57:50Z | Online Drift Detection with Maximum Concept Discrepancy | Continuous learning from an immense volume of data streams becomes exceptionally critical in the internet era. However, data streams often do not conform to the same distribution over time, leading to a phenomenon called concept drift. Since a fixed static model is unreliable for inferring concept-drifted data streams, establishing an adaptive mechanism for detecting concept drift is crucial. Current methods for concept drift detection primarily assume that the labels or error rates of downstream models are given and/or underlying statistical properties exist in data streams. These approaches, however, struggle to address high-dimensional data streams with intricate irregular distribution shifts, which are more prevalent in real-world scenarios. In this paper, we propose MCD-DD, a novel concept drift detection method based on maximum concept discrepancy, inspired by the maximum mean discrepancy. Our method can adaptively identify varying forms of concept drift by contrastive learning of concept embeddings without relying on labels or statistical properties. With thorough experiments under synthetic and real-world scenarios, we demonstrate that the proposed method outperforms existing baselines in identifying concept drifts and enables qualitative analysis with high explainability. | [
"['Ke Wan' 'Yi Liang' 'Susik Yoon']"
] |
null | null | 2407.05379 | null | null | http://arxiv.org/pdf/2407.05379v1 | 2024-07-07T14:04:57Z | 2024-07-07T14:04:57Z | AiGAS-dEVL: An Adaptive Incremental Neural Gas Model for Drifting Data
Streams under Extreme Verification Latency | The ever-growing speed at which data are generated nowadays, together with the substantial cost of labeling processes cause Machine Learning models to face scenarios in which data are partially labeled. The extreme case where such a supervision is indefinitely unavailable is referred to as extreme verification latency. On the other hand, in streaming setups data flows are affected by exogenous factors that yield non-stationarities in the patterns (concept drift), compelling models learned incrementally from the data streams to adapt their modeled knowledge to the concepts within the stream. In this work we address the casuistry in which these two conditions occur together, by which adaptation mechanisms to accommodate drifts within the stream are challenged by the lack of supervision, requiring further mechanisms to track the evolution of concepts in the absence of verification. To this end we propose a novel approach, AiGAS-dEVL (Adaptive Incremental neural GAS model for drifting Streams under Extreme Verification Latency), which relies on growing neural gas to characterize the distributions of all concepts detected within the stream over time. Our approach exposes that the online analysis of the behavior of these prototypical points over time facilitates the definition of the evolution of concepts in the feature space, the detection of changes in their behavior, and the design of adaptation policies to mitigate the effect of such changes in the model. We assess the performance of AiGAS-dEVL over several synthetic datasets, comparing it to that of state-of-the-art approaches proposed in the recent past to tackle this stream learning setup. Our results reveal that AiGAS-dEVL performs competitively with respect to the rest of baselines, exhibiting a superior adaptability over several datasets in the benchmark while ensuring a simple and interpretable instance-based adaptation strategy. | [
"['Maria Arostegi' 'Miren Nekane Bilbao' 'Jesus L. Lobo' 'Javier Del Ser']"
] |
null | null | 2407.05385 | null | null | http://arxiv.org/pdf/2407.05385v1 | 2024-07-07T14:21:04Z | 2024-07-07T14:21:04Z | Harmony in Diversity: Merging Neural Networks with Canonical Correlation
Analysis | Combining the predictions of multiple trained models through ensembling is generally a good way to improve accuracy by leveraging the different learned features of the models; however, it comes with high computational and storage costs. Model fusion, the act of merging multiple models into one by combining their parameters, reduces these costs but doesn't work as well in practice. Indeed, neural network loss landscapes are high-dimensional and non-convex and the minima found through learning are typically separated by high loss barriers. Numerous recent works have been focused on finding permutations matching one network's features to the features of a second one, lowering the loss barrier on the linear path between them in parameter space. However, permutations are restrictive since they assume a one-to-one mapping between the different models' neurons exists. We propose a new model merging algorithm, CCA Merge, which is based on Canonical Correlation Analysis and aims to maximize the correlations between linear combinations of the model features. We show that our alignment method leads to better performances than past methods when averaging models trained on the same or differing data splits. We also extend this analysis into the harder setting where more than 2 models are merged, and we find that CCA Merge works significantly better than past methods. Our code is publicly available at https://github.com/shoroi/align-n-merge | [
"['Stefan Horoi' 'Albert Manuel Orozco Camacho' 'Eugene Belilovsky'\n 'Guy Wolf']"
] |
null | null | 2407.05398 | null | null | http://arxiv.org/pdf/2407.05398v1 | 2024-07-07T14:53:41Z | 2024-07-07T14:53:41Z | A Fair Post-Processing Method based on the MADD Metric for Predictive
Student Models | Predictive student models are increasingly used in learning environments. However, due to the rising social impact of their usage, it is now all the more important for these models to be both sufficiently accurate and fair in their predictions. To evaluate algorithmic fairness, a new metric has been developed in education, namely the Model Absolute Density Distance (MADD). This metric enables us to measure how differently a predictive model behaves regarding two groups of students, in order to quantify its algorithmic unfairness. In this paper, we thus develop a post-processing method based on this metric, that aims at improving the fairness while preserving the accuracy of relevant predictive models' results. We experiment with our approach on the task of predicting student success in an online course, using both simulated and real-world educational data, and obtain successful results. Our source code and data are in open access at https://github.com/melinaverger/MADD . | [
"['Mélina Verger' 'Chunyang Fan' 'Sébastien Lallé' 'François Bouchet'\n 'Vanda Luengo']"
] |
null | null | 2407.05399 | null | null | http://arxiv.org/pdf/2407.05399v1 | 2024-07-07T14:55:04Z | 2024-07-07T14:55:04Z | IL-TUR: Benchmark for Indian Legal Text Understanding and Reasoning | Legal systems worldwide are inundated with exponential growth in cases and documents. There is an imminent need to develop NLP and ML techniques for automatically processing and understanding legal documents to streamline the legal system. However, evaluating and comparing various NLP models designed specifically for the legal domain is challenging. This paper addresses this challenge by proposing IL-TUR: Benchmark for Indian Legal Text Understanding and Reasoning. IL-TUR contains monolingual (English, Hindi) and multi-lingual (9 Indian languages) domain-specific tasks that address different aspects of the legal system from the point of view of understanding and reasoning over Indian legal documents. We present baseline models (including LLM-based) for each task, outlining the gap between models and the ground truth. To foster further research in the legal domain, we create a leaderboard (available at: https://exploration-lab.github.io/IL-TUR/) where the research community can upload and compare legal text understanding systems. | [
"['Abhinav Joshi' 'Shounak Paul' 'Akshat Sharma' 'Pawan Goyal'\n 'Saptarshi Ghosh' 'Ashutosh Modi']"
] |
null | null | 2407.05404 | null | null | http://arxiv.org/pdf/2407.05404v1 | 2024-07-07T15:07:35Z | 2024-07-07T15:07:35Z | iSign: A Benchmark for Indian Sign Language Processing | Indian Sign Language has limited resources for developing machine learning and data-driven approaches for automated language processing. Though text/audio-based language processing techniques have shown colossal research interest and tremendous improvements in the last few years, Sign Languages still need to catch up due to the need for more resources. To bridge this gap, in this work, we propose iSign: a benchmark for Indian Sign Language (ISL) Processing. We make three primary contributions to this work. First, we release one of the largest ISL-English datasets with more than 118K video-sentence/phrase pairs. To the best of our knowledge, it is the largest sign language dataset available for ISL. Second, we propose multiple NLP-specific tasks (including SignVideo2Text, SignPose2Text, Text2Pose, Word Prediction, and Sign Semantics) and benchmark them with the baseline models for easier access to the research community. Third, we provide detailed insights into the proposed benchmarks with a few linguistic insights into the workings of ISL. We streamline the evaluation of Sign Language processing, addressing the gaps in the NLP research community for Sign Languages. We release the dataset, tasks, and models via the following website: https://exploration-lab.github.io/iSign/ | [
"['Abhinav Joshi' 'Romit Mohanty' 'Mounika Kanakanti' 'Andesha Mangla'\n 'Sudeep Choudhary' 'Monali Barbate' 'Ashutosh Modi']"
] |
null | null | 2407.05410 | null | null | http://arxiv.org/abs/2407.05410v1 | 2024-07-07T15:28:41Z | 2024-07-07T15:28:41Z | Synthetic Test Data Generation Using Recurrent Neural Networks: A
Position Paper | Testing in production-like test environments is an essential part of quality assurance processes in many industries. Provisioning of such test environments, for information-intensive services, involves setting up databases that are rich enough to enable simulating a wide variety of user scenarios. While production data is perhaps the gold-standard here, many organizations, particularly within the public sectors, are not allowed to use production data for testing purposes due to privacy concerns. The alternatives are to use anonymized data, or synthetically generated data. In this paper, we elaborate on these alternatives and compare them in an industrial context. Further, we focus on synthetic data generation and investigate the use of recurrent neural networks for this purpose. In our preliminary experiments, we were able to generate representative and highly accurate data using a recurrent neural network. These results open new research questions that we discuss here, and plan to investigate in our future research. | [
"['Razieh Behjati' 'Erik Arisholm' 'Chao Tan' 'Margrethe M. Bedregal']"
] |
null | null | 2407.05413 | null | null | http://arxiv.org/pdf/2407.05413v2 | 2024-07-10T09:01:31Z | 2024-07-07T15:37:13Z | SBoRA: Low-Rank Adaptation with Regional Weight Updates | This paper introduces Standard Basis LoRA (SBoRA), a novel parameter-efficient fine-tuning approach for Large Language Models that builds upon the pioneering works of Low-Rank Adaptation (LoRA) and Orthogonal Adaptation. SBoRA further reduces the computational and memory requirements of LoRA while enhancing learning performance. By leveraging orthogonal standard basis vectors to initialize one of the low-rank matrices, either A or B, SBoRA enables regional weight updates and memory-efficient fine-tuning. This approach gives rise to two variants, SBoRA-FA and SBoRA-FB, where only one of the matrices is updated, resulting in a sparse update matrix with a majority of zero rows or columns. Consequently, the majority of the fine-tuned model's weights remain unchanged from the pre-trained weights. This characteristic of SBoRA, wherein regional weight updates occur, is reminiscent of the modular organization of the human brain, which efficiently adapts to new tasks. Our empirical results demonstrate the superiority of SBoRA-FA over LoRA in various fine-tuning tasks, including commonsense reasoning and arithmetic reasoning. Furthermore, we evaluate the effectiveness of QSBoRA on quantized LLaMA models of varying scales, highlighting its potential for efficient adaptation to new tasks. Code is available at https://github.com/cityuhkai/SBoRA | [
"['Lai-Man Po' 'Yuyang Liu' 'Haoxuan Wu' 'Tianqi Zhang' 'Wing-Yin Yu'\n 'Zeyu Jiang' 'Kun Li']"
] |
null | null | 2407.05417 | null | null | http://arxiv.org/pdf/2407.05417v1 | 2024-07-07T15:44:42Z | 2024-07-07T15:44:42Z | See Further for Parameter Efficient Fine-tuning by Standing on the
Shoulders of Decomposition | The rapid expansion of large foundation models within the pre-training and fine-tuning framework has underscored that larger models often yield better results. However, the scaling up of large foundation models has led to soaring costs in fine-tuning and parameter storage, rendering extensive adaptations impractical. This challenge has sparked the development of parameter-efficient fine-tuning (PEFT), which focuses on optimizing a select subset of parameters while keeping the rest fixed, significantly lowering computational and storage overheads. While recent years have witnessed a significant success in PEFT, a deep understanding of the fundamental principles behind these methods remains unexplored. To this end, here we take the first step to unify all approaches by dissecting them from a decomposition perspective. We initiate a comprehensive mathematical analysis of these methods, allowing us to delve deeply into their underlying mechanisms, and we explore the reasons behind the variations in performance among different techniques. Furthermore, inspired by our theoretical analysis, we introduce two novel PEFT methods alongside a simple yet effective framework designed to enhance the performance of PEFT techniques across various applications. Our empirical validations, conducted across multiple datasets, demonstrate the efficacy of these methods, showcasing both theoretical validity and practical performance improvements under the guidance of our analytical findings. We believe our work will deepen researchers' understanding of PEFT and other techniques, prompting further contemplation and advancing the research across the whole community. | [
"['Chongjie Si' 'Xiaokang Yang' 'Wei Shen']"
] |
null | null | 2407.05426 | null | null | http://arxiv.org/pdf/2407.05426v1 | 2024-05-21T09:26:52Z | 2024-05-21T09:26:52Z | AI in Manufacturing: Market Analysis and Opportunities | In this paper, we explore the transformative impact of Artificial Intelligence (AI) in the manufacturing sector, highlighting its potential to revolutionize industry practices and enhance operational efficiency. We delve into various applications of AI in manufacturing, with a particular emphasis on human-machine interfaces (HMI) and AI-powered milling machines, showcasing how these technologies contribute to more intuitive operations and precision in production processes. Through rigorous market analysis, the paper presents insightful data on AI adoption rates among German manufacturers, comparing these figures with global trends and exploring the specific uses of AI in production, maintenance, customer service, and more. In addition, the paper examines the emerging field of Generative AI and the potential applications of large language models in manufacturing processes. The findings indicate a significant increase in AI adoption from 6% in 2020 to 13.3% in 2023 among German companies, with a projection of substantial economic impact by 2030. The study also addresses the challenges faced by companies, such as data quality and integration hurdles, providing a balanced view of the opportunities and obstacles in AI implementation. | [
"['Mohamed Abdelaal']"
] |
null | null | 2407.05483 | null | null | http://arxiv.org/pdf/2407.05483v1 | 2024-07-07T19:55:09Z | 2024-07-07T19:55:09Z | Just read twice: closing the recall gap for recurrent language models | Recurrent large language models that compete with Transformers in language modeling perplexity are emerging at a rapid rate (e.g., Mamba, RWKV). Excitingly, these architectures use a constant amount of memory during inference. However, due to the limited memory, recurrent LMs cannot recall and use all the information in long contexts leading to brittle in-context learning (ICL) quality. A key challenge for efficient LMs is selecting what information to store versus discard. In this work, we observe the order in which information is shown to the LM impacts the selection difficulty. To formalize this, we show that the hardness of information recall reduces to the hardness of a problem called set disjointness (SD), a quintessential problem in communication complexity that requires a streaming algorithm (e.g., recurrent model) to decide whether inputted sets are disjoint. We empirically and theoretically show that the recurrent memory required to solve SD changes with set order, i.e., whether the smaller set appears first in-context. Our analysis suggests, to mitigate the reliance on data order, we can put information in the right order in-context or process prompts non-causally. Towards that end, we propose: (1) JRT-Prompt, where context gets repeated multiple times in the prompt, effectively showing the model all data orders. This gives $11.0 \pm 1.3$ points of improvement, averaged across $16$ recurrent LMs and the $6$ ICL tasks, with $11.9\times$ higher throughput than FlashAttention-2 for generation prefill (length $32$k, batch size $16$, NVidia H100). We then propose (2) JRT-RNN, which uses non-causal prefix-linear-attention to process prompts and provides $99\%$ of Transformer quality at $360$M params., $30$B tokens and $96\%$ at $1.3$B params., $50$B tokens on average across the tasks, with $19.2\times$ higher throughput for prefill than FA2. | [
"['Simran Arora' 'Aman Timalsina' 'Aaryan Singhal' 'Benjamin Spector'\n 'Sabri Eyuboglu' 'Xinyi Zhao' 'Ashish Rao' 'Atri Rudra' 'Christopher Ré']"
] |
null | null | 2407.05484 | null | null | http://arxiv.org/pdf/2407.05484v1 | 2024-07-07T20:02:52Z | 2024-07-07T20:02:52Z | Learning to Price Homogeneous Data | We study a data pricing problem, where a seller has access to $N$ homogeneous data points (e.g. drawn i.i.d. from some distribution). There are $m$ types of buyers in the market, where buyers of the same type $i$ have the same valuation curve $v_i:[N]\rightarrow [0,1]$, where $v_i(n)$ is the value for having $n$ data points. \textit{A priori}, the seller is unaware of the distribution of buyers, but can repeat the market for $T$ rounds so as to learn the revenue-optimal pricing curve $p:[N] \rightarrow [0, 1]$. To solve this online learning problem, we first develop novel discretization schemes to approximate any pricing curve. When compared to prior work, the size of our discretization schemes scales gracefully with the approximation parameter, which translates to better regret in online learning. Under assumptions like smoothness and diminishing returns which are satisfied by data, the discretization size can be reduced further. We then turn to the online learning problem, both in the stochastic and adversarial settings. On each round, the seller chooses an \emph{anonymous} pricing curve $p_t$. A new buyer appears and may choose to purchase some amount of data. She then reveals her type \emph{only if} she makes a purchase. Our online algorithms build on classical algorithms such as UCB and FTPL, but require novel ideas to account for the asymmetric nature of this feedback and to deal with the vastness of the space of pricing curves. Using the improved discretization schemes previously developed, we are able to achieve $\tilde{O}\left(m\sqrt{T}\right)$ regret in the stochastic setting and $\tilde{O}\left(m^{\frac{3}{2}}\sqrt{T}\right)$ regret in the adversarial setting. | [
"['Keran Chen' 'Joon Suk Huh' 'Kirthevasan Kandasamy']"
] |
null | null | 2407.05487 | null | null | http://arxiv.org/pdf/2407.05487v1 | 2024-07-07T20:15:10Z | 2024-07-07T20:15:10Z | Multi-level Reliability Interface for Semantic Communications over
Wireless Networks | Semantic communication, when examined through the lens of joint source-channel coding (JSCC), maps source messages directly into channel input symbols, where the measure of success is defined by end-to-end distortion rather than traditional metrics such as block error rate. Previous studies have shown significant improvements achieved through deep learning (DL)-driven JSCC compared to traditional separate source and channel coding. However, JSCC is impractical in existing communication networks, where application and network providers are typically different entities connected over general-purpose TCP/IP links. In this paper, we propose designing the source and channel mappings separately and sequentially via a novel multi-level reliability interface. This conceptual interface enables semi-JSCC at both the learned source and channel mappers and achieves many of the gains observed in existing DL-based JSCC work (which would require a fully joint design between the application and the network), such as lower end-to-end distortion and graceful degradation of distortion with channel quality. We believe this work represents an important step towards realizing semantic communications in wireless networks. | [
"['Tze-Yang Tung' 'Homa Esfahanizadeh' 'Jinfeng Du' 'Harish Viswanathan']"
] |
null | null | 2407.05494 | null | null | http://arxiv.org/pdf/2407.05494v2 | 2024-07-09T01:20:32Z | 2024-07-07T20:54:14Z | Prospective Messaging: Learning in Networks with Communication Delays | Inter-neuron communication delays are ubiquitous in physically realized neural networks such as biological neural circuits and neuromorphic hardware. These delays have significant and often disruptive consequences on network dynamics during training and inference. It is therefore essential that communication delays be accounted for, both in computational models of biological neural networks and in large-scale neuromorphic systems. Nonetheless, communication delays have yet to be comprehensively addressed in either domain. In this paper, we first show that delays prevent state-of-the-art continuous-time neural networks called Latent Equilibrium (LE) networks from learning even simple tasks despite significant overparameterization. We then propose to compensate for communication delays by predicting future signals based on currently available ones. This conceptually straightforward approach, which we call prospective messaging (PM), uses only neuron-local information, and is flexible in terms of memory and computation requirements. We demonstrate that incorporating PM into delayed LE networks prevents reaction lags, and facilitates successful learning on Fourier synthesis and autoregressive video prediction tasks. | [
"['Ryan Fayyazi' 'Christian Weilbach' 'Frank Wood']"
] |
null | null | 2407.05510 | null | null | http://arxiv.org/pdf/2407.05510v1 | 2024-07-07T22:57:44Z | 2024-07-07T22:57:44Z | SCATTER: Algorithm-Circuit Co-Sparse Photonic Accelerator with
Thermal-Tolerant, Power-Efficient In-situ Light Redistribution | Photonic computing has emerged as a promising solution for accelerating computation-intensive artificial intelligence (AI) workloads. However, limited reconfigurability, high electrical-optical conversion cost, and thermal sensitivity limit the deployment of current optical analog computing engines to support power-restricted, performance-sensitive AI workloads at scale. Sparsity provides a great opportunity for hardware-efficient AI accelerators. However, current dense photonic accelerators fail to fully exploit the power-saving potential of algorithmic sparsity. It requires sparsity-aware hardware specialization with a fundamental re-design of photonic tensor core topology and cross-layer device-circuit-architecture-algorithm co-optimization aware of hardware non-ideality and power bottleneck. To trim down the redundant power consumption while maximizing robustness to thermal variations, we propose SCATTER, a novel algorithm-circuit co-sparse photonic accelerator featuring dynamically reconfigurable signal path via thermal-tolerant, power-efficient in-situ light redistribution and power gating. A power-optimized, crosstalk-aware dynamic sparse training framework is introduced to explore row-column structured sparsity and ensure marginal accuracy loss and maximum power efficiency. The extensive evaluation shows that our cross-stacked optimized accelerator SCATTER achieves a 511X area reduction and 12.4X power saving with superior crosstalk tolerance that enables unprecedented circuit layout compactness and on-chip power efficiency. | [
"['Ziang Yin' 'Nicholas Gangi' 'Meng Zhang' 'Jeff Zhang' 'Rena Huang'\n 'Jiaqi Gu']"
] |
null | null | 2407.05511 | null | null | http://arxiv.org/pdf/2407.05511v1 | 2024-07-07T22:58:52Z | 2024-07-07T22:58:52Z | Provably Efficient Long-Horizon Exploration in Monte Carlo Tree Search
through State Occupancy Regularization | Monte Carlo tree search (MCTS) has been successful in a variety of domains, but faces challenges with long-horizon exploration when compared to sampling-based motion planning algorithms like Rapidly-Exploring Random Trees. To address these limitations of MCTS, we derive a tree search algorithm based on policy optimization with state occupancy measure regularization, which we call {\it Volume-MCTS}. We show that count-based exploration and sampling-based motion planning can be derived as approximate solutions to this state occupancy measure regularized objective. We test our method on several robot navigation problems, and find that Volume-MCTS outperforms AlphaZero and displays significantly better long-horizon exploration properties. | [
"['Liam Schramm' 'Abdeslam Boularias']"
] |
null | null | 2407.05520 | null | null | http://arxiv.org/pdf/2407.05520v1 | 2024-07-07T23:57:10Z | 2024-07-07T23:57:10Z | A Theory of Machine Learning | We critically review three major theories of machine learning and provide a new theory according to which machines learn a function when the machines successfully compute it. We show that this theory challenges common assumptions in the statistical and the computational learning theories, for it implies that learning true probabilities is equivalent neither to obtaining a correct calculation of the true probabilities nor to obtaining an almost-sure convergence to them. We also briefly discuss some case studies from natural language processing and macroeconomics from the perspective of the new theory. | [
"['Jinsook Kim' 'Jinho Kang']"
] |
null | null | 2407.05526 | null | null | http://arxiv.org/pdf/2407.05526v1 | 2024-07-08T00:19:43Z | 2024-07-08T00:19:43Z | Can Machines Learn the True Probabilities? | When there exists uncertainty, AI machines are designed to make decisions so as to reach the best expected outcomes. Expectations are based on true facts about the objective environment the machines interact with, and those facts can be encoded into AI models in the form of true objective probability functions. Accordingly, AI models involve probabilistic machine learning in which the probabilities should be objectively interpreted. We prove under some basic assumptions when machines can learn the true objective probabilities, if any, and when machines cannot learn them. | [
"['Jinsook Kim']"
] |
null | null | 2407.05527 | null | null | http://arxiv.org/pdf/2407.05527v1 | 2024-07-08T00:21:17Z | 2024-07-08T00:21:17Z | Rethinking Image Skip Connections in StyleGAN2 | Various models based on StyleGAN have gained significant traction in the field of image synthesis, attributed to their robust training stability and superior performances. Within the StyleGAN framework, the adoption of image skip connection is favored over the traditional residual connection. However, this preference is just based on empirical observations; there has not been any in-depth mathematical analysis on it yet. To rectify this situation, this brief aims to elucidate the mathematical meaning of the image skip connection and introduce a groundbreaking methodology, termed the image squeeze connection, which significantly improves the quality of image synthesis. Specifically, we analyze the image skip connection technique to reveal its problem and introduce the proposed method which not only effectively boosts the GAN performance but also reduces the required number of network parameters. Extensive experiments on various datasets demonstrate that the proposed method consistently enhances the performance of state-of-the-art models based on StyleGAN. We believe that our findings represent a vital advancement in the field of image synthesis, suggesting a novel direction for future research and applications. | [
"['Seung Park' 'Yong-Goo Shin']"
] |
null | null | 2407.05580 | null | null | http://arxiv.org/pdf/2407.05580v1 | 2024-07-08T03:30:25Z | 2024-07-08T03:30:25Z | $\mathrm{E^{2}CFD}$: Towards Effective and Efficient Cost Function
Design for Safe Reinforcement Learning via Large Language Model | Different classes of safe reinforcement learning algorithms have shown satisfactory performance in various types of safety requirement scenarios. However, the existing methods mainly address one or several classes of specific safety requirement scenario problems and cannot be applied to arbitrary safety requirement scenarios. In addition, the optimization objectives of existing reinforcement learning algorithms are misaligned with the task requirements. Based on the need to address these issues, we propose $\mathrm{E^{2}CFD}$, an effective and efficient cost function design framework. $\mathrm{E^{2}CFD}$ leverages the capabilities of a large language model (LLM) to comprehend various safety scenarios and generate corresponding cost functions. It incorporates the \textit{fast performance evaluation (FPE)} method to facilitate rapid and iterative updates to the generated cost function. Through this iterative process, $\mathrm{E^{2}CFD}$ aims to obtain the most suitable cost function for policy training, tailored to the specific tasks within the safety scenario. Experiments have proven that the performance of policies trained using this framework is superior to traditional safe reinforcement learning algorithms and policies trained with carefully designed cost functions. | [
"['Zepeng Wang' 'Chao Ma' 'Linjiang Zhou' 'Libing Wu' 'Lei Yang'\n 'Xiaochuan Shi' 'Guojun Peng']"
] |
null | null | 2407.05591 | null | null | http://arxiv.org/pdf/2407.05591v1 | 2024-07-08T04:08:35Z | 2024-07-08T04:08:35Z | On the Power of Convolution Augmented Transformer | The transformer architecture has catalyzed revolutionary advances in language modeling. However, recent architectural recipes, such as state-space models, have bridged the performance gap. Motivated by this, we examine the benefits of Convolution-Augmented Transformer (CAT) for recall, copying, and length generalization tasks. CAT incorporates convolutional filters in the K/Q/V embeddings of an attention layer. Through CAT, we show that the locality of the convolution synergizes with the global view of the attention. Unlike comparable architectures, such as Mamba or transformer, CAT can provably solve the associative recall (AR) and copying tasks using a single layer while also enjoying guaranteed length generalization. We also establish computational tradeoffs between convolution and attention by characterizing how convolution can mitigate the need for full attention by summarizing the context window and creating salient summary tokens to attend. Evaluations on real datasets corroborate our findings and demonstrate that CAT and its variations indeed enhance the language modeling performance. | [
"['Mingchen Li' 'Xuechen Zhang' 'Yixiao Huang' 'Samet Oymak']"
] |
null | null | 2407.05593 | null | null | http://arxiv.org/pdf/2407.05593v1 | 2024-07-08T04:15:43Z | 2024-07-08T04:15:43Z | Unmasking Trees for Tabular Data | We herein describe UnmaskingTrees, a method and open-source software package for tabular data generation and, especially, imputation. Our experiments suggest that training gradient-boosted trees to incrementally unmask features offers a simple, strong baseline for imputation. | [
"['Calvin McCarter']"
] |
null | null | 2407.05615 | null | null | http://arxiv.org/pdf/2407.05615v1 | 2024-07-08T05:03:46Z | 2024-07-08T05:03:46Z | OSN: Infinite Representations of Dynamic 3D Scenes from Monocular Videos | It has long been challenging to recover the underlying dynamic 3D scene representations from a monocular RGB video. Existing works formulate this problem into finding a single most plausible solution by adding various constraints such as depth priors and strong geometry constraints, ignoring the fact that there could be infinitely many 3D scene representations corresponding to a single dynamic video. In this paper, we aim to learn all plausible 3D scene configurations that match the input video, instead of just inferring a specific one. To achieve this ambitious goal, we introduce a new framework, called OSN. The key to our approach is a simple yet innovative object scale network together with a joint optimization module to learn an accurate scale range for every dynamic 3D object. This allows us to sample as many faithful 3D scene configurations as possible. Extensive experiments show that our method surpasses all baselines and achieves superior accuracy in dynamic novel view synthesis on multiple synthetic and real-world datasets. Most notably, our method demonstrates a clear advantage in learning fine-grained 3D scene geometry. Our code and data are available at https://github.com/vLAR-group/OSN | [
"['Ziyang Song' 'Jinxi Li' 'Bo Yang']"
] |
null | null | 2407.05622 | null | null | http://arxiv.org/pdf/2407.05622v1 | 2024-07-08T05:30:34Z | 2024-07-08T05:30:34Z | On the Complexity of Learning Sparse Functions with Statistical and
Gradient Queries | The goal of this paper is to investigate the complexity of gradient algorithms when learning sparse functions (juntas). We introduce a type of Statistical Queries ($\mathsf{SQ}$), which we call Differentiable Learning Queries ($\mathsf{DLQ}$), to model gradient queries on a specified loss with respect to an arbitrary model. We provide a tight characterization of the query complexity of $\mathsf{DLQ}$ for learning the support of a sparse function over generic product distributions. This complexity crucially depends on the loss function. For the squared loss, $\mathsf{DLQ}$ matches the complexity of Correlation Statistical Queries $(\mathsf{CSQ})$--potentially much worse than $\mathsf{SQ}$. But for other simple loss functions, including the $\ell_1$ loss, $\mathsf{DLQ}$ always achieves the same complexity as $\mathsf{SQ}$. We also provide evidence that $\mathsf{DLQ}$ can indeed capture learning with (stochastic) gradient descent by showing it correctly describes the complexity of learning with a two-layer neural network in the mean field regime and linear scaling. | [
"['Nirmit Joshi' 'Theodor Misiakiewicz' 'Nathan Srebro']"
] |
null | null | 2407.05625 | null | null | http://arxiv.org/pdf/2407.05625v2 | 2024-07-10T20:44:39Z | 2024-07-08T05:35:54Z | New User Event Prediction Through the Lens of Causal Inference | Modeling and analysis for event series generated by heterogeneous users of various behavioral patterns are closely involved in our daily lives, including credit card fraud detection, online platform user recommendation, and social network analysis. The most commonly adopted approach to this task is to classify users into behavior-based categories and analyze each of them separately. However, this approach requires extensive data to fully understand user behavior, presenting challenges in modeling newcomers without historical knowledge. In this paper, we propose a novel discrete event prediction framework for new users through the lens of causal inference. Our method offers an unbiased prediction for new users without needing to know their categories. We treat the user event history as the ''treatment'' for future events and the user category as the key confounder. Thus, the prediction problem can be framed as counterfactual outcome estimation, with the new user model trained on an adjusted dataset where each event is re-weighted by its inverse propensity score. We demonstrate the superior performance of the proposed framework with a numerical simulation study and two real-world applications, including Netflix rating prediction and seller contact prediction for customer support at Amazon. | [
"['Henry Shaowu Yuchi' 'Shixiang Zhu' 'Li Dong' 'Yigit M. Arisoy'\n 'Matthew C. Spencer']"
] |
null | null | 2407.05627 | null | null | http://arxiv.org/pdf/2407.05627v1 | 2024-07-08T05:42:29Z | 2024-07-08T05:42:29Z | New Directions in Text Classification Research: Maximizing The
Performance of Sentiment Classification from Limited Data | Stakeholders in sentiment analysis for various issues, whether positive or negative, need both speed and accuracy. One new challenge in sentiment analysis tasks is the limited training data, which often leads to suboptimal machine learning models and poor performance on test data. This paper discusses the problem of text classification based on limited training data (300 to 600 samples) into three classes: positive, negative, and neutral. A benchmark dataset is provided for training and testing data on the issue of Kaesang Pangarep's appointment as Chairman of PSI. External data for aggregation and augmentation purposes are provided, consisting of two datasets: the topic of Covid Vaccination sentiment and an open topic. The official score used is the F1-score, which balances precision and recall among the three classes, positive, negative, and neutral. A baseline score is provided as a reference for researchers, representing an unoptimized classification method. The optimized score is provided as a reference for the target score to be achieved by any proposed method. Both scores (baseline and optimized) are obtained with the SVM method, which is widely reported as the state of the art among conventional machine learning methods. The F1-scores achieved by the baseline and optimized methods are 40.83% and 51.28%, respectively. | [
"['Surya Agustian' 'Muhammad Irfan Syah' 'Nurul Fatiara' 'Rahmad Abdillah']"
] |
null | null | 2407.05633 | null | null | http://arxiv.org/pdf/2407.05633v1 | 2024-07-08T05:58:49Z | 2024-07-08T05:58:49Z | AdaPI: Facilitating DNN Model Adaptivity for Efficient Private Inference
in Edge Computing | Private inference (PI) has emerged as a promising solution to execute computations on encrypted data, safeguarding user privacy and model parameters in edge computing. However, existing PI methods are predominantly developed considering constant resource constraints, overlooking the varied and dynamic resource constraints in diverse edge devices, like energy budgets. Consequently, model providers have to design specialized models for different devices, where all of them have to be stored on the edge server, resulting in inefficient deployment. To fill this gap, this work presents AdaPI, a novel approach that achieves adaptive PI by allowing a model to perform well across edge devices with diverse energy budgets. AdaPI employs a PI-aware training strategy that optimizes the model weights alongside weight-level and feature-level soft masks. These soft masks are subsequently transformed into multiple binary masks to enable adjustments in communication and computation workloads. Through sequentially training the model with increasingly dense binary masks, AdaPI attains optimal accuracy for each energy budget, which outperforms the state-of-the-art PI methods by 7.3% in terms of test accuracy on CIFAR-100. The code of AdaPI can be accessed via https://github.com/jiahuiiiiii/AdaPI. | [
"['Tong Zhou' 'Jiahui Zhao' 'Yukui Luo' 'Xi Xie' 'Wujie Wen' 'Caiwen Ding'\n 'Xiaolin Xu']"
] |
null | null | 2407.05639 | null | null | http://arxiv.org/pdf/2407.05639v1 | 2024-07-08T06:07:51Z | 2024-07-08T06:07:51Z | Deep Learning-based Anomaly Detection and Log Analysis for Computer
Networks | Computer network anomaly detection and log analysis are important topics in the field of network security and key tasks for ensuring network security and system reliability. First, existing network anomaly detection and log analysis methods are often challenged by high-dimensional data and complex network topologies, resulting in unstable performance and high false-positive rates. In addition, traditional methods usually have difficulty handling time-series data, which is crucial for anomaly detection and log analysis. Therefore, we need a more efficient and accurate method to cope with these problems. To compensate for the shortcomings of current methods, we propose an innovative fusion model that integrates Isolation Forest, GAN (Generative Adversarial Network), and Transformer with each other, where each of them plays a unique role. Isolation Forest is used to quickly identify anomalous data points, GAN is used to generate synthetic data with the real data distribution characteristics to augment the training dataset, and the Transformer is used for modeling and context extraction on time series data. The synergy of these three components makes our model more accurate and robust in anomaly detection and log analysis tasks. We validate the effectiveness of this fusion model in an extensive experimental evaluation. Experimental results show that our model significantly improves the accuracy of anomaly detection while reducing the false alarm rate, which helps to detect potential network problems in advance. The model also performs well in the log analysis task and is able to quickly identify anomalous behaviors, which helps to improve the stability of the system. The significance of this study is that it introduces advanced deep learning techniques that work together for anomaly detection and log analysis. | [
"['Shuzhan Wang' 'Ruxue Jiang' 'Zhaoqi Wang' 'Yan Zhou']"
] |
null | null | 2407.05649 | null | null | http://arxiv.org/pdf/2407.05649v1 | 2024-07-08T06:21:56Z | 2024-07-08T06:21:56Z | Graph Attention with Random Rewiring | Graph Neural Networks (GNNs) have become fundamental in graph-structured deep learning. Key paradigms of modern GNNs include message passing, graph rewiring, and Graph Transformers. This paper introduces Graph-Rewiring Attention with Stochastic Structures (GRASS), a novel GNN architecture that combines the advantages of these three paradigms. GRASS rewires the input graph by superimposing a random regular graph, enhancing long-range information propagation while preserving structural features of the input graph. It also employs a unique additive attention mechanism tailored for graph-structured data, providing a graph inductive bias while remaining computationally efficient. Our empirical evaluations demonstrate that GRASS achieves state-of-the-art performance on multiple benchmark datasets, confirming its practical efficacy. | [
"['Tongzhou Liao' 'Barnabás Póczos']"
] |
null | null | 2407.05650 | null | null | http://arxiv.org/pdf/2407.05650v1 | 2024-07-08T06:22:10Z | 2024-07-08T06:22:10Z | The Dynamic Net Architecture: Learning Robust and Holistic Visual
Representations Through Self-Organizing Networks | We present a novel intelligent-system architecture called "Dynamic Net Architecture" (DNA) that relies on recurrence-stabilized networks and discuss it in application to vision. Our architecture models a (cerebral cortical) area wherein elementary feature neurons encode details of visual structures, and coherent nets of such neurons model holistic object structures. By interpreting smaller or larger coherent pieces of an area network as complex features, our model encodes hierarchical feature representations essentially different than artificial neural networks (ANNs). DNA models operate on a dynamic connectionism principle, wherein neural activations stemming from initial afferent signals undergo stabilization through a self-organizing mechanism facilitated by Hebbian plasticity alongside periodically tightening inhibition. In contrast to ANNs, which rely on feed-forward connections and backpropagation of error, we posit that this processing paradigm leads to highly robust representations, as by employing dynamic lateral connections, irrelevant details in neural activations are filtered out, freeing further processing steps from distracting noise and premature decisions. We empirically demonstrate the viability of the DNA by composing line fragments into longer lines and show that the construction of nets representing lines remains robust even with the introduction of up to $59\%$ noise at each spatial location. Furthermore, we demonstrate the model's capability to reconstruct anticipated features from partially obscured inputs and that it can generalize to patterns not observed during training. In this work, we limit the DNA to one cortical area and focus on its internals while providing insights into a standalone area's strengths and shortcomings. Additionally, we provide an outlook on how future work can implement invariant object recognition by combining multiple areas. | [
"['Pascal J. Sager' 'Jan M. Deriu' 'Benjamin F. Grewe' 'Thilo Stadelmann'\n 'Christoph von der Malsburg']"
] |
null | null | 2407.05656 | null | null | http://arxiv.org/pdf/2407.05656v1 | 2024-07-08T06:29:46Z | 2024-07-08T06:29:46Z | Multi-label Learning with Random Circular Vectors | The extreme multi-label classification~(XMC) task involves learning a classifier that can predict from a large label set the most relevant subset of labels for a data instance. While deep neural networks~(DNNs) have demonstrated remarkable success in XMC problems, the task is still challenging because it must deal with a large number of output labels, which make the DNN training computationally expensive. This paper addresses the issue by exploring the use of random circular vectors, where each vector component is represented as a complex amplitude. In our framework, we can develop an output layer and loss function of DNNs for XMC by representing the final output layer as a fully connected layer that directly predicts a low-dimensional circular vector encoding a set of labels for a data instance. We conducted experiments on synthetic datasets to verify that circular vectors have better label encoding capacity and retrieval ability than normal real-valued vectors. Then, we conducted experiments on actual XMC datasets and found that these appealing properties of circular vectors contribute to significant improvements in task performance compared with a previous model using random real-valued vectors, while reducing the size of the output layers by up to 99%. | [
"['Ken Nishida' 'Kojiro Machi' 'Kazuma Onishi' 'Katsuhiko Hayashi'\n 'Hidetaka Kamigaito']"
] |
null | null | 2407.05658 | null | null | http://arxiv.org/pdf/2407.05658v1 | 2024-07-08T06:35:13Z | 2024-07-08T06:35:13Z | Random Features Hopfield Networks generalize retrieval to previously
unseen examples | It has been recently shown that a learning transition happens when a Hopfield Network stores examples generated as superpositions of random features, where new attractors corresponding to such features appear in the model. In this work we reveal that the network also develops attractors corresponding to previously unseen examples generated with the same set of features. We explain this surprising behaviour in terms of spurious states of the learned features: we argue that, increasing the number of stored examples beyond the learning transition, the model also learns to mix the features to represent both stored and previously unseen examples. We support this claim with the computation of the phase diagram of the model. | [
"['Silvio Kalaj' 'Clarissa Lauditi' 'Gabriele Perugini' 'Carlo Lucibello'\n 'Enrico M. Malatesta' 'Matteo Negri']"
] |
null | null | 2407.05664 | null | null | http://arxiv.org/pdf/2407.05664v1 | 2024-07-08T06:59:29Z | 2024-07-08T06:59:29Z | How DNNs break the Curse of Dimensionality: Compositionality and
Symmetry Learning | We show that deep neural networks (DNNs) can efficiently learn any composition of functions with bounded $F_{1}$-norm, which allows DNNs to break the curse of dimensionality in ways that shallow networks cannot. More specifically, we derive a generalization bound that combines a covering number argument for compositionality, and the $F_{1}$-norm (or the related Barron norm) for large width adaptivity. We show that the global minimizer of the regularized loss of DNNs can fit, for example, the composition of two functions $f^{*}=h\circ g$ from a small number of observations, assuming $g$ is smooth/regular and reduces the dimensionality (e.g. $g$ could be the modulo map of the symmetries of $f^{*}$), so that $h$ can be learned in spite of its low regularity. The measure of regularity we consider is the Sobolev norm with different levels of differentiability, which is well adapted to the $F_{1}$ norm. We compute scaling laws empirically and observe phase transitions depending on whether $g$ or $h$ is harder to learn, as predicted by our theory. | [
"['Arthur Jacot' 'Seok Hoan Choi' 'Yuxiao Wen']"
] |
null | null | 2407.05684 | null | null | http://arxiv.org/pdf/2407.05684v1 | 2024-07-08T07:34:35Z | 2024-07-08T07:34:35Z | Multi-Fidelity Bayesian Neural Network for Uncertainty Quantification in
Transonic Aerodynamic Loads | Multi-fidelity models are becoming more prevalent in engineering, particularly in aerospace, as they combine both the computational efficiency of low-fidelity models with the high accuracy of higher-fidelity simulations. Various state-of-the-art techniques exist for fusing data from different fidelity sources, including Co-Kriging and transfer learning in neural networks. This paper aims to implement a multi-fidelity Bayesian neural network model that applies transfer learning to fuse data generated by models at different fidelities. Bayesian neural networks use probability distributions over network weights, enabling them to provide predictions along with estimates of their confidence. This approach harnesses the predictive and data fusion capabilities of neural networks while also quantifying uncertainty. The results demonstrate that the multi-fidelity Bayesian model outperforms the state-of-the-art Co-Kriging in terms of overall accuracy and robustness on unseen data. | [
"['Andrea Vaiuso' 'Gabriele Immordino' 'Marcello Righi' 'Andrea Da Ronch']"
] |
null | null | 2407.05693 | null | null | http://arxiv.org/pdf/2407.05693v1 | 2024-07-08T07:47:30Z | 2024-07-08T07:47:30Z | Sub-SA: Strengthen In-context Learning via Submodular Selective
Annotation | In-context learning (ICL) leverages in-context examples as prompts for the predictions of Large Language Models (LLMs). These prompts play a crucial role in achieving strong performance. However, the selection of suitable prompts from a large pool of labeled examples often entails significant annotation costs. To address this challenge, we propose \textbf{Sub-SA} (\textbf{Sub}modular \textbf{S}elective \textbf{A}nnotation), a submodule-based selective annotation method. The aim of Sub-SA is to reduce annotation costs while improving the quality of in-context examples and minimizing the time consumption of the selection process. In Sub-SA, we design a submodular function that facilitates effective subset selection for annotation and demonstrates the characteristics of monotonicity and submodularity from a theoretical perspective. Specifically, we propose \textbf{RPR} (\textbf{R}eward and \textbf{P}enalty \textbf{R}egularization) to better balance the diversity and representativeness of the unlabeled dataset, attributed to a reward term and a penalty term, respectively. Consequently, the selection for annotations can be effectively addressed with a simple yet effective greedy search algorithm based on the submodular function. Finally, we apply similarity-based prompt retrieval to get the examples for ICL. | [
"['Jian Qian' 'Miao Sun' 'Sifan Zhou' 'Ziyu Zhao' 'Ruizhi Hun'\n 'Patrick Chiang']"
] |
null | null | 2407.05694 | null | null | http://arxiv.org/pdf/2407.05694v1 | 2024-07-08T07:53:06Z | 2024-07-08T07:53:06Z | On the Limitations of Compute Thresholds as a Governance Strategy | At face value, this essay is about understanding a fairly esoteric governance tool called compute thresholds. However, in order to grapple with whether these thresholds will achieve anything, we must first understand how they came to be. This requires engaging with a decades-old debate at the heart of computer science progress, namely, is bigger always better? Hence, this essay may be of interest not only to policymakers and the wider public but also to computer scientists interested in understanding the role of compute in unlocking breakthroughs. Does a certain inflection point of compute result in changes to the risk profile of a model? This discussion is increasingly urgent given the wide adoption of governance approaches that suggest greater compute equates with higher propensity for harm. Several leading frontier AI companies have released responsible scaling policies. Both the White House Executive Orders on AI Safety (EO) and the EU AI Act encode the use of FLOP or floating-point operations as a way to identify more powerful systems. What is striking about the choice of compute thresholds to-date is that no models currently deployed in the wild fulfill the current criteria set by the EO. This implies that the emphasis is often not on auditing the risks and harms incurred by currently deployed models - but rather is based upon the belief that future levels of compute will introduce unforeseen new risks. A key conclusion of this essay is that compute thresholds as currently implemented are shortsighted and likely to fail to mitigate risk. Governance that is overly reliant on compute fails to understand that the relationship between compute and risk is highly uncertain and rapidly changing. It also overestimates our ability to predict what abilities emerge at different scales. This essay ends with recommendations for a better way forward. | [
"['Sara Hooker']"
] |
null | null | 2407.05704 | null | null | http://arxiv.org/pdf/2407.05704v1 | 2024-07-08T08:06:45Z | 2024-07-08T08:06:45Z | Narrowing the Gap between Adversarial and Stochastic MDPs via Policy
Optimization | In this paper, we consider the problem of learning in adversarial Markov decision processes (MDPs) with an oblivious adversary in a full-information setting. The agent interacts with an environment during $T$ episodes, each of which consists of $H$ stages, and each episode is evaluated with respect to a reward function that will be revealed only at the end of the episode. We propose an algorithm, called APO-MVP, that achieves a regret bound of order $\tilde{\mathcal{O}}(\mathrm{poly}(H)\sqrt{SAT})$, where $S$ and $A$ are sizes of the state and action spaces, respectively. This result improves upon the best-known regret bound by a factor of $\sqrt{S}$, bridging the gap between adversarial and stochastic MDPs, and matching the minimax lower bound $\Omega(\sqrt{H^3SAT})$ as far as the dependencies in $S,A,T$ are concerned. The proposed algorithm and analysis completely avoid the typical tool given by occupancy measures; instead, policy optimization is performed based only on dynamic programming and on a black-box online linear optimization strategy run over estimated advantage functions, making it easy to implement. The analysis leverages two recent techniques: policy optimization based on online linear optimization strategies (Jonckheere et al., 2023) and a refined martingale analysis of the impact on values of estimating transition kernels (Zhang et al., 2023). | [
"['Daniil Tiapkin' 'Evgenii Chzhen' 'Gilles Stoltz']"
] |
null | null | 2407.05732 | null | null | http://arxiv.org/pdf/2407.05732v1 | 2024-07-08T08:36:44Z | 2024-07-08T08:36:44Z | FairPFN: Transformers Can do Counterfactual Fairness | Machine Learning systems are increasingly prevalent across healthcare, law enforcement, and finance but often operate on historical data, which may carry biases against certain demographic groups. Causal and counterfactual fairness provides an intuitive way to define fairness that closely aligns with legal standards. Despite its theoretical benefits, counterfactual fairness comes with several practical limitations, largely related to the reliance on domain knowledge and approximate causal discovery techniques in constructing a causal model. In this study, we take a fresh perspective on counterfactually fair prediction, building upon recent work in in context learning (ICL) and prior fitted networks (PFNs) to learn a transformer called FairPFN. This model is pretrained using synthetic fairness data to eliminate the causal effects of protected attributes directly from observational data, removing the requirement of access to the correct causal model in practice. In our experiments, we thoroughly assess the effectiveness of FairPFN in eliminating the causal impact of protected attributes on a series of synthetic case studies and real world datasets. Our findings pave the way for a new and promising research area: transformers for causal and counterfactual fairness. | [
"['Jake Robertson' 'Noah Hollmann' 'Noor Awad' 'Frank Hutter']"
] |
null | null | 2407.05749 | null | null | http://arxiv.org/pdf/2407.05749v1 | 2024-07-08T08:55:25Z | 2024-07-08T08:55:25Z | LDGCN: An Edge-End Lightweight Dual GCN Based on Single-Channel EEG for
Driver Drowsiness Monitoring | Driver drowsiness electroencephalography (EEG) signal monitoring can timely alert drivers of their drowsiness status, thereby reducing the probability of traffic accidents. Graph convolutional networks (GCNs) have shown significant advancements in processing the non-stationary, time-varying, and non-Euclidean nature of EEG signals. However, the existing single-channel EEG adjacency graph construction process lacks interpretability, which hinders the ability of GCNs to effectively extract adjacency graph features, thus affecting the performance of drowsiness monitoring. To address this issue, we propose an edge-end lightweight dual graph convolutional network (LDGCN). Specifically, we are the first to incorporate neurophysiological knowledge to design a Baseline Drowsiness Status Adjacency Graph (BDSAG), which characterizes driver drowsiness status. Additionally, to express more features within limited EEG data, we introduce the Augmented Graph-level Module (AGM). This module captures global and local information at the graph level, ensuring that BDSAG features remain intact while enhancing effective feature expression capability. Furthermore, to deploy our method on the fourth-generation Raspberry Pi, we utilize Adaptive Pruning Optimization (APO) on both channels and neurons, reducing inference latency by almost half. Experiments on benchmark datasets demonstrate that LDGCN offers the best trade-off between monitoring performance and hardware resource utilization compared to existing state-of-the-art algorithms. All our source code can be found at https://github.com/BryantDom/Driver-Drowsiness-Monitoring. | [
"['Jingwei Huang' 'Chuansheng Wang' 'Jiayan Huang' 'Haoyi Fan'\n 'Antoni Grau' 'Fuquan Zhang']"
] |
null | null | 2407.05781 | null | null | http://arxiv.org/pdf/2407.05781v1 | 2024-07-08T09:41:42Z | 2024-07-08T09:41:42Z | Regret Analysis of Multi-task Representation Learning for
Linear-Quadratic Adaptive Control | Representation learning is a powerful tool that enables learning over large multitudes of agents or domains by enforcing that all agents operate on a shared set of learned features. However, many robotics or controls applications that would benefit from collaboration operate in settings with changing environments and goals, whereas most guarantees for representation learning are stated for static settings. Toward rigorously establishing the benefit of representation learning in dynamic settings, we analyze the regret of multi-task representation learning for linear-quadratic control. This setting introduces unique challenges. Firstly, we must account for and balance the $\textit{misspecification}$ introduced by an approximate representation. Secondly, we cannot rely on the parameter update schemes of single-task online LQR, for which least-squares often suffices, and must devise a novel scheme to ensure sufficient improvement. We demonstrate that for settings where exploration is "benign", the regret of any agent after $T$ timesteps scales as $\tilde O(\sqrt{T/H})$, where $H$ is the number of agents. In settings with "difficult" exploration, the regret scales as $\tilde{\mathcal O}(\sqrt{d_u d_\theta} \sqrt{T} + T^{3/4}/H^{1/5})$, where $d_x$ is the state-space dimension, $d_u$ is the input dimension, and $d_\theta$ is the task-specific parameter count. In both cases, by comparing to the minimax single-task regret $\tilde{\mathcal O}(\sqrt{d_x d_u^2}\sqrt{T})$, we see a benefit of a large number of agents. Notably, in the difficult exploration case, by sharing a representation across tasks, the effective task-specific parameter count can often be small $d_\theta < d_x d_u$. Lastly, we provide numerical validation of the trends we predict. | [
"['Bruce D. Lee' 'Leonardo F. Toso' 'Thomas T. Zhang' 'James Anderson'\n 'Nikolai Matni']"
] |
null | null | 2407.05782 | null | null | http://arxiv.org/pdf/2407.05782v1 | 2024-07-08T09:45:20Z | 2024-07-08T09:45:20Z | Sequential Contrastive Audio-Visual Learning | Contrastive learning has emerged as a powerful technique in audio-visual representation learning, leveraging the natural co-occurrence of audio and visual modalities in extensive web-scale video datasets to achieve significant advancements. However, conventional contrastive audio-visual learning methodologies often rely on aggregated representations derived through temporal aggregation, which neglects the intrinsic sequential nature of the data. This oversight raises concerns regarding the ability of standard approaches to capture and utilize fine-grained information within sequences, information that is vital for distinguishing between semantically similar yet distinct examples. In response to this limitation, we propose sequential contrastive audio-visual learning (SCAV), which contrasts examples based on their non-aggregated representation space using sequential distances. Retrieval experiments with the VGGSound and Music datasets demonstrate the effectiveness of SCAV, showing 2-3x relative improvements against traditional aggregation-based contrastive learning and other methods from the literature. We also show that models trained with SCAV exhibit a high degree of flexibility regarding the metric employed for retrieval, allowing them to operate on a spectrum of efficiency-accuracy trade-offs, potentially making them applicable in multiple scenarios, from small- to large-scale retrieval. | [
"['Ioannis Tsiamas' 'Santiago Pascual' 'Chunghsin Yeh' 'Joan Serrà']"
] |
null | null | 2407.05788 | null | null | http://arxiv.org/pdf/2407.05788v1 | 2024-07-08T09:49:38Z | 2024-07-08T09:49:38Z | Automated Computational Energy Minimization of ML Algorithms using
Constrained Bayesian Optimization | Bayesian optimization (BO) is an efficient framework for optimization of black-box objectives when function evaluations are costly and gradient information is not easily accessible. BO has been successfully applied to automate the task of hyperparameter optimization (HPO) in machine learning (ML) models with the primary objective of optimizing predictive performance on held-out data. In recent years, however, with ever-growing model sizes, the energy cost associated with model training has become an important factor for ML applications. Here we evaluate Constrained Bayesian Optimization (CBO) with the primary objective of minimizing energy consumption and subject to the constraint that the generalization performance is above some threshold. We evaluate our approach on regression and classification tasks and demonstrate that CBO achieves lower energy consumption without compromising the predictive performance of ML models. | [
"['Pallavi Mitra' 'Felix Biessmann']"
] |
null | null | 2407.05789 | null | null | http://arxiv.org/pdf/2407.05789v1 | 2024-07-08T09:51:02Z | 2024-07-08T09:51:02Z | CANDID DAC: Leveraging Coupled Action Dimensions with Importance
Differences in DAC | High-dimensional action spaces remain a challenge for dynamic algorithm configuration (DAC). Interdependencies and varying importance between action dimensions are further known key characteristics of DAC problems. We argue that these Coupled Action Dimensions with Importance Differences (CANDID) represent aspects of the DAC problem that are not yet fully explored. To address this gap, we introduce a new white-box benchmark within the DACBench suite that simulates the properties of CANDID. Further, we propose sequential policies as an effective strategy for managing these properties. Such policies factorize the action space and mitigate exponential growth by learning a policy per action dimension. At the same time, these policies accommodate the interdependence of action dimensions by fostering implicit coordination. We show this in an experimental study of value-based policies on our new benchmark. This study demonstrates that sequential policies significantly outperform independent learning of factorized policies in CANDID action spaces. In addition, they overcome the scalability limitations associated with learning a single policy across all action dimensions. The code used for our experiments is available under https://github.com/PhilippBordne/candidDAC. | [
"['Philipp Bordne' 'M. Asif Hasan' 'Eddie Bergman' 'Noor Awad'\n 'André Biedenkapp']"
] |
null | null | 2407.05793 | null | null | http://arxiv.org/pdf/2407.05793v1 | 2024-07-08T09:55:31Z | 2024-07-08T09:55:31Z | A Primal-Dual Online Learning Approach for Dynamic Pricing of
Sequentially Displayed Complementary Items under Sale Constraints | We address the challenging problem of dynamically pricing complementary items that are sequentially displayed to customers. An illustrative example is the online sale of flight tickets, where customers navigate through multiple web pages. Initially, they view the ticket cost, followed by ancillary expenses such as insurance and additional luggage fees. Coherent pricing policies for complementary items are essential because optimizing the pricing of each item individually is ineffective. Our scenario also involves a sales constraint, which specifies a minimum number of items to sell, and uncertainty regarding customer demand curves. To tackle this problem, we originally formulate it as a Markov Decision Process with constraints. Leveraging online learning tools, we design a primal-dual online optimization algorithm. We empirically evaluate our approach using synthetic settings randomly generated from real-world data, covering various configurations from stationary to non-stationary, and compare its performance in terms of constraints violation and regret against well-known baselines optimizing each state singularly. | [
"['Francesco Emanuele Stradi' 'Filippo Cipriani' 'Lorenzo Ciampiconi'\n 'Marco Leonardi' 'Alessandro Rozza' 'Nicola Gatti']"
] |
null | null | 2407.05800 | null | null | http://arxiv.org/pdf/2407.05800v1 | 2024-07-08T10:10:07Z | 2024-07-08T10:10:07Z | FedMRL: Data Heterogeneity Aware Federated Multi-agent Deep
Reinforcement Learning for Medical Imaging | Despite recent advancements in federated learning (FL) for medical image diagnosis, addressing data heterogeneity among clients remains a significant challenge for practical implementation. A primary hurdle in FL arises from the non-IID nature of data samples across clients, which typically results in a decline in the performance of the aggregated global model. In this study, we introduce FedMRL, a novel federated multi-agent deep reinforcement learning framework designed to address data heterogeneity. FedMRL incorporates a novel loss function to facilitate fairness among clients, preventing bias in the final global model. Additionally, it employs a multi-agent reinforcement learning (MARL) approach to calculate the proximal term $(\mu)$ for the personalized local objective function, ensuring convergence to the global optimum. Furthermore, FedMRL integrates an adaptive weight adjustment method using a Self-organizing map (SOM) on the server side to counteract distribution shifts among clients' local data distributions. We assess our approach using two publicly available real-world medical datasets, and the results demonstrate that FedMRL significantly outperforms state-of-the-art techniques, showing its efficacy in addressing data heterogeneity in federated learning. The code can be found here: \url{https://github.com/Pranabiitp/FedMRL}. | [
"['Pranab Sahoo' 'Ashutosh Tripathi' 'Sriparna Saha' 'Samrat Mondal']"
] |
null | null | 2407.05816 | null | null | http://arxiv.org/pdf/2407.05816v1 | 2024-07-08T10:53:49Z | 2024-07-08T10:53:49Z | Graph Reasoning Networks | Graph neural networks (GNNs) are the predominant approach for graph-based machine learning. While neural networks have shown great performance at learning useful representations, they are often criticized for their limited high-level reasoning abilities. In this work, we present Graph Reasoning Networks (GRNs), a novel approach to combine the strengths of fixed and learned graph representations and a reasoning module based on a differentiable satisfiability solver. While results on real-world datasets show comparable performance to GNN, experiments on synthetic datasets demonstrate the potential of the newly proposed method. | [
"['Markus Zopf' 'Francesco Alesiani']"
] |
null | null | 2407.05832 | null | null | http://arxiv.org/pdf/2407.05832v2 | 2024-07-11T09:10:09Z | 2024-07-08T11:25:30Z | A Machine Learning Approach to Detecting Albedo Anomalies on the Lunar
Surface | This study introduces a data-driven approach using machine learning (ML) techniques to explore and predict albedo anomalies on the Moon's surface. The research leverages diverse planetary datasets, including high-spatial-resolution albedo maps and element maps (LPFe, LPK, LPTh, LPTi) derived from laser and gamma-ray measurements. The primary objective is to identify relationships between chemical elements and albedo, thereby expanding our understanding of planetary surfaces and offering predictive capabilities for areas with incomplete datasets. To bridge the gap in resolution between the albedo and element maps, we employ Gaussian blurring techniques, including an innovative adaptive Gaussian blur. Our methodology culminates in the deployment of an Extreme Gradient Boosting Regression Model, optimized to predict full albedo based on elemental composition. Furthermore, we present an interactive analytical tool to visualize prediction errors, delineating their spatial and chemical characteristics. The findings not only pave the way for a more comprehensive understanding of the Moon's surface but also provide a framework for similar studies on other celestial bodies. | [
"['Sofia Strukova' 'Sergei Gleyzer' 'Patrick Peplowski' 'Jason P. Terry']"
] |
null | null | 2407.05841 | null | null | http://arxiv.org/pdf/2407.05841v1 | 2024-07-08T11:38:49Z | 2024-07-08T11:38:49Z | An Empirical Comparison of Vocabulary Expansion and Initialization
Approaches for Language Models | Language Models (LMs) excel in natural language processing tasks for English but show reduced performance in most other languages. This problem is commonly tackled by continually pre-training and fine-tuning these models for said languages. A significant issue in this process is the limited vocabulary coverage in the original model's tokenizer, leading to inadequate representation of new languages and necessitating an expansion of the tokenizer. The initialization of the embeddings corresponding to new vocabulary items presents a further challenge. Current strategies require cross-lingual embeddings and lack a solid theoretical foundation as well as comparisons with strong baselines. In this paper, we first establish theoretically that initializing within the convex hull of existing embeddings is a good initialization, followed by a novel but simple approach, Constrained Word2Vec (CW2V), which does not require cross-lingual embeddings. Our study evaluates different initialization methods for expanding RoBERTa and LLaMA 2 across four languages and five tasks. The results show that CW2V performs equally well or even better than more advanced techniques. Additionally, simpler approaches like multivariate initialization perform on par with these advanced methods indicating that efficient large-scale multilingual continued pretraining can be achieved even with simpler initialization methods. | [
"['Nandini Mundra' 'Aditya Nanda Kishore' 'Raj Dabre' 'Ratish Puduppully'\n 'Anoop Kunchukuttan' 'Mitesh M. Khapra']"
] |
null | null | 2407.05864 | null | null | http://arxiv.org/abs/2407.05864v1 | 2024-07-08T12:29:29Z | 2024-07-08T12:29:29Z | Neural Network-based Information Set Weighting for Playing
Reconnaissance Blind Chess | In imperfect information games, the game state is generally not fully observable to players. Therefore, good gameplay requires policies that deal with the different information that is hidden from each player. To combat this, effective algorithms often reason about information sets; the sets of all possible game states that are consistent with a player's observations. While there is no way to distinguish between the states within an information set, this property does not imply that all states are equally likely to occur in play. We extend previous research on assigning weights to the states in an information set in order to facilitate better gameplay in the imperfect information game of Reconnaissance Blind Chess. For this, we train two different neural networks which estimate the likelihood of each state in an information set from historical game data. Experimentally, we find that a Siamese neural network is able to achieve higher accuracy and is more efficient than a classical convolutional neural network for the given domain. Finally, we evaluate an RBC-playing agent that is based on the generated weightings and compare different parameter settings that influence how strongly it should rely on them. The resulting best player is ranked 5th on the public leaderboard. | [
"['Timo Bertram' 'Johannes Fürnkranz' 'Martin Müller']"
] |
null | null | 2407.05872 | null | null | http://arxiv.org/pdf/2407.05872v1 | 2024-07-08T12:32:51Z | 2024-07-08T12:32:51Z | Scaling Exponents Across Parameterizations and Optimizers | Robust and effective scaling of models from small to large width typically requires the precise adjustment of many algorithmic and architectural details, such as parameterization and optimizer choices. In this work, we propose a new perspective on parameterization by investigating a key assumption in prior work about the alignment between parameters and data and derive new theoretical results under weaker assumptions and a broader set of optimizers. Our extensive empirical investigation includes tens of thousands of models trained with all combinations of three optimizers, four parameterizations, several alignment assumptions, more than a dozen learning rates, and fourteen model sizes up to 26.8B parameters. We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work. Our results show that all parameterizations, not just maximal update parameterization (muP), can achieve hyperparameter transfer; moreover, our novel per-layer learning rate prescription for standard parameterization outperforms muP. Finally, we demonstrate that an overlooked aspect of parameterization, the epsilon parameter in Adam, must be scaled correctly to avoid gradient underflow and propose Adam-atan2, a new numerically stable, scale-invariant version of Adam that eliminates the epsilon hyperparameter entirely. | [
"['Katie Everett' 'Lechao Xiao' 'Mitchell Wortsman' 'Alexander A. Alemi'\n 'Roman Novak' 'Peter J. Liu' 'Izzeddin Gur' 'Jascha Sohl-Dickstein'\n 'Leslie Pack Kaelbling' 'Jaehoon Lee' 'Jeffrey Pennington']"
] |
null | null | 2407.05876 | null | null | http://arxiv.org/pdf/2407.05876v1 | 2024-07-08T12:37:07Z | 2024-07-08T12:37:07Z | Efficiently Training Neural Networks for Imperfect Information Games by
Sampling Information Sets | In imperfect information games, the evaluation of a game state not only depends on the observable world but also relies on hidden parts of the environment. As accessing the obstructed information trivialises state evaluations, one approach to tackle such problems is to estimate the value of the imperfect state as a combination of all states in the information set, i.e., all possible states that are consistent with the current imperfect information. In this work, the goal is to learn a function that maps from the imperfect game information state to its expected value. However, constructing a perfect training set, i.e. an enumeration of the whole information set for numerous imperfect states, is often infeasible. To compute the expected values for an imperfect information game like \textit{Reconnaissance Blind Chess}, one would need to evaluate thousands of chess positions just to obtain the training target for a single state. Still, the expected value of a state can already be approximated with appropriate accuracy from a much smaller set of evaluations. Thus, in this paper, we empirically investigate how a budget of perfect information game evaluations should be distributed among training samples to maximise the return. Our results show that sampling a small number of states, in our experiments roughly 3, for a larger number of separate positions is preferable over repeatedly sampling a smaller quantity of states. Thus, we find that in our case, the quantity of different samples seems to be more important than higher target quality. | [
"['Timo Bertram' 'Johannes Fürnkranz' 'Martin Müller']"
] |
null | null | 2407.05887 | null | null | http://arxiv.org/pdf/2407.05887v1 | 2024-07-08T12:47:03Z | 2024-07-08T12:47:03Z | Generation and De-Identification of Indian Clinical Discharge Summaries
using LLMs | The consequences of a healthcare data breach can be devastating for the patients, providers, and payers. The average financial impact of a data breach in recent months has been estimated to be close to USD 10 million. This is especially significant for healthcare organizations in India that are managing rapid digitization while still establishing data governance procedures that align with the letter and spirit of the law. Computer-based systems for de-identification of personal information are vulnerable to data drift, often rendering them ineffective in cross-institution settings. Therefore, a rigorous assessment of existing de-identification against local health datasets is imperative to support the safe adoption of digital health initiatives in India. Using a small set of de-identified patient discharge summaries provided by an Indian healthcare institution, in this paper, we report the nominal performance of de-identification algorithms (based on language models) trained on publicly available non-Indian datasets, pointing towards a lack of cross-institutional generalization. Similarly, experimentation with off-the-shelf de-identification systems reveals potential risks associated with the approach. To overcome data scarcity, we explore generating synthetic clinical reports (using publicly available and Indian summaries) by performing in-context learning over Large Language Models (LLMs). Our experiments demonstrate the use of generated reports as an effective strategy for creating high-performing de-identification systems with good generalization capabilities. | [
"['Sanjeet Singh' 'Shreya Gupta' 'Niralee Gupta' 'Naimish Sharma'\n 'Lokesh Srivastava' 'Vibhu Agarwal' 'Ashutosh Modi']"
] |
null | null | 2407.05895 | null | null | http://arxiv.org/pdf/2407.05895v1 | 2024-07-08T13:01:53Z | 2024-07-08T13:01:53Z | Link Representation Learning for Probabilistic Travel Time Estimation | Travel time estimation is a crucial application in navigation apps and web mapping services. Current deterministic and probabilistic methods primarily focus on modeling individual trips, assuming independence among trips. However, in real-world scenarios, we often observe strong inter-trip correlations due to factors such as weather conditions, traffic management, and road works. In this paper, we propose to model trip-level link travel time using a Gaussian hierarchical model, which can characterize both inter-trip and intra-trip correlations. The joint distribution of travel time of multiple trips becomes a multivariate Gaussian parameterized by learnable link representations. To effectively use the sparse GPS trajectories, we also propose a data augmentation method based on trip sub-sampling, which allows for fine-grained gradient backpropagation in learning link representations. During inference, we estimate the probability distribution of the travel time of a queried trip conditional on the completed trips that are spatiotemporally adjacent. We refer to the overall framework as ProbTTE. We evaluate ProbTTE on two real-world GPS trajectory datasets, and the results demonstrate its superior performance compared to state-of-the-art deterministic and probabilistic baselines. Additionally, we find that the learned link representations align well with the physical geometry of the network, making them suitable as input for other applications. | [
"['Chen Xu' 'Qiang Wang' 'Lijun Sun']"
] |
null | null | 2407.05919 | null | null | http://arxiv.org/pdf/2407.05919v1 | 2024-07-08T13:25:28Z | 2024-07-08T13:25:28Z | Fostering Trust and Quantifying Value of AI and ML | Artificial Intelligence (AI) and Machine Learning (ML) providers have a responsibility to develop valid and reliable systems. Much has been discussed about trusting AI and ML inferences (the process of running live data through a trained AI model to make a prediction or solve a task), but little has been done to define what that means. Those in the space of ML- based products are familiar with topics such as transparency, explainability, safety, bias, and so forth. Yet, there are no frameworks to quantify and measure those. Producing ever more trustworthy machine learning inferences is a path to increase the value of products (i.e., increased trust in the results) and to engage in conversations with users to gather feedback to improve products. In this paper, we begin by examining the dynamic of trust between a provider (Trustor) and users (Trustees). Trustors are required to be trusting and trustworthy, whereas trustees need not be trusting nor trustworthy. The challenge for trustors is to provide results that are good enough to make a trustee increase their level of trust above a minimum threshold for: 1- doing business together; 2- continuation of service. We conclude by defining and proposing a framework, and a set of viable metrics, to be used for computing a trust score and objectively understand how trustworthy a machine learning system can claim to be, plus their behavior over time. | [
"['Dalmo Cirne' 'Veena Calambur']"
] |
null | null | 2407.05920 | null | null | http://arxiv.org/pdf/2407.05920v1 | 2024-07-08T13:27:41Z | 2024-07-08T13:27:41Z | LPGD: A General Framework for Backpropagation through Embedded
Optimization Layers | Embedding parameterized optimization problems as layers into machine learning architectures serves as a powerful inductive bias. Training such architectures with stochastic gradient descent requires care, as degenerate derivatives of the embedded optimization problem often render the gradients uninformative. We propose Lagrangian Proximal Gradient Descent (LPGD), a flexible framework for training architectures with embedded optimization layers that seamlessly integrates into automatic differentiation libraries. LPGD efficiently computes meaningful replacements of the degenerate optimization layer derivatives by re-running the forward solver oracle on a perturbed input. LPGD captures various previously proposed methods as special cases, while fostering deep links to traditional optimization methods. We theoretically analyze our method and demonstrate on historical and synthetic data that LPGD converges faster than gradient descent even in a differentiable setup. | [
"['Anselm Paulus' 'Georg Martius' 'Vít Musil']"
] |
null | null | 2407.05921 | null | null | http://arxiv.org/pdf/2407.05921v1 | 2024-07-08T13:28:47Z | 2024-07-08T13:28:47Z | TAPVid-3D: A Benchmark for Tracking Any Point in 3D | We introduce a new benchmark, TAPVid-3D, for evaluating the task of long-range Tracking Any Point in 3D (TAP-3D). While point tracking in two dimensions (TAP) has many benchmarks measuring performance on real-world videos, such as TAPVid-DAVIS, three-dimensional point tracking has none. To this end, leveraging existing footage, we build a new benchmark for 3D point tracking featuring 4,000+ real-world videos, composed of three different data sources spanning a variety of object types, motion patterns, and indoor and outdoor environments. To measure performance on the TAP-3D task, we formulate a collection of metrics that extend the Jaccard-based metric used in TAP to handle the complexities of ambiguous depth scales across models, occlusions, and multi-track spatio-temporal smoothness. We manually verify a large sample of trajectories to ensure correct video annotations, and assess the current state of the TAP-3D task by constructing competitive baselines using existing tracking models. We anticipate this benchmark will serve as a guidepost to improve our ability to understand precise 3D motion and surface deformation from monocular video. Code for dataset download, generation, and model evaluation is available at https://tapvid3d.github.io | [
"['Skanda Koppula' 'Ignacio Rocco' 'Yi Yang' 'Joe Heyward' 'João Carreira'\n 'Andrew Zisserman' 'Gabriel Brostow' 'Carl Doersch']"
] |
null | null | 2407.05934 | null | null | http://arxiv.org/pdf/2407.05934v1 | 2024-07-08T13:41:21Z | 2024-07-08T13:41:21Z | Graph Anomaly Detection with Noisy Labels by Reinforcement Learning | Graph anomaly detection (GAD) has been widely applied in many areas, e.g., fraud detection in finance and robot accounts in social networks. Existing methods are dedicated to identifying the outlier nodes that deviate from normal ones. However, these methods rely heavily on high-quality annotations, which are hard to obtain in real-world scenarios, so noisy labels can severely degrade their performance. Thus, we are motivated to cut the edges of suspicious nodes to alleviate the impact of noise. However, it remains difficult to precisely identify the nodes with noisy labels. Moreover, it is hard to quantitatively evaluate the regret of cutting the edges, which may have either positive or negative influences. To this end, we propose a novel framework REGAD, i.e., REinforced Graph Anomaly Detector. Specifically, we aim to maximize the performance improvement (AUC) of a base detector by cutting noisy edges approximated through the nodes with high-confidence labels. (i) We design a tailored action and search space to train a policy network to carefully prune edges step by step, where only a few suspicious edges are prioritized in each step. (ii) We design a policy-in-the-loop mechanism to iteratively optimize the policy based on feedback from the base detector. The overall performance is evaluated by the cumulative rewards. Extensive experiments are conducted on three datasets under different anomaly ratios. The results indicate the superior performance of our proposed REGAD. | [
"['Zhu Wang' 'Shuang Zhou' 'Junnan Dong' 'Chang Yang' 'Xiao Huang'\n 'Shengjie Zhao']"
] |
null | null | 2407.05941 | null | null | http://arxiv.org/pdf/2407.05941v1 | 2024-07-01T17:42:40Z | 2024-07-01T17:42:40Z | Reducing Vision Transformer Latency on Edge Devices via GPU Tail Effect
and Training-free Token Pruning | This paper investigates how to efficiently deploy transformer-based neural networks on edge devices. Recent methods reduce the latency of transformer neural networks by removing or merging tokens, with small accuracy degradation. However, these methods are not designed with edge device deployment in mind, and do not leverage information about the hardware characteristics to improve efficiency. First, we show that the relationship between latency and workload size is governed by the GPU tail-effect. This relationship is used to create a token pruning schedule tailored for a pre-trained model and device pair. Second, we demonstrate a training-free token pruning method utilizing this relationship. This method achieves accuracy-latency trade-offs in a hardware aware manner. We show that for single batch inference, other methods may actually increase latency by 18.6-30.3% with respect to baseline, while we can reduce it by 9%. For similar latency (within 5.2%) across devices we achieve 78.6%-84.5% ImageNet1K accuracy, while the state-of-the-art, Token Merging, achieves 45.8%-85.4%. | [
"['Nick John Eliopoulos' 'Purvish Jajal' 'James Davis' 'Gaowen Liu'\n 'George K. Thiravathukal' 'Yung-Hsiang Lu']"
] |
null | null | 2407.05952 | null | null | http://arxiv.org/pdf/2407.05952v1 | 2024-06-29T21:24:19Z | 2024-06-29T21:24:19Z | H-STAR: LLM-driven Hybrid SQL-Text Adaptive Reasoning on Tables | Tabular reasoning involves interpreting unstructured queries against structured tables, requiring a synthesis of textual understanding and symbolic reasoning. Existing methods rely on either of the approaches and are constrained by their respective limitations. Textual reasoning excels in semantic interpretation unlike symbolic reasoning (SQL logic), but falls short in mathematical reasoning where SQL excels. In this paper, we introduce a novel algorithm H-STAR, comprising table extraction and adaptive reasoning, integrating both symbolic and semantic (text-based) approaches. To enhance evidence extraction, H-STAR employs a multi-view approach, incorporating step-by-step row and column retrieval. It also adapts reasoning strategies based on question types, utilizing symbolic reasoning for quantitative and logical tasks, and semantic reasoning for direct lookup and complex lexical queries. Our extensive experiments demonstrate that H-STAR significantly outperforms state-of-the-art methods across three tabular question-answering (QA) and fact-verification datasets, underscoring its effectiveness and efficiency. | [
"['Nikhil Abhyankar' 'Vivek Gupta' 'Dan Roth' 'Chandan K. Reddy']"
] |
null | null | 2407.05954 | null | null | http://arxiv.org/pdf/2407.05954v1 | 2024-06-30T10:40:54Z | 2024-06-30T10:40:54Z | Causality-driven Sequence Segmentation for Enhancing Multiphase
Industrial Process Data Analysis and Soft Sensing | The dynamic characteristics of multiphase industrial processes present significant challenges in the field of industrial big data modeling. Traditional soft sensing models frequently neglect the process dynamics and have difficulty in capturing transient phenomena like phase transitions. To address this issue, this article introduces a causality-driven sequence segmentation (CDSS) model. This model first identifies the local dynamic properties of the causal relationships between variables, which are also referred to as causal mechanisms. It then segments the sequence into different phases based on the sudden shifts in causal mechanisms that occur during phase transitions. Additionally, a novel metric, similarity distance, is designed to evaluate the temporal consistency of causal mechanisms, which includes both causal similarity distance and stable similarity distance. The discovered causal relationships in each phase are represented as a temporal causal graph (TCG). Furthermore, a soft sensing model called temporal-causal graph convolutional network (TC-GCN) is trained for each phase using the time-extended data and the adjacency matrix of the TCG. Numerical examples are used to validate the proposed CDSS model, and the segmentation results demonstrate that CDSS performs excellently on segmenting both stable and unstable multiphase series. In particular, it achieves higher accuracy in separating non-stationary time series than other methods. The effectiveness of the proposed CDSS model and the TC-GCN model is also verified through a penicillin fermentation process. Experimental results indicate that the breakpoints discovered by CDSS align well with the reaction mechanisms and that TC-GCN achieves excellent predictive accuracy. | [
"['Yimeng He' 'Le Yao' 'Xinmin Zhang' 'Xiangyin Kong' 'Zhihuan Song']"
] |
null | null | 2407.05965 | null | null | http://arxiv.org/pdf/2407.05965v1 | 2024-07-08T14:04:58Z | 2024-07-08T14:04:58Z | T2VSafetyBench: Evaluating the Safety of Text-to-Video Generative Models | The recent development of Sora leads to a new era in text-to-video (T2V) generation. Along with this comes the rising concern about its security risks. The generated videos may contain illegal or unethical content, and there is a lack of comprehensive quantitative understanding of their safety, posing a challenge to their reliability and practical deployment. Previous evaluations primarily focus on the quality of video generation. While some evaluations of text-to-image models have considered safety, they cover fewer aspects and do not address the unique temporal risk inherent in video generation. To bridge this research gap, we introduce T2VSafetyBench, a new benchmark designed for conducting safety-critical assessments of text-to-video models. We define 12 critical aspects of video generation safety and construct a malicious prompt dataset using LLMs and jailbreaking prompt attacks. Based on our evaluation results, we draw several important findings, including: 1) no single model excels in all aspects, with different models showing various strengths; 2) the correlation between GPT-4 assessments and manual reviews is generally high; 3) there is a trade-off between the usability and safety of text-to-video generative models. This indicates that as the field of video generation rapidly advances, safety risks are set to surge, highlighting the urgency of prioritizing video safety. We hope that T2VSafetyBench can provide insights for better understanding the safety of video generation in the era of generative AI. | [
"['Yibo Miao' 'Yifan Zhu' 'Yinpeng Dong' 'Lijia Yu' 'Jun Zhu'\n 'Xiao-Shan Gao']"
] |
null | null | 2407.05966 | null | null | http://arxiv.org/pdf/2407.05966v1 | 2024-07-08T14:05:03Z | 2024-07-08T14:05:03Z | On Bellman equations for continuous-time policy evaluation I:
discretization and approximation | We study the problem of computing the value function from a discretely-observed trajectory of a continuous-time diffusion process. We develop a new class of algorithms based on easily implementable numerical schemes that are compatible with discrete-time reinforcement learning (RL) with function approximation. We establish high-order numerical accuracy as well as the approximation error guarantees for the proposed approach. In contrast to discrete-time RL problems where the approximation factor depends on the effective horizon, we obtain a bounded approximation factor using the underlying elliptic structures, even if the effective horizon diverges to infinity. | [
"['Wenlong Mou' 'Yuhua Zhu']"
] |
null | null | 2407.05973 | null | null | http://arxiv.org/pdf/2407.05973v1 | 2024-07-08T14:16:05Z | 2024-07-08T14:16:05Z | Active Label Refinement for Robust Training of Imbalanced Medical Image
Classification Tasks in the Presence of High Label Noise | The robustness of supervised deep learning-based medical image classification is significantly undermined by label noise. Although several methods have been proposed to enhance classification performance in the presence of noisy labels, they face some challenges: 1) a struggle with class-imbalanced datasets, leading to the frequent overlooking of minority classes as noisy samples; 2) a singular focus on maximizing performance using noisy datasets, without incorporating experts-in-the-loop for actively cleaning the noisy labels. To mitigate these challenges, we propose a two-phase approach that combines Learning with Noisy Labels (LNL) and active learning. This approach not only improves the robustness of medical image classification in the presence of noisy labels, but also iteratively improves the quality of the dataset by relabeling the important incorrect labels, under a limited annotation budget. Furthermore, we introduce a novel Variance of Gradients approach in the LNL phase, which complements the loss-based sample selection by also sampling under-represented samples. Using two imbalanced noisy medical classification datasets, we demonstrate that our proposed technique is superior to its predecessors at handling class imbalance by not misidentifying clean samples from minority classes as mostly noisy samples. | [
"['Bidur Khanal' 'Tianhong Dai' 'Binod Bhattarai' 'Cristian Linte']"
] |
null | null | 2407.05982 | null | null | http://arxiv.org/pdf/2407.05982v1 | 2024-07-08T14:25:39Z | 2024-07-08T14:25:39Z | MTL-Split: Multi-Task Learning for Edge Devices using Split Computing | Split Computing (SC), where a Deep Neural Network (DNN) is intelligently split with a part of it deployed on an edge device and the rest on a remote server is emerging as a promising approach. It allows the power of DNNs to be leveraged for latency-sensitive applications that do not allow the entire DNN to be deployed remotely, while not having sufficient computation bandwidth available locally. In many such embedded systems scenarios, such as those in the automotive domain, computational resource constraints also necessitate Multi-Task Learning (MTL), where the same DNN is used for multiple inference tasks instead of having dedicated DNNs for each task, which would need more computing bandwidth. However, how to partition such a multi-tasking DNN to be deployed within a SC framework has not been sufficiently studied. This paper studies this problem, and MTL-Split, our novel proposed architecture, shows encouraging results on both synthetic and real-world data. The source code is available at https://github.com/intelligolabs/MTL-Split. | [
"['Luigi Capogrosso' 'Enrico Fraccaroli' 'Samarjit Chakraborty'\n 'Franco Fummi' 'Marco Cristani']"
] |
null | null | 2407.05986 | null | null | http://arxiv.org/pdf/2407.05986v1 | 2024-07-08T14:26:30Z | 2024-07-08T14:26:30Z | KidSat: satellite imagery to map childhood poverty dataset and benchmark | Satellite imagery has emerged as an important tool to analyse demographic, health, and development indicators. While various deep learning models have been built for these tasks, each is specific to a particular problem, with few standard benchmarks available. We propose a new dataset pairing satellite imagery and high-quality survey data on child poverty to benchmark satellite feature representations. Our dataset consists of 33,608 images, each 10 km $\times$ 10 km, from 19 countries in Eastern and Southern Africa in the time period 1997-2022. As defined by UNICEF, multidimensional child poverty covers six dimensions and it can be calculated from the face-to-face Demographic and Health Surveys (DHS) Program. As part of the benchmark, we test spatial as well as temporal generalization, by testing on unseen locations, and on data after the training years. Using our dataset we benchmark multiple models, from low-level satellite imagery models such as MOSAIKS, to deep learning foundation models, which include both generic vision models such as Self-Distillation with no Labels (DINOv2) models and specific satellite imagery models such as SatMAE. We provide open source code for building the satellite dataset, obtaining ground truth data from DHS and running various models assessed in our work. | [
"['Makkunda Sharma' 'Fan Yang' 'Duy-Nhat Vo' 'Esra Suel' 'Swapnil Mishra'\n 'Samir Bhatt' 'Oliver Fiala' 'William Rudgard' 'Seth Flaxman']"
] |
null | null | 2407.06015 | null | null | http://arxiv.org/pdf/2407.06015v1 | 2024-07-08T15:06:03Z | 2024-07-08T15:06:03Z | Simulation-based Benchmarking for Causal Structure Learning in Gene
Perturbation Experiments | Causal structure learning (CSL) refers to the task of learning causal relationships from data. Advances in CSL now allow learning of causal graphs in diverse application domains, which has the potential to facilitate data-driven causal decision-making. Real-world CSL performance depends on a number of $\textit{context-specific}$ factors, including context-specific data distributions and non-linear dependencies, that are important in practical use-cases. However, our understanding of how to assess and select CSL methods in specific contexts remains limited. To address this gap, we present $\textit{CausalRegNet}$, a multiplicative effect structural causal model that allows for generating observational and interventional data incorporating context-specific properties, with a focus on the setting of gene perturbation experiments. Using real-world gene perturbation data, we show that CausalRegNet generates accurate distributions and scales far better than current simulation frameworks. We illustrate the use of CausalRegNet in assessing CSL methods in the context of interventional experiments in biology. | [
"['Luka Kovačević' 'Izzy Newsham' 'Sach Mukherjee' 'John Whittaker']"
] |
null | null | 2407.06018 | null | null | http://arxiv.org/pdf/2407.06018v1 | 2024-07-08T15:08:41Z | 2024-07-08T15:08:41Z | Leveraging Transformers for Weakly Supervised Object Localization in
Unconstrained Videos | Weakly-Supervised Video Object Localization (WSVOL) involves localizing an object in videos using only video-level labels, also referred to as tags. State-of-the-art WSVOL methods like Temporal CAM (TCAM) rely on class activation mapping (CAM) and typically require a pre-trained CNN classifier. However, their localization accuracy is affected by their tendency to minimize the mutual information between different instances of a class and exploit temporal information during training for downstream tasks, e.g., detection and tracking. In the absence of bounding box annotation, it is challenging to exploit precise information about objects from temporal cues because the model struggles to locate objects over time. To address these issues, a novel method called transformer-based CAM for videos (TrCAM-V) is proposed for WSVOL. It consists of a DeiT backbone with two heads for classification and localization. The classification head is trained using standard classification loss (CL), while the localization head is trained using pseudo-labels that are extracted using a pre-trained CLIP model. From these pseudo-labels, the high and low activation values are considered to be foreground and background regions, respectively. Our TrCAM-V method allows training a localization network by sampling pseudo-pixels on the fly from these regions. Additionally, a conditional random field (CRF) loss is employed to align the object boundaries with the foreground map. During inference, the model can process individual frames for real-time localization applications. Extensive experiments on challenging YouTube-Objects unconstrained video datasets show that our TrCAM-V method achieves new state-of-the-art performance in terms of classification and localization accuracy. | [
"['Shakeeb Murtaza' 'Marco Pedersoli' 'Aydin Sarraf' 'Eric Granger']"
] |
null | null | 2407.06053 | null | null | http://arxiv.org/pdf/2407.06053v2 | 2024-07-10T15:20:10Z | 2024-07-08T15:55:12Z | Learning local equivariant representations for quantum operators | Predicting quantum operator matrices such as Hamiltonian, overlap, and density matrices in the density functional theory (DFT) framework is crucial for understanding material properties. Current methods often focus on individual operators and struggle with efficiency and scalability for large systems. Here we introduce a novel deep learning model, SLEM (strictly localized equivariant message-passing) for predicting multiple quantum operators, that achieves state-of-the-art accuracy while dramatically improving computational efficiency. SLEM's key innovation is its strict locality-based design, constructing local, equivariant representations for quantum tensors while preserving physical symmetries. This enables complex many-body dependence without expanding the effective receptive field, leading to superior data efficiency and transferability. Using an innovative SO(2) convolution technique, SLEM reduces the computational complexity of high-order tensor products and is therefore capable of handling systems requiring the $f$ and $g$ orbitals in their basis sets. We demonstrate SLEM's capabilities across diverse 2D and 3D materials, achieving high accuracy even with limited training data. SLEM's design facilitates efficient parallelization, potentially extending DFT simulations to systems with device-level sizes, opening new possibilities for large-scale quantum simulations and high-throughput materials discovery. | [
"['Zhanghao Zhouyin' 'Zixi Gan' 'Shishir Kumar Pandey' 'Linfeng Zhang'\n 'Qiangqiang Gu']"
] |
null | null | 2407.06057 | null | null | http://arxiv.org/pdf/2407.06057v1 | 2024-07-08T15:59:44Z | 2024-07-08T15:59:44Z | Variational Best-of-N Alignment | Best-of-N (BoN) is a popular and effective algorithm for aligning language models to human preferences. The algorithm works as follows: at inference time, N samples are drawn from the language model, and the sample with the highest reward, as judged by a reward model, is returned as the output. Despite its effectiveness, BoN is computationally expensive; it reduces sampling throughput by a factor of N. To make BoN more efficient at inference time, one strategy is to fine-tune the language model to mimic what BoN does during inference. To achieve this, we derive the distribution induced by the BoN algorithm. We then propose to fine-tune the language model to minimize backward KL divergence to the BoN distribution. Our approach is analogous to mean-field variational inference and, thus, we term it variational BoN (vBoN). To the extent this fine-tuning is successful and we end up with a good approximation, we have reduced the inference cost by a factor of N. Our experiments on a controlled generation task suggest that while variational BoN is not as effective as BoN in aligning language models, it is close to BoN performance as vBoN appears more often on the Pareto frontier of reward and KL divergence compared to models trained with KL-constrained RL objective. | [
"['Afra Amini' 'Tim Vieira' 'Ryan Cotterell']"
] |
null | null | 2407.06060 | null | null | http://arxiv.org/pdf/2407.06060v1 | 2024-07-08T16:01:04Z | 2024-07-08T16:01:04Z | MERGE -- A Bimodal Dataset for Static Music Emotion Recognition | The Music Emotion Recognition (MER) field has seen steady developments in recent years, with contributions from feature engineering, machine learning, and deep learning. The landscape has also shifted from audio-centric systems to bimodal ensembles that combine audio and lyrics. However, a severe lack of public and sizeable bimodal databases has hampered the development and improvement of bimodal audio-lyrics systems. This article proposes three new audio, lyrics, and bimodal MER research datasets, collectively called MERGE, created using a semi-automatic approach. To comprehensively assess the proposed datasets and establish a baseline for benchmarking, we conducted several experiments for each modality, using feature engineering, machine learning, and deep learning methodologies. In addition, we propose and validate fixed train-validate-test splits. The obtained results confirm the viability of the proposed datasets, achieving the best overall result of 79.21% F1-score for bimodal classification using a deep neural network. | [
"['Pedro Lima Louro' 'Hugo Redinho' 'Ricardo Santos' 'Ricardo Malheiro'\n 'Renato Panda' 'Rui Pedro Paiva']"
] |
null | null | 2407.06083 | null | null | http://arxiv.org/pdf/2407.06083v1 | 2024-07-04T09:50:50Z | 2024-07-04T09:50:50Z | A Survey of Controllable Learning: Methods and Applications in
Information Retrieval | Controllable learning (CL) emerges as a critical component in trustworthy machine learning, ensuring that learners meet predefined targets and can adaptively adjust without retraining according to the changes in those targets. We provide a formal definition of CL, and discuss its applications in information retrieval (IR) where information needs are often complex and dynamic. The survey categorizes CL according to who controls (users or platforms), what is controllable (e.g., retrieval objectives, users' historical behaviors, controllable environmental adaptation), how control is implemented (e.g., rule-based method, Pareto optimization, Hypernetwork), and where to implement control (e.g., pre-processing, in-processing, post-processing methods). Then, we identify challenges faced by CL across training, evaluation, task setting, and deployment in online environments. Additionally, we outline promising directions for CL in theoretical analysis, efficient computation, empowering large language models, application scenarios and evaluation frameworks in IR. | [
"['Chenglei Shen' 'Xiao Zhang' 'Teng Shi' 'Changshuo Zhang' 'Guofu Xie'\n 'Jun Xu']"
] |
null | null | 2407.06085 | null | null | http://arxiv.org/pdf/2407.06085v1 | 2024-07-03T09:59:27Z | 2024-07-03T09:59:27Z | LLMcap: Large Language Model for Unsupervised PCAP Failure Detection | The integration of advanced technologies into telecommunication networks complicates troubleshooting, posing challenges for manual error identification in Packet Capture (PCAP) data. This manual approach, requiring substantial resources, becomes impractical at larger scales. Machine learning (ML) methods offer alternatives, but the scarcity of labeled data limits accuracy. In this study, we propose a self-supervised, large language model-based (LLMcap) method for PCAP failure detection. LLMcap leverages language-learning abilities and employs masked language modeling to learn grammar, context, and structure. Tested rigorously on various PCAPs, it demonstrates high accuracy despite the absence of labeled data during training, presenting a promising solution for efficient network analysis. Index Terms: Network troubleshooting, Packet Capture Analysis, Self-Supervised Learning, Large Language Model, Network Quality of Service, Network Performance. | [
"['Lukasz Tulczyjew' 'Kinan Jarrah' 'Charles Abondo' 'Dina Bennett'\n 'Nathanael Weill']"
] |
null | null | 2407.06087 | null | null | http://arxiv.org/pdf/2407.06087v1 | 2024-07-03T07:10:54Z | 2024-07-03T07:10:54Z | Analytic Convolutional Layer: A Step to Analytic Neural Network | The prevailing approach to embedding prior knowledge within convolutional layers typically includes the design of steerable kernels or their modulation using designated kernel banks. In this study, we introduce the Analytic Convolutional Layer (ACL), an innovative model-driven convolutional layer, which is a mosaic of analytical convolution kernels (ACKs) and traditional convolution kernels. ACKs are characterized by mathematical functions governed by analytic kernel parameters (AKPs) learned during the training process. Learnable AKPs permit the adaptive update of incorporated knowledge to align with the feature representation of the data. Our extensive experiments demonstrate that the ACLs not only have a remarkable capacity for feature representation with a reduced number of parameters but also attain increased reliability through the analytical formulation of ACKs. Furthermore, ACLs offer a means for neural network interpretation, thereby paving the way for the intrinsic interpretability of neural networks. The source code will be published along with the paper. | [
"['Jingmao Cui' 'Donglai Tao' 'Linmi Tao' 'Ruiyang Liu' 'Yu Cheng']"
] |
null | null | 2407.06092 | null | null | http://arxiv.org/pdf/2407.06092v1 | 2024-07-08T16:31:49Z | 2024-07-08T16:31:49Z | Assessing Cardiomegaly in Dogs Using a Simple CNN Model | This paper introduces DogHeart, a dataset comprising 1400 training, 200 validation, and 400 test images categorized as small, normal, and large based on VHS score. A custom CNN model is developed, featuring a straightforward architecture with 4 convolutional layers and 4 fully connected layers. Despite the absence of data augmentation, the model achieves a 72% accuracy in classifying cardiomegaly severity. The study contributes to automated assessment of cardiac conditions in dogs, highlighting the potential for early detection and intervention in veterinary care. | [
"['Nikhil Deekonda']"
] |
null | null | 2407.06099 | null | null | http://arxiv.org/pdf/2407.06099v1 | 2024-07-08T16:38:52Z | 2024-07-08T16:38:52Z | Physics-Informed Machine Learning Towards A Real-Time Spacecraft Thermal
Simulator | Modeling thermal states for complex space missions, such as the surface exploration of airless bodies, requires high computation, whether used in ground-based analysis for spacecraft design or during onboard reasoning for autonomous operations. For example, a finite-element thermal model with hundreds of elements can take significant time to simulate, which makes it unsuitable for onboard reasoning during time-sensitive scenarios such as descent and landing, proximity operations, or in-space assembly. Further, the lack of fast and accurate thermal modeling drives thermal designs to be more conservative and leads to spacecraft with larger mass and higher power budgets. The emerging paradigm of physics-informed machine learning (PIML) presents a class of hybrid modeling architectures that address this challenge by combining simplified physics models with machine learning (ML) models resulting in models which maintain both interpretability and robustness. Such techniques enable designs with reduced mass and power through onboard thermal-state estimation and control and may lead to improved onboard handling of off-nominal states, including unplanned down-time. The PIML model or hybrid model presented here consists of a neural network which predicts reduced nodalizations (distribution and size of coarse mesh) given on-orbit thermal load conditions, and subsequently a (relatively coarse) finite-difference model operates on this mesh to predict thermal states. We compare the computational performance and accuracy of the hybrid model to a data-driven neural net model, and a high-fidelity finite-difference model of a prototype Earth-orbiting small spacecraft. The PIML based active nodalization approach provides significantly better generalization than the neural net model and coarse mesh model, while reducing computing cost by up to 1.7x compared to the high-fidelity model. | [
"['Manaswin Oddiraju' 'Zaki Hasnain' 'Saptarshi Bandyopadhyay'\n 'Eric Sunada' 'Souma Chowdhury']"
] |
null | null | 2407.06100 | null | null | http://arxiv.org/pdf/2407.06100v1 | 2024-07-08T16:39:25Z | 2024-07-08T16:39:25Z | Leveraging data-driven weather models for improving numerical weather
prediction skill through large-scale spectral nudging | Operational meteorological forecasting has long relied on physics-based numerical weather prediction (NWP) models. Recently, this landscape has been disrupted by the advent of data-driven artificial intelligence (AI)-based weather models, which offer tremendous computational performance and competitive forecasting skill. However, data-driven models for medium-range forecasting generally suffer from major limitations, including low effective resolution and a narrow range of predicted variables. This study illustrates the relative strengths and weaknesses of these competing paradigms using the GEM (Global Environmental Multiscale) and GraphCast models to represent physics-based and AI-based approaches, respectively. By analyzing global predictions from these two models against observations and analyses in both physical and spectral spaces, this study demonstrates that GraphCast-predicted large scales outperform GEM, particularly for longer lead times. Building on this insight, a hybrid NWP-AI system is proposed, wherein GEM-predicted large-scale state variables are spectrally nudged toward GraphCast predictions, while allowing GEM to freely generate fine-scale details critical for weather extremes. Results indicate that this hybrid approach is capable of leveraging the strengths of GraphCast to enhance the prediction skill of the GEM model. Importantly, trajectories of tropical cyclones are predicted with enhanced accuracy without significant changes in intensity. Furthermore, this new hybrid system ensures that meteorologists have access to a complete set of forecast variables, including those relevant for high-impact weather events. | [
"['Syed Zahid Husain' 'Leo Separovic' 'Jean-François Caron' 'Rabah Aider'\n 'Mark Buehner' 'Stéphane Chamberland' 'Ervig Lapalme'\n 'Ron McTaggart-Cowan' 'Christopher Subich' 'Paul Vaillancourt'\n 'Jing Yang' 'Ayrton Zadra']"
] |
null | null | 2407.06116 | null | null | http://arxiv.org/pdf/2407.06116v1 | 2024-05-15T19:33:35Z | 2024-05-15T19:33:35Z | Data-driven Nucleus Subclassification on Colon H&E using
Style-transferred Digital Pathology | Understanding the way cells communicate, co-locate, and interrelate is essential to furthering our understanding of how the body functions. H&E is widely available; however, cell subtyping often requires expert knowledge and the use of specialized stains. To reduce the annotation burden, AI has been proposed for the classification of cells on H&E. For example, the recent Colon Nucleus Identification and Classification (CoNIC) Challenge focused on labeling 6 cell types on H&E of the colon. However, the CoNIC Challenge was unable to classify epithelial subtypes (progenitor, enteroendocrine, goblet), lymphocyte subtypes (B, helper T, cytotoxic T), and connective subtypes (fibroblasts). We use inter-modality learning to label previously un-labelable cell types on H&E. We take advantage of multiplexed immunofluorescence (MxIF) histology to label 14 cell subclasses. We performed style transfer on the same MxIF tissues to synthesize realistic virtual H&E, which we paired with the MxIF-derived cell subclassification labels. We evaluated the efficacy of using a supervised learning scheme where the input was realistic-quality virtual H&E and the labels were MxIF-derived cell subclasses. We assessed our model on private virtual H&E and public real H&E. On virtual H&E, we were able to classify helper T cells and epithelial progenitors with positive predictive values of $0.34 \pm 0.15$ (prevalence $0.03 \pm 0.01$) and $0.47 \pm 0.1$ (prevalence $0.07 \pm 0.02$) respectively, when using ground truth centroid information. On real H&E we could classify helper T cells and epithelial progenitors with upper bound positive predictive values of $0.43 \pm 0.03$ (parent class prevalence 0.21) and $0.94 \pm 0.02$ (parent class prevalence 0.49) when using ground truth centroid information. This is the first work to provide cell type classification for helper T and epithelial progenitor nuclei on H&E. | [
"['Lucas W. Remedios' 'Shunxing Bao' 'Samuel W. Remedios' 'Ho Hin Lee'\n 'Leon Y. Cai' 'Thomas Li' 'Ruining Deng' 'Nancy R. Newlin'\n 'Adam M. Saunders' 'Can Cui' 'Jia Li' 'Qi Liu' 'Ken S. Lau'\n 'Joseph T. Roland' 'Mary K Washington' 'Lori A. Coburn' 'Keith T. Wilson'\n 'Yuankai Huo' 'Bennett A. Landman']"
] |
null | null | 2407.06120 | null | null | http://arxiv.org/pdf/2407.06120v1 | 2024-07-08T16:57:26Z | 2024-07-08T16:57:26Z | Sketchy Moment Matching: Toward Fast and Provable Data Selection for
Finetuning | We revisit data selection in a modern context of finetuning from a fundamental perspective. Extending the classical wisdom of variance minimization in low dimensions to high-dimensional finetuning, our generalization analysis unveils the importance of additionally reducing bias induced by low-rank approximation. Inspired by the variance-bias tradeoff in high dimensions from the theory, we introduce Sketchy Moment Matching (SkMM), a scalable data selection scheme with two stages. (i) First, the bias is controlled using gradient sketching that explores the finetuning parameter space for an informative low-dimensional subspace $\mathcal{S}$; (ii) then the variance is reduced over $\mathcal{S}$ via moment matching between the original and selected datasets. Theoretically, we show that gradient sketching is fast and provably accurate: selecting $n$ samples by reducing variance over $\mathcal{S}$ preserves the fast-rate generalization $O(\dim(\mathcal{S})/n)$, independent of the parameter dimension. Empirically, we concretize the variance-bias balance via synthetic experiments and demonstrate the effectiveness of SkMM for finetuning in real vision tasks. | [
"['Yijun Dong' 'Hoang Phan' 'Xiang Pan' 'Qi Lei']"
] |
null | null | 2407.06121 | null | null | http://arxiv.org/pdf/2407.06121v1 | 2024-07-08T16:58:57Z | 2024-07-08T16:58:57Z | Periodic agent-state based Q-learning for POMDPs | The standard approach for Partially Observable Markov Decision Processes (POMDPs) is to convert them to a fully observed belief-state MDP. However, the belief state depends on the system model and is therefore not viable in reinforcement learning (RL) settings. A widely used alternative is to use an agent state, which is a model-free, recursively updateable function of the observation history. Examples include frame stacking and recurrent neural networks. Since the agent state is model-free, it is used to adapt standard RL algorithms to POMDPs. However, standard RL algorithms like Q-learning learn a stationary policy. Our main thesis that we illustrate via examples is that because the agent state does not satisfy the Markov property, non-stationary agent-state based policies can outperform stationary ones. To leverage this feature, we propose PASQL (periodic agent-state based Q-learning), which is a variant of agent-state-based Q-learning that learns periodic policies. By combining ideas from periodic Markov chains and stochastic approximation, we rigorously establish that PASQL converges to a cyclic limit and characterize the approximation error of the converged periodic policy. Finally, we present a numerical experiment to highlight the salient features of PASQL and demonstrate the benefit of learning periodic policies over stationary policies. | [
"['Amit Sinha' 'Mathieu Geist' 'Aditya Mahajan']"
] |
null | null | 2407.06124 | null | null | http://arxiv.org/pdf/2407.06124v2 | 2024-07-12T15:15:03Z | 2024-07-08T17:00:28Z | Structured Generations: Using Hierarchical Clusters to guide Diffusion
Models | This paper introduces Diffuse-TreeVAE, a deep generative model that integrates hierarchical clustering into the framework of Denoising Diffusion Probabilistic Models (DDPMs). The proposed approach generates new images by sampling from a root embedding of a learned latent tree VAE-based structure; it then propagates through hierarchical paths and utilizes a second-stage DDPM to refine and generate distinct, high-quality images for each data cluster. The result is a model that not only improves image clarity but also ensures that the generated samples are representative of their respective clusters, addressing the limitations of previous VAE-based methods and advancing the state of clustering-based generative modeling. | [
"['Jorge da Silva Goncalves' 'Laura Manduchi' 'Moritz Vandenhirtz'\n 'Julia E. Vogt']"
] |
null | null | 2407.06162 | null | null | http://arxiv.org/pdf/2407.06162v1 | 2024-06-02T17:09:59Z | 2024-06-02T17:09:59Z | RNNs, CNNs and Transformers in Human Action Recognition: A Survey and A
Hybrid Model | Human Action Recognition (HAR) encompasses the task of monitoring human activities across various domains, including but not limited to medical, educational, entertainment, visual surveillance, video retrieval, and the identification of anomalous activities. Over the past decade, the field of HAR has witnessed substantial progress by leveraging Convolutional Neural Networks (CNNs) to effectively extract and comprehend intricate information, thereby enhancing the overall performance of HAR systems. Recently, the domain of computer vision has witnessed the emergence of Vision Transformers (ViTs) as a potent solution. The efficacy of transformer architecture has been validated beyond the confines of image analysis, extending their applicability to diverse video-related tasks. Notably, within this landscape, the research community has shown keen interest in HAR, acknowledging its manifold utility and widespread adoption across various domains. This article aims to present an encompassing survey that focuses on CNNs and the evolution of Recurrent Neural Networks (RNNs) to ViTs given their importance in the domain of HAR. By conducting a thorough examination of existing literature and exploring emerging trends, this study undertakes a critical analysis and synthesis of the accumulated knowledge in this field. Additionally, it investigates the ongoing efforts to develop hybrid approaches. Following this direction, this article presents a novel hybrid model that seeks to integrate the inherent strengths of CNNs and ViTs. | [
"['Khaled Alomar' 'Halil Ibrahim Aysel' 'Xiaohao Cai']"
] |
null | null | 2407.06167 | null | null | http://arxiv.org/pdf/2407.06167v1 | 2024-07-08T17:45:40Z | 2024-07-08T17:45:40Z | DεpS: Delayed ε-Shrinking for Faster Once-For-All
Training | CNNs are increasingly deployed across different hardware, dynamic environments, and low-power embedded devices. This has led to the design and training of CNN architectures with the goal of maximizing accuracy subject to such variable deployment constraints. As the number of deployment scenarios grows, there is a need to find scalable solutions to design and train specialized CNNs. Once-for-all training has emerged as a scalable approach that jointly co-trains many models (subnets) at once with a constant training cost and finds specialized CNNs later. The scalability is achieved by training the full model and simultaneously reducing it to smaller subnets that share model weights (weight-shared shrinking). However, existing once-for-all training approaches incur huge training costs reaching 1200 GPU hours. We argue this is because they either start the process of shrinking the full model too early or too late. Hence, we propose Delayed $\epsilon$-Shrinking (D$\epsilon$pS), which starts the process of shrinking the full model when it is partially trained (~50%), leading to lower training cost and better in-place knowledge distillation to smaller models. The proposed approach also consists of novel heuristics that dynamically adjust subnet learning rates incrementally (E), leading to improved weight-shared knowledge distillation from larger to smaller subnets as well. As a result, D$\epsilon$pS outperforms state-of-the-art once-for-all training techniques across different datasets including CIFAR10/100, ImageNet-100, and ImageNet-1k on accuracy and cost. It achieves 1.83% higher ImageNet-1k top1 accuracy or the same accuracy with a 1.3x reduction in FLOPs and a 2.5x drop in training cost (GPU*hrs). | [
"['Aditya Annavajjala' 'Alind Khare' 'Animesh Agrawal' 'Igor Fedorov'\n 'Hugo Latapie' 'Myungjin Lee' 'Alexey Tumanov']"
] |
null | null | 2407.06169 | null | null | http://arxiv.org/pdf/2407.06169v1 | 2024-07-08T17:48:39Z | 2024-07-08T17:48:39Z | Potential Based Diffusion Motion Planning | Effective motion planning in high dimensional spaces is a long-standing open problem in robotics. One class of traditional motion planning algorithms corresponds to potential-based motion planning. An advantage of potential based motion planning is composability -- different motion constraints can be easily combined by adding corresponding potentials. However, constructing motion paths from potentials requires solving a global optimization across the configuration-space potential landscape, which is often prone to local minima. We propose a new approach towards learning potential based motion planning, where we train a neural network to capture and learn easily optimizable potentials over motion planning trajectories. We illustrate the effectiveness of such an approach, significantly outperforming both classical and recent learned motion planning approaches and avoiding issues with local minima. We further illustrate its inherent composability, enabling us to generalize to a multitude of different motion constraints. | [
"['Yunhao Luo' 'Chen Sun' 'Joshua B. Tenenbaum' 'Yilun Du']"
] |
null | null | 2407.06178 | null | null | http://arxiv.org/pdf/2407.06178v1 | 2024-07-08T17:52:23Z | 2024-07-08T17:52:23Z | Transfer Learning with Self-Supervised Vision Transformers for Snake
Identification | We present our approach for the SnakeCLEF 2024 competition to predict snake species from images. We explore and use Meta's DINOv2 vision transformer model for feature extraction to tackle species' high variability and visual similarity in a dataset of 182,261 images. We perform exploratory analysis on embeddings to understand their structure, and train a linear classifier on the embeddings to predict species. Despite achieving a score of 39.69, our results show promise for DINOv2 embeddings in snake identification. All code for this project is available at https://github.com/dsgt-kaggle-clef/snakeclef-2024. | [
"['Anthony Miyaguchi' 'Murilo Gustineli' 'Austin Fischer' 'Ryan Lundqvist']"
] |
null | null | 2407.06183 | null | null | http://arxiv.org/pdf/2407.06183v1 | 2024-07-08T17:56:00Z | 2024-07-08T17:56:00Z | Stepping on the Edge: Curvature Aware Learning Rate Tuners | Curvature information -- particularly, the largest eigenvalue of the loss Hessian, known as the sharpness -- often forms the basis for learning rate tuners. However, recent work has shown that the curvature information undergoes complex dynamics during training, going from a phase of increasing sharpness to eventual stabilization. We analyze the closed-loop feedback effect between learning rate tuning and curvature. We find that classical learning rate tuners may yield greater one-step loss reduction, yet they ultimately underperform in the long term when compared to constant learning rates in the full batch regime. These models break the stabilization of the sharpness, which we explain using a simplified model of the joint dynamics of the learning rate and the curvature. To further investigate these effects, we introduce a new learning rate tuning method, Curvature Dynamics Aware Tuning (CDAT), which prioritizes long term curvature stabilization over instantaneous progress on the objective. In the full batch regime, CDAT shows behavior akin to prefixed warm-up schedules on deep learning objectives, outperforming tuned constant learning rates. In the mini batch regime, we observe that stochasticity introduces confounding effects that explain the previous success of some learning rate tuners at appropriate batch sizes. Our findings highlight the critical role of understanding the joint dynamics of the learning rate and curvature, beyond greedy minimization, to diagnose failures and design effective adaptive learning rate tuners. | [
"['Vincent Roulet' 'Atish Agarwala' 'Jean-Bastien Grill'\n 'Grzegorz Swirszcz' 'Mathieu Blondel' 'Fabian Pedregosa']"
] |
null | null | 2407.06190 | null | null | http://arxiv.org/pdf/2407.06190v2 | 2024-07-10T01:32:28Z | 2024-07-08T17:59:54Z | 4D Contrastive Superflows are Dense 3D Representation Learners | In the realm of autonomous driving, accurate 3D perception is the foundation. However, developing such models relies on extensive human annotations -- a process that is both costly and labor-intensive. To address this challenge from a data representation learning perspective, we introduce SuperFlow, a novel framework designed to harness consecutive LiDAR-camera pairs for establishing spatiotemporal pretraining objectives. SuperFlow stands out by integrating two key designs: 1) a dense-to-sparse consistency regularization, which promotes insensitivity to point cloud density variations during feature learning, and 2) a flow-based contrastive learning module, carefully crafted to extract meaningful temporal cues from readily available sensor calibrations. To further boost learning efficiency, we incorporate a plug-and-play view consistency module that enhances the alignment of the knowledge distilled from camera views. Extensive comparative and ablation studies across 11 heterogeneous LiDAR datasets validate our effectiveness and superiority. Additionally, we observe several interesting emerging properties by scaling up the 2D and 3D backbones during pretraining, shedding light on the future research of 3D foundation models for LiDAR-based perception. | [
"['Xiang Xu' 'Lingdong Kong' 'Hui Shuai' 'Wenwei Zhang' 'Liang Pan'\n 'Kai Chen' 'Ziwei Liu' 'Qingshan Liu']"
] |
null | null | 2407.06195 | null | null | http://arxiv.org/pdf/2407.06195v1 | 2024-06-10T15:03:54Z | 2024-06-10T15:03:54Z | Higher-Order Spatial Information for Self-Supervised Place Cell Learning | Mammals navigate novel environments and exhibit resilience to sparse environmental sensory cues via place and grid cells, which encode position in space. While the efficiency of grid cell coding has been extensively studied, the computational role of place cells is less well understood. This gap arises partially because spatial information measures have, until now, been limited to single place cells. We derive and implement a higher-order spatial information measure, allowing for the study of the emergence of multiple place cells in a self-supervised manner. We show that emergent place cells have many desirable features, including high-accuracy spatial decoding. This is the first work in which higher-order spatial information measures that depend solely on place cells' firing rates have been derived and which focuses on the emergence of multiple place cells via self-supervised learning. By quantifying the spatial information of multiple place cells, we enhance our understanding of place cell formation and capabilities in recurrent neural networks, thereby improving the potential navigation capabilities of artificial systems in novel environments without objective location information. | [
"['Jared Deighton' 'Wyatt Mackey' 'Ioannis Schizas' 'David L. Boothe Jr.'\n 'Vasileios Maroulas']"
] |
null | null | 2407.06204 | null | null | http://arxiv.org/pdf/2407.06204v1 | 2024-06-26T16:34:33Z | 2024-06-26T16:34:33Z | A Survey on Mixture of Experts | Large language models (LLMs) have garnered unprecedented advancements across diverse fields, ranging from natural language processing to computer vision and beyond. The prowess of LLMs is underpinned by their substantial model size, extensive and diverse datasets, and the vast computational power harnessed during training, all of which contribute to the emergent abilities of LLMs (e.g., in-context learning) that are not present in small models. Within this context, the mixture of experts (MoE) has emerged as an effective method for substantially scaling up model capacity with minimal computation overhead, gaining significant attention from academia and industry. Despite its growing prevalence, the literature still lacks a systematic and comprehensive review of MoE. This survey seeks to bridge that gap, serving as an essential resource for researchers delving into the intricacies of MoE. We first briefly introduce the structure of the MoE layer, followed by proposing a new taxonomy of MoE. Next, we overview the core designs for various MoE models including both algorithmic and systemic aspects, alongside collections of available open-source implementations, hyperparameter configurations and empirical evaluations. Furthermore, we delineate the multifaceted applications of MoE in practice, and outline some potential directions for future research. To facilitate ongoing updates and the sharing of cutting-edge developments in MoE research, we have established a resource repository accessible at https://github.com/withinmiaov/A-Survey-on-Mixture-of-Experts. | [
"['Weilin Cai' 'Juyong Jiang' 'Fan Wang' 'Jing Tang' 'Sunghun Kim'\n 'Jiayi Huang']"
] |