text | __index_level_0__ |
---|---|
Title: Multi-Interest Refinement by Collaborative Attributes Modeling for Click-Through Rate Prediction
Abstract: Learning interest representations plays a core role in the click-through rate prediction task. Existing Transformer-based approaches learn multiple interests from a sequence of interacted items with rich attributes. The attention weights explain how relevant an item's specific attribute sequence is to the user's interest. However, this implicitly assumes that attributes of the same item are independent, which may not always hold in practice. Empirically, a user places varied emphasis on different attributes when deciding whether to interact with an item, and this emphasis is unobserved. Modeling each attribute independently may allow attention to assign probability mass to unimportant attributes. Collaborative attributes with varied emphasis can be incorporated to help the model more reasonably approximate each attribute's relevance to the others and generate refined interest representations. To this end, we propose to integrate a dynamic collaborative attribute routing module into the Transformer. The module assigns collaborative scores to each attribute of clicked items and induces the extended Transformer to prioritize the influential attributes. To learn collaborative scores without labels, we design a diversity loss that facilitates score differentiation. Comparisons with baselines on two real-world benchmark datasets and one industrial dataset validate the effectiveness of the framework. | 710,523 |
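The diversity loss in the abstract above is not specified; a minimal, hypothetical stand-in for a loss that "facilitates score differentiation" is to penalize low variance among the collaborative attribute scores (this is an illustrative sketch, not the paper's actual formulation):

```python
import numpy as np

def diversity_loss(scores):
    """Hypothetical diversity regularizer: reward differentiation among
    collaborative attribute scores by penalizing low variance across the
    attribute axis. scores has shape (batch, n_attributes)."""
    return -np.var(scores, axis=-1).mean()
```

Minimizing this term pushes the per-item attribute scores apart, so the routing module cannot collapse to uniform scores.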
Title: Multiple Instance Learning for Uplift Modeling
Abstract: Uplift modeling is widely used in performance marketing to estimate the effects of promotion campaigns (e.g., an increase in customer retention rate). Since it is impossible to observe the outcomes of a recipient in the treatment (e.g., receiving a certain promotion) and control (e.g., no promotion) groups simultaneously (i.e., the counterfactual), uplift models are mainly trained on instances of the treatment and control groups separately to form two models, and uplifts are predicted as the difference between the two models' predictions (i.e., the two-model method). When responses are noisy and the treatment effect is fractional, the induced individual uplift predictions will be inaccurate, resulting in targeting undesirable customers. Though it is impossible to obtain the ideal ground-truth individual uplifts, known as Individual Treatment Effects (ITEs), the average uplift of a group of users, called the Average Treatment Effect (ATE), can be observed from experimental deliveries. Based on this, and similar to Multiple Instance Learning (MIL), in which each training sample is a bag of instances, our framework sums up individual user uplift predictions for each bag of users as its bag-wise ATE prediction and regularizes it toward the bag's ATE label, thus learning more accurate individual uplifts. Additionally, to amplify the fractional treatment effect, bags are composed of instances with adjacent individual uplift predictions rather than random instances. Experiments conducted on two datasets show the effectiveness and universality of the proposed framework. | 710,524 |
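The bag construction and regularizer described above can be sketched in a few lines of NumPy. This is a simplified illustration under assumptions not stated in the abstract: bags aggregate by mean (an ATE is an average), the regularizer is squared error, and the number of users divides the bag size evenly:

```python
import numpy as np

def bag_mil_loss(uplift_pred, ate_labels, bag_size):
    """Bag-wise MIL-style regularizer sketch: group instances with
    *adjacent* uplift predictions into bags (per the paper's bagging
    strategy) and pull each bag's mean prediction toward its observed
    ATE label with a squared-error penalty.
    Assumes len(uplift_pred) is divisible by bag_size."""
    order = np.argsort(uplift_pred)          # adjacent predictions -> same bag
    bags = order.reshape(-1, bag_size)
    bag_pred = uplift_pred[bags].mean(axis=1)  # bag-wise ATE prediction
    return np.mean((bag_pred - ate_labels) ** 2)
```

In a full model this term would be added to the usual two-model objective; here it simply shows how an unobservable ITE target is replaced by an observable bag-level ATE target.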
Title: WDRASS: A Web-scale Dataset for Document Retrieval and Answer Sentence Selection
Abstract: Open-Domain Question Answering (ODQA) systems generate answers from relevant text returned by search engines, e.g., lexical-feature-based ones such as BM25 or embedding-based ones such as dense passage retrieval (DPR). Few datasets are available for this task: they mainly focus on QA systems based on the machine reading (MR) approach and suffer from problematic evaluation, mostly based on uncontextualized short-answer matching. In this paper, we present WDRASS, a dataset for ODQA based on answer sentence selection (AS2) models, which consider sentences as candidate answers for QA systems. WDRASS consists of ∼64k questions and 800k+ labeled passages and sentences extracted from 30M documents. We evaluate the dataset by training models on it and comparing them with the same models trained on Google NQ. Our experiments show that WDRASS significantly improves the performance of retrieval and reranking models, thus boosting the accuracy of downstream QA tasks. We believe our dataset can have a significant impact in advancing IR research. | 710,525 |
Title: Revisiting Cold-Start Problem in CTR Prediction: Augmenting Embedding via GAN
Abstract: Click-through rate (CTR) prediction is one of the core tasks in industrial applications such as online advertising and recommender systems. However, the performance of existing CTR models is hampered by cold-start users who have very little historical behavior data, given that these models often rely on sufficient sequential behavior data to learn the embedding vectors. In this paper, we propose a novel framework dubbed GF2 to alleviate the cold-start problem in deep-learning-based CTR prediction. GF2 augments the embeddings of cold-start users after the embedding layer in the deep CTR model using a Generative Adversarial Network (GAN), and the generator obtained by the GAN can be further fine-tuned locally to enhance CTR prediction in cold-start settings. GF2 is general for deep CTR models that use embeddings to model user features, and it has already been deployed in a real-world online display advertising system. Experimental results on two large-scale real-world datasets show that GF2 can significantly improve prediction performance over three popular deep CTR models. | 710,526 |
Title: Co-Training with Validation: A Generic Framework for Semi-Supervised Relation Extraction
Abstract: In low-resource natural language applications, Semi-supervised Relation Extraction (SRE) plays a key role in mitigating the scarcity of labelled sentences by harnessing a large unlabeled corpus. Current SRE methods are mainly designed on the paradigm of Self-Training with Validation (STV), which employs two learners, each playing the single role of annotator or validator. However, such a single-role setting under-utilizes the potential of the learners in promoting new labelled instances from the unlabeled corpus. In this paper, we propose a generic SRE paradigm, called Co-Training with Validation (CTV), to make full use of the learners and benefit more from the unlabeled corpus. In CTV, each learner alternately plays the roles of annotator and validator to generate and validate pseudo-labelled instances. Thus, more high-quality instances are exploited, and the two learners reinforce each other during the learning process. Experimental results on two public datasets show that CTV considerably outperforms state-of-the-art SRE techniques and works well with different kinds of learners for relation extraction. | 710,527 |
Title: Selectively Expanding Queries and Documents for News Background Linking
Abstract: Background articles are crucial for readers to fully grasp the context of news stories. However, existing approaches to background article search tend to apply a single ranking method to all types of search topics. In this paper, we explore search topics on news articles by classifying them into two types: time-sensitive and non-time-sensitive. To verify whether these two types of search topics benefit from different retrieval methods, we examined a suite of strategies such as document expansion, query rewriting, and semantic re-ranking. Moreover, the relationship between background articles and topics is verified through two document expansion strategies (specificity and diversity). The experimental results demonstrate that the optimal usage of the aforementioned strategies indeed differs between the two types of search topics. Furthermore, our in-depth analysis of topics and search results verified that time-sensitive topics benefit from background articles that provide more specific knowledge, while non-time-sensitive topics benefit from diversified retrieved documents. | 710,528 |
Title: Graph Representation Learning via Adaptive Multi-layer Neighborhood Diffusion Contrast
Abstract: In recent years, the graph neural network (GNN) has become the most important method for graph representation learning. However, most GNNs rely on the message-passing mechanism to guide information aggregation between neighbors, which results in over-smoothing and weak robustness. To address these issues, we propose a novel graph representation learning framework via Adaptive Multi-layer Neighborhood Diffusion Contrast, called AM-NDC. Without using the message-passing mechanism, AM-NDC can still capture the complex structural information between nodes through a neighborhood diffusion contrast loss. Experimental results show that AM-NDC outperforms existing state-of-the-art models in both node classification and robustness against adversarial attacks. Our dataset and code are available at https://github.com/YJ199804/AM-NDC. | 710,529 |
Title: A Hyperbolic-to-Hyperbolic User Representation with Multi-aspect for Social Recommendation
Abstract: Social recommender systems play a key role in solving the problem of information overload. To better extract the latent hierarchical structure in the data, they usually explore user-user connections and user-item interactions in hyperbolic space. Existing methods resort to tangent spaces to realize some operations (e.g., matrix multiplication) on hyperbolic manifolds. However, frequently projecting between the hyperbolic space and the tangent space destroys the global structure of the manifold and reduces prediction accuracy. Besides, decisions made by users are often influenced by multi-aspect potential preferences, which are usually represented as a single vector per user. To this end, we design a novel hyperbolic-to-hyperbolic user representation with a multi-aspect social recommender system, namely H2HMSR, which works directly in hyperbolic space. Extensive experiments on three public datasets demonstrate that our model adequately extracts social information of users with multi-aspect preferences and outperforms its hyperbolic and Euclidean counterparts. | 710,530 |
Title: The SimIIR 2.0 Framework: User Types, Markov Model-Based Interaction Simulation, and Advanced Query Generation
Abstract: Simulated interactions between users and retrieval systems enable studies with controlled user behavior. To this end, the SimIIR framework offers static, rule-based methods. We present the extended SimIIR 2.0 version with new components for dynamic, user-type-specific, Markov model-based interactions and more realistic query generation. A flexible modularization ensures that the SimIIR 2.0 framework can serve as a platform to implement, combine, and run the growing number of proposed search behavior and query simulation ideas. | 710,531 |
Title: Multi-scale Multi-modal Dictionary BERT For Effective Text-image Retrieval in Multimedia Advertising
Abstract: Visual content in multimedia advertising effectively attracts the customer's attention. Search-based multimedia advertising is a cross-modal retrieval problem. Due to the modal gap between texts and images/videos, cross-modal image/video retrieval is challenging. Recently, multi-modal dictionary BERT has bridged the modal gap by unifying images/videos and texts from different modalities through a multi-modal dictionary. In this work, we improve multi-modal dictionary BERT by developing a multi-scale multi-modal dictionary and propose Multi-scale Multi-modal Dictionary BERT (M^2D-BERT). The multi-scale dictionary partitions the feature space into different levels and is effective in describing both fine-level and coarse-level relevance between text and images. Meanwhile, we constrain the code-words in dictionaries from different scales to be orthogonal to each other, which ensures that the multiple dictionaries are complementary. Moreover, we adopt a two-level residual quantization to enhance the capacity of each multi-modal dictionary. Systematic experiments conducted on large-scale cross-modal retrieval datasets demonstrate the excellent performance of our M^2D-BERT. | 710,532 |
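The two-level residual quantization mentioned above is a standard construction and can be sketched concisely: quantize with a first codebook, then quantize the leftover residual with a second one. This NumPy sketch assumes nearest-neighbor assignment under the Euclidean norm; the codebooks here are illustrative, not the paper's learned dictionaries:

```python
import numpy as np

def residual_quantize(x, codebook1, codebook2):
    """Two-level residual quantization sketch: pick the nearest code-word
    at level 1, then quantize the residual with the level-2 codebook.
    Returns the two code indices and the reconstruction (their sum)."""
    i = np.argmin(np.linalg.norm(codebook1 - x, axis=1))
    residual = x - codebook1[i]
    j = np.argmin(np.linalg.norm(codebook2 - residual, axis=1))
    return i, j, codebook1[i] + codebook2[j]
```

The second level captures detail the first level misses, which is why it enhances the effective capacity of a dictionary of fixed size.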
Title: MetaRule: A Meta-path Guided Ensemble Rule Set Learning for Explainable Fraud Detection
Abstract: Machine learning methods for fraud detection have achieved impressive prediction performance, but often sacrifice critical interpretability in many applications. In this work, we propose to learn interpretable models for fraud detection as a simple rule set. More specifically, we design a novel neural rule learning method that builds a condition graph with the expectation of capturing high-order feature interactions. Each path in this condition graph can be regarded as a single rule. Inspired by the key idea of meta-learning, we combine the neural rules with rules extracted from tree-based models in order to provide generalizable rule candidates. Finally, we propose a flexible rule set learning framework by designing a greedy optimization method that maximizes the number of recalled fraud samples under a predefined cost criterion. We conduct comprehensive experiments on large-scale industrial datasets. Interestingly, we find that the neural rules and the rules extracted from tree-based models complement each other to improve prediction performance. | 710,533 |
Title: Unanswerable Question Correction and Explanation over Personal Knowledge Base
Abstract: Handling unanswerable questions in knowledge base question answering (KBQA) has been a focus in recent years. However, how to explain why a given question is unanswerable is rarely discussed. In this work, we seek not only to correct unanswerable questions based on a personal knowledge base, but also to explain the reason for the correction. We argue that different types of questions need heterogeneous subgraphs with different types of connections. We thus propose a heterogeneous subgraph aggregation network with a two-level attention mechanism to detect important entities and relations in subgraphs and attend to informative subgraphs for different questions. We conduct comprehensive experiments on five subgraphs and their combinations, with results that attest to the effectiveness of incorporating heterogeneous subgraphs. | 710,534 |
Title: Calibrate Automated Graph Neural Network via Hyperparameter Uncertainty
Abstract: Automated graph learning has drawn widespread research attention due to its great potential to reduce human effort when dealing with graph data, among which hyperparameter optimization (HPO) is one of the mainstream directions and has made promising progress. However, how to obtain reliable and trustworthy predictions from automated graph neural networks (GNNs) is still underexplored. To this end, we investigate automated GNN calibration by marrying uncertainty estimation with the HPO problem. Specifically, we propose a hyperparameter uncertainty-induced graph convolutional network (HyperU-GCN) with a bilevel formulation, where the upper-level problem explicitly reasons about uncertainties by developing a probabilistic hypernetwork through a variational Bayesian lens, while the lower-level problem learns how the GCN weights respond to a hyperparameter distribution. By squeezing model uncertainty into the hyperparameter space, HyperU-GCN achieves calibrated predictions in a manner similar to Bayesian model averaging over hyperparameters. Extensive experimental results on six public datasets, reported in terms of node classification accuracy and expected calibration error (ECE), demonstrate the effectiveness of our approach compared with several state-of-the-art uncertainty-aware and calibrated GCN methods. | 710,535 |
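The expected calibration error (ECE) used as a metric above has a standard binned definition: partition predictions by confidence and average the |accuracy − confidence| gap, weighted by bin mass. A minimal NumPy version (equal-width bins; a prediction with confidence exactly 0 falls in no bin, which is harmless in practice):

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """Binned ECE: sum over bins of (bin weight) * |accuracy - confidence|.
    conf: predicted confidences in (0, 1]; correct: 0/1 correctness flags."""
    ece = 0.0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - conf[mask].mean())
            ece += mask.mean() * gap
    return ece
```

A perfectly calibrated model (confidence matches empirical accuracy in every bin) scores 0; overconfident models score higher.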
Title: Task Similarity Aware Meta Learning for Cold-Start Recommendation
Abstract: In recommender systems, content-based methods and meta-learning-based methods are usually adopted to alleviate the item cold-start problem. The former utilize item attributes at the feature level, while the latter learn a globally shared initialization for all tasks to achieve fast adaptation with limited data at the task level. However, content-based methods only focus on the similarity of item attributes, ignoring the relationships established by user interactions. And for tasks with different distributions, most meta-learning-based methods struggle to achieve good performance under a single initialization. To address these limitations and combine the strengths of both methods, we propose a Task Similarity Aware Meta-Learning (TSAML) framework with two components. Specifically, at the feature level, we introduce both content information and user-item relationships to exploit task similarity. At the task level, we design an automatic soft clustering module to cluster similar tasks and generate the same initialization for them. Extensive offline experiments demonstrate that the TSAML framework has superior performance and recommends cold items to preferred users more effectively than other state-of-the-art methods. | 710,536 |
Title: A Multi-granularity Network for Emotion-Cause Pair Extraction via Matrix Capsule
Abstract: The task of Emotion-Cause Pair Extraction (ECPE) aims at extracting clause pairs with the corresponding causality from text. Existing approaches emphasize their multi-task settings. We argue that clause-level encoders are ill-suited to the ECPE task, where text information has features at many granularities. In this paper, we design a Matrix Capsule-based multi-granularity framework (MaCa) for this task. Specifically, we first introduce a word-level encoder to obtain token-aware representations. Then, two sentence-level extractors are used to generate emotion and cause predictions. Finally, to obtain more fine-grained features of clause pairs, the matrix capsule is introduced, which can cluster the relationship of each clause pair. Empirical results on the widely used ECPE dataset show that our framework significantly outperforms most current methods in both Emotion-Cause Extraction (ECE) and the more challenging ECPE task. | 710,537 |
Title: Lightweight Unbiased Multi-teacher Ensemble for Review-based Recommendation
Abstract: Review-based recommender systems (RRS) have received increasing interest since reviews greatly enhance recommendation quality and interpretability. However, existing RRS suffer from high computational complexity, biased recommendation, and poor generalization, which together make them inadequate for real recommendation scenarios. Previous studies address each issue separately, and none of them solve the three problems together under a unified framework. This paper presents LUME (a Lightweight Unbiased Multi-teacher Ensemble) for RRS, a novel framework that addresses the three problems simultaneously. LUME uses a multi-teacher ensemble and debiased knowledge distillation to aggregate knowledge from multiple pretrained RRS, generating a small, unbiased student recommender that generalizes better. Extensive experiments on various real-world benchmarks demonstrate that LUME successfully tackles the three problems and achieves superior performance over state-of-the-art RRS and knowledge distillation-based recommender systems. | 710,538 |
Title: Visual Encoding and Debiasing for CTR Prediction
Abstract: Extracting expressive visual features is crucial for accurate Click-Through-Rate (CTR) prediction in visual search advertising systems. Current commercial systems use off-the-shelf visual encoders to facilitate fast online service; however, the extracted visual features are coarse-grained and/or biased. In this paper, we present a visual encoding framework for CTR prediction that overcomes these problems. The framework is based on contrastive learning, which pulls positive pairs closer and pushes negative pairs apart in the visual feature space. To obtain fine-grained visual features, we fine-tune the visual encoder with contrastive learning supervised by click-through data. To reduce sample selection bias, we first train the visual encoder offline by leveraging both unbiased self-supervision and click supervision signals; we then incorporate a debiasing network into the online CTR predictor to adjust the visual features by contrasting high-impression items with selected low-impression items. We deploy the framework in a mobile e-commerce app. Offline experiments on billion-scale datasets and online experiments demonstrate that the proposed framework makes accurate and unbiased predictions. | 710,539 |
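The "pull positives closer, push negatives apart" objective described above is commonly realized as an InfoNCE-style loss. A minimal NumPy sketch (cosine similarity and a temperature of 0.1 are illustrative choices, not the paper's stated settings):

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Contrastive (InfoNCE-style) loss sketch: softmax over cosine
    similarities between the anchor and [positive] + negatives; the loss
    is the negative log-probability assigned to the positive."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = np.array([cos(anchor, positive)] +
                    [cos(anchor, n) for n in negatives]) / tau
    sims -= sims.max()                      # numerical stability
    probs = np.exp(sims) / np.exp(sims).sum()
    return -np.log(probs[0])               # positive should win the softmax
```

Gradients of this loss move the anchor embedding toward the positive and away from the negatives, which is exactly the geometric behavior the abstract relies on.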
Title: Texture BERT for Cross-modal Texture Image Retrieval
Abstract: We propose Texture BERT, a model that describes the visual attributes of texture using natural language. To capture the rich details in texture images, we propose a group-wise compact bilinear pooling method, which represents a texture image by a set of visual patterns. The similarity between the texture image and the corresponding language description is determined by cross-matching the set of visual patterns from the texture image with the set of word features from the language description. We also exploit self-attention transformer layers to provide cross-modal context and enhance the effectiveness of matching. Our approach achieves state-of-the-art accuracy on both text retrieval and image retrieval tasks, demonstrating the effectiveness of Texture BERT in describing texture through natural language. | 710,540 |
Title: Modeling Latent Autocorrelation for Session-based Recommendation
Abstract: Session-based Recommendation (SBR) aims to predict the next item for the current session, which consists of several items clicked in a short period by an anonymous user. Most sequential modeling approaches to SBR focus on adopting advanced Deep Neural Networks (DNNs), and these methods require increasingly long training times. Existing studies have shown that some traditional SBR methods can outperform certain DNN-based sequential models; however, few recent studies have investigated the effectiveness of traditional methods. In this paper, we propose a novel and concise SBR model inspired by the basic concept of autocorrelation in stochastic processes. Autocorrelation measures the correlation of a process with itself at different moments, so it is natural to use it to model the correlation of clicked item sequences at different time shifts. Specifically, we use the Fast Fourier Transform (FFT) to compute the autocorrelation and combine it with several linear transformations to enhance the session representation. By this means, our proposed method can learn better session preferences and is more efficient than most DNN-based models. Extensive experiments on two public datasets show that the proposed method outperforms state-of-the-art models in both effectiveness and efficiency. | 710,541 |
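Computing autocorrelation via the FFT, as the abstract describes, uses the Wiener-Khinchin theorem: the autocorrelation is the inverse FFT of the power spectrum. A minimal NumPy sketch on a 1-D sequence (zero-padding avoids circular wrap-around; mean-centering and lag-0 normalization are common conventions, not necessarily the paper's):

```python
import numpy as np

def autocorrelation_fft(x):
    """Autocorrelation at all non-negative lags via FFT (Wiener-Khinchin):
    inverse FFT of the power spectrum of the zero-padded, mean-centered
    sequence, normalized so the lag-0 value is 1."""
    n = len(x)
    x = x - x.mean()
    f = np.fft.fft(x, 2 * n)            # pad to 2n to avoid circular wrap
    acf = np.fft.ifft(f * np.conj(f)).real[:n]
    return acf / acf[0]
```

This runs in O(n log n) versus O(n^2) for the direct lag-by-lag computation, which is the efficiency argument behind using FFT here.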
Title: BidH: A Bidirectional Hierarchical Model for Nested Named Entity Recognition
Abstract: Nested Named Entity Recognition aims to identify entities with nested relationships in sentences and has various applications ranging from relation extraction to semantic understanding. However, existing methods have two drawbacks: 1) error propagation when identifying entities at different nesting levels, and 2) an inability to uncover and utilize the complex correlations between inner and outer entities. To address these two defects, we propose a bidirectional hierarchical (BidH) model for nested named entity recognition. BidH consists of a forward module and a backward module: the former first extracts the inner entities and then the outer ones, while the latter extracts entities in the opposite direction. Furthermore, we design an entity-masked self-attention mechanism to combine the two modules by fusing their predictions and hidden states layer by layer. BidH can effectively deal with error propagation and exploit the correlations between entities at different nesting levels to improve recognition accuracy. Experiments on the GENIA dataset show that BidH outperforms state-of-the-art nested named entity recognition models in terms of F1 score. | 710,542 |
Title: Multi-granularity Fatigue in Recommendation
Abstract: Personalized recommendation aims to provide appropriate items according to user preferences, mainly inferred from user behaviors. Excessive homogeneous user behaviors on similar items lead to fatigue, which may decrease user activeness and degrade user experience. However, existing models seldom consider user fatigue in recommender systems. In this work, we propose a novel multi-granularity fatigue model, capturing user fatigue from coarse to fine. Specifically, we focus on the recommendation feed scenario, where the underexplored global session fatigue and coarse-grained taxonomy fatigue have large impacts. We conduct extensive analyses to demonstrate the characteristics and influence of different types of fatigue in real-world recommender systems. In experiments, we verify the effectiveness of multi-granularity fatigue in both offline and online evaluations. The fatigue-enhanced model has also been deployed on a widely used recommender system at WeChat. | 710,543 |
Title: Balancing Utility and Exposure Fairness for Integrated Ranking with Reinforcement Learning
Abstract: Integrated ranking is critical in industrial recommendation systems and has attracted increasing attention. In an integrated ranking system, items from multiple channels are merged together to form an integrated list. During this process, apart from optimizing the system's utility, such as the total number of clicks, a fair allocation of exposure opportunities over the different channels also needs to be satisfied. To address this problem, we propose an integrated ranking model called Integrated Deep-Q Network (iDQN), which jointly considers user preferences, the platform's utility, and exposure fairness. Extensive offline experiments validate the effectiveness of iDQN in managing the trade-off between utility and fairness. Moreover, iDQN has been deployed on Huawei's online AppStore platform, where an online A/B test shows that iDQN outperforms the baseline by 1.87% and 2.21% in terms of utility and fairness, respectively. | 710,544 |
Title: Efficiently Answering Minimum Reachable Label Set Queries in Edge-Labeled Graphs
Abstract: The reachability query is a fundamental problem in graph analysis. Recently, many studies have focused on label-constrained reachability queries, which verify whether two vertices are reachable under a given label set. However, in many real-life applications, it is more practical to find the minimum label set required to ensure the reachability of two vertices, which previous research has neglected. To fill this gap, we propose and investigate the minimum reachable label set (MRLS) problem in edge-labeled graphs. Specifically, given an edge-labeled graph and two vertices s and t, the MRLS problem aims to find a label set L of minimum size such that s can reach t through L. We prove the hardness of the problem and develop different optimization strategies to improve the scalability of our algorithms. Extensive experiments on 6 datasets demonstrate the advantages of the proposed algorithms. | 710,545 |
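To make the MRLS problem statement concrete, here is a brute-force baseline (not the paper's algorithm): enumerate label subsets in increasing size and return the first one under which s reaches t via a label-restricted traversal. This is exponential in the number of labels and purely didactic, consistent with the problem being hard:

```python
from itertools import combinations

def min_reachable_label_set(edges, s, t):
    """Brute-force MRLS sketch: edges is a list of (u, v, label) directed
    edges. Try label subsets of size 0, 1, 2, ... and return the first
    subset under which s can reach t (None if t is unreachable)."""
    labels = sorted({lab for _, _, lab in edges})
    for k in range(len(labels) + 1):
        for subset in combinations(labels, k):
            allowed = set(subset)
            seen, stack = {s}, [s]     # traversal restricted to allowed labels
            while stack:
                u = stack.pop()
                if u == t:
                    return allowed
                for a, b, lab in edges:
                    if a == u and lab in allowed and b not in seen:
                        seen.add(b)
                        stack.append(b)
    return None
```

For example, if a direct s→t edge with one label exists alongside a two-label path, the single-label set is found first because subsets are tried in order of size.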
Title: Nonlinear Causal Discovery in Time Series
Abstract: Recent years have witnessed the proliferation of the Functional Causal Model (FCM) for causal learning due to its intuitive representation and accurate learning results. However, existing FCM-based algorithms suffer from the ubiquitous nonlinear relations in time-series data, mainly because these algorithms either assume linear relationships, or nonlinear relationships with additive noise, or introduce no additional assumptions but can only identify nonlinear causality between two variables. This paper contributes a practical FCM-based causal learning approach that remains effective for real-world nonstationary data with general nonlinear relationships and an unlimited number of variables. Specifically, the non-stationarity of time-series data is first exploited with nonlinear independent component analysis to discover the underlying components, or latent disturbances. Then, the conditional independence between variables and these components is studied to obtain a relation matrix, which guides the algorithm in recovering the underlying causal graph. The correctness of the proposal is theoretically proved, and extensive experiments further verify its effectiveness. To the best of our knowledge, this is the first proposal that can fully identify causal relationships under general nonlinear conditions. | 710,546 |
Title: MNCM: Multi-level Network Cascades Model for Multi-Task Learning
Abstract: Recently, multi-task learning based on deep neural networks has been successfully applied in many recommender system scenarios. The prediction quality of current mainstream multi-task models often relies on the extent to which the relationships among tasks are extracted. Much prior work has focused on two important tasks in recommender systems: predicting the click-through rate (CTR) and the post-click conversion rate (CVR), which rely on the sequential user action pattern of impression → click → conversion. Therefore, there exists a sequential dependence between the CTR and CVR tasks. However, no existing network design satisfactorily models this sequential dependence without sacrificing the first task. In this paper, inspired by the Multi-task Network Cascades (MNC) and Adaptive Information Transfer Multi-task (AITM) frameworks, we propose a Multi-level Network Cascades Model (MNCM) based on the pattern of separating task-specific and shared experts. In MNCM, we introduce two types of information transfer modules: the Task-Level Information Transfer Module (TITM) and the Expert-Level Information Transfer Module (EITM), which adaptively learn transferred information at the task level and the task-specific expert level, respectively, thereby fully capturing the sequential dependence among tasks. Compared with AITM, MNCM effectively avoids the first task in a task sequence becoming the sacrificed side of the seesaw phenomenon and helps mitigate potential conflicts among tasks. We conduct extensive experiments on open-source large-scale recommendation datasets. The experimental results demonstrate that MNCM outperforms AITM and the mainstream baseline models in both the mixture-of-experts-bottom pattern and the probability-transfer pattern. In addition, we conduct an ablation study on the necessity of introducing the two kinds of information transfer modules and verify the effectiveness of this design. | 710,547 |
Title: Self-supervision Meets Adversarial Perturbation: A Novel Framework for Anomaly Detection
Abstract: Anomaly detection is a fundamental yet challenging problem in machine learning due to the lack of label information. In this work, we propose a novel and powerful framework, dubbed SLA2P, for unsupervised anomaly detection. After extracting representative embeddings from the raw data, we apply random projections to the features and regard features transformed by different projections as belonging to distinct pseudo-classes. We then train a classifier network on these transformed features to perform self-supervised learning. Next, we add adversarial perturbations to the transformed features to decrease their softmax scores for the predicted labels, and design anomaly scores based on the classifier's predictive uncertainty on these perturbed features. Our motivation is that, because of the relatively small number and decentralized modes of anomalies, 1) the pseudo-label classifier's training concentrates more on learning the semantic information of normal data than of anomalous data, and 2) the transformed features of normal data are more robust to the perturbations than those of anomalies. Consequently, the perturbed transformed features of anomalies fail to be classified well and are accordingly scored as more anomalous than normal samples. Extensive experiments on image, text, and inherently tabular benchmark datasets back up our findings and indicate that SLA2P consistently achieves state-of-the-art anomaly detection performance. Our code is publicly available at https://github.com/wyzjack/SLA2P. | 710,548 |
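The pseudo-class construction described above (random projections, with the projection index as the label) can be sketched in a few lines. This NumPy illustration only builds the self-supervised dataset; it uses square Gaussian projection matrices as an assumption, and the classifier, adversarial perturbation, and scoring steps are omitted:

```python
import numpy as np

def make_pseudo_classification_task(feats, n_proj=4, seed=0):
    """SLA2P-style self-supervised task sketch: apply n_proj random
    projections to the features; each projection index becomes the
    pseudo-class label the classifier must predict.
    feats: array of shape (n_samples, dim)."""
    rng = np.random.default_rng(seed)
    dim = feats.shape[1]
    xs, ys = [], []
    for k in range(n_proj):
        w = rng.standard_normal((dim, dim))  # one random projection per class
        xs.append(feats @ w)
        ys.append(np.full(len(feats), k))
    return np.vstack(xs), np.concatenate(ys)
```

A classifier trained on (X, y) from this function learns which projection produced each feature, which is only feasible for data whose structure the projections preserve, i.e., predominantly the normal samples.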
Title: Towards a Learned Cost Model for Distributed Spatial Join: Data, Code & Models
Abstract: Geospatial data comprise around 60% of all the publicly available data. One of the essential and most complex operations that brings together multiple geospatial datasets is the spatial join operation. Due to its complexity, there are many partitioning techniques and parallel algorithms for the spatial join problem. This leads to a complex query optimization problem: which algorithm should be used for a given pair of input datasets that we want to join? With the rise of machine learning, there is promise in addressing this problem with the use of various learned models. However, one of the concerns is the lack of standard, publicly available data to train and test on, as well as the lack of accessible baseline models. This resource paper helps the research community solve this problem by providing synthetic and real datasets for spatial join, source code for constructing more datasets, and several baseline solutions that researchers can further extend and compare to. | 710,549 |
Title: Leveraging the Graph Structure of Neural Network Training Dynamics
Abstract: Understanding the training dynamics of deep neural networks (DNNs) is important as it can lead to improved training efficiency and task performance. Recent works have demonstrated that representing the wirings of neurons in feedforward DNNs as graphs is an effective strategy for understanding how architectural choices affect performance. However, these approaches fail to model training dynamics since a single, static graph cannot capture how DNNs change over the course of training. Thus, in this work, we propose a compact, expressive temporal graph framework that effectively captures the dynamics of many workhorse architectures in computer vision. Specifically, our framework extracts an informative summary of graph properties (e.g., degree, eigenvector centrality) over a sequence of DNN graphs obtained during training. We demonstrate that the proposed framework captures useful dynamics by accurately predicting trained task performance when using a summary over early training epochs (<5) across four different architectures and two image datasets. Moreover, by using a novel, highly scalable DNN graph representation, we further demonstrate that the proposed framework captures generalizable dynamics, as summaries extracted from smaller-width networks are effective when evaluated on larger widths. | 710,550 |
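A minimal sketch of the "summary of graph properties over a sequence of DNN graphs" idea, under the assumption that each training snapshot is given as an adjacency dict; only mean and max degree are computed here, whereas the paper's framework also uses richer properties such as eigenvector centrality.

```python
def degree_summary(snapshots):
    """Temporal summary sketch: for each DNN graph snapshot (a dict
    mapping node -> set of neighbours), record (mean degree, max degree).
    Concatenating these per-epoch tuples gives the summary vector."""
    summary = []
    for adj in snapshots:
        degrees = [len(neighbours) for neighbours in adj.values()]
        n = len(degrees) or 1
        summary.append((sum(degrees) / n, max(degrees, default=0)))
    return summary
```

Feeding such per-epoch summaries from the first few epochs to any regressor is the spirit of the early-performance-prediction experiment described above.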
Title: ML-1M++: MovieLens-Compatible Additional Preferences for More Robust Offline Evaluation of Sequential Recommenders
Abstract: Sequential recommendation is the task of predicting the next interacted item of a target user, given his/her past interaction sequence. Conventionally, sequential recommenders are evaluated offline with the last item in each sequence as the sole correct (relevant) label for the testing example of the corresponding user. However, little is known about how this sparsity of preference data affects the robustness of the offline evaluation's outcomes. To help researchers address this, we collect additional preference data via crowdsourcing. Specifically, we propose an assessment interface tailored to the sequential recommendation task and ask crowd workers to assess the (potential) relevance of each candidate item in MovieLens 1M, a commonly used dataset. Toward establishing a more robust evaluation methodology, we release the collected preference data, which we call ML-1M++, as well as the code of the assessment interface. | 710,551 |
Title: Improving Downstream Task Performance by Treating Numbers as Entities
Abstract: Numbers are essential components of text, like any other word tokens, from which natural language processing (NLP) models are built and deployed. Though numbers are typically not treated distinctly in most NLP tasks, NLP models still exhibit an underlying degree of numeracy. For instance, in named entity recognition (NER), numbers are not treated as an entity with distinct tags. In this work, we attempt to tap the potential of state-of-the-art language models and transfer their ability to boost performance in related downstream tasks dealing with numbers. Our proposed classification of numbers into entities helps NLP models perform well on several tasks, including a handcrafted Fill-In-The-Blank (FITB) task and question answering using joint embeddings, outperforming the BERT and RoBERTa baseline classifiers. | 710,552 |
Title: Global and Local Feature Interaction with Vision Transformer for Few-shot Image Classification
Abstract: Image classification is a classical machine learning task and has been widely used. Due to the high costs of annotation and data collection in real scenarios, few-shot learning has become a vital technique to improve image classification performance. However, most existing few-shot image classification methods only focus on modeling the global image feature or image local patches, ignoring the global-local interactions. In this study, we propose a new method, named GL-ViT, to integrate both global and local features to fully exploit the few-shot samples for image classification. Firstly, we design a feature extractor module to calculate the interactions between the global representation and local patch embeddings, where ViT is also adopted to achieve efficient and effective image representation. Then, Earth Mover's Distance is adopted to measure the similarity between two images. Extensive experimental results on several widely used open datasets show that GL-ViT outperforms state-of-the-art algorithms significantly, and our ablation studies also verify the effectiveness of both global and local features. | 710,553 |
Title: A Preliminary Exploration of Extractive Multi-Document Summarization in Hyperbolic Space
Abstract: Summary matching is a recently proposed paradigm for extractive summarization. It aims to calculate similarities between candidate summaries and their corresponding document and extract summaries by ranking similarities. Since natural language often exhibits inherent hierarchical structures ingrained with complex syntax and semantics, the latent hierarchical structures between candidate summaries and their corresponding document should be considered when calculating the summary-document similarities. However, this structural property is hard to model in the Euclidean space. Motivated by these issues, we explore extractive summarization in the hyperbolic space and propose a new Hyperbolic Siamese Network for matching-based extractive summarization (HyperSiameseNet). Specifically, HyperSiameseNet projects candidate summaries and their corresponding document representations from the Euclidean space to the hyperbolic space and then models the summary-document similarities via the squared Poincaré distance. Finally, the summary-document similarities are optimized by the margin-based triplet loss for extracting the final summary. Results on the Multi-News dataset show the superiority of our model HyperSiameseNet in comparison with the state-of-the-art baselines. | 710,554 |
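The two ingredients named above, the squared Poincaré distance and the margin-based triplet loss, can be written down directly. A minimal sketch with hypothetical names (a real model would learn the embeddings; all inputs must lie inside the unit ball):

```python
import math

def poincare_dist(u, v):
    """Geodesic distance in the Poincaré ball (inputs must have norm < 1)."""
    duv = sum((a - b) ** 2 for a, b in zip(u, v))
    nu = sum(a * a for a in u)
    nv = sum(b * b for b in v)
    return math.acosh(1 + 2 * duv / ((1 - nu) * (1 - nv)))

def triplet_loss(doc, good, bad, margin=1.0):
    """Margin-based triplet loss on squared Poincaré distances: the good
    candidate summary should sit closer to the document than the bad one."""
    return max(0.0, margin + poincare_dist(doc, good) ** 2
                          - poincare_dist(doc, bad) ** 2)
```

When the good candidate is already much closer than the bad one (beyond the margin), the loss is zero; otherwise the gradient pushes the embeddings apart.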
Title: ST-GAT: A Spatio-Temporal Graph Attention Network for Accurate Traffic Speed Prediction
Abstract: Spatio-temporal models, which combine GNNs (Graph Neural Networks) and RNNs (Recurrent Neural Networks), have shown state-of-the-art accuracy in traffic speed prediction. However, we find that they consider the spatial and temporal dependencies between speeds separately in the two (i.e., space and time) dimensions, and are thereby unable to exploit the joint dependencies of speeds in space and time. In this paper, with evidence from a preliminary analysis, we point out the importance of considering individual dependencies between two speeds from all possible points in space and time for accurate traffic speed prediction. Then, we propose an Individual Spatio-Temporal graph (IST-graph) that represents the Individual Spatio-Temporal dependencies (IST-dependencies) very effectively and a Spatio-Temporal Graph ATtention network (ST-GAT), a novel model to predict the future traffic speeds based on the IST-graph and the attention mechanism. The results from our extensive evaluation with five real-world datasets demonstrate (1) the effectiveness of the IST-graph in modeling traffic speed data, (2) the superiority of ST-GAT over 5 state-of-the-art models (i.e., 2-33% gains) in prediction accuracy, and (3) the robustness of our ST-GAT even in abnormal traffic situations. | 710,555 |
Title: Rethinking Large-scale Pre-ranking System: Entire-chain Cross-domain Models
Abstract: Industrial systems, such as recommender systems and online advertising, have been widely equipped with multi-stage architectures, which are divided into several cascaded modules, including matching, pre-ranking, ranking and re-ranking. As a critical bridge between matching and ranking, existing pre-ranking approaches mainly suffer from the sample selection bias (SSB) problem because they ignore the entire-chain data dependence, resulting in sub-optimal performance. In this paper, we rethink the pre-ranking system from the perspective of the entire sample space, and propose Entire-chain Cross-domain Models (ECM), which leverage samples from the whole cascaded stages to effectively alleviate the SSB problem. Besides, we design a fine-grained neural structure named ECMM to further improve the pre-ranking accuracy. Specifically, we propose a cross-domain multi-tower neural network to comprehensively predict results for each stage, and introduce a sub-network routing strategy with L0 regularization to reduce computational costs. Evaluations on real-world large-scale traffic logs demonstrate that our pre-ranking models outperform SOTA methods while keeping time consumption at an acceptable level, achieving a better trade-off between efficiency and effectiveness. | 710,556 |
Title: Data Oversampling with Structure Preserving Variational Learning
Abstract: Traditional oversampling methods are well explored for binary and multi-class imbalanced datasets. In most cases, the data space is adapted for oversampling the imbalanced classes. This leads to various issues, such as poor modelling of the structure of the data, resulting in overlap between minority and majority classes and poor classification performance on the minority class(es). To overcome these limitations, we propose a novel data oversampling architecture called Structure Preserving Variational Learning (SPVL). This technique captures an uncorrelated distribution among classes in the latent space using an encoder-decoder framework. Hence, minority samples are generated in the latent space, preserving the structure of the data distribution. The improved latent space distribution (oversampled training data) is evaluated by training an MLP classifier and testing on an unseen test dataset. The proposed SPVL method is applied to various benchmark datasets with (i) binary and multi-class imbalanced data, (ii) high-dimensional data and (iii) large- or small-scale data. Extensive experimental results demonstrate that the proposed SPVL technique outperforms its state-of-the-art counterparts. | 710,557 |
Title: CStory: A Chinese Large-scale News Storyline Dataset
Abstract: In today's massive news streams, storylines can help us discover related event pairs and understand the evolution of hot events. Hence, many efforts have been devoted to automatically constructing news storylines. However, the development of these methods is strongly limited by the size and quality of existing storyline datasets, since news storylines are expensive to annotate as they contain a myriad of unlabeled relationships growing quadratically with the number of news events. Working around these difficulties, we propose a sophisticated pre-processing method to filter candidate news pairs by entity co-occurrence and semantic similarity. With the filter reducing annotation overhead, we construct CStory, a large-scale Chinese news storyline dataset, which contains 11,978 news articles, 112,549 manually labeled storyline relation pairs, and 49,832 evidence sentences for annotation judgment. We conduct extensive experiments on CStory using various algorithms and find that constructing news storylines is challenging even for pre-trained language models. Empirical analysis shows that the sample imbalance issue significantly influences model performance, which we leave as a focus for future work. Our dataset is now publicly available at https://github.com/THU-KEG/CStory. | 710,558 |
Title: A Graph-based Spatiotemporal Model for Energy Markets
Abstract: Energy markets enable matching supply and demand through inter- and intra-region electricity trading. Due to the interconnected nature of energy markets, the supply-demand constraints in one region can impact prices in another connected region. To incorporate these spatiotemporal relationships, we propose a novel graph neural network architecture incorporating multidimensional time-series features to forecast price (a node attribute) and energy flow (an edge attribute) between regions simultaneously. To the best of our knowledge, this paper is the first attempt to combine node- and edge-level forecasting in energy markets. We show that our proposed approach has a mean absolute prediction percentage error of 12.8%, which significantly beats the state-of-the-art baseline techniques. | 710,559 |
Title: Measuring and Comparing the Consistency of IR Models for Query Pairs with Similar and Different Information Needs
Abstract: The widespread use of supervised ranking models has necessitated an investigation of how consistently their outputs align with user expectations. While a match between the user expectations and system outputs can be sought at different levels of granularity, we study this alignment for search intent transformation across a pair of queries. Specifically, we propose a consistency metric which, for a given pair of queries (one reformulated from the other with at least one term in common), measures whether the change in the set of top-retrieved documents induced by this reformulation is as per a user's expectation. Our experiments led to a number of observations; for example, DRMM (an early interaction-based IR model) exhibits better alignment with set-level user expectations, whereas transformer-based neural models (e.g., MonoBERT) agree more consistently with the content- and rank-based expectations of overlap. | 710,560 |
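One simple set-level building block for such a consistency metric is the overlap between the top-retrieved document sets of the two queries. A sketch of that idea (the paper's actual metric is richer and also considers rank- and content-based expectations):

```python
def topk_overlap(ranked_a, ranked_b, k=10):
    """Set-level consistency sketch: Jaccard overlap between the top-k
    documents retrieved for a query and for its reformulation."""
    a, b = set(ranked_a[:k]), set(ranked_b[:k])
    union = a | b
    return len(a & b) / len(union) if union else 1.0
```

An overlap near 1 means the reformulation barely changed the result set; comparing this against a user's stated expectation gives a binary consistency judgment per query pair.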
Title: A Model-Centric Explainer for Graph Neural Network based Node Classification
Abstract: Graph Neural Networks (GNNs) learn node representations by aggregating a node's feature vector with its neighbors'. They perform well across a variety of graph tasks. However, to enhance the reliability and trustworthiness of these models during use in critical scenarios, it is essential to look into the decision-making mechanisms of these models rather than treating them as black boxes. Our model-centric method gives insight into the kind of information learnt by GNNs about node neighborhoods during the task of node classification. We propose a neighborhood generator as an explainer that generates optimal neighborhoods to maximize a particular class prediction of the trained GNN model. We formulate neighborhood generation as a reinforcement learning problem and use a policy gradient method to train our generator using feedback from the trained GNN-based node classifier. Our method provides intelligible explanations of the learning mechanisms of GNN models on synthetic as well as real-world datasets and even highlights certain shortcomings of these models. | 710,561 |
Title: Explainable Graph-based Fraud Detection via Neural Meta-graph Search
Abstract: Though graph neural network (GNN)-based fraud detectors have achieved remarkable success in identifying fraudulent activities, few of them pay equal attention to the model's performance and explainability. In this paper, we attempt to achieve high performance for graph-based fraud detection while also considering model explainability. We propose NGS (Neural meta-Graph Search), in which the message passing process of a GNN is formalized as a meta-graph, and a differentiable neural architecture search is devised to determine the optimized message passing graph structure. We further enhance the model by aggregating multiple searched meta-graphs to make the final prediction. Experimental results on two real-world datasets demonstrate that NGS outperforms state-of-the-art baselines. In addition, the searched meta-graphs concisely describe the information used for prediction and produce reasonable explanations. | 710,562 |
Title: Robust Semi-supervised Domain Adaptation against Noisy Labels
Abstract: Semi-supervised domain adaptation (SSDA) is a well-explored task when built upon clean/correct labels, which, however, may not be easily obtained. This paper considers a challenging but practical scenario, i.e., noisy SSDA with polluted labels. Specifically, it is observed that abnormal samples exhibit more randomness and inconsistency across various views. To this end, we devise an anomaly score function to detect noisy samples based on the similarity of differently augmented instances. The noisy labeled target samples are re-weighted according to such anomaly scores, so that the abnormal data contribute less to model training. Moreover, pseudo-labeling usually suffers from confirmation bias. To remedy this, we introduce adversarial disturbance to raise the divergence across differently augmented views. The experimental results on the contaminated SSDA benchmarks demonstrate the effectiveness of our method over the baselines in both robustness and accuracy. | 710,563 |
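The detection idea, that noisy samples are less consistent across augmented views, can be sketched as a mean pairwise-distance score plus a down-weighting rule. Both the distance-based score and the 1/(1+s) weighting here are illustrative assumptions, not the paper's exact formulas:

```python
import math

def view_anomaly_score(views):
    """Mean pairwise distance between a sample's augmented embeddings:
    noisy samples are assumed to be less consistent across views."""
    total, pairs = 0.0, 0
    for i in range(len(views)):
        for j in range(i + 1, len(views)):
            total += math.dist(views[i], views[j])
            pairs += 1
    return total / pairs if pairs else 0.0

def sample_weight(views):
    """Down-weighting rule (an assumption for illustration): the more
    anomalous a sample looks, the less it contributes to training."""
    return 1.0 / (1.0 + view_anomaly_score(views))
```

Consistent samples keep a weight near 1, while samples whose views scatter widely are pushed toward 0.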
Title: FwSeqBlock: A Field-wise Approach for Modeling Behavior Representation in Sequential Recommendation
Abstract: Modeling users' historical behaviors is an essential task in many industrial recommender systems. In previous works, the user interest representation is obtained through the following paradigm: concrete behaviors are first embedded as low-dimensional behavior representations, which are then aggregated conditioning on the target item into the final user interest representation. Most existing research focuses on the aggregation process that explores the intrinsic structure of the behavior sequences. However, the quality of the behavior representation is largely ignored. In this paper, we present a pluggable module, FwSeqBlock, to enhance the expressiveness of behavior representations. Specifically, FwSeqBlock introduces the multiplicative operation among users' historical behaviors and the target item, where a field memory unit is designed to dynamically identify the dominant features from the behavior sequence and filter out the noise. Extensive experiments validate that FwSeqBlock consistently generates higher-quality user representations compared with competitive methods. Besides, online A/B testing reports a 4.46% improvement in Click-Through Rate (CTR), confirming the effectiveness of the proposed method. | 710,564 |
Title: Do Graph Neural Networks Build Fair User Models? Assessing Disparate Impact and Mistreatment in Behavioural User Profiling
Abstract: Recent approaches to behavioural user profiling employ Graph Neural Networks (GNNs) to turn users' interactions with a platform into actionable knowledge. The effectiveness of an approach is usually assessed with accuracy-based perspectives, where the capability to predict user features (such as gender or age) is evaluated. In this work, we perform a beyond-accuracy analysis of the state-of-the-art approaches to assess the presence of disparate impact and disparate mistreatment, meaning that users characterised by a given sensitive feature are unintentionally, but systematically, classified worse than their counterparts. Our analysis on two real-world datasets shows that different user profiling paradigms can impact fairness results. The source code and the preprocessed datasets are available at: https://github.com/erasmopurif/do_gnns_build_fair_models. | 710,565 |
Title: CLNews: The First Dataset of the Chilean Social Outbreak for Disinformation Analysis
Abstract: Disinformation is one of the main threats that loom over social networks. Detecting disinformation is not trivial and requires training and maintaining fact-checking teams, which is labor-intensive. Recent studies show that the propagation structure of claims and user messages allows a better understanding of rumor dynamics. Despite these findings, the availability of verified claims and structural propagation data is low. This paper presents a new dataset with Twitter claims verified by fact-checkers, along with the propagation structure of retweets and replies. The dataset contains verified claims checked during the Chilean social outbreak, which allows for studying the phenomenon of disinformation during this crisis. We study propagation patterns of verified content in CLNews, showing differences between false rumors and other types of content. Our results show that false rumors are more persistent than other verified content, reaching more people than truthful news and presenting low readability barriers to users. The dataset is fully available, helps in understanding the phenomenon of disinformation during social crises, and is one of the first of its kind to be released. | 710,566 |
Title: Plotly.plus, an Improved Dataset for Visualization Recommendation
Abstract: Visualization recommendation is a novel and challenging field of study, whose aim is to provide non-expert users with automatic tools for insight discovery from data. Advances in this research area are hindered by the absence of reliable datasets on which to train the recommender systems. To the best of our knowledge, the Plotly corpus is the only publicly available dataset, but as noted by many authors and discussed in this article, it contains many labeling errors, which greatly limits its usefulness. We release an improved version of the original dataset, named Plotly.plus, which we obtained through an automated procedure with minimal post-editing. In addition to a manual validation by a group of data science students, we demonstrate that when training two state-of-the-art abstract image classifiers on Plotly.plus, systems' performance improves more than twice as much as when the original dataset is used, showing that Plotly.plus facilitates the discovery of significant perceptual patterns. | 710,567 |
Title: Improving Graph-based Document-Level Relation Extraction Model with Novel Graph Structure
Abstract: Document-level relation extraction is a natural language processing task for extracting relations among entities in a document. Compared with sentence-level relation extraction, document-level relation extraction poses additional challenges. To acquire mutual information among entities in a document, recent studies have designed mention-level graphs or improved pretrained language models based on co-occurrence or coreference information. However, these methods cannot utilize the anaphoric information of pronouns, which plays an important role in document-level relation extraction. In addition, there is a possibility of losing lexical information about the relations among entities directly expressed in a sentence. To address these issues, we propose two novel graph structures: an anaphoric graph and a local-context graph. The proposed method outperforms the existing graph-based relation extraction method when applied to the document-level relation extraction dataset DocRED. | 710,568 |
Title: Cross-domain Prototype Learning from Contaminated Faces via Disentangling Latent Factors
Abstract: This paper focuses on an emerging and challenging problem called heterogeneous prototype learning (HPL) across face domains. It aims to learn the variation-free target-domain prototype for a contaminated input image from the source domain while preserving the personal identity. HPL involves two coupled subproblems, i.e., domain transfer and prototype learning. To address the two subproblems in a unified manner, we advocate disentangling the prototype and domain factors in their respective latent feature spaces, and replacing the latent source-domain features with the target-domain ones to generate the heterogeneous prototype. To this end, we propose a disentangled heterogeneous prototype learning framework, dubbed DisHPL, which consists of one encoder-decoder generator and two discriminators. The generator and discriminators play adversarial games such that the generator learns to embed the contaminated image into a prototype feature space capturing only identity information and a domain-specific feature space, as well as to generate a realistic-looking heterogeneous prototype. The two discriminators aim to predict personal identities and distinguish real prototypes from fake generated prototypes in the source/target domain. Experiments on various heterogeneous face datasets validate the effectiveness of DisHPL. | 710,569 |
Title: Locality Sensitive Hashing with Temporal and Spatial Constraints for Efficient Population Record Linkage
Abstract: Record linkage is the process of identifying which records within or across databases refer to the same entity. Min-hash based Locality Sensitive Hashing (LSH) is commonly used in record linkage as a blocking technique to reduce the number of records to be compared. However, when applied to large databases, min-hash LSH can yield highly skewed block-size distributions and many redundant record pair comparisons, of which only a few correspond to true matches (records that refer to the same entity). Furthermore, min-hash LSH is highly parameter-sensitive and requires trial and error to determine the optimal trade-off between blocking quality and the efficiency of the record pair comparison step. In this paper, we present a novel method to improve the scalability and robustness of min-hash LSH for linking large population databases by exploiting temporal and spatial information available in personal data, and by filtering record pairs based on block sizes and min-hash similarity. Our evaluation on three real-world data sets shows that our method can improve the efficiency of record pair comparison by 75% to 99%, whereas the final average linkage precision can be improved by 28% at the cost of a reduction in the average recall by 4%. | 710,570 |
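For readers unfamiliar with min-hash LSH blocking, here is a compact sketch: salted hashes produce a min-hash signature per record, signatures are banded into candidate blocks, and oversized blocks are dropped, loosely mirroring the block-size filtering described above. All names and parameter values are illustrative, not the paper's configuration.

```python
import hashlib

def minhash_signature(tokens, num_hashes=16):
    """Min-hash signature: for each of num_hashes salted hash functions,
    keep the minimum hash value over the record's tokens."""
    return tuple(
        min(int(hashlib.md5(f"{i}:{t}".encode()).hexdigest(), 16)
            for t in tokens)
        for i in range(num_hashes))

def lsh_blocks(records, bands=4, rows=4, max_block_size=100):
    """Band each signature into `bands` keys of `rows` values; records
    sharing a band key land in the same block. Singleton and oversized
    blocks are filtered out before pair comparison."""
    blocks = {}
    for rid, tokens in records.items():
        sig = minhash_signature(tokens, bands * rows)
        for b in range(bands):
            key = (b, sig[b * rows:(b + 1) * rows])
            blocks.setdefault(key, []).append(rid)
    return {k: v for k, v in blocks.items()
            if 1 < len(v) <= max_block_size}
```

Records with identical token sets share every band and are always blocked together; dissimilar records almost never collide on a full band.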
Title: Music4All-Onion -- A Large-Scale Multi-faceted Content-Centric Music Recommendation Dataset
Abstract: ABSTRACTWhen we appreciate a piece of music, it is most naturally because of its content, including rhythmic, tonal, and timbral elements as well as its lyrics and semantics. This suggests that the human affinity for music is inherently content-driven. This kind of information is, however, still frequently neglected by mainstream recommendation models based on collaborative filtering that rely solely on user-item interactions to recommend items to users. A major reason for this neglect is the lack of standardized datasets that provide both collaborative and content information. The work at hand addresses this shortcoming by introducing Music4All-Onion, a large-scale, multi-modal music dataset. The dataset expands the Music4All dataset by including 26 additional audio, video, and metadata characteristics for 109,269 music pieces. In addition, it provides a set of 252,984,396 listening records of 119,140 users, extracted from the online music platform Last.fm, which allows leveraging user-item interactions as well. We organize distinct item content features in an onion model according to their semantics, and perform a comprehensive examination of the impact of different layers of this model (e.g., audio features, user-generated content, and derivative content) on content-driven music recommendation, demonstrating how various content features influence accuracy, novelty, and fairness of music recommendation systems. In summary, with Music4All-Onion, we seek to bridge the gap between collaborative filtering music recommender systems and content-centric music recommendation requirements. | 710,571 |
Title: Not All Neighbors are Friendly: Learning to Choose Hop Features to Improve Node Classification
Abstract: The fundamental operation of Graph Neural Networks (GNNs) is the feature aggregation step performed over neighbors of the node based on the structure of the graph. In addition to its own features, the node gets additional combined features from its neighbors for each hop. These aggregated features help define the similarity or dissimilarity of the nodes with respect to the labels and are useful for tasks like node classification. However, in real-world data, features of neighbors at different hops may not correlate with the node's features. Thus, any indiscriminate feature aggregation by a GNN might add noisy features, leading to degradation in the model's performance. In this work, we show that selective aggregation leads to better performance than default aggregation on the node classification task. Furthermore, we propose the Dual-Net GNN architecture with a classifier model and a selector model. The classifier model trains over a subset of input node features to predict node labels, while the selector model learns to provide the optimal input subset to the classifier for the best performance. These two models are trained jointly to learn the best subset of features that gives higher accuracy in node label predictions. With extensive experiments, we show that our proposed model outperforms state-of-the-art GNN models with remarkable improvements of up to 27.8%. | 710,572 |
Title: Contextualized Formula Search Using Math Abstract Meaning Representation
Abstract: In math formula search, relevance is determined not only by the similarity of formulas in isolation, but also by their surrounding context. We introduce MathAMR, a new unified representation for sentences containing math. MathAMR generalizes Abstract Meaning Representation (AMR) graphs to include math formula operations and arguments. We then use Sentence-BERT to embed linearized MathAMR graphs for use in formula retrieval. In our first experiment, we compare MathAMR against raw text using the same formula representation (Operator Trees), and find that MathAMR produces more effective rankings. We then apply our MathAMR embeddings to reranking runs from the ARQMath-2 formula retrieval task, where in most cases effectiveness measures are improved. The strongest reranked run matches the best P′@10 of an original run, and exceeds the original runs in nDCG′@10. | 710,573 |
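The reranking step, embedding linearized MathAMR graphs and reordering an initial run by similarity, reduces to a cosine-similarity sort. A minimal sketch with hypothetical embeddings (a real system would obtain the vectors from Sentence-BERT):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors (0.0 if either is zero)."""
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv) if nu and nv else 0.0

def rerank(query_emb, run, doc_embs):
    """Reorder an initial retrieval run by similarity between the query
    embedding and each document's embedding."""
    return sorted(run, key=lambda d: cosine(query_emb, doc_embs[d]),
                  reverse=True)
```

The initial run supplies the candidate pool; the embedding similarity only changes the ordering, which is exactly the reranking setup described above.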
Title: Locality Aware Temporal FMs for Crime Prediction
Abstract: Crime forecasting techniques can play a leading role in hindering crime occurrences, especially in areas under possible threat. In this paper, we propose Locality Aware Temporal Factorization Machines (LTFMs) for crime prediction. Its locality representation module deploys a spatial encoder to estimate regional dependencies using Graph Convolutional Networks (GCNs). Then, the Point of Interest (POI) encoder computes the weighted attentive aggregation of location, crime, and POI latent representations. The dynamic crime representation module utilizes transformer-based positional encodings to capture the dependencies among space, time, and crime categories. The encodings learnt from the locality representation and crime category encoders are projected into a factorization machine-based architecture via a shared feed-forward network. An extensive comparison with state-of-the-art techniques, using Chicago and New York criminal records, shows the significance of LTFMs. | 710,574 |
Title: Robustness of Sketched Linear Classifiers to Adversarial Attacks
Abstract: ABSTRACTLinear classifiers are well-known to be vulnerable to adversarial attacks: they may predict incorrect labels for input data that are adversarially modified with small perturbations. However, this phenomenon has not been properly understood in the context of sketch-based linear classifiers, typically used in memory-constrained paradigms, which rely on random projections of the features for model compression. In this paper, we propose novel Fast-Gradient-Sign Method (FGSM) attacks for sketched classifiers in full, partial, and black-box information settings with regards to their internal parameters. We perform extensive experiments on the MNIST dataset to characterize their robustness as a function of perturbation budget. Our results suggest that, in the full-information setting, these classifiers are less accurate on unaltered input than their uncompressed counterparts but just as susceptible to adversarial attacks. But in more realistic partial and black-box information settings, sketching improves robustness while having lower memory footprint. | 710,575 |
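The full-information FGSM attack described above can be sketched for a linear classifier trained on randomly projected (sketched) features. The projection `R`, weight vector `w`, and perturbation budget below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 100, 20                                  # original and sketched dimensions (assumed)
R = rng.standard_normal((k, d)) / np.sqrt(k)    # random projection used for sketching
w = rng.standard_normal(k)                      # classifier weights on sketched features

def fgsm_perturb(x, y, eps):
    """Full-information FGSM: step against the sign of the margin gradient."""
    grad = y * (R.T @ w)                        # gradient of the margin y * w.(Rx) w.r.t. x
    return x - eps * np.sign(grad)

x = rng.standard_normal(d)
y = 1.0 if w @ (R @ x) >= 0 else -1.0           # treat the model's prediction as the label
x_adv = fgsm_perturb(x, y, eps=0.5)
margin_clean = y * (w @ (R @ x))
margin_adv = y * (w @ (R @ x_adv))
```

Each coordinate of `x_adv` moves by exactly `eps`, and the margin decreases by `eps * sum(|R.T w|)`, so the compressed model remains attackable whenever the attacker knows `R`.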
Title: Curriculum Contrastive Learning for Fake News Detection
Abstract: ABSTRACTDue to the rapid spread of fake news on social media, society and economy have been negatively affected in many ways. How to effectively identify fake news is a challenging problem that has received great attention from academia and industry. Existing deep learning methods for fake news detection require a large amount of labeled data to train the model, but obtaining labeled data is a time-consuming and labor-intensive process. To extract useful information from a large amount of unlabeled data, some contrastive learning methods for fake news detection have been proposed. However, existing contrastive learning methods only randomly sample negative samples at different training stages, resulting in the role of negative samples not being fully played. Intuitively, gradually increasing the contrastive difficulty of negative samples, in a way similar to human learning, will help improve the performance of the model. Inspired by the idea of curriculum learning, we propose a curriculum contrastive model (CCFD) for fake news detection which automatically selects and trains negative samples of different difficulty at different training stages. Furthermore, we also propose three new augmentation methods which consider the importance of edges and node attributes in the propagation structure to obtain more effective positive samples. The experimental results on three public datasets show that our model CCFD outperforms the existing state-of-the-art models for fake news detection. | 710,576
Title: A Prerequisite Attention Model for Knowledge Proficiency Diagnosis of Students
Abstract: ABSTRACTWith the rapid development of intelligent education platforms, how to enhance the performance of diagnosing students' knowledge proficiency has become an important issue, e.g., by incorporating the prerequisite relation of knowledge concepts. Unfortunately, the differentiated influence from different predecessor concepts to successor concepts is still underexplored in existing approaches. To this end, we propose a Prerequisite Attention model for Knowledge Proficiency diagnosis of students (PAKP), which learns the attentive weights of precursor concepts on successor concepts and exploits them to infer knowledge proficiency. Specifically, given the student response records and knowledge prerequisite graph, we design an embedding layer to output the representations of students, exercises, and concepts. Influence coefficients among concepts are calculated via an efficient attention mechanism in a fusion layer. Finally, the performance of each student is predicted based on the mined student and exercise factors. Extensive experiments on real-world datasets demonstrate that PAKP exhibits great efficiency and interpretability advantages without accuracy loss. | 710,577
Title: See Clicks Differently: Modeling User Clicking Alternatively with Multi Classifiers for CTR Prediction
Abstract: ABSTRACTMany recommender systems optimize click through rates (CTRs) as one of their core goals, and it further breaks down to predicting each item's click probability for a user (user-item click probability) and recommending the top ones to this particular user. User-item click probability is then estimated as a single term, and the basic assumption is that the user has different preferences over items. This is presumably true, but from real-world data, we observe that some people are naturally more active in clicking on items while some are not. This intrinsic tendency contributes to their user-item click probabilities. Besides this, when a user sees a particular item she likes, the click probability for this item increases due to this user-item preference. Therefore, instead of estimating the user-item click probability directly, we break it down into two finer attributes: user's intrinsic tendency of clicking and user-item preference. Inspired by studies that emphasize item features for overall enhancements and research progress in multi-task learning, we for the first time design a Multi Classifier Click Rate prediction model (MultiCR) to better exploit item-level information by building a separate classifier for each item. Furthermore, in addition to utilizing static user features, we learn implicit connections between user's item preferences and the often-overlooked indirect user behaviors (e.g., click histories from other services within the app). In a common new-campaign/new-service scenario, MultiCR outperforms various baselines in large-scale offline and online experiments and demonstrates good resilience when the amount of training data decreases. | 710,578 |
Title: Urban Region Profiling via Multi-Graph Representation Learning
Abstract: ABSTRACTProfiling urban regions is essential for urban analytics and planning. Although existing studies have made great efforts to learn urban region representation from multi-source urban data, there are still limitations in modelling local-level signals, developing an effective yet integrated fusion framework, and performing well in regions with high-variance socioeconomic attributes. Thus, we propose a multi-graph representation learning framework, called Region2Vec, for urban region profiling. Specifically, in addition to encoding human mobility for inter-region relations, geographic neighborhood is introduced for capturing geographical contextual information, while POI side information is adopted for representing intra-region information. Then, graphs are used to capture accessibility, vicinity, and functionality correlations among regions. An encoder-decoder multi-graph fusion module is further proposed to jointly learn comprehensive representations. Experiments on real-world datasets show that Region2Vec can be employed in three applications and outperforms all state-of-the-art baselines. Particularly, Region2Vec performs better than previous studies in regions with high-variance socioeconomic attributes. | 710,579
Title: Self-Paced and Discrete Multiple Kernel k-Means
Abstract: ABSTRACTMultiple Kernel K-means (MKKM) uses various kernels from different sources to improve clustering performance. However, most of the existing models are non-convex and prone to getting stuck in bad local optima, especially in the presence of noise and outliers. To address this issue, we propose a novel Self-Paced and Discrete Multiple Kernel K-Means (SPD-MKKM). It learns the MKKM model in a meaningful order by progressing both samples and kernels from easy to complex, which helps avoid bad local optima. In addition, whereas existing methods optimize in two stages, learning the relaxation matrix and then finding the discrete one by extra discretization, our method directly obtains the discrete cluster indicator matrix without this extra step. What's more, a well-designed alternative optimization is employed to reduce the overall computational complexity via the coordinate descent technique. Finally, thorough experiments performed on real-world datasets illustrate the excellence and efficacy of our method. | 710,580
Title: Scalable Multiple Kernel k-means Clustering
Abstract: ABSTRACTWith its simplicity and effectiveness, k-means is immensely popular, but it cannot perform well on complex nonlinear datasets. Multiple kernel k-means (MKKM) demonstrates the ability to describe highly complex, nonlinearly separable data structures. However, it cannot scale well as the data size grows beyond tens of thousands of samples. Nowadays, the explosion of digital data mandates more scalable clustering methods to support machine learning tasks. To address this issue, we propose to employ the Nyström scheme for MKKM clustering, termed scalable multiple kernel k-means clustering. It significantly reduces the computational complexity by replacing the original kernel matrix with a low-rank approximation. Analytically and empirically, we demonstrate that our method performs as well as existing state-of-the-art methods, but at a significantly lower compute cost, allowing us to scale the method more effectively for clustering tasks. | 710,581
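The Nyström scheme mentioned above can be sketched as follows: sample m landmark points, compute only the n×m cross-kernel, and derive rank-m explicit features whose inner products approximate the full kernel matrix, so ordinary k-means can replace the O(n²) kernel machinery. The RBF kernel, landmark count, and bandwidth below are illustrative assumptions:

```python
import numpy as np

def rbf(A, B, gamma):
    """RBF kernel between two sets of points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_features(X, m, gamma=1.0, seed=0):
    """Rank-m Nystrom features: Phi @ Phi.T approximates the full n x n kernel."""
    rng = np.random.default_rng(seed)
    landmarks = X[rng.choice(X.shape[0], size=m, replace=False)]
    C = rbf(X, landmarks, gamma)                 # n x m cross-kernel
    W = rbf(landmarks, landmarks, gamma)         # m x m landmark kernel
    vals, vecs = np.linalg.eigh(W)
    vals = np.clip(vals, 1e-12, None)            # guard against numerical negatives
    return C @ vecs @ np.diag(vals ** -0.5)      # Phi = C W^{-1/2}

X = np.random.default_rng(1).standard_normal((200, 5))
Phi = nystrom_features(X, m=20)                  # run vanilla k-means on Phi instead
K_approx = Phi @ Phi.T                           # low-rank surrogate for the kernel matrix
```

Because `Phi` has only m columns, clustering cost drops from quadratic in n (full kernel) to linear in n for fixed m.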
Title: PyKale: Knowledge-Aware Machine Learning from Multiple Sources in Python
Abstract: ABSTRACTPyKale is a Python library for Knowledge-aware machine learning from multiple sources of data to enable/accelerate interdisciplinary research. It embodies green machine learning principles to reduce repetitions/redundancy, reuse existing resources, and recycle learning models across areas. We propose a pipeline-based application programming interface (API) so all machine learning workflows follow a standardized six-step pipeline. PyKale focuses on leveraging knowledge from multiple sources for accurate and interpretable prediction, particularly multimodal learning and transfer learning. To be more accessible, it separates code and configurations to enable non-programmers to configure systems without coding. PyKale is officially part of the PyTorch ecosystem and includes interdisciplinary examples in bioinformatics, knowledge graph, image/video recognition, and medical imaging: https://pykale.github.io/. | 710,582 |
Title: MomNet: Gender Prediction using Mechanism of Working Memory
Abstract: ABSTRACTIn social media analysis, gender prediction is one of the most important tasks of user profiling. Web users often post messages in a timeline manner to record their living moments. These messages, containing texts and images, constitute long multi-modal data that potentially represent a user's lifestyle, preferences, or opinions. Therefore, it is feasible to predict the gender of a user by utilizing such living moments. However, the rich modalities (time, length, text, and image) of living moments pose difficult challenges and have not been fully exploited by the research communities for practical applications. To this end, we propose a novel gender prediction framework based on user-posted living Moments (MomNet). The MomNet mainly consists of a moment memory module and a central executive module inspired by two characteristics of working memory theory. One is that humans can associate related information to facilitate memory. Our moment memory module aggregates similar uni-modal moments of a user to form different chunks and encodes the chunks into moment memory representations. The other is that humans coordinate information from different modalities to make judgments. Our central executive module is designed to coordinate comprehensive attentions over moment memory representations from texts, images, and their combinations. Finally, a softmax classifier is used to predict gender. Extensive experiments conducted on a real-world public dataset show that our framework achieves 86.63% accuracy and outperforms all state-of-the-art methods in terms of accuracy. | 710,583
Title: Memory Augmented Graph Learning Networks for Multivariate Time Series Forecasting
Abstract: ABSTRACTMultivariate time series (MTS) forecasting is a challenging task. In MTS forecasting, we need to consider both intra-series temporal correlations and inter-series spatial correlations simultaneously. However, existing methods capture spatial correlations from the local data of the time series, without taking the global historical information of the time series into account. In addition, most graph-neural-network-based methods that mine temporal correlations suffer from the redundancy of information at adjacent time points in the time-series data, which introduces noise. In this paper, we propose a memory augmented graph learning network (MAGL), which captures the spatial correlations in terms of the global historical features of MTS. Specifically, we use a memory unit to learn from the local data of MTS. The memory unit records the global historical features of the time series, which are used to mine the spatial correlations. We also design a temporal feature distiller to reduce the noise in extracting temporal features. We extensively evaluate our model on four real-world datasets, comparing with several state-of-the-art methods. The experimental results show MAGL outperforms the state-of-the-art baseline methods on several datasets. | 710,584
Title: Embedding Global and Local Influences for Dynamic Graphs
Abstract: ABSTRACTGraph embedding is becoming increasingly popular due to its ability to represent large-scale graph data by mapping nodes to a low-dimensional space. Current research usually focuses on transductive learning, which aims to generate fixed node embeddings by training on the whole graph. However, dynamic graphs change constantly with new node additions and interactions. Unlike transductive learning, inductive learning attempts to dynamically generate node embeddings over time, even for unseen nodes, which is more suitable for real-world applications. Therefore, we propose an inductive dynamic graph embedding method called AGLI that aggregates global and local influences. We propose an aggregator function that integrates global influence with local influence to generate node embeddings at any time. We conduct extensive experiments on several real-world datasets and compare AGLI with several state-of-the-art baseline methods on various tasks. The experimental results show that AGLI achieves better performance than the state-of-the-art baseline methods. | 710,585
Title: ExpertBert: Pretraining Expert Finding
Abstract: ABSTRACTExpert finding is an important task in Community Question Answering (CQA) platforms, which helps route questions to users with the expertise to answer them. The key is to accurately model the question content and the experts based on their historical answered questions. Recently, Pretrained Language Models (PLMs, e.g., BERT) have shown superior text modeling ability and have seen preliminary use in expert finding. However, most PLMs-based models focus on the corpus or document granularity during pretraining, which is inconsistent with the downstream expert modeling and finding task. In this paper, we propose an expert-level pretrained language model named ExpertBert, aiming to model questions, experts, and question-expert matching effectively in a pretraining manner. In our approach, we aggregate the historical answered questions of an expert as the expert-specific input. Besides, we integrate the target question into the input and design a label-augmented Masked Language Model (MLM) task to further capture the matching pattern between questions and experts, which makes the pretraining objectives more closely resemble the downstream expert finding task. Experimental results and detailed analysis on real-world CQA datasets demonstrate the effectiveness of our ExpertBert. | 710,586
Title: Knowledge Distillation via Hypersphere Features Distribution Transfer
Abstract: ABSTRACTKnowledge distillation (KD) is a widely applicable DNN (Deep Neural Network) compression technology, which aims to transfer knowledge from a pretrained teacher neural network to a target student neural network. In practice, knowledge is extracted from an enormous teacher network and compressed to train a relatively compact student. In general, current KD approaches mostly minimize the divergence between the intermediate layers or logits of the teacher network and the student network. However, these methods ignore the important feature distribution in the teacher network's space, which leads to the weakness of current KD approaches on fine-grained categorization tasks, e.g., metric learning. To this end, we propose a novel approach that transfers the feature distribution in the hyperspherical space from the teacher network to the student network. Specifically, our approach facilitates the student to learn the distribution among samples in the teacher and reduces the intra-class variance. Extensive experimental evaluations on three well-known metric learning datasets show that our method can distill higher-level knowledge from the teacher network and achieve state-of-the-art performance. | 710,587
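As an illustration of distribution transfer on the hypersphere, the toy loss below normalizes teacher and student features to unit norm and penalizes mismatch between their pairwise cosine-similarity structures. This is a hedged sketch of the general idea, not the paper's exact objective:

```python
import numpy as np

def hypersphere_similarity(F):
    """Project features onto the unit hypersphere; return pairwise cosine similarities."""
    F = F / np.linalg.norm(F, axis=1, keepdims=True)
    return F @ F.T

def distribution_transfer_loss(F_teacher, F_student):
    # match the pairwise angular structure of the two feature spaces;
    # both similarity matrices are batch x batch, so the feature dims may differ
    diff = hypersphere_similarity(F_teacher) - hypersphere_similarity(F_student)
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(0)
F_t = rng.standard_normal((8, 64))   # teacher batch features (toy)
F_s = rng.standard_normal((8, 16))   # student may use a smaller embedding dimension
loss = distribution_transfer_loss(F_t, F_s)
```

Because only pairwise angles are compared, the student can have a much smaller embedding dimension than the teacher, which is exactly the compression setting KD targets.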
Title: JavaScript&Me, A Tool to Support Research into Code Transformation and Browser Security
Abstract: ABSTRACTDoing research into code variations and their applications to browser security is challenging. One of the most important aspects of this research is to choose a relevant dataset on which machine learning algorithms can be applied to yield useful results. Although JavaScript code is widely available on various sources, such as package managers, code hosting platforms, and websites, collecting a large corpus of JavaScript and curating it is not a simple task. We present a novel open-source tool that helps with this task by allowing the automatic and systematic collection, processing, and transformation of JavaScript code. These three steps are performed by independent modules, and each one can be extended to incorporate new features, such as additional code sources, or transformation tools, adding to the flexibility of our tool and expanding its usability. Additionally, we use our tool to create a corpus of around 270k JavaScript files, including regular, minified, and obfuscated code, on which we perform a brief analysis. The conclusions from this analysis show the importance of properly curating a dataset before using it in research tasks, such as machine learning classifiers, reinforcing the relevance of our tool. | 710,588 |
Title: Invariance Testing and Feature Selection Using Sparse Linear Layers
Abstract: ABSTRACTMachine learning testing and evaluation are largely overlooked by the community. In many cases, the only way to conduct testing is through formula-based scores, e.g., accuracy, f1, etc. However, these simple statistical scores cannot fully represent the performance of the ML model. Therefore, new testing frameworks are attracting more attention. In this work, we propose a novel invariance testing approach that does not rely on traditional statistical scores. Instead, we train a series of sparse linear layers, which are easier to compare due to their sparsity. We then use different divergence functions to numerically compare them and fuse the difference scores into a visual matrix. Additionally, testing with sparse linear layers allows us to apply a novel testing oracle, associativity, by comparing merged weights with weights obtained from combined augmentation. We then assess whether a model is invariant by checking the visual matrix, the associativity, and its sparse layers. Finally, we also show that this testing approach can potentially provide an actionable item for feature selection. | 710,589
Title: Heterogeneous Hypergraph Neural Network for Friend Recommendation with Human Mobility
Abstract: ABSTRACTFriend recommendation from human mobility is a vital real-world application of location-based social networks (LBSN). It is necessary to recognize patterns from human mobility to assist friend recommendation because previous works have shown complex relations between them. However, most previous works either modelled social networks and user trajectories separately, or only used classical simple graph-based methods, in which an edge linking two nodes cannot fully model the complex data structure of LBSN. Inspired by the fact that hyperedges can connect multiple nodes of different types, we model user trajectories and check-in records as hyperedges in a novel heterogeneous LBSN hypergraph to represent complex spatio-temporal information. We then design a type-specific attention mechanism for an end-to-end trainable heterogeneous hypergraph neural network (HHGNN) with supervised contrastive learning, which can learn hypergraph node embeddings for the downstream friend recommendation task. Finally, our model HHGNN outperforms the state-of-the-art methods on four real-world city datasets, while ablation studies also confirm the effectiveness of each model part. | 710,590
Title: An Extreme Semi-supervised Framework Based on Transformer for Network Intrusion Detection
Abstract: ABSTRACTNetwork intrusion detection (NID) aims to detect various network attacks and is an important task for guaranteeing network security. However, existing NID methods usually require a large amount of labeled data for training, which is impractical in many real application scenarios due to the high cost. To address this issue, we propose an extreme semi-supervised framework based on transformer (ESeT) for NID. ESeT first develops a multi-level feature extraction module to learn both packet-level byte-encoded features and flow-level frequency-domain features to enrich the information for detection. Then, during the semi-supervised learning, ESeT designs a dual-encoding transformer to fuse the extracted features for intrusion detection and introduces a credibility selector to reduce the negative impact of incorrect pseudo-labeling of unlabeled data. The experimental results show that our model achieves excellent performance (F1-score: 97.60%) with only a small proportion of labeled data (1%) on the CIC-IDS2017 and CSE-CIC-IDS2018 datasets. | 710,591
Title: CNewsTS - A Large-scale Chinese News Dataset with Hierarchical Topic Category and Summary
Abstract: ABSTRACTIn this paper, we present a large Chinese news article dataset with 4.4 million articles. These articles are obtained from different news channels and sources. They are labeled with multi-level topic categories, and some of them also have summaries. This is the first Chinese news dataset that has both hierarchical topic labels and article full texts. And it is also the largest Chinese news topic dataset. We describe the data collection, annotation and quality evaluation process. The basic statistics of the dataset, comparison with other datasets and benchmark experiments are also presented. | 710,592 |
Title: Dual-Augment Graph Neural Network for Fraud Detection
Abstract: ABSTRACTGraph Neural Networks (GNNs) have drawn attention due to their excellent performance in fraud detection tasks, which reveal fraudsters by aggregating the features of their neighbors. However, some fraudsters typically tend to alleviate their suspiciousness by connecting with many benign ones. Besides, a label-imbalanced neighborhood also deteriorates fraud detection accuracy. Such behaviors violate the homophily assumption and worsen the performance of GNN-based fraud detectors. In this paper, we propose a Dual-Augment Graph Neural Network (DAGNN) for fraud detection tasks. In DAGNN, we design a two-pathway framework including a disparity augment (DA) pathway and a similarity augment (SA) pathway. Accordingly, we devise two novel information aggregation strategies. One is to augment the disparity between the target node and its heterogeneous neighbors in the original topology. The other is to augment its similarity to homogeneous neighbors in a relatively label-balanced neighborhood. The experimental results compared with the state-of-the-art models on two real-world datasets demonstrate the superiority of the proposed DAGNN. | 710,593
Title: An Exploratory Study of Information Cocoon on Short-form Video Platform
Abstract: ABSTRACTIn recent years, short-form video platforms have emerged rapidly and attracted a large and wide variety of users, with the help of advanced recommendation algorithms. Despite the great success, the algorithms have caused some negative effects, such as information cocoons, algorithmic unfairness, etc. In this work, we focus on the information cocoon, which measures the overwhelming homogeneity of users' video consumption. Specifically, we conduct an exploratory study of this phenomenon on a top short-form video platform, with one-year behavioral records of new users. First, we evaluate the evolution of users' information cocoons and find a limitation in the diversity of video content that users consume. In addition, we further explore user cocoons via correlation analysis from three aspects, including user demographics, video content, and user-recommender interactions driven by algorithms and user preferences. Correspondingly, we observe that video content plays a more significant role in affecting user cocoons than demographics does. In terms of user-recommender interactions, more accurate personalization does not necessarily contribute to more severe information cocoons, while users with narrow preferences are more likely to be trapped. In summary, our study illuminates the current concern of information cocoons that may hurt user experience on short-form video platforms, and offers potential directions for mitigation implied by the correlation analysis. | 710,594
Title: Cooperative Max-Pressure Enhanced Traffic Signal Control
Abstract: ABSTRACTAdaptive traffic signal control is an important and challenging real-world problem that fits well with the task framework of deep reinforcement learning. As one of the critical design elements, the environmental state plays a crucial role in traffic signal control decisions. The state definitions of most existing works mostly contain lane-level queue length, intersection phase, and other features. However, these state representations are heuristically designed, which results in highly sensitive and unstable performance of subsequent actions. To cope with this problem, this paper proposes Cooperative Max-Pressure enhanced State Learning for traffic signal control (CMP-SL), inspired by the advanced pressure definition for an intersection in the transportation field. First, our CMP-SL explicitly extends the cooperative max-pressure to the state definition of a target intersection, aiming to obtain accurate environment information by including the traffic pressures of surrounding intersections. A graph attention mechanism (GAT) is then used to learn the state representation of the target intersection in our spatial-temporal state module. Second, since the state is coupled with the reward in reinforcement learning, our method takes the cooperative max-pressure of the target intersection into the reward definition. Furthermore, a temporal convolutional network (TCN) based sequence model is used to capture the historical state of traffic flow, and the historical spatial-temporal and current spatial state features are concatenated into a DQN network to predict the Q value and generate each phase action. Finally, experiments with two real-world traffic datasets demonstrate that our method achieves shorter average vehicle travel times and higher network throughput than the state-of-the-art models. | 710,595
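For readers unfamiliar with the transportation concept, classical max-pressure control (which CMP-SL extends cooperatively) picks the phase whose served movements have the largest total upstream-minus-downstream queue difference. The two-phase intersection and queue counts below are hypothetical:

```python
# hypothetical two-phase intersection: phase -> movements (upstream lane, downstream lane)
phases = {
    "NS": [("N_in", "S_out"), ("S_in", "N_out")],
    "EW": [("E_in", "W_out"), ("W_in", "E_out")],
}
queues = {"N_in": 8, "S_in": 5, "E_in": 2, "W_in": 1,
          "N_out": 3, "S_out": 3, "E_out": 0, "W_out": 4}

def pressure(phase):
    # pressure = sum over served movements of (upstream queue - downstream queue)
    return sum(queues[up] - queues[down] for up, down in phases[phase])

best_phase = max(phases, key=pressure)   # max-pressure releases the most congested phase
```

Here NS has pressure (8-3)+(5-3)=7 versus EW's (2-4)+(1-0)=-1, so the controller serves NS; a reinforcement-learning agent like CMP-SL uses such pressures as state and reward signals rather than as the control rule itself.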
Title: Do Simpler Statistical Methods Perform Better in Multivariate Long Sequence Time-Series Forecasting?
Abstract: ABSTRACTLong sequence time-series forecasting has become a central problem in multivariate time-series analysis due to the difficulty of consistently maintaining low prediction errors. Recent research has concentrated on developing large deep learning frameworks such as Informer and SCINet with remarkable results. However, these complex approaches were not benchmarked against simpler statistical methods, and hence this part of the puzzle is missing for multivariate long sequence time-series forecasting (MLSTF). We investigate two simple statistical methods for MLSTF and provide analysis indicating that linear regression has a lower upper bound of error than deep learning methods, and that SNaive can act as an effective nonparametric method when trends are unpredictable. Evaluations across six real-world datasets demonstrate that linear regression and SNaive are able to achieve state-of-the-art performance for MLSTF. | 710,596
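The two statistical baselines can be sketched in a few lines; the seasonal period, lag count, and roll-forward strategy are illustrative assumptions rather than the paper's exact configurations:

```python
import numpy as np

def snaive_forecast(y, season, horizon):
    """Seasonal naive: each future step repeats the value one season earlier."""
    return np.array([y[-season + (h % season)] for h in range(horizon)])

def linreg_forecast(y, lags, horizon):
    """Autoregressive linear regression fitted by least squares, rolled forward."""
    X = np.stack([y[i:i + lags] for i in range(len(y) - lags)])
    coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y[lags:], rcond=None)
    hist = list(y)
    for _ in range(horizon):
        hist.append(coef[:-1] @ np.asarray(hist[-lags:], dtype=float) + coef[-1])
    return np.array(hist[len(y):])

y = np.array([10.0, 20.0, 30.0, 40.0, 11.0, 21.0, 31.0, 41.0])  # period-4 toy series
fc_snaive = snaive_forecast(y, season=4, horizon=6)
fc_linreg = linreg_forecast(y, lags=4, horizon=6)
```

Neither baseline has hyperparameters beyond the season/lag length, which is what makes them cheap yardsticks against Informer-style models.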
Title: A Hierarchical User Behavior Modeling Framework for Cross-Domain Click-Through Rate Prediction
Abstract: ABSTRACTClick-through rate (CTR) prediction is a long-standing problem in advertising systems. Existing single-domain CTR prediction methods suffer from the data sparsity problem, since each user clicks advertisements on only a few of many items. Recently, cross-domain CTR prediction leverages the relatively richer information from a source domain to improve the performance on a target domain with sparser information, but it cannot explicitly capture users' diverse interests in different domains. In this paper, we propose a novel hierarchical user behavior modeling framework for cross-domain CTR prediction, named HBMNet. HBMNet contains two main components: an element-wise behavior transfer (EWBT) layer and a user representation layer. The EWBT layer transfers the information collected from one domain via element-level masks to dynamically highlight the informative elements in another domain. The user representation layer performs behavior-level attention between these behavior representations and the ranking item representation. Extensive experimental results on two cross-domain datasets show that the proposed HBMNet outperforms SOTA models. | 710,597
Title: A Multi-grained Dataset for News Event Triggered Knowledge Update
Abstract: ABSTRACTKeeping knowledge facts up-to-date is laborious and costly, as the world rapidly changes and new information emerges every second. In this work, we introduce a novel task, news event triggered knowledge update. Given an existing article about a topic, together with a news event about that topic, the aim of our task is to generate an updated article according to the information from the news event. We create a multi-grained dataset for the investigation of our task. The articles from Wikipedia are collected and aligned with news events at multiple language units, including the citation text, the first paragraph, and the full content of the news article. Baseline models are also explored at three levels of knowledge update, including the first paragraph, the summary, and the full content of the knowledge facts. | 710,598
Title: Efficient Data Augmentation Policy for Electrocardiograms
Abstract: ABSTRACTWe present a taxonomy of data augmentation for electrocardiograms (ECG) after reviewing various ECG augmentation methods. On the basis of the taxonomy, we demonstrate the effect of augmentation methods on ECG classification via extensive experiments. First, we examine the performance trend as the magnitude of distortion increases and identify the optimal distortion magnitude. Second, we investigate synergistic combinations of the transformations and identify the pairs of transformations with the greatest positive effect. Finally, based on our experimental findings, we propose an efficient augmentation policy and demonstrate that it outperforms previous augmentation policies. | 710,599
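Two common signal-level ECG transformations, with the explicit distortion-magnitude parameter such experiments sweep; the sigma values and their pairing below are illustrative, not the paper's recommended policy:

```python
import numpy as np

rng = np.random.default_rng(0)

def jitter(x, sigma):
    """Additive Gaussian noise; sigma is the distortion magnitude."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def scaling(x, sigma):
    """Multiply each channel by a random factor drawn around 1."""
    return x * rng.normal(1.0, sigma, size=(x.shape[0], 1))

ecg = np.sin(np.linspace(0, 8 * np.pi, 500))[None, :]    # toy single-lead signal
augmented = scaling(jitter(ecg, sigma=0.05), sigma=0.1)  # one candidate pair
```

Sweeping `sigma` over a grid and composing pairs of such transforms is the experimental pattern the abstract describes for finding the optimal magnitude and the most synergistic combinations.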
Title: Neuron Specific Pruning for Communication Efficient Federated Learning
Abstract: ABSTRACTFederated Learning (FL) is a distributed training framework in which a model is collaboratively trained over a set of clients without communicating their private data to the central server; instead, each client shares the parameters of its local model. The first challenge faced by FL is the high communication cost due to the size of Deep Neural Network (DNN) models. Pruning is an efficient technique to reduce the number of parameters in DNN models, in which insignificant neurons are removed from the model. This paper introduces a federated pruning method based on the Neuron Importance Scope Propagation (NISP) algorithm. The importance scores of output layer neurons are back-propagated layer-wise to every neuron in the network. The central server iteratively broadcasts the sparsified weights to all selected clients. Then, each participating client intermittently downloads the mask vector and reconstructs the weights in their original form. The locally updated model is pruned using the mask vector and shared with the server. After receiving model updates from each participating client, the server reconstructs and aggregates the weights. Experiments on the MNIST and CIFAR10 datasets demonstrate that the proposed approach achieves accuracy close to the Federated Averaging (FedAvg) algorithm with less communication cost. | 710,600
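The mask-based sparsify/reconstruct round trip between server and client can be sketched as below. The `|w|` importance proxy and keep ratio are illustrative assumptions; NISP back-propagates importance scores from the output layer instead:

```python
import numpy as np

def top_k_mask(scores, keep_ratio):
    """Boolean mask keeping the neurons with the highest importance scores."""
    k = max(1, int(len(scores) * keep_ratio))
    mask = np.zeros(len(scores), dtype=bool)
    mask[np.argsort(scores)[-k:]] = True
    return mask

def reconstruct(values, mask):
    """Client side: rebuild the dense weight vector from the sparse payload."""
    dense = np.zeros(mask.shape)
    dense[mask] = values
    return dense

w = np.array([0.1, -2.0, 0.03, 1.5, -0.4])   # toy layer weights
scores = np.abs(w)                            # proxy scores; NISP computes real ones
mask = top_k_mask(scores, keep_ratio=0.4)
payload = w[mask]                             # server sends only surviving weights + mask
w_rebuilt = reconstruct(payload, mask)
```

Only `payload` and the (cheap, boolean) mask cross the network each round, which is where the communication saving over dense FedAvg updates comes from.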
Title: EEG-Oriented Self-Supervised Learning and Cluster-Aware Adaptation
Abstract: ABSTRACTRecently, deep learning-based electroencephalogram (EEG) analysis and decoding have gained widespread attention for monitoring a user's clinical condition or identifying his/her intention/emotion. Nevertheless, existing methods mostly model EEG signals from limited viewpoints or with restricted consideration of their characteristics, and thus struggle to represent complex spatio-spectro-temporal patterns as well as inter-subject variability. In this work, we propose novel EEG-oriented self-supervised learning methods to discover complex and diverse patterns of spatio-spectral characteristics and spatio-temporal dynamics. Combined with the proposed self-supervised representation learning, we also devise a feature normalization strategy to resolve the inter-subject variability problem via clustering. We demonstrate the validity of the proposed framework on three publicly available datasets by comparing it with state-of-the-art methods. | 710,601 |
Title: Is It Enough Just Looking at the Title?: Leveraging Body Text To Enrich Title Words Towards Accurate News Recommendation
Abstract: ABSTRACTIn a news recommender system, a user tends to click on a news article if she is interested in its topic as understood by looking at its title. Such behavior is possible since, when viewing the title, humans naturally think of the contextual meaning of each title word by leveraging their own background knowledge. Motivated by this, we propose a novel personalized news recommendation framework CAST (Context-aware Attention network with a Selection module for Title word representation), which is capable of enriching title words by leveraging body text, which fully provides the whole content of a given article, as the context. Through extensive experiments, we demonstrate (1) the effectiveness of core modules in CAST, (2) the superiority of CAST over 9 state-of-the-art news recommendation methods, and (3) the interpretability of CAST. | 710,602 |
Title: Context-aware Traffic Flow Forecasting in New Roads
Abstract: ABSTRACTThis paper focuses on the problem of forecasting the daily traffic of new roads, where very little data is available for prediction. We propose a novel prediction model based on Generative Adversarial Networks (GAN) that learns the subtle patterns of changes in traffic flow according to various contextual factors. The trained generator then makes a prediction by generating realistic traffic flow data for a target new road given its weather and day type. Both the quantitative and qualitative results of our extensive experiments indicate the effectiveness of our method. | 710,603 |
Title: Bootstrapped Knowledge Graph Embedding based on Neighbor Expansion
Abstract: ABSTRACTMost Knowledge Graph (KG) embedding models require negative sampling to learn the representations of a KG by discriminating between positive and negative triples. Knowledge representation learning tasks such as link prediction are heavily influenced by the quality of negative samples. Despite many attempts, generating high-quality negative samples remains a challenge. In this paper, we propose a novel framework, Bootstrapped Knowledge graph Embedding based on Neighbor Expansion (BKENE), which learns representations of a KG without using negative samples. Our model avoids augmentation methods that can alter the semantic information when creating the two semantically similar views of the KG. In particular, we generate an alternative view of the KG by aggregating the information of the expanded neighbors of each node with multi-hop relations. Experimental results show that our BKENE outperforms the state-of-the-art methods for link prediction tasks. | 710,604 |
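The abstract specifies only that two semantically similar views of the KG are aligned without negative samples. One common way to realize such a bootstrapped objective, offered here as a hedged sketch rather than BKENE's actual loss, is to minimize one minus the cosine similarity between each entity's embeddings in the two views:

```python
def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def alignment_loss(view_a, view_b):
    """Negative-sample-free objective: pull each entity's embedding in the
    original view toward its embedding in the multi-hop neighbor-expanded
    view (1 - cosine similarity, averaged over entities)."""
    losses = [1.0 - cosine(u, v) for u, v in zip(view_a, view_b)]
    return sum(losses) / len(losses)
```

Because the loss only rewards agreement between views, no contrastive negatives are needed, which is the property the framework exploits.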
Title: Mining Entry Gates for Points of Interest
Abstract: ABSTRACTIn this paper, we propose two algorithms for identifying entry gates for Points of Interest (PoIs) using polygon representations of the PoIs (PoI polygons) and the Global Positioning System (GPS) trajectories of the Delivery Partners (DPs) obtained from their smartphones in the context of online food delivery platforms. PoIs include residential complexes, office complexes, and educational institutes where customers can order from. Identifying entry gates of PoIs helps avoid delivery hassles by routing the DPs to the nearest entry gates for customers within the PoIs. The DPs mark 'reached' on their smartphone applications when they reach the entry gate or the parking spot of the PoI. However, it is not possible to ensure compliance, and the 'reached' locations are dispersed throughout the PoI. The first algorithm is based on density-based clustering of GPS traces where the DPs mark 'reached'. The clusters that overlap with the PoI polygon as measured by a metric that we propose, namely Cluster Fraction in Polygon (CFIP), are declared as entry gate clusters. The second algorithm obtains the entry gate clusters as density-based clustering of intersections of GPS trajectories of the DPs with the PoI polygon edges. The entry gates are obtained as median centroids of the entry gate clusters for both the algorithms which are then snapped to the nearest polygon edge in the case of the first algorithm. We evaluate the algorithms for a few thousand PoIs across 9 large cities in India using appropriately defined precision and recall metrics. For single-gate PoIs, we obtain a mean precision of 84%, a mean recall of 77%, and an average haversine distance error of 14.7 meters for the first algorithm. For the second algorithm, the mean precision is the same as the first algorithm while the recall obtained is 78% and the average haversine distance error is 14.3 meters. The algorithmically identified gates were evaluated by manual validation. 
To the best of our knowledge, this is the first published work with metrics that solves for a "last-last mile" entity of digital maps for India, i.e., the entry gates. | 710,605 |
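The proposed Cluster Fraction In Polygon (CFIP) metric amounts to the share of a cluster's 'reached' points that lie inside the PoI polygon. Below is a minimal sketch using a standard ray-casting point-in-polygon test; the exact boundary handling and decision threshold used in the paper are assumptions here:

```python
def point_in_polygon(pt, poly):
    """Ray-casting test for a simple polygon given as a list of (x, y)."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def cfip(cluster, poly):
    """Cluster Fraction In Polygon: share of a cluster's 'reached'
    points that fall inside the PoI polygon."""
    hits = sum(point_in_polygon(p, poly) for p in cluster)
    return hits / len(cluster)
```

Clusters whose CFIP exceeds a chosen threshold would be declared entry-gate clusters, with the gate placed at the cluster's median centroid.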
Title: Convolutional Transformer Networks for Epileptic Seizure Detection
Abstract: ABSTRACTEpilepsy is a chronic neurological disease that affects many people in the world. Automatic epileptic seizure detection based on electroencephalogram (EEG) signals is of great significance and has been widely studied. Current deep learning epilepsy detection algorithms are often designed to be relatively simple and seldom consider the characteristics of EEG signals. In this paper, we propose a promising epilepsy detection model based on convolutional transformer networks. We demonstrate that integrating convolution and transformer modules can achieve higher detection performance. Our convolutional transformer model is composed of two branches: one extracts time-domain features from multiple inputs of channel-exchanged EEG signals, and the other handles frequency-domain representations. Experiments on two EEG datasets show that our model offers state-of-the-art performance. Particularly on the CHB-MIT dataset, our model achieves 96.02% average sensitivity and 97.94% average specificity, outperforming other existing methods by clear margins. | 710,606 |
Title: AI-Augmented Art Psychotherapy through a Hierarchical Co-Attention Mechanism
Abstract: ABSTRACTOne of the significant social problems emerging in modern society is mental illness, and a growing number of people are seeking psychological help. Art therapy is a technique that can alleviate psychological and emotional conflicts through creation. However, the expression of a drawing varies by individual, and the subjective judgments made by art therapists raise the need for an objective assessment. In this paper, we present M2C (Multimodal classification with 2-stage Co-attention), a deep learning model that predicts stress from art therapy psychological test data. M2C employs a co-attention mechanism that combines two modalities, drawings and post-questionnaire answers, to complement the weaknesses of each, corresponding to therapists' psychometric diagnostic processes. The results of the experiment show that M2C yielded higher performance than other state-of-the-art single- or multi-modal models, demonstrating the effectiveness of the co-attention approach that reflects the diagnosis process. | 710,607 |
Title: RealGraphGPU: A High-Performance GPU-Based Graph Engine toward Large-Scale Real-World Network Analysis
Abstract: ABSTRACTA graph, consisting of vertices and edges, has been widely adopted for network analysis. Recently, with the increasing size of real-world networks, many graph engines have been studied to efficiently process large-scale real-world graphs. RealGraph, one of the state-of-the-art single-machine-based graph engines, efficiently processes storage-to-memory I/Os by considering unique characteristics of real-world graphs. Via an in-depth analysis of RealGraph, however, we found that there is still room for performance improvement in the computation part of RealGraph despite its great I/O processing ability. Motivated by this, in this paper, we propose RealGraphGPU, a GPU-based single-machine graph engine. We design the core components required for GPU-based graph processing and incorporate them into the architecture of RealGraph. Further, we propose two optimizations that successfully address the technical issues that could cause performance degradation in a GPU-based graph engine: buffer pre-checking and edge-based workload allocation strategies. Through extensive evaluation with 6 real-world datasets, we demonstrate that (1) RealGraphGPU improves RealGraph by up to 546%, (2) RealGraphGPU dramatically outperforms existing state-of-the-art graph engines, and (3) the optimizations are all effective in large-scale graph processing. | 710,608 |
Title: LGP: Few-Shot Class-Evolutionary Learning on Dynamic Graphs
Abstract: ABSTRACTGraph few-shot learning aims to learn how to quickly adapt to new tasks using only a few labeled data, transferring learned knowledge of base classes to novel classes. Existing methods are mainly designed for static graphs, while many real-world graphs are dynamic and evolve over time, resulting in a phenomenon of structure and class evolution. To address the challenges caused by this phenomenon, in this paper, we propose a novel algorithm named Learning to Generate Parameters (LGP) to deal with few-shot class-evolutionary learning on dynamic graphs. Specifically, for the structure evolution, LGP integrates ensemble learning into a backbone network to effectively learn invariant representations across different snapshots within a dynamic graph. For the class evolution, LGP adopts a meta-learning strategy that learns to generate the classification parameters of novel classes from the parameters of the base classes. Therefore, LGP can quickly adapt to new tasks on a combination of base and novel classes. In addition, LGP utilizes an attention mechanism to capture the evolutionary pattern between the novel and base classes. Extensive experiments on a real-world dataset demonstrate the effectiveness of LGP. | 710,609 |
Title: GDA-HIN: A Generalized Domain Adaptive Model across Heterogeneous Information Networks
Abstract: ABSTRACTDomain adaptation using graph-structured networks learns label-discriminative and network-invariant node embeddings by sharing graph parameters. Most existing works focus on domain adaptation of homogeneous networks. The few works that study heterogeneous cases only consider shared node types but ignore private node types in individual networks. However, for given source and target heterogeneous networks, they generally contain shared and private node types, where private types bring an extra challenge for graph domain adaptation. In this paper, we investigate Heterogeneous Information Networks (HINs) with both shared and private node types and propose a Generalized Domain Adaptive model across HINs (GDA-HIN) to handle the domain shift between them. GDA-HIN can not only align the distribution of identical-type nodes and edges in two HINs but also make full use of different-type nodes and edges to improve the performance of knowledge transfer. Extensive experiments on several datasets demonstrate that GDA-HIN can outperform state-of-the-art methods in various domain adaptation tasks across heterogeneous networks. | 710,610 |
Title: Deep Presentation Bias Integrated Framework for CTR Prediction
Abstract: ABSTRACTIn online advertising, click-through rate (CTR) prediction typically utilizes click data to train models for estimating the probability of a user clicking on an item. However, the different presentations of an item, including its position, contextual items, and so on, affect the user's attention and lead to different click propensities; thus, presentation bias arises. Most previous works consider only position bias and pay less attention to the overall presentation bias, including context. Moreover, since the final presentation list is unreachable during online inference, the bias-independence assumption is adopted so that the debiased relevance can be used directly for ranking. This assumption is difficult to hold, however, because the click propensity toward an item's presentation varies with user intent. Therefore, a predicted CTR with personalized click propensity, rather than debiased relevance, should be closer to the real CTR. In this work, we propose a Deep Presentation Bias Integrated Framework (DPBIF). With DPBIF, the presentation block containing an item and the contextual items on the same screen is introduced into the user behavior sequence and the predicted target item, personalizing the integration of presentation bias caused by different click propensities into the CTR prediction network. While avoiding modeling under the independence assumption, the network is capable of estimating multiple integrated CTRs under different presentations for each item. The multiple CTRs are used to transform the ranking problem into an item-to-position assignment problem, so that the Kuhn-Munkres (KM) algorithm can be employed to optimize the global benefit of the presentation list. Extensive offline experiments and online A/B tests are performed in a real-world system to demonstrate the effectiveness of the proposed framework. | 710,611 |
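The final ranking step above is an item-to-position assignment that maximizes total predicted CTR. The paper employs the Kuhn-Munkres (KM) algorithm; the sketch below substitutes a brute-force search over permutations, which gives the same optimum on toy sizes (a production system would use a real KM solver, e.g., SciPy's linear_sum_assignment):

```python
from itertools import permutations

def best_assignment(ctr):
    """ctr[i][p] = predicted CTR of item i shown at position p.
    Returns (best total CTR, item->position assignment).
    Brute force stands in for Kuhn-Munkres on toy problem sizes."""
    n = len(ctr)
    best, best_perm = -1.0, None
    for perm in permutations(range(n)):
        total = sum(ctr[i][perm[i]] for i in range(n))
        if total > best:
            best, best_perm = total, perm
    return best, list(best_perm)
```

Note how a greedy per-item ranking would put item 0 at position 0, whereas the global assignment can swap items to raise the list's total benefit.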
Title: Pattern Adaptive Specialist Network for Learning Trading Patterns in Stock Market
Abstract: ABSTRACTStock prediction is a challenging task due to the uncertainty of stock markets. Despite the success of previous works, most of them rely on the assumption that stock data are independent and identically distributed, while the existence of multiple trading patterns in the stock market violates it. Ignoring these multiple patterns inevitably leads to performance decline, and the lack of prior pattern knowledge further hinders the learning of patterns. In this paper, we propose a novel training process, Pattern Adaptive Training, based on Optimal Transport (OT) to train a set of predictors specializing in diverse patterns without any prior pattern knowledge or the inconsistent assumption. Based on this process, we further mine the potential fitness rank among specialists and design the Pattern Adaptive Specialist Network (PASN) with a proposed ranking-based selector to choose the appropriate specialist predictor for each sample. Extensive experimental results show that our method achieves the best IC and other metrics on real-world stock datasets. | 710,612 |
Title: Extreme Systematic Reviews: A Large Literature Screening Dataset to Support Environmental Policymaking
Abstract: ABSTRACTThe United States Environmental Protection Agency (EPA) periodically releases Integrated Science Assessments (ISAs) that synthesize the latest research on each of six air pollutants to inform environmental policymaking. To guarantee the best possible coverage of relevant literature, EPA scientists spend months manually screening hundreds of thousands of references to identify a small proportion to be cited in an ISA. The challenge of extreme scale and the pursuit of maximum recall calls for effective machine-assisted approaches to reducing the time and effort required by the screening process. This work introduces the ISA literature screening dataset and the associated research challenges to the information and knowledge management community. Our pilot experiments show that combining multiple approaches in tackling this challenge is both promising and necessary. The dataset is available at https://catalog.data.gov/dataset/isa-literature-screening-dataset-v-1. | 710,613 |
Title: Semi-supervised Continual Learning with Meta Self-training
Abstract: ABSTRACTContinual learning (CL) aims to enhance sequential learning by alleviating the forgetting of previously acquired knowledge. Recent advances in CL lack consideration of real-world scenarios, where labeled data are scarce and unlabeled data are abundant. To narrow this gap, we focus on semi-supervised continual learning (SSCL). We exploit unlabeled data under limited supervision in the CL setting and demonstrate the feasibility of semi-supervised learning in CL. In this work, we propose a novel method, namely Meta-SSCL, which combines meta-learning with pseudo-labeling and data augmentations to learn a sequence of semi-supervised tasks without catastrophic forgetting. Extensive experiments on CL benchmark text classification datasets show that our method achieves promising results in SSCL. | 710,614 |
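Pseudo-labeling of the kind mentioned above is typically confidence-thresholded. The sketch below is a generic version; the threshold value and the rule of dropping uncertain samples are assumptions, not Meta-SSCL's exact mechanism:

```python
def pseudo_label(probs, threshold=0.9):
    """Assign a pseudo-label only when the model is confident; uncertain
    unlabeled samples are left out of the current training round."""
    labels = []
    for p in probs:  # p: class-probability vector for one unlabeled sample
        best = max(range(len(p)), key=lambda c: p[c])
        labels.append(best if p[best] >= threshold else None)
    return labels
```

Confident pseudo-labels then join the scarce labeled data for the current task, while the meta-learner guards against forgetting earlier tasks.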
Title: Facial Emotion Recognition of Virtual Humans with Different Genders, Races, and Ages
Abstract: ABSTRACTResearch studies suggest that racial and gender stereotypes can influence emotion recognition accuracy both for adults and children. Stereotypical biases have severe consequences in social life but are especially critical in domains such as education and healthcare, where virtual humans have been extending their applications. In this work, we explore potential perceptual differences in the facial emotion recognition accuracy of virtual humans of different genders, races, and ages. We use realistic 3D models of male/female, Black/White, and child/adult characters. Using blendshapes and the Facial Action Coding System, we created videos of the models displaying facial expressions of six universal emotions with varying intensities. We ran an Amazon Mechanical Turk study to collect perceptual data. The results indicate statistically significant main effects of emotion type and intensity on emotion recognition accuracy. Although overall emotion recognition accuracy was similar across model race, gender, and age groups, there were some statistically significant effects across different groups for individual emotion types. | 710,615 |
Title: Query-Aware Sequential Recommendation
Abstract: ABSTRACTSequential recommenders aim to capture users' dynamic interests from their historical action sequences, but remain challenging due to data sparsity issues, as well as the noisy and complex relationships among items in a sequence. Several approaches have sought to alleviate these issues using side-information, such as item content (e.g., images) and action types (e.g., click, purchase). While useful, we argue that one of the main contextual signals is largely ignored, namely users' queries. When users browse and consume products (e.g., music, movies), their sequential interactions are usually a combination of queries, clicks, etc. Most interaction datasets discard queries, and corresponding methods simply model sequential behaviors over items, thus ignoring this critical context of user interactions. In this work, we argue that user queries should be an important contextual cue for sequential recommendation. First, we propose a new query-aware sequential recommendation setting, i.e., incorporating explicit user queries to model users' intent. Next, we propose a model, namely Query-SeqRec, to (1) incorporate query information into user behavior sequences; and (2) improve model generalization ability using query-item co-occurrence information. Last, we demonstrate the effectiveness of incorporating query features in sequential recommendation on three datasets. | 710,616 |
Title: Human Latent Metrics: Perceptual and Cognitive Response Correlates to Distance in GAN Latent Space for Facial Images
Abstract: ABSTRACT Generative adversarial networks (GANs) generate high-dimensional vector spaces (latent spaces) that can interchangeably represent vectors as images. Advancements have extended their ability to computationally generate images indistinguishable from real images such as faces, and more importantly, to manipulate images using their inherent vector values in the latent space. This interchangeability of latent vectors has the potential to quantify not only the distance in the latent space, but also the human perceptual and cognitive distance toward images, that is, how humans perceive and recognize images. However, it is still unclear how distance in the latent space correlates with human perception and cognition. Our studies investigated the relationship between latent vectors and human perception or cognition through psycho-visual experiments that manipulate the latent vectors of face images. In the perception study, a change-perception task was used to examine whether participants could perceive visual changes in face images before and after moving an arbitrary distance in the latent space. In the cognition study, a face recognition task was utilized to examine whether participants could recognize a face as the same, even after moving an arbitrary distance in the latent space. Our experiments show that the distance between face images in the latent space correlates with human perception and cognition of visual changes in face imagery, which can be modeled with a logistic function. By utilizing our methodology, it will be possible to interchangeably convert between distance in the latent space and the metric of human perception and cognition, potentially leading to image processing that better reflects human perception and cognition. | 710,617 |
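The reported logistic relationship between latent-space distance and the probability of perceiving or recognizing a change can be written as a standard two-parameter psychometric function. The slope k and midpoint x0 below are generic fit parameters, not the authors' reported values:

```python
import math

def detection_probability(distance, k, x0):
    """Logistic psychometric function linking latent-space distance to the
    probability that observers report a change; k (slope) and x0 (midpoint)
    are fit per task (perception vs. cognition)."""
    return 1.0 / (1.0 + math.exp(-k * (distance - x0)))
```

Inverting this function at a target probability (e.g., 0.5) recovers the latent-space distance corresponding to a perceptual threshold, which is the conversion the authors propose.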
Title: Causal Intervention for Sentiment De-biasing in Recommendation
Abstract: ABSTRACTBiases and de-biasing in recommender systems have received increasing attention recently. This study focuses on a newly identified bias, i.e., sentiment bias, which is defined as the divergence in recommendation performance between positive users/items and negative users/items. Existing methods typically employ a regularization strategy to eliminate the bias. However, blindly fitting the data without modifying the training procedure would result in a biased model, sacrificing recommendation performance. In this study, we resolve the sentiment bias with causal reasoning. We develop a causal graph to model the cause-effect relationships in recommender systems, in which the sentiment polarity presented by review text acts as a confounder between user/item representations and observed ratings. The existence of confounders inspires us to go beyond conditional probability and embrace causal inference. To this end, we use causal intervention in model training to remove the negative effect of sentiment bias. Furthermore, during model inference, we adjust the prediction score to produce personalized recommendations. Extensive experiments on five benchmark datasets validate that the deconfounded training can remove the sentiment bias and that the inference adjustment helps improve recommendation accuracy. | 710,618 |
Title: VR Distance Judgments are Affected by the Amount of Pre-Experiment Blind Walking
Abstract: ABSTRACT Many studies have found that people can accurately judge distances in the real world while they underestimate distances in virtual reality (VR). This discrepancy negatively impacts some VR applications. Direct blind walking is a popular method of measuring distance judgments in which participants view a target and then walk to it while blindfolded. To ensure that participants are comfortable with blindfolded walking, researchers often require participants to practice blind walking beforehand. We call this practice "pre-experiment blind walking" (PEBW). Few studies report details about their PEBW procedure, and little research has been conducted on how PEBW might affect subsequent distance judgments. This between-participants study varied the amount of PEBW and had participants perform distance judgments in VR. The results show that a longer PEBW causes less distance underestimation. This work demonstrates the importance of clearly reporting PEBW procedures and suggests that a consistent procedure may be necessary to reliably compare direct blind walking research studies. | 710,619 |
Title: Long-tail Mixup for Extreme Multi-label Classification
Abstract: ABSTRACTExtreme multi-label classification (XMC) aims at finding multiple relevant labels for a given sample from a huge label set at the industrial scale. The XMC problem inherently poses two challenges: scalability and label sparsity - the number of labels is too large, and labels follow a long-tail distribution. To resolve these problems, we propose a novel Mixup-based augmentation method for long-tail labels, called TailMix. Building upon the partition-based model, TailMix utilizes the context vectors generated from the label attention layer. It first selectively chooses two context vectors using the inverse propensity score of labels and the label proximity graph representing the co-occurrence of labels. Using the two context vectors, it augments new samples with long-tail labels to improve the accuracy on long-tail labels. Despite its simplicity, experimental results show that TailMix consistently outperforms other augmentation methods on three benchmark datasets, especially for long-tail labels in terms of two metrics, P@k and PSP@k. | 710,620 |
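TailMix's core operation, interpolating two label-attention context vectors, is plain Mixup applied to hidden representations. The partner-selection helper below is a toy stand-in for the paper's combined inverse-propensity and label-proximity-graph selection, and the names are illustrative assumptions:

```python
def tailmix(ctx_a, ctx_b, lam):
    """Interpolate two label-attention context vectors (Mixup on hidden
    representations rather than raw inputs)."""
    return [lam * a + (1 - lam) * b for a, b in zip(ctx_a, ctx_b)]

def pick_tail_partner(propensities):
    """Toy selection step: favor the rarest (lowest-propensity) label,
    mirroring the inverse-propensity weighting in TailMix."""
    return min(range(len(propensities)), key=lambda i: propensities[i])
```

The mixed vector is treated as a new training sample carrying the long-tail label, which is how the method densifies supervision for rare labels.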
Title: The influence of body orientation relative to gravity on egocentric distance estimates in virtual reality
Abstract: ABSTRACTVirtual reality head-mounted displays (VR-HMD) can immerse individuals in a variety of virtual environments while accounting for head orientation to update the virtual environment. VR-HMDs also allow users to explore environments while maintaining different body positions (e.g., sitting and lying down). How discrepancies between real-world body position and the virtual environment impact the perception of virtual space, or how a visual upright with incongruent changes in head orientation affects space perception within VR, has not been fully defined. In this study we sought to further understand how changes in head-on-body orientation (lying supine, lying prone, lying on the left side, and being upright) while a steady visual virtual upright is maintained can affect the perception of distance. We used a new psychophysics perceptual-matching approach with two different probe configurations ("L" and "T" shape) to extract distance perception thresholds in the four previously mentioned positions at egocentric distances of 4, 5, and 6 virtual meters. Our results indicate that changes in observer orientation with respect to gravity impact the perception of distances within a virtual environment when it is maintained at a visual upright. We found significant differences between perceived distances in the upright condition compared to the prone and left-side lying positions. Additionally, we found that distance perception results were impacted by differences in probe configuration. Our results add to a body of work on how changes in head-on-body orientation can affect the perception of distance, while stressing that more research is still needed to fully understand how these changes with respect to gravity affect the perception of space within virtual environments. | 710,621 |
Title: OpenHGNN: An Open Source Toolkit for Heterogeneous Graph Neural Network
Abstract: ABSTRACTHeterogeneous Graph Neural Networks (HGNNs), as a kind of powerful graph representation learning method on heterogeneous graphs, have attracted increasing attention from many researchers. Although several existing libraries support HGNNs, they provide only the most basic models and operators. Building and benchmarking various downstream tasks on HGNNs with them is still painful and time-consuming. In this paper, we introduce OpenHGNN, an open-source toolkit for HGNNs. OpenHGNN defines a unified and standard pipeline for training and testing, which allows users to run a model on a specific dataset with just one command line. OpenHGNN has integrated 20+ mainstream HGNNs and 20+ heterogeneous graph datasets, which can be used for various advanced tasks, such as node classification, link prediction, and recommendation. In addition, thanks to the modularized design of OpenHGNN, it can be extended to meet users' customized needs. We also release several novel and useful tools and features, including a leaderboard, autoML, design space, and visualization, to provide users with better usage experiences. OpenHGNN is an open-source project, and the source code is available at https://github.com/BUPT-GAMMA/OpenHGNN. | 710,622 |