Columns: text (string, lengths 70 to 7.94k); __index_level_0__ (int64, 105 to 711k)
Title: A primal-dual algorithm for the minimum power partial cover problem Abstract: In this paper, we study the minimum power partial cover problem (MinPPC). Suppose X is a set of points and $${\mathcal {S}}$$ is a set of sensors on the plane; each sensor can adjust its power, and the covering range of a sensor s with power p(s) is a disk of radius r(s) satisfying $$p(s)=c\cdot r(s)^\alpha $$ . Given an integer $$k\le |X|$$ , the MinPPC problem is to determine the power assignment on every sensor such that at least k points are covered and the total power consumption is minimized. We present a primal-dual algorithm for MinPPC with approximation ratio at most $$3^{\alpha }$$ . This ratio coincides with the best known ratio for the minimum power full cover problem, and improves the previous ratio of $$(12+\varepsilon )$$ for MinPPC, which was obtained only for $$\alpha =2$$ .
116,264
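A minimal Python sketch of the power model in the abstract above (the function and point layout are illustrative, not taken from the paper): covering a set of points with one sensor requires the disk radius to reach the farthest point, so the cost is p = c * r**alpha with r the largest distance.

```python
import math

def sensor_power(sensor, points, c=1.0, alpha=2.0):
    """Power a sensor at `sensor` (an (x, y) tuple) needs to cover all `points`:
    the covering range is a disk, so the radius must reach the farthest point,
    and the cost follows p = c * r**alpha (illustrative helper, not the paper's algorithm)."""
    if not points:
        return 0.0
    r = max(math.dist(sensor, p) for p in points)
    return c * r ** alpha

# One sensor at the origin covering three points with alpha = 2 costs c * 2**2 = 4.
print(sensor_power((0.0, 0.0), [(1.0, 0.0), (0.0, 2.0), (1.0, 1.0)]))
```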
Title: Prediction of the nonsampled units in survey design with the finite population using Bayesian nonparametric mixture model Abstract: In the sampling framework, combining information on the sizes of the nonsampled units can help attain better estimators through semiparametric models. Sometimes the design variables that play an important role in the sampling mechanism are not available, so predictions need to be adjusted for the effect of selection. To infer the population mean in a sample survey, we study a Bayesian nonparametric model with a Dirichlet process prior, treating the inverse-probability weights as the only available information. Specifically, we present a Bayesian nonparametric mixture of regression models for the survey outcomes with the weights as predictors and impute the nonsampled units. Finally, the model-based estimators derived from the Bayesian (parametric and nonparametric) methods are compared with the design-based estimator in simulation studies.
116,281
Title: Modelling insurance losses using a new beta power transformed family of distributions Abstract: Actuaries are often in search of new distributions suitable for modeling financial and insurance losses. In this work, we propose a new family of distributions, called a new beta power transformed family of distributions. A special sub-model of the proposed class, called a new beta power transformed Weibull, suitable for modeling heavy tailed data in the scenario of actuarial statistics and finance, is considered in detail. The proposed distribution possesses desirable properties relevant to actuarial sciences. Expressions for the actuarial quantities such as value at risk, tail value at risk, tailed variance and tailed variance premium are derived. A simulation study is conducted to evaluate the behavior of the proposed distribution in actuarial sciences. Some distributional properties with estimation of parameters using maximum likelihood method are also discussed. Finally, a practical application of the proposed model to insurance data is presented.
116,363
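As a worked illustration of two of the actuarial quantities named above, the sketch below fits a plain Weibull to simulated losses and computes the value at risk (a quantile of the loss distribution) and an empirical tail value at risk (the mean loss beyond that quantile). The paper's new beta power transformed Weibull is not implemented here; the plain Weibull and all numbers are stand-ins.

```python
import numpy as np
from scipy.stats import weibull_min

# Simulated heavy-ish losses as a stand-in for insurance data.
rng = np.random.default_rng(0)
losses = weibull_min.rvs(c=0.9, scale=10.0, size=5000, random_state=rng)
shape, loc, scale = weibull_min.fit(losses, floc=0)

q = 0.95
var_q = weibull_min.ppf(q, shape, loc=loc, scale=scale)   # value at risk at level q
tail = losses[losses > var_q]
tvar_q = tail.mean()                                      # empirical tail value at risk
print(f"VaR_{q:.2f} = {var_q:.2f}, TVaR_{q:.2f} ~ {tvar_q:.2f}")
```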
Title: Two-parameter ridge estimation in seemingly unrelated regression models Abstract: Seemingly unrelated regression (SUR) models are applied when several linear regression equations are investigated at the same time. To reduce the influence of multicollinearity in SUR models, the one-parameter ridge (Ridge-1) solution was proposed and discussed by several researchers. As a generalization of the Ridge-1 solution, the two-parameter ridge (Ridge-2) solution is presented for SUR models with a multicollinearity problem. Simulations are performed to compare the proposed solution with the ordinary generalized least squares (GLS) and Ridge-1 solutions. Lastly, the proposed solution is applied to data on the effects of chronic renal failure.
116,399
Title: Application of AHP method for project selection in the context of sustainable development Abstract: Any activity of an industrial enterprise has an impact on the economic, social and environmental spheres. At present, the main business goal is to generate and maximize profit. The 2030 Agenda for Sustainable Development, however, shifts the focus from maximizing profit alone to how profit is generated and what impact the profit-generating activity has on society and the environment. Every activity in an industrial enterprise is preceded by decision-making. The multi-criteria nature of decision-making in the context of sustainable development (with its economic, social and environmental criteria) is a prerequisite for the application of multi-criteria optimization methods. The basic precondition for applying exact methods in decision-making is a manager who has knowledge of the exact methods and an interest in their application. When decisions rest on theories and approaches that are not exact, decision-makers are unable to make rational decisions. The results of several surveys of managers' decision-making show that managers decide on complex issues based on subjective reasoning or intuition (Liebowitz, in: Forum: intuition-based decision-making: the other side of analytics. https://analytics-magazine.org/forum-intuition-based-decision-making-the-other-side-of-analytics, 2015). The authors of the article focus on decision-making in the selection of a production project based on the criteria of sustainable development. The aim of the paper is to present the application of an exact method, the Analytic Hierarchy Process, to this decision problem.
116,429
Title: A multiple feature fusion framework for video emotion recognition in the wild Abstract: Human emotions can be recognized from facial expressions captured in videos. It is a growing research area in which many have attempted to improve video emotion detection in both lab-controlled and unconstrained environments. While existing methods show decent recognition accuracy on lab-controlled datasets, they deliver much lower accuracy in real-world uncontrolled environments, where a variety of challenges need to be addressed, such as variations in illumination, head pose, and individual appearance. Moreover, automatically identifying the key frames containing the expression in real-world videos is another challenge. In this article, to overcome these challenges, we propose a video emotion recognition method based on multiple feature fusion. First, uniform local binary pattern (LBP) and scale-invariant feature transform features are extracted from each frame in the video sequences. By applying a random forest classifier, all of the static frames are then labelled with the related emotion class. In this way, the key frames can be automatically identified, including neutral and other expressions. Furthermore, from the key frames, a new geometric feature vector and the LBP from three orthogonal planes are extracted. To further improve robustness, audio features are extracted from the video sequences as an additional dimension to augment visual facial expression analysis. The audio and visual features are fused through a kernel multimodal sparse representation. Finally, the corresponding emotion labels can be assigned to the video sequences, with a multimodal quality measure specifying the quality of each modality and its role in the decision. The results on both the Acted Facial Expressions in the Wild and MMI datasets demonstrate that the proposed method outperforms several counterpart video emotion recognition methods.
116,493
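A minimal sketch of the first feature step described above: extracting a uniform LBP histogram from a single grayscale frame with scikit-image. The random array stands in for a real video frame, and the downstream fusion steps of the paper are not reproduced.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_frame, P=8, R=1.0):
    """Uniform LBP codes for one grayscale frame, summarized as a normalized histogram."""
    codes = local_binary_pattern(gray_frame, P, R, method="uniform")
    n_bins = P + 2  # uniform patterns plus one bin for non-uniform codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

frame = np.random.default_rng(1).integers(0, 256, size=(120, 160)).astype(np.uint8)
print(lbp_histogram(frame))
```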
Title: Weighted thresholding homotopy method for sparsity constrained optimization Abstract: We propose in this paper a novel weighted thresholding method for the sparsity-constrained optimization problem. By reformulating the problem equivalently as a mixed-integer programming, we investigate the Lagrange duality with respect to an $$l_1$$ -norm constraint and show the strong duality property. Then we derive a weighted thresholding method for the inner Lagrangian problem, and analyze its convergence. In addition, we give an error bound of the solution under some assumptions. Further, based on the proposed method, we develop a homotopy algorithm with varying sparsity level and Lagrange multiplier, and prove that the algorithm converges to an L-stationary point of the primal problem under some conditions. Computational experiments show that the proposed algorithm is competitive with state-of-the-art methods for the sparsity-constrained optimization problem.
116,512
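The abstract above revolves around thresholding under a sparsity constraint. The sketch below shows plain hard thresholding, the unweighted baseline operation: project a vector onto the constraint ||x||_0 <= s by keeping its s largest-magnitude entries. The paper's method weights entries before thresholding and embeds the step in a homotopy scheme, neither of which is reproduced here.

```python
import numpy as np

def hard_threshold(x, s):
    """Project x onto the sparsity constraint ||x||_0 <= s by keeping the s
    largest-magnitude entries (plain hard thresholding; the weighted variant
    in the paper rescales entries before this step)."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    out[keep] = x[keep]
    return out

print(hard_threshold([0.1, -3.0, 0.5, 2.2, -0.05], s=2))  # keeps -3.0 and 2.2
```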
Title: Weight fused functional sliced average variance estimation Abstract: Selecting the number of slices is a key step in implementing the sliced average variance estimation (SAVE) method. To our knowledge, there is no widely accepted method for this choice in practical applications, and an incorrect number of slices may lead to inaccurate conclusions. In traditional multivariate sufficient dimension reduction, it is common to adopt a fused approach that combines the kernel operators of SAVE computed with various numbers of slices to address this problem. Due to the infinite dimension of functional data, the fused approach cannot be directly applied to functional SAVE (FSAVE). Hence we propose a novel fused approach based on a weighted kernel operator of FSAVE, named weight fused FSAVE (WFFSAVE). Simulation studies show that the WFFSAVE method performs better than FSAVE. We also apply the new method to the Tecator data set.
116,522
Title: Comparative study on electrocardiogram encryption using elliptic curves cryptography and data encryption standard for applications in Internet of medical things Abstract: Electrocardiogram (ECG) monitoring systems are widely used in tele-cardiology healthcare and Internet of Medical Things applications. In order to guarantee the rapidity, stability, and accuracy of ECG signals during acquisition and transmission, data delay, loss, and corruption must be considered in dynamic ECG monitoring systems. Hence, data encryption, as a key processing technique, must be carefully selected in such applications. In this article, we comparatively study the data encryption standard (DES) and elliptic curve cryptography (ECC) for applications in remote ECG monitoring. The study reveals that although both encryption methods can effectively protect the integrity of the data and ensure the operation of the ECG monitoring system, the ECC method performs better than DES, especially in terms of speed. In conclusion, ECC is recommended as the candidate encryption method for remote ECG monitoring systems.
116,589
Title: Adaptive Bayesian prediction of reliability based on degradation process Abstract: For long-running electric devices used in satellites, accurate reliability prediction is crucial in engineering. The reliability of these devices is often directly related to the degradation of a performance characteristic. However, the problem of predicting the reliability of these devices based on a subset chosen adaptively from the real-time data flow has received scant attention in academic research. In this paper, an adaptive Bayesian conditional c-optimal criterion is proposed to select observations from the real-time data flow effectively. The conjugate prior for the model parameters, denoted MNG, is derived. Then, based on the Bayesian conditional c-optimal criterion and the MNG conjugate prior, an approach to choosing a subset of the data that makes the prediction robust is suggested. Based on simulated data from an emulator created by Beijing Spacecrafts, an illustration and several simulations are carried out to study the performance of the proposed method for predicting the reliability of the devices from 16 to 20 years. The results show that the proposed method with the MNG conjugate prior performs better than the local c-optimal method and the Bayesian method with Jeffreys' non-informative prior.
116,639
Title: Solving The Clustered Traveling Salesman Problem With d-Relaxed Priority Rule Abstract: The clustered traveling salesman problem with a prespecified order on the clusters, a variant of the well-known traveling salesman problem, is studied in the literature. In this problem, delivery locations are divided into clusters with different urgency levels and more urgent locations must be visited before less urgent ones. However, this could lead to an inefficient route in terms of traveling cost. This priority-oriented constraint can be relaxed by a rule called d-relaxed priority that provides a trade-off between transportation cost and emergency level. Given a positive integer d, at any point along the route, the d-relaxed rule allows the vehicle to visit locations with priority p, p+1, ..., p+d before visiting all locations in class p, where p is the highest priority class among all unvisited locations. Our research proposes two approaches to solve the problem with the d-relaxed priority rule. We improve the mathematical formulation proposed in the literature to construct an exact solution method. A metaheuristic method based on the framework of iterated local search with problem-tailored operators is also introduced to find approximate solutions. Experimental results show the effectiveness of our methods.
116,670
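A small checker for the d-relaxed priority rule as stated in the abstract above (the paper's solution approaches are not reproduced): a visiting order is feasible if, at every step, the next location's priority class lies within d of the most urgent class still unvisited.

```python
def respects_d_relaxed(order, priority, d):
    """Check whether a visiting order obeys the d-relaxed priority rule:
    at every step, the next visited location must have priority in
    {p, p+1, ..., p+d}, where p is the most urgent (smallest) priority
    among the locations not yet visited."""
    unvisited = set(order)
    for loc in order:
        p = min(priority[u] for u in unvisited)
        if priority[loc] > p + d:
            return False
        unvisited.remove(loc)
    return True

prio = {"a": 1, "b": 1, "c": 2, "d": 3}
print(respects_d_relaxed(["a", "c", "b", "d"], prio, d=1))  # True
print(respects_d_relaxed(["c", "d", "a", "b"], prio, d=1))  # False: "d" has priority 3 > 1 + 1
```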
Title: Lung nodule detection and classification from Thorax CT-scan using RetinaNet with transfer learning. Abstract: Lung malignancy is one of the most common causes of death in the world, caused by malignant lung nodules that are commonly diagnosed radiologically by radiologists. Unfortunately, the continuous flow of medical images in hospitals drives radiologists to prioritize quantity over quality. This working condition allows misinterpretation, especially of ambiguous anatomical structures that resemble lung nodules (for example, enlarged lymph nodes), resulting in decreased sensitivity and accuracy of malignant lung nodule detection and in late diagnoses that can be fatal to patients. To address the problem, this paper proposes a novel lung nodule detection and classification model using a one-stage detector called "I3DR-Net." The model is formed by combining an Inflated 3D ConvNet (I3D) backbone, pre-trained on natural images, with a feature pyramid network applied to a multi-scale 3D thorax computed tomography (CT) scan dataset. I3DR-Net produces remarkable results on the lung nodule texture detection task, with mAP of 49.61% and 22.86% and area under the curve (AUC) of 81.84% and 70.36% for the public and private datasets, respectively. Additionally, I3DR-Net outperforms the previous state-of-the-art Retina U-Net and U-FRCNN+ in mean average precision (mAP) by 7.9% and 7.2% (57.71% vs. 49.8% vs. 50.5%) on the malignant nodule detection and classification task.
116,711
Title: Echo state network based on improved fruit fly optimization algorithm for chaotic time series prediction Abstract: Chaos is a common phenomenon in nature and society, and chaotic systems affect many fields. It is of great significance to find the regularity of chaotic time series generated by chaotic systems. Chaotic systems have extremely complex dynamic characteristics and unpredictability. Traditional prediction methods for chaotic time series have some problems, such as low accuracy, slow convergence speed and complex model structure. In this paper, an echo state network prediction method based on an improved fruit fly optimization algorithm is proposed for chaotic time series. Phase space reconstruction is introduced for the prediction of chaotic time series: the C–C method is used to determine the delay time, and the embedding dimension is obtained by the G–P method. After reconstructing the phase space of the chaotic time series, an improved echo state network is proposed as the prediction model. In order to improve the prediction accuracy, an improved fruit fly optimization algorithm is proposed to optimize the parameters of the prediction model. Three typical chaotic time series, including Lorenz, Mackey–Glass, and short-term wind speed, are selected as simulation objects. The simulation results show that the proposed prediction method achieves good prediction indicators. At the same time, the results of the reliability and Pearson's tests also show a better predictive effect.
116,726
Title: Cosine adapted modified whale optimization algorithm for control of switched reluctance motor Abstract: The whale optimization algorithm (WOA) imitates the social behaviour of humpback whales and is inspired by their bubble-net hunting strategy. In the present study, a cosine adapted modified whale optimization algorithm (CamWOA), a modified version of WOA, is proposed in which a cosine function is incorporated for the selection of the control parameter "d" that governs the position of whales during the optimization process. Correction factors are also employed to modify the movement of search agents during the search. These changes provide a proper balance between the exploration and exploitation phases in the CamWOA technique. The performance of CamWOA is analyzed by testing it on a set of benchmark functions and comparing it with other state-of-the-art algorithms. It is observed that CamWOA outperforms other state-of-the-art metaheuristic algorithms on the majority of benchmark functions. The efficiency of CamWOA is also evaluated by solving a multiobjective engineering problem pertaining to the control of a switched reluctance motor. The simulation results confirm that CamWOA yields very promising and competitive results compared to WOA and other metaheuristic optimization algorithms.
116,729
Title: Capacitated vehicle routing problem on line with unsplittable demands Abstract: In this paper we study the capacitated vehicle routing problem. An instance of capacitated vehicle routing problem consists of a set of vertices with demands in a metric space, a specified depot, and a capacity bound C. The objective is to find a set of tours originating at the depot that cover all the demands, such that the capacity of each tour does not exceed C and the sum of the tour lengths is minimized. For the case that the metric space is a line and the demands are unsplittable, we provide a $$\frac{5}{3}$$ -approximation algorithm. An instance is given to show that the bound is tight.
116,744
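The line setting above admits a short illustrative heuristic, sketched below under the assumption that every single demand fits in one tour. It is not the paper's 5/3-approximation: it simply serves each side of the depot from the farthest point inward, opening a new tour whenever the capacity C would be exceeded, and charges each tour twice the distance to its farthest point.

```python
def line_cvrp_heuristic(points, demand, C, depot=0.0):
    """Illustrative heuristic for capacitated routing on a line (not the paper's
    5/3-approximation): serve each side of the depot from the farthest point
    inward, opening a new tour whenever capacity C would be exceeded.
    Assumes each individual demand fits in one tour."""
    total = 0.0
    for side in (+1, -1):
        group_load, group_far = 0, 0.0
        for x in sorted((p for p in points if (p - depot) * side > 0),
                        key=lambda p: -abs(p - depot)):
            if group_load + demand[x] > C:
                total += 2 * group_far          # close the current tour
                group_load, group_far = 0, 0.0
            if group_load == 0:
                group_far = abs(x - depot)      # farthest point of the new tour
            group_load += demand[x]
        total += 2 * group_far                  # close the last tour on this side
    return total

pts = [-4.0, -1.0, 2.0, 3.0, 5.0]
dem = {p: 1 for p in pts}                       # unit demands for the example
print(line_cvrp_heuristic(pts, dem, C=2))       # 22.0
```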
Title: A simulated multi-objective model for flexible job shop transportation scheduling Abstract: This paper proposes a new dynamic algorithm based on simulation approach and multi-objective optimization to solve the FJSP with transportation assignment. The objectives considered in scheduling jobs and transportation tasks in a flexible job shop manufacturing system include makespan, robot travel distance, time difference with due date and critical waiting time. The results obtained from the computational experiments have shown that the proposed approach is efficient and competitive.
116,822
Title: Optimal hydropower station dispatch using quantum social spider optimization algorithm Abstract: In this article, a new quantum social spider optimization (QSSO) algorithm is proposed. In the QSSO algorithm, we introduce a quantum-bit encoding approach into social spider optimization (SSO), which serves as the evolution method of the population space. For the encoding of individuals, the probability amplitude expression of a quantum bit is applied to describe the position of individuals, by which an individual's position can be expressed as a superposition of multiple states. In this way, the population diversity and the global searching capability of the SSO algorithm are enhanced. The QSSO algorithm is used to optimize hydropower station dispatch, and the calculation results show that the QSSO algorithm has fast convergence, few tuning parameters, high calculation accuracy and stability; it is simple, easy to implement, and has strong global search capability.
116,869
Title: Calibration estimation of mean by using double use of auxiliary information Abstract: In survey sampling, when the study and auxiliary variables are reasonably correlated, the ranks of the auxiliary variable are also correlated with the study variable, and using these ranks as an additional mechanism increases the accuracy of an estimator. In the current study, an enhanced estimator of the finite population mean is proposed that uses the ancillary information in the form of the ranks of the auxiliary variable in a stratified sampling design. Simulated as well as empirical evidence is provided to confirm the efficiency of the proposed estimators.
116,899
Title: Improved approximated median filter algorithm for real-time computer vision applications Abstract: The median filter is one of the predominant filters used to suppress impulse noise. Its simplicity and ability to preserve edges have led to extensive application in image processing and computer vision. However, challenges such as the moderate to high running time of the standard median filter algorithm, and its relatively poorer performance when the image is highly corrupted with impulse noise, have led to the design of several variations of the algorithm. One set of variations concentrates on generating quality outputs, while the other focuses on reducing running time. Among the variants targeting the running time of the median filter is the DP approximated median filter. However, DP performs poorly when images are corrupted with moderate to high levels of noise. This paper therefore proposes Improved Approximated Median Filtering Algorithms (IAMFA-I and IAMFA-II) based on DP to generate better output. The introduction of a Mid-Value-Decision-Median in DP reduces the chances of selecting a corrupted pixel for the denoised image. Experimental results indicate that IAMFA-II has a better running time and equivalent output compared with DP, while IAMFA-I generates better output and has an equivalent running time compared with DP.
116,965
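For reference, the baseline that the approximated variants above try to accelerate is the standard median filter; a minimal example with SciPy on a synthetic image corrupted by impulse (salt-and-pepper) noise:

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)

# Corrupt 10% of the pixels with salt-and-pepper (impulse) noise.
mask = rng.random(image.shape) < 0.10
image[mask] = rng.choice([0, 255], size=mask.sum()).astype(np.uint8)

# Standard 3x3 median filtering.
denoised = median_filter(image, size=3)
print(image[:3, :3], denoised[:3, :3], sep="\n")
```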
Title: Comparative study of L-1 regularized logistic regression methods for variable selection Abstract: L-1 regularized logistic regression is an important tool in data science dedicated to solving sparse generalized linear problems. The L-1 regularization is widely used for variable selection and estimation in generalized linear model analysis; this approach is intended to select the statistically important predictors. In this paper we compare the performance of some existing L-1 regularized logistic regression methods. The goal of our simulation study is the variable selection performance of regularized logistic regression in high dimensions. We consider three settings with varying n (number of observations) and p (number of predictors), and we support this comparison by conducting various simulated experiments taking into consideration the correlation structure of the design matrix.
116,971
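A minimal example of the kind of method being compared above: L-1 regularized logistic regression in a p > n setting with scikit-learn, where the nonzero coefficients give the selected variables. The data and regularization strength are arbitrary stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# High-dimensional setting (p > n) with a few truly informative predictors.
X, y = make_classification(n_samples=100, n_features=300, n_informative=10,
                           n_redundant=0, random_state=0)

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)

selected = np.flatnonzero(model.coef_)   # indices with nonzero coefficients
print(f"{selected.size} of {X.shape[1]} predictors selected:", selected)
```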
Title: Separation of Cartesian Products of Graphs Into Several Connected Components by the Removal of Vertices Abstract: A set $$S\subseteq V(G)$$ is a vertex k-cut in a graph G = (V(G), E(G)) if G - S has at least k connected components. The k-connectivity of G, denoted as $$\kappa _k(G)$$ , is the minimum cardinality of a vertex k-cut in G. We give several constructions of a set S such that $$(G\,\square \,H)-S$$ has at least three connected components. Then we prove that for any 2-connected graphs G and H, of order at least six, one of the defined sets S is a minimum vertex 3-cut in $$G\,\square \,H$$ . This yields a formula for $$\kappa _3(G\,\square \,H)$$ .
117,022
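A small illustration of the objects in the abstract above using NetworkX (the specific graphs and separating set are examples, not the paper's constructions): build the Cartesian product of two cycles and remove the neighbourhoods of two far-apart vertices, which isolates each of them and leaves at least three connected components.

```python
import networkx as nx

# Cartesian product of two cycles (a 6x7 torus grid).
G = nx.cycle_graph(6)
H = nx.cycle_graph(7)
GH = nx.cartesian_product(G, H)                      # vertices are pairs (g, h)

# Removing the neighbourhoods of two far-apart vertices isolates each of them,
# giving a vertex 3-cut (not necessarily a minimum one).
u, v = (0, 0), (3, 3)
S = set(GH.neighbors(u)) | set(GH.neighbors(v))

rest = GH.copy()
rest.remove_nodes_from(S)
print(len(S), nx.number_connected_components(rest))  # 8 vertices removed, 3 components
```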
Title: Assessing the performance of Canadian credit unions using a three-stage network bootstrap DEA Abstract: We use a novel three-stage network data envelopment analysis (DEA) model (based on production, intermediation, and revenue generation operations) with bootstrapping to evaluate the performance of 14 of the largest Canadian credit unions for the period 2007–2017 and the impact of various events on this performance. For each analysis, we contrast the results of the network DEA with those of a black box DEA. We show that the former provides more insightful information regarding the sources of the inefficiencies. We first found that while overall, the credit unions showed high-efficiency ratios, there is room for improvement, especially for the production sub-process. Moreover, the efficiency of individual credit unions is not consistent across the three different stages. Through the years 2007–2017, the credit union system exhibits a relatively sharp decline in its efficiency, mainly due to managerial issues at the revenue generation stage. Our analyses show that the various stages of Canadian credit union operations have been affected by the 2007–2009 financial crisis, the low policy interest rates that occurred in the following years, and the fact that in Canada, the federal government has eliminated the discount on the federal tax rate. The credit unions can improve their performance at the different stages by exploring Fintech Solutions to reduce their operating costs, seeking a better mix of loans and securities investments, and improving their interest and saving rate settings.
117,072
Title: Reliable machine prognostic health management in the presence of missing data Abstract: Prognostics and health management enables the prediction of future degradation and remaining useful life (RUL) for in-service systems based on historical and contemporary data, showing promise for many practical applications. One major challenge for prognostics is the common occurrence of missing values in time-series data, often caused by disruptions in sensor communication or hardware/software failures. Another major concern is that the sufficient prior knowledge of critical component degradation with a clear failure threshold is often not readily available in practice. These issues can significantly hinder the application of advanced signal and data analysis methods and consequently degrade the health management performance. In this article, we propose a novel data-driven framework that is capable of providing accurate and reliable predictions of degradation and RUL. In this approach, one-hot health state indicators are appended to the historical time series so that the model learns end-of-life automatically. A modified gate recurrent unit based variational autoencoder is employed in generative adversarial networks to model the temporal irregularity of the incomplete time series. Experiments on multivariate time-series datasets collected from real-world aeroengines verify that significant performance improvement can be achieved using the proposed model for robust long-term prognostics.
117,153
Title: On the exact distribution of generalized Hollander-Proschan type statistics Abstract: In this article we derive the exact null distribution of a class of Hollander-Proschan type statistics for testing exponentiality against NWBUE alternatives. The exact null distribution of a previously established statistic is obtained as a special case. Critical values are tabulated for different combinations of parameter values. The performance of the tests for small sample sizes has also been studied in various alternative scenarios. We analyse two well-known data sets in the light of our results.
117,237
Title: Surface approximation using GPU-based localized Fourier transform Abstract: The process of surface reconstruction has received considerable interest from researchers in recent years. Surface reconstruction plays a major role in many applications, such as visualization, geometric modeling and multiresolution analysis. In this paper, we present an approach that approximates a surface from a set of oriented points. Our algorithm combines the implicit surface and frequency-based frameworks to convert the indicator function of the surface into an implicit function from which we can extract the required surface. In contrast to traditional frequency-based approaches, our approach avoids voxelization of the input points and calculates the Fourier coefficients directly from the surface, which reduces the memory required for the voxel grid and eliminates the mathematical errors associated with voxelization. In addition, we exploit recent advances in GPUs embedded in graphics cards to accelerate the calculation of the Fourier coefficients. Finally, some examples are given to demonstrate the validity of the proposed technique.
117,311
Title: A community-based hierarchical user authentication scheme for Industry 4.0 Abstract: The vision of Industry 4.0 is characterized by the amalgamation of cyber-physical systems and the industrial Internet of Things. Such a complex ecosystem calls for novel security protocols and access-control mechanisms that allow smart devices to authorize external entities and grant them access rights without depending on centralized authentication entities. The work proposed in this article utilizes a community-based hierarchical approach to define the procedure for obtaining access rights in the Industry 4.0 ecosystem. The proposed scheme considers a hierarchy of authorizing devices that work in collaboration to provide access control of the smart end devices to the users. The adoption of a hierarchical structure ensures that access rights are granted only to those users that have passed multiple levels of successful authorization. The proposed scheme also combats any infringement of users' identities, since the authorizing entities involved in the proposed system work in close collaboration for user authentication. The proposed user authentication scheme has been validated using Burrows-Abadi-Needham (BAN) logic and is proved to be secure against a variety of security attacks.
117,457
Title: Research on the application of mobile payment security system based on the Internet of Things Abstract: Based on the relationship between the Internet of Things (IoT) and mobile payment, this article analyzes the security characteristics of IoT mobile payment. A security system for IoT mobile payment is constructed according to the security demands of mobile payment. Finally, the mobile payment security system is used to grade the current level of mobile payment security in China, which highlights the urgent problems in China's mobile payment security. The security system of mobile payment based on the IoT is a set of macro theoretical systems whose establishment can lay a theoretical foundation for branch theoretical research in the future. Meanwhile, it is applicable to the quantitative evaluation of the mobile payment security level in China, and has guiding significance in theory and practice.
117,525
Title: Multidomain security authentication for the Internet of things Abstract: With the rapid development of Internet of Things (IoT) information technology, the IoT has become a key infrastructure for telemedicine, smart homes, and intelligent transportation. One of the key technologies for these applications is information sharing and interoperation among multiple domains. However, the security and privacy issues of multidomain interaction face severe challenges. Aiming at these problems, a certificateless multidomain authentication technology is proposed for the IoT. Bilinear mapping and short signature technology are used to realize mutual authentication among entities in different domains, which protects secure data sharing and interoperability among domains. Certificateless multidomain authentication avoids the inherent security risks of key escrow in existing identity-based authentication, and it also solves the complex certificate management and network bottleneck problems of traditional certificate-based authentication. The proof and analysis show that the proposed scheme has good security and performance, supports anonymous authentication among entities, and is also suitable as a large-scale distributed network security alliance authentication mechanism.
117,586
Title: Composite quasi-likelihood for single-index models with massive datasets Abstract: Single-index models (SIMs) provide an efficient way of coping with high-dimensional nonparametric estimation problems and avoid the "curse of dimensionality." Many existing estimation procedures for SIMs are built on least squares loss, which is popular for its mathematical beauty but is non-robust to non-normal errors and outliers. This article addresses the question of both robustness and efficiency of estimation methods based on a new data-driven weighted linear combination of convex loss functions, instead of only quadratic loss, for SIMs. The optimal weights can be chosen to provide maximum efficiency, and these optimal weights can be estimated from data. As a specific example, we introduce a robust method that composites the least squares and least absolute deviation methods. Moreover, we extend the proposed method to the analysis of massive datasets via a divide-and-conquer strategy. The proposed approach significantly reduces the required primary memory, and the resulting estimate is as efficient as if the entire dataset were analyzed simultaneously. The asymptotic normality of the proposed estimators is established. Simulation studies and real data applications are conducted to illustrate the finite sample performance of the proposed methods.
117,677
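A minimal sketch of the divide-and-conquer idea mentioned above, with ordinary least squares standing in for the paper's composite quasi-likelihood estimator for single-index models: fit each block separately and average the block estimates, so only one block needs to be in memory at a time.

```python
import numpy as np

def divide_and_conquer_ols(X, y, n_blocks):
    """Divide-and-conquer estimation sketch: fit each block separately and
    average the block estimates (OLS here as a simple stand-in estimator)."""
    estimates = []
    for Xb, yb in zip(np.array_split(X, n_blocks), np.array_split(y, n_blocks)):
        estimates.append(np.linalg.lstsq(Xb, yb, rcond=None)[0])
    return np.mean(estimates, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 4))
beta = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ beta + rng.normal(size=100_000)
print(divide_and_conquer_ols(X, y, n_blocks=20))   # close to the full-data estimate
```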
Title: The seeding algorithm for spherical k-means clustering with penalties Abstract: Spherical k-means clustering, a known NP-hard variant of the k-means problem, has broad applications in data mining. In contrast to k-means, it aims to partition a collection of given data distributed on a spherical surface into k sets so as to minimize the within-cluster sum of cosine dissimilarity. In this paper, we introduce spherical k-means clustering with penalties and give a $$2\max \{2,M\}(1+M)(\ln k+2)$$ -approximation algorithm. Moreover, we prove that for spherical k-means clustering with penalties on separable instances, our algorithm achieves an approximation ratio of $$2\max \{3,M+1\}$$ with high probability, where M is the ratio of the maximal and the minimal penalty cost of the given data set.
117,693
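A rough stand-in for spherical k-means (not the paper's seeding algorithm or its penalized variant): normalize the data onto the unit sphere and run ordinary k-means, using the identity 1 - x·y = ||x - y||^2 / 2 for unit vectors to report the within-cluster cosine dissimilarity.

```python
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = normalize(rng.normal(size=(200, 5)))     # points on the unit sphere

# Ordinary k-means on normalized data as a crude spherical k-means substitute.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Re-normalized cluster means and the within-cluster cosine dissimilarity objective.
centers = normalize(np.vstack([X[labels == k].mean(axis=0) for k in range(3)]))
cost = np.sum(1.0 - np.einsum("ij,ij->i", X, centers[labels]))
print(cost)
```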
Title: Detection of the symmetry of model errors for partial linear single-index models Abstract: In this paper, we propose a k-th correlation coefficient estimator between the density function and the distribution function of the model errors in single-index models and partial linear single-index models. This k-th correlation coefficient estimator is used to test whether the density function of the true model error is symmetric or not. First, we propose a moment-based estimator of the k-th correlation coefficient and present its asymptotic results. Second, we consider statistical inference for the k-th correlation coefficient estimator by using the empirical likelihood method. The empirical likelihood statistic is shown to be asymptotically distributed as a centered chi-squared distribution with one degree of freedom. Simulation studies are conducted to examine the performance of the proposed estimators and test statistics.
117,753
Title: Appraisal of Two Arabic Opinion Summarization Methods: Statistical Versus Machine Learning Abstract: In this paper, we propose to tackle the challenge of digesting opinions in a news article. Our objective is to provide a summary of the opinions delivered by many sources about the main topic of an Arabic news article. In the literature, several studies have addressed issues related to opinion summarization; however, we noticed a lack of studies that address this problem in the Arabic language. We therefore propose two different methods: a multi-criteria method and a machine learning-based method. We proceed by comparing the results provided by the proposed methods for opinionated sentence extraction. The proposed methods were evaluated using two feature types: text-based features and opinion-specific features. Experimental results show the robustness of the machine learning method for extracting opinionated sentences with consideration of the two sets of features.
117,767
Title: Enhanced semantic representation of coaxiality with double material requirements Abstract: In order to improve the assemblability and reduce the cost of parts, material requirements can be applied to the toleranced feature and the datum feature of a coaxiality tolerance at the same time in practical applications. How to realize information exchange between heterogeneous systems, and how to read and explain the semantics of this complex tolerance specification automatically, are among the urgent problems to be solved in intelligent digital manufacturing. Ontologies described in the OWL 2 DL and SWRL languages have a strict mathematical basis in description logic, and the knowledge they represent has clear semantics and can be read and understood directly by computers. In addition, because of their consistency checking, knowledge reasoning, and flexible query abilities, they are among the most promising knowledge representation technologies. On the other hand, the semantic interpretation of tolerance specifications by the new-generation GPS theory based on the skin model has the advantages of clear semantics, no ambiguity, and consistency at different stages of the product life cycle. In this article, an enhanced semantic representation ontology for this kind of coaxiality tolerance specification is constructed based on the advantages of the new GPS theory and ontology technology, and the effectiveness of the method is verified and analyzed through a case study.
117,799
Title: The application of big data network crawler technology for architectural culture and environment protection Abstract: In today's big data environment, we collect information on the residential architecture of the Maonan Nationality in Huanjiang through web crawler technology, and further study the protection and renewal of the local architectural culture and environment in order to promote social civilization and sustainable development. With the continuous development and improvement of Internet technology, information acquisition based on big data web crawlers has become particularly important, as it can collect network information intelligently and efficiently. Taking architectural protection as the research object, the Maonan ethnic group in this area is chosen as the case of study. Based on web crawler technology, the protection and development of the local residential architectural culture and environment are analyzed. Through data collection and investigation with big data crawler technology, we find that the Maonan ethnic group has unique local customs and cultural details. Therefore, how to pursue sustainable development according to these unique local cultural characteristics, by means of information technology, will be the focus of further research.
117,829
Title: On periodic EGARCH models Abstract: This article deals with some probabilistic and statistical properties of a periodic exponential GARCH model, which is well suited to capture and describe, at the same time, three stylized facts very often encountered in financial time series, namely volatility clustering, asymmetry, and the periodicity exhibited by the autocovariance structure. Necessary and sufficient conditions for periodic stationarity, both second-order and strict, are established, and the closed form of the second-order moment is derived under these conditions. Moreover, the higher-order moment structure is studied. The conditional least squares (CLS) and quasi maximum likelihood estimators of the model are obtained, and the CLS estimation of the non-centered model is also established. The performance of these estimators is assessed by an intensive simulation study; moreover, an application to a real data set is provided.
117,906
Title: Identifying cross-lingual plagiarism using rich semantic features and deep neural networks: A study on Arabic-English plagiarism cases Abstract: The rapid growth of the digital era creates a need to preserve the academic originality of translated texts. Cross-lingual semantic similarity is concerned with identifying the degree of similarity of textual pairs written in two different languages and determining whether they are plagiarized. Unlike existing approaches, which exploit lexical and syntactic features for mono-lingual similarity, this work proposes rich semantic features extracted from cross-language textual pairs, including topic similarity, semantic role labeling, spatial role labeling, named entity recognition, bag-of-stop-words, bag-of-meanings for all terms, n-most frequent terms, n-least frequent terms, and different sets of their combinations. Knowledge-based semantic networks such as BabelNet and WordNet were used for computing semantic relatedness across different languages. This paper investigates two tasks, namely cross-lingual semantic text similarity (CL-STS) and plagiarism detection and judgement (PD), using deep neural networks, which, to the best of our knowledge, have not been applied before to STS and PD in a cross-lingual setting with such a combination of features. For this purpose, we propose different neural network architectures to solve the PD task either as binary classification (plagiarized/independently written) or as deeper classification (literally translated/paraphrased/summarized/independently written). Deep neural networks are also used as regressors to predict semantic connotations for the CL-STS task. Experiments were performed on a large amount of handmade data taken from multiple sources, consisting of 71,910 Arabic-English pairs. Overall, the experimental results show that using deep neural networks with rich semantic features achieves encouraging results in comparison to the baselines. The proposed classifiers and regressors show comparable performance across different neural network architectures, but both the binary and multi-class classifiers outperform the regressors. Finally, the evaluation and analysis of different sets of features reflects the superiority of deeper semantic features on the classification results.
117,910
Title: Delta-Sigma Modulation for Noise Cancellation in 5G-Compliant Network Abstract: The accurate reception of signals in wireless communication systems is a constant concern due to the noise, fading and interference that may occur on the transmission path. Given that 5G requirements raise the standards for high throughput and low bit error rate (BER), noise reduction must be performed to achieve reliable communication. Delta-sigma modulation (DSM) has proved to be a cost-efficient technique with reduced circuit complexity. Therefore, the performance of DSM is analyzed in a potential 5G-compliant system over Additive White Gaussian Noise, Nakagami-m and Weibull fading channels in the context of BER reduction. The main goal is to offer some landmarks regarding what to expect from the use of DSM in an OFDM-based communication system. The performance is evaluated for different sets of spreading sequences (Walsh-Hadamard and pseudo-noise), and several conclusions are highlighted based on the simulation results. Further improvements will be proposed for noise effect reduction and interference minimization in DSM-based 5G-compliant systems.
117,922
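A minimal first-order delta-sigma modulator, shown only to illustrate the DSM principle referenced above (the paper's 5G/OFDM simulation chain, spreading sequences, and fading channels are not reproduced): the integrator accumulates the error between the input and the 1-bit feedback, so the local average of the output bitstream tracks the input while quantization noise is pushed to high frequencies.

```python
import numpy as np

def first_order_dsm(x):
    """First-order delta-sigma modulator: integrate the error between the input
    and the previous 1-bit output, then quantize to +/-1 (noise shaping)."""
    y = np.zeros_like(x)
    integrator = 0.0
    for n, sample in enumerate(x):
        integrator += sample - (y[n - 1] if n > 0 else 0.0)
        y[n] = 1.0 if integrator >= 0 else -1.0
    return y

t = np.arange(2000)
x = 0.5 * np.sin(2 * np.pi * t / 200)      # slow sine with amplitude inside [-1, 1]
bits = first_order_dsm(x)

# The local average of the bitstream tracks the input signal.
print(np.mean(bits[:200]), np.mean(x[:200]))
```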
Title: Knowledge mapping of platform research: a visual analysis using VOSviewer and CiteSpace Abstract: This study offers a systematic review of academic research on platforms in management, business and economics. Using two visualization tools, VOSviewer and CiteSpace, we analyzed 619 articles on platform research and their 23,093 associated references from the Web of Science database. We identify the most impactful publications, authors, journals, institutions and countries in platform research. In addition, we explore the structures of the cited references, cited authors and cited journals to further understand the theoretical basis of platform research. Moreover, through evolution analysis with CiteSpace and co-occurrence analysis with VOSviewer, we trace the evolution of platform research and predict future development trends. The results jointly obtained from VOSviewer and CiteSpace enhance the understanding of platform research and enable future developments for both theorists and practitioners.
117,930
Title: Divide-and-conquer ensemble self-training method based on probability difference Abstract: Self-training methods can train an effective classifier by exploiting labeled and unlabeled instances. In the self-training process, high-confidence instances are usually selected iteratively and added to the training set for learning. Unfortunately, the structure information of high-confidence instances is so similar that it leads to local over-fitting during the iterations. In order to avoid this over-fitting and improve the classification performance of self-training methods, a novel divide-and-conquer ensemble self-training framework based on probability difference is proposed. Firstly, the probability difference of instances is calculated from the category probabilities of each classifier, and the low-fuzzy and high-fuzzy instances of each classifier are separated through the probability difference. Then, a divide-and-conquer strategy is adopted: the low-fuzzy instances determined by all the classifiers are directly labeled and the high-fuzzy instances are manually labeled. Finally, the labeled instances are added to the training set for the self-training iterations. This method expands the training set by selecting low-fuzzy instances with accurate structure information and high-fuzzy instances with more comprehensive structure information, and it effectively improves the generalization performance of the method. The method is well suited to noisy data sets and can capture structure information even with few labeled instances. The effectiveness of the proposed method is verified by comparative experiments on University of California Irvine (UCI) datasets.
117,932
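A minimal sketch of the probability-difference criterion described above: the gap between the two largest class probabilities per instance, with small gaps marking high-fuzzy (ambiguous) instances and large gaps marking low-fuzzy ones. The ensemble and labeling strategy of the framework are not reproduced.

```python
import numpy as np

def probability_difference(proba):
    """Difference between the two largest class probabilities per instance:
    a small value marks a high-fuzzy (ambiguous) instance, a large value a
    low-fuzzy (confident) one."""
    top2 = np.sort(proba, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]

proba = np.array([[0.90, 0.05, 0.05],     # low-fuzzy: confident prediction
                  [0.40, 0.35, 0.25]])    # high-fuzzy: ambiguous prediction
print(probability_difference(proba))      # [0.85, 0.05]
```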
Title: Don't forget about synchronization! Guidelines for using locks on graphics processing units Abstract: Heterogeneous devices are becoming necessary components of high performance computing infrastructures, and the graphics processing unit (GPU) plays an important role in this landscape. Given a problem, the established approach for exploiting the GPU is to design solutions that are parallel, without data dependencies. These solutions are then offloaded to the GPU's massively parallel capability. This design principle often leads to developing applications that cannot maximize GPU hardware utilization. The goal of this article is to challenge this common belief by empirically showing that allowing even simple forms of synchronization enables programmers to design solutions that admit conflicts and achieve better performance. Our experience shows that lock-based solutions to the k-means clustering problem, implemented using two well-known locking strategies, outperform the well-engineered and parallel KMCUDA on both synthetic and real datasets; with an average 8x faster runtimes across all locking algorithms on a synthetic dataset and 1.7x faster on a real world dataset across all locking algorithms (and max speedups of 71.3x and 2.75x, respectively). We validate these results using a more sophisticated clustering algorithm, namely fuzzy c-means and summarize our findings by identifying three guidelines to help make concurrency effective when programming GPU applications.
117,971
Title: Identification of the effects of the existing network properties on the performance of current community detection methods Abstract: Community detection has attracted much attention recently. Considering the effect of the network structure on the results of recent community detection methods is useful for establishing a probable performance trade-off for future algorithm selection. In this paper, we first offer a new three-level ranking method for small-world and scale-free networks to measure such properties more accurately and determine their influence on method performance. Thereafter, we examine 12 popular community detection methods and 43 related datasets. The results show that 24 datasets have small-world properties, 5 datasets have scale-free properties, and 9 datasets have both; however, 5 of them have no features of small-world or scale-free networks. It is also observable that 4 methods work better for networks with small-world features and 8 for both small-world and scale-free networks. Finally, we propose a flexible community detection method based on the detected network type.
117,998
Title: Modified ridge-type estimator for the gamma regression model Abstract: The modified ridge-type estimator has been shown to cushion the effects of multicollinearity in the linear regression model. Recent studies have shown the adverse effects of multicollinearity in the gamma regression model (GRM). We proposed a gamma modified ridge-type estimator to tackle this problem. We derived the properties of this estimator and conducted a theoretical comparison with some of the existing estimators. A real-life example and simulation study show that the proposed estimator gains an advantage over other estimators in terms of the mean square error.
118,000
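For orientation, the sketch below shows the ordinary ridge estimator in a linear model, beta_hat = (X'X + kI)^{-1} X'y, which is the basic shrinkage device that ridge-type estimators build on; the paper's gamma-regression modified ridge-type estimator itself is not implemented here.

```python
import numpy as np

def ridge_estimate(X, y, k):
    """Ordinary ridge estimator (linear-model analogue, for illustration only):
    beta_hat = (X'X + k I)^{-1} X'y, which shrinks coefficients and tames the
    variance inflation caused by multicollinearity."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=100)])   # nearly collinear columns
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=100)

print(ridge_estimate(X, y, k=0.0))   # OLS: coefficients inflated by collinearity
print(ridge_estimate(X, y, k=1.0))   # ridge: shrunken, more stable estimates
```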
Title: Bipartite communities via spectral partitioning Abstract: In this paper we are interested in finding communities with bipartite structure. A bipartite community is a pair of disjoint vertex sets S, $$S'$$ such that the number of edges with one endpoint in S and the other endpoint in $$S'$$ is “significantly more than expected.” This additional structure is natural to some applications of community detection. In fact, using other terminology, they have already been used to study correlation networks, social networks, and two distinct biological networks. In 2012 two groups independently [(1) Lee, Oveis Gharan, and Trevisan and (2) Louis, Raghavendra, Tetali, and Vempala] used higher eigenvalues of the normalized Laplacian to find an approximate solution to the k-sparse-cuts problem. In 2015 Liu generalized spectral methods for finding k communities to find k bipartite communities. Our approach improves the bounds on bipartite conductance (measure of strength of a bipartite community) found by Liu and also implies improvements to the original spectral methods by Lee et al. and Louis et al. We also highlight experimental results found when applying a practical algorithm derived from our theoretical results to three distinct real-world networks.
118,168
Title: Optimal Contract Selection For An Online Travel Agent And Two Hotels Under Price Competition Abstract: In this paper, we study two competing hotels selling alternative and substitutable rooms at fixed capacity through the same online travel agent (OTA). The hotels can adopt either the merchant mode or the agent mode to cooperate with the OTA. To investigate all possible situations, three scenarios are compared: (a) both hotels use the agent mode, (b) both hotels adopt the merchant mode, and (c) one hotel implements the agent mode and the other the merchant mode. We show that a unique Nash equilibrium can be achieved when market size and price competition satisfy certain conditions. We find that the OTA always prefers to use a single mode to cooperate with both hotels in any given scenario. By comparing the optimal options for the three players, we find that they will all choose the agent mode when the commission rate drops to a particular level.
118,173
Title: Fuzzy integrated salp swarm algorithm-based RideNN for prostate cancer detection using histopathology images Abstract: Prostate cancer is one of the most dreadful diseases in the medical field and is growing at a high rate among men. Given the alarming increase in reported cases, it is necessary to detect the cancer at an early stage. Various techniques have been introduced for effective prostate cancer detection using histopathology images. Accordingly, an automatic method is proposed for segmenting and classifying prostate cancer. This paper presents a prostate cancer detection method for histopathology images based on the proposed fuzzy-based salp swarm algorithm-based rider neural network (SSA-RideNN) classifier. At first, the input image is fed to the pre-processing step, and segmentation is then performed using color space transformation and thresholding. Once segmentation is performed, feature extraction is carried out by extracting multiple-kernel scale-invariant feature transform features along with texture features based on the local optimal oriented pattern descriptor, to improve classification accuracy. Finally, prostate cancer detection is done with the proposed fuzzy-based SSA-RideNN, which is developed by integrating a fuzzy approach with SSA-RideNN. The performance of the proposed fuzzy-based SSA-RideNN is analyzed using sensitivity, specificity, and accuracy. The proposed fuzzy-based SSA-RideNN produces a maximum accuracy of 0.9190, a maximum sensitivity of 0.9084, and a maximum specificity of 0.9, indicating its superiority.
118,240
Title: A PPA parity theorem about trees in a bipartite graph Abstract: We prove a new PPA parity theorem: Given a bipartite graph G with bipartition (A, B) where B is a set of even-degree vertices, and given a tree T* of G containing all of A, such that any vertex of B in T* has degree 2 in T* and such that each vertex of A which is not a leaf of T* is met by an odd number of edges not in T*, then there is an even number of trees of G containing all of A, with degree 0 or 2 at each vertex of B and with the same degree as T* at each vertex of A. This theorem generalizes Berman's generalization of Thomason's generalization of Smith's Theorem.
118,253
Title: Permeability prediction of petroleum reservoirs using stochastic gradient boosting regression Abstract: Reservoir permeability is a crucial parameter for reservoir characterization and the estimation of current and future production from hydrocarbon reservoirs. Permeability can be estimated conventionally from approaches such as core analysis and well-test data, which are time-consuming and expensive. Many scientists have tried to estimate permeability from nuclear magnetic resonance (NMR) logs using complex mathematical equations that may yield imprecise approximations of the permeability values. Gradient boosting generates additive regression models by successively fitting a simple base learner to the current pseudo-residuals using least squares at every iteration. The execution speed and accuracy of gradient boosting can be considerably enhanced by applying a randomization process, which also improves robustness against over-fitting of the base learner. The novelty of the current study is the development of a stochastic gradient boosting (SGB) regression model to predict the permeability of petroleum reservoirs from well logs. In addition, benchmark machine learning techniques are used to predict reservoir permeability from well logs. The correlation coefficient (R), relative absolute error (RAE), mean absolute error (MAE), root mean squared error (RMSE), and root relative squared error (RRSE) are used to check the overall fit between measured and predicted permeability. It is found that the stochastic gradient boosting model achieves overall better performance than the other models.
118,287
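A minimal example of stochastic gradient boosting regression with scikit-learn, using synthetic features as a stand-in for well logs and reporting R, MAE and RMSE as in the abstract above; subsample < 1.0 is what makes the boosting stochastic (each tree sees a random fraction of the training rows).

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic regression data as a stand-in for well-log features.
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# subsample < 1.0 makes the boosting "stochastic".
sgb = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                subsample=0.7, max_depth=3, random_state=0)
sgb.fit(X_tr, y_tr)
pred = sgb.predict(X_te)

print("R:", np.corrcoef(y_te, pred)[0, 1],
      "MAE:", mean_absolute_error(y_te, pred),
      "RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```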
Title: Total Protection of Lexicographic Product Graphs Abstract: Given a graph G with vertex set V(G), a function $$f:V(G)\rightarrow \{0,1,2\}$$ is said to be a total dominating function if $$\sum _{u\in N(v)}f(u)>0$$ for every $$v\in V(G)$$ , where N(v) denotes the open neighbourhood of v. Let $$V_i=\{x\in V(G):f(x)=i\}$$ . A total dominating function f is a total weak Roman dominating function if for every vertex $$v\in V_0$$ there exists a vertex $$u\in N(v)\cap (V_1\cup V_2)$$ such that the function f', defined by f'(v) = 1, f'(u) = f(u) - 1 and f'(x) = f(x) whenever $$x\in V(G)\setminus \{u,v\}$$ , is a total dominating function as well. If f is a total weak Roman dominating function and $$V_2=\emptyset $$ , then we say that f is a secure total dominating function. The weight of a function f is defined to be $$\omega (f)=\sum _{v\in V(G)}f(v)$$ . The total weak Roman domination number (secure total domination number) of a graph G is the minimum weight among all total weak Roman dominating functions (secure total dominating functions) on G. In this article, we show that these two parameters coincide for lexicographic product graphs. Furthermore, we obtain closed formulae and tight bounds for these parameters in terms of invariants of the factor graphs involved in the product.
118,308
Title: Fuzzy rule-based environment-aware autonomous mobile robots for actuated touring Abstract: The involvement of computer-programmed autonomous mobile robots in real-time activities has been growing in recent years. The actuation and interaction of the robots are controlled through optimized high-level programming to respond to environmental factors. Such robots require an optimized touring plan with a better response to understand the inherent conditions. In this article, fuzzy rule-based optimization for actuated touring (FOAT) is presented. FOAT is responsible for balancing the actuation and response of the mobile robot agents in touring and path exploration. The touring and self-decision analysis of the programmed robots is improved through FOAT by adapting to the environmental conditions and then constructing rules for the response. Different from the functions of line-based or other robot touring, the proposed actuated touring frames decisive rules based on the varying inputs. With the framed rules, the touring process of the robot is modified to achieve the best solution instantly, adaptive to the environment. The performance of FOAT is verified through experiments and is then analyzed using the metrics: touring cost, obstacles hit ratio, time-lapse, and tour length.
118,347
Title: How Social Media Analytics Can Inform Content Strategies Abstract: Social media has become a strategic tool for businesses and nonprofit organizations to connect with audiences. However, no comprehensive framework exists to support the continued improvement of social media outcomes. This work draws on prior studies related to social media analytics and user engagement to develop an overarching, analytics-driven process for social content strategy development and improvement. The process provides firms with a set of procedures to regularly assess competitors' and possibly their own content topic posting activities. It then outlines steps to measure the influence of content topics and post characteristics on engagement outcomes and use the garnered insights to drive future posting activities. A proof-of-concept case in the healthcare context is presented to demonstrate the feasibility of the proposed process.
118,370
Title: QMLE of periodic integer-valued time series models Abstract: In this paper, we establish the consistency and the asymptotic normality of the Periodic Poisson (respectively, the Periodic Geometric) Quasi Maximum Likelihood estimators of a general class of periodic count time series models. In this class, the conditional mean is expressed as a parametric and measurable function, with periodic parameters, of the infinite past of the observed process. Applications to some particular periodic models of the class of Periodic Integer-Valued Autoregressive Moving Average (PINARMA) models are considered under some regularity conditions. The performances of the considered estimation methods are assessed through an intensive simulation study. Moreover, applications to two real datasets are provided.
118,387
Title: Joint model of entity recognition and relation extraction based on artificial neural network Abstract: Entity and relationship extraction is an important step in building a knowledge base, which is the basis for many artificial intelligence products used in daily life, such as Amazon Echo and Intelligent Search. We propose a new artificial neural network model to identify entities and their relationships without any handcrafted features. The neural network model mainly includes a CNN module for text feature extraction and relationship classification, and a bidirectional LSTM module for obtaining context information of the entity. The context information and entity tags between the entities obtained in the entity identification process are further passed to the CNN module for relationship classification to improve the effectiveness of the relationship classification and achieve the purpose of joint processing. We conducted experiments on the public datasets CoNLL04 (Conference on Computational Natural Language Learning), ACE04 and ACE05 (Automatic Content Extraction program) to verify the effectiveness of our approach. The proposed method achieves state-of-the-art results on the entity and relation extraction task.
118,426
Title: Applying artificial bee colony algorithm to the multidepot vehicle routing problem Abstract: With advanced information technologies and industrial intelligence, Industry 4.0 has been witnessing a large scale digital transformation. Intelligent transportation plays an important role in the new era and the classic vehicle routing problem (VRP), which is a typical problem in providing intelligent transportation, has been drawing more attention in recent years. In this article, we study the multidepot VRP (MDVRP) that considers the management of the vehicles and the optimization of the routes among multiple depots, making the VRP variant more meaningful. In addressing the time efficiency and depot cooperation challenges, we apply the artificial bee colony (ABC) algorithm to the MDVRP. To begin with, we reduce the MDVRP to single-depot VRPs by introducing depot clustering. Then we modify the ABC algorithm for the single-depot VRP to generate solutions for each depot. Finally, we propose a coevolution strategy in depot combination to generate a complete solution of the MDVRP. We conduct extensive experiments with different parameters and compare our algorithm with a greedy algorithm and a genetic algorithm (GA). The results show that the ABC algorithm performs well and achieves up to a 70% advantage over the greedy algorithm and a 3% advantage over the GA.
118,469
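The depot-clustering step described above (reducing the MDVRP to single-depot VRPs) could look roughly like the following nearest-depot assignment; this simple rule is an assumption made for illustration, not necessarily the exact clustering used in the article.

```python
# Sketch of the depot-clustering step that reduces an MDVRP instance to
# several single-depot VRPs: each customer is assigned to its nearest depot.
# A nearest-depot rule is one plausible reading of the clustering step only.
import numpy as np

def cluster_customers_by_depot(customers, depots):
    """customers: (n, 2) coordinates, depots: (m, 2) coordinates.
    Returns a dict depot_index -> list of customer indices."""
    customers = np.asarray(customers, dtype=float)
    depots = np.asarray(depots, dtype=float)
    # pairwise Euclidean distances, shape (n_customers, n_depots)
    d = np.linalg.norm(customers[:, None, :] - depots[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    groups = {k: [] for k in range(len(depots))}
    for i, k in enumerate(nearest):
        groups[k].append(i)
    return groups

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    groups = cluster_customers_by_depot(rng.uniform(0, 100, (20, 2)),
                                        [[25, 25], [75, 75]])
    print(groups)
```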
Title: Algorithmic aspect on the minimum (weighted) doubly resolving set problem of graphs Abstract: Let G be a simple graph, where each vertex has a nonnegative weight. A vertex subset S of G is a doubly resolving set (DRS) of G if for every pair of vertices u, v in G, there exist $$x,y\in S$$ such that $$d(x,u)-d(x,v)\ne d(y,u)-d(y,v)$$ . The minimum weighted doubly resolving set (MWDRS) problem is to find a doubly resolving set with minimum total weight. We establish a linear time algorithm for the MWDRS problem for all graphs in which each block is a complete graph or a cycle. Hence, the MWDRS problems for block graphs and cactus graphs can be solved in linear time. We also prove that a k-edge-augmented tree (a tree with k additional edges) with minimum degree $$\delta (G)\ge 2$$ admits a doubly resolving set of size at most $$2k+1$$ . This implies that the DRS problem on a k-edge-augmented tree can be solved in $$O(n^{2k+3})$$ time.
118,499
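A brute-force illustration of the doubly resolving set definition from the abstract above, using networkx; it only checks the definition on small graphs and is not the linear-time algorithm of the paper.

```python
# Brute-force check of the doubly-resolving-set (DRS) definition with networkx.
# Purely an illustration of the definition, not an efficient algorithm.
from itertools import combinations
import networkx as nx

def is_doubly_resolving(G, S):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    for u, v in combinations(G.nodes, 2):
        # need x, y in S with d(x,u)-d(x,v) != d(y,u)-d(y,v), i.e. at least
        # two distinct values of the difference over S
        diffs = {dist[x][u] - dist[x][v] for x in S}
        if len(diffs) < 2:
            return False
    return True

def minimum_drs_bruteforce(G):
    nodes = list(G.nodes)
    for size in range(2, len(nodes) + 1):
        for S in combinations(nodes, size):
            if is_doubly_resolving(G, S):
                return set(S)
    return set(nodes)

if __name__ == "__main__":
    print(minimum_drs_bruteforce(nx.cycle_graph(6)))
```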
Title: Harnessing semantic segmentation masks for accurate facial attribute editing Abstract: In recent years, with the rapid development of adversarial learning technology, facial attribute editing has achieved great success in a number of areas. Realistic visual effect, invariant identity information, and accurate editing area are the three key issues of facial attribute editing. Unfortunately, most researches focus on the former two problems. However, the lack of awareness of the accurate editing area is the main reason that attribute-irrelevant details are damaged. To address this issue, this article proposes a novel facial attribute editing algorithm, a generative adversarial network (GAN) with semantic masks, from the perspective of editing location accuracy. By generating a mask for the attribute-related areas, the semantic segmentation network constrains the manipulation to the target region alone while not harming any attribute-irrelevant details. The GAN is then combined with the semantic segmentation network to formulate the entire framework, which is referred to as SM-GAN. Extensive experiments on the public datasets CelebA and LFWA prove that the presented method can not only ensure that the attribute manipulation is realistic, but also allow attribute-irrelevant regions to remain unchanged. Moreover, it can also simultaneously edit multiple facial attributes.
118,510
Title: Improving the performance of histogram-based data hiding method in the video environment Abstract: The rapid development of information technology has made security a crucial factor. Information being transferred between different parties must be well protected to prevent illegal access to the file. Data hiding, as one of the possible securing methods, has been introduced; nevertheless, the quality of the stego file and the capacity of the secret are still challenging. In this research, we work on these problems by exploring the histogram developed in the frame and the prediction error generated between two consecutive frames of a video file. In the generated stego file, these updated prediction errors do not change the composition and the total number of frames. The experimental results show that the proposed method achieves a better result than the compared algorithms, as shown by the Peak Signal to Noise Ratio (PSNR) value for various capacities. Equivalently, the number of bits that can be held is higher than that of the other methods for the same stego quality. Besides, this method is reversible, in which both the cover and the secret can be fully extracted from the stego file. This makes the scheme appropriate for systems that require both the cover and the secret to be available.
118,561
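A sketch of the generic prediction-error histogram-shifting idea on two consecutive frames, using numpy; the exact embedding rules, overflow handling and capacity management of the article may differ, so treat this as an illustration of the principle only.

```python
# Reversible data hiding by histogram shifting of the prediction error between
# two consecutive frames (generic textbook scheme; overflow handling omitted).
import numpy as np

def embed(prev_frame, cur_frame, bits):
    err = cur_frame.astype(np.int32) - prev_frame.astype(np.int32)
    vals, counts = np.unique(err, return_counts=True)
    peak = int(vals[counts.argmax()])          # most frequent prediction error
    shifted = err.copy()
    shifted[err > peak] += 1                   # shift bins right of the peak
    flat = shifted.ravel()
    bit_iter = iter(bits)
    for i, e in enumerate(flat):
        if e == peak:
            try:
                flat[i] = peak + next(bit_iter)  # embed one bit per peak pixel
            except StopIteration:
                break
    return prev_frame.astype(np.int32) + shifted, peak

def extract(prev_frame, stego_frame, peak, n_bits):
    err = stego_frame.astype(np.int32) - prev_frame.astype(np.int32)
    bits = [int(e - peak) for e in err.ravel() if e in (peak, peak + 1)][:n_bits]
    restored = err.copy()                      # undo embedding and shifting
    restored[err == peak + 1] = peak
    restored[err > peak + 1] -= 1
    return bits, prev_frame.astype(np.int32) + restored

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f1 = rng.integers(0, 200, (8, 8))
    f2 = f1 + rng.integers(-2, 3, (8, 8))
    message = [1, 0, 1, 1, 0]
    stego, peak = embed(f1, f2, message)
    bits, recovered = extract(f1, stego, peak, len(message))
    print(bits, np.array_equal(recovered, f2))  # message recovered, cover restored
```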
Title: PYTAF: A Python Tool for Spatially Resampling Earth Observation Data Abstract: Earth observation data have revolutionized Earth science and significantly enhanced the ability to forecast weather, climate and natural hazards. The storage format of the majority of Earth observation data can be classified into swath, grid or point structures. Earth science studies frequently involve resampling between swath, grid and point data when combining measurements from multiple instruments, which can provide more insights into geophysical processes than using any single instrument alone. As the amount of Earth observation data increases each day, the demand for a high computational efficient tool to resample and fuse Earth observation data has never been greater. We present a software tool, called pytaf, that resamples Earth observation data stored in swath, grid or point structures using a novel block indexing algorithm. This tool is specially designed to process large scale datasets. The core functions of pytaf were implemented in C with OpenMP to enable parallel computations in a shared memory environment. A user-friendly python interface was also built. The tool has been extensively tested on supercomputers and successfully used to resample the data from five instruments on the EOS-Terra platform at a mission-wide scale.
118,562
Title: Cost and revenue analysis of an impatient customer queue with second optional service and working vacations Abstract: In this article, we propose a finite buffer impatient customer queue with second optional service (SOS) and working vacations. When the server is busy, an arriving customer either joins the queue or balks on the basis of state-dependent joining/balking probabilities. For each customer, the server provides two phases of service, namely, first essential service (FES) and SOS. All the customers demand FES, whereas only a few customers opt for SOS after the completion of FES. At a service completion instant, if the system is empty, the server leaves for working vacation. During working vacations, the waiting customers activate an impatience timer which is exponentially distributed. It is assumed that the interarrival times, vacation times, and service times during FES, SOS and working vacations follow exponential distributions. The steady-state probabilities of the model and various performance measures are derived. In order to optimize the total expected cost of the system, the particle swarm optimization technique has been adopted for finding the optimum service rates of the server. Numerical results are sketched out to demonstrate the impact of the system and cost parameters.
118,658
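A generic particle swarm optimization sketch for tuning the two service rates, assuming numpy; the cost function below is a hypothetical placeholder, not the article's expected total cost expression.

```python
# Generic particle swarm optimization (PSO) sketch for tuning two service
# rates (mu1 for FES, mu2 for SOS).  The cost function is a stand-in.
import numpy as np

def cost(mu):
    mu1, mu2 = mu
    # placeholder convex cost: service cost grows with rates, delay cost shrinks
    return 5.0 * mu1 + 3.0 * mu2 + 40.0 / mu1 + 25.0 / mu2

def pso(cost_fn, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost_fn(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost_fn(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

if __name__ == "__main__":
    best, val = pso(cost, bounds=[(0.5, 10.0), (0.5, 10.0)])
    print("optimal (mu1, mu2) ~", best, "cost ~", val)
```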
Title: A biclustering-based heterogeneous customer requirement determination method from customer participation in product development Abstract: Timely identification of heterogeneous customer requirements serves as a vital step for a company to formulate product strategies to meet the diverse and changing needs of its customers. By relaxing the search for global patterns in classical clustering, we propose a biclustering-based method, BiHCR, to identify heterogeneous customer requirements from the perspective of local patterns detection. Specifically, conforming to customers’ attitudes toward products derived from customer participation, we first transform the original data matrix with customers as rows and customer requirements as columns into a binary matrix. Then, by combining the two significant biclustering algorithms, Bimax and RepBimax, we design BiHCR to identify the biclusters embedded in the binary matrix to improve the detection results from the larger biclusters and their overlaps. Furthermore, the empirical case of smartphone development in a Chinese company verifies that BiHCR can identify homogeneous subgroups of customers with similar requirements without redundant noise compared with Bimax. Additionally, in contrast to RepBimax, our proposed BiHCR can also detect the intractable overlapping biclusters in the binary matrix used to describe the heterogeneity of customer requirements. Since the process of customer participation in product development gradually became a dominant approach to collecting customer requirements information for many industries, a conceptual framework of customer requirements identification is constructed and the detailed steps are clarified for manufacturers.
118,780
Title: New contextual collaborative filtering system with application to personalized healthy nutrition education Abstract: Nowadays, the Internet is becoming a platform of choice where the number of users and items grows dramatically, making recommender systems (RS) a highly demanded and widespread technology. This paper deals with context-aware collaborative RS and presents a double contribution that consists of a two Dimensions Contextual Collaborative Recommender System (2DCCRS) and a related application. Our first contribution proposes a new framework for collaborative context-aware RS that relies on two key ideas. The first one suggests splitting the context into two parts, namely internal and external contexts, in order to deal with both internal and external context attributes in different and more appropriate manners. This allows addressing the complexity of the context model in a more effective way. In the second idea, we introduce two concepts, namely “Stakeholders” and “Aggregation”, to effectively alleviate the new user and new item problems. 2DCCRS is based on a multi-layer architecture. Its highest layer relies on a pre-filtering algorithm that deals with the cold start system problem, and is mainly based on the similarity between the user profile and the items' features. The middle layer is based on a collaborative filtering algorithm that takes into account the users' preferences, interests and priorities; while the deepest layer, which is considered the most relevant in our multilayer architecture, focuses on a post-filtering algorithm in which the recommendations are much more adapted to the user environment. In our second contribution, we present a case study of 2DCCRS in order to demonstrate the usefulness and effectiveness of the proposed approach. Indeed, we propose a personalized Healthy and Tasty application (H&T) that generates items based on the 2DCCRS framework to guide the user toward the healthy and tasty meals that best meet their needs. The obtained results are very promising and show the effectiveness of our proposal.
118,798
Title: Compromising Electromagnetic Emanations of USB Mass Storage Devices Abstract: Universal Serial Bus (USB) has become the dominant Plug & Play interface for personal computers and continues to grow. Any digital communication source emits secondary or unwanted emissions, called compromising emanations, as they can be received and used to reconstruct the original transmitted information, thereby compromising the transmitted messages. This paper presents a number of experimental results regarding the USB communication between a personal computer and a USB memory device (USB bulk transfer), performed in a specialized laboratory, and illustrates the capability of restoring information transmitted at bit level solely from the compromising radiation emitted by this communication bus. Comparative results for shielded and unshielded devices are also illustrated. Finally, some TEMPEST protection methods are identified and presented against leakage of information through the compromising radiation of USB communication.
118,799
Title: Inclusion of Unicode Standard seamless characters to expand Arabic text steganography for secure individual uses Abstract: The need to protect confidential data stored on personal computers (PCs) or sent to other parties has increased significantly. Therefore, a secure, high-capacity strategy is needed to cover the data contained within media and make it difficult to detect. Due to the limited number of research studies that have achieved these ends through text steganography, we present an innovative approach for hiding secret bits in Arabic text with seamless Unicode Standard characters. Our method uses the contextual forms of Arabic characters to hide certain secret bits. Extra characters such as Zero-Width Joiners (ZWJ), Kashida, Medium Mathematical Spaces (MMSPs), and Zero-Width Non-Joiners (ZWNJ) are also used to further enhance the method’s capacity without diminishing the data’s security. Our experimental results show that this technique outperforms similar methods with an average improvement in capacity of more than 50%. This technique also has a higher security ratio than the other methods in this study, with the exception of Method 5, and is robust against electronic text modifications such as copying, pasting, and text formatting. Moreover, our algorithm can be widely adopted in related languages, such as Urdu and Farsi, due to its utilization of Unicode characters, which is the encoding standard used in most of the world’s writing systems.
118,820
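A minimal sketch of hiding bits with zero-width Unicode characters (ZWNJ/ZWJ); this simplified two-symbol scheme only illustrates the idea, whereas the article's method additionally exploits Arabic contextual forms, Kashida and MMSP characters.

```python
# Minimal sketch of text steganography with zero-width Unicode characters.
# Bits are encoded as ZWNJ (0) / ZWJ (1) inserted after cover characters.
ZWNJ = "\u200c"   # zero-width non-joiner  -> bit 0
ZWJ = "\u200d"    # zero-width joiner      -> bit 1

def embed(cover: str, bits: str) -> str:
    out, it = [], iter(bits)
    for ch in cover:
        out.append(ch)
        b = next(it, None)
        if b is not None:
            out.append(ZWJ if b == "1" else ZWNJ)
    if next(it, None) is not None:
        raise ValueError("cover text too short for the message")
    return "".join(out)

def extract(stego: str) -> str:
    return "".join("1" if ch == ZWJ else "0" for ch in stego if ch in (ZWJ, ZWNJ))

if __name__ == "__main__":
    secret = "1011001"
    stego = embed("confidential report", secret)
    # stego renders identically to the cover but carries the hidden bits
    print(extract(stego) == secret, len(stego), len("confidential report"))
```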
Title: On relations between BLUPs under two transformed linear random-effects models Abstract: A general linear random-effects model that includes both fixed and random effects, together with two of its transformed models, is considered without making any restrictions on the correlation of the random effects or any full rank assumptions. Predictors of joint unknown parameter vectors under the two transformed models have different algebraic expressions and different properties in the contexts of these models. In this situation, establishing relations and making comparisons between the predictors under the two models are the main focuses. We first investigate relationships of best linear unbiased predictors (BLUPs) of general linear functions of the fixed and random effects under the two transformed models and construct several equalities for the BLUPs. Then, the problem of comparing the covariance matrices of BLUPs under the models is considered. We derive from matrix rank and inertia formulas the necessary and sufficient conditions for a variety of equalities and inequalities in the comparisons of the covariance matrices under the two transformed models.
118,841
Title: Vision and Wi-Fi fusion in probabilistic appearance-based localization Abstract: This article introduces an indoor topological localization algorithm that uses vision and Wi-Fi signals. Its main contribution is a novel way of merging data from these sensors. The designed system does not require knowledge of the building plan or the positions of the Wi-Fi access points. By making the Wi-Fi signature suited to the FABMAP algorithm, this work develops an early fusion framework that solves global localization and kidnapped robot problems. The resulting algorithm has been tested and compared with FABMAP visual localization, over data acquired by a Pepper robot in three different environments: an office building, a middle school, and a private apartment. Numerous runs of different robots have been realized over several months for a total covered distance of 6.4 km. Constraints were applied during acquisitions to make the experiments fitted to real use cases of Pepper robots. Without any tuning, our early fusion framework outperforms visual localization in all testing situations and with a significant margin in environments where vision faces problems such as moving objects or perceptual aliasing. In such conditions, 90.6% of estimated localizations are less than 5 m away from ground truth with our early fusion framework compared with 77.6% with visual localization. Furthermore, compared with other classical fusion strategies, the early fusion framework produces the best localization results because in all tested situations, it improves visual localization results without damaging them where Wi-Fi signals carry little information.
118,872
Title: Inferences for two Lindley populations based on joint progressive type-II censored data Abstract: The joint censoring scheme is of great importance when the motive of the study is to compare the relative merits of products in relation to their service times. In the last few years, progressive censoring has received considerable attention in order to save the cost and time of the experiment. This paper deals with inferences for Lindley populations when a joint progressive type-II censoring scheme is applied to two samples in a joint manner. Here, the maximum likelihood estimators of the parameters are derived along with their associated confidence intervals, which depend on the Fisher information matrix. The boot-p and boot-t confidence intervals are also obtained. Bayes estimators of the unknown parameters assuming gamma priors are calculated. Importance sampling and Gibbs sampling techniques are used, as the Bayes estimators cannot be calculated in closed form. HPD credible intervals are also constructed. A Monte Carlo simulation study is performed to measure the efficiency of the estimates. A real data set is given for illustrative purposes. Finally, some criteria for an optimum censoring scheme are discussed.
118,888
Title: Self-dual cyclic codes over Z(4) of length 4n Abstract: For any odd positive integer n, we express cyclic codes over Z(4) of length 4n in a new way. Based on the expression of each cyclic code C, we provide an efficient encoder and determine the type of C. In particular, we give an explicit representation and enumeration for all distinct self-dual cyclic codes over Z(4) of length 4n and correct a mistake in the paper "Concatenated structure of cyclic codes over Z(4) of length 4n" (Cao et al. in Appl Algebra Eng Commun Comput 10:279-302, 2016). In addition, we obtain 50 new self-dual cyclic codes over Z(4) of length 28.
118,914
Title: Matroid optimization problems with monotone monomials in the objective Abstract: In this paper we investigate non-linear matroid optimization problems with polynomial objective functions where the monomials satisfy certain monotonicity properties. Indeed, we study problems where the set of non-linear monomials consists of all non-linear monomials that can be built from a given subset of the variables. Linearizing all non-linear monomials, we study the respective polytope. We present a complete description of this polytope. Apart from linearization constraints, one needs appropriately strengthened rank inequalities. The separation problem for these inequalities reduces to a submodular function minimization problem. These polyhedral results give rise to a new hierarchy for the solution of matroid optimization problems with polynomial objectives. This hierarchy makes it possible to strengthen the relaxations of arbitrary linearized combinatorial optimization problems with polynomial objective functions and matroidal substructures. Finally, we give suggestions for future work.
119,022
Title: Evaluation of process capability in gamma regression profiles Abstract: In many industrial and non-industrial processes, the quality of a product or a process is described by a functional relationship between a response variable and one or more independent variables, known as a profile. On the other hand, process capability indices provide numerical measures of the ability of a process. Little research has been done to evaluate the process capability of profiles, and mostly for linear profiles. In the present article, we calculate the index to measure the process capability for gamma regression profiles and we also construct an approximate confidence interval using the bootstrap method. The performance of the proposed method is assessed through a simulation study. Moreover, we show that the index is very robust when the true distribution of the data is lognormal or Weibull. Finally, the applicability of the proposed index is illustrated using two real examples.
119,032
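A generic percentile-bootstrap confidence interval sketch in numpy; the plug-in statistic used here is a simple capability ratio for illustration, not the profile capability index defined in the article.

```python
# Generic percentile-bootstrap confidence interval sketch.  The statistic is a
# plain capability ratio (USL - LSL) / (6 * s) used purely for illustration.
import numpy as np

def capability(sample, lsl, usl):
    return (usl - lsl) / (6.0 * sample.std(ddof=1))

def bootstrap_ci(sample, stat_fn, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(sample)
    boot = np.array([stat_fn(sample[rng.integers(0, n, n)]) for _ in range(n_boot)])
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.gamma(shape=4.0, scale=2.0, size=100)   # skewed process data
    lsl, usl = 1.0, 20.0
    point = capability(data, lsl, usl)
    lo, hi = bootstrap_ci(data, lambda s: capability(s, lsl, usl))
    print(f"index = {point:.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```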
Title: Coding structure of JMVDC along saliency mapping: a perspective compression technique Abstract: The compression of multi-view video plus depth information constitutes an extension of the high-performance video coding (HPVC) standard. Multi-view recordings captured by numerous cameras of the same scene from various positions and angles are becoming feasible since small, low-cost cameras are available. Multi-view video enables users to freely change their viewpoint whenever they want to see the scene from any position. For depth coding, new intra-coding modes, modified motion compensation and motion vector coding, as well as the concept of motion parameter inheritance, are part of the HPVC extension. A novel encoder control utilizes view synthesis optimization, which ensures that high-quality intermediate views can be generated on the basis of the decoded data. The bit stream format supports the extraction of partial bit streams, so that conventional formats of the latest video standards, such as 2D, stereo, and multi-view plus depth, can be decoded from a single bit stream. We propose a novel saliency detection technique for stereoscopic images, which relies on local-global contrast features, followed by surrounding-region enhancement and a stereo-centre prior enhancement, and we also propose an algorithm that effectively selects a near-optimal inter-view prediction structure (PS) and the related texture and depth QPs, at GOP level, when the MVD data format is utilized for IMVS systems. Objective and subjective results are presented, demonstrating that the proposed approach gives 50% bit-rate savings in comparison with JMVDC simulcast and 22% in comparison with a direct multi-view extension of JMVDC without the newly developed coding tools.
119,193
Title: Conditional generative adversarial networks based on the principle of homology continuity for face aging Abstract: Age is one of the most important biological characteristics of the human face. The increase of age coincides with the increase of the aging degree of the face. Face aging synthesis is attracting increasingly more attention from domestic and overseas scholars in the computer vision and computer graphics fields, and it can be integrated into basic research on face-related problems, such as cross-age face analysis and age estimation. At present, some achievements have been made in face aging synthesis research; however, it is still an urgent problem to reduce the number of parameters and the computational complexity of the network while ensuring the aging effect. Therefore, a new face aging algorithm is proposed in this article. Unlike the previous methods of aging process simulation, we introduce an assisted age classification network based on the principle of homology continuity, which is more in line with the human cognition process. After pretraining, the result of age classification is improved, and the pretraining model is then added to the framework of aging face generation for fine-tuning to constrain the generated aging face, which can improve the aging accuracy of the generated image. Furthermore, we reconstruct the input face by using the age tag of the input face and the synthesized aging face and maintain the identity invariance in the face aging process by minimizing the reconstruction loss. The experimental results show that the method proposed in this article produces a considerable face aging effect and significantly reduces the number of parameters and the computational complexity.
119,271
Title: Likelihood ratio test change-point detection in the skew slash distribution Abstract: In this paper, we use the likelihood ratio test to detect changes in the parameters of the skew slash distribution. Simulations have been conducted under different scenarios to investigate the performance of the proposed method. Finally, applications to real data are presented.
119,274
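A sketch of a likelihood-ratio change-point scan; for simplicity the segment model below is Gaussian with a mean change, so the skew slash likelihood of the paper would replace the Gaussian terms.

```python
# Likelihood-ratio change-point scan, illustrated with a Gaussian mean-change
# model (known common variance); the skew slash likelihood would replace the
# Gaussian log-likelihood terms in the paper's setting.
import numpy as np

def lrt_change_point(x):
    """Return (best index k, 2*log likelihood ratio) for a single mean change."""
    n = len(x)
    sigma2 = x.var()                       # plug-in variance under H0
    ll0 = -0.5 * np.sum((x - x.mean()) ** 2) / sigma2
    best_k, best_stat = None, -np.inf
    for k in range(2, n - 1):              # keep a few points in each segment
        left, right = x[:k], x[k:]
        ll1 = (-0.5 * np.sum((left - left.mean()) ** 2) / sigma2
               - 0.5 * np.sum((right - right.mean()) ** 2) / sigma2)
        stat = 2.0 * (ll1 - ll0)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(0, 1, 60), rng.normal(1.5, 1, 40)])
    # compare best_stat against a simulated or asymptotic critical value
    print(lrt_change_point(x))
```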
Title: Total Roman {2}-Dominating Functions in Graphs Abstract: A Roman {2}-dominating function (R2F) is a function $$f:V\rightarrow \{0,1,2\}$$ with the property that for every vertex $$v\in V$$ with f(v) = 0 there is a neighbor u of v with f(u) = 2, or there are two neighbors x, y of v with f(x) = f(y) = 1. A total Roman {2}-dominating function (TR2DF) is an R2F f such that the set of vertices with f(v) > 0 induces a subgraph with no isolated vertices. The weight of a TR2DF is the sum of its function values over all vertices, and the minimum weight of a TR2DF of G is the total Roman {2}-domination number $$\gamma _{tR2}(G)$$ . In this paper, we initiate the study of total Roman {2}-dominating functions and establish some of their properties. Moreover, we present various bounds on the total Roman {2}-domination number. We also discuss the decision problem associated with $$\gamma _{tR2}(G)$$ and show that this parameter can be computed in linear time for bounded clique-width graphs (including trees).
119,282
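A brute-force computation of the total Roman {2}-domination number directly from the definition above, using networkx and itertools; it is an illustration of the definition, not an efficient algorithm.

```python
# Brute-force total Roman {2}-domination number of a small graph, straight
# from the definition; exponential in the number of vertices.
from itertools import product
import networkx as nx

def is_tr2df(G, f):
    for v in G:
        if f[v] == 0:
            nbr_vals = [f[u] for u in G[v]]
            # need a neighbour with value 2, or two neighbours with value 1
            if 2 not in nbr_vals and sum(1 for x in nbr_vals if x == 1) < 2:
                return False
    positive = [v for v in G if f[v] > 0]
    H = G.subgraph(positive)                 # induced subgraph on V_1 union V_2
    return len(positive) > 0 and all(H.degree(v) > 0 for v in positive)

def total_roman2_domination_number(G):
    nodes = list(G.nodes)
    best = None
    for values in product([0, 1, 2], repeat=len(nodes)):
        f = dict(zip(nodes, values))
        if is_tr2df(G, f):
            w = sum(values)
            best = w if best is None else min(best, w)
    return best

if __name__ == "__main__":
    print(total_roman2_domination_number(nx.cycle_graph(5)))
```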
Title: Improved simultaneous monitoring of mean and coefficient of variation under ranked set sampling schemes Abstract: A control chart is very useful for detecting assignable causes that shift the process parameters (e.g., mean and dispersion). Simultaneous monitoring of the process parameters is a well-known approach utilized for bilateral processes. This paper explores joint monitoring of both the process mean and the coefficient of variation (CV) by using a log-normal transformation under ranked set sampling (RSS) schemes. Utilizing RSS schemes in joint monitoring of the mean and CV with the exponentially weighted moving average (EWMA) statistic enhances the capability of the monitoring plan as compared with EWMA under simple random sampling for a mean shift, a CV shift, as well as a joint shift. RSS schemes using EWMA can reduce the adverse impact on the detection ability of a joint monitoring plan. An illustration of the proposed control chart is presented with the aid of a real data set.
119,324
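A sketch of the plain EWMA recursion with time-varying control limits under simple random sampling, in numpy; the log-normal CV transformation and the RSS schemes of the article are not reproduced here.

```python
# Plain EWMA control statistic with time-varying control limits; the input is
# assumed to be an already standardized monitoring statistic.
import numpy as np

def ewma_chart(z, lam=0.2, L=3.0):
    z = np.asarray(z, dtype=float)
    ewma = np.empty_like(z)
    ucl = np.empty_like(z)
    prev = 0.0                                   # in-control mean of z taken as 0
    for t in range(len(z)):
        prev = lam * z[t] + (1.0 - lam) * prev   # EWMA recursion
        ewma[t] = prev
        var_t = lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * (t + 1)))
        ucl[t] = L * np.sqrt(var_t)              # limits widen towards steady state
    return ewma, ucl

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    stats = np.concatenate([rng.normal(0, 1, 30), rng.normal(1.0, 1, 20)])  # shift
    ewma, ucl = ewma_chart(stats)
    signals = np.where(np.abs(ewma) > ucl)[0]
    print("first out-of-control signal at sample:",
          signals[0] if len(signals) else None)
```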
Title: MT-IVSN: a novel model for vehicle re-identification Abstract: Image re-identification is usually used to find specific images from image libraries or video sequences. In recent years, convolutional neural networks have gradually become the dominant method in this field. In this paper, a Multitask Identification-Verification Siamese Network for vehicles is proposed, which combines color feature vectors with the inherent structure, category and other attributes of the vehicle. A vehicle 2D structural model and a method of image viewpoint normalization are also presented to ensure the high consistency of features in the expression of images from different angles. In addition, based on the vehicle 2D structural model, a dynamic annular non-uniform partition color super-pixel sampling strategy for vehicle face is investigated to construct a color feature vector. In the experiments, the proposed model and method are evaluated on the public vehicles and VeRi datasets. The experimental results show that the proposed method and model have made great progress.
119,431
Title: The multicommodity network flow problem: state of the art classification, applications, and solution methods Abstract: Over the past decades, the Multicommodity Network Flow (MCNF) problem has grown popular in the academic literature and a growing number of researchers are interested in this field. It is a powerful operational research approach to tackle and solve many complicated problems, especially in transportation and telecommunication contexts. Yet, few literature reviews have made an effort to classify the existing articles accordingly. In this article, we present a taxonomic review of the MCNF literature published between 2000 and 2019. Based on an adapted version of an existing comprehensive taxonomy, we have classified 263 articles into two main categories of applications and solution methods. We have also analyzed the research interests in the MCNF literature. This classification is the first to categorize the articles into this level of detail. Results show that there are topics, which need to be addressed in future researches.
119,442
Title: Trust data collections via vehicles joint with unmanned aerial vehicles in the smart Internet of Things Abstract: Due to their mobile character, ground vehicles and unmanned aerial vehicles (UAVs) are currently being considered as sensing devices that can collect data in the Internet of Things (IoT). Building and enhancing trust and security environments in data collection processes are fundamental and essential requirements. Here, we propose a novel scheme named "Trust Data Collections via Vehicles joint with UAVs in the Smart Internet of Things" (T-SIoTs scheme), which aims to establish a trust-based environment for data collection by utilizing both trust vehicles and UAVs. First, to optimize the security aspect, the data center (DC) selects trust-based vehicles as mobile data collectors by analyzing and mining historical datasets. To guarantee coverage of the data collection regions, several static stations are established, which can be utilized as static data collectors. Second, UAVs are arranged by the DC to collect data stored by both the trust-based vehicles and the static data collectors. In the T-SIoTs scheme, the trajectories of UAVs are designed according to a shortest-distance-first routing scheme. Comprehensive theoretical analyses and experiments have been provided to evaluate and support the T-SIoTs scheme. Compared with previous studies, the T-SIoTs scheme can improve the security ratio by approximately 46.133% to 54.60%. With the routing scheme, the energy consumption of UAVs can be reduced by approximately 46.93%.
119,449
Title: Surface EMG signal classification using TQWT, Bagging and Boosting for hand movement recognition Abstract: Hands play a significant role in grasping and manipulating different objects. The loss of even a single hand has an impact on human activity. In this regard, a prosthetic hand is an appealing solution for subjects who have lost their hands. The surface electromyogram (sEMG) plays a vital role in the design of prosthetic hands. Ensemble classifiers achieve better performance by using a weighted combination of several classifier models. Hence, in this paper, the feasibility of the Bagging and the Boosting ensemble classifiers is assessed for basic hand movement recognition using sEMG signals, which were recorded during grasping movements with various objects for six hand motions. The novelty of the current study is the development of an ensemble model for hand movement recognition based on the tunable Q-factor wavelet transform (TQWT). The proposed method consists of three steps. In the first step, MSPCA is used for denoising. In the second step, a novel feature extraction method, TQWT, is used for feature extraction from the sEMG signals, and then statistical values of the TQWT sub-bands are calculated. In the last step, the obtained feature set is used as input to an ensemble classifier for the identification of intended hand movements. The performances of the Bagging and the Boosting ensemble classifiers are compared in terms of different performance measures. Using TQWT-extracted features along with the presented AdaBoost with SVM and MultiBoost with SVM classifiers results in a classification accuracy of up to 100%. Hence, the results show that the proposed framework achieves overall better performance and is a potential candidate for prosthetic hand control.
119,476
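A sketch of the classification stage only, assuming scikit-learn: Bagging and AdaBoost ensembles with SVM base learners run on a synthetic feature matrix that stands in for the TQWT sub-band statistics.

```python
# Bagging and AdaBoost ensembles with SVM base learners (classification stage
# only); the feature matrix is a synthetic stand-in for TQWT sub-band statistics.
import numpy as np
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 24))                      # stand-in feature matrix
signal = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=300)
y = np.digitize(signal, bins=[-1.2, -0.4, 0.0, 0.4, 1.2])   # six synthetic classes

bagging = BaggingClassifier(SVC(kernel="rbf", C=10.0),
                            n_estimators=25, random_state=0)
boosting = AdaBoostClassifier(SVC(kernel="rbf", C=10.0, probability=True),
                              n_estimators=25, random_state=0)

for name, clf in [("Bagging+SVM", bagging), ("AdaBoost+SVM", boosting)]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean CV accuracy = {acc.mean():.3f}")
```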
Title: Effects of spatial decomposition on the efficiency of kNN search in spatial interpolations Abstract: Spatial interpolations are commonly used in geometric modelling in life science applications such as medical image processing. In large-scale spatial interpolations, it is always necessary to find a local set of data points for each interpolated point using the k Nearest Neighbor (kNN) search. To improve the efficiency of kNN, the uniform grid is commonly employed to quickly locate neighbours, and the size of the grid cell can strongly affect the efficiency of the kNN search. In this paper, we evaluate the effects of the size of the uniform grid cell on the efficiency of kNN search implemented on the CPU and GPU. We employ the Standard Deviation of points' coordinates to measure the spatial distribution of scattered points. For irregularly distributed scattered points, we perform several series of kNN searches in two and three dimensions. Benchmark results indicate that, for both the sequential version implemented on the CPU and the parallel version implemented on the GPU, with the increase of the Standard Deviation of points' coordinates, the relatively optimal size of the grid cell decreases and eventually converges. Moreover, relationships between the Standard Deviation of scattered points' coordinates and the relatively optimal size of the grid cell are fitted.
119,546
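A simplified 2D sketch of grid-based kNN candidate search: points are bucketed into square cells and the search expands rings of cells until the k-th best distance is guaranteed; the cell size is the parameter whose effect the paper studies, and this plain-Python version is illustrative rather than the CPU/GPU implementations benchmarked there.

```python
# kNN candidate search on a uniform grid: points are bucketed into square
# cells of side `cell`, and the search expands Chebyshev rings of cells around
# the query until the answer is guaranteed.  2D only, kept deliberately simple.
from collections import defaultdict
import math
import random

def build_grid(points, cell):
    grid = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        grid[(int(x // cell), int(y // cell))].append(idx)
    return grid

def knn(query, points, grid, cell, k):
    qx, qy = query
    qi, qj = int(qx // cell), int(qy // cell)
    best = []                                    # (distance, point index) pairs
    r = 0
    while True:
        for di in range(-r, r + 1):
            for dj in range(-r, r + 1):
                if max(abs(di), abs(dj)) != r:   # only cells on ring r
                    continue
                for idx in grid.get((qi + di, qj + dj), ()):
                    px, py = points[idx]
                    best.append((math.hypot(px - qx, py - qy), idx))
        best.sort()
        # every unvisited point lies farther than r*cell from the query, so we
        # can stop once the current k-th best distance is within that bound
        if len(best) >= k and best[k - 1][0] <= r * cell:
            return best[:k]
        if len(best) == len(points):             # fewer than k points in total
            return best[:k]
        r += 1

if __name__ == "__main__":
    random.seed(0)
    pts = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(2000)]
    g = build_grid(pts, cell=5.0)
    print(knn((50.0, 50.0), pts, g, cell=5.0, k=8))
```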
Title: Compaction for two models of logarithmic-depth trees: Analysis and experiments Abstract: We are interested in the quantitative analysis of the compaction ratio for two classical families of trees: recursive trees and plane binary increasing trees. These families are typical representatives of tree models with a small depth. Once a tree of size n is compacted by keeping only one occurrence of each fringe subtree appearing in the tree, the resulting graph contains only O(n/ln n) nodes. This result must be compared with classical results on compaction in the families of simply generated trees, where the analogous result states that the compacted structure is of size of order n/√(ln n). The result about the plane binary increasing trees has already been proved, but we propose a new and generic approach to obtain it. Finally, an experimental study is presented, based on a prototype implementation of compacted binary search trees that are modeled by plane binary increasing trees.
119,598
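A sketch of the compaction step: fringe subtrees are shared by hash-consing their shapes, and the number of distinct shapes is the compacted size. Random binary search trees (which have the same shape distribution as plane binary increasing trees) are used so the ratio can be observed empirically.

```python
# Tree compaction by hash-consing: every distinct fringe-subtree shape is
# stored once, so the compacted size is the number of distinct shapes.
import math
import random

def bst_insert(node, key):
    if node is None:
        return [key, None, None]          # [key, left, right]
    if key < node[0]:
        node[1] = bst_insert(node[1], key)
    else:
        node[2] = bst_insert(node[2], key)
    return node

def compact(node, table):
    """Return a canonical id for the shape of the subtree rooted at `node`;
    `table` maps each distinct shape to its id (len(table) = compacted size)."""
    if node is None:
        return -1
    key = (compact(node[1], table), compact(node[2], table))
    if key not in table:
        table[key] = len(table)
    return table[key]

if __name__ == "__main__":
    random.seed(0)
    for n in (1000, 10000, 50000):
        root = None
        for k in random.sample(range(10 * n), n):
            root = bst_insert(root, k)
        table = {}
        compact(root, table)
        print(f"n={n}: compacted size={len(table)}, "
              f"size * ln(n) / n = {len(table) * math.log(n) / n:.3f}")
```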
Title: Pay attention to what you read: Non-recurrent handwritten text-line recognition Abstract: • Novel adaptation of transformers for handwriting recognition tasks, bypassing recurrent neural nets. • Competitive results achieved in a low-resource scenario with a synthetically pretrained model. • Extensive ablation and comparative studies conducted to understand and modify the transformer properly for HTR. • Implicit language modelling ability proved. • State-of-the-art performance achieved on the public IAM dataset.
119,603
Title: State estimation-based robust optimal control of influenza epidemics in an interactive human society Abstract: This paper presents a state estimation-based robust optimal control strategy for influenza epidemics in an interactive human society in the presence of modeling uncertainties. Interactive society is influenced by random entrance of individuals from other human societies whose effects can be modeled as a non-Gaussian noise. Since only the number of exposed and infected humans can be measured, the states of the influenza epidemics are first estimated by an extended maximum correntropy Kalman filter (EMCKF) to provide a robust state estimation in the presence of the non-Gaussian noise. An online quadratic program (QP) optimization is then synthesized subject to a robust control Lyapunov function (RCLF) to minimize susceptible and infected humans, while minimizing and bounding the rates of vaccination and antiviral treatment. The main contribution of this work is twofold. First, the joint QP-RCLF-EMCKF strategy meets multiple design specifications such as state estimation, tracking, pointwise control optimality, and robustness to parameter uncertainty and state estimation errors that have not been achieved simultaneously in previous studies. Second, the uniform ultimate boundedness (UUB)/convergence of all error trajectories is guaranteed by using a Lyapunov stability argument. Simulation results show that the proposed approach achieves appropriate tracking and state estimation performance with good robustness.
119,610
Title: How does working from home affect developer productivity? — A case study of Baidu during the COVID-19 pandemic Abstract: Nowadays, working from home (WFH) has become a popular work arrangement due to its many potential benefits for both companies and employees (e.g., increasing job satisfaction and retention of employees). Many previous studies have investigated the impact of WFH on the productivity of employees. However, most of these studies usually use a qualitative analysis method such as surveys and interviews, and the studied participants do not work from home for a long continuing time. Due to the outbreak of coronavirus disease 2019 (COVID-19), a large number of companies asked their employees to work from home, which provides us an opportunity to investigate whether WFH affects their productivity. In this study, to investigate the difference in developer productivity between WFH and working onsite, we conduct a quantitative analysis based on a dataset of developers’ daily activities from Baidu Inc., one of the largest IT companies in China. In total, we collected approximately four thousand records of 139 developers’ activities of 138 working days. Out of these records, 1103 records are submitted when developers work from home due to the COVID-19 pandemic. We find that WFH has both positive and negative impacts on developer productivity in terms of different metrics, e.g., the number of builds/commits/code reviews. We also notice that WFH has different impacts on projects with different characteristics including programming language, project type/age/size. For example, WFH has a negative impact on developer productivity for large projects. Additionally, we find that productivity varies for different developers. Based on these findings, we get some feedback from developers of Baidu and understand some reasons why WFH has different impacts on developer productivity. We also conclude several implications for both companies and developers.
119,626
Title: Poly-YOLO: higher speed, more precise detection and instance segmentation for YOLOv3 Abstract: We present a new version of YOLO with better performance and extended with instance segmentation called Poly-YOLO. Poly-YOLO builds on the original ideas of YOLOv3 and removes two of its weaknesses: a large amount of rewritten labels and an inefficient distribution of anchors. Poly-YOLO reduces the issues by aggregating features from a light SE-Darknet-53 backbone with a hypercolumn technique, using stairstep upsampling, and produces a single scale output with high resolution. In comparison with YOLOv3, Poly-YOLO has only 60% of its trainable parameters but improves the mean average precision by a relative 40%. We also present Poly-YOLO lite with fewer parameters and a lower output resolution. It has the same precision as YOLOv3, but it is three times smaller and twice as fast, thus suitable for embedded devices. Finally, Poly-YOLO performs instance segmentation by bounding polygons. The network is trained to detect size-independent polygons defined on a polar grid. Vertices of each polygon are being predicted with their confidence, and therefore, Poly-YOLO produces polygons with a varying number of vertices. Source code is available at https://gitlab.com/irafm-ai/poly-yolo .
119,645
Title: OpenQL: A Portable Quantum Programming Framework for Quantum Accelerators Abstract: With the potential of quantum algorithms to solve intractable classical problems, quantum computing is rapidly evolving, and more algorithms are being developed and optimized. Expressing these quantum algorithms using a high-level language and making them executable on a quantum processor while abstracting away hardware details is a challenging task. First, a quantum programming language should provide an intuitive programming interface to describe those algorithms. Then a compiler has to transform the program into a quantum circuit, optimize it, and map it to the target quantum processor respecting the hardware constraints such as the supported quantum operations, the qubit connectivity, and the control electronics limitations. In this article, we propose a quantum programming framework named OpenQL, which includes a high-level quantum programming language and its associated quantum compiler. We present the programming interface of OpenQL, we describe the different layers of the compiler and how we can provide portability over different qubit technologies. Our experiments show that OpenQL allows the execution of the same high-level algorithm on two different qubit technologies, namely superconducting qubits and Si-Spin qubits. Besides the executable code, OpenQL also produces an intermediate quantum assembly code, which is technology independent and can be simulated using the QX simulator.
119,651
Title: An Iteratively Optimized Patch Label Inference Network for Automatic Pavement Distress Detection Abstract: We present a novel deep learning framework named the Iteratively Optimized Patch Label Inference Network (IOPLIN) for automatically detecting various pavement distresses that are not solely limited to specific ones, such as cracks and potholes. IOPLIN can be iteratively trained with only the image label via the Expectation-Maximization Inspired Patch Label Distillation (EMIPLD) strategy, and accomplish this task well by inferring the labels of patches from the pavement images. IOPLIN enjoys many desirable properties over the state-of-the-art single branch CNN models such as GoogLeNet and EfficientNet. It is able to handle images in different resolutions, and sufficiently utilize image information particularly for the high-resolution ones, since IOPLIN extracts the visual features from unrevised image patches instead of the resized entire image. Moreover, it can roughly localize the pavement distress without using any prior localization information in the training phase. In order to better evaluate the effectiveness of our method in practice, we construct a large-scale Bituminous Pavement Disease Detection dataset named CQU-BPDD consisting of 60,059 high-resolution pavement images, which are acquired from different areas at different times. Extensive results on this dataset demonstrate the superiority of IOPLIN over the state-of-the-art image classification approaches in automatic pavement distress detection. The source code of IOPLIN is released at https://github.com/DearCaat/ioplin, and the CQU-BPDD dataset can be accessed at https://dearcaat.github.io/CQU-BPDD/.
119,652
Title: Graph neural network for Hamiltonian-based material property prediction Abstract: Development of next-generation electronic devices calls for the discovery of quantum materials hosting novel electronic, magnetic, and topological properties. Traditional electronic structure methods require expensive computation time and memory, so a fast and accurate prediction model is increasingly desirable. Representing the interactions among atomic orbitals in a material, a Hamiltonian matrix provides all the essential elements that control the structure–property correlations in inorganic compounds. Learning the Hamiltonian by machine learning therefore offers an approach to accelerate the discovery and design of quantum materials. With this motivation, we present and compare several different graph convolution networks that are able to predict the band gap for inorganic materials. The models are developed to incorporate two different kinds of features: the information of each orbital itself and the interactions between orbitals. The information of each orbital includes the name, the relative coordinates with respect to the center of the super cell, and the atom number. The interaction between orbitals is represented by the Hamiltonian matrix. The results show that our model achieves promising prediction accuracy under cross-validation.
119,661
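A minimal numpy sketch of a graph-convolution forward pass with symmetric adjacency normalization, mean pooling and a linear read-out to a scalar property such as a band gap; the layer sizes, random weights and random input graph are stand-ins, not the trained models compared in the paper.

```python
# Minimal graph-convolution forward pass: node features are propagated with a
# symmetrically normalized adjacency, pooled, and mapped to a scalar property.
import numpy as np

def normalize_adjacency(A):
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt         # D^{-1/2} (A + I) D^{-1/2}

def gcn_forward(A, X, W1, W2, w_out):
    A_norm = normalize_adjacency(A)
    H1 = np.maximum(A_norm @ X @ W1, 0.0)          # graph conv + ReLU
    H2 = np.maximum(A_norm @ H1 @ W2, 0.0)
    graph_embedding = H2.mean(axis=0)              # mean pooling over orbitals
    return float(graph_embedding @ w_out)          # predicted scalar property

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_nodes, in_dim, hidden = 8, 6, 16
    A = (rng.random((n_nodes, n_nodes)) < 0.3).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                    # symmetric, no self-loops
    X = rng.normal(size=(n_nodes, in_dim))         # stand-in orbital features
    W1 = rng.normal(scale=0.3, size=(in_dim, hidden))
    W2 = rng.normal(scale=0.3, size=(hidden, hidden))
    w_out = rng.normal(scale=0.3, size=hidden)
    print("predicted band gap (untrained, illustrative):",
          gcn_forward(A, X, W1, W2, w_out))
```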
Title: Structure Identifiability of an NDS With LFT Parametrized Subsystems Abstract: Requirements on subsystems have been made clear in this article for a linear time invariant networked dynamic system (NDS), under which subsystem interconnections can be estimated from external output measurements. In this NDS, subsystems may have distinctive dynamics, and subsystem interconnections are arbitrary. It is assumed that system matrices of each subsystem depend on its (pseudo) first principle parameters (FPP) through a linear fractional transformation. It has been proven that if in each subsystem, the transfer function matrix (TFM) from its internal inputs to its external outputs is of full normal column rank, while the TFM from its external inputs to its internal outputs is of full normal row rank, then the structure of the NDS is identifiable. Moreover, under some particular situations like there is no direct information transmission from an internal input to an internal output in each subsystem, a necessary and sufficient condition is established for NDS structure identifiability. A matrix valued polynomial rank-based equivalent condition is further derived, which depends affinely on subsystem (pseudo) FPPs and can be independently verified for each subsystem. From this condition, some necessary conditions are obtained for both subsystem dynamics and its (pseudo) FPPs, using the Kronecker canonical form of a matrix pencil.
119,675
Title: Distributed Aggregative Optimization Over Multi-Agent Networks Abstract: This article proposes a new framework for distributed optimization, called distributed aggregative optimization, which allows local objective functions to be dependent not only on their own decision variables, but also on the sum of functions of decision variables of all the agents. To handle this problem, a distributed algorithm, called distributed aggregative gradient tracking, is proposed and analyzed, where the global objective function is strongly convex, and the communication graph is balanced and strongly connected. It is shown that the algorithm can converge to the optimal variable at a linear rate. A numerical example is provided to corroborate the theoretical result.
119,676
Title: Tolerance for colorful Tverberg partitions Abstract: Tverberg's theorem bounds the number of points in $$\mathbb {R}^d$$ needed for the existence of a partition into r parts whose convex hulls intersect. If the points are colored with N colors, we seek partitions where each part has at most one point of each color. In this manuscript, we bound the number of color classes needed for the existence of partitions where the convex hulls of the parts intersect even after any set of t colors is removed. We prove asymptotically optimal bounds for t when r <= d+1, improve known bounds when r > d+1, and give a geometric characterization for the configurations of points for which t = N - o(N).
119,683
Title: The value of a coordination game Abstract: The value of a game is the payoff a player can expect (ex ante) from playing the game. Understanding how the value changes with economic primitives is critical for policy design and welfare. However, for games with multiple equilibria, the value is difficult to determine. We therefore develop a new theory of the value of coordination games. The theory delivers testable comparative statics on the value and delivers novel insights relevant to policy design. For example, policies that shift behavior in the desired direction can make everyone worse off, and policies that increase everyone's payoffs can reduce welfare.
119,988
Title: Abelian Categories from Triangulated Categories via Nakaoka–Palu’s Localization Abstract: The aim of this paper is to provide an expansion of Abe–Nakaoka’s heart construction to the following two different realizations of the module category over the endomorphism ring of a rigid object in a triangulated category: Buan–Marsh’s localization and Iyama–Yoshino’s subfactor. Our method depends on a modification of Nakaoka–Palu’s HTCP localization, a Gabriel–Zisman localization of extriangulated categories which is also realized as a subfactor of the original ones. Besides of the heart construction, our generalized HTCP localization involves the following phenomena: (1) stable category with respect to a class of objects; (2) recollement of triangulated categories; (3) recollement of abelian categories under a certain assumption.
120,044
Title: Markoff–Rosenberger triples and generalized Lucas sequences Abstract: We consider the Markoff–Rosenberger equation $$\begin{aligned} ax^2+by^2+cz^2=dxyz \end{aligned}$$ with $$(x,y,z)=(U_i,U_j,U_k)$$ , where $$U_i$$ denotes the i-th generalized Lucas number of first/second kind. We provide an upper bound for the minimum of the indices and we apply the result to completely resolve concrete equations, e.g. we determine solutions containing only balancing numbers and Jacobsthal numbers, respectively.
120,058
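A brute-force search for Markoff–Rosenberger solutions whose coordinates are terms of a generalized Lucas sequence of the first kind; the classical Markoff case a = b = c = 1, d = 3 with Fibonacci numbers is shown, and the small search range is illustrative rather than the paper's proven bound.

```python
# Brute-force search for solutions (x, y, z) of a*x^2 + b*y^2 + c*z^2 = d*x*y*z
# whose entries are generalized Lucas numbers of the first kind
# (U_0 = 0, U_1 = 1, U_n = P*U_{n-1} - Q*U_{n-2}).
from itertools import combinations_with_replacement

def lucas_first_kind(P, Q, n_terms):
    seq = [0, 1]
    while len(seq) < n_terms:
        seq.append(P * seq[-1] - Q * seq[-2])
    return seq

def search(a, b, c, d, terms):
    # for a = b = c the equation is symmetric, so ordered triples x <= y <= z suffice
    values = sorted(set(t for t in terms if t > 0))
    hits = []
    for x, y, z in combinations_with_replacement(values, 3):
        if a * x * x + b * y * y + c * z * z == d * x * y * z:
            hits.append((x, y, z))
    return hits

if __name__ == "__main__":
    fib = lucas_first_kind(P=1, Q=-1, n_terms=25)       # Fibonacci numbers
    # recovers Markoff triples of Fibonacci numbers such as (1, 1, 2) and (1, 2, 5)
    print(search(a=1, b=1, c=1, d=3, terms=fib))
```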
Title: Computing subset transversals in H-free graphs Abstract: We study the computational complexity of two well-known graph transversal problems, namely SUBSET FEEDBACK VERTEX SET and SUBSET ODD CYCLE TRANSVERSAL, by restricting the input to H-free graphs, that is, to graphs that do not contain some fixed graph H as an induced subgraph. By combining known and new results, we determine the computational complexity of both problems on H-free graphs for every graph H except when H = sP1 + P4 for some s > 1. As part of our approach, we introduce the SUBSET VERTEX COVER problem and prove that it is polynomial-time solvable for (sP1 + P4)-free graphs for every s > 1. (c) 2021 Elsevier B.V. All rights reserved.
120,059
Title: Drift-preserving numerical integrators for stochastic Poisson systems. Abstract: We perform a numerical analysis of randomly perturbed Poisson systems. For the considered Ito perturbation of Poisson differential equations, we show the longtime behavior of the energy and quadratic Casimirs for the exact solution. We then propose and analyze a drift-preserving splitting scheme for such problems with the following properties: exact drift preservation of energy and quadratic Casimirs, mean-square order of convergence one, weak order of convergence two. Finally, extensive numerical experiments illustrate the performance of the proposed numerical scheme.
120,069
Title: A Case Study on Stochastic Games on Large Graphs in Mean Field and Sparse Regimes Abstract: We study a class of linear-quadratic stochastic differential games in which each player interacts directly only with its nearest neighbors in a given graph. We find a semi-explicit Markovian equilibrium for any transitive graph, in terms of the empirical eigenvalue distribution of the graph's normalized Laplacian matrix. This facilitates large-population asymptotics for various graph sequences, with several sparse and dense examples discussed in detail. In particular, the mean field game is the correct limit only in the dense graph case, i.e., when the degrees diverge in a suitable sense. Even though equilibrium strategies are nonlocal, depending on the behavior of all players, we use a correlation decay estimate to prove a propagation of chaos result in both the dense and sparse regimes, with the sparse case owing to the large distances between typical vertices. Without assuming the graphs are transitive, we show also that the mean field game solution can be used to construct decentralized approximate equilibria on any sufficiently dense graph sequence.
120,085
Title: LOCAL CONVERGENCE OF THE FEM FOR THE INTEGRAL FRACTIONAL LAPLACIAN Abstract: For first-order discretizations of the integral fractional Laplacian, we provide sharp local error estimates on proper subdomains in both the local $$H^1$$ -norm and the localized energy norm. Our estimates have the form of a local best approximation error plus a global error measured in a weaker norm.
120,087
Title: Multimodal Feature Fusion and Knowledge-Driven Learning via Experts Consult for Thyroid Nodule Classification Abstract: Computer-aided diagnosis (CAD) is becoming a prominent approach to assist clinicians spanning across multiple fields. These automated systems take advantage of various computer vision (CV) procedures, as well as artificial intelligence (AI) techniques, to formulate a diagnosis of a given image, e.g., computed tomography and ultrasound. Advances in both areas (CV and AI) are enabling ever increasin...
120,089
Title: Exponential Consensus of Linear Systems Over Switching Network: A Subspace Method to Establish Necessity and Sufficiency Abstract: In this article, the consensus problem of linear systems is revisited from a novel geometric perspective. The interaction network of these systems is assumed to be piecewise fixed. Moreover, it is allowed to be disconnected at any time but holds a quite mild joint connectivity property. The system matrix is marginally stable and the input matrix is not of full-row rank. By directly examining the s...
120,374
Title: Supervisory Nonlinear State Observers for Adversarial Sparse Attacks Abstract: This article investigates the problem of secure state estimation for continuous-time linear systems in the presence of sparse sensor attacks. Compared with the existing results, the attacked sensor set can be changed by adversaries against secure estimation. To address the more erratic attacks, a novel supervisory state observer is proposed, which employs a bank of candidate nonlinear subobservers...
120,734
Title: On Containment for Linear Systems With Switching Topologies: A Novel State Transition Matrix Perspective Abstract: This article studies the containment control problem for a group of linear systems, consisting of more than one leader, over switching topologies. The input matrices of these linear systems are not required to have full-row rank and the switching can be arbitrary, making the problem quite general and challenging. We propose a novel analysis framework from the viewpoint of a state transition matrix...
120,735