Title: Suicidal ideation and mental disorder detection with attentive relation networks Abstract: Mental health is a critical issue in modern society, and mental disorders can sometimes escalate into suicidal ideation without effective treatment. Early detection of mental disorders and suicidal ideation from social content provides a potential avenue for effective social intervention. However, classifying suicidal ideation and other mental disorders is challenging because they share similar patterns in language usage and sentiment polarity. This paper enhances text representation with lexicon-based sentiment scores and latent topics, and proposes using relation networks to detect suicidal ideation and mental disorders with related risk indicators. The relation module is further equipped with an attention mechanism to prioritize the more critical relational features. Through experiments on three real-world datasets, our model outperforms most of its counterparts.
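The relation-plus-attention idea in this abstract can be sketched in a few lines of numpy. This is a minimal, untrained illustration only: the projection matrix W and the attention vector a are random stand-ins for learned parameters, and the input rows stand in for the enhanced text, sentiment-score, and topic features.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_relations(features, seed=0):
    # features: one row per risk-indicator representation
    # (e.g. text embedding, sentiment scores, latent topics).
    rng = np.random.default_rng(seed)
    n, d = features.shape
    W = rng.standard_normal((2 * d, d)) / np.sqrt(2 * d)  # relation projection (random stand-in)
    a = rng.standard_normal(d)                            # attention vector (random stand-in)
    pairs = np.stack([np.concatenate([features[i], features[j]])
                      for i in range(n) for j in range(i + 1, n)])
    relations = np.tanh(pairs @ W)       # one relational feature per pair of indicators
    weights = softmax(relations @ a)     # attention prioritizes the critical relations
    return weights, weights @ relations  # attention-weighted relational summary
```

In the paper's setting the summary vector would feed a classifier over the suicide/disorder labels; here it only demonstrates the pairwise-relation and attention mechanics.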
Title: Image Quality Assessment: Unifying Structure and Texture Similarity Abstract: Objective measures of image quality generally operate by comparing pixels of a “degraded” image to those of the original. Relative to human observers, these measures are overly sensitive to resampling of texture regions (e.g., replacing one patch of grass with another). Here, we develop the first full-reference image quality model with explicit tolerance to texture resampling. Using a convolutiona...
Title: Security of quantum key distribution with detection-efficiency mismatch in the multiphoton case Abstract: Detection-efficiency mismatch is a common problem in practical quantum key distribution (QKD) systems. Current security proofs of QKD with detection-efficiency mismatch rely either on the assumption of the single-photon light source on the sender side or on the assumption of the single-photon input of the receiver side. These assumptions impose restrictions on the class of possible eavesdropping strategies. Here we present a rigorous security proof without these assumptions and, thus, solve this important problem and prove the security of QKD with detection-efficiency mismatch against general attacks (in the asymptotic regime). In particular, we adapt the decoy state method to the case of detection-efficiency mismatch.
Title: Harmonic Based Model Predictive Control for Set-Point Tracking Abstract: This article presents a novel model predictive control (MPC) formulation for set-point tracking. Stabilizing predictive controllers based on terminal ingredients may exhibit stability and feasibility issues in the event of a reference change for small to moderate prediction horizons. In the MPC for tracking formulation, these issues are solved by the addition of an artificial equilibrium point as ...
Title: On the analysis and approximation of some models of fluids over weighted spaces on convex polyhedra Abstract: We study the Stokes problem over convex polyhedral domains on weighted Sobolev spaces. The weight is assumed to belong to the Muckenhoupt class $$\varvec{A}_{\varvec{q}}$$ for $$\varvec{q} \in (1,\varvec{\infty })$$ . We show that the Stokes problem is well-posed for all $$\varvec{q}$$ . In addition, we show that the finite element Stokes projection is stable on weighted spaces. With the aid of these tools, we provide well-posedness and approximation results to some classes of non-Newtonian fluids.
Title: A Probabilistic Logic for Verifying Continuous-time Markov Chains Abstract: A continuous-time Markov chain (CTMC) execution is a continuous class of probability distributions over states. This paper proposes a probabilistic linear-time temporal logic, namely continuous-time linear logic (CLL), to reason about the probability distribution execution of CTMCs. We define the syntax of CLL on the space of probability distributions. The syntax of CLL includes multiphase timed until formulas, and the semantics of CLL allows time reset to study relative temporal properties. We derive a corresponding model-checking algorithm for CLL formulas. The correctness of the model-checking algorithm depends on Schanuel's conjecture, a central open problem in transcendental number theory. Furthermore, we provide a running example of CTMCs to illustrate our method.
Title: Recommendation system using a deep learning and graph analysis approach Abstract: When a user connects to the Internet to fulfill his needs, he often encounters a huge amount of related information. Recommender systems are techniques for massively filtering information and offering the items that users find satisfying and interesting. Advances in machine learning methods, especially deep learning, have led to great achievements in recommender systems, although these systems still suffer from challenges such as the cold-start and sparsity problems. To solve these problems, context information such as the user communication network is usually used. In this article, we propose a novel recommendation method based on matrix factorization and graph analysis methods, namely Louvain for community detection and HITS for finding the most important nodes within the trust network. In addition, we leverage deep autoencoders to initialize user and item latent factors, and the Node2vec deep embedding method gathers users' latent factors from the user trust graph. The proposed method is implemented on the Ciao and Epinions standard datasets. The experimental results and comparisons demonstrate that the proposed approach is superior to existing state-of-the-art recommendation methods, achieving great improvements: a 15.56% RMSE improvement for Epinions and an 18.41% RMSE improvement for Ciao.
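Of the graph-analysis ingredients named in this abstract, HITS is the easiest to sketch. The following pure-Python power iteration computes hub and authority scores on a directed trust graph; it illustrates the general HITS algorithm, not the authors' exact pipeline.

```python
def hits(edges, n_iter=20):
    # Power iteration for hub/authority scores on a directed edge list.
    nodes = sorted({v for e in edges for v in e})
    hub = {v: 1.0 for v in nodes}
    for _ in range(n_iter):
        # Authority: sum of hub scores over in-neighbours, then normalise.
        auth = {v: sum(hub[u] for u, w in edges if w == v) for v in nodes}
        norm = sum(x * x for x in auth.values()) ** 0.5 or 1.0
        auth = {v: x / norm for v, x in auth.items()}
        # Hub: sum of authority scores over out-neighbours, then normalise.
        hub = {u: sum(auth[w] for t, w in edges if t == u) for u in nodes}
        norm = sum(x * x for x in hub.values()) ** 0.5 or 1.0
        hub = {u: x / norm for u, x in hub.items()}
    return hub, auth
```

On toy trust edges [('a', 'c'), ('b', 'c'), ('c', 'd')], node 'c' comes out with the highest authority score, i.e. as the most trusted node.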
Title: On open books for nonorientable 3-manifolds Abstract: We show that the monodromy of Klassen's genus two open book for $P^2 \times S^1$ is the Y-homeomorphism of Lickorish, which is also known as the crosscap slide. Similarly, we show that $S^2 \tilde{\times} S^1$ admits a genus two open book whose monodromy is the crosscap transposition. Moreover, we show that each of $P^2 \times S^1$ and $S^2 \tilde{\times} S^1$ admits infinitely many isomorphic genus two open books whose monodromies are mutually nonisotopic. Furthermore, we include a simple observation about the stable equivalence classes of open books for $P^2 \times S^1$ and $S^2 \tilde{\times} S^1$. Finally, we formulate a version of Stallings' theorem about the Murasugi sum of open books, without imposing any orientability assumption on the pages.
Title: Dimension of Restricted Classes of Interval Orders Abstract: Rabinovitch showed in 1978 that the interval orders having a representation consisting of only closed unit intervals have order dimension at most $$3$$ . This article shows that the same dimension bound applies to two other classes of posets: those having a representation consisting of unit intervals (but with a mixture of open and closed intervals allowed) and those having a representation consisting of closed intervals with lengths in $$\left\{ 0,1 \right\}$$ .
Title: CONVERGENCE OF LARGE POPULATION GAMES TO MEAN FIELD GAMES WITH INTERACTION THROUGH THE CONTROLS Abstract: This work considers stochastic differential games with a large number of players, whose costs and dynamics interact through the empirical distribution of both their states and their controls. We develop a new framework to prove convergence of finite-player games to the asymptotic mean field game. Our approach is based on the concept of propagation of chaos for forward and backward weakly interacting particles which we investigate by stochastic analysis methods, and which appear to be of independent interest. These propagation of chaos arguments allow us to derive moment and concentration bounds for the convergence of Nash equilibria.
Title: On regularity of Max-CSPs and Min-CSPs Abstract: • Regular Max-CSPs have the same (up to o(1) error) approximability as Max-CSPs. • Regular Min-CSPs have the same (up to o(1) error) approximability as Min-CSPs. • Weights do not matter (up to o(1) error) for approximability of Max/Min-CSPs. • Constant-degree regular instances essentially capture the approximability of Max-CSPs.
Title: Online scheduling on a single machine with linear deteriorating processing times and delivery times Abstract: This paper considers a class of problems with linear deteriorating jobs. Jobs are released over time and become known to the online scheduler only at their release times. Jobs have deteriorating processing times or deteriorating delivery times. The objective is to minimize the time by which all jobs are delivered. We consider five different models on a single machine and present an online algorithm with optimal competitive ratio for each model.
Title: Review rating prediction framework using deep learning Abstract: Nowadays, review websites such as Amazon and Yelp allow users to post online reviews for several products, services, and businesses. Online reviews play a great role in influencing the shopping decisions made by consumers, providing them with information about and experience of product quality. Online reviews commonly comprise a free-text portion and a user star-level rating out of five. Reviews can aid rating prediction, since high star ratings tend to accompany very positive review text. However, star-level rating information is not usually available on many online review websites, because it is not possible for a given user to rate every product. On the other hand, most online reviews are written in free-text format and are therefore difficult for computer systems to understand and analyze. Identifying ratings for online reviews has lately become an important topic in machine learning. In this paper, we propose a review rating prediction framework using deep learning. The framework consists of two phases based on bidirectional gated recurrent unit (Bi-GRU) model architectures: the first phase performs polarity prediction, and the second phase predicts the review rating from the review text. Extensive experiments were conducted to evaluate the proposed framework on two real-world datasets, Amazon and Yelp. The experimental results demonstrate that the proposed framework can significantly enhance rating prediction in terms of precision, recall, F1-score, and root mean square error (RMSE) compared with baseline approaches on different datasets.
Title: Stress-strength reliability of generalized skew-elliptical distributions and its Bayes estimation Abstract: In this paper, we calculate the stress-strength probability R in the setting where the coefficient vectors and the scalar C are known and the two random vectors involved are independent with generalized skew-elliptical distributions. In particular, we derive R for scale mixtures of multivariate skew-normal distributions and the multivariate skew normal-Cauchy distribution; then, by using some matrix variate distributions, the Bayes estimation of R for the multivariate skew-normal distribution is obtained. Finally, a simulation study and a real data analysis are presented.
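As a quick illustration of what a stress-strength probability is, the sketch below estimates R = P(X > Y) by Monte Carlo for two independent normal variables. The normal case is a simplified stand-in for the generalized skew-elliptical setting of the paper, chosen so the answer is known in closed form.

```python
import random

def stress_strength_mc(n=100_000, seed=42):
    # Estimate R = P(X > Y) with X ~ N(1, 1) (strength), Y ~ N(0, 1) (stress).
    rng = random.Random(seed)
    wins = sum(rng.gauss(1.0, 1.0) > rng.gauss(0.0, 1.0) for _ in range(n))
    return wins / n
```

Since X − Y ~ N(1, 2), the exact value here is Phi(1/sqrt(2)) ≈ 0.7602, which the estimate approaches as n grows.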
Title: Selecting from among 12 alternative distributions of financial data Abstract: The special and limiting cases of skewed generalized t distribution developed by Theodossiou (1998) include 11 alternative distributions. This study proposes likelihood ratio tests to select the best distribution from among 12 alternatives. The efficacy of this approach is shown through empirical applications and two kinds of Monte Carlo simulations. Daily stock returns data from 1,045 Japanese firms offer the following evidence. Student's t distribution is selected for 42% of the firms, generalized t distribution is selected for 35% of the firms, and skewed generalized t distribution is selected for 14% of the firms. On the one hand, non-data-based simulation provides evidence that the proposed procedure performs well when individual distributions are clearly separated. On the other hand, data-based simulation shows that a very large sample (e.g., 10,000) is needed to ensure good performance of the proposed procedure.
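The mechanics of such a likelihood ratio test can be shown on the simplest nested pair. The sketch below is a toy stand-in for the skewed generalized t family, not the paper's procedure: it tests H0: mu = 0 against a free mean under unit variance, where 2·(loglik_free − loglik_H0) collapses in closed form to n·xbar², compared against the 5% chi-square(1) cutoff 3.841.

```python
def lrt_mean_zero(data, crit=3.841):
    # Likelihood ratio test of H0: mu = 0 vs. a free mean, unit variance.
    # The statistic 2*(loglik_free - loglik_H0) simplifies to n * xbar^2.
    n = len(data)
    xbar = sum(data) / n
    stat = n * xbar * xbar
    return stat, stat > crit  # (statistic, reject H0 at the 5% level?)
```

Selecting among 12 candidate distributions, as the paper does, amounts to running such tests along the nesting structure of the skewed generalized t family rather than on a single nested pair.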
Title: A multidomain virtual network embedding algorithm based on multiobjective optimization for Internet of Drones architecture in Industry 4.0 Abstract: Unmanned aerial vehicles (UAVs) have broad application prospects, especially in Industry 4.0. The development of the Internet of Drones (IoD) makes UAV operation more autonomous. Network virtualization is a promising technology to support IoD, so the allocation of virtual resources, i.e., how to rationally allocate the underlying physical resources, becomes a crucial and urgent problem in IoD. The main work of this paper is as follows: (a) In order to improve the optimization performance and reduce the computation time, we propose a multidomain virtual network embedding algorithm (MP-VNE) adopting a centralized hierarchical multidomain architecture. The proposed algorithm can avoid local optima by incorporating a genetic variation factor into the traditional particle swarm optimization process. (b) In order to simplify the multiobjective optimization problem, we transform the multiobjective problem into a single-objective problem through the weighted summation method. The results show that the proposed algorithm can rapidly converge to the optimal solution. (c) In order to reduce the mapping cost, we propose an algorithm for selecting candidate nodes based on the estimated mapping cost. Each physical domain calculates the estimated mapping cost of all nodes according to the corresponding formula and chooses the node with the lowest estimated mapping cost as the candidate node. The simulation results show that the proposed MP-VNE algorithm performs better than MC-VNM, LID-VNE, and other algorithms in terms of delay, cost, and comprehensive indicators.
Title: An approximation algorithm for the maximum spectral subgraph problem Abstract: Modifying the topology of a network to mitigate the spread of an epidemic with epidemiological constant $$\lambda $$ amounts to the NP-hard problem of finding a partial subgraph with maximum number of edges and spectral radius bounded above by $$\lambda $$ . A software-defined network capable of real-time topology reconfiguration can then use an algorithm for finding such subgraph to quickly remove spreading malware threats without deploying specific security countermeasures. In this paper, we propose a novel randomized approximation algorithm based on the relaxation and rounding framework that achieves a $$O(\log n)$$ approximation in the case of finding a subgraph with spectral radius bounded by $$\lambda \in [\log n, \lambda _1(G))$$ where $$\lambda _1(G)$$ is the spectral radius of the input graph and n is the number of nodes. We combine this algorithm with a maximum matching algorithm to obtain a $$O(\log ^2 n)$$ -approximation algorithm for all values of $$\lambda $$ . We also describe how the mathematical programming formulation we give has several advantages over previous approaches which attempted at finding a subgraph with minimum spectral radius given an edge removal budget. Finally, we show that the analysis of our randomized rounding scheme is essentially tight by relating it to classical results from random graph theory.
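The quantity being constrained in this abstract is easy to compute for any candidate subgraph. A minimal numpy check (illustrative only, not the paper's relaxation-and-rounding algorithm):

```python
import numpy as np

def spectral_radius(adj):
    # Largest absolute eigenvalue of an adjacency matrix.
    ev = np.linalg.eigvals(np.asarray(adj, dtype=float))
    return float(np.abs(ev).max())

# 4-cycle: spectral radius exactly 2; deleting any edge leaves a path
# whose radius drops below 2, so the path satisfies a bound lambda = 2.
c4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
```

A topology controller would run such a check on each candidate edge set to verify that the retained subgraph respects the epidemiological bound before deploying it.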
Title: Analysis of consensus sorting via the cycle metric Abstract: Sorting is studied in this paper as an archetypal example to explore the optimizing power of consensus. In conceptualizing the consensus sort, the classical hill-climbing method of optimization is paired with the modern notion that value and fitness can be judged by data mining. Consensus sorting is a randomized sorting algorithm which is based on randomly selecting pairs of elements within an unsorted list (expressed in this paper as a permutation), and deciding whether to swap them based on appeals to a database of other permutations. The permutations in the database are all scored via some adaptive sorting metric, and the decision to swap depends on whether the database consensus suggests a better score as a result of swapping. This uninformed search process does not require the definition of the concept of sorting, but rather depends on selecting a metric which does a good job of distinguishing a good path to the goal, a sorted list. A previous paper has shown that the ability of the algorithm to converge on the goal depends strongly on the metric which is used, and analyzed the performance of the algorithm when number of inversions was used as a metric. This paper continues by analyzing the performance of a much more efficient metric, the number of cycles in the permutation.
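The cycle metric itself, and the swap-if-the-metric-improves loop, can be sketched directly. In the sketch below the database consensus is replaced by evaluating the metric on the candidate swap, an assumption made purely for illustration; the identity permutation is the unique maximiser (n cycles), so accepted swaps drive the list toward sorted order.

```python
import random

def num_cycles(p):
    # Count cycles of a permutation of 0..n-1; the identity has n cycles.
    seen, count = [False] * len(p), 0
    for i in range(len(p)):
        if not seen[i]:
            count += 1
            j = i
            while not seen[j]:
                seen[j], j = True, p[j]
    return count

def cycle_metric_sort(p, seed=0):
    # Propose random pair swaps; keep only those that raise the cycle count.
    rng = random.Random(seed)
    p, n = list(p), len(p)
    while num_cycles(p) < n:  # n cycles <=> identity <=> sorted
        i, j = rng.sample(range(n), 2)
        before = num_cycles(p)
        p[i], p[j] = p[j], p[i]
        if num_cycles(p) <= before:  # revert swaps that do not improve
            p[i], p[j] = p[j], p[i]
    return p
```

Why this always makes progress: a transposition of two positions in the same cycle splits that cycle (metric +1), while one across two cycles merges them (metric −1), so until the list is sorted an improving swap always exists.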
Title: Deep learning-based classification model for botnet attack detection Abstract: Botnets are vectors through which hackers can seize control of multiple systems and conduct malicious activities. Researchers have proposed multiple solutions to detect and identify botnets in real time. However, these proposed solutions have difficulties in keeping pace with the rapid evolution of botnets. This paper proposes a model for detecting botnets using deep learning to identify zero-day botnet attacks in real time. The proposed model is trained and evaluated on a CTU-13 dataset with multiple neural network designs and hidden layers. Results demonstrate that the deep-learning artificial neural network model can accurately and efficiently identify botnets.
Title: Intelligent Internet of Things gateway supporting heterogeneous energy data management and processing Abstract: The demand for electrical energy, the smart grid, and the renewable energy paradigm open a new space for Electrical Energy Data Management and Processing Systems (EEDMS) that can mitigate the consumption of electrical energy. However, the implementation and maintenance of EEDMS is a challenging task. Moreover, the heterogeneous energy data generated by the residential and commercial sectors are a leading challenge for standard Internet of Things (IoT) architecture, adding enormous energy-data preprocessing and analysis demands to the IoT landscape. To overcome these challenges, we present a scalable multitasking Internet of Things Gateway (IoTGW) for the modern era of IoT that relies on a new entity called the Data Loading and Storing Module (DLSM). The DLSM, combined with gateway services such as orchestration, flexible bridging of the front-end and back-end grids, and fast formatted data exchange between the sensing and application domains, enables a highly dynamic distributed framework. Specifically, we add an AdaBoost-multilayer perceptron hybrid data classifier module to enhance the gateway's service provision toward various IoT application services and protocols, and to facilitate IoT demands such as multitasking, interoperability, classification, and fast data delivery between different modules. IoTGW is implemented and tested using a real-time IoT data streaming network. The experimental results confirm the superiority of the proposed work in terms of scalability to serve novel applications and to facilitate a broad scope of IoT.
Title: Predicting Drug-Drug Interactions Based on Integrated Similarity and Semi-Supervised Learning Abstract: A drug-drug interaction (DDI) is defined as an association between two drugs where the pharmacological effects of one drug are influenced by another drug. Positive DDIs can usually improve the therapeutic effects for patients, but negative DDIs are a major cause of adverse drug reactions and can even result in drug withdrawal from the market and patient death. Therefore, identifying DDIs has become a key component of drug development and disease treatment. In this study, we propose a novel method to predict DDIs based on integrated similarity and semi-supervised learning (DDI-IS-SL). DDI-IS-SL integrates drug chemical, biological, and phenotype data to calculate the feature similarity of drugs with the cosine similarity method. The Gaussian Interaction Profile kernel similarity of drugs is also calculated based on known DDIs. A semi-supervised learning method (the Regularized Least Squares classifier) is used to calculate the interaction possibility scores of drug-drug pairs. In terms of 5-fold cross validation, 10-fold cross validation, and de novo drug validation, DDI-IS-SL achieves better prediction performance than other comparative methods. In addition, the average computation time of DDI-IS-SL is shorter than that of the other comparative methods. Finally, case studies further demonstrate the performance of DDI-IS-SL in practical applications.
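One ingredient of this pipeline, the Gaussian Interaction Profile kernel, is compact enough to sketch. The bandwidth choice gamma = 1 / (mean squared profile norm) follows the usual GIP convention; treat the snippet as an illustration of the kernel only, not of the full DDI-IS-SL method.

```python
import numpy as np

def gip_kernel(interactions):
    # interactions: binary matrix, row i = known-interaction profile of drug i.
    # K[i, j] = exp(-gamma * ||y_i - y_j||^2), gamma = 1 / mean(||y_i||^2).
    Y = np.asarray(interactions, dtype=float)
    sq = (Y ** 2).sum(axis=1)
    gamma = 1.0 / sq.mean()
    d2 = sq[:, None] + sq[None, :] - 2.0 * Y @ Y.T  # pairwise squared distances
    return np.exp(-gamma * d2)
```

Drugs with identical interaction profiles get kernel similarity 1, and similarity decays smoothly as profiles diverge; in the paper this kernel is combined with the cosine feature similarities before the Regularized Least Squares step.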
Title: False-Data-Injection Attacks on Remote Distributed Consensus Estimation Abstract: This article studies a security issue in remote distributed consensus estimation where sensors transmit their measurements to remote estimators via a wireless communication network. The relative entropy is utilized as a stealthiness metric to detect whether the data transmitted through the wireless network are attacked. The performance degradation induced by an attacker that attempts to be stealth...
Title: Real-Time 3-D Semantic Scene Parsing With LiDAR Sensors Abstract: This article proposes a novel deep-learning framework, called RSSP, for real-time 3-D scene understanding with LiDAR sensors. To this end, we introduce new sparse strided operations based on the sparse tensor representation of point clouds. Compared with conventional convolution operations, the time and space complexity of our sparse strided operations are proportional to the number of occupied vo...
Title: A novel authentication and authorization scheme in P2P networking using location-based privacy Abstract: In recent years, peer-to-peer (P2P) networks have gained popularity in file sharing because of their distributed and decentralized architecture. As there is no centralized authority, various attacks arise, which make the network insecure; the security issues of P2P networks therefore need to be considered with more care. This paper proposes an authentication and authorization approach, named fuzzy enabled advanced encryption standard (AES)-based multi-level authentication and authorization, to offer security against various kinds of attacks that occur in P2P networks. Here, the authentication is carried out with security factors such as the location profile, a one-time password, spatial information, a session password, and a hashing function. Initially, the user and the server are registered in the authentication process; then, hashing functions and AES are used to perform the multi-level authorization and authentication processes. Thus, the proposed scheme improves the security of the P2P network. Using the proposed system, the hit ratio obtained is 0.9, and the success rate is 0.7666.
Title: The generative adversarial networks and its application in machine vision Abstract: In recent years, improved GAN models have been widely applied in the field of machine vision, covering not only traditional image processing but also image conversion, image synthesis, and so on. Firstly, this paper describes the basic principles and existing problems of GAN; it then introduces several improved GAN models, including Info-GAN, DC-GAN, f-GAN, Cat-GAN, and others. Secondly, several improved GAN models for different applications in the field of machine vision are described. Finally, the future trend and development of GAN are discussed.
Title: A multilevel paradigm for deep convolutional neural network features selection with an application to human gait recognition Abstract: Human gait recognition (HGR) is highly important in the area of video surveillance due to remote access and security threats. HGR is a technique commonly used for the identification of human style in daily life. However, many typical situations, such as changes of clothing and variation in view angles, degrade system performance. Lately, different machine learning (ML) techniques have been introduced for video surveillance, which give promising results; among them, deep learning (DL) shows the best performance in complex scenarios. In this article, an integrated framework is proposed for HGR using a deep neural network and a fuzzy entropy controlled skewness (FEcS) approach. The proposed technique works in two phases: in the first phase, deep convolutional neural network (DCNN) features are extracted by pre-trained CNN models (VGG19 and AlexNet) and their information is combined by a parallel fusion approach. In the second phase, entropy and skewness vectors are calculated from the fused feature vector (FV) to select the best subsets of features by the suggested FEcS approach. The best subsets of selected features are finally fed to multiple classifiers, and the best classifier is chosen on the basis of its accuracy. The experiments were carried out on four well-known datasets, namely AVAMVG gait and CASIA A, B, and C. The achieved accuracy on each dataset was 99.8, 99.7, 93.3, and 92.2%, respectively. The obtained overall recognition results therefore lead to the conclusion that the proposed system is very promising.
Title: Efficient Domination in Cayley Graphs of Generalized Dihedral Groups Abstract: An independent subset D of the vertex set V of the graph $\Gamma$ is an efficient dominating set for $\Gamma$ if each vertex $v \in V \setminus D$ has precisely one neighbour in D. In this article, we classify the connected cubic Cayley graphs on generalized dihedral groups which admit an efficient dominating set.
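The defining condition is directly checkable on any finite graph. A small illustration on the 6-cycle, where {0, 3} is an efficient dominating set (this only demonstrates the definition, not the classification the paper proves):

```python
def is_efficient_dominating_set(adj, D):
    # D must be independent, and every vertex outside D must have
    # exactly one neighbour in D.
    D = set(D)
    if any(v in D for u in D for v in adj[u]):
        return False  # not independent
    return all(sum(v in D for v in adj[u]) == 1
               for u in adj if u not in D)

# 6-cycle: vertex i is adjacent to (i - 1) mod 6 and (i + 1) mod 6.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
```

For instance, {0, 3} dominates every other vertex of the 6-cycle exactly once, while {0, 1} fails independence and {0} leaves vertices 2, 3, 4 undominated.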
Title: Penalized and ridge-type shrinkage estimators in Poisson regression model Abstract: The paper considers the problem of estimating the regression coefficients in a Poisson regression model under a multicollinearity situation. We propose a non-penalty Stein-type shrinkage ridge estimation approach for the case where it is conjectured that some prior information is available in the form of potential linear restrictions on the coefficients. We establish the asymptotic distributional biases and risks of the proposed estimators and investigate their relative performance with respect to the unrestricted ridge estimator. For comparison's sake, we consider two penalty estimators, namely the least absolute shrinkage and selection operator and the Elastic-Net estimator, and compare their performance numerically with the other listed estimators. A Monte Carlo simulation experiment is conducted to evaluate the performance of each estimator in terms of simulated relative efficiency. The results show that the shrinkage ridge estimators perform better than the penalty estimators in certain parts of the parameter space. Finally, a real data example is presented to illustrate the proposed methods.
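As background for the ridge ingredient, the sketch below fits a ridge-penalised Poisson regression by plain gradient descent on the penalised negative log-likelihood. This is a deliberate simplification: it shows ordinary ridge shrinkage only, not the Stein-type shrinkage or restricted estimation the paper studies, and the step size and iteration count are illustrative choices.

```python
import numpy as np

def poisson_ridge(X, y, lam=0.1, lr=0.01, steps=5000):
    # Minimise sum(exp(X @ b) - y * (X @ b)) + lam * ||b||^2 by gradient descent.
    X, y = np.asarray(X, float), np.asarray(y, float)
    b = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (np.exp(X @ b) - y) + 2.0 * lam * b
        b -= lr * grad
    return b
```

Increasing lam pulls the coefficients toward zero, which is the mechanism ridge-type estimators use to stabilise the fit under multicollinearity.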
Title: Internet of Things (IoT) and Agricultural Unmanned Aerial Vehicles (UAVs) in smart farming: A comprehensive review Abstract: Internet of Things (IoT) and Unmanned Aerial Vehicles (UAVs) are two hot technologies utilized in cultivation fields, which transform traditional farming practices into a new era of precision agriculture. In this paper, we survey the latest research on IoT and UAV technology applied in agriculture. We describe the main principles of IoT technology, including intelligent sensors, IoT sensor types, and the networks and protocols used in agriculture, as well as IoT applications and solutions in smart farming. Moreover, we present the role of UAV technology in smart agriculture by analyzing the applications of UAVs in various scenarios, including irrigation, fertilization, use of pesticides, weed management, plant growth monitoring, crop disease management, and field-level phenotyping. Furthermore, the utilization of UAV systems in complex agricultural environments is also analyzed. Our conclusion is that IoT and UAV are two of the most important technologies that transform traditional cultivation practices into a new perspective of intelligence in precision agriculture.
Title: Chatbot design issues: building intelligence with the Cartesian paradigm. Abstract: The article discusses the functioning of human-like consciousness and the potential for developing a chatbot based on human-like consciousness. The proposed approach was verified experimentally using a sociological method and a cohort of student volunteers. The chatbot population was created on the back of our complex neural network architecture design. The volunteers were asked to identify their interlocutor, which was either a human agent or a chatbot. For integrity, the conversations between bots and people were organized randomly, so that each volunteer could interact several times with all bots in the population and with all participants in the sample. The article discusses the results of the study and the details of the proposed approach. It explains the features of the functioning and self-reconfiguration of the neural network that provide high reliability of chatbot replies and response speeds fast enough that the delay time does not raise the suspicion of human users. The main idea of the authors' approach is an attempt to model human self-awareness and self-reflection. The results prove the proposed neural network architecture design successful in terms of real-time self-learning.
Title: PAPR reduction and spectrum sensing in MIMO systems with optimized model Abstract: Cognitive radio is a trending domain that provides a strong solution to spectrum scarcity issues. Many cognitive radio standards suffer from a high peak to average power ratio (PAPR), which may distort the transmitted signal. This paper proposes a technique for spectrum sensing based on optimization-enabled PAPR reduction using a hybrid Gaussian mixture model (GMM). Eigen statistics, energy, and the PAPR reduction block are adapted by the hybrid mixture model for predicting the availability of spectrum. In order to model the network with PAPR, a newly designed optimization algorithm, namely elephant-sunflower optimization (ESO), is adopted. The proposed ESO technique is a combination of elephant herd optimization and sunflower optimization. The GMM is driven by Eigen statistics and energy along with PAPR, and is tuned with an optimization algorithm, namely whale elephant-herd optimization. The PAPR is reduced by optimally adjusting the parameters using the proposed ESO. Channel availability is evaluated by providing energy, Eigen statistics, and PAPR as input. The effectiveness of the proposed ESO is illustrated by a maximal probability of detection of 1.00, a minimal PAPR of 7.534, and a minimal bit error rate of 0.000.
Title: Mining classroom observation data for understanding teacher's teaching modes Abstract: The study of teacher development and teaching interaction in physical classrooms has presented research problems, but with the development of classroom observation and video analysis, quantitative and visual analysis of the teaching process has been realized. In order to better understand the teaching modes of teachers, this study is devoted to mining classroom observation data. First, based on content analysis, a Dynamic Network Analysis of Classroom Teaching Elements (CTE-DNA) framework has been developed, in which classroom observation data are divided into three dimensions: teaching behavior, instructional media, and technological pedagogical content knowledge (TPACK). Second, using the method of social network analysis, teachers' teaching modes are analyzed by measuring the adjacency matrix and relative centrality. Last, the key nodes of the network as well as the teaching modes are visualized. Based on CTE-DNA, classroom observation data can be used for evaluating a teacher's teaching behaviors and performance, and can be beneficial to a teacher's professional development.
97,204
Title: Incorporating environmental and social considerations into the portfolio optimization process Abstract: Over the last years, more and more companies have faced increased public pressure to provide information on how they perform on environmental, social and governance (ESG) issues. However, so far very few studies have investigated optimal ways to construct socially responsible portfolios, either in the sense of the screening criteria used to narrow the investment universe, or the optimization process employed to determine the asset proportions. This study covers this gap by introducing an algorithm that first performs a screening to eliminate from the investment universe stocks that do not respect the imposed ESG constraint, and then performs the portfolio optimization on the ESG-compliant universe. The novelty of the proposed approach lies in the fact that all underlying functionality of the algorithm, including the screening procedure and the imposed constraints, is facilitated seamlessly through a novel solution representation. Three multiobjective evolutionary algorithms have been adapted to work well with the proposed solution representation and the imposed constraints. Utilizing data from the FTSE-100 corporate social responsibility index, the study finds that investors concerned about the environmental and social impact of their investments must be ready to sacrifice part of their welfare by selecting combinations of assets that provide subordinate return and risk combinations compared to the available investment opportunities.
97,303
Title: Determination of model fitting with power-divergence-type measure of departure from symmetry for sparse and non-sparse square contingency tables Abstract: In this study, we propose a gradation for assessing model fit under the symmetry model by using the departure measure. An extensive simulation study under different parameter settings and scenarios is conducted to introduce a gradation on the unit interval. The proposed gradation provides an opportunity to assess model fit under the symmetry model when the chi-squared assumption does not hold for small sample sizes. The usefulness of the proposed gradation is demonstrated empirically using four applications to real data sets.
97,315
Title: A novel zero watermark optimization algorithm based on Gabor transform and discrete cosine transform Abstract: A zero watermark makes no modification to the carrier image, is inherently invisible, and completely resolves the mutual constraint between robustness and invisibility in traditional digital watermarking technology. To improve the robustness of the zero watermark, a zero watermark algorithm based on the Gabor transform and the discrete cosine transform (DCT) is proposed in this paper. The algorithm performs a Gabor transformation on the carrier image to obtain the directional characteristics of the image. It then applies the DCT, partitions the result into blocks, and performs singular value decomposition (SVD) on each subblock. The feature matrix is constructed using the largest singular value of each subblock and then XORed with the encrypted watermark to form the zero watermark. The algorithm exploits the advantages of the Gabor transform in adjusting scale and direction and selects optimized parameters to extract image features. At the same time, the information concentration and decorrelation properties of the DCT and SVD are utilized to improve the robustness and anti-attack capability of the algorithm. The experimental results show that the proposed algorithm is robust to filtering, JPEG compression, noise, shearing, rotation, and other attacks, and is especially effective against shearing and rotation attacks.
97,446
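The XOR step that makes the scheme a zero watermark can be illustrated in isolation. In this hedged sketch, `xor_bits` and the 4x4 matrices are made up for illustration; in the paper the binary feature matrix would come from the Gabor/DCT/per-block-SVD pipeline:

```python
def xor_bits(a, b):
    """Element-wise XOR of two equally sized binary matrices."""
    return [[x ^ y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

# Hypothetical binary feature matrix derived from the carrier image.
features = [
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
]
watermark = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 0, 1, 1],
]

# Registration: the zero watermark is stored; the carrier image is untouched.
zero_watermark = xor_bits(features, watermark)

# Verification: XOR the zero watermark with features re-extracted from the
# (possibly attacked) image; identical features recover the watermark exactly.
recovered = xor_bits(zero_watermark, features)
assert recovered == watermark
```

Because XOR is its own inverse, robustness of the whole scheme reduces to how stable the extracted features are under attacks, which is where the Gabor/DCT/SVD choices matter.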
Title: Sea lion with enhanced exploration phase for optimization of polynomial fitness with SEM in lean technology Abstract: With the establishment of lean manufacturing, a myriad of industries have implemented lean manufacturing principles and guidelines, training professionals on specific policies to achieve continuous improvements in productivity and waste reduction. However, complexities such as variability, sustainability, multi-dimensional views, and factory size, to name just a few, negatively influence the performance of deploying lean manufacturing in industries. It is therefore very important for companies to recognize and understand the critical success factors for successfully implementing lean manufacturing. Hence, this paper develops a model for the analysis of lean manufacturing to find the most important technology-related factor among the industries. With this intention, the analysis proceeds in three phases. In the first phase, a prepared questionnaire containing all the mandatory questions is distributed to professionals in various companies, who are asked to provide information as precisely as possible. In the second phase, the responses from the concerned practitioners associated with the industries are analyzed. Here, the analysis is carried out with structural equation modeling approaches supported by higher-order statistical analysis, using input factors such as lean awareness, lean technology, organizational support, organizational performance, employee involvement, and management commitment among the industries, attaining better maximum-likelihood values for the questionnaires. In the third phase, polynomial fitting of the response (i.e., the objective function) is predicted with the aid of a novel optimization algorithm, the SLnO-EE model (Sea Lion with Enhanced Exploration phase), an extended version of SLnO (Sea Lion Optimization). Finally, the proposed SLnO-EE model is evaluated against the traditional SLnO model on several performance measures to demonstrate the improvement in prediction accuracy.
97,550
Title: A systematic mapping study for blockchain based on complex network Abstract: Blockchain has started to appear as a potentially reliable underlying technology for various fields. There have been many surveys on blockchain with respect to specific topics, such as security, architecture, and applications. However, a systematic mapping study covering all fields related to blockchain has been largely missing. In this article, we revisit the problem of complex networks in the form of scientific collaboration networks. More specifically, we apply the method of systematic mapping to blockchain technology. We collect 233 articles by searching Baidu Scholar with the keyword "blockchain," then construct two complex networks according to the relationships between keywords and between authors, respectively. The keywords' complex network is a small-world network while the authors' complex network is not. Furthermore, the tool NetDraw provides a visualized graph for each complex network. Meanwhile, we find some subgroups in the network, which may highlight future directions for blockchain.
97,573
Title: Tighter price of anarchy for selfish task allocation on selfish machines Abstract: Consider a set $L = \{J_1, J_2, \ldots, J_n\}$ of $n$ tasks and a set $M = \{M_1, M_2, \ldots, M_m\}$ of $m$ identical machines, in which tasks and machines are possessed by different selfish clients. Each selfish client of a machine $M_i \in M$ gets a profit equal to its load, and each selfish client of a task allocated to $M_i$ suffers a cost equal to the load of $M_i$. Our aim is to allocate the tasks on the $m$ machines so as to minimize the maximum completion time of the tasks on each machine. A stable allocation is referred to as a dual equilibrium (DE). We first show that $4/3$ is a tight upper bound on the Price of Anarchy (PoA) with respect to dual equilibria for $m \in \{3, \ldots, 9\}$, and second that $(7m-6)/(5m-3)$ is an upper bound for $m \ge 10$. The latter result improves on the existing bound of $7/5$.
97,632
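A quick check (not part of the paper's proof) confirms that the new bound is strictly below 7/5 for every finite m >= 10 and converges to it as m grows:

```latex
\frac{7m-6}{5m-3} \;=\; \frac{7}{5} - \frac{9/5}{5m-3} \;<\; \frac{7}{5},
\qquad
\frac{7\cdot 10 - 6}{5\cdot 10 - 3} \;=\; \frac{64}{47} \;\approx\; 1.3617,
\qquad
\lim_{m\to\infty}\frac{7m-6}{5m-3} \;=\; \frac{7}{5} = 1.4 .
```

So the improvement over 7/5 is largest at m = 10 and vanishes asymptotically.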
Title: Multi-dimensional stability analysis for Analytic Network Process models Abstract: The decision-making process involves, most of the time, working with limited and/or incomplete information to obtain a one-time synthesis of one’s preferences. However, additional information might be acquired by the decision-maker at a later time. To identify the impact that the additional information could have on the initial result, we developed a new multi-dimensional stability analysis that provides a comprehensive image of how preferences change or evolve, as perturbations are applied to the criteria weights, without reiterating the preference elicitation process. The insights provided by the new method helped define a set of stability measures useful in a practical setting. To show the direct implementation of the findings of the multi-dimensional stability analysis, we applied the method developed to a randomly generated ANP model.
97,643
Title: Standardized maximin criterion for discrimination and parameter estimation of nested models Abstract: A new criterion for approximate designs, called the standardized maximin criterion, suited for both model discrimination and parameter estimation and based on the D- and D_s-optimality criteria, is introduced and studied. It is proved that the computation of an experimental design which is optimal with respect to this criterion can be reduced to the computation of multiple experimental designs which are optimal with respect to a simpler weighted criterion. Several numerical examples that describe the efficiency of the proposed criterion are provided.
97,705
Title: Optimal confidence interval for the largest exponential location parameter Abstract: A single sampling procedure is proposed for obtaining an optimal confidence interval for the largest location parameter of k independent exponential populations in the cases of known and unknown common scale parameter, where "optimal" is defined in the sense of minimal expected interval width and best allocation at a fixed confidence level. The optimal result can be applied to type II censored samples. Tables of percentage points are provided for practitioners.
97,788
Title: Lowering the Cramer-Rao lower bounds of variance in randomized response sampling Abstract: In this paper, we lower the Cramer-Rao lower bound of variance due to Singh and Sedory (2011, 2012) for the Odumade and Singh (2009) model by proposing a randomized response model that is more efficient. We investigate the properties of the proposed model under various situations with respect to protection and efficiency. The adjustment makes use of the known proportions of unrelated characteristics; the situation where these proportions are unknown is also discussed. In addition, we wrote SAS codes to simulate various data sets in order to compute the simulated relative efficiency values.
97,924
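For background, the estimator structure in this line of work can be illustrated with the classic Warner (1965) randomized response model, which is simpler than the Odumade-Singh model the paper improves upon. `warner_estimate` and the simulation below are a generic illustration, not the proposed model:

```python
import random

def warner_estimate(responses, p):
    """Warner (1965) estimator of a sensitive proportion pi from randomized
    responses, with design probability p != 1/2: each respondent answers the
    direct question with probability p and its negation otherwise, so
    P(yes) = (2p - 1) * pi + (1 - p)."""
    lam = sum(responses) / len(responses)   # observed proportion of "yes"
    return (lam - (1 - p)) / (2 * p - 1)

random.seed(7)
true_pi, p, n = 0.30, 0.75, 200_000
responses = []
for _ in range(n):
    has_trait = random.random() < true_pi
    asked_direct = random.random() < p      # outcome of the randomizing device
    responses.append(int(has_trait if asked_direct else not has_trait))

est = warner_estimate(responses, p)
assert abs(est - true_pi) < 0.01            # consistent for large n
```

The randomizing device protects each respondent (a "yes" never reveals the trait with certainty), and the efficiency-versus-protection trade-off driven by p is exactly the kind of property the paper analyzes for its more elaborate model.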
Title: A strategy-based framework for supplier selection: a grey PCA-DEA approach Abstract: Supplier selection is one of the key competencies in the sourcing function. Considering the important role of suppliers in the strategy framework of supply chains, it is surprising that the sourcing function has not been subject to more focused research on the development of adequate decision support tools. The relatively simplified ranking systems that have often been presented on an ad hoc basis offer only partial information for the decision. This research attempts to develop a unified and integrated structure for supplier selection practices across a supply chain on the basis of strategic planning. Our evaluation is conducted by means of multi-attribute efficiency analysis models and a multivariate statistical method, a so-called principal component analysis-data envelopment analysis (PCA-DEA) approach, to support supplier relationship management under uncertainty. The main contribution of this paper is to address the gap in the supply chain management (SCM) literature by proposing a strategy-based method for supplier selection problems when data are interrelated and interdependent. The proposed method is applied to a real-world case study in the agri-food industry to demonstrate the advantages and applicability of the proposed framework.
97,963
Title: An O(mn^2) Algorithm for Computing the Strong Geodetic Number in Outerplanar Graphs Abstract: Let G = (V(G), E(G)) be a graph and S be a subset of vertices of G. Let us denote by gamma[u, v] a geodesic between u and v. Let Gamma(S) = {gamma[v_i, v_j] | v_i, v_j in S} be a set of exactly |S|(|S| - 1)/2 geodesics, one for each pair of distinct vertices in S. Let V(Gamma(S)) = Union_{gamma[x, y] in Gamma(S)} V(gamma[x, y]) be the set of all vertices contained in the geodesics of Gamma(S). If V(Gamma(S)) = V(G) for some Gamma(S), then we say that S is a strong geodetic set of G. The cardinality of a minimum strong geodetic set of a graph G is the strong geodetic number of G. It is known to be NP-hard to determine the strong geodetic number of a general graph. In this paper we show that the strong geodetic number of an outerplanar graph can be computed in polynomial time.
97,964
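The definition above can be turned directly into a brute-force computation for very small graphs (exponential time; the paper's polynomial outerplanar algorithm is not implemented here). The helper names are illustrative:

```python
from collections import deque
from itertools import combinations, product

def geodesics(adj, u, v):
    """Enumerate all shortest u-v paths in an unweighted graph (adjacency dict)."""
    dist = {u: 0}
    queue = deque([u])
    while queue:                             # BFS distances from u
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    paths = []
    def back(path):                          # walk back from v toward u
        x = path[-1]
        if x == u:
            paths.append(path[::-1])
            return
        for y in adj[x]:
            if dist.get(y) == dist[x] - 1:
                back(path + [y])
    back([v])
    return paths

def strong_geodetic_number(adj):
    """Smallest |S| such that choosing ONE geodesic per pair of S covers V(G)."""
    vertices = set(adj)
    for k in range(1, len(vertices) + 1):
        for S in combinations(sorted(vertices), k):
            choices = [geodesics(adj, a, b) for a, b in combinations(S, 2)]
            for pick in product(*choices):   # one geodesic per pair
                if set(S).union(*map(set, pick)) == vertices:
                    return k
    return len(vertices)

path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}          # P4: 0-1-2-3
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # C4
assert strong_geodetic_number(path4) == 2
assert strong_geodetic_number(cycle4) == 3
```

On the 4-cycle the answer is 3 rather than 2: the single geodesic between two opposite vertices covers only three of the four vertices, which is exactly the "one chosen geodesic per pair" constraint that distinguishes strong geodetic sets from geodetic sets.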
Title: Deployment of edge servers in 5G cellular networks Abstract: With the rapid development of Internet of Things technology and interactive applications, the number of terminal devices in the network is increasing, and the development of interactive applications is hindered by network delay. To meet the network delay, bandwidth, and workload requirements of the new era, edge computing came into being. Edge computing aims to provide computing, storage, communication, and other services at the edge of the network by sinking cloud services from the core network to the network edge. Current research pays little attention to the impact of edge server location on system performance, yet edge server deployment is one of the key technologies for mobile edge computing. Therefore, we take a 5G macrocellular/microcellular cluster as the edge server deployment scenario, propose an equivalent-bandwidth-based deployment strategy, establish a mathematical model for edge server deployment, and construct a task experience function as an evaluation index based on two aspects: task time and energy overhead. Analysis of the experimental results verifies that the deployment strategy based on equivalent bandwidth is superior to other deployment strategies in terms of terminal device task overhead.
97,994
Title: Aggregation of the nearest consistency matrices with the acceptable consensus in AHP-GDM Abstract: Analytic hierarchy process (AHP) is widely used in group decision making (GDM). There are two traditional aggregation methods for the collective preference in AHP-GDM: aggregation of the individual judgments (AIJ) and aggregation of the individual priorities (AIP). However, AHP-GDM is sometimes less reliable under AIJ and AIP alone because of the consensus and consistency of the individual pair-wise comparison matrices (PCMs) and the prioritization methods. In this paper, we propose aggregation of the nearest consistent matrices (ANCM) with acceptable consensus in AHP-GDM, simultaneously considering the consensus and consistency of the individual PCMs. ANCM is independent of prioritization methods while complying with the Pareto principle of social choice theory. Moreover, ANCM is easy to program and implement for resolving highly complex group decision-making problems. Finally, two numerical examples illustrate the applications and advantages of the proposed ANCM.
98,146
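For context, the AIJ baseline the abstract contrasts with can be sketched in a few lines: individual pairwise comparison matrices are aggregated element-wise by geometric mean (which preserves reciprocity), and priorities are then derived, here with the row geometric mean method for simplicity. This is a generic AHP-GDM sketch with made-up judgments, not the proposed ANCM:

```python
import math

def aggregate_aij(pcms):
    """Aggregate individual PCMs element-wise by the geometric mean (AIJ)."""
    m, n = len(pcms), len(pcms[0])
    return [[math.prod(p[i][j] for p in pcms) ** (1 / m) for j in range(n)]
            for i in range(n)]

def row_gm_priorities(pcm):
    """Row geometric mean prioritization, normalized to sum to 1."""
    gm = [math.prod(row) ** (1 / len(pcm)) for row in pcm]
    s = sum(gm)
    return [g / s for g in gm]

# Two hypothetical experts' reciprocal judgment matrices over 3 criteria.
expert1 = [[1, 2, 4], [1 / 2, 1, 2], [1 / 4, 1 / 2, 1]]
expert2 = [[1, 4, 4], [1 / 4, 1, 1], [1 / 4, 1, 1]]

group = aggregate_aij([expert1, expert2])
weights = row_gm_priorities(group)

assert abs(sum(weights) - 1) < 1e-12
# Geometric-mean aggregation preserves reciprocity: a_ji = 1 / a_ij.
assert all(abs(group[j][i] - 1 / group[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```

What AIJ alone does not guarantee, and what motivates ANCM, is that the aggregated matrix is acceptably consistent and reflects group consensus.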
Title: Use of BIM technology and impact on productivity in construction project management Abstract: Building information modelling (BIM) is a digital representation of the physical and functional characteristics of a facility, a technology that connects project information databases across all disciplines. The use of BIM technology represents one of the most progressive approaches in construction project management. Construction project management is a difficult process that depends on many factors, among them human resources, and project results depend on the productivity of those human resources. Open questions remain about the productivity of employees and managers: employee productivity depends on many processes and factors, and progressive technology can be one of them. BIM technology is likely an effective tool for raising productivity. This research discusses the use of BIM technology and analyses the effect of BIM on productivity in construction project management. The main aim of the research was to analyze the use of BIM technology in the construction industry and its effect on productivity.
98,181
Title: Sentiment Classification Using Two Effective Optimization Methods Derived From The Artificial Bee Colony Optimization And Imperialist Competitive Algorithm Abstract: Artificial bee colony (ABC) optimization and the imperialist competitive algorithm (ICA) are two well-known metaheuristic methods. In ABC, exploration is strong because each bee moves toward random neighbors in the first and second phases, but exploitation is poor because the algorithm does not carefully examine promising regions of the search space for good local minima. In this study, ICA is used to improve ABC's exploitation, and two novel swarm-based hybrid methods, called ABC-ICA and ABC-ICA1, are proposed that combine the characteristics of ABC and ICA. The proposed methods improve evaluation results in both continuous and discrete environments compared to the baseline methods, with the second method also improving on the first. Feature selection can be considered an optimization problem: selecting an appropriate feature subset strongly influences the efficiency of classifier algorithms in supervised methods, which makes feature selection a key issue. This study introduces different discrete versions of the proposed methods that can be applied to feature selection and feature scoring problems, where they have proved successful in evaluations. It also identifies a problem called cold start and presents a solution that greatly affects the efficiency of the proposed methods on the feature scoring problem. A total of 16 UCI data sets and 2 Amazon data sets are used to evaluate the proposed methods on the feature selection problem, comparing classification accuracy and the number of features required for classification. The proposed methods can also be used to create a proper sentiment dictionary. Evaluation results confirm the better performance of the proposed methods in most experiments.
98,230
Title: The Impact of Surface Features on Choice of (in)Secure Answers by Stackoverflow Readers Abstract: Existing research has shown that developers will use StackOverflow to answer programming questions: but what draws them to one particular answer over any other? The choice of answer they select can mean the difference between a secure application and an insecure one, as the quality of supposedly secure answers can vary. Prior work has studied people posting on Stack Overflow—a two-way communication between the original poster and the Stack Overflow community. Instead, we study the situation of one-way communication, where people only read a Stack Overflow thread without being actively involved in it, sometimes long after a thread has closed. We report on a mixed-method study including a controlled between-groups experiment and qualitative analysis of participants' rationale (N=1188), investigating whether explanation detail, answer scoring, accepted answer marks, as well as the security of the code snippet itself affect the answers participants accept. Our findings indicate that explanation detail affects what answers participants reading a thread select (p<0.01), while answer score and acceptance do not (p>0.05)—the inverse of what research has shown for those asking and answering questions. The qualitative analysis of participants' rationale further explains how several cognitive biases underpin these findings. Correspondence bias, in particular, plays an important role in instilling readers with a false sense of confidence in an answer through the way it looks, regardless of whether it works, is secure, or if the community agrees with it.
As a result, we argue that StackOverflow's use as a knowledge base by people not actively involved in threads—when there is only one-way-communication—may inadvertently contribute to the spread of insecure code, as the community's voting mechanisms hold little power to deter them from answers.
98,410
Title: Supply planning for shelters and emergency management crews Abstract: This paper addresses the problem of supplying provisions to civilians affected by an emergency and to the intervention groups that provide post-emergency relief. We define the Emergency Supply using Heterogeneous Fleet Problem (ESHFP) using a Mixed Integer Linear Programming mathematical model that describes the complexities involved in these operations. Furthermore, we propose a novel heuristic algorithm which constructs a plan comprising a set of efficient vehicle routes in order to minimize the total supply time, respecting constraints concerning timing, demand, capacity and supply. The characteristics of the problem have been studied by solving an extensive set of test cases. The efficiency and practicality of the algorithm have been tested by applying it to a large-scale ESHFP instance and to a case study involving a forest fire in the Province of Teruel, Spain.
98,572
Title: Comparison of non-parametric tests of ordered alternatives for repeated measures in randomized blocks Abstract: The Page, Jonckheere, and Hollander tests are the most common non-parametric tests for ordered alternatives in randomized blocks that contain independent observations. However, when randomized blocks include correlated observations, the null-distribution properties of these tests under independence no longer apply. Circular bootstrap versions of these tests are suggested for testing the ordered alternative hypothesis in randomized blocks that include correlated measurements when the underlying distributions are unknown. In this study, the effects of the correlation coefficient, the number of treatments/time points, and the block sizes on the type I error and power values of some existing non-parametric test statistics based on the circular bootstrap method are examined under a stationary autoregressive process. The significance levels and power values of these tests based on the circular bootstrap method are simulated using the statistical software package R 3.4.2. Based on the findings of the simulation study, the circular bootstrap version of the original Page test provides the best power values among the trend statistics. Finally, these circular-bootstrap-based non-parametric tests are applied to a real-life dataset.
98,610
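Page's L statistic, the core of the best-performing test above, is straightforward to compute from within-block ranks; the circular bootstrap part (resampling runs of consecutive observations to preserve autocorrelation) is omitted in this generic sketch:

```python
def ranks(values):
    """1-based within-block ranks, with mid-ranks for ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1   # average rank of the tie group
        i = j + 1
    return r

def page_L(blocks):
    """Page's trend statistic L = sum_j j * R_j, where R_j is the rank sum of
    the j-th treatment (in hypothesized order) across all blocks."""
    k = len(blocks[0])
    rank_sums = [0.0] * k
    for block in blocks:
        for j, rank in enumerate(ranks(block)):
            rank_sums[j] += rank
    return sum((j + 1) * rank_sums[j] for j in range(k))

# Two blocks that already follow the hypothesized increasing order (k = 3):
data = [[1.2, 3.4, 5.6], [0.5, 2.2, 9.9]]
assert page_L(data) == 28.0   # maximum possible: n * (1^2 + 2^2 + 3^2) = 2 * 14
```

Large L supports the ordered alternative; the bootstrap versions replace the usual null distribution of L, which is invalid under within-block correlation.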
Title: Stock market prediction using machine learning classifiers and social media, news Abstract: Accurate stock market prediction is of great interest to investors; however, stock markets are driven by volatile factors such as microblogs and news that make it hard to predict stock market index based on merely the historical data. The enormous stock market volatility emphasizes the need to effectively assess the role of external factors in stock prediction. Stock markets can be predicted using machine learning algorithms on information contained in social media and financial news, as this data can change investors’ behavior. In this paper, we use algorithms on social media and financial news data to discover the impact of this data on stock market prediction accuracy for ten subsequent days. For improving performance and quality of predictions, feature selection and spam tweets reduction are performed on the data sets. Moreover, we perform experiments to find such stock markets that are difficult to predict and those that are more influenced by social media and financial news. We compare results of different algorithms to find a consistent classifier. Finally, for achieving maximum prediction accuracy, deep learning is used and some classifiers are ensembled. Our experimental results show that highest prediction accuracies of 80.53% and 75.16% are achieved using social media and financial news, respectively. We also show that New York and Red Hat stock markets are hard to predict, New York and IBM stocks are more influenced by social media, while London and Microsoft stocks by financial news. Random forest classifier is found to be consistent and highest accuracy of 83.22% is achieved by its ensemble.
98,685
Title: Using Metrics Suites to Improve the Measurement of Privacy in Graphs Abstract: Social graphs are widely used in research (e.g., epidemiology) and business (e.g., recommender systems). However, sharing these graphs poses privacy risks because they contain sensitive information about individuals. Graph anonymization techniques aim to protect individual users in a graph, while graph de-anonymization aims to re-identify users. The effectiveness of anonymization and de-anonymization algorithms is usually evaluated with privacy metrics. However, it is unclear how strong existing privacy metrics are when they are used in graph privacy. In this article, we study 26 privacy metrics for graph anonymization and de-anonymization and evaluate their strength in terms of three criteria: monotonicity indicates whether the metric indicates lower privacy for stronger adversaries; for within-scenario comparisons, evenness indicates whether metric values are spread evenly; and for between-scenario comparisons, shared value range indicates whether metrics use a consistent value range across scenarios. Our extensive experiments indicate that no single metric fulfills all three criteria perfectly. We therefore use methods from multi-criteria decision analysis to aggregate multiple metrics in a metrics suite, and we show that these metrics suites improve monotonicity compared to the best individual metric. This important result enables more monotonic, and thus more accurate, evaluations of new graph anonymization and de-anonymization algorithms.
98,798
Title: Trust based energy efficient data collection with unmanned aerial vehicle in edge network Abstract: Large-scale sensing devices spread over a wide area compose the supervisory control and data acquisition (SCADA) system, which remotely controls and monitors a specific process by collecting sensing data from the working field. However, trustworthy and energy-efficient data collection is still a challenging issue for large-scale Internet of Things systems. In this article, a trust based energy efficient data collection with unmanned aerial vehicle (TEEDC-UAV) scheme is proposed to prolong network lifetime in a trustworthy manner. First, in the TEEDC-UAV scheme, an ant colony based unmanned aerial vehicle (UAV) trajectory optimization algorithm is proposed, which visits the most data anchors in the working field with a trajectory that is as short as possible. Thus, the sensor nodes in the SCADA system are responsible for the least amount of data, which greatly extends network lifetime. Second, a trust reasoning and evolution mechanism is proposed to identify the trust degree of sensor nodes, and only trusted data are collected so that the quality of data collection can be assured. In the proposed trust mechanism, the UAV can sense and collect data itself, and these data can be used as a baseline to identify the trust degree of sensor nodes. Finally, as shown by extensive experimental results, the proposed TEEDC-UAV scheme can find an optimized data collection trajectory efficiently, which makes the energy consumption of the network much more balanced. Compared with previous strategies, network lifetime is improved by 48.9%. Meanwhile, the trust mechanism proposed in this article also greatly improves the identification accuracy of node trust degree, reaching 91% while consuming only 8% of network lifetime.
98,867
Title: Modelling and prioritizing the factors for online apparel return using BWM approach Abstract: The online apparel industry suffers from a major issue of returns; with a high rate of return for apparel sold online, it becomes necessary to investigate the probable reasons for returns in the online apparel industry. The objective of the study is to develop a multi-criterion approach for evaluating the various factors responsible for the return of apparel purchased online in the context of India. A total of 34 factors were identified through a literature review and discussion with experienced experts from the fashion domain. In this study, the best-worst method has been employed to prioritize and rank the factors for online return more effectively. Sensitivity analysis has been carried out to check the robustness of the proposed model. The findings of the study show that fit and size variation, defects, finding a better product (wisdom of purchase), wrong product delivery, lenient return policy, and value for money were identified as crucial factors for online apparel return. The present study provides valuable research implications which can be used for retail policy improvements and for online selling strategy.
98,875
Title: Lost in translation: Collecting and coding data on social relations from audio-visual recordings Abstract: •Constitutive features of social relations are lost when data naturally produced by sequential social interaction are aggregated into network ties.•Audio-visual recording is a technology that facilitates collection, storage and retrieval of complex time-sensitive information on social relations.•Video-recorded interaction within surgical teams is used to illustrate how social relations may be studied using data on continuous-time interaction.
98,901
Title: The curse of dimensionality (COD), misclassified DMUs, and Bayesian DEA Abstract: Data envelopment analysis (DEA) is used to assess the relative efficiency of a set of decision-making units (DMUs). A potential drawback to DEA is that one must include a sufficient number of observations to ensure that all input-output dimensions are adequately characterized. Employing DEA with too few DMUs for a given set of inputs/outputs generates estimates that overstate efficiency. This is known as the "curse of dimensionality (COD)". Because production processes vary widely in technology and complexity, it is difficult to analytically characterize the effects of the COD on DEA-generated efficiency scores. This paper uses Bayesian methods to characterize and adjust for the possibility of misclassified DMUs and COD-related biases in DEA. Based on the nature of the COD bias we propose an appropriate prior distribution for the proportion of misclassified DMUs and use it to derive the concordant posterior distribution. A simulation analysis compares our model to those obtained with an ignorance prior distribution to evaluate the utility of the new model, and it is then applied to data from the Turkish electricity industry. We find that estimates of the probability of misclassification can be improved using our proposed prior distribution, especially in sample sizes of less than 40 observations.
99,050
Title: GALLAI-RAMSEY NUMBERS FOR RAINBOW S_3^+ AND MONOCHROMATIC PATHS Abstract: Motivated by Ramsey theory and other rainbow-coloring-related problems, we consider edge-colorings of complete graphs without a rainbow copy of some fixed subgraph. Given two graphs G and H, the k-colored Gallai-Ramsey number gr_k(G : H) is defined to be the minimum positive integer n such that every k-coloring of the complete graph on n vertices contains either a rainbow copy of G or a monochromatic copy of H. Let S_3^+ be the graph on four vertices consisting of a triangle with a pendant edge. In this paper, we prove that gr_k(S_3^+ : P_5) = k + 4 (k >= 5), gr_k(S_3^+ : mP_2) = (m - 1)k + m + 1 (k >= 1), gr_k(S_3^+ : P_3 ∪ P_2) = k + 4 (k >= 5) and gr_k(S_3^+ : 2P_3) = k + 5 (k >= 1).
99,172
Title: The Roman Domatic Problem in Graphs and Digraphs: A Survey Abstract: In this paper, we survey results on the Roman domatic number and its variants in both graphs and digraphs. This fifth survey completes our works on Roman domination and its variations published in two book chapters and two other surveys.
99,259
Title: Comparing the performance of statistical methods that generalize effect estimates from randomized controlled trials to much larger target populations Abstract: Policymakers use results from randomized controlled trials to inform decisions about whether to implement treatments in target populations. Various methods-including inverse probability weighting, outcome modeling, and Targeted Maximum Likelihood Estimation-that use baseline data available in both the trial and target population have been proposed to generalize the trial treatment effect estimate to the target population. Often the target population is significantly larger than the trial sample, which can cause estimation challenges. We conduct simulations to compare the performance of these methods in this setting. We vary the size of the target population, the proportion of the target population selected into the trial, and the complexity of the true selection and outcome models. All methods performed poorly when the trial size was only 2% of the target population size or the target population included only 1,000 units. When the target population or the proportion of units selected into the trial was larger, some methods, such as outcome modeling using Bayesian Additive Regression Trees, performed well. We caution against generalizing using these existing approaches when the target population is much larger than the trial sample and advocate future research strives to improve methods for generalizing to large target populations.
99,279
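Of the generalization methods compared above, inverse probability weighting is the simplest to sketch: trial units are up-weighted by the inverse of their selection probability into the trial. The toy below assumes known selection probabilities (in practice they are estimated from baseline covariates) and hypothetical effect values:

```python
# Toy sketch of inverse-probability weighting (IPW) for generalizing a
# trial treatment effect to a larger target population. Selection
# probabilities are assumed known here; in practice they are estimated
# from baseline data available in both the trial and the target.

def ipw_effect(trial):
    """trial: list of (unit_effect, selection_probability)."""
    num = sum(effect / p for effect, p in trial)
    den = sum(1.0 / p for _, p in trial)
    return num / den

# Hypothetical setup: units with covariate x=0 (true effect 1.0) entered
# the trial with probability 0.1; units with x=1 (true effect 3.0) with
# probability 0.4. The target population is half x=0 and half x=1, so
# the true target-average effect is 2.0.
trial = [(1.0, 0.1)] * 10 + [(3.0, 0.4)] * 40
estimate = ipw_effect(trial)  # up-weights the under-selected x=0 group
```

The unweighted trial mean would be biased toward 3.0 because x=1 units are over-represented; the weights restore the target-population mix.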
Title: A hybrid framework based on genetic algorithm and simulated annealing for RNA structure prediction with pseudoknots Abstract: RNA structure prediction with pseudoknots is an NP-complete problem, in which an optimal RNA structure with minimum energy is to be computed. In past decades, several methods have been developed to predict RNA structure with pseudoknots. Among them, metaheuristic approaches have proven to be beneficial for predicting long RNA structure in a very short time. In this paper, we have used two metaheuristic algorithms: Genetic Algorithm (GA) and Simulated Annealing (SA) for predicting RNA secondary structure with pseudoknots. We have also applied a combination of these two algorithms as GA-SA, where GA is used for a global search and SA is used for a local search, and conversely SA-GA, where SA is used for a global search and GA is used for a local search. Four different energy models have been applied to calculate the energy of RNA structure. Five datasets, constructed from the RNA STRAND and Pseudobase++ databases, have been used in the algorithms. The performances of the algorithms have been compared with several existing metaheuristic algorithms. We find that the combination of GA and SA (GA-SA) gives better results than the GA, SA and SA-GA algorithms and all four other state-of-the-art algorithms on all datasets.
99,302
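The GA-SA hybrid described above (GA for the global search, SA for local refinement of the best individual) can be sketched generically. This is not the paper's RNA energy model; a OneMax-style toy objective over bit strings stands in for structure fitness, and all parameters are illustrative:

```python
# Minimal sketch of the GA-SA hybrid: a genetic algorithm performs the
# global search and simulated annealing locally refines the best
# individual found. The OneMax toy objective (count of 1-bits) stands
# in for an RNA free-energy model; all parameters are illustrative.
import random

random.seed(0)
N = 30                                   # candidate-solution length
fitness = lambda s: sum(s)               # toy objective (maximize)

def ga_step(pop):
    pop = sorted(pop, key=fitness, reverse=True)[:len(pop) // 2]  # elitism
    children = []
    while len(children) < len(pop):
        a, b = random.sample(pop, 2)     # one-point crossover
        cut = random.randrange(1, N)
        child = a[:cut] + b[cut:]
        child[random.randrange(N)] ^= 1  # point mutation
        children.append(child)
    return pop + children

def sa_refine(s, temp=2.0, cooling=0.95, steps=200):
    cur, best = s[:], s[:]
    for _ in range(steps):
        cand = cur[:]
        cand[random.randrange(N)] ^= 1   # single-bit neighbourhood move
        delta = fitness(cand) - fitness(cur)
        if delta >= 0 or random.random() < 2.718 ** (delta / temp):
            cur = cand                   # accept uphill, sometimes downhill
        if fitness(cur) > fitness(best):
            best = cur[:]
        temp *= cooling                  # geometric cooling schedule
    return best

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(20)]
start_best = max(map(fitness, pop))
for _ in range(30):
    pop = ga_step(pop)                   # global search phase (GA)
best = sa_refine(max(pop, key=fitness))  # local search phase (SA)
```

Because the GA keeps the elite half each generation and the SA phase returns the best state it visited, the final fitness can never fall below the initial population's best.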
Title: Distribution-free three-sample test for detecting trend in the pure location model Abstract: A three-sample distribution-free test for detecting non-decreasing ordered alternatives is proposed to alleviate some of the problems in existing tests, including the risk of frequent detection of a trend when it is not actually present. The test is based on the natural U-statistic estimation of the underlying ordering parameter for three independent continuous random variables. Actual levels of the test and power values in the case of equal variances for a variety of sample sizes and over a wide class of distributions under the null hypothesis and the alternative are obtained via simulation. The results indicate that the proposed test compares favorably with existing tests. In all simulations the nonparametric test provided relatively good power, accurate control over the size of the test, and better protection against false detection of a non-existing trend.
99,312
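A natural three-sample U-statistic of the kind referenced above is the fraction of cross-sample triples that are strictly increasing, estimating P(X < Y < Z). The paper's exact parametrization may differ; this is a generic sketch:

```python
# Sketch of a natural three-sample U-statistic: the fraction of
# cross-sample triples (x, y, z) that are strictly increasing, which
# estimates P(X < Y < Z). (The paper's exact parametrization may differ.)

def u_statistic(xs, ys, zs):
    hits = sum(1 for x in xs for y in ys for z in zs if x < y < z)
    return hits / (len(xs) * len(ys) * len(zs))

# Under a clear non-decreasing trend the estimate is near 1; when the
# three distributions are identical it is near 1/6 (one of the 3! orders).
u_up = u_statistic([1, 2], [3, 4], [5, 6])    # every triple is ordered
u_down = u_statistic([5, 6], [3, 4], [1, 2])  # no triple is ordered
```

A test would then reject the no-trend null when the statistic is sufficiently far above 1/6.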
Title: On the record-based transmuted model of Balakrishnan and He based on Weibull distribution Abstract: In this paper, we investigate some statistical properties of, and methods of estimation for, the transmuted record type Weibull (TRTW) distribution. The moments, variance, skewness and kurtosis coefficients, geometric mean, quantile function, median, incomplete moments, Renyi entropy and moments of order statistics are obtained for the TRTW distribution. Six different estimators are considered for estimating the parameters of the TRTW distribution. Furthermore, Monte Carlo (MC) simulations are performed to evaluate the performances of these estimators. Finally, a real data example is presented to illustrate the usefulness of the TRTW distribution.
99,353
Title: Statistical inference for the generalized weighted exponential distribution Abstract: Generalized weighted exponential (GWE) distributions are a natural generalization of weighted exponential distributions. This paper revisits the GWE distribution in order to address the main inferential aspects of this distribution, such as the maximum likelihood estimators of the unknown parameters and their corresponding asymptotic confidence intervals, which were missing in previous works. It also develops some new distributional results about the GWE distribution and provides more interesting closed-form expressions for previously presented results. A simulation study and a real-world application are also worked out to assess the maximum likelihood estimators and to illustrate the theory.
99,435
Title: FANETs in Agriculture - A routing protocol survey Abstract: Breakthrough advances in communication technology, electronics and sensors have led to integrated commercialized products ready to be deployed in several domains. Agriculture is and has always been a domain that adopts state-of-the-art technologies in time, in order to optimize productivity, cost, convenience, and environmental protection. The deployment of Unmanned Aerial Vehicles (UAVs) in agriculture constitutes a recent example. A timely topic in UAV deployment is the transition from a single-UAV system to a multi-UAV system. Collaboration and coordination of multiple UAVs can build a system that far exceeds the capabilities of a single UAV. However, one of the most important design problems multi-UAV systems face is choosing the right routing protocol, which is a prerequisite for the cooperation and collaboration among UAVs. In this study, an extensive review of Flying Ad-hoc Network (FANET) routing protocols is performed, where their different strategies and routing techniques are thoroughly described. A classification of UAV deployment in agriculture is conducted, resulting in six (6) different applications: Crop Scouting, Crop Surveying and Mapping, Crop Insurance, Cultivation Planning and Management, Application of Chemicals, and Geofencing. Finally, a theoretical analysis is performed that suggests which routing protocol can serve each agriculture application better, depending on the mobility models and the agriculture-specific application requirements.
99,708
Title: A survey of security and privacy issues in the Internet of Things from the layered context Abstract: Internet of Things (IoT) is a novel paradigm, which not only facilitates a large number of devices to be ubiquitously connected over the Internet but also provides a mechanism to remotely control these devices. The IoT is pervasive and is almost an integral part of our daily life. These connected devices often obtain users' personal data and store it online. The security of collected data is a big concern in recent times. As devices are becoming increasingly connected, privacy and security issues become more and more critical, and these need to be addressed on an urgent basis. IoT implementations and devices are eminently prone to threats that could compromise the security and privacy of the consumers, which, in turn, could influence its practical deployment. In the recent past, some research has been carried out to secure IoT devices with an intention to alleviate the security concerns of users. There has also been research on blockchain technologies to tackle the privacy and security issues of the collected data in IoT. The purpose of this paper is to highlight the security and privacy issues in IoT systems. To this effect, the paper examines the security issues at each layer in the IoT protocol stack, identifies the underlying challenges and key security requirements, and provides a brief overview of existing security solutions to safeguard the IoT from the layered context.
99,730
Title: Probabilistic analysis of security attacks in cloud environment using hidden Markov models Abstract: The rapidly growing cloud computing paradigm provides a cost-effective platform for storing, sharing, and delivering data and computation through internet connectivity. However, one of the biggest barriers for massive cloud adoption is the growing cybersecurity threats/risks that influence its confidence and feasibility. Existing threat models for clouds may not be able to capture complex attacks. For example, an attacker may combine multiple security vulnerabilities into an intelligent, persistent, and sequence of attack behaviors that will continuously act to compromise the target on clouds. Hence, new models for detection of complex and diversified network attacks are needed. In this article, we introduce an effective threat modeling approach that has the ability to predict and detect the probability of occurrence of various security threats and attacks within the cloud environment using hidden Markov models (HMMs). The HMM is a powerful statistical analysis technique and is used to create a probability matrix based on the sensitivity of the data and possible system components that can be attacked. In addition, the HMM is used to provide supplemental information to discover a trend attack pattern from the implicit (or hidden) raw data. The proposed model is trained to identify anomalous sequences or threats so that accurate and up-to-date information on risk exposure of cloud-hosted services are properly detected. The proposed model would act as an underlying framework and a guiding tool for cloud systems security experts and administrators to secure processes and services over the cloud. The performance evaluation shows the effectiveness of the proposed approach to find attack probability and the number of correctly detected attacks in the presence of multiple attack scenarios.
99,744
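The forward recursion is the standard way to score how likely an observed event sequence is under an HMM like the one described above. The states, observation alphabet, and probabilities below are hypothetical stand-ins for the paper's cloud-attack model:

```python
# Sketch of the HMM forward algorithm used to score an observed event
# sequence under an attack-behaviour model. The states, observations
# and probability values below are hypothetical.

def forward(obs, states, start, trans, emit):
    """Return P(obs) under the HMM via the forward recursion."""
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][o]
                 for s in states}
    return sum(alpha.values())

states = ("normal", "attack")
start = {"normal": 0.9, "attack": 0.1}
trans = {"normal": {"normal": 0.95, "attack": 0.05},
         "attack": {"normal": 0.30, "attack": 0.70}}
emit = {"normal": {"low": 0.8, "high": 0.2},    # traffic-volume symbol
        "attack": {"low": 0.1, "high": 0.9}}

# A sustained burst of high-volume events is scored; low probability of
# such a burst under the model can flag an anomalous (attack) sequence.
p_burst = forward(["high", "high", "high"], states, start, trans, emit)
```

In an anomaly-detection setting, sequences whose forward probability falls below a calibrated threshold, or whose most likely state path passes through the attack state, would be flagged.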
Title: Reliability analysis for degradation and shock process based on truncated normal distribution Abstract: The reliability analysis of a system subject to degradation and random shocks is an important issue in the field of reliability engineering. In this paper, we use a Wiener process to fit the performance degradation process, and regard both the degradation rate and the mean of the magnitude of the shock load as random variables which follow the truncated normal distribution. With Markov Chain Monte Carlo (MCMC), we provide a new method to estimate the parameters of the reliability function of the system. At the end of the paper, we take the degradation and shock data of a MOSFET as an example to show the effectiveness of the method presented in the paper.
99,773
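The degradation model above can be sketched by Monte Carlo: a Wiener path with a truncated-normal random drift, with "failure" when the path first crosses a threshold. Random shocks are omitted for brevity and all parameter values are hypothetical:

```python
# Monte Carlo sketch of Wiener degradation X(t) = theta*t + sigma*B(t)
# with a truncated-normal random drift theta; the unit "fails" when the
# degradation first crosses a threshold D. The shock component of the
# paper's model is omitted and all parameter values are hypothetical.
import random

random.seed(1)

def trunc_normal(mu, sd, low=0.0):
    while True:                      # simple rejection sampling
        x = random.gauss(mu, sd)
        if x >= low:
            return x

def simulate_paths(n, horizon, dt, mu, sd, sigma):
    paths = []
    for _ in range(n):
        theta, x, path, t = trunc_normal(mu, sd), 0.0, [], 0.0
        while t < horizon:
            x += theta * dt + sigma * random.gauss(0, dt ** 0.5)
            path.append(x)
            t += dt
        paths.append(path)
    return paths

def reliability(paths, step, threshold):
    """Fraction of paths whose running maximum stays below threshold."""
    ok = sum(1 for p in paths if max(p[:step]) < threshold)
    return ok / len(paths)

paths = simulate_paths(n=2000, horizon=10.0, dt=0.1, mu=0.5, sd=0.2, sigma=0.3)
r_early = reliability(paths, step=30, threshold=8.0)   # R(t = 3)
r_late = reliability(paths, step=90, threshold=8.0)    # R(t = 9)
```

Because the running maximum of each path is non-decreasing, the estimated reliability is automatically non-increasing in time when the same paths are reused.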
Title: Estimation of semi-varying coefficient error-in-variable models with surrogate data and validation sample Abstract: In this study, a semi-varying coefficient error-in-variable model with surrogate data and validation sample is proposed. Without specifying any error structure, we firstly use the local linear kernel smoothing technique to define the estimators and the proposed estimators are proved to be asymptotically normal. Then, we conduct generalized likelihood ratio (GLR) test on varying coefficient function. The data-driven bandwidth selection method is discussed. Finally, simulated studies are conducted to illustrate the finite sample properties of the proposed estimators and efficiency of the GLR methodology.
99,823
Title: Smart manufacturing business management system for network industry spin-off enterprises Abstract: Enterprises are increasingly using big data analytics and intelligence to solve business processes between original equipment manufacturers (OEMs) and original design manufacturers (ODMs) to increase the competitiveness and improve the efficiency of solutions. This research applies a cloud computing-based enterprise resource planning (ERP) system and OEM/ODM system to business to business (B2B) e-commerce. The integration of smart ERP system with B2B e-commerce can achieve the expected benefits of both. It reduces cost and improves the work efficiency, thereby improving the competitiveness of enterprises. In comparison to other traditional methods, the proposed system can make business management more efficient, promote the optimisation of cost structure, enhance the accuracy of sales orders, and improve the response time. It can also make the use of resources more reasonable.
99,881
Title: On bootstrap estimators of some prediction accuracy measures of loss reserves in a non-life insurance company Abstract: This article considers the hierarchical generalized linear model (HGLM) for loss reserving in a non-life insurance. In the current insurance practice, insurance companies use generalized linear models (GLM) for prediction of the total loss reserve. This model, however, requires the assumption of independence of random variables occurring in the loss triangle, which may not always be fulfilled. The remedy to this problem is to introduce random effects to the GLM allowing modeling dependence inside the loss triangle. A limitation in the use of HGLM to predict the total loss reserve is the fact that the error of prediction is expressed by a complex analytical formula. An alternative to the analytical approach is to use the bootstrap technique. The main purpose of this article is to propose the full residual and parametric bootstrap procedures for the estimation of three types of prediction accuracy measures. The first is a classic root mean squared error of prediction (RMSE). The second is a quantile of the absolute prediction error (QAPE). The third one, we propose, is a quantile of a mixture of absolute prediction errors. Stochastic properties of the estimators of the accuracy measures are studied in two Monte Carlo simulation experiments. Bootstrap procedures and Monte Carlo simulation are implemented in R program.
99,913
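The two accuracy measures named above (RMSE and QAPE) can be illustrated with a bare-bones residual bootstrap. A simple mean predictor stands in for the HGLM reserve model, and the data are hypothetical:

```python
# Sketch of a residual bootstrap for two prediction-accuracy measures:
# the root mean squared error of prediction (RMSE) and quantiles of the
# absolute prediction error (QAPE). A simple mean predictor stands in
# for the HGLM loss-reserve model; the data are hypothetical.
import random

random.seed(2)
data = [12.0, 9.5, 14.2, 11.1, 10.4, 13.3, 9.9, 12.8]
fit = sum(data) / len(data)                  # fitted/predicted value
residuals = [y - fit for y in data]

B = 2000
abs_errors = []
for _ in range(B):
    # resample a residual to create a pseudo future observation,
    # then record the absolute prediction error
    future = fit + random.choice(residuals)
    abs_errors.append(abs(future - fit))

rmse = (sum(e * e for e in abs_errors) / B) ** 0.5
abs_errors.sort()
qape50 = abs_errors[int(0.5 * B)]            # median absolute error
qape90 = abs_errors[int(0.9 * B)]            # tail measure of risk
```

The point of QAPE over RMSE is visible here: the 90% quantile describes the tail of the prediction-error distribution directly, which a single RMSE number averages away.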
Title: An efficient scheme for secure feature location using data fusion and data mining in internet of things environment Abstract: Feature location (FL) is performed to find the relationships between domain concepts and other software artifacts. One major problem in maintaining a software system is to understand how many functional features exist in the system and how these features are implemented. Poor security is also a prime problem in FL systems. Existing FL techniques use textual and dynamic approaches, which are not found to be secure in view of changes in the description of security attacks. To overcome this drawback, this work proposes a novel secure approach for FL utilizing data fusion as well as data mining for the internet of things environment. Firstly, repeated test cases (TCs) are removed from the labeled TCs. Next, important attributes are selected from the remaining labeled TCs using the artificial flora optimization algorithm. Then, association rule mining is performed to ascertain closed attributes. Subsequently, the closed attributes are encrypted utilizing the Caesar cipher and the Rivest-Shamir-Adleman (RSA) algorithm. After that, the score value of the closed attribute counts is found utilizing an entropy calculation. Finally, the score value is given as input to the normalized K-Means (N-(K-Means)) algorithm, where the score value is normalized utilizing min-max normalization and then grouped utilizing the K-Means algorithm (KMA). This proffers better results for FL in the source code. The performance of the proposed N-(K-Means) is found to be better in comparison to the KMA and latent semantic indexing methods, and the proposed system proffers better FL results in comparison to other prevailing methods.
99,975
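The final N-(K-Means) step described above (min-max normalization of entropy scores followed by K-Means grouping) can be sketched in one dimension. The score values are hypothetical and the seeding strategy is a simplification:

```python
# Sketch of the N-(K-Means) back-end: min-max normalization of the
# entropy-based scores followed by a plain 1-D Lloyd's K-Means. The
# score values and the deterministic seeding below are hypothetical.

def min_max(scores):
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def kmeans_1d(xs, k=2, iters=20):
    centers = [min(xs), max(xs)][:k]          # deterministic seeding
    for _ in range(iters):
        groups = [[] for _ in centers]
        for x in xs:
            i = min(range(k), key=lambda j: abs(x - centers[j]))
            groups[i].append(x)               # assign to nearest center
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    labels = [min(range(k), key=lambda j: abs(x - centers[j])) for x in xs]
    return labels, centers

scores = [0.2, 0.25, 0.3, 3.1, 3.4, 3.2]      # two obvious score clusters
norm = min_max(scores)                        # all values land in [0, 1]
labels, centers = kmeans_1d(norm)
```

Normalizing first keeps one large-magnitude score from dominating the distance computation, which is the stated motivation for the N-(K-Means) variant.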
Title: Classification Model on Big Data in Medical Diagnosis Based on Semi-Supervised Learning Abstract: Big data in medical diagnosis can provide abundant value for clinical diagnosis, decision support and many other applications, but obtaining a large number of labeled medical data will take a lot of time and manpower. In this paper, a classification model based on semi-supervised learning algorithm using both labeled and unlabeled data is proposed to process big data in medical diagnosis, which includes structured, semi-structured and unstructured data. For the medical laboratory data, this paper proposes a self-training algorithm based on repeated labeling strategy to solve the problem that mislabeled samples weaken the performance of classifiers. Aiming at medical record data, this paper extracts features with high correlation of classification results based on domain expert knowledge base first, and then chooses the unlabeled medical record data with the highest confidence to expand the training set and optimizes the performance of the classifiers of tri-training algorithm, which uses supervised learning algorithm to train three basic classifiers. The experimental results show that the proposed medical diagnosis data classification model based on semi-supervised learning algorithm has good performance.
100,022
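The self-training loop described above (train, pseudo-label the unlabeled pool, keep only confident labels, repeat) can be sketched with a nearest-centroid classifier on one-dimensional data. The classifier, confidence rule, and "lab value" data are all simplified stand-ins for the paper's medical pipeline:

```python
# Sketch of self-training for semi-supervised classification: a simple
# nearest-centroid classifier labels the unlabeled pool, and only the
# most confident predictions are added to the training set each round.
# The 1-D "lab value" data and the margin rule below are hypothetical.

def centroid(points):
    return sum(points) / len(points)

def self_train(labeled, unlabeled, rounds=5, margin=2.0):
    labeled = dict(labeled)                   # value -> class label
    pool = list(unlabeled)
    for _ in range(rounds):
        c0 = centroid([v for v, c in labeled.items() if c == 0])
        c1 = centroid([v for v, c in labeled.items() if c == 1])
        confident = []
        for v in pool:
            d0, d1 = abs(v - c0), abs(v - c1)
            if abs(d0 - d1) >= margin:        # confidence = distance margin
                confident.append((v, 0 if d0 < d1 else 1))
        for v, c in confident:                # expand the training set
            labeled[v] = c
            pool.remove(v)
        if not confident:                     # nothing confident left
            break
    return labeled, pool

labeled = {1.0: 0, 2.0: 0, 9.0: 1, 10.0: 1}
unlabeled = [1.5, 2.2, 8.5, 9.5, 5.6]
final, leftover = self_train(labeled, unlabeled)
```

The ambiguous point near the middle of the two clusters is never pseudo-labeled, which is exactly the behaviour that protects the classifier from the mislabeled-sample problem the abstract describes.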
Title: Heuristic Access Points Grouping for Mobility Driven User-Centric Ultra Dense Networks Abstract: In ultra-dense network (UDN) scenarios, the number and density of users are comparable to those of the Access Points (APs). In such UDNs, new approaches for mobility management are necessary to ensure reliable service provision, seamless connectivity and high unit-area throughput. Dynamic access point grouping for user service provision is considered to be a core function of a UDN with user-centric wireless access. In this paper, a heuristic approach with low computational complexity for AP grouping in mobility-driven user-centric UDNs is proposed, based on two general metrics reflecting the user density and distribution, the user requirements and the available resources in the APs. This allows the AP grouping process and the association of users to one AP or a group of APs to be represented as a dynamic service provision system, and its performance as a function of user allocation, user mobility and user service requests to be made more effective.
100,075
Title: A virtual execution platform for OpenFlow controller using NFV Abstract: The Software Defined Networking (SDN) paradigm decouples the network control functions from the data plane and offers a set of software components for flexible and controlled management of networks. SDN has promised to provide numerous benefits in terms of on-demand provisioning, automated load balancing, streamlining physical infrastructure, and flexibility in scaling network resources. In order to realize these network service offerings, there is an important need for developing an efficient, robust, and secure execution platform. As a primary contribution, we present a novel virtual execution platform for the OpenFlow controller using Network Function Virtualization (NFV). Theoretically, NFV can apply to any network function, which can simplify the managing of the heterogeneous data plane. The characteristics of our proposed architecture include pipe-lined processing of network traffic, virtualized and replicated execution of network functions, isolation between task nodes, and random mapping of traffic to task nodes. The proposed architecture has two major components: a Network Packet Schedulers (NPS) and a Task Engine (TE). The TE consists of Task Nodes (TNs) which are responsible for executing different network functions on various traffic flows and each TN is realized as a virtual machine. Upon receiving traffic from the data plane, NPS analyses the functional requirements of the traffic and different controller performance parameters. Then it allocates the traffic to appropriate TNs for executing necessary network functions. In this respect, it provides performance benefits, robustness, fine-grained modularity, and strong isolation security in the processing of traffic flows on the SDN platform. Efficacy of our proposed architecture has been demonstrated with a case study.
100,220
Title: Genetic Algorithm based Internet of Precision Agricultural Things (IoPAT) for Agriculture 4.0 Abstract: The adoption of IoT in daily life is increasing, and its applications are now spreading to rural areas as well, as in Agriculture 4.0. Cheap sensors, climate data, soil information, and drones are now used to solve many real-time problems. One of the most emerging topics of IoT in agriculture is IoT-based precision agriculture, with applications ranging from water spraying by drone and soil recommendation for different crops to weather prediction and water-supply recommendation. In this paper, we propose a decision-making method that predicts rainfall using a Genetic Algorithm (GA) to identify whether a manual water supply is needed. A sensor-based system is then activated to check whether the GA-based prediction is correct by sensing the moisture level of the soil. If the soil moisture level crosses a pre-defined threshold, plant watering is performed by a quadrotor UAV. A terrace gardening system, which uses a pump for water spraying, is also implemented in this article. Various atmospheric parameters help the rainfall prediction system achieve an efficiency of more than 80% in the proposed IoPAT system, making the system more interoperable.
100,382
Title: Bounding the final rank during a round robin tournament with integer programming (vol 17, pg 231, 2019) Abstract: In the published version of this article, the article contained an error in the Equation (6b).
100,466
Title: A class of new partial least square algorithms for first and higher order models Abstract: We propose a class of new partial least square (PLS) algorithms to build first and higher order latent variable models. There exist three well-known linear-regression-type PLS algorithms: Repeated indicators approach (RI), two-step approach (TS), and hybrid approach (H). RI uses observed variables repeatedly and leads to a possible bias of the estimates. TS needs two separate steps and does not take higher order latent variables into account when computing the scores of lower order constructs at the first step. H randomly assigns all observed variables to latent variables and may lead to the uncertainty of structure relationship each time. In addition, all the above linear-regression-type PLS algorithms only offer a conditional mean view of the relationships among variables and thus fail in quantifying the relationships at different levels. The new PLS algorithms use quantile regression to broaden this view by allowing coefficients to be estimated at different quantiles. Because of this attractive feature, we can capture overall view of structure relationships and complex associations among variables and highlight the changing relationships according to the explored quantile of interest. Our new PLS algorithms are compared to the existing ones in simulation studies, and applied to part of the 2018 Global Innovation Index study.
100,570
Title: An approach for optimal-secure multi-path routing and intrusion detection in MANET Abstract: Mobile ad-hoc networks (MANETs) are dynamic in nature and susceptible to energy and security constraints. Among existing techniques, energy optimization remains a major challenge, which is addressed effectively using routing protocols. Accordingly, this paper proposes an effective multipath routing protocol in MANET based on an optimization algorithm. The energy and security crises in the MANET are addressed effectively using cluster head (CH) selection and intrusion detection strategies, namely fuzzy clustering and fuzzy Naive Bayes (fuzzy NB). The multipath routing then progresses over the secure nodes based on the routing protocol, the Bird swarm-whale optimization algorithm (BSWOA), which is the integration of bird swarm optimization (BSA) into the whale optimization algorithm (WOA). The selection of the optimal routes is based on fitness factors, such as connectivity, energy, trust, and throughput. The analysis of the methods is done using attacks, such as flooding, blackhole, and selective packet drop, based on the performance metrics. The proposed BSWOA acquired the maximal energy, throughput, and detection rate and a minimal delay of 9.48 Joule, 0.676 bps, 69.9%, and 0.00372 ms in the presence of the attack.
100,613
Title: On a queueing-inventory system with common life time and Markovian lead time process Abstract: We consider a correlated queueing-inventory system with Markovian arrival of customers, phase type distributed service time and Markovian lead time. Items in each cycle have a common life time. Before the realization of this, a purchased item in a cycle can be cancelled in that cycle itself provided inventory level has not dropped to zero. Common life time and inter-cancellation time follow independent exponential distributions. We exhaustively analyze this system. The special case of customer arrival following a Poisson process and service time exponentially distributed, is shown to yield product form solution, thus extending earlier work to the case of correlated lead time. The inventory replenishment policy is to bring the inventory level to its maximum at the lead time realization. Several numerical illustrations are provided to illustrate the system performance.
100,627
Title: On authenticated skyline query processing over road networks Abstract: In recent times, many location-based service providers (LBSPs) choose to outsource data query services to third-party cloud service providers (CSPs). This allows users to easily search for points of interests (POIs), such as restaurants and parking lots in their vicinity, using their mobile devices and in-vehicle infotainment units. Skyline query is one potential technique to be deployed for road networks. However, the untrusted CSPs may forge or omit query results, intentionally or not. Therefore, in this article, we posit that by observing the unique properties of skyline query results in road networks, we can bind each POI with four nearby POIs with special properties using signature chain technology. Our proposed approach not only provides users with skyline query result authentication ability over the road network, but also have low communication overhead. Specifically, the overhead analysis and experimental results show that our proposed approach decreases the communication overhead.
100,662
Title: Periodic replacement policies with shortage and excess costs Abstract: It has been proposed that if replacement is planned too early prior to failure, a waste of operation cost, i.e., an excess cost, is incurred because the system might have run for an additional period of time to complete critical operations, and if replacement is too late after failure, a great failure cost, i.e., a shortage cost, is incurred due to the delay of the carelessly scheduled replacement. In order to make preventive replacement policies perform in a more general way, the above two variable types of costs are taken into consideration for periodic replacement policies in this paper. We first take up a standard model in which the unit is replaced preventively at periodic times. Secondly, the modeling approaches of whichever occurs first and last are applied to periodic and random models, and replacement first and last policies are discussed to find optimum periodic replacement times for a random working time. Furthermore, optimum working numbers are obtained for the extended models. We give analytical discussions of the above replacement policies, and finally, numerical examples are illustrated.
100,691
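The shortage/excess trade-off above can be made concrete for an exponential failure time, where both expected costs have closed forms and the optimal periodic replacement time is available analytically. The cost rates and failure rate below are hypothetical:

```python
# Sketch of the shortage/excess trade-off for a periodic replacement
# time T with an exponential failure time X (rate lam): shortage cost
# c_s per unit of delay after failure, excess cost c_e per unit of
# unused life. For this simple model the expected costs have closed
# forms and the optimum satisfies exp(-lam*T) = c_s / (c_s + c_e).
# All cost and rate values are hypothetical.
import math

lam, c_s, c_e = 0.5, 4.0, 1.0

def expected_cost(T):
    shortage = c_s * (T - (1 - math.exp(-lam * T)) / lam)  # c_s * E[(T - X)+]
    excess = c_e * math.exp(-lam * T) / lam                # c_e * E[(X - T)+]
    return shortage + excess

# Grid search for the minimizing T, compared with the analytic optimum.
grid = [i * 0.001 for i in range(1, 20000)]
T_star = min(grid, key=expected_cost)
T_analytic = math.log((c_s + c_e) / c_s) / lam
```

Raising the shortage cost pushes the optimal replacement time earlier, and raising the excess cost pushes it later, which is exactly the balance the paper's replacement-first/replacement-last policies generalize.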
Title: Fuzzy testing model for the lifetime performance of products under consideration with exponential distribution Abstract: With intense competition in industry today, product quality has become a crucial factor influencing whether a firm can achieve sustainable operations and maintain competitiveness. Process capability indices are effective tools often used in manufacturing to determine whether products meet requirements, and most assume that the quality characteristics of products follow normal distributions. However, not all quality characteristics necessarily follow normal distributions; for example, product lifetime, a time-oriented quality characteristic, generally follows an exponential distribution or other associated non-normal distributions. The lifetime performance index $C_L$ was thus developed to gauge the lifetime performance of products, and most related studies use the precise values of time data to evaluate product lifetime. However, in practice, measurement errors may hinder the accuracy of the observed values of quality characteristics, and the time at which the lifetime of a product ends becomes imprecise, which may result in uncertainty in the evaluation method and lead to errors in judgment. For this reason, this study proposes a triangular shaped fuzzy number for $C_L^*$ to deal with imprecise data, and further develops a fuzzy testing model for the lifetime performance index $C_L$, to assist manufacturers in evaluating product lifetime performance more cautiously and precisely. Finally, we provide an illustration of how the proposed approach can be implemented through a numerical example.
100,795
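The crisp point estimate underneath the fuzzy test can be sketched directly: under an exponential lifetime with mean 1/lam and lower lifetime specification L, the index is commonly written C_L = 1 - lam*L, and the MLE plugs in the sample mean. The fuzzy extension (the triangular fuzzy number around this estimate) is not reproduced here, and the lifetimes are hypothetical:

```python
# Crisp sketch of the lifetime performance index C_L under an
# exponential lifetime model with mean 1/lam and lower lifetime
# specification L: C_L = 1 - lam * L. The MLE substitutes the sample
# mean for 1/lam. The paper's triangular fuzzy number is built around
# this point estimate; only the crisp estimate is shown, and the
# observed lifetimes below are hypothetical.

def c_l_mle(lifetimes, L):
    mean = sum(lifetimes) / len(lifetimes)   # MLE of 1/lam
    return 1 - L / mean

lifetimes = [1.8, 2.4, 1.6, 2.2]
estimate = c_l_mle(lifetimes, L=0.5)         # sample mean 2.0 -> C_L = 0.75
```

Larger values of the index (closer to 1) indicate lifetimes comfortably above the specification L; the fuzzy test then compares the fuzzy version of this estimate to a target value.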
Title: Research on water temperature prediction based on improved support vector regression Abstract: This paper presents a model for predicting the water temperature of the reservoir incorporating with solar radiation to analyze and evaluate the water temperature of large high-altitude reservoirs in western China. Through mutual information inspection, the model shows that the dependent variable has a good correlation with water temperature, and it is added to the sample feature training model. Then, the measured water temperature data in the reservoir for many years are used to establish the support vector regression (SVR) model, and genetic algorithm (GA) is introduced to optimize the parameters, so as to construct an improved support vector machine (M-GASVR). At the same time, root-mean-square error, mean absolute error, mean absolute percentage error, and Nash–Sutcliffe efficiency coefficient are used as the criteria for evaluating the performance of SVR model, ANN model, GA-SVR model, and M-GASVR model. In addition, the M-GASVR model is used to simulate the water temperature of the reservoir under different working conditions. The results show that ANN model is the worst among the four models, while GA-SVR model is better than SVR model in terms of metric, and M-GASVR model is the best. For non-stationary sequences, the prediction model M-GASVR can well predict the vertical water temperature and water temperature structure in the reservoir area. This study provides useful insights into the prediction of vertical water temperature at different depths of reservoirs.
100,799
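The GA-based parameter optimization step can be sketched generically. The following is a minimal real-coded genetic algorithm, not the paper's M-GASVR: in practice the `fitness` callback would return a cross-validated SVR score over hyperparameters such as C and gamma (the parameter names and operators here are illustrative assumptions).

```python
import random

def genetic_search(fitness, bounds, pop_size=20, generations=30, seed=0):
    """Minimal real-coded GA: keep the top half as elites, create children by
    midpoint crossover of two elites, then Gaussian-mutate one coordinate.
    `bounds` maps parameter name -> (low, high); `fitness` is maximized."""
    rng = random.Random(seed)
    names = list(bounds)
    pop = [{k: rng.uniform(*bounds[k]) for k in names} for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = {k: (a[k] + b[k]) / 2 for k in names}   # crossover
            k = rng.choice(names)                           # mutate one gene
            lo, hi = bounds[k]
            child[k] = min(hi, max(lo, child[k] + rng.gauss(0, (hi - lo) * 0.1)))
            children.append(child)
        pop = elite + children                              # elitism
    return max(pop, key=fitness)
```

Usage with a toy fitness surface standing in for a cross-validation score: `genetic_search(lambda p: -(p["C"] - 10) ** 2 - (p["gamma"] - 0.5) ** 2, {"C": (0.1, 100.0), "gamma": (1e-3, 1.0)})` returns parameters near the optimum.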
Title: Topology hiding routing based on learning with errors Abstract: The onion routing protocol constitutes the underpinning of the Onion Routing network for anonymous communication. However, since the main idea behind the protocol is to hierarchically peel the IP header and decode each encrypted routing address, topology information such as path length or long-term direction cannot be concealed. Owing to this topology exposure, attacks such as denial of service and differential flow analysis can be conducted effectively to violate its security. Moreover, considering that the path must be constructed by the sender in advance, onion routing is infeasible to transplant to ad hoc networks, quite apart from other defects such as a heavy computational burden and long packets. In this paper, a topology-hiding routing protocol is proposed that addresses most of the aforementioned problems by resorting to homomorphic learning with errors. Security analysis shows that nothing but the IPs of adjacent hops is revealed to any router. Comparative simulation also shows that the sender-side computational complexity and the packet length outperform those of the traditional onion routing algorithm.
100,832
Title: Network video summarization based on key frame extraction via superpixel segmentation Abstract: The spread of insecure online video has become a serious social problem, and video summarization is a key step in automatically filtering expected video from the Internet. At present, most existing video summarization methods are based on calculating the image similarity between video frames so that key frames can be selected properly. In this article, we introduce an image similarity measure based on superpixel segmentation and apply this metric to video summarization. To identify the video key frames, we use superpixel segmentation to cluster pixels locally by estimating the optical flow displacement field between successive frames, which extracts key frames and reduces video redundancy. On the VSUMM and YouTube datasets, experimental results demonstrate that the proposed method has clear advantages in both subjective qualitative analysis and objective quantitative evaluation compared with state-of-the-art methods.
100,850
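The similarity-based key-frame selection that such methods share can be sketched with a much simpler similarity measure. The sketch below greedily keeps a frame whenever its distance to the last key frame exceeds a threshold; it uses normalized histogram intersection as a stand-in for the paper's superpixel/optical-flow similarity, and frames are flat lists of 8-bit grayscale pixels (an illustrative assumption).

```python
def extract_key_frames(frames, threshold=0.5):
    """Greedy key-frame selection: keep a frame when its histogram distance
    to the last selected key frame exceeds `threshold`."""
    def hist(frame, bins=16):
        h = [0] * bins
        for p in frame:                      # bucket 0..255 pixels into bins
            h[min(bins - 1, p * bins // 256)] += 1
        return [c / len(frame) for c in h]
    def distance(a, b):                      # 1 - histogram intersection
        return 1 - sum(min(x, y) for x, y in zip(a, b))
    keys, last = [0], hist(frames[0])
    for i, f in enumerate(frames[1:], 1):
        h = hist(f)
        if distance(last, h) > threshold:    # sufficiently different frame
            keys.append(i)
            last = h
    return keys
```

Near-duplicate frames are skipped; only frames that change the pixel distribution appreciably become key frames, which is the redundancy-reduction effect the abstract describes.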
Title: Coordinated Caching and QoS-Aware Resource Allocation for Spectrum Sharing Abstract: 5G cellular networks will heavily rely on the use of techniques that increase the spectral efficiency (SE) to meet the stringent capacity requirements of the envisioned services. To this end, the use of coordinated multi-point (CoMP) as an enabler of underlay spectrum sharing promises substantial SE gains. In this work, we propose novel low-complexity coordinated resource allocation methods based on standard linear precoding schemes that not only maximize the sum-SE and protect the primary users from harmful interference, but also satisfy the quality-of-service demands of the mobile users. Furthermore, we devise coordinated caching strategies that create joint transmission (JT) opportunities, thus overcoming the mobile backhaul/fronthaul throughput and latency constraints associated with the application of this CoMP variant. Additionally, we present a family of caching schemes that significantly outperform the “de facto standard” least recently used (LRU) technique in terms of the achieved cache hit rate while presenting smaller computational complexity. Numerical simulations indicate that the proposed resource allocation methods perform close to their interference-unconstrained counterparts, illustrate that the considered caching strategies facilitate JT, highlight the performance gains of the presented caching schemes over LRU, and shed light on the effect of various parameters on the performance.
100,882
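The LRU baseline against which the proposed caching schemes are compared can be simulated in a few lines. This sketch measures the cache hit rate of plain LRU on a request trace (the comparison metric named in the abstract); it is a generic simulator, not the paper's scheme.

```python
from collections import OrderedDict

def lru_hit_rate(requests, capacity):
    """Simulate an LRU cache of `capacity` items over a request trace and
    return the achieved hit rate."""
    cache = OrderedDict()
    hits = 0
    for item in requests:
        if item in cache:
            hits += 1
            cache.move_to_end(item)          # mark as most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)    # evict least recently used
            cache[item] = True
    return hits / len(requests)
```

Any candidate caching policy can be benchmarked against this number on the same trace, which is how hit-rate gains over LRU are typically reported.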
Title: Tesia: A trusted efficient service evaluation model in Internet of things based on improved aggregation signature Abstract: The service evaluation model is an essential ingredient in service-oriented Internet of things (IoT) architectures. Traditional models generally allow each user to submit comments on IoT services individually. However, such models are fragile against various attacks, such as comment denial attacks and Sybil attacks, which may decrease the comment submission rate. In this article, we propose a new aggregate digital signature scheme to resolve the problem of comment aggregation, which can aggregate different comments into one with high efficiency and a high security level. Based on this scheme, we further put forward a new service evaluation model named Tesia that allows specific users to submit comments as a group in IoT networks. More specifically, the users aggregate their comments and assign one user as the submitter. In addition, we introduce a synchronization token mechanism into the model to ensure that all users in the group sign their comments one by one, with the last user to receive the token assigned as the final submitter. Tesia offers acceptable robustness and can greatly improve the comment submission rate with a rather low submission delay.
100,910
Title: Second-order cone programming relaxations for a class of multiobjective convex polynomial problems Abstract: This paper is concerned with a multiobjective convex polynomial problem, where the objective and constraint functions are first-order scaled diagonally dominant sums-of-squares convex polynomials. We first establish necessary and sufficient optimality criteria in terms of second-order cone (SOC) conditions for (weak) efficiencies of the underlying multiobjective optimization problem. We then show that the obtained result provides us with a way to find (weak) efficient solutions of the multiobjective program by solving a scalar second-order cone programming relaxation problem of a given weighted-sum optimization problem. In addition, we propose a dual multiobjective problem by means of SOC conditions for the multiobjective optimization problem and examine weak, strong and converse duality relations.
100,912
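The weighted-sum scalarization underlying the relaxation step can be written generically (this is the standard formulation, not the paper's exact SOC relaxation):

```latex
\min_{x \in \mathbb{R}^n} \ \sum_{i=1}^{p} w_i f_i(x)
\quad \text{s.t.} \quad g_j(x) \le 0, \ j = 1, \dots, m,
\qquad w_i \ge 0, \ \sum_{i=1}^{p} w_i = 1 .
```

Solving this scalar problem, or its second-order cone programming relaxation, for a fixed weight vector $w$ yields a (weakly) efficient solution of the multiobjective problem, and varying $w$ traces out different efficient solutions.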
Title: Comparative analysis of median filter and its variants for removal of impulse noise from gray scale images Abstract: Image denoising is a vital pre-processing phase used to refine image quality and make images more informative. Many image-denoising algorithms have been proposed, each with its own pros and cons. This paper presents a comprehensive study of the median filter and its variants for reducing or removing impulse noise from gray scale images. These filters are compared with respect to their functionality, time complexity and relative performance. For performance evaluation of the existing algorithms, extensive MATLAB-based simulations have been carried out on a set of images. For benchmarking relative performance, we use Peak Signal to Noise Ratio (PSNR), Root Mean Square Error (RMSE), Universal Image Quality Index (UQI), Structural Similarity Index (SSIM) and Edge-strength Similarity (ESSIM) as quality assessment metrics. The Extended median filter (EMF) and Modified BDND perform best in terms of the statistical ratios and visual results, while IAMF has the best time complexity among the existing algorithms.
100,971
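The baseline filter and the headline metric of this comparison are simple to state in code. The sketch below implements a standard k x k median filter (with clamped borders, one common border-handling choice) and PSNR for grayscale images given as lists of rows; it illustrates the techniques, not any specific variant from the paper.

```python
import math

def median_filter(img, k=3):
    """k x k median filter on a 2-D grayscale image (list of rows);
    borders are handled by clamping coordinates to the image."""
    h, w, r = len(img), len(img[0]), k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            win = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                   for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            out[y][x] = sorted(win)[len(win) // 2]   # median of the window
    return out

def psnr(a, b, peak=255):
    """Peak Signal to Noise Ratio between two images of identical shape."""
    mse = sum((pa - pb) ** 2 for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
    mse /= len(a) * len(a[0])
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)
```

On an image corrupted by isolated impulse (salt-and-pepper) pixels, the median of each window discards the outlier, which is why the median family dominates this noise model.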
Title: A malware detection method using satisfiability modulo theory model checking for the programmable logic controller system Abstract: Nowadays, programmable logic controllers (PLCs) suffer increasing cyberattacks. Attackers can reprogram PLCs to inject malware that causes physical damage and economic losses. Such PLC malware is highly customized for its target, which makes it difficult to extract a general pattern for detection. In this article, we propose a PLC malware detection method based on model checking. First, we improve the existing modeling method for PLC systems by using Satisfiability Modulo Theories (SMT) constraints to model the PLC system, and we present an algorithm that transforms a PLC program into the model. Our SMT-based model can handle features of the PLC system such as undetermined input signals and edge detection. Second, we focus on malware detection and propose two methods for generating detection rules: invariant extraction and rule design patterns. The former extracts invariants from the original program, and the latter lowers the bar for users to design detection rules. Finally, we implement a prototype and evaluate it on three representative ICS scenarios. The evaluation shows that the proposed method successfully detects malware using four attack patterns.
101,131
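The invariant-checking idea can be illustrated without an SMT solver: the sketch below does a bounded exhaustive exploration of a PLC scan function over all boolean input sequences and checks an invariant in every reached state. This brute-force search is a stand-in for the paper's SMT encoding (an SMT solver would check the same property symbolically); the latch/counter example in the usage note is invented for illustration.

```python
from itertools import product

def check_invariant(step, invariant, n_inputs, state0, steps=4):
    """Run the scan function `step(state, inputs)` over every sequence of
    `n_inputs` boolean inputs of length `steps`; return (True, None) if the
    invariant holds in all reached states, else (False, counterexample)."""
    frontier = {state0}
    for _ in range(steps):
        nxt = set()
        for st in frontier:
            for inp in product([False, True], repeat=n_inputs):
                s2 = step(st, inp)
                if not invariant(s2):
                    return False, s2          # violating state found
                nxt.add(s2)
        frontier = nxt
    return True, None
```

For a saturating counter `step = lambda c, i: min(c + i[0], 3)` the invariant `c <= 3` holds; removing the `min` cap (a "malicious" modification) yields a counterexample state, mirroring how an extracted invariant flags injected malware.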
Title: A new family of heavy tailed distributions with an application to the heavy tailed insurance loss data Abstract: Heavy tailed distributions play a very significant role in the study of actuarial and financial risk management data, but the probability distributions proposed to model such data are scarce. Actuaries often search for new and appropriate statistical models to address data related to financial risk problems. In this work, we propose a new family of heavy tailed distributions and obtain some of its basic properties. A special sub-model of the proposed family, called the new heavy tailed Weibull model, is considered in detail. The maximum likelihood estimators of the model parameters are obtained, and a Monte Carlo simulation study is carried out to evaluate their performance. Furthermore, actuarial measures such as value at risk and tail value at risk are calculated, and a simulation study based on these measures is conducted. Finally, an application of the proposed model to a heavy tailed insurance loss data set is presented.
101,217
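The two actuarial measures named in the abstract have simple empirical counterparts. The sketch below computes sample-based value at risk (the level-quantile of the losses) and tail value at risk (the mean loss beyond VaR); it uses one common quantile convention among several, as an illustrative choice.

```python
def var_tvar(losses, level=0.95):
    """Empirical Value at Risk and Tail Value at Risk of a loss sample.
    VaR is the order statistic at the `level` quantile; TVaR is the mean
    of the losses at or beyond VaR."""
    xs = sorted(losses)
    idx = min(int(level * len(xs)), len(xs) - 1)
    var = xs[idx]
    tail = xs[idx:]
    return var, sum(tail) / len(tail)
```

TVaR is always at least VaR, and the gap between them widens as the loss distribution's tail gets heavier, which is why both are reported for heavy tailed insurance data.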
Title: LoRa-Based Medical IoT System Architecture and Testbed Abstract: Medical IoT (MIoT), or the Internet of Medical Things, is a new concept aimed at improving e-health services using interconnected medical devices. Medical applications are classified as critical in the context of the Internet of Things, and with the new MIoT paradigm one must analyse and improve every component of the IoT architecture to develop reliable platforms. This paper reviews the current progress of the Internet of Medical Things and Low Power Wide Area Networks, presenting the advantages and drawbacks of current systems and technologies. Moreover, it proposes new Medical IoT architectures based on LoRa technology dedicated to homecare and hospital services.
101,259
Title: Weighted symmetrized centered discrepancy for uniform design Abstract: Uniformity and projection uniformity are very important criteria in space-filling design and have been studied by many authors. In this paper, a new criterion, called the weighted symmetrized centered discrepancy (WSCD), is put forward for measuring the uniformity of a design array. First, a new kernel function is deduced by symmetrizing the reproducing kernel of the centered L-2 discrepancy (CD). The WSCD criterion is generated from this symmetrized reproducing kernel, which allows it to avoid the well-known shortcomings of the CD. Second, customized weights are assigned to all sub-dimensions to conform with the effect hierarchy principle. The WSCD has a computation formula similar to those of popularly used discrepancies, so all existing construction methods for uniform design still work. Third, the choice of weights is discussed, and a practical suggestion is made. An empirical comparison shows that the new criterion performs better than existing discrepancies in variable screening.
101,426
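For context, the unweighted quantity that the WSCD symmetrizes and reweights is the centered L-2 discrepancy, which has a closed-form expression (Hickernell's formula) over the design points in the unit cube. A direct Python transcription of that formula, shown here as background rather than the paper's new criterion:

```python
def centered_l2_discrepancy(points):
    """Squared centered L2 discrepancy (CD^2) of an n x s design with
    points in [0,1]^s, via Hickernell's closed-form formula."""
    n, s = len(points), len(points[0])
    term1 = (13 / 12) ** s
    term2 = 0.0
    for x in points:
        prod = 1.0
        for xk in x:
            prod *= 1 + 0.5 * abs(xk - 0.5) - 0.5 * abs(xk - 0.5) ** 2
        term2 += prod
    term3 = 0.0
    for x in points:
        for y in points:
            prod = 1.0
            for xk, yk in zip(x, y):
                prod *= (1 + 0.5 * abs(xk - 0.5) + 0.5 * abs(yk - 0.5)
                         - 0.5 * abs(xk - yk))
            term3 += prod
    return term1 - 2 / n * term2 + term3 / n ** 2
```

Lower values indicate a more uniform design; construction methods for uniform designs search for arrays minimizing such a discrepancy, which is why the WSCD's similar computational form lets those methods carry over.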
Title: A decision support system for the dynamic hazardous materials vehicle routing problem Abstract: The problem of delivering hazardous materials to a set of customers in a dynamic environment is both relevant and challenging. The objective is to find the best routes that minimize both the transportation cost and the travel risk while meeting the customers' demands within predefined time windows. Aside from the difficulties involved in modeling the problem, the solution must take into consideration demands revealed over time. A solution approach is therefore required that continuously adapts the planned routes in order to respond to the customers' demands. In this paper, the dynamic variant of the Hazardous Materials Vehicle Routing Problem with Time Windows (DHVRP) is introduced. In addition, a decision support system is developed for the DHVRP to generate the best routes, based on two new meta-heuristics: a bi-population genetic algorithm and a hybrid approach combining the genetic algorithm with variable neighborhood search. An experimental investigation evaluates the proposed algorithms on Solomon's 56 benchmark instances using several performance measures. We show through computational experiments that the new approaches are highly competitive with two state-of-the-art algorithms.
101,442
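Metaheuristics such as the GA and VNS described above need an initial feasible route to improve on. A common choice is the greedy nearest-neighbor construction heuristic, sketched below; this is a generic building block, not the paper's algorithm, and it ignores time windows and risk for brevity.

```python
def nearest_neighbor_route(depot, customers, dist):
    """Greedy nearest-neighbor construction: start at the depot, repeatedly
    visit the closest unserved customer, and return to the depot. `dist`
    is a callable giving the travel cost between two locations."""
    route, current = [depot], depot
    remaining = set(customers)
    while remaining:
        nxt = min(remaining, key=lambda c: dist(current, c))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    route.append(depot)
    return route
```

A GA would seed (part of) its initial population with such routes, and a VNS would perturb and locally improve them; in the dynamic setting the heuristic can also re-insert newly revealed customers into the current plan.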
Title: Optimized support vector neural network and contourlet transform for image steganography Abstract: Image steganography is one of the promising and popular techniques used to secure sensitive information. Although numerous steganography techniques exist for hiding sensitive information, effectively hiding sensitive data still poses many challenges for researchers. Thus, an effective pixel prediction-based image steganography method is proposed, which uses an error-dependent SVNN classifier for pixel identification. Suitable pixels are identified in the medical image by the SVNN classifier using pixel features such as edge information, pixel coverage, texture, wavelet energy, Gabor, and scattering features. Here, the SVNN is trained optimally using the GA or MS Algorithm based on the minimal error. Then, the CT is applied to the predicted pixels for embedding, and finally the inverse CT is employed to extract the secret message from the embedded image. Experiments on the BRATS database, evaluated with PSNR, SSIM, and the correlation coefficient, achieved 89.3253 dB, 1, and 1 for the image without noise and 48.5778 dB, 0.6123, and 0.9933 for the image affected by noise, respectively.
101,505
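The embed/extract cycle at the heart of any steganography scheme can be shown with the simplest spatial-domain method: least-significant-bit (LSB) embedding. This is a deliberately minimal stand-in for the paper's contourlet-domain embedding at SVNN-selected pixels, writing one message bit into the low bit of each chosen pixel.

```python
def embed_bits(pixels, bits):
    """Write one message bit into the least significant bit of each of the
    first len(bits) pixels (8-bit grayscale values)."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b     # clear LSB, then set it to the bit
    return out

def extract_bits(pixels, n):
    """Recover the n embedded message bits from the pixel LSBs."""
    return [p & 1 for p in pixels[:n]]
```

Each pixel changes by at most 1 gray level, which is why simple embedding leaves quality metrics such as PSNR and SSIM nearly untouched; transform-domain schemes like the contourlet approach trade a little more machinery for robustness.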
Title: Bayesian wavelet shrinkage with logistic prior Abstract: Consider the nonparametric curve estimation problem. Wavelet shrinkage methods are applied to the data in the wavelet domain for noise reduction; after denoising, the function can be estimated by wavelet basis expansion. The present paper proposes a Bayesian approach to wavelet shrinkage that uses a mixture of a point mass at zero and the logistic distribution symmetric around zero as the prior distribution for the wavelet coefficients of an unknown function in models with additive Gaussian errors. Statistical properties such as bias and classical and Bayesian risks of the rules are analyzed, and the performance of the proposed rules is assessed in simulation studies involving the Donoho-Johnstone test functions. An application to a real mass spectrometry data set is also presented.
101,580
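To make "shrinkage rule" concrete, here is the classical soft-thresholding rule that Bayesian rules such as the logistic-prior posterior mean are compared against: coefficients below the threshold are zeroed (the point-mass effect), larger ones are pulled toward zero. This is a stand-in for illustration, not the paper's posterior-based rule.

```python
def soft_threshold(coeffs, t):
    """Soft-thresholding shrinkage: zero out wavelet coefficients with
    magnitude at most t, shrink the rest toward zero by t."""
    return [0.0 if abs(d) <= t else (d - t if d > 0 else d + t)
            for d in coeffs]
```

Applied to the empirical wavelet coefficients of noisy data, this suppresses the many small noise-dominated coefficients while retaining (shrunken) large signal coefficients, after which the inverse wavelet transform gives the denoised curve estimate.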
Title: Quantum identity authentication protocol based on three-photon quantum error avoidance code in edge computing Abstract: Information security protection is always one of the most significant and fundamental issues in edge computing, generally including cryptography, authentication, intrusion detection, privacy protection, and other technologies. Quantum authentication is one of the latest extensions of authentication technology into the quantum field. So far, most proposed quantum identity authentication protocols assume noise-free environments. However, because quantum noise in quantum channels is inevitable, noise immunity is very important for quantum identity authentication. In this article, a novel quantum identity authentication protocol based on a three-photon error avoidance code is proposed. In this protocol, quantum information is encoded on the noiseless subsystem, so the protocol can effectively resist the interference of channel noise with information transmission. A comprehensive analysis of antinoise performance and a security analysis against various eavesdropping attacks show that the new protocol has both good antinoise performance and good security.
101,621
Title: A new ensemble learning method based on learning automata Abstract: Improving the performance of machine learning algorithms has always been a topic of interest in data mining. Ensemble learning is a machine learning approach that, according to the literature, yields better accuracy than a single base learner. In ensemble learning, all base learners are usually considered to be at the same level in terms of power and separation capability. However, whether the ensemble is made of homogeneous or heterogeneous base learners, the weaknesses and strengths of the individual base learners are ignored. To overcome this, a stronger coefficient of influence should be assigned to stronger base learners and a lower coefficient to weaker ones. However, given that real-world data is associated with uncertainty, it is impossible to determine in advance which base learner performs better than the others. Learning automata are a desirable option in the reinforcement learning literature for dealing with dynamic environments; a learning automaton works by receiving feedback from its environment. In this paper, a method named LAbEL is proposed that assigns the coefficient of influence to each base learner in the ensemble dynamically. Owing to the use of learning automata, the proposed method adjusts to the conditions of the problem space, making it applicable to problems where data has nonlinear and unpredictable behavior.
101,649
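The idea of dynamically weighting base learners by feedback can be sketched with a weighted vote plus a linear reward/penalty update in the spirit of a learning automaton. This is an illustrative simplification, not the LAbEL algorithm itself; the learning rate and update form are assumptions.

```python
def weighted_vote(predictions, weights):
    """Weighted majority vote over the base learners' predictions."""
    scores = {}
    for p, w in zip(predictions, weights):
        scores[p] = scores.get(p, 0.0) + w
    return max(scores, key=scores.get)

def update_weights(weights, predictions, truth, lr=0.1):
    """Reward learners that predicted the true label (linear reward),
    penalize the rest, then renormalize -- so coefficients of influence
    track each base learner's observed reliability."""
    new = [w + lr * (1 - w) if p == truth else w * (1 - lr)
           for w, p in zip(weights, predictions)]
    total = sum(new)
    return [w / total for w in new]
```

After each labeled example the environment's feedback shifts influence toward the currently stronger base learners, letting the ensemble adapt when the data's behavior drifts.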
Title: On a global efficiency criterion in multiobjective variational control problems with path-independent curvilinear integral cost functionals Abstract: In this paper, a global efficiency criterion is established for a class of multidimensional variational control problems governed by first order PDE&PDI constraints and path-independent curvilinear integral cost functionals. More precisely, a minimal criterion under which a local efficient solution of the considered optimization problem is also a global efficient solution is formulated and proved. The theoretical developments derived in the paper are accompanied by an example involving a nonconvex optimization problem.
101,651