Title: Smart home health monitoring system for predicting type 2 diabetes and hypertension Abstract: Home health monitoring can facilitate remote monitoring of diabetes and blood pressure patients. Early detection of hypertension and diabetes is extremely important, as these chronic diseases often result in life-threatening complications when found at a later stage. This work proposes a smart home health monitoring system that analyzes the patient's blood pressure and glucose readings at home and notifies the healthcare provider if any abnormality is detected. A combination of conditional decision-making and machine-learning approaches is used to predict hypertension and diabetes status, respectively. The goal is to predict hypertension and diabetes status from the patient's glucose and blood pressure readings. Using supervised machine-learning classification algorithms, a system is trained to predict the patient's diabetes and hypertension status. After evaluating the candidate classification algorithms, the support vector machine was found to be the most accurate and was therefore chosen to train the model. The proposed work develops a home health monitoring application with a user-friendly, easy-to-use graphical user interface that diagnoses the blood pressure and diabetes status of patients and sends categorized alerts and real-time notifications to their registered physician or clinic, all from home. (C) 2020 The Authors. Production and hosting by Elsevier B.V. on behalf of King Saud University.
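The conditional decision-making half of the system can be sketched as a simple rule table. A minimal illustration follows; the cut-offs are common AHA-style guideline values, not the paper's (unstated) thresholds, and the machine-learning (SVM) half is omitted:

```python
def bp_category(systolic, diastolic):
    # Rule-based blood-pressure triage. Thresholds follow common AHA
    # guidance and are illustrative; the paper's exact cut-offs are
    # not given in the abstract.
    if systolic >= 180 or diastolic >= 120:
        return "hypertensive crisis"
    if systolic >= 140 or diastolic >= 90:
        return "hypertension stage 2"
    if systolic >= 130 or diastolic >= 80:
        return "hypertension stage 1"
    if systolic >= 120:
        return "elevated"
    return "normal"
```

A reading in the upper categories would trigger the system's alert to the registered physician.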
58,835
Title: A new repair model and its optimization for cold standby system Abstract: In this paper, we propose a new repair model for a cold standby system, which consists of two components and one repairman. It is assumed that the consecutive working times follow a decreasing geometric process after repair, and that the repair time interval is a constant for component 1. For component 2 (the standby component), the failure process during working time follows a Generalized Polya Process, which is a generalized version of the nonhomogeneous Poisson process. Component 2 is rectified by Generalized Polya Process repair when it fails. The repair time of component 2 is assumed to be negligible, and component 1 is assumed to have priority in use. The long-run average cost rate function of the system is derived based on the failure number of component 1. Moreover, the optimal replacement policy of the model is established by minimizing the long-run average cost rate function theoretically, and the existence and uniqueness of the optimal replacement policy are proved. Numerical examples are provided to verify the effectiveness of the proposed approaches. Sensitivity analyses are conducted to illustrate the influence of the parameters under the optimal replacement policy.
59,007
Title: Combinatorial test list generation based on Harmony Search Algorithm Abstract: Combinatorial test case generation faces the problem of how to reduce the number of test cases by uncovering unnecessary ones. There is thus a need for expert applications or strategies that generate the most nearly optimal test cases while keeping the most important combinations. Complementing existing work on combinatorial test case generation strategies, also known as t-way testing strategies (where t represents the interaction degree), this paper presents the design and implementation of a new combinatorial test list generation strategy based on the Harmony Search (HS) algorithm, called the General T-way Harmony Search-based Strategy (GTHS). HS has been chosen as the main engine of test generation because it balances intensification and diversification. Benchmarking experimental results show that GTHS produces competitive results compared to other existing well-known optimization-based strategies and provides support for high combination degrees (i.e., t ≤ 12).
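As a sketch of the engine behind GTHS, the core Harmony Search loop (memory consideration, pitch adjustment, random selection) can be written in a few lines. Here it minimizes a toy continuous objective rather than generating t-way test lists, and all parameter values (hms, hmcr, par, bandwidth) are illustrative defaults, not the paper's settings:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=500, seed=1):
    # Minimal Harmony Search: improvise a new harmony coordinate-wise
    # from (memory consideration / pitch adjustment / random selection)
    # and replace the worst memory slot whenever the new one is better.
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(h) for h in mem]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                  # memory consideration
                x = mem[rng.randrange(hms)][d]
                if rng.random() < par:               # pitch adjustment
                    x += rng.uniform(-1, 1) * 0.05 * (hi - lo)
                x = min(max(x, lo), hi)
            else:                                    # random selection
                x = rng.uniform(lo, hi)
            new.append(x)
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:
            mem[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return mem[best], scores[best]
```

In GTHS the "harmony" would instead encode a candidate test case and the objective would be the number of uncovered t-way interactions it hits.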
59,091
Title: Functional linear models for interval-valued data Abstract: Aggregation of large databases into a specific format is a frequently used process to make the data easily manageable. Interval-valued data is one of the data types generated by such an aggregation process. Using traditional methods to analyze interval-valued data results in loss of information, and thus several interval-valued data models have been proposed to gather reliable information from such data types. On the other hand, recent technological developments have led to high-dimensional and complex data in many application areas, which may not be analyzable by traditional techniques. Functional data analysis is one of the most commonly used techniques for analyzing such complex datasets. While functional extensions of many traditional statistical techniques are available, the functional form of interval-valued data has not been well studied. This article introduces the functional forms of some well-known regression models that take interval-valued data. The proposed methods are based on the function-on-function regression model, where both the response and predictor/s are functional. Through several Monte Carlo simulations and empirical data analysis, the finite sample performance of the proposed methods is evaluated and compared with the state-of-the-art.
59,270
Title: Polyad inconsistency measure for pairwise comparisons matrices: max-plus algebraic approach Abstract: A max-algebraic approach is applied in this study to assess the inconsistency of pairwise comparisons (PC) matrices. An input PC matrix is flexible: It can be nonreciprocal, inconsistent, and incomplete. Contrary to previous studies, inconsistency is examined for all polyads with varying cycle lengths, while typical assessment methods are triad based. Central is max-plus algebra, also known as tropical geometry. An input PC matrix is converted by a logarithmic mapping into linear space, an eigenvalue problem in max-plus algebra is then solved to obtain the most significant inconsistent cycle across the associated graph. The max-algebraic eigenvalue reflects the extent of inconsistency, and its exponential inversion signifies the maximum geometric mean of cycle inconsistencies. The new measure can thus be comprehended in both geometric mean and polyad contexts. First-time users of max-plus algebra can compute an approximate value with an off-the-shelf computational environment that provides matrix functionalities. The resulting measure passes all property requirements stipulated in three related categories of literature. Analytical results highlight the significance of our method.
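The computation described (log-map the PC matrix, find the max-plus eigenvalue, invert the log) can be reproduced with Karp's maximum-cycle-mean algorithm, since the max-plus eigenvalue of an irreducible matrix equals its maximum cycle mean. A minimal sketch under that standard identity, not the authors' implementation:

```python
import math

def max_cycle_mean(W):
    # Karp's algorithm for the maximum cycle mean of a strongly
    # connected weighted digraph (weights W[i][j]); this equals the
    # max-plus eigenvalue of W.
    n = len(W)
    NEG = float("-inf")
    F = [[NEG] * n for _ in range(n + 1)]
    F[0][0] = 0.0                      # paths of length k from vertex 0
    for k in range(1, n + 1):
        for v in range(n):
            F[k][v] = max(F[k - 1][u] + W[u][v] for u in range(n))
    best = NEG
    for v in range(n):
        if F[n][v] == NEG:
            continue
        ratios = [(F[n][v] - F[k][v]) / (n - k)
                  for k in range(n) if F[k][v] > NEG]
        best = max(best, min(ratios))
    return best

def inconsistency(A):
    # Log-map the PC matrix, take the max-plus eigenvalue, and invert
    # the log: the result is the maximum geometric mean over all cycles.
    L = [[math.log(a) for a in row] for row in A]
    return math.exp(max_cycle_mean(L))
```

For a fully consistent PC matrix every cycle has geometric mean 1, so the measure equals 1; inconsistent triads push it above 1.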
59,271
Title: Artificial neural networks with a signed-rank objective function and applications Abstract: In this paper, we propose to analyze artificial neural networks using a signed-rank objective function as the error function. We prove that the variance of the gradient of the learning process is bounded as a function of the number of patterns and/or outputs, therefore preventing the gradient explosion phenomenon. Simulations show that the method is particularly efficient at reproducing chaotic behaviors from biological models such as the Logistic and Ricker models. In particular, the accuracy of the learning process is improved relative to the least squares objective function in these cases. Applications in regression settings on two real datasets, one small and the other relatively large, are also considered.
59,349
Title: A double sampling multivariate T-2 control chart with variable sample size and variable sampling interval Abstract: This paper presents an adaptive T-2 control chart for monitoring shifts in the mean vector of a multivariate normally distributed process. This chart, which combines the double sampling, variable sample size, and variable sampling interval features, is called the DSVSSI T-2 chart; it uses three different sample sizes, two different sampling intervals, two warning limits, and two different control limits. A Markov chain approach is employed to compute the statistical performance measures. The proposed chart is compared with the existing standard T-2 and other adaptive multivariate T-2 charts. In all cases, the DSVSSI T-2 chart is quicker at detecting small to moderate shifts in the mean vector. An example is provided to illustrate the construction and implementation of the DSVSSI T-2 chart. Finally, some concluding remarks are given.
59,372
Title: A system dynamics-based approach to determinants of family business growth Abstract: Family business growth is crucial not only for the long-term sustainability of this type of firm but also for economic growth as a whole. However, family businesses face internal and external obstacles that hamper their healthy, sustainable growth. Analyses of family business growth are critical to overcoming these barriers. By combining fuzzy cognitive mapping techniques and the system dynamics approach, this study sought to create an analysis model that allows for a more holistic perspective on determinants of family business growth, their cause-and-effect relationships, and thus their long-term behavior. The model developed is a complete, transparent, and realistic tool that enables decision makers to make better and more informed decisions. This study was grounded in the insights provided by a panel of experts in family business, who shared their knowledge and experience over the course of two group sessions. The results of the static and dynamic analyses carried out were validated by the head of department and a board member of the Southern Business Support Center of Portugal’s Institute for the Support of Small and Medium-sized Enterprises and Innovation (IAPMEI). This research’s contributions are also discussed, and suggestions are made for future research.
59,609
Title: Enhancing understanding of foundation concepts in first year university STEM: evaluation of an asynchronous online interactive lesson Abstract: The transition to tertiary-level learning requires that students take responsibility for their own learning and independently synthesise conceptual knowledge specific to their discipline. We designed an interactive online lesson that aimed to build student engagement and foster self-regulation of understanding for a foundation concept in molecular biology, in a first year general biology unit delivered to both on-campus and off-campus cohorts. Students demonstrated a high level of engagement following exposure to the lesson; more attempted the molecular biology exam question and subsequently recorded higher grades than in the previous year. The inclusion of an interactive online lesson helped these first year science students take responsibility for their own learning (through asynchronous engagement) and independently synthesise conceptual knowledge (improved summative outcomes). Development and application of online lessons for STEM disciplines that require students to synthesise discipline-specific content has the potential to improve students' successful transition to university education.
59,648
Title: A Kriging-based multi-point sequential sampling optimization method for complex black-box problems. Abstract: The Generalized Efficient Global Optimization (GEGO) algorithm, assisted by a Kriging model, can solve computationally expensive black-box problems. However, obtaining a single sampling point in each iteration may cause longer objective-evaluation time and slower convergence than multi-point sampling optimization methods. To address this, a Kriging-based multi-point sequential sampling optimization (KMSSO) method is presented. The proposed method uses the uncertainty estimates of Kriging to construct a multi-point generalized Expected Improvement (EI) criterion. In each optimization cycle, this criterion is maximized to produce Pareto front data, which are further screened to obtain the final expensive evaluation points. On numerical tests and an engineering case, KMSSO is compared to the GEGO and HAM algorithms and is shown to deliver better results. It is also shown that when multiple points are added per cycle, both the optimization accuracy and the convergence properties are improved.
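The single-point building block of such a criterion, the closed-form Expected Improvement under a Gaussian Kriging prediction, looks like this (a standard textbook formulation; the paper's generalized multi-point EI is not reproduced here):

```python
import math

def expected_improvement(mu, sigma, f_min):
    # Closed-form EI for a Gaussian prediction N(mu, sigma^2) against
    # the best observed value f_min (minimization convention).
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)       # no predictive uncertainty
    z = (f_min - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))          # normal cdf
    return (f_min - mu) * Phi + sigma * phi
```

Maximizing EI trades off exploitation (low mu) against exploration (high sigma); multi-point variants score a batch of candidates jointly.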
59,656
Title: Using a sequential gradient-enhanced-Kriging optimal experimental design method to build a high-precision approximate model for complex simulation problems. Abstract: Surrogate models based on Kriging have been widely used to approximate computationally expensive simulation problems. Although the accuracy of gradient-enhanced Kriging (GEK) is often higher than that of ordinary Kriging, designers cannot avoid the additional time consumed by GEK's gradient calculations. To this end, a sequential gradient-enhanced-Kriging optimal experimental design method with the Gaussian correlation function (GCF) is investigated to approximate complex black-box simulation problems by introducing gradient information of the Kriging parameters. Because the GCF is differentiable, the gradient information can be evaluated simply. This characteristic enables the proposed method to effectively improve the modeling accuracy and efficiency of GEK. As expected, test results on benchmark functions and a cycloid gear pump simulation show the feasibility, stability, and applicability of the proposed method.
59,664
Title: On Face Irregular Evaluations of Plane Graphs Abstract: We investigate face irregular labelings of plane graphs and introduce new graph characteristics, namely the face irregularity strength of type (alpha,beta,gamma). We obtain estimates of these parameters and determine the precise values for certain families of plane graphs, which prove the sharpness of the lower bounds.
59,677
Title: Bootstrap bandwidth selection in time-varying coefficient models with jumps Abstract: Time-varying coefficient models (TVCMs) are very important tools for exploring the hidden structure between a time series response variable and its predictors. The assumption that the coefficient functions are smooth can be restrictive in practice, so TVCMs with jumps need to be considered. Selection of smoothing parameters plays a critical role in the performance of the coefficient function estimates. In this article, we first develop a nonparametric two-step estimation procedure for estimating a finite number of jumps of the coefficient functions. Then, based on the estimated jumps, we propose a bootstrap bandwidth selection procedure for TVCMs with jumps. Monte Carlo simulations are conducted to evaluate the finite sample performance of the proposed data-driven estimation and bootstrap bandwidth selection procedures. As an illustration, the proposed methodologies are further applied to a real data example.
59,693
Title: Three economical life test acceptance sampling plans Abstract: The main objective of life test sampling plans developed under Type-I censoring, that is, time truncation, is to make the decision about a lot with the minimum sample size for specified producer's and consumer's risks and mean life. In this article, we develop three new economical life test sampling plans by introducing a two-stage time-censoring scheme and making use of a modified exponentially weighted moving average statistic. From a comparison of life test sampling design parameters for the generalized exponential and Weibull distributions with a similar acceptance sampling plan in the literature, we find that the most economical life test sampling plan is the one that employs both the above-mentioned scheme and statistic. We give two examples, including one real-life example, to illustrate the performance of the three new plans. Tables of life test sampling design parameters are simulated using the software R 3.5.2.
59,857
Title: Monitoring of water quality in a shrimp farm using a FANET Abstract: This paper develops an architecture for flying ad-hoc networks (FANETs) to enable monitoring of water quality in a shrimp farm. Firstly, the key monitoring parameters for the characterization of water quality are highlighted and their desired operational ranges are summarized. These parameters directly influence shrimp survival and healthy growth. Based on the considered sensing modality, a reference architecture for implementing a cost-effective FANET based mobile sensing platform is developed. The controlled mobility of the platform is harnessed to increase the spatial monitoring resolution without the need for extensive infrastructure deployment. The proposed solution will be offered to shrimp farmers in the Mexican state of Colima once the laboratory trials are concluded.
60,193
Title: A balm: defend the clique-based attack from a fundamental aspect Abstract: The clique, as the most compact cohesive component in a graph, has been employed to identify cohesive subgroups of entities and to explore sensitive information in online social networks, crowdsourcing networks, cyber-physical networks, etc. In this study, we focus on defending against clique-based attacks and aim to reduce the security/privacy risk for entities in clique structures. Since the ultimate solution for defending against clique-based attacks is destroying the clique at minimum cost, we formulate the clique-destroying problem in networks from a fundamental algorithmic perspective. Interestingly, we notice that the clique-destroying problem in directed graphs is still unsolved, and no complexity analysis exists for it. We therefore propose a formal clique-destroying problem, prove its NP-completeness with a solid theoretical analysis, and then present effective and efficient algorithms for both undirected and directed graphs. Furthermore, we show how to extend our algorithm to data privacy protection applications with a controllable parameter k, which adjusts the size of the clique we wish to destroy. Comparing our algorithm with up-to-date anonymization approaches, experiments on real data demonstrate that our solution can effectively defend against clique-based security and privacy attacks.
60,218
Title: Reliability assessment for products with two performance characteristics based on marginal stochastic processes and copulas Abstract: Many highly reliable products (systems) usually have multiple dependent degradation processes because of their complex structure. Therefore, it is important to investigate the multivariate degradation models along with the reliability assessment in the stochastic modeling. This article proposes a framework for bivariate degradation modeling based on hybrid stochastic processes for products with two performance characteristics (PCs), the dependence of which is captured by copula functions. Considering heterogeneities among product units, different random effects are introduced in marginal stochastic processes. Then two classes of reliability functions based on joint posterior distributions are derived. In addition, a simulation study indicates that the holistic Bayesian parameter estimation method is better than the two-step Bayesian parameter estimation method. Finally, this article concludes with a case application to demonstrate the effectiveness of the proposed model.
60,327
Title: The development of a goodness-of-fit test for high level binary multilevel models Abstract: Before making inferences about a population using a fitted model, it is necessary to determine whether the fitted model describes the data well. A poorly fitted model may lead to biased and invalid conclusions, resulting in incorrect inferences. Recent studies show the necessity of goodness-of-fit tests for high level binary multilevel models. The focus here was to develop a goodness-of-fit test for the model adequacy testing of high level binary multilevel models and to examine whether the type I error and power hold for the newly developed goodness-of-fit test, considering a three-level random intercept model.
60,400
Title: Confidence intervals for difference between two binomial proportions derived from logistic regression Abstract: A common method of reporting the result of logistic regression is to provide an odds ratio and its corresponding confidence interval. The results of such statistical analyses cannot be further evaluated with respect to the consistency of confidence intervals between the odds ratio and the difference between proportions. In this paper, we propose a simple method to construct the confidence intervals for the difference between binomial proportions based on parameter estimates of logistic regression. Simulation results showed that the score-based confidence interval based on the sample marginal approach is a recommended method for sensitivity analysis in randomized clinical trials.
60,419
Title: Finding the strong efficient frontier and strong defining hyperplanes of production possibility set using multiple objective linear programming Abstract: Data envelopment analysis (DEA) models use the frontier of the production possibility set (PPS) to evaluate decision making units (DMUs). However, the explicit-form equations of the frontier cannot be obtained using the traditional DEA models. To fill this gap, the current paper proposes an algorithm to generate all strong-efficient DMUs and the explicit-form equations of the strong-efficient frontier and the strong defining hyperplanes for the PPS with the variable returns to scale (VRS) technology. The algorithm is based on a multiple objective linear programming (MOLP) problem in the DEA methodology, which is solved through the multicriteria simplex method. Also, Isermann's test is employed to specify strong-efficient nonbasic variables in each strong-efficient multicriteria simplex table. Before presenting the algorithm, a theoretical framework is introduced to characterize the relationships between the feasible region in the decision space of the MOLP problem and the PPS with the VRS technology. It is shown that the algorithm which has four phases is finitely convergent and has less computational complexity than other algorithms in the related literature. Finally, two examples are used to illustrate the algorithm.
60,475
Title: Optimal solutions for online conversion problems with interrelated prices Abstract: We consider various online conversion problems with interrelated prices. The first variant is the online time series search problem with unknown bounds on the relative price change factors. We design the optimal online algorithm IPN to solve this problem. We then consider the time series search with known bounds. Using the already established UND algorithm of Zhang et al. (J Comb Optim 23(2):159-166, 2012), we develop a new optimal online algorithm oUND which improves the experimental performance of the already existing optimal online algorithm for selected parameter constellations. We conduct a comparative experimental testing of UND and oUND and establish the parameter combinations for which one algorithm is better than the other. We then combine these two algorithms into a new one called cUND. This algorithm incorporates the strengths of UND and oUND and is also optimal online. Finally, we consider another variant, the general k-max search problem with interrelated prices, and also develop an optimal online algorithm.
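For intuition, the classical reservation-price policy for time series search with known price bounds [m, M] (a textbook baseline, not the IPN/oUND algorithms of the paper) accepts the first price at or above sqrt(m*M) and achieves competitive ratio sqrt(M/m):

```python
import math

def reservation_policy(prices, m, M):
    # Accept the first price >= sqrt(m*M); if none arrives, the player
    # is forced to take the last price of the sequence.
    p_star = math.sqrt(m * M)
    for p in prices[:-1]:
        if p >= p_star:
            return p
    return prices[-1]
```

Interrelated prices, as studied in the paper, constrain how far consecutive prices can move, which is what allows better-than-sqrt(M/m) guarantees.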
60,486
Title: Improving parking availability prediction in smart cities with IoT and ensemble-based model. Abstract: Smart cities are part of the ongoing advances in technology aimed at providing a better quality of life to their inhabitants. Urban mobility is one of the most important components of smart cities. Due to the growing number of vehicles in these cities, urban traffic congestion is becoming more common. In addition, finding a place to park, even in car parks, is not easy for drivers, who often circle in search of a spot. Studies have shown that drivers looking for parking spaces contribute up to 30% of traffic congestion. In this context, it is necessary to predict the spaces available in the parking lots where drivers want to park. We propose in this paper a new system that integrates the IoT and a predictive model based on ensemble methods to optimize the prediction of parking space availability in smart parking. The tests we carried out on the Birmingham parking data set reached a Mean Absolute Error (MAE) of 0.06% on average with the Bagging Regression (BR) algorithm. These results improve on the best existing performance by over 6.6% while dramatically reducing system complexity.
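The ensemble idea behind Bagging Regression can be sketched in pure Python: fit base learners on bootstrap resamples and average their predictions. The base learner below is a closed-form simple linear regression, an illustrative stand-in for the regression trees typically used in bagging:

```python
import random

def fit_line(xs, ys):
    # Closed-form simple linear regression: returns (slope, intercept).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    if sxx == 0:
        return 0.0, my          # degenerate resample: fall back to mean
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return b, my - b * mx

def bagging_predict(xs, ys, x_new, n_estimators=25, rng=None):
    # Bagging: average the predictions of base learners fitted on
    # bootstrap resamples of the training data.
    rng = rng or random.Random(0)
    n = len(xs)
    preds = []
    for _ in range(n_estimators):
        idx = [rng.randrange(n) for _ in range(n)]
        b, a = fit_line([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(a + b * x_new)
    return sum(preds) / len(preds)
```

Averaging over resamples reduces the variance of the base learner, which is the property the paper exploits for occupancy prediction.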
60,517
Title: A new approach to design median control charts for location monitoring Abstract: In SPC, the most effective and versatile tool is the control chart. The structure of control charting schemes requires the assumption that the process is free from disturbances, with parameters known or correctly estimated from the in-control process. These assumptions must be fulfilled when monitoring the location parameter with mean control charts. In practice, however, they do not always hold, owing to the occasional presence of outliers. The median is therefore a more suitable measure than the mean in the presence of disturbances. Ranked and neoteric ranked set sampling schemes are suggested in this study to monitor the location parameter of the process, using the median estimator in Shewhart-type control charts. The run-length profile is used as a performance measure for the proposed median-based control charts. The results reveal that median control charts based on the neoteric ranked set sampling scheme yield excellent outcomes. A real-life application of the new charting schemes is provided.
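A minimal Shewhart-type median chart builds its limits from subgroup medians and a range-based spread estimate. The constants below (d2 for subgroups of five, 1.253 ≈ sqrt(pi/2) for the efficiency of the median) are standard illustrative choices, not the ranked-set-sampling estimators the paper develops:

```python
from statistics import median

def median_chart_limits(subgroups, L=3.0):
    # Center line: median of the subgroup medians.
    # Spread: average subgroup range / d2 estimates sigma; the median's
    # standard error inflates sigma/sqrt(n) by about sqrt(pi/2).
    d2 = 2.326                              # mean-range constant, n = 5
    meds = [median(g) for g in subgroups]
    rbar = sum(max(g) - min(g) for g in subgroups) / len(subgroups)
    n = len(subgroups[0])
    sigma_med = 1.253 * (rbar / d2) / n ** 0.5
    cl = median(meds)
    return cl - L * sigma_med, cl, cl + L * sigma_med
```

A plotted subgroup median falling outside (LCL, UCL) signals a shift in the location parameter.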
60,716
Title: Multiple regression analysis for dynamics of patient volumes Abstract: We study a real data set of 7,894,947 patients who received service from the University of Michigan Health System (UMHS) from January 1, 2003 to December 31, 2008, using regression analysis to understand the dynamics of patient volume. Our objective is to identify patterns in the time series of patient volume during the economic crisis. We propose a contribution-adjusted formula to understand the dynamics of a heterogeneous customer population. We find that the trend of patient volume for a health system is positively correlated with the trend of the underlying adjusted resident population and with GDP rates, and negatively correlated with the annual unemployment rate. We also find that the percent change of patient volume in a health system depends on the threshold level curves of resident population and unemployment rate with nonlinear behavior. Our multiple regression model with quadratic response surface explains 98.9% of the variation. Moreover, the multiple regression model having lag 1 with an interaction term explains 96.5% of the variation. Furthermore, we propose several models having dummy variables using localities for patient groups. Overall, our results suggest that people use more health services when they have sufficient income, a job, and health insurance.
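A quadratic response surface of the kind used above can be fitted by ordinary least squares via the normal equations. A dependency-free sketch for a single predictor (the paper's model has multiple predictors, but the mechanics are the same):

```python
def fit_quadratic(xs, ys):
    # Least-squares fit of y = b0 + b1*x + b2*x^2.
    X = [[1.0, x, x * x] for x in xs]
    # Normal equations: (X'X) b = X'y.
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(3)]
    # Gauss-Jordan elimination with partial pivoting.
    M = [row + [v] for row, v in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * q for a, q in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]
```

The fitted b2 coefficient is what captures the nonlinear (threshold-like) dependence described in the abstract.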
60,728
Title: Numerical characteristics and parameter estimation of finite mixed generalized normal distribution Abstract: In this paper, a univariate finite mixed generalized normal distribution (MixGND) is proposed. First, we derive some probabilistic properties, including the hazard rate function, characteristic function, kurtosis, and skewness, for a mixture of two generalized normal distributions. In particular, we use a geometric analysis and numerical simulation technique to study the monotonicity of skewness and kurtosis with respect to the corresponding parameters. Then moment estimation and maximum likelihood estimation of the parameters are given. To apply the maximum likelihood estimation (MLE) method, an expectation conditional maximization (ECM) algorithm is proposed to estimate and numerically simulate the seven parameters of a two-component MixGND under both equal variance and heteroscedasticity. Using data sets of the S&P 500 and Shanghai Stock Exchange Composite Index (SSEC), we compare the goodness-of-fit performance of the mixture of two generalized normal distributions with that of the mixture of two normal distributions. The empirical analysis results show that the former better describes the heavy-tailed and leptokurtic characteristics of the daily returns.
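The density being mixed is the generalized normal, f(x) = beta / (2 alpha Gamma(1/beta)) * exp(-(|x - mu| / alpha)^beta), with beta = 2 recovering the normal and beta = 1 the Laplace distribution. A sketch of evaluating a two-component MixGND density (the ECM estimation step is not shown):

```python
import math

def gnd_pdf(x, mu, alpha, beta):
    # Generalized normal density: location mu, scale alpha, shape beta.
    c = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return c * math.exp(-((abs(x - mu) / alpha) ** beta))

def mixgnd_pdf(x, weights, params):
    # Finite mixture: sum of component densities weighted by mixing
    # proportions (weights must sum to 1).
    return sum(w * gnd_pdf(x, *p) for w, p in zip(weights, params))
```

Shape parameters beta < 2 in one or both components are what give the mixture its heavy-tailed, leptokurtic behavior on return data.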
60,878
Title: A Classification of Cactus Graphs According to their Domination Number Abstract: A set S of vertices in a graph G is a dominating set of G if every vertex not in S is adjacent to some vertex in S. The domination number, gamma(G), of G is the minimum cardinality of a dominating set of G. The authors proved in [A new lower bound on the domination number of a graph, J. Comb. Optim. 38 (2019) 721-738] that if G is a connected graph of order n >= 2 with k >= 0 cycles and l leaves, then gamma(G) >= ⌈(n - l + 2 - 2k)/3⌉. As a consequence of the above bound, gamma(G) = (n - l + 2(1 - k) + m)/3 for some integer m >= 0. In this paper, we characterize the class of cactus graphs achieving equality here, thereby providing a classification of all cactus graphs according to their domination number.
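The cited bound is easy to check on small graphs by brute force. In the sketch below, k is taken as the cycle rank m - n + 1 of a connected graph (consistent with the bound's statement for trees, k = 0), and gamma(G) is found by exhaustive search:

```python
from itertools import combinations

def domination_number(adj):
    # Brute-force gamma(G): smallest S such that every vertex is in S
    # or adjacent to a member of S. adj is an adjacency list.
    n = len(adj)
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            dominated = set(S)
            for v in S:
                dominated.update(adj[v])
            if len(dominated) == n:
                return size
    return n

def lower_bound(adj):
    # gamma(G) >= ceil((n - l + 2 - 2k)/3), with l leaves and
    # k = m - n + 1 independent cycles for a connected graph.
    n = len(adj)
    m = sum(len(a) for a in adj) // 2
    l = sum(1 for a in adj if len(a) == 1)
    k = m - n + 1
    return -((-(n - l + 2 - 2 * k)) // 3)   # ceiling division
```

On the path P4 and the cycle C4 the bound is tight (both have gamma = 2), which is the kind of equality case the paper classifies for cactus graphs.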
60,903
Title: A methodology for improving efficiency estimation based on conditional mix-GEE models in longitudinal studies Abstract: Estimating random effects accurately is crucial, since they reflect the subject-specific effects in longitudinal studies. In this paper, we develop a new methodology for improving the efficiency of fixed-effects and random-effects estimation based on conditional mix-GEE models. The advantage of our proposed approach is that the serial correlation over time is accommodated in estimating random effects. Meanwhile, the normality assumption for random effects is not required. In addition, according to the estimates of some mixture proportions, the true working correlation matrix can be identified. The feature of our proposed approach is that the estimators of the regression parameters are more efficient than those of the CCQIF, cmix-GEE, and CQIF approaches even if the working correlation structure is not correctly specified. In theory, we show that the proposed method yields a consistent and more efficient estimator than the random-effect estimator that ignores correlation information from longitudinal data. We establish asymptotic results for both fixed-effects and random-effects estimators. Simulation studies confirm the performance of our proposed method.
60,951
Title: On forecasting the intraday Bitcoin price using ensemble of variational mode decomposition and generalized additive model. Abstract: High-frequency Bitcoin price series are often non-linear and non-stationary. Forecasting the price of Bitcoin directly, or after transformation, using statistical models is subject to large errors. This paper presents an ensemble model using variational mode decomposition (VMD) and a generalized additive model (GAM) to forecast the intraday Bitcoin price. To evaluate the performance of the constructed model, it is compared with an ensemble of empirical mode decomposition (EMD) and GAM. The results show that the VMD-GAM model performs better than the EMD-GAM ensemble model in terms of the three evaluation metrics used (root mean square error, mean absolute percentage error, and bias).
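The three evaluation metrics named above have standard definitions, sketched here:

```python
import math

def rmse(actual, pred):
    # Root mean square error.
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def mape(actual, pred):
    # Mean absolute percentage error (actual values must be nonzero).
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

def bias(actual, pred):
    # Mean forecast error; positive values mean over-prediction.
    return sum(p - a for a, p in zip(actual, pred)) / len(actual)
```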
60,974
Title: Predicting future lifetime for mixture exponential distribution Abstract: This paper studies the prediction problem of future order statistics based on a mixture of exponential distributions. We suggest different scenarios to construct predictive intervals and point predictions for the future observations in two cases. In the first case, we assume fixed sample size, while in the second case, the sample size is assumed to be positive integer-valued random variable independent of the observations.
61,417
Title: A genetic algorithm applied to optimal allocation in stratified sampling Abstract: Over the last decades, many researchers have studied and proposed new methods for the solution of the multivariate optimal allocation problem, which can be posed with one of the following goals: (i) minimizing a weighted combination of relative variances with the sample size fixed, or (ii) minimizing the sample size such that the coefficients of variation are lower than or equal to previously fixed coefficients of variation. Taking each goal into account, the present article proposes two heuristic algorithms developed from the optimization technique called the biased random-key genetic algorithm. The computational experiments reported at the end of this work indicate that the proposed algorithms can be a good alternative for the solution of this problem, when compared with two methods from the literature.
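For a single survey variable, goal (i) has the closed-form Neyman allocation n_h proportional to N_h * S_h; the genetic algorithm is needed precisely because the multivariate problem lacks such a closed form. A sketch of the univariate baseline:

```python
def neyman_allocation(n, sizes, stds):
    # Classical Neyman allocation: allocate the fixed total sample n
    # across strata proportionally to N_h * S_h (stratum size times
    # stratum standard deviation). Returns unrounded allocations.
    weights = [N * S for N, S in zip(sizes, stds)]
    total = sum(weights)
    return [n * w / total for w in weights]
```

With several survey variables, each variable prefers a different allocation, which is the conflict the proposed heuristics resolve.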
61,429
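As a toy illustration of heuristic search for the allocation problem above (a plain genetic algorithm, not the paper's biased random-key variant), the sketch below minimizes the standard variance proxy sum_h (W_h * S_h)**2 / n_h under a fixed total sample size; the strata weights and standard deviations are made-up data, not from the article:

```python
import random

# Made-up strata: weights W_h (population shares) and std deviations S_h.
W = [0.5, 0.3, 0.2]
S = [10.0, 20.0, 5.0]
N_TOTAL = 100              # fixed total sample size (goal (i) in the abstract)
H = len(W)

def variance(alloc):
    # Variance proxy to minimize: sum_h (W_h * S_h)^2 / n_h
    return sum((W[h] * S[h]) ** 2 / alloc[h] for h in range(H))

def random_alloc():
    # At least 2 units per stratum; the remainder is spread at random.
    alloc = [2] * H
    for _ in range(N_TOTAL - 2 * H):
        alloc[random.randrange(H)] += 1
    return alloc

def crossover(a, b):
    child = [max(2, (a[h] + b[h]) // 2) for h in range(H)]
    while sum(child) < N_TOTAL:      # repair so the total is N_TOTAL again
        child[random.randrange(H)] += 1
    return child

def mutate(alloc):
    # Move one sample unit between two random strata.
    alloc = alloc[:]
    i, j = random.sample(range(H), 2)
    if alloc[i] > 2:
        alloc[i] -= 1
        alloc[j] += 1
    return alloc

def ga(pop_size=30, generations=200, seed=0):
    random.seed(seed)
    pop = [random_alloc() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=variance)
        elite = pop[: pop_size // 2]                 # keep the best half
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=variance)

best = ga()
print(best, round(variance(best), 3))
```

With these data the Neyman-style optimum is roughly n = [42, 50, 8] (allocation proportional to W_h * S_h), so the GA's best allocation should land near that.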
Title: Analyzing the educational goals, problems and techniques used in educational big data research from 2010 to 2018 Abstract: The purpose of this study is to review journal papers on educational big data research published from 2010 to 2018. A total of 143 papers were selected. The papers were characterized based on three dimensions: (a) educational goals; (b) educational problems addressed; and (c) big data analytical techniques used. A qualitative content analysis approach was conducted to develop a coding scheme for analyzing the selected papers. The results identified four types of educational goals, with a clear predominance of quality assurance. The identification of the most mentioned educational problems resulted in four main concerns: the lack of detecting student behavior modeling and waste of resources; inappropriate curricula and teaching strategies; oversights of quality assurance; and privacy and ethical issues. Concerning the most mentioned big data analytical techniques, the coding scheme revealed that the majority of the papers focused on the educational data mining technique followed by the learning analytics technique. The visual analytics technique was mentioned in a few papers. The results also indicated that the educational data mining technique is the most suitable technique to use for quality assurance and to provide potential solutions for the lack of detecting student behavior modeling and the waste of resources in institutions.
61,494
Title: Benchmarking within a DEA framework: setting the closest targets and identifying peer groups with the most similar performances Abstract: Data envelopment analysis (DEA) is widely used as a benchmarking tool for improving performance of organizations. For that purpose, DEA analyses provide information on both target setting and peer identification. However, the identification of peers is actually a by-product of DEA. DEA models seek a projection point of the unit under evaluation on the efficient frontier of the production possibility set, which is used to set targets, while peers are identified simply as the members of the so-called reference sets, which consist of the efficient units that determine the projection point as a combination of them. In practice, the selection of peers is crucial for benchmarking, because organizations need to identify a peer group in their sector or industry that represents actual performances from which to learn. In this paper, we argue that DEA benchmarking models should incorporate into their objectives criteria for the selection of suitable benchmarks among peers, in addition to considering the setting of appropriate targets (as usual). Specifically, we develop models having two objectives: setting the closest targets and selecting the most similar reference sets. Thus, we seek to establish targets that require the least effort from organizations for their achievement in addition to identifying peer groups with the most similar performances, which are potential benchmarks to emulate and improve.
61,627
Title: Corrigendum to: Independent Transversal Domination in Graphs [Discuss. Math. Graph Theory 32 (2012) 5-17] Abstract: In [Independent transversal domination in graphs, Discuss. Math. Graph Theory 32 (2012) 5-17], Hamid claims that if G is a connected bipartite graph with bipartition {X, Y } such that |X| <= |Y| and |X| = gamma(G), then gamma(it)(G) = gamma(G) + 1 if and only if every vertex x in X is adjacent to at least two pendant vertices. In this corrigendum, we give a counterexample for the sufficient condition of this sentence and we provide a right characterization. On the other hand, we show an example that disproves a construction which is given in the same paper.
61,651
Title: Detecting Specular Reflections and Cast Shadows to Estimate Reflectance and Illumination of Dynamic Indoor Scenes Abstract: The goal of Mixed Reality (MR) is to achieve a seamless and realistic blending between real and virtual worlds. This requires the estimation of reflectance properties and lighting characteristics of the real scene. One of the main challenges within this task consists in recovering such properties using a single RGB-D camera. In this article, we introduce a novel framework to recover both the posit...
63,407
Title: Spatial-L∞-Norm-Based Finite-Time Bounded Control for Semilinear Parabolic PDE Systems With Applications to Chemical-Reaction Processes Abstract: This article investigates a spatial-L∞-norm-based reliable bounded control problem for a class of nonlinear partial differential equation systems in a finite-time interval. The main novelties are reflected in the following aspects: 1) inspired by the sector-nonlinearity approach, the considered nonlinear...
63,425
Title: Event-Triggered Adaptive Fuzzy Output-Feedback Control for Nonstrict-Feedback Nonlinear Systems With Asymmetric Output Constraint Abstract: This article addresses the event-triggered adaptive fuzzy output-feedback control problem for a class of nonstrict-feedback nonlinear systems with asymmetric and time-varying output constraints, as well as unknown nonlinear functions. By designing a linear observer to estimate the unmeasurable states, a novel event-triggered adaptive fuzzy output-feedback control scheme is proposed. The barrier Ly...
63,426
Title: Granularity and Entropy of Intuitionistic Fuzzy Information and Their Applications Abstract: A granular structure of intuitionistic fuzzy (IF) information presents simultaneously the similarity and diversity of samples. However, this structural representation has rarely displayed its technical capability in data mining and information processing due to the lack of suitable constructive methods and semantic interpretation for IF information with regard to real data. To pursue better perfor...
63,427
Title: Computational Approaches to Detect Illicit Drug Ads and Find Vendor Communities Within Social Media Platforms Abstract: The opioid abuse epidemic represents a major public health threat to global populations. The role social media may play in facilitating illicit drug trade is largely unknown due to limited research. However, it is known that social media use among adults in the US is widespread; there is vast capability for online promotion of illegal drugs with delayed or limited deterrence of such messaging; and, further, general commercial sale applications provide safeguards for transactions but do not discriminate between legal and illegal sale transactions. These characteristics of the social media environment present challenges to the surveillance needed for advancing knowledge of online drug markets and the role they play in drug abuse and overdose deaths. In this paper, we present a computational framework developed to automatically detect illicit drug ads and communities of vendors. The SVM- and CNN-based methods for detecting illicit drug ads, and a matrix-factorization-based method for discovering overlapping communities, have been extensively validated on a large dataset collected from Google+, Flickr and Tumblr. Pilot test results demonstrate that our computational methods can effectively identify illicit drug ads and detect vendor communities with high accuracy. These methods hold promise to advance scientific knowledge surrounding the role social media may play in perpetuating the drug abuse epidemic.
63,509
Title: Sampled-Data Stabilization for Boolean Control Networks With Infinite Stochastic Sampling Abstract: Sampled-data state feedback control with stochastic sampling periods for Boolean control networks (BCNs) is investigated in this article. First, based on the algebraic form of BCNs, stochastic sampled-data state feedback control is applied to stabilize the considered system to a fixed point or a given set. Two kinds of distributions of stochastic sampling periods are considered. First, the distrib...
63,518
Title: Fully Adaptive-Gain-Based Intelligent Failure-Tolerant Control for Spacecraft Attitude Stabilization Under Actuator Saturation Abstract: This article investigates the attitude stabilization problem of a rigid spacecraft with actuator saturation and failures. Two neural network-based control schemes are proposed using anti-saturation adaptive strategies. To satisfy the input constraint, we design two controllers in a saturation function structure. Taking into account the modeling uncertainties, external disturbances, and adverse eff...
63,521
Title: Understanding Smartphone Users From Installed App Lists Using Boolean Matrix Factorization Abstract: Smartphones are changing humans’ lifestyles. Mobile applications (apps) on smartphones serve as entries for users to access a wide range of services in our daily lives. The apps installed on one’s smartphone convey lots of personal information, such as demographics, interests, and needs. This provides a new lens to understand smartphone users. However, it is difficult to compactly characterize a u...
63,522
Title: Targeted Bipartite Consensus of Opinion Dynamics in Social Networks With Credibility Intervals Abstract: This article investigates the targeted bipartite consensus problem of opinion dynamics in cooperative–antagonistic networks. Each agent in the network is assigned with a convergence set to represent a credibility interval, in which its opinion is trustworthy. The network topology is characterized by a signed switching digraph. The objective is to achieve the bipartite consensus targeted within the...
63,526
Title: 3-D Deconvolutional Networks for the Unsupervised Representation Learning of Human Motions Abstract: Data representation learning is one of the most important problems in machine learning. Unsupervised representation learning becomes meritorious as it has no necessity of label information with observed data. Due to the highly time-consuming learning of deep-learning models, there are many machine-learning models directly adapting well-trained deep models that are obtained in a supervised and end-...
63,532
Title: Brain EEG Time-Series Clustering Using Maximum-Weight Clique Abstract: Brain electroencephalography (EEG), the complex, weak, multivariate, nonlinear, and nonstationary time series, has been recently widely applied in neurocognitive disorder diagnoses and brain–machine interface developments. With its specific features, unlabeled EEG is not well addressed by conventional unsupervised time-series learning methods. In this article, we handle the problem of unlabeled EE...
63,536
Title: Laplacian Welsch Regularization for Robust Semisupervised Learning Abstract: Semisupervised learning (SSL) has been widely used in numerous practical applications where the labeled training examples are inadequate while the unlabeled examples are abundant. Due to the scarcity of labeled examples, the performances of the existing SSL methods are often affected by the outliers in the labeled data, leading to the imperfect trained classifier. To enhance the robustness of SSL ...
63,553
Title: A Transfer Learning-Based Multi-Instance Learning Method With Weak Labels Abstract: In multi-instance learning (MIL), labels are associated with bags rather than the instances in the bag. Most of the previous MIL methods assume that each bag has the actual label in the training set. However, from the process of labeling work, the label of a bag is always evaluated by the calculation of the labels obtained from a number of labelers. In the calculation, the weight of each labeler i...
63,554
Title: Multiperiod Location Models for Urban System Planning With Fuzzy Intercity Passenger Transportation Demands Abstract: The quality of transportation between cities depends on the accessibility of intercity passenger transportation (IPT) facilities, which is closely related to regional spatial development plans. Such plans aim to define an urban system (i.e., the level of hierarchy to assign to cities of the region under study) with a class of facilities corresponding to each level of hierarchy. We propose optimiza...
63,555
Title: Distributed Control of Time-Varying Signed Networks: Theories and Applications Abstract: Signed networks admitting antagonistic interactions among agents may polarize, cluster, or fluctuate in the presence of time-varying communication topologies. Whether and how signed networks can be stabilized regardless of their sign patterns is one of the fundamental problems in the network system control areas. To address this problem, this paper targets at presenting a self-appraisal mechanism ...
63,556
Title: Revealing Fine Structures of the Retinal Receptive Field by Deep-Learning Networks Abstract: Deep convolutional neural networks (CNNs) have demonstrated impressive performance on many visual tasks. Recently, they became useful models for the visual system in neuroscience. However, it is still not clear what is learned by CNNs in terms of neuronal circuits. When a deep CNN with many layers is used for the visual system, it is not easy to compare the structure components of CNNs with possib...
65,615
Title: A Probabilistic Niching Evolutionary Computation Framework Based on Binary Space Partitioning Abstract: Multimodal optimization problems have multiple satisfactory solutions to identify. Most of the existing works conduct the search based on the information of the current population, which can be inefficient. This article proposes a probabilistic niching evolutionary computation framework that guides the future search based on more sufficient historical information, in order to locate diverse and hi...
65,617
Title: Finite-Time Consensus Tracking for Incommensurate Fractional-Order Nonlinear Multiagent Systems With Directed Switching Topologies Abstract: This article investigates the problem of finite-time consensus tracking for incommensurate fractional-order nonlinear multiagent systems (MASs) with general directed switching topology. For the leader with bounded but arbitrary dynamics, a neighborhood-based saturated observer is first designed to guarantee that the observer’s state converges to the leader’s state in finite time. By utilizing a fu...
65,938
Title: An Approximate Neuro-Optimal Solution of Discounted Guaranteed Cost Control Design Abstract: The adaptive optimal feedback stabilization is investigated in this article for discounted guaranteed cost control of uncertain nonlinear dynamical systems. Via theoretical analysis, the guaranteed cost control problem involving a discounted utility is transformed to the design of a discounted optimal control policy for the nominal plant. The size of the neighborhood with respect to uniformly ulti...
65,939
Title: An Efficient Feedback Control Mechanism for Positive/Negative Information Spread in Online Social Networks Abstract: The wide availability of online social networks (OSNs) facilitates positive information spread and sharing. However, the high autonomy and openness of the OSNs also allow for the rapid spread of negative information, such as unsubstantiated rumors and other forms of misinformation that often elicit widespread public cognitive misleads and huge economic losses. Therefore, how to effectively control...
65,941
Title: Augmented reality in STEM education: a systematic review Abstract: This study aimed to systematically investigate the studies in which augmented reality (AR) was used to support Science, Technology, Engineering and Mathematics (STEM) education. In this framework, the general status of AR in STEM education was presented and its advantages and challenges were identified. The study investigated 42 articles published in journals indexed in the SSCI database and deemed suitable for the purposes of this research. The obtained data were analyzed by two researchers using the content analysis method. It was found that studies in this field have become more significant and intensive in recent years and that these studies were generally carried out at schools (class, laboratory, etc.) using marker-based AR applications. It was concluded that mostly K-12 students were used as samples and quantitative methods were selected. The advantages of AR-STEM studies were summarized and examined in detail in four sub-categories: "contribution to learner", "educational outcomes", "interaction" and "other advantages". On the other hand, some challenges were identified, such as teacher resistance and technical problems.
66,311
Title: Design and development of network simulator module for distributed mobility management protocol Abstract: Distributed Mobility Management (DMM) was recently presented by the IETF to overcome the limitations of conventional Centralized Mobility Management (CMM) protocols. It is developed based on the network-based CMM protocol, Proxy Mobile IPv6 (PMIPv6). DMM tackles the issue of relying on a single entity by decoupling the control and data planes and distributing the functionalities of the centralized entity in CMM protocols. To study and examine the performance of the DMM protocol, different evaluation approaches can be utilized. Although test-bed implementation is the more realistic approach, its drawbacks, such as high cost, complexity and poor scalability, make the utilization of a test-bed extremely difficult. Alternatively, simulation is an inexpensive and effective approach for testing various complex scenarios with scalability features. In this context, this paper presents a DMM module for Network Simulator-2 (NS-2) implementing the DMM entities, functionalities and operation. The paper also evaluates the performance of the DMM protocol compared to the CMM protocol in different scenarios. The analytical evaluation of handover latency and session recovery time is employed to ensure the validity and reliability of the DMM module. The results show that the developed module is valid and reliable, as the theoretical results are approximately similar to the results obtained from simulation.
66,321
Title: Increasing flexibility and productivity in Industry 4.0 production networks with autonomous mobile robots and smart intralogistics Abstract: Manufacturing flexibility improves a firm's ability to react in a timely manner to customer demands and to increase production system productivity without incurring excessive costs and expending an excessive amount of resources. The emerging technologies in the Industry 4.0 era, such as cloud operations or industrial Artificial Intelligence, allow for new flexible production systems. We develop and test an analytical model for a throughput analysis and use it to reveal the conditions under which the autonomous mobile robot (AMR)-based flexible production networks are more advantageous as compared to the traditional production lines. Using a circular loop among workstations and inter-operational buffers, our model allows congestion to be avoided by utilizing multiple crosses and analyzing both the flow and the load/unload phases. The sensitivity analysis shows that the cost of the AMRs and the number of shifts are the key factors in improving flexibility and productivity. The outcomes of this research promote a deeper understanding of the role of AMRs in Industry 4.0-based production networks and can be utilized by production planners to determine optimal configurations and the associated performance impact of the AMR-based production networks as compared to traditionally balanced lines. This study supports decision-makers in understanding how AMRs in process-industry production systems can improve manufacturing performance in terms of productivity, flexibility, and costs.
66,696
Title: On the generation of factorial designs with minimum level changes Abstract: Factorial experiments, wherein two or more factors each at two or more levels are used simultaneously, have profound applications in many fields of agricultural and allied sciences. These experiments allow studying the effect of each individual factor as well as the effects of interactions between factors on the response variable. In order to avoid any kind of bias in the estimation of these effects, it is always advisable that the order of execution of runs in a factorial design is random. However, experimentation under factorial setup may become expensive, time-consuming and difficult due to a large number of changes in factor levels induced by randomization. Adoption of factorial designs with minimum number of changes in the factor levels may prove to be useful in such situations in terms of cost and time. Present paper describes online software developed using client-server architecture for generation of cost effective factorial designs with minimum number of changes in the factor levels viz., webFMC and an R package named FMC.
66,752
Title: Bounding the final rank during a round robin tournament with integer programming Abstract: This article is mainly motivated by the urge to answer two kinds of questions regarding the Bundesliga, which is Germany's primary football (soccer) division having the highest average stadium attendance worldwide: "At any point in the season, what is the lowest final rank a certain team can achieve?" and "At any point in the season, what is the highest final rank a certain team can achieve?". Although we focus on the Bundesliga in particular, the integer programming formulations we introduce to answer these questions can easily be adapted to a variety of other league systems and tournaments.
67,030
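The two questions posed in the abstract above can be answered by brute force when only a handful of matches remain; the sketch below enumerates every remaining outcome (3 points for a win, 1 for a draw) for a hypothetical four-team league and reports the best and worst final rank a team can still reach. All standings and fixtures here are invented; the paper's integer programs answer the same question without enumeration, which is what makes Bundesliga-scale instances tractable:

```python
from itertools import product

points = {"A": 10, "B": 9, "C": 8, "D": 5}          # invented current standings
remaining = [("A", "B"), ("C", "D"), ("A", "C"), ("B", "D")]  # invented fixtures

def final_rank_range(team):
    best, worst = len(points), 1
    # Enumerate all 3^k outcomes of the k remaining matches.
    for outcome in product(("home", "draw", "away"), repeat=len(remaining)):
        pts = dict(points)
        for (h, a), res in zip(remaining, outcome):
            if res == "home":
                pts[h] += 3
            elif res == "away":
                pts[a] += 3
            else:
                pts[h] += 1
                pts[a] += 1
        # Rank = 1 + number of teams with strictly more points (ties share a rank).
        rank = 1 + sum(p > pts[team] for t, p in pts.items() if t != team)
        best, worst = min(best, rank), max(worst, rank)
    return best, worst

print(final_rank_range("D"))
```

Even the bottom team "D" can still finish first here (win both its matches while "A" draws "B" and loses to "C"), which is exactly the kind of non-obvious answer the article's formulations are designed to certify.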
Title: An analytical framework for distributed and centralized mobility management protocols Abstract: Proxy Mobile IPv6 (PMIPv6) maintains the mobility management of mobile users without involving them in the signaling of the mobility process. The main limitations of PMIPv6 are high latency and packet loss. Consequently, the IETF has addressed these limitations by standardizing the Fast Handover for Proxy Mobile IPv6 (PFMIPv6) protocol. The whole processes of the PMIPv6 and PFMIPv6 protocols, including mobility management and connectivity needs, are based on a centralized and static mobility anchor. Therefore, the centralized anchor usually suffers from enormous burdens and hence degradation in the performance, scalability, and reliability of the network. Lately, the Distributed Mobility Management (DMM) solution was introduced based on PMIPv6 to tackle the issue of relying on a single entity. Analyzing and investigating the performance of these centralized and distributed solutions depends on traffic characteristics and the user mobility model. Accordingly, through these two factors we propose an analytical framework to evaluate the handover performance of PMIPv6, PFMIPv6 and DMM in a vehicular environment. Our analysis and experimental validation are very significant to determine the impacts of different network parameters on the handover performance of these protocols, to facilitate decision making on which analytical framework must be adopted in a network. Analytical results demonstrate that there is a trade-off between network parameters and handover performance metrics. PFMIPv6 is the most suited protocol for low- to high-mobility scenarios in terms of handover performance.
67,083
Title: A hybrid modified step Whale Optimization Algorithm with Tabu Search for data clustering Abstract: Clustering is a popular data analysis tool that groups similar data objects into disjoint clusters. Clustering algorithms have major drawbacks, such as sensitivity to initialization and getting trapped in local optima. Swarm Intelligence (SI) methods have been combined with clustering techniques to enhance their performance. Despite these improvements, some SI methods, such as Particle Swarm Optimization (PSO) and its rivals, have their own drawbacks, such as low convergence rates and generating low-quality solutions. This is due to storing a single best solution about the solution space, which can lead swarm members to local optima. The Whale Optimization Algorithm (WOA) is a recent SI method, which has been proven a global optimizer over multiple optimization problems. Unfortunately, WOA uses the same concept as PSO: it uses the best solution to reposition swarm members. To overcome this problem, WOA needs to store multiple best solutions about the solution space. Tabu Search (TS) is a meta-heuristic method, which uses memory components to explore and exploit the search space. In this paper, we propose combining WOA with TS (WOATS) for data clustering. To assess the efficiency of WOATS, it has been applied in data clustering. WOATS uses an objective function inspired by partitional clustering to maintain the quality of clustering solutions. WOATS was tested over multiple real-life datasets whose sizes vary from small to large. WOATS was able to find high-quality centers in a small number of iterations regardless of the size of the dataset, which clarifies its ability to cover the solution space efficiently. The generated clusters were evaluated according to two cluster validity criteria: the Davies-Bouldin Index and the Dunn Index. According to these criteria, WOATS showed its superiority over multiple recent original and hybrid SI methods for different datasets. Results obtained by WOATS were better than the results of the other methods by large margins with respect to the validity criteria, which confirms the efficiency of WOATS. (C) 2020 The Authors. Production and hosting by Elsevier B.V. on behalf of King Saud University.
67,262
Title: Several two-component mixture distributions for count data Abstract: The finite mixture model is a flexible approach for modeling multimodal data. Multimodality can be present in the data when the data constitute several subpopulations. In this study, several two-component mixture distributions for count data are proposed and described to cater for the bimodality issue. The distributions considered in developing the mixture distributions are the Poisson (P), Poisson Lindley (PL), negative binomial (NB) and negative binomial Lindley (NBL), and altogether a total of ten two-component mixture distributions are obtained. The maximum likelihood estimators for each mixture distribution are obtained by employing the L-BFGS-B method. A comparison study based on a graphical approach is conducted to investigate the effect of the mixing proportion on the resulting mixture distribution, based on different shapes of the probability curve and positions of the mode. A simulation study is conducted to investigate the performance of each mixture distribution in fitting data that come from two subpopulations with different mean and dispersion values. Three mixture models, P-NB, PL-NB and NB-NB, are most commonly identified as adequate in describing the simulated data with various types of mixing properties. These three distributions are considered the most flexible and are thus suggested for real data applications.
67,320
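As a minimal sketch of the simplest case among the mixtures above, the snippet below evaluates a two-component Poisson mixture pmf, P(X = k) = p * Pois(k; lam1) + (1 - p) * Pois(k; lam2), and checks that well-separated component means produce the bimodality the study targets; the mixing proportion and component means are illustrative, not taken from the paper:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    # Poisson probability mass: e^{-lam} * lam^k / k!
    return exp(-lam) * lam ** k / factorial(k)

def mixture_pmf(k, p, lam1, lam2):
    # Two-component mixture with mixing proportion p.
    return p * poisson_pmf(k, lam1) + (1 - p) * poisson_pmf(k, lam2)

# A well-separated mixture (lam1 = 2, lam2 = 15, p = 0.4) is bimodal:
pmf = [mixture_pmf(k, 0.4, 2.0, 15.0) for k in range(40)]
modes = [k for k in range(1, 39) if pmf[k] > pmf[k - 1] and pmf[k] > pmf[k + 1]]
print(modes)   # two local modes, one near each component mean
```

Shrinking the gap between the component means (or pushing p toward 0 or 1) merges the two humps into one, which is the mixing-proportion effect the paper's graphical comparison explores.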
Title: The business model of intelligent manufacturing with Internet of Things and machine learning Abstract: To establish a business model of intelligent manufacturing, the sequence Generative Adversarial Network (SeqGAN) was used to optimise the Back Propagation (BP) neural network algorithm improved by multi-objective Genetic Algorithm to propose the sequence Generative Adversarial Network-Genetic Algorithm Back Propagation Algorithm (SeqGAN-GABP). Meanwhile, the Elman algorithm was optimised by the SeqGAN model to propose the SeqGAN-Elman algorithm. The algorithms were constructed and trained and were applied to the Internet of Things platforms. The results showed that the SeqGAN-GABP algorithm outperforms the SeqGAN-Elman algorithm in terms of minimal error, fitting accuracy, training time and internal memory usage.
67,329
Title: Developing an online VR tool for participatory evaluation of animal vocal behaviours Abstract: Animal vocal behaviour represents a certain meaning, which is the explicit expression of animal emotions, needs and communication. Different from human language, animal vocal behaviour is very abstract, and the learning material of animal vocal behaviour is also more difficult to obtain. It is hard for college students to recognize vocal behaviours of animals accurately. Using virtual reality (VR) to describe them is an innovation in outdoor practice. Zoology teaching can be further enhanced by VR online application. However, current online VR application is mostly restricted as visualization tools, and the application in virtual sound environment is rarely supported. An online VR demonstrator tool was developed by integrating affordable visualization and auralization components. The tool could be published in mainstream web browsers with users' own devices. Sichuan golden monkey (Rhinopithecus roxellana) of Shennongjia Nature Reserve in Hubei province, China was used as the case site to create the virtual environment. A participatory evaluation was performed to test and evaluate the effect of the online VR tool. The result was analysed and discussed on the usability and potential of the VR tool in animal vocal behaviours.
67,360
Title: A blockchain-enabled e-learning platform Abstract: The properties of a blockchain such as immutability, provenance, and peer-executed smart contracts could bring a new level of security, trust, and transparency to e-learning. In this paper, we introduce our proof-of-concept blockchain-based e-learning platform developed to increase transparency in assessments and facilitate curriculum personalisation in a higher education context. Most notably, our platform could automate assessments and issue credentials. We designed it to be pedagogically neutral and content-neutral in order to showcase the benefits of a blockchain back-end to end users such as students and teaching staff. Our evaluation suggests that our platform could increase trust in online education providers, assessment procedures, education history and credentials.
67,406
Title: A modified biogeography-based optimization algorithm with guided bed selection mechanism for patient admission scheduling problems Abstract: One of the complex combinatorial optimization problems is the patient admission scheduling problem (PASP), which is concerned with assigning the patients arriving at a hospital to available beds to receive medical services. The objective of PASP is to maximize patients' comfort, medical treatment effectiveness, and hospital utilization. This research proposes a new approach based on the Biogeography-Based Optimization (BBO) algorithm for tackling the PASP. BBO was inspired by the idea of species migration between different habitats. Due to the complexity of the search space in PASP, the original BBO has been equipped with a guided bed selection (GBS) mechanism in order to improve its results and performance, and the operator capabilities of BBO are modified to improve its diversity. These three variants of BBO are compared with each other using six de facto data sets that are widely used in the literature, with varying sizes and complexity. The modified BBO is able to yield better results than the other variants. In a nutshell, this paper provides a new PASP method that can be considered an efficient alternative for the scheduling domain to be used by other researchers. (C) 2020 The Author. Production and hosting by Elsevier B.V. on behalf of King Saud University.
67,448
Title: Algorithmic Aspects of the Independent 2-Rainbow Domination Number and Independent Roman {2}-Domination Number Abstract: A 2-rainbow dominating function (2RDF) of a graph G is a function g from the vertex set V(G) to the family of all subsets of {1, 2} such that for each vertex v with g(v) = ∅ we have ∪_{u ∈ N(v)} g(u) = {1, 2}. The minimum of g(V(G)) = Σ_{v ∈ V(G)} |g(v)| over all such functions is called the 2-rainbow domination number. A 2RDF g of a graph G is independent if no two vertices assigned non-empty sets are adjacent. The independent 2-rainbow domination number is the minimum weight of an independent 2RDF of G. A Roman {2}-dominating function (R2DF) f : V → {0, 1, 2} of a graph G = (V, E) has the property that for every vertex v ∈ V with f(v) = 0 either there is u ∈ N(v) with f(u) = 2 or there are x, y ∈ N(v) with f(x) = f(y) = 1. The weight of f is the sum f(V) = Σ_{v ∈ V} f(v). An R2DF f is called independent if no two vertices assigned non-zero values are adjacent. The independent Roman {2}-domination number is the minimum weight of an independent R2DF on G. We first show that the decision problem for computing the independent 2-rainbow (respectively, independent Roman {2}-domination) number is NP-complete even when restricted to planar graphs. Then, we give a linear algorithm that computes the independent 2-rainbow domination number as well as the independent Roman {2}-domination number of a given tree, answering problems posed in [M. Chellali and N. Jafari Rad, Independent 2-rainbow domination in graphs, J. Combin. Math. Combin. Comput. 94 (2015) 133-148] and [A. Rahmouni and M. Chellali, Independent Roman {2}-domination in graphs, Discrete Appl. Math. 236 (2018) 408-414]. Finally, we give a linear algorithm that computes the independent 2-rainbow domination number of a given unicyclic graph.
67,632
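The definitions above lend themselves to a small brute-force check. The sketch below is our own illustrative code, not the paper's linear-time tree algorithm; the function name and graph encoding are assumptions. It enumerates all assignments of {}, {1}, {2}, {1, 2} and returns the independent 2-rainbow domination number of a small graph:

```python
from itertools import product

def independent_2rainbow_number(n, edges):
    """Brute-force the independent 2-rainbow domination number of a small
    graph on vertices 0..n-1 (exponential; for illustration only)."""
    nbrs = [set() for _ in range(n)]
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    labels = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]
    best = None
    for g in product(labels, repeat=n):
        # 2RDF condition: every vertex with g(v) = {} must see both colors
        if any(not g[v] and set().union(*[g[u] for u in nbrs[v]]) != {1, 2}
               for v in range(n)):
            continue
        # independence: no two vertices assigned non-empty sets are adjacent
        if any(g[u] and g[v] for u, v in edges):
            continue
        weight = sum(len(s) for s in g)
        if best is None or weight < best:
            best = weight
    return best
```

On the path P3 this returns 2, e.g. by assigning {1, 2} to the middle vertex and ∅ to the endpoints.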
Title: Smart occupancy detection for road traffic parking using deep extreme learning machine Abstract: Predicting the location of parking is a long-standing problem of great importance in our daily life. In this paper, artificial neural networks are used to predict the parking location, which helps drivers settle on a reasonable area for stopping. This approach ultimately adds to the smoothness and safety of traffic on the roads, resulting in a decrease in turbulence. By using a Deep Extreme Learning Machine (DELM), reliability can be achieved with a marginal error rate. In this article, we propose Car Parking Space Prediction (CPSP) to address the dilemma of parking space for vehicles, using deep neural networks in contrast with feedforward networks trained by backward propagation. The results reveal that the deep extreme learning machine neural network attains the highest accuracy rate with 60% of the data used for training (21431 samples) and 40% for test and validation (14287 samples). The proposed DELM achieves the highest precision rate of 91.25%. Simulation results validate the prediction effectiveness of the proposed DELM strategy. (C) 2020 The Author. Production and hosting by Elsevier B.V. on behalf of King Saud University.
67,720
Title: Isolation branching: a branch and bound algorithm for the k-terminal cut problem Abstract: In the k-terminal cut problem, we are given a graph with edge weights and k distinct vertices called “terminals.” The goal is to remove a minimum weight collection of edges from the graph such that there is no path between any pair of terminals. The k-terminal cut problem is NP-hard. The k-terminal cut problem has been extensively studied and a number of algorithms have been devised for it. Most are approximation algorithms. There are also fixed-parameter tractable algorithms, but none have been shown empirically practical. It is also possible to apply implicit enumeration using any integer programming formulation of the problem and solve it with a branch-and-bound algorithm. Here, we present a branch-and-bound algorithm for the k-terminal cut problem which does not rely on an integer programming formulation. Our algorithm employs “minimum isolating cuts” and, for this reason, we call our branch-and-bound algorithm Isolation Branching. In an empirical experiment, we compare the performance of Isolation Branching to that of a branch-and-bound applied to the strongest known integer programming formulation of k-terminal cut. The integer programming branch-and-bound procedure is implemented with Gurobi, a commercial mixed-integer programming solver. We compare the performance of the two approaches for real-world instances and simulated data. The results on real data indicate that Isolation Branching, coded in Python, runs an order of magnitude faster than Gurobi for problems of sizes of up to tens of thousands of vertices and hundreds of thousands of edges. Our results on simulated data also indicate that Isolation Branching scales more effectively. 
Though we primarily focus on creating a practical tool for k-terminal cut, as a byproduct of our algorithm we prove that the complexity of Isolation Branching is fixed-parameter tractable with respect to the size of the optimal solution, thus providing an alternative, constructive, and somewhat simpler proof of this fact.
67,752
Title: Learning and Dynamic Decision Making Abstract: Humans make decisions in dynamic environments (increasingly complex, highly uncertain, and changing situations) by searching for potential alternatives sequentially over time, to determine the best option at a precise moment. Surprisingly, the field of behavioral decision making has little to offer in terms of theoretical principles and practical guidelines on how people make decisions in dynamic situations. My research program aims to fill in this gap by developing theoretical understandings of decision processes as well as practical demonstrations of how these theoretical developments can improve human dynamic decision making. Throughout my research career, I have helped create, test, and improve a general theory of dynamic decision making, instance-based learning theory, IBLT. The methods I have used to contribute to IBLT are (1) laboratory experiments that rely on dynamic games in which humans make choices over time and space, individually and in teams, and from which we extrapolate robust phenomena and behavioral insights; and (2) computational, actionable cognitive models, which specify the decision-making process and the cognitive mechanisms involved into a computational algorithm. The combination of these methods spawned novel applications in areas such as cybersecurity, phishing, climate change, and human-machine interactions. In this paper, I will take you through my own intellectual exploratory experience of computational modeling of human decision processes, and how the integration of experimental work and cognitive modeling helped in discovering and uncovering the field of dynamic decision making.
67,776
Title: Robust nonparametric derivative estimator Abstract: In this paper, a robust nonparametric derivative estimator is proposed to estimate the derivative function of nonparametric regression when the data contain noise and have curves. A robust estimation of the derivative function is important for understanding trend analysis and conducting statistical inferences. The methods for simultaneously assessing the functional relationship between response and covariates as well as estimating its derivative function without trimming noisy data are quite limited. Our robust nonparametric derivative functions were developed by constructing three weights and then incorporating them into kernel-smoothing. Various simulation studies were conducted to evaluate the performance of our approach and to compare our proposed approach with other existing approaches. The advantage of our robust nonparametric approach is demonstrated using epidemiology data on mortality and temperature in Seoul, South Korea.
67,807
Title: Simulating survival data with predefined censoring rates under a mixture of non-informative right censoring schemes Abstract: Simulation studies have been routinely used to validate the performances of statistical methods for censored survival data under various scenarios. Our previous work proposed an integrated approach of simulating right censored survival data for proportional hazards models given a set of arbitrarily distributed baseline covariates and predefined censoring rates. However, the limitations are that all study subjects are assumed to be enrolled at the same time and there is no study ending time. We extended the previous work to accommodate the more realistic scenario under which study subjects are enrolled at a constant rate during an enrollment period and are then followed until one of the following events occurs: (a) the event of interest (e.g., death or occurrence of disease); (b) the end of the study period; (c) early withdrawal due to random censoring events, whichever comes first. To demonstrate the application of the proposed approach in practice, we generated censored survival data and assessed the impact of several factors (the magnitude of confounding, size of treatment effect, the sine distance between coefficient vectors of confounders in the treatment and outcome models, and censoring rate) on the potential bias of propensity score matching estimators in estimating conditional and marginal hazards ratios.
67,978
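The enrollment-and-censoring scheme just described can be sketched as follows. This is a rough illustration under assumed distributions (exponential event and withdrawal times, constant-rate uniform enrollment), not the authors' actual data-generating algorithm, and all names are our own:

```python
import random

def simulate_censored(n, hazard, enroll_period, study_end, withdraw_rate, seed=1):
    """Simulate right-censored survival data under the three competing
    mechanisms described above: the event of interest, administrative
    censoring at the study end, and random early withdrawal.
    Returns (observed_time, event_indicator) pairs."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        entry = rng.uniform(0.0, enroll_period)      # constant enrollment rate
        t_event = rng.expovariate(hazard)            # exponential event time
        t_withdraw = rng.expovariate(withdraw_rate)  # random censoring
        t_admin = study_end - entry                  # administrative censoring
        obs = min(t_event, t_withdraw, t_admin)
        data.append((obs, obs == t_event))
    return data
```

The event indicator is True when the event of interest precedes both censoring mechanisms; tuning `hazard` and `withdraw_rate` changes the realized censoring rate.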
Title: BYOD Policy Compliance: Risks and Strategies in Organizations Abstract: The proliferation of mobile devices has brought the Bring Your Own Device (BYOD) trend to organizations, along with significant challenges when employees fail to comply with security policies. Previous reviews in this research area have focused solely on the technical issues surrounding BYOD implementation while leaving out human behavior in complying with security policies, a major contributing factor to security vulnerabilities. In this paper, we systematically review the literature to gather evidence related to security risks, the challenges posed by employees' security policy noncompliance behavior, and mitigation strategies to address them. The risks are mapped according to the People, Process and Technology (PPT) Model. The review reports that research on security policy compliance in a BYOD environment remains scarce, which makes this review a novel contribution to research on human security behavior in Information Systems (IS). In addition, open research problems and future research directions are presented in the paper.
68,040
Title: The bi-criteria seeding algorithms for two variants of k-means problem Abstract: The k-means problem is a classic and important problem in computer science and machine learning, and many variants of it have been proposed for different settings, such as the k-means problem with penalties, the spherical k-means clustering, and so on. Since the k-means problem is NP-hard, the study of approximation algorithms for it is very active. In this paper, we apply a bi-criteria seeding algorithm to both the k-means problem with penalties and the spherical k-means problem, and improve upon the performance guarantees given by the k-means++ algorithm for these two problems.
68,179
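For context, the k-means++ seeding that the paper improves upon can be sketched in a few lines. This is the standard D² sampling baseline, not the bi-criteria algorithm of the paper, and the encoding of points as tuples is our own choice:

```python
import random

def kmeans_pp_seed(points, k, seed=0):
    """Standard k-means++ seeding: the first center is uniform, every
    later center is drawn with probability proportional to its squared
    distance to the nearest center already chosen (D^2 sampling)."""
    rng = random.Random(seed)

    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    centers = [rng.choice(points)]
    while len(centers) < k:
        weights = [min(d2(p, c) for c in centers) for p in points]
        total = sum(weights)
        if total == 0:  # every point coincides with a chosen center
            centers.append(rng.choice(points))
            continue
        r = rng.random() * total
        acc = 0.0
        for p, w in zip(points, weights):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers
```

Sampling proportionally to squared distance makes well-separated clusters likely to each receive a center, which is the source of the k-means++ guarantee.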
Title: Total Coloring of Claw-Free Planar Graphs Abstract: A total coloring of a graph is an assignment of colors to both its vertices and edges so that adjacent or incident elements acquire distinct colors. Let Δ(G) be the maximum degree of G. Vizing conjectured that every graph has a total (Δ + 2)-coloring. This Total Coloring Conjecture remains open even for planar graphs, for which the only open case is Δ = 6. Claw-free planar graphs have Δ ≤ 6. In this paper, we prove that the Total Coloring Conjecture holds for claw-free planar graphs.
68,258
Title: Optimal UPQC location in power distribution network via merging genetic and dragonfly algorithm Abstract: Nowadays, flexible alternating current transmission system devices, particularly the unified power quality conditioner (UPQC), are found to have significant impacts on the stability of growing power systems. Several intelligent optimization methods have been exploited to position the UPQC in power systems. However, those optimization models fail to offer sufficient reliability and feedback signals. Hence, this paper presents a power quality improvement model based on a hybrid algorithm that links the genetic algorithm (GA) and the DragonFly algorithm (DA). In the current work, the optimal solution is determined based on the crossover operation of GA within DA, and hence the adopted model is named the Genetically Modified DA algorithm. Moreover, the proposed model discovers the optimal location of the UPQC device by focusing on the UPQC cost, power losses, and voltage stability index. The proposed model is evaluated on the IEEE 69 and IEEE 33 test bus systems. In addition, the performance of the implemented model is compared against other conventional models such as artificial bee colony, firefly, grey wolf optimization, whale optimization algorithm, worst solution linked whale optimization algorithm update (WS-WU), GA, and DA. The effectiveness of the proposed model is demonstrated through performance and convergence analysis.
68,276
Title: Development of retail network on the example of three regional towns comparison in West Slovakia Abstract: The main aim of this paper is to show the specifics of the retail network of three regional towns in the Slovak Republic (Nitra, Trnava and Zilina) and to answer the following question: What value of the Population to the Admissible Floor Space (PAFS) indicator will cause saturation of the retail network? The situation of the retail network of the examined towns in 2015 was compared using standard methods based on indicators from detailed passportisation of the retail facilities in the terrain. The indicators are: number and assortment structure of the stores, sales area size, PAFS, and concentration index at the level of the town districts. Based on the results of the comparison, regularities in the formation of the spatial structure of the retail amenities of these Slovak towns were identified and their features were pointed out. According to the survey results, the size of the sales area is the clearest indicator for evaluating the retail network. The sales area size captures the relevant changes in the best detail and appears to be the most suitable tool for the analysis; in this case, however, more thorough and detailed passportisation is needed. A saturation stage was identified only in Nitra, where it occurred after 2010. This process has not yet taken place in the other two regional towns. A qualitatively new stage, described as the gradual "purification" of the retail network from excessive sales area, is expected to start in Nitra.
68,299
Title: Optimistic Bias and Exposure Affect Security Incidents on Home Computer Abstract: Individuals who are optimistically biased believe that they are less likely to experience an adverse event, or more likely to experience a positive event, than their peers. This paper examines whether individuals who feel less likely to experience a breach on their home computers (cyber optimistic bias) have more computer and security education, apply more protective measures, and experience more security incidents. The study also explored whether spending more time on the computer or visiting more untrusted sites resulted in more security breaches. College students responded to a survey that measured cyber optimistic bias, technical optimism, security education, protective measures, and security incidents. Findings showed that as one's belief that they will become a cyber-victim increases, they report experiencing more security incidents and visiting more untrusted sites. In addition, visits to untrusted web sites were related to increased security incidents reported on home computers, while time spent on the computer was not related to experiencing security incidents. Finally, computer/security education had no relation to cyber optimistic bias.
68,306
Title: Neural network with adaptive evolutionary learning and cascaded support vector machine for fault localization and diagnosis in power distribution system Abstract: Fault diagnosis and classification in electric power systems are necessary to maintain protected operation of the power system. The classification of these signals is complex due to the large dataset, computational complexity, and limited real-time performance. This paper focuses on the detection and classification of electric power transmission faults using a neural network with adaptive evolutionary learning and a cascade support vector machine (SVM) with wavelet descriptors of the signal to overcome such limitations. Initially, the wavelet-decomposed fault signals are extracted from the simulated signals. The received signal consists of normal signals and fault signals such as transient, sag, and swell signals. The wavelet descriptors of different datasets are fed to the cascade SVM for better classification. The experiments in this paper show that the cascade SVM provides good generalization and much faster speed compared with traditional SVMs.
68,337
Title: Data analytics for the sustainable use of resources in hospitals: Predicting the length of stay for patients with chronic diseases Abstract: Various factors are behind the forces that drive hospitals toward more sustainable operations. Hospitals contracting with Medicare, for instance, are reimbursed for the procedures performed, regardless of the number of days that patients stay in the hospital. This reimbursement structure has incentivized hospitals to use their resources (such as their beds) more efficiently to maximize revenues. One way hospitals can improve bed utilization is by predicting patients’ length of stay (LOS) at the time of admission, the benefits of which extend to employees, communities, and the patients themselves. In this paper, we employ a data analytics approach to develop and test a deep learning neural network to predict LOS for patients with chronic obstructive pulmonary disease (COPD) and pneumonia. The theoretical contribution of our effort is that it identifies variables related to patients’ prior admissions as important factors in the prediction of LOS in hospitals, thereby revising the current paradigm in which patients’ medical histories are rarely considered for the prediction of LOS. The methodological contributions of our work include the development of a data engineering methodology to augment the data sets, prediction of LOS as a numerical (rather than a binary) variable, temporal evaluation of the training and validation data sets, and a significant improvement in the accuracy of predicting LOS for COPD and pneumonia inpatients. Our evaluations show that variables related to patients’ previous admissions are the main driver of the deep network’s superior performance in predicting the LOS as a numerical variable. 
Using the assessment criteria introduced in prior studies (i.e., ±2 days and ±3 days tolerance), our models are able to predict the length of hospital stay with 86 % and 91 % accuracy for the COPD data set, and with 74 % and 85 % accuracy for the pneumonia data set. Hence, our effort could help hospitals serve a larger number of patients with a fixed amount of resources, thereby reducing their environmental footprint while increasing their revenue, as well as their patients’ satisfaction.
68,370
Title: A toll evasion recognition method based on Gaussian mixture clustering Abstract: The phenomenon of toll evasion on expressways is widespread, and recognition of evasion has become a necessary means to reduce property losses. In view of this situation, a clustering method based on a Gaussian mixture model (GMM) of load weight is applied to identify toll evasion by transportation vehicles. Firstly, based on the Kolmogorov-Smirnov and Quantile-Quantile plot tests of the load weight in different driving cycles, it is established that the load in a certain driving cycle approximately follows a Gaussian mixture distribution (GMD) and that there are significant differences among load distributions in different driving cycles. Then, the load of historical vehicles in a certain driving cycle is clustered by the GMM. The Expectation-Maximization (EM) algorithm is used to calculate the parameters of the GMM. Based on the clustering results, the GMD of the load in a certain driving cycle is made explicit. Finally, according to the criterion of the Gaussian distribution, we obtain a reasonable load interval and distinguish toll evasion by transportation vehicles. In addition, the concrete practical procedures of the toll evasion recognition method are discussed. Empirical results demonstrate that satisfactory results are achieved when the proposed method is applied to the toll data of six-axle trucks at the station south of Baoding City in Hebei province, China.
68,386
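The EM step described above can be illustrated with a tiny one-dimensional fitter. This is a didactic sketch only, not the paper's implementation; the deterministic spread initialization is our own assumption:

```python
import math

def em_gmm_1d(xs, k=2, iters=200):
    """Fit a k-component 1-D Gaussian mixture by expectation-maximization.
    Returns (mixing weights, means, standard deviations)."""
    lo, hi = min(xs), max(xs)
    mu = [lo + (j + 0.5) * (hi - lo) / k for j in range(k)]  # spread init
    sigma = [max(1e-6, (hi - lo) / k)] * k
    pi = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of component j for each observation x
        resp = []
        for x in xs:
            dens = [pi[j] / (sigma[j] * math.sqrt(2 * math.pi))
                    * math.exp(-0.5 * ((x - mu[j]) / sigma[j]) ** 2)
                    for j in range(k)]
            s = sum(dens) or 1e-300
            resp.append([d / s for d in dens])
        # M-step: re-estimate mixing weights, means and variances
        for j in range(k):
            nj = sum(r[j] for r in resp) or 1e-300
            pi[j] = nj / len(xs)
            mu[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var = sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, xs)) / nj
            sigma[j] = math.sqrt(max(var, 1e-6))
    return pi, mu, sigma
```

A load would then be judged suspicious when it falls outside the mean ± 3σ interval of every fitted component, in line with the Gaussian criterion mentioned in the abstract.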
Title: New approach based on proximity/remoteness measurement for customer classification Abstract: Customer recognition provides customers an opportunity to reflect on the services of service companies. Classifying customers into different categories based on their satisfaction helps insurance companies to better manage their capital, which results in more profit. Researchers have used different categorization methods to identify and classify customers based on their level of satisfaction with services. The purpose of this article is to present a new method for customer classification based on satisfaction with services in an insurance company. It overcomes the inefficiencies of a classification method called the Selectability/Rejectability Measures Approach for nominal classification and provides more accurate results. The method uses service quality criteria to better account for customers' perceptions and expectations. Finally, a numerical example is provided to justify the proposed method. The input data are obtained from a survey in which 384 complete questionnaires collected from the customers are examined.
68,491
Title: Dynamic resource provisioning for workflow scheduling under uncertainty in edge computing environment Abstract: Edge computing, an extension of cloud computing, is introduced to provide sufficient computing and storage resources for mobile devices. Moreover, a series of computing tasks in a mobile device are set as structured computing processes and flows to achieve effective management by the workflow. However, the execution uncertainty caused by performance degradation, service failure, and new service additions remains a huge challenge to the user's service experience. In order to address the uncertainty, a software-defined network (SDN)-based edge computing framework and a dynamic resource provisioning (UARP) method are proposed in this paper. The UARP method is implemented in the proposed framework and addresses the uncertainty through the advantages of SDN. In addition, the nondominated sorting genetic algorithm-III is employed to optimize two goals, that is, the energy consumption and the completion time, to obtain balanced scheduling strategies. The comparative experiments are performed and the results show that the UARP method is superior to other methods in addressing the uncertainty, while reducing energy consumption and shortening the completion time.
68,493
Title: Multi-criteria decision making for green supplier selection using interval type-2 fuzzy AHP: a case study of a home appliance manufacturer Abstract: Many managers and firm owners nowadays pay special attention to green supplier selection to gain competitive advantage all over the world. Consequently, this subject is a critical and significant decision for firms. This study utilizes an extension of the analytical hierarchy process (AHP) under an interval type-2 fuzzy environment (IT2FAHP) to better cope with ambiguity and vagueness in solving the supplier selection problem considering green concepts. A real case application is also performed in a home appliance manufacturer to demonstrate the effectiveness and efficiency of the IT2FAHP model. The findings indicate that the most important factors in selecting green suppliers are cleaner production, energy/material saving, green package, remanufacturing, and environmental management system. Further, a comparison and a sensitivity analysis are conducted to display the consistency and stability of the proposed model. The findings reveal that the IT2FAHP model is quite consistent with the models in the literature and that green product, cleaner production, green design, and green package have a significant positive effect on the performance of green suppliers.
68,589
Title: Benefits and challenges of online instruction in agriculture and natural resource education Abstract: Many land-grant institutions with agriculture and natural resource programs in the United States offer online courses to meet student demand. The goal of this study was to understand how major educational stakeholders, including instructors and students, perceive the benefits and limitations of online teaching and learning in agriculture and natural resource sciences. This study utilized a mixed mode data collection method, which involved informal meetings as well as online survey administration. The data were analyzed through strengths, weaknesses, opportunities, and threats (SWOT)-Analytic Hierarchical Process (AHP) framework. The study results offer novel perspectives on the perceived utility and challenges of several attributes of online learning including work-home balance, lack of social interactions, virtual classroom opportunities for working professionals, and academic integrity and cyber scam issues among others. Our findings may be beneficial to academic administrators, instructors, and institutions in identifying opportunities, challenges, and adopting programmatic strategies to improve effectiveness of online learning.
68,602
Title: Use Public Wi-Fi? Fear Arouse and Avoidance Behavior Abstract: This study examines internet security compliance behavior in the context of public Wi-Fi usage. We developed our research framework and hypotheses in light of protection motivation theory. Assembling questions from the existing literature, we collected opinions from a group of college students who know and use public Wi-Fi. Using structural equation modeling, we examined the roles of coping appraisal, threat appraisal, and fear in avoidance behavior. Our findings are that fear is an important determinant of the user's avoidance behavior and that coping appraisal indirectly enhances fear. Through this study, we provide support for using fear as a major control mechanism for high-risk behavior in a less controlled environment. We also provide another perspective on how fear develops, so that policy makers have an additional method to improve compliance behavior.
68,892
Title: Approximation algorithms for the maximally balanced connected graph tripartition problem Abstract: Given a vertex-weighted connected graph $$G = (V, E, w(\cdot ))$$ , the maximally balanced connected graph k-partition (k-BGP) seeks to partition the vertex set V into k non-empty parts such that the subgraph induced by each part is connected and the weights of these k parts are as balanced as possible. When the concrete objective is to maximize the minimum (to minimize the maximum, respectively) weight of the k parts, the problem is denoted as max–min k-BGP (min–max k-BGP, respectively), and has received much study over the past four decades. On general graphs, max–min k-BGP is strongly NP-hard for every fixed $$k \ge 2$$ , and remains NP-hard even for the vertex uniformly weighted case; when k is part of the input, the problem is denoted as max–min BGP, and cannot be approximated within 6/5 unless P $$=$$ NP. In this paper, we study the tripartition problems from the approximation algorithms perspective and present a 3/2-approximation for min–max 3-BGP and a 5/3-approximation for max–min 3-BGP, respectively. These are the first non-trivial approximation algorithms for 3-BGP, to the best of our knowledge.
68,959
Title: An approximation algorithm for stochastic multi-level facility location problem with soft capacities Abstract: The facility location problem is one of the most important problems in combinatorial optimization. The multi-level facility location problem and the facility location problem with capacities are important variants of the classical facility location problem. In this work, we consider the multi-level facility location problem with soft capacities in an uncertain scenario. The uncertainty setting means the location process is stochastic, and we consider a two-stage model. The soft-capacities setting means each facility has multiple capacity levels, obtained by paying multiple opening costs. The multi-level setting means each client needs to connect to a path of facilities. We propose a bifactor $$ (1/\alpha ,6/(1-2\alpha ) )$$ -approximation algorithm for the stochastic multi-level facility location problem (SMLFLP), where $$ \alpha \in (0,0.5) $$ is a given constant. Then, we reduce the stochastic multi-level facility location problem with soft capacities to SMLFLP. The reduction implies a $$ \left( 1/\alpha + 6/(1-2\alpha ) \right) $$ -approximation algorithm. The ratio is 14.9282 when setting $$ \alpha = 0.183 $$ .
69,034
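The closing numbers of the abstract can be checked directly. A quick numerical sweep (illustrative; analytically the minimizer is α = 1/(2 + 2√3) ≈ 0.18301) evaluates the combined factor 1/α + 6/(1 − 2α) over the open interval (0, 0.5):

```python
def ratio(alpha):
    # combined approximation factor stated in the abstract
    return 1.0 / alpha + 6.0 / (1.0 - 2.0 * alpha)

# coarse numerical sweep over (0, 0.5) to locate the minimizer
best_val, best_alpha = min((ratio(a / 100000.0), a / 100000.0)
                           for a in range(1, 50000))
```

At α = 0.183 the ratio is ≈ 14.9282, matching the value quoted above, and the sweep confirms this is essentially the minimum of the combined factor.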
Title: A robust network DEA model for sustainability assessment: an application to Chinese Provinces Abstract: This paper constructs an Environmental Sustainability index in order to investigate regional efficiency in China between 2000 and 2012. The Environmental Sustainability index consists of a Production Efficiency index and an Eco-efficiency index. A multiplicative relational network data envelopment analysis model is applied, and a window analysis is conducted to capture the efficiency trends over time. The results reveal significant heterogeneity among Chinese provinces for the Environmental Sustainability and the Eco-efficiency indices, while there is a high level of Production Efficiency across all provinces. Furthermore, there are large differences among geographical areas. Specifically, high Production Efficiency levels are reported for the eastern area, whereas, high Eco-efficiency levels are reported for the western area. The reported results provide valuable insights to decision makers, revealing a high potential for improvement in the overall Environmental Sustainability score, especially for the eastern and middle areas. In addition, regional heterogeneity should be taken into account when considering new legislation.
69,157
Title: An intelligent linear time trajectory data compression framework for smart planning of sustainable metropolitan cities Abstract: Urban road networks and vehicles generate an exponential amount of spatio-temporal big data, which attracts researchers from diverse fields of interest. Global positioning system devices may transceive data every second, producing huge amounts of trajectory data. Consequently, optimized computing is required for various operations such as visualization and mining of hidden patterns. This sporadically stored big data contains invaluable information that is useful for a number of real-time applications. Compression is a highly important but knotty task. Optimized compression enables us to achieve the desired results efficiently and effectively using minimum energy and computational resources without compromising important information. We present two versions of a compression technique based on the points of intersection (PoI) of urban road networks. Based on an intelligent mining paradigm, we created a compressed lookup lexicon to store the PoIs of dynamically selected regions of interest (ROI). An important feature of our lexicon is the key pattern, which is intelligently computed based on the relative geographic position of a spatial geodetic vertex with respect to the Euclidean space origin in a given ROI. This compresses trajectories in linear time, making it feasible for mission-critical real-world applications. Our experimental dataset contained 959 547, 517 436, and 231 740 trajectories for Bikes, Cars, and Taxis, respectively. Compr(10) reduced these trajectories to 17 428, 11 084, and 6565, respectively. Compr(15) and Compr(20) also show promising results. We define the quality of the compression in the context of the considered problem, and the results show that the proposed technique achieved satisfactory compression quality.
69,181
Title: Computation and algorithm for the minimum k-edge-connectivity of graphs Abstract: Boesch and Chen (SIAM J Appl Math 34:657–665, 1978) introduced the cut-version of the generalized edge-connectivity, named k-edge-connectivity. For any integer k with $$2\le k\le n$$ , the k-edge-connectivity of a graph G, denoted by $$\lambda _k(G)$$ , is defined as the smallest number of edges whose removal from G produces a graph with at least k components. In this paper, we first compute some exact values and sharp bounds for $$\lambda _k(G)$$ in terms of n and k. We then discuss the relationships between $$\lambda _k(G)$$ and other generalized connectivities. We provide an algorithm in $$\mathcal {O}(n^2)$$ time that computes a sharp upper bound in terms of the maximum degree. Among our results, we also compute some exact values and sharp bounds for the function f(n, k, t), which is defined as the minimum size of a connected graph G with order n and $$\lambda _k(G)=t$$ .
69,319
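On very small graphs the definition of k-edge-connectivity can be evaluated directly by exhaustive search over edge subsets. This is an exponential-time illustration of the definition only, unrelated to the O(n^2) algorithm of the paper; the encoding and names are our own:

```python
from itertools import combinations

def components(n, edges):
    """Number of connected components of a graph on vertices 0..n-1,
    via union-find with path halving."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)})

def k_edge_connectivity(n, edges, k):
    """Smallest number of edges whose removal leaves at least k
    components (the cut-version lambda_k from the abstract)."""
    if components(n, edges) >= k:
        return 0
    for r in range(1, len(edges) + 1):
        for removed in combinations(range(len(edges)), r):
            kept = [e for i, e in enumerate(edges) if i not in set(removed)]
            if components(n, kept) >= k:
                return r
    return None
```

For the 4-cycle, for instance, two edge deletions are needed before the graph splits into two components, so lambda_2(C_4) = 2.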
Title: Related machine scheduling with machine speeds satisfying linear constraints Abstract: We propose a related machine scheduling problem in which the processing times of jobs are given and known, but the speeds of machines are variables and must satisfy a system of linear constraints. The objective is to decide the speeds of machines and minimize the makespan of the schedule among all the feasible choices. The problem is motivated by some practical application scenarios. This problem is strongly NP-hard in general, and we discuss various cases of it. In particular, we obtain polynomial time algorithms for two special cases. If the number of constraints is more than one and the number of machines is a fixed constant, then we give a $$(2+\epsilon )$$ -approximation algorithm. For the case where the number of machines is an input of the problem instance, we propose several approximation algorithms, and obtain a polynomial time approximation scheme when the number of distinct machine speeds is a fixed constant.
69,337
Title: On weak Pareto optimality of nonatomic routing networks Abstract: This paper establishes several sufficient conditions for the weak Pareto optimality of Nash equilibria in nonatomic routing games on single- and multi-commodity networks, where a Nash equilibrium (NE) is weakly Pareto optimal (WPO) if under it no deviation of all players could make everybody better off. The results provide theoretical and technical bases for characterizing graphical structures for nonatomic routing games to admit WPO NEs. We prove that the NE of every nonatomic routing game on a single or multi-commodity network is WPO (regardless of the choices of nonnegative, continuous, nondecreasing latency functions on network links) if the network does not have two links x, y and three paths each of which goes from some origin to some destination such that the intersections of the three paths with $$\{x,y\}$$ are $$\{x\},\{y\}$$ and $$\{x,y\}$$ , respectively. This sufficient condition leads to an alternative proof of the recent result that the NE of every 2-commodity nonatomic routing game on any undirected cycle is WPO. We verify a general property satisfied by all feasible 2-commodity routings (not necessarily controlled by selfish players) on undirected cycles, which roughly says that no feasible routing can “dominate” another in some sense. The property is crucial for proving the weak Pareto optimality associated to the building blocks of undirected graphs on which all NEs w.r.t. two commodities are WPO.
69,364
Title: MINIMALLY STRONG SUBGRAPH (k, l)-ARC-CONNECTED DIGRAPHS Abstract: Let D = (V, A) be a digraph of order n and S a subset of V of size k, where $$2\le k\le n$$ . A subdigraph H of D is called an S-strong subgraph if H is strong and $$S\subseteq V(H)$$ . Two S-strong subgraphs $$D_1$$ and $$D_2$$ are said to be arc-disjoint if $$A(D_1)\cap A(D_2)=\emptyset $$ . Let $$\lambda _S(D)$$ be the maximum number of arc-disjoint S-strong digraphs in D. The strong subgraph k-arc-connectivity is defined as $$\lambda _k(D)=\min \{\lambda _S(D)\mid S\subseteq V, |S|=k\}$$ . A digraph D = (V, A) is called minimally strong subgraph (k, l)-arc-connected if $$\lambda _k(D)\ge l$$ but, for any arc $$e\in A$$ , $$\lambda _k(D-e)\le l-1$$ . Let $$\mathcal {G}(n,k,l)$$ be the set of all minimally strong subgraph (k, l)-arc-connected digraphs with order n. We define $$G(n,k,l)=\max \{|A(D)|\mid D\in \mathcal {G}(n,k,l)\}$$ and $$g(n,k,l)=\min \{|A(D)|\mid D\in \mathcal {G}(n,k,l)\}$$ . In this paper, we study minimally strong subgraph (k, l)-arc-connected digraphs. We give a characterization of the minimally strong subgraph (3, n-2)-arc-connected digraphs, and then give exact values and bounds for the functions g(n, k, l) and G(n, k, l).
69,401
Title: Approximation algorithms for constructing required subgraphs using stock pieces of fixed length Abstract: In this paper, we address the problem of constructing required subgraphs using stock pieces of fixed length (CRS-SPFL, for short), which is a new variant of the minimum-cost edge-weighted subgraph (MCEWS, for short) problem. Concretely, for the MCEWS problem Q, we are asked to choose a minimum-cost subset of edges from a given graph G such that these edges form a required subgraph $$G'$$ . For the CRS-SPFL problem $$Q^{\prime }$$ , the edges in such a required subgraph $$G'$$ must additionally be constructed using some stock pieces of fixed length L. The new objective, however, is to minimize the total cost to construct such a required subgraph $$G'$$ , where the total cost is the sum of the cost to purchase stock pieces of fixed length L and the cost to construct all edges in such a subgraph $$G'$$ . We obtain the following three main results. (1) Given an $$\alpha $$ -approximation algorithm to solve the MCEWS problem, where $$\alpha \ge 1$$ (for the case $$\alpha =1$$ , the MCEWS problem Q is solved optimally by a polynomial-time exact algorithm), we design a $$2\alpha $$ -approximation algorithm and another asymptotic $$\frac{7\alpha }{4}$$ -approximation algorithm to solve the CRS-SPFL problem $$Q^{\prime }$$ , respectively; (2) When Q is the minimum spanning tree problem, we provide a $$\frac{3}{2}$$ -approximation algorithm and an AFPTAS to solve the problem $$Q^{\prime }$$ of constructing a spanning tree using stock pieces of fixed length L, respectively; (3) When Q is the single-source shortest paths tree problem, we present a $$\frac{3}{2}$$ -approximation algorithm and an AFPTAS to solve the problem $$Q^{\prime }$$ of constructing a single-source shortest paths tree using stock pieces of fixed length L, respectively.
69,447
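The CRS-SPFL cost model described above (purchase cost for stock pieces of length L plus the cost of constructing the edges) can be illustrated with a simple first-fit-decreasing packing heuristic. This is a hedged sketch of ours, not one of the paper's approximation algorithms; the function name and the assumption that every edge fits on a single stock piece are our own.

```python
def crs_spfl_cost(edge_lengths, L, piece_cost, construction_cost):
    """Total cost = (number of stock pieces bought) * piece_cost + construction_cost.
    Pieces are allocated by first-fit decreasing: each edge is cut from the
    leftover material of one purchased piece (assumes every edge length <= L)."""
    bins = []  # remaining usable length of each purchased stock piece
    for length in sorted(edge_lengths, reverse=True):
        assert length <= L, "an edge longer than L cannot be cut from one piece"
        for i, remaining in enumerate(bins):
            if remaining >= length:
                bins[i] -= length  # cut this edge from an existing piece
                break
        else:
            bins.append(L - length)  # buy a new stock piece
    return len(bins) * piece_cost + construction_cost
```

Three edges of length 0.6 cannot share unit-length pieces, so three pieces must be bought; two edges of length 0.5 fit on one.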
Title: Intelligent oil well identification modelling based on deep learning and neural network Abstract: The purpose of this study is to explore intelligent oil well identification and the construction of an identification model. The convolutional neural network (CNN) algorithm from deep learning is used to construct a recognition model for the oil well function diagram: the collected raw data are gradually converted into standardised binary images, which serve as input for CNN feature extraction. The CNN algorithm can thus optimise the intelligent identification of oil wells, and the expected requirements are met since the identification error remains small.
69,483
Title: Application of item sum technique for estimating quantitative sensitive mean on successive moves using auxiliary information Abstract: Estimation of sensitive issues is a particularly challenging task, as it is strongly influenced by social desirability bias, which often leads to untruthful responses. Possible ways around this problem include the randomized response technique, the scrambled response technique and the item sum technique. The proposed work is a methodological advancement in the estimation of a quantitative sensitive mean in two-move successive sampling, obtained by modifying the item sum technique. Detailed properties of the new IST estimators are discussed under a general sampling design, with applications to different sampling designs on successive moves. Theoretical considerations are integrated with simulation studies to demonstrate a working version of the modified IST in two-move successive sampling.
69,510
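The item sum technique that the abstract builds on admits a very compact estimator: one sample answers a long list (k non-sensitive items plus the sensitive item) with a single total, another sample answers the short list (the same k non-sensitive items), and the difference of sample means estimates the sensitive mean. The sketch below shows this classical, unmodified IST estimator under the assumption of independent samples; it is not the paper's modified two-move estimator.

```python
from statistics import mean

def ist_estimate(long_list_totals, short_list_totals):
    """Classical item sum estimator of a sensitive mean.
    long_list_totals: reported totals of k non-sensitive items + the sensitive item.
    short_list_totals: reported totals of the same k non-sensitive items only.
    The non-sensitive contribution cancels in expectation."""
    return mean(long_list_totals) - mean(short_list_totals)
```

For example, if the long-list sample averages 14 and the short-list sample averages 11, the sensitive mean is estimated as 3.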
Title: Software system design for solution of effective material layout for the needs of production and logistics Abstract: The article deals with the research of software solutions for the efficient layout of material within a defined space for the needs of production storage and transport. The problem draws on knowledge from logistics, layout, mathematics and geometry for the creation of an expert system for material layout in the form of a software application. Similar systems for practical use do not occur in the market. The aim of the article is to point out the principles that must be taken into account in the research of an expert system for material layout. The final goal is a logistic computer system design focused on the effective use of loading areas, combined with an aesthetic, comfortable and logically structured output that serves as a realistic graphic illustration of the desired result. This part relies on an object-oriented programming language, mainly because of object-oriented features such as polymorphism and templating. The results of the program solution show an improvement in the utilization of the loading area of the means of transport by up to 30%.
69,567
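The "effective use of loading areas" mentioned above is, at its core, a two-dimensional packing problem. The sketch below shows one of the simplest heuristics for it, first-fit shelf packing on a fixed-width strip; this is our own illustrative heuristic under assumed rectangle inputs, not the expert system's actual layout algorithm.

```python
def shelf_pack(rects, strip_width):
    """First-fit shelf packing: place (width, height) rectangles on horizontal
    shelves of a strip of given width; returns the total strip height used.
    Sorting by decreasing height keeps shelves close to full height."""
    rects = sorted(rects, key=lambda r: r[1], reverse=True)
    shelves = []  # each shelf: [remaining_width, shelf_height]
    for w, h in rects:
        assert w <= strip_width, "rectangle wider than the strip"
        for shelf in shelves:
            if shelf[0] >= w:
                shelf[0] -= w  # place on the first shelf with room
                break
        else:
            shelves.append([strip_width - w, h])  # open a new shelf
    return sum(height for _, height in shelves)
```

The used height compared against the sum of rectangle areas divided by the strip width gives a quick lower-bound check on utilization, the kind of figure the article's 30% improvement refers to.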
Title: Almost unbiased ridge estimator in the gamma regression model Abstract: This article introduces the almost unbiased gamma ridge regression estimator (AUGRRE), based on the gamma ridge regression estimator (GRRE). Furthermore, some shrinkage parameters are proposed for the AUGRRE. The performance of the AUGRRE under different shrinkage parameters is compared with the existing GRRE and the maximum likelihood estimator. A Monte Carlo simulation is carried out to assess the performance of the estimators, where bias and mean squared error are used as performance criteria. A real-life dataset is also used to demonstrate the benefit of the proposed estimators. The simulation and real-life example results show the superiority of the AUGRRE over the GRRE and the maximum likelihood estimator for the gamma regression model with collinear explanatory variables.
69,583
Title: Examining a SPOC experiment in a foundational course: design, creation and implementation Abstract: SPOC is being practiced and studied in an increasingly wider educational community. The literature has concentrated on reporting its pedagogical effectiveness as measured by self-reported data on isolated learning components at the implementation phase, within the domain of the immediate course site. By using a combination of quantitative and qualitative methods, this study measures the educational efficacy of a SPOC program for a foundational course through multiple sources of objective data derived from substantial SPOC learning parameters, examines various problems throughout the pedagogical discourse and probes beyond the course level to reveal contextual factors beneath the surface problems. Major findings include (1) learning through SPOC had a statistically insignificant correlation with participants' academic outcomes; (2) componential problems in the SPOC program could be traced to inherent deficiencies in its design, creation and implementation, which are rooted in contextual factors at the levels of system, institution, faculty and course. The study does not intend to determine whether SPOC is effective or not, but hopes to demonstrate how a plethora of variables could conspire to trivialize the potential of a major curriculum renewal opportunity and to inform future endeavors of optimizing SPOC environments in similar and dissimilar contexts.
69,587
Title: Targeted reminders of electronic coupons: using predictive analytics to facilitate coupon marketing Abstract: Electronic coupon (e-coupon) is one of the most important marketing tools in B2C e-commerce. To improve the e-coupon redemption rate and reduce marketing costs, it is crucial to retarget customers who have received e-coupons and have higher propensity to redeem their coupons. Using log data and transactional data to extract the features of past purchase behavior, past coupon redemption behavior and browsing behavior during coupon validity period, we investigate the factors influencing customers’ propensity for e-coupon redemption. Our results show that almost all the variables used in our analysis (except the visit time) affect consumers’ coupon redemption propensity. Our study can help companies develop promotional strategies that better retarget those customers who are more likely to respond to coupon marketing. It also highlights the potential of using predictive analytics to enhance marketing effectiveness in the era of big data.
69,644
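Retargeting customers with a high redemption propensity, as described above, amounts to scoring each customer with a binary classifier over behavioral features. The sketch below trains a plain gradient-descent logistic regression on hypothetical features (past purchases, past redemptions, visits during the coupon validity period); the feature set and model choice are our own illustrative assumptions, not the study's actual specification.

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Logistic regression fit by plain stochastic gradient descent.
    X: rows of customer features; y: 1 if the coupon was redeemed, else 0."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))      # predicted propensity
            g = p - yi                          # gradient of log loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict_propensity(w, b, x):
    """Redemption propensity in [0, 1] for one customer's feature vector."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

Customers whose propensity exceeds a chosen threshold would then receive the targeted reminder, which is the retargeting step the abstract describes.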