Title: On estimation of two-dimensional dynamic panel model with confounders Abstract: In this paper, motivated by a real data example about borderline ovarian tumors, we study a two-dimensional dynamic panel model with confounding individual effects for modeling binary panel data. We propose using the maximum likelihood estimation method to estimate the model parameters. The properties of the maximum likelihood estimators are studied. The performance of the proposed estimation method is assessed in a Monte Carlo simulation study. The proposed model and estimation methods are illustrated by the real data example.
Title: An effective method for incentivizing groups implemented in a collaborative problem-based learning system to enhance positive peer interaction and learning performance Abstract: Many studies have verified that the effective promotion of both positive interactions among individual group members and group accountability is a critical issue in collaborative problem-based learning (CPBL). This work, therefore, proposes a group incentive mechanism (GIM) that is based on considering several important factors that influence peers' interactions and group accountability in collaborative learning to improve learning performance, interactive relationships, group efficacy, and the cohesiveness of groups of learners in a CPBL system. To evaluate the effectiveness of the proposed GIM, 48 Grade 4 students from two classes were recruited from an elementary school in Taoyuan City, Taiwan, to participate in an instruction experiment. The two classes were randomly assigned to the experimental group, using CPBL with the proposed GIM, and the control group, using CPBL with the individual incentive mechanism (IIM), to solve a target problem collaboratively. Analytical results reveal that although the control group of learners with the IIM exhibited greater social interactions than the experimental group of learners with the proposed GIM, the experimental group exhibited better learning performance, group efficacy, and positive interactive relationships than the control group.
Title: HE3: A hierarchical attribute based secure and efficient things-to-fog content sharing protocol Abstract: Internet of Things (IoT) based systems and applications are nowadays enabled by the paradigm of edge or fog computing. The central authority controlling the system formulates the policy about which fog device will get data from which IoT device(s). As fog devices have attributes such as physical location, storage capacity, and computing capability, attribute-based access policies can be enforced on each piece of IoT data. Ciphertext Policy Attribute Based Encryption (CP-ABE) is a method to implement such access policies. However, the number of access trees becomes equal to the number of access policies. In this paper, the hierarchy present in the attribute sets of access policies is exploited to obtain a single integrated access policy. A content sharing protocol named Hierarchical Efficient Encryption at Edge (HE3) is worked out to implement the concept. The proposed protocol is demonstrated to be highly efficient in terms of encryption and decryption time compared to existing approaches. A formal security proof is provided to establish the strength of the encryption.
Title: Statistical monitoring of binary response attributed social networks considering random effects Abstract: This paper presents a novel approach for monitoring attributed social networks considering random effects, a concept which enables the monitoring procedure to be effective for networks where the available pairs of communications are randomly accessible from an unknown larger set of individuals. The proposed method presents a statistical approach based on a generalized linear mixed model and a likelihood-based control chart for monitoring the social network data. Numerical examples, including simulation studies and a real-world case study, verify the effectiveness of the proposed method due to its ability to account for random effects.
Title: Sensitive medical data transmission and maintaining data quality using bacterial bee swarm-based hybrid lifetime maximization large-scale ad hoc routing protocol Abstract: Nowadays, the clinical examination process requires continuous data for evaluating patient health. Due to the importance of medical data, wireless sensor networks are used to collect the large volume of patient health information. The collected data include the patient's body temperature, blood pressure, heart rate, air flow, and glucose level, which help to provide initial treatment and avoid serious situations. The gathered information must be transmitted to the health care center without affecting the quality of the data, because quality problems lead to packet drops, reduced transmission throughput, maximum energy consumption, and delay. Hence, this paper develops an optimized routing protocol called the bacterial bee swarm-based hybrid lifetime maximization large-scale mobile ad hoc routing protocol for performing the data transmission process. The developed system examines the quality of data while improving the network lifetime in a large mobile ad hoc network area. The optimal path and data quality are continuously monitored, and an effective path and node are selected based on the bacterial bee swarm functions. Then, the efficiency of the system is evaluated with the help of experimental results in terms of energy consumption, packet delivery ratio, end-to-end delay, and QoS-related metrics.
Title: Optimal tests for arbitrarily varying object and the familywise error rates Abstract: In this article, optimal statistical hypothesis testing for decision making about an arbitrarily varying object is considered. The statistician would like to control the overall error rate in order to draw statistically valid conclusions from each test. The familywise error rate (FWE) metric and the hypothesis test procedure controlling both the type I and type II FWEs are generalized to an arbitrarily varying object. The characteristics of logarithmically asymptotically optimal (LAO) hypothesis testing are studied. The purpose of the research is to express the optimal functional relation among the reliabilities of LAO hypothesis tests and to judge them with the FWE metric.
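As a concrete baseline for the familywise error rate control discussed above (this is the classical Holm step-down procedure, a standard FWE-controlling method, not the paper's LAO construction), a minimal sketch:

```python
def holm_reject(pvals, alpha=0.05):
    """Holm step-down multiple testing: returns a list of booleans, True where
    the i-th null hypothesis is rejected, controlling the FWE at level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # compare the (rank+1)-th smallest p-value with alpha / (m - rank)
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # step-down: once a test fails, all larger p-values fail too
    return reject
```

For example, `holm_reject([0.001, 0.04, 0.2])` rejects only the first hypothesis, since 0.04 exceeds the second-stage threshold 0.05/2.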
Title: Quantum-behaved RS-PSO-LSSVM method for quality prediction in parts production processes Abstract: Quality control in the production process is the core of the enterprise's effort to ensure product quality, and quality prediction is the key link of quality control and quality management. Aiming at the quality prediction of parts in the production process, a product quality prediction model is established. In this model, Rough Set (RS), Particle Swarm Optimization (PSO), and Least Square Support Vector Machine (LSSVM) are applied to solve the problem of product quality prediction, and an RS-PSO-LSSVM synthesis algorithm is established. First, the 5M1E analysis of the production process for parts is carried out, and the index system of influencing factors is established. Based on this index system, the condition attributes and decision attributes of RS are determined; RS is used for attribute reduction to extract rules, and the optimal condition attribute values obtained serve as pre-processing for the LSSVM input data. Second, in order to improve the learning and generalization ability of LSSVM, PSO is used to optimize the relevant parameters and find the optimal solution. Finally, an example is given to verify the feasibility and effectiveness of the product quality prediction model and the RS-PSO-LSSVM comprehensive algorithm established above, and the prediction accuracy is higher than that of the RS-LSSVM algorithm.
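The PSO step in the pipeline above can be illustrated with a generic particle swarm loop (a minimal sketch only: the inertia/acceleration values are common textbook defaults, and the RS preprocessing and LSSVM model from the paper are not reproduced; here PSO simply minimizes a given objective):

```python
import random

def pso(f, dim, n=20, iters=100, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over R^dim with a basic particle swarm; returns (best_pos, best_val)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]               # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting, `f` would be a cross-validation error of the LSSVM as a function of its regularization and kernel parameters.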
Title: A contextual learning model for developing interactive e-books to improve students' performances of learning the Analects of Confucius Abstract: The Analects of Confucius has been identified as an important and influential philosophy around the globe. However, in traditional instruction, it is difficult to express the spirit of this philosophy. Therefore, many students consider it a big challenge to study the Analects of Confucius. To cope with this problem, in this study, an interactive e-book system was developed based on the contextual learning model to interpret the Analects of Confucius in an interactive way. A quasi-experiment was conducted to evaluate the performance of the proposed approach. The participants were 38 fifth graders from 2 classes in an elementary school located in northern Taiwan, who were divided into an experimental and a control group. The experimental group used an interactive e-book learning mode to learn the Analects of Confucius course while the control group used a conventional technology-enhanced teaching mode. The results showed that adopting the interactive e-book teaching mode to study the Analects of Confucius in the elementary school course can promote students' learning achievement and motivation. It was also found that the interactive e-book learning mode can trigger learners' deep motive, and hence promote their learning achievement.
Title: Online information searching behaviours: examining the impact of task complexity, information searching experience, and cognitive style Abstract: While the impact of behavioural and cognitive processes on online information searching behaviours has been studied in some depth, little is known about the impact of procedural and metacognitive processes on online information searching behaviours. In addition, although the literature contains studies examining online information searching behaviours based on experience, cognitive styles, and task complexity separately, there is only a limited number of studies that investigate how online information searching behaviours vary depending on individual characteristics by taking task complexity as a basis. The aim of this study is to explore whether university students' information searching behaviours, task completion times, and task completion rates in simple and difficult tasks differ depending on information searching experience and cognitive style. The study was conducted with a sample of 20 university students. The results of this study indicated that in difficult search tasks, online information searching experience is influential on the exhibition of online information searching behaviours associated with the metacognitive domain. In simple and difficult tasks, experience and cognitive styles cause differentiation in online information searching behaviours. When task complexity is taken as a basis, experience is more influential on task completion time and task completion rate than cognitive styles.
Title: Representativeness of ranked set sampling based on Bayesian score Abstract: The present article is concerned with comparing the representativeness, with respect to the underlying population as a whole, of a random sample obtained through ranked set sampling (RSS) with that obtained through simple random sampling (SRS). This is done by defining the Bayesian score as a measure of representativeness corresponding to the units of the population. The proposed procedure shows that RSS is a better sampling method and that the probability of obtaining the most unrepresentative sample is much lower for RSS than for SRS, irrespective of whether the ranking is perfect or imperfect. The proposed procedure is also illustrated through real-life data on the survivorship of children below one year in the Empowered Action Groups (EAG) states of India.
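The RSS mechanism compared above is easy to simulate (a minimal sketch assuming perfect ranking; the Bayesian score itself is not reproduced): draw k independent sets of size k, rank each set, and keep the i-th order statistic from the i-th set.

```python
import random

def rss_sample(population, k, rng):
    """One ranked set sample of size k under perfect ranking."""
    out = []
    for i in range(k):
        ranked = sorted(rng.sample(population, k))  # judgment-rank one set of size k
        out.append(ranked[i])                       # keep the i-th order statistic
    return out
```

Repeating this many times and comparing the variance of the sample mean against SRS shows the usual RSS efficiency gain.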
Title: Improving p-value approximation and level accuracy of Monte Carlo tests by quasi-Monte Carlo methods Abstract: We argue and show empirically that, for the Monte Carlo test, if the pseudo-random numbers are replaced by a randomized low-discrepancy sequence, the actual errors in approximating the p-value are smaller and the deviations of the exact level from the nominal level tend to be smaller. Hence, in real applications, the proposed method, called the randomized quasi-Monte Carlo test, is suggested for use instead of the traditional Monte Carlo test.
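The idea above can be sketched with one simple randomized low-discrepancy construction (a one-dimensional van der Corput sequence with a Cranley-Patterson random shift; the paper's choice of sequence and test problems may differ):

```python
import random

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    seq = []
    for i in range(n):
        x, denom = 0.0, 1.0
        while i > 0:
            denom *= base
            i, r = divmod(i, base)
            x += r / denom
        seq.append(x)
    return seq

def rqmc_pvalue(stat_obs, stat_from_u, n=1024, rng=None):
    """Monte Carlo p-value where the uniform driver is a randomly shifted
    van der Corput sequence instead of pseudo-random numbers.
    stat_from_u maps one uniform draw to one simulated null statistic
    (one-dimensional for brevity)."""
    rng = rng or random.Random()
    shift = rng.random()
    us = [(u + shift) % 1.0 for u in van_der_corput(n)]
    exceed = sum(stat_from_u(u) >= stat_obs for u in us)
    return (exceed + 1) / (n + 1)
```

With the identity map as the null statistic, the p-value of an observation at the 0.95 quantile lands very close to 0.05, reflecting the low discrepancy of the shifted sequence.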
Title: Discriminating between some lifetime distributions in geometric counting processes Abstract: Gamma, lognormal and Weibull distributions are most commonly used in modeling asymmetric data coming from the areas of life testing and reliability engineering. In this study, we deal with the problem of selecting one of these distributions for a given data set which is consistent with the geometric process (GP) model, according to the T-statistic based on the ratio of the maximized likelihood (RML). First, we show that the T-statistic performs better than the Kolmogorov-Smirnov (KS), mean square error (MSE) and maximum percentage error (MPE) criteria, based on an extensive simulation study. Then, by using the T-statistic, we determine the distributions of ten real data sets shown to be consistent with the GP model by Lam et al. (2004). After validating the distribution for these data sets, we calculate the estimators of the parameters by using the suitable method given in Lam and Chan (1998), Chan, Lam, and Leung (2004) or Aydogdu, Senoglu, and Kara (2010). Then, we plot the observed and fitted values of the interarrival and arrival times for comparison.
Title: A nonparametric generally weighted moving average sign chart based on repetitive sampling Abstract: Most control charts assume that the quality characteristics of interest follow a normal or another specific distribution. However, in reality there is often limited or no information regarding the underlying process distribution. Therefore, in recent years, nonparametric techniques for quality control have been developed. In this paper, the nonparametric generally weighted moving average sign chart based on repetitive sampling (hereinafter RS-GWMA sign chart) is proposed to improve the performance capability of existing charts for small process shifts. Simulation studies show that the nonparametric RS-GWMA sign chart with large design and adjustment parameters outperforms the competing charts considered in this article. The fill volume of soft-drink beverage bottles is used as an industrial example to illustrate the application of the proposed nonparametric RS-GWMA sign chart.
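The GWMA statistic underlying such a chart can be sketched as follows (illustrative only: `q` and `a` are placeholder design/adjustment parameters, and the repetitive-sampling decision rule and control limits are not reproduced). The weights telescope so that, together with the remainder placed on the in-control mean, they sum to one.

```python
def gwma_statistics(xs, mu0, q=0.9, a=0.7):
    """GWMA plotting statistics for observations xs with in-control mean mu0.
    For a sign chart, xs[t] is the count of sample items above the target
    median (Binomial(n, 0.5) in control) and mu0 = n / 2.
    Weight on the j-th most recent observation: q^((j-1)^a) - q^(j^a)."""
    stats = []
    for t in range(1, len(xs) + 1):
        m = sum((q ** ((j - 1) ** a) - q ** (j ** a)) * xs[t - j]
                for j in range(1, t + 1))
        m += (q ** (t ** a)) * mu0  # remaining weight goes to the in-control mean
        stats.append(m)
    return stats
```

When the process sits exactly at `mu0` the statistic stays at `mu0`; a sustained shift pulls it away as weight mass moves from the mean onto the data.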
Title: Multi-objective approach for multiple clusters detection in data points events Abstract: The spatial scan statistic is a widely used technique for detecting spatial clusters. Several extensions of this technique have been developed over the years. The objectives of these techniques are to improve detection accuracy and to make the cluster search space more flexible. Based on the Voronoi-Based Scan (VBScan), we propose a biobjective approach using a recursive VBScan method called multi-objective multiple clusters VBScan (MOMC-VBScan), alongside a new measure called matching. This approach aims to identify and delineate all multiple significant anomalies in a search space. We conduct several experiments on different simulated maps and two real datasets, with promising results. The proposed approach proved to be fast and to have good precision in determining the partitions.
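The spatial scan statistic at the heart of such methods scores a candidate zone by a Poisson likelihood ratio (a minimal sketch of Kulldorff-style scoring; the VBScan search and the multi-objective machinery are not reproduced):

```python
import math

def poisson_scan_llr(c, e, C, E):
    """Log-likelihood ratio for a candidate zone with c observed and e expected
    cases, out of C total observed and E total expected cases on the map.
    Returns 0 for zones whose rate is not elevated relative to the outside."""
    if c == 0 or c / e <= (C - c) / (E - e):
        return 0.0
    return (c * math.log(c / e)
            + (C - c) * math.log((C - c) / (E - e))
            - C * math.log(C / E))
```

The scan then maximizes this score over candidate zones; significance is usually assessed by Monte Carlo replication of the null.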
Title: Bounds on Watching and Watching Graph Products Abstract: A watchman's walk for a graph G is a minimum-length closed dominating walk, and the length of such a walk is denoted w(G). We introduce several lower bounds for such walks, and apply them to determine the lengths of watchman's walks in several grids.
Title: Construction and evaluation of an online environment to reduce off-topic messaging Abstract: Online discussions have become more common as social network services have become more ubiquitous and complement various learning activities. However, studies investigating online discussions in recent years have shown that off-topic messaging has increased with the use of social network services. Thus, determining the design of a mechanism to reduce the frequency of off-topic messaging is an issue deserving attention. This study develops a Facebook-based system and employs two strategies (a filter reminder strategy and a self-reflection strategy) aiming to reduce off-topic messaging in comparative and empirical studies. The research questions are as follows: (a) Which strategy is more effective in reducing off-topic messaging? (b) What are the influences of the strategies on the patterns of students' cognitive processes? and (c) Does this influence occur during discussions? The results indicate that the filter reminder strategy can not only reduce off-topic messaging but also elicit more diversified cognitive behaviors. Finally, based on the findings, this study provides suggestions for future research and advice regarding instruction.
Title: Half-spectral analysis of spatial-temporal data: The case study of Iranian daily wind speed data Abstract: In this paper, we first study the theory of spatial-temporal half-spectral modeling and describe some properties of recently proposed half-spectral models. Next, we propose a method for the estimation of spatial-temporal covariance functions in the half-spectral setting. To assess the performance of the proposed half-spectral models, we conduct two simulations, in which we compare the proposed fitting approach with other classical estimation methods. The proposed methods have great success in fitting parametric space-time covariance functions, specifically for massive data sets. Finally, we apply the proposed methods to real daily wind speed data from Iran.
Title: Evolving Optimized Neutrosophic C means clustering using Behavioral Inspiration of Artificial Bacterial Foraging (ONCMC-ABF) in the Prediction of Dyslexia Abstract: Precise prediction of the risk of dyslexia among children at early stages is a significant long-term aim in the field of cognitive computing. Producing such accurate results for the detection of dyslexia from a low-quality dataset containing vague information is among the toughest challenges for researchers. This paper aims at developing an evolving model to handle the impreciseness in the detection of dyslexia more intelligently. In this work, each instance is described in a neutrosophic domain by defining membership degrees of truthiness, indeterminacy, and falsity. These instances are neutrosophically clustered by applying Neutrosophic C-Means clustering (NCM), which forms four different clusters, namely dyslexia, no dyslexia, control/revision, and hyperactivity or other issues. Outliers and noise, special categories of indeterminacy that often occur in real datasets, are promptly discovered and clustered. NCM is optimized by introducing Artificial Bacterial Foraging (ABF), especially when there is vagueness or imprecision in the selection of cluster centroids. With the merits of global searching, ABF selects more promising clusterings during cluster re-computation. The interpreted results confirm that the proposed ONCMC-ABF algorithm produces better results in the prediction of dyslexia with the low-quality dataset.
Title: An Analogue of DP-Coloring for Variable Degeneracy and its Applications Abstract: A graph G is list vertex k-arborable if for every k-assignment L, one can choose f(v) is an element of L(v) for each vertex v so that vertices with the same color induce a forest. In [6], Borodin and Ivanova proved that every planar graph without 4-cycles adjacent to 3-cycles is list vertex 2-arborable. In fact, they proved a more general result in terms of variable degeneracy. Inspired by these results and by DP-coloring, which is a generalization of list coloring and has become a widely studied topic, we introduce a generalization of variable degeneracy including list vertex arboricity. We use this notion to extend a general result by Borodin and Ivanova. Not only does this theorem imply the results about planar graphs without 4-cycles adjacent to 3-cycles by Borodin and Ivanova, it also implies other results, including a result by Kim and Yu [S.-J. Kim and X. Yu, Planar graphs without 4-cycles adjacent to triangles are DP-4-colorable, Graphs Combin. 35 (2019) 707-718] that every planar graph without 4-cycles adjacent to 3-cycles is DP-4-colorable.
Title: Accurate and time-efficient negative binomial linear model for electric load forecasting in IoE Abstract: Accurate and efficient model predictive control (MPC) is essential for the Internet of energy (IoE) to enable active real-time control, decentralized demand-supply balance, and dynamic energy management. The MPC consists of short-term electric load forecasting, whose accuracy is affected by the load characteristics, such as overdispersion, autocorrelation, and seasonal patterns. The forecasting efficiency depends on the computational time that is required to produce accurate results and is affected by the IoE data volume. Although several fundamental short-term forecasting models have been proposed, more accurate and efficient models are needed for IoE. Therefore, we propose a novel forecasting temporal negative binomial linear model (NBLM) that handles overdispersion and captures nonlinearity of electric load. We also classify the load into low, moderate, and high intraday seasons to increase the forecast accuracy by modeling the autocorrelation in each season separately. The temporal NBLM was evaluated using real-world data from Jericho city, and its results were compared to other forecasting models. The temporal NBLM is found to be more accurate than the other models, as the mean absolute percentage error (MAPE) is reduced by 29% compared to the ARMA model. In addition, the proposed model is more efficient, as its running time is reduced by 63% in the training phase and by 87% in the forecast phase compared to the Holt-Winter model. This increase in accuracy and efficiency makes the proposed model applicable for load forecasting in IoE contexts where the data volume is large and the load fluctuates strongly, is overdispersed, is autocorrelated, and follows seasonal patterns.
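The overdispersion handling that motivates the negative binomial choice can be illustrated in isolation (a minimal sketch of the NB2 log-pmf with Var = mu + alpha*mu^2, plus a method-of-moments dispersion estimate; the paper's full temporal model and seasonal classification are not reproduced):

```python
import math

def nb2_logpmf(y, mu, alpha):
    """Log-pmf of the NB2 negative binomial with mean mu and Var = mu + alpha*mu^2."""
    r = 1.0 / alpha  # NB 'size' parameter
    return (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
            + r * math.log(r / (r + mu)) + y * math.log(mu / (r + mu)))

def moment_estimates(ys):
    """Method-of-moments mean and dispersion alpha, from Var = mu + alpha*mu^2."""
    n = len(ys)
    mu = sum(ys) / n
    var = sum((y - mu) ** 2 for y in ys) / (n - 1)
    return mu, max((var - mu) / (mu * mu), 0.0)  # alpha = 0 reduces to Poisson
```

When the sample variance exceeds the mean (overdispersed load counts), the estimated `alpha` is positive and the NB2 likelihood is the appropriate replacement for a Poisson one.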
Title: Nonparametric empirical Bayesian method for noncontractual setting of customer-base analysis Abstract: In the noncontractual setting of customer-base analysis, heterogeneity parameters in purchase model and lifetime model are usually assumed to follow some familiar parametric distribution such as gamma or log-normal distribution. But, in many applications, these assumptions may be questionable because the true distributions of heterogeneity parameters are usually unknown. To this end, this paper relaxes these assumptions imposed on heterogeneity parameters to develop a nonparametric approach to purchase model and lifetime model, in which unknown distributions of heterogeneity parameters are approximated by a truncated Dirichlet process prior. A nonparametric empirical Bayesian method is developed to obtain Bayesian estimations of unknown parameters in the proposed nonparametric models. The blocked Gibbs sampler is presented to draw observations required for Bayesian inference from the corresponding posterior distributions of the components of parameters. Extensive simulation studies and a CDNOW data set are presented to illustrate the newly developed methodologies.
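The truncated Dirichlet process prior used above is built from stick-breaking weights, which are simple to sample (an illustrative sketch of the truncation device only; the purchase/lifetime models and the blocked Gibbs sampler are not reproduced):

```python
import random

def stick_breaking_weights(concentration, K, rng):
    """Weights of a Dirichlet process truncated at K atoms: Beta(1, concentration)
    fractions break off what remains of a unit stick; the last fraction is set
    to 1 so the truncated weights sum exactly to one."""
    fractions = [rng.betavariate(1.0, concentration) for _ in range(K - 1)] + [1.0]
    weights, remaining = [], 1.0
    for b in fractions:
        weights.append(remaining * b)
        remaining *= (1.0 - b)
    return weights
```

Pairing each weight with an atom drawn from a base distribution gives a draw from the truncated DP that approximates the unknown heterogeneity distribution.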
Title: A closed-form quasi-maximum likelihood estimator of bid-ask spread Abstract: We propose a closed-form quasi-maximum likelihood (QML) estimator of the bid-ask spread from daily high-low ranges under both near-ideal and overnight theoretical frameworks. As the high-low spread estimator has been shown to be more precise than estimators using only closing prices, we investigate the statistical properties of such range-based spread estimators, and further demonstrate that our estimator is free from overnight adjustment and has higher estimation efficiency. Simulation studies show that the QML estimator has relatively lower bias and RMSE compared with other prevalent low-frequency measures, and the results are more significant under overnight conditions.
Title: Economic and economic-statistical designs of auxiliary information based synthetic and EWMA charts Abstract: The economic and economic-statistical performances of auxiliary information based synthetic and EWMA charts in monitoring the mean are investigated. Incorporating the auxiliary characteristic improves the cost and out-of-control run length performances of the charts. As it is often difficult to specify the exact shift size for which quick detection is needed, this paper investigates the economic and economic-statistical performances of auxiliary information based synthetic and EWMA charts, for both cases when the exact shift size can and cannot be specified. This is achieved by using the average run length (ARL) and expected average run length (EARL) performance criteria, respectively. The cost performances of these charts are compared and the cost savings are discussed.
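The ARL criterion used above is typically approximated by simulating run lengths; a minimal sketch for a plain two-sided EWMA chart on standard normal data (the chart constants are illustrative placeholders, and the auxiliary-information and synthetic components of the paper are not reproduced):

```python
import random

def ewma_run_length(lam=0.2, L=2.7, shift=0.0, rng=None, max_t=10**6):
    """Run length (first signal time) of a two-sided EWMA chart with smoothing
    lam and width L, applied to N(shift, 1) observations, using exact
    time-varying control limits."""
    rng = rng or random.Random()
    z = 0.0
    for t in range(1, max_t + 1):
        z = lam * rng.gauss(shift, 1.0) + (1 - lam) * z
        sigma_z = (lam / (2 - lam) * (1 - (1 - lam) ** (2 * t))) ** 0.5
        if abs(z) > L * sigma_z:
            return t
    return max_t
```

Averaging many run lengths at a fixed shift estimates the ARL; averaging over a distribution of shifts gives the EARL used when the shift size cannot be specified.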
Title: Comparing limiting availabilities of a k-out-of-n:G system and a k-component series system with (n-k) spare components Abstract: The limiting availability of an ordinary k-out-of-n:G system can be improved simply by operating exactly k operable units while keeping any other operable units on cold standby. We establish this result when there are different numbers of repair facilities, the repair time is exponential, and either each component has an exponential lifetime, or each component is made up of m identical sub-components that operate sequentially, with each sub-component having an exponential lifetime.
Title: Efficient opportunistic routing with social context awareness for distributed mobile social networks Abstract: Mobile social networks (MSNs) are developed from mobile ad hoc networks. Nodes in such networks usually have social characteristics. In recent years, researchers have been trying to use the social characteristics of the network to propose new data forwarding metrics, so as to design more efficient routing algorithms. However, most of the proposed algorithms only consider local context information, so routing performance is not fully optimized. In this paper, we introduce two key metrics, namely social relationship and social activity. These metrics are used to search for the best data forwarding nodes to improve the probability of data delivery. We propose a prediction-based social-aware opportunistic routing (PSOR) scheme. In the proposed method, nodes' social profiles are used to search the relay candidate set, and a discrete-time semi-Markov prediction model is used to find the probability distribution of node transitions between communities. Many simulation experiments based on real traces show that the proposed PSOR algorithm maximizes the packet delivery probability more efficiently than other state-of-the-art algorithms.
Title: Modeling additive genetic effects in animal models by closed skew normal distribution Abstract: Animal models are used commonly for modeling genetic responses. In these models the response variable can be Gaussian or Non-Gaussian, so these models belong to the generalized linear mixed models, where the genetic correlation structure of data is considered through random effects with the normal distribution. But in many applications, it is unclear whether or not the normal assumption holds. Wrong Gaussian assumptions cause bias in the component variance estimates and affect the accuracy of results. In this paper, we have proposed a closed skew normal distribution for the genetic random effects which is more flexible and includes the normal distribution. The Bayesian inference approach and the Markov Chain Monte Carlo algorithms are developed for the parameter estimations. The performance of the proposed models is illustrated by a simulation study and an example. The accuracy of the closed skew normal model is favorably compared with the normal model in the simulation and the real example.
Title: Comparison of additive shared frailty models under Lindley baseline distribution Abstract: In this article, we propose additive shared gamma frailty model and additive shared inverse Gaussian frailty model with Lindley as baseline distribution to analyze the infectious disease data set of McGilchrist and Aisbett. The Bayesian approach of Markov Chain Monte Carlo technique was employed to estimate the parameters involved in the models. A simulation study was also carried out to compare the true values and estimated values of the parameters. Comparison of proposed models and the existing models was also done by using Bayesian information criteria such as BIC, WBIC and Bayes factor. A better model was also recommended for infectious disease data.
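Simulating from the Lindley baseline used above is straightforward, since the Lindley(theta) density is a two-component mixture of an exponential and a gamma (an illustrative sampler only; the frailty models and their Bayesian fitting are not reproduced):

```python
import random

def rlindley(theta, rng):
    """Sample from Lindley(theta): Exp(theta) with probability theta/(theta+1),
    otherwise Gamma(shape 2, rate theta), matching the density
    theta^2/(theta+1) * (1+x) * exp(-theta*x)."""
    if rng.random() < theta / (theta + 1.0):
        return rng.expovariate(theta)
    # Gamma(2, rate theta) as a sum of two independent exponentials
    return rng.expovariate(theta) + rng.expovariate(theta)
```

The mixture gives mean (theta+2)/(theta*(theta+1)), e.g. 1.5 when theta = 1, which is handy for checking simulated baseline lifetimes.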
Title: Inference for exponential competing risks data under generalized progressive hybrid censoring Abstract: In this paper, a competing risks model based on a generalized progressive hybrid censoring is considered. When the latent lifetime distributions of the failure causes are exponentially distributed and partially observed, maximum likelihood estimates for the unknown parameters are established, and the associated asymptotic confidence interval estimates are provided by large-sample theory via the observed Fisher information matrix. Moreover, Bayes point estimates and the highest posterior density credible intervals of the unknown parameters are also considered, and an importance sampling procedure is used to approximate the corresponding estimates. Finally, a real-life example and a simulation study are presented for illustration.
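In the simplest censored setting, the exponential competing-risks MLEs have the well-known closed form "cause-specific failures over total time on test" (a minimal sketch for independently right-censored data; the paper's generalized progressive hybrid censoring scheme changes the observed times but not this basic form):

```python
def exp_competing_risks_mle(times, causes):
    """Closed-form MLEs for latent exponential competing risks:
    lambda_j = (# failures from cause j) / (total time on test).
    times[i] is unit i's observed time; causes[i] is its failure cause,
    or 0 if the unit was censored."""
    total_time = sum(times)
    return {c: causes.count(c) / total_time
            for c in sorted(set(causes)) if c != 0}
```

For example, four units observed for times 1, 2, 3, 4 with causes 1, 2, 1 and one censoring give 10 units of total time, hence rate estimates 0.2 for cause 1 and 0.1 for cause 2.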
Title: A novel need based free channel selection scheme for cooperative CRN using EFAHP-TOPSIS Abstract: Emerging wireless applications need more spectrum to tackle the rapid growth of users. Spectrum outside the established radio bands is hard to utilize for data transmission in terms of hardware requirements. Cognitive radio research addresses the current underutilization of the spectrum and provides a better model for the next-generation wireless environment. Since cognitive radio and its policies depart from static spectrum allocation, i.e., the current wireless network policy, many challenges must be overcome to accomplish a better cognitive radio wireless environment. One of the major challenges is secure transmission and optimal free channel selection for uninterrupted data transmission. In this article, we propose an efficient free-channel selection scheme for improving the QoS of cooperative cognitive radio networks. The proposed model is derived as an integrated approach using successful MCDM techniques, namely EFAHP and TOPSIS. We also demonstrate the optimality of the proposed integrated technique against other parallel channel selection techniques in various scenarios.
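The TOPSIS half of the integrated approach is a standard MCDM ranking that is easy to sketch (illustrative only: the weights here would come from the EFAHP stage, which is not reproduced; criteria and values below are hypothetical):

```python
def topsis(matrix, weights, benefit):
    """Rank alternatives (rows of matrix) over criteria (columns) by closeness
    to the ideal solution. benefit[j] is True if larger is better for criterion j."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each column, then apply criterion weights
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    cols = list(zip(*v))
    ideal = [max(cols[j]) if benefit[j] else min(cols[j]) for j in range(n)]
    anti = [min(cols[j]) if benefit[j] else max(cols[j]) for j in range(n)]
    scores = []
    for row in v:
        d_pos = sum((row[j] - ideal[j]) ** 2 for j in range(n)) ** 0.5
        d_neg = sum((row[j] - anti[j]) ** 2 for j in range(n)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient in [0, 1]
    return scores
```

In a channel-selection setting, the rows would be candidate free channels and the columns QoS criteria such as availability (benefit) and interference (cost); the channel with the highest closeness coefficient is selected.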
Title: Stochastic restricted Liu estimator in linear mixed measurement error models Abstract: This paper is concerned with the Liu estimator, stochastic restricted estimator and stochastic restricted Liu estimator of fixed and random effects in the linear mixed measurement error models. We compare the proposed estimators under the criterion of mean squared error matrix (MSEM). Furthermore, the selection of the Liu biasing parameter is discussed. A real data analysis is provided to illustrate the theoretical results and a simulation study is conducted to characterize the performance of estimators in the linear mixed measurement error model.
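For the fixed-effects part, the plain Liu estimator has the closed form beta_d = (X'X + I)^{-1} (X'X + d*I) beta_OLS (a minimal sketch of the ordinary Liu estimator only; the stochastic restrictions, random effects, and measurement-error structure of the paper are not reproduced):

```python
def _solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (small systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def liu_estimator(X, y, d):
    """Liu estimator beta_d = (X'X + I)^{-1} (X'X + d*I) beta_OLS;
    d = 1 recovers OLS, while d < 1 shrinks the coefficients ridge-style."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)] for i in range(p)]
    Xty = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    beta_ols = _solve(XtX, Xty)
    A = [[XtX[i][j] + (1.0 if i == j else 0.0) for j in range(p)] for i in range(p)]
    rhs = [sum(XtX[i][j] * beta_ols[j] for j in range(p)) + d * beta_ols[i]
           for i in range(p)]
    return _solve(A, rhs)
```

Choosing the biasing parameter d trades a little bias for variance, which is the selection problem the abstract refers to.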
Title: Bayesian optimal designs for cox regression model with random and nonrandom intercept based on type I censored data Abstract: In this article, we aim to develop A- and D-optimal designs for censored data via a Bayesian strategy. In this regard, appropriate designs are calculated based on several different prior distributions for the unknown parameters. In the sequel, we define the random intercept model for the Cox regression model, assuming the random intercept is normally distributed. It is observed that the likelihood function of the observations cannot be obtained in closed form. Thus, we obtain the quasi-information matrix based on the quasi-likelihood function. We then use this matrix to obtain the A- and D-optimal designs. It is seen that the Fisher information matrix depends on the unknown parameters; as a result, locally optimal designs will be inefficient when the initial guess about the parameters fails. To overcome this problem, we propose a Bayesian strategy to obtain more robust optimal designs. Some numerical techniques, such as Monte Carlo integration and numerical optimization methods, are implemented to calculate the designs. Further, the generalized equivalence theorem is adapted to confirm the optimality of the proposed designs. All of the numerical results are obtained with the R software. The results of this article help to remove the weakness of locally optimal designs, addressed by Schmidt and Schwabe, when the parameters are misspecified.
25,471
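The core of the Bayesian strategy — replacing the criterion log det M(design, theta) at a single point guess by its Monte Carlo average over prior draws — can be sketched on a simple two-parameter logistic model, used here only as a stand-in for the censored Cox model of the article; the design points and prior below are illustrative assumptions:

```python
import numpy as np

def bayes_d_criterion(xs, ws, theta_draws):
    """Monte Carlo estimate of E_theta[log det M(design, theta)] for a
    logistic model eta = t0 + t1 * x with design points xs and weights ws."""
    vals = []
    for t0, t1 in theta_draws:
        M = np.zeros((2, 2))
        for x, w in zip(xs, ws):
            p = 1.0 / (1.0 + np.exp(-(t0 + t1 * x)))
            f = np.array([1.0, x])
            M += w * p * (1 - p) * np.outer(f, f)   # information contribution
        vals.append(np.log(np.linalg.det(M)))
    return np.mean(vals)
```

Averaging over the prior makes the comparison of candidate designs robust to a wrong point guess, which is precisely the weakness of locally optimal designs that the article targets.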
Title: A review on big data based parallel and distributed approaches of pattern mining Abstract: Pattern mining is a fundamental data mining technique for discovering interesting correlations in a data set. There are several variations of pattern mining, such as frequent itemset mining, sequence mining, and high utility itemset mining. High utility itemset mining is an emerging data science task that aims to extract knowledge based on a domain objective. The utility of a pattern reflects its effectiveness or benefit, which can be calculated based on user priority and domain-specific understanding. The sequential pattern mining (SPM) problem has been widely examined and extended in various directions; sequential pattern mining enumerates sequential patterns in a sequence data collection. In recent years, researchers have paid growing attention to frequent pattern mining over uncertain transaction datasets, and mining itemsets in big data has received extensive attention based on the Apache Hadoop and Spark frameworks. This paper seeks to give a broad overview of the distinct approaches to pattern mining in the big data domain. Initially, we investigate the problems involved in pattern mining approaches and associated techniques such as Apache Hadoop, Apache Spark, and parallel and distributed processing. Then we examine major developments in parallel, distributed, and scalable pattern mining, analyze them from the big data perspective, and identify difficulties in designing such algorithms. In particular, we study four varieties of itemset mining: parallel frequent itemset mining, high utility itemset mining, sequential pattern mining, and frequent itemset mining in uncertain data. This paper concludes with a discussion of open issues and opportunities, and provides directions for further enhancement of existing approaches.
25,481
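As a baseline for the parallel and distributed variants surveyed here, the classical single-machine Apriori procedure for frequent itemset mining can be sketched in a few lines (a didactic serial version, not an optimized or distributed implementation):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise Apriori: returns {frozenset(itemset): support count}."""
    transactions = [set(t) for t in transactions]
    items = {i for t in transactions for i in t}

    def support(c):
        return sum(1 for t in transactions if c <= t)

    freq = {}
    current = [frozenset([i]) for i in items]
    while current:
        level = {c: s for c in current if (s := support(c)) >= min_support}
        freq.update(level)
        # join step: frequent k-itemsets differing in one item -> (k+1)-candidates
        current = list({a | b for a, b in combinations(level, 2)
                        if len(a | b) == len(a) + 1})
    return freq
```

Distributed versions (e.g., on Hadoop or Spark) partition `transactions` across workers and aggregate candidate counts per level, but the level-wise candidate generation shown above is the same.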
Title: Exploring how students interact with guidance in a physics simulation: evidence from eye-movement and log data analyses Abstract: The purpose of this study was to explore how students interacted with guidance to conduct a scientific inquiry in a physics simulation by using the eye-tracking techniques. The participants were 51 7th graders, and an eye-tracking system was used to record their visual behaviors and log data while they were using the simulation. As for data analysis, we first checked each participant's log data to examine whether they completed the requirement of the guidance, and then checked the correctness of her/his answer to the inquiry task. The participants were thus divided into two groups (correct vs. wrong), and the patterns of their visual behaviors were examined by a set of eye-movement indices, normalized heat maps and lag sequential analyses. The results indicate that both spatial distributions and temporal sequences of the participants' visual attention were associated with their performances on the inquiry task. Regarding the spatial distribution, the correct group tended to allocate more visual attention to the regions presenting the target phenomenon than the wrong group. Concerning the temporal sequence, the correct group tended to make more visual transitions among the content of the guidance, the relevant control panels and the target phenomena than the wrong group.
25,580
Title: Saliency guided faster-RCNN (SGFr-RCNN) model for object detection and recognition Abstract: Recently, object detection and recognition based applications have been widely adopted in various real-time and offline settings. Computer vision based automatic learning schemes have gained considerable attention from researchers because of their capacity to learn, which can significantly improve detection performance. Advances in deep and convolutional neural networks have improved the efficiency of applications based on recognition and detection. However, enhancing precision, decreasing detection error, and detecting camouflaged items are still regarded as difficult problems. In this work, we concentrate on these problems and present a model based on Faster-RCNN that utilizes saliency detection, proposal generation and bounding box regression, along with suitable loss functions, for better detection. The suggested method is referred to as the saliency guided Faster RCNN model for object detection and recognition using a computer vision approach (SGFr-RCNN). The performance of the suggested strategy is assessed using the PASCAL VOC 2007, PASCAL VOC 2012 and CAMO_UOW data sets and contrasted with current methods in terms of mean average precision. The comparative study demonstrates the significant improvement in the results of the suggested strategy relative to current methods.
25,602
Title: New distribution function estimators and tests of perfect ranking in concomitant-based ranked set sampling Abstract: Ranked set sampling (RSS) allows us to draw inferences about population parameters more efficiently than simple random sampling. Recent publications have shown that using concomitant information can lead to improved estimation of the cumulative distribution function (CDF) under the RSS set-up. In this article, new estimators for both the in-stratum and overall CDF are proposed. The proposed estimators are then used for estimating the differential entropy, the Bohn and Wolfe model and the fraction-of-random model. Three new simple tests of the assumption of perfect ranking are also produced. Our simulation results show that the suggested estimators, as well as the perfectness tests, provide higher efficiency than their competitors. For illustrative purposes, the proposed procedures are applied to an empirical data set in the medical field.
25,614
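The concomitant-based RSS scheme can be illustrated with a small simulation: each sampled unit is selected by ranking a set on a correlated concomitant Y rather than on the study variable X itself, and the CDF is then estimated from the resulting sample. The sketch below uses the plain empirical CDF, not the improved estimators proposed in the article, and the bivariate normal model with correlation rho is an illustrative assumption:

```python
import numpy as np

def ranked_set_sample(rng, m, cycles, rho=0.9):
    """Draw an RSS of size m*cycles where ranking is done on a
    concomitant Y correlated (rho) with the study variable X ~ N(0,1)."""
    sample = []
    for _ in range(cycles):
        for i in range(m):
            x = rng.normal(size=m)
            y = rho * x + np.sqrt(1 - rho ** 2) * rng.normal(size=m)
            sample.append(x[np.argsort(y)[i]])  # unit whose concomitant has rank i
    return np.array(sample)

def ecdf(sample, t):
    """Basic empirical CDF estimate at point t."""
    return np.mean(sample <= t)
```

Averaging over the m rank strata keeps the overall CDF estimator unbiased even under the imperfect, concomitant-based ranking, while the stratification reduces its variance relative to simple random sampling.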
Title: Effects of spherical video-based virtual reality on nursing students' learning performance in childbirth education training Abstract: The childbirth education training course aims to help nurses establish the basic ability to care for pregnant women. However, in traditional teaching, it is often difficult for learners to distinguish the meaning of different childbirth signs. While virtual reality (VR) may be associated in most people's minds with videos designed for immersive entertainment, in fact one of its early purposes was nursing education. Initial forms of Spherical Video-based Virtual Reality (SVVR) were used to prepare and train nursing students, pilots, and military personnel. In this paper, an exploratory study using social learning theory embedded in SVVR for childbirth education training is presented; moreover, the learning performances of nursing students who participated in SVVR classroom learning and those who learned with the traditional approach were compared. The experimental results show that, compared with traditional instruction, the learning motivation and learning satisfaction of the students who learned with SVVR were better, showing the potential of this powerful medium for enhancing nursing students' learning performance in the context of childbirth education.
25,661
Title: Testing symmetry of model errors for nonparametric regression models by using correlation coefficient Abstract: In this paper, we propose a residual-based estimator of the k-th correlation coefficient between the density function and distribution function of a continuous variable for nonparametric regression models, and we further use this k-th correlation coefficient to test whether the density function of the true model error is symmetric. First, we propose a moment-based estimator of the k-th correlation coefficient and present its asymptotic results. Second, we consider statistical inference on the k-th correlation coefficient by using the empirical likelihood method; the empirical likelihood statistic is shown to be asymptotically chi-squared. Simulation studies are conducted to examine the performance of the proposed estimators.
25,699
Title: Discrete analogues of continuous bivariate probability distributions Abstract: In many real-world applications, the phenomena of interest are continuous in nature and modeled through continuous probability distributions, but their observed values are actually discrete and hence it would be more reasonable and convenient to choose an appropriate (multivariate) discrete distribution generated from the underlying continuous model preserving one or more important features. In this paper, two methods are discussed for deriving a bivariate discrete probability distribution from a continuous one by retaining some specific features of the original stochastic model, namely (1) the joint density function, or (2) the joint survival function. Examples of applications are presented, which involve two types of bivariate exponential distributions, in order to illustrate how the discretization procedures work and show whether and to which extent they alter the dependence structure of the original model. We also prove that some bivariate discrete distributions that were recently proposed in the literature can be actually regarded as discrete counterparts of well-known continuous models. A numerical study is presented in order to illustrate how the procedures are practically implemented and to present inferential aspects. Two real datasets, considering correlated discrete recurrence times (the former) and counts (the latter) are eventually fitted using two discrete analogues of a bivariate exponential distribution.
25,725
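Method (2) — retaining the joint survival function — amounts to assigning each lattice point (i, j) the probability of the unit rectangle above it. A sketch using Gumbel's type I bivariate exponential as the continuous parent (the choice of parent and of θ is illustrative, not the specific models fitted in the paper):

```python
import numpy as np

def discretize_survival(S, imax=60):
    """Discrete analogue preserving the joint survival function:
    p(i, j) = S(i, j) - S(i+1, j) - S(i, j+1) + S(i+1, j+1)."""
    P = np.empty((imax, imax))
    for i in range(imax):
        for j in range(imax):
            P[i, j] = (S(i, j) - S(i + 1, j)
                       - S(i, j + 1) + S(i + 1, j + 1))
    return P

# Gumbel type I bivariate exponential survival function (illustrative theta)
theta = 0.5
S = lambda x, y: np.exp(-x - y - theta * x * y)
P = discretize_survival(S)
```

Each p(i, j) is a rectangle probability of the continuous parent, so it is automatically nonnegative, and summing row i telescopes to S(i, 0) − S(i+1, 0): each margin is exactly the geometric analogue of the exponential margin.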
Title: Effect of timing on reliability improvement and ordering decisions in a decentralized assembly system Abstract: In this study, we investigate a decentralized assembly system in which the suppliers are unreliable and have uncertain production capacities. We focus on the case in which a manufacturer deals with two suppliers that provide complementary products. We assume that the suppliers have the opportunity to improve their reliabilities through investments. The manufacturer determines her order quantities and the suppliers decide on their investment amounts in their capacities. The timing of the decisions has a substantial effect on the optimal behaviors of the players. We investigate the problem under four different settings based on the sequence of events: (1) simultaneous ordering and investment in which decisions are made concurrently, (2) ordering after observation of capacities in which the manufacturer orders after observing the suppliers’ capacities, (3) ordering before realization of capacities in which the manufacturer orders after the suppliers’ investment decisions, and (4) ordering before investments in which the suppliers invest after the manufacturer’s ordering decision. We demonstrate the existence of a Pareto optimal equilibrium in the first two scenarios. In addition, we show that in the fourth scenario, there exists a unique Nash equilibrium in the suppliers’ game. Based on the realized capacities of the suppliers, it may be beneficial for them to share their production information with the manufacturer. In addition, we indicate that using a sequential decision strategy can enhance the performances of the supply chain and the members. When the suppliers are the leaders, they implement investment inflation strategies to stimulate the manufacturer to place larger orders. When the manufacturer is the leader, she uses an order inflation strategy to increase the investment of the suppliers. 
Our numerical analysis revealed that in different situations, the players may prefer ordering before realization of capacities scenario or ordering before investments. Finally, we extended our results to a multiple-suppliers case in which the suppliers are identical.
25,751
Title: Improving the quality of Higher Education teaching through the exploitation of student evaluations and the use of control charts Abstract: Student evaluations of faculty members' teaching effectiveness are considered quite important in Higher Education (HE). In this paper, we elaborate on the framework of Nikolaidis and Dimitriadis, based mainly on Statistical Process Control techniques and tools, which enables a deeper analysis and broader exploitation of student evaluation data. More specifically, we thoroughly examine and evaluate through simulation, several popular types of control charts (CCs), identifying the most suitable among them, using as comparison criteria various statistical properties of CCs. The ultimate goal of our research is to provide decision makers in HE institutions with an easy-to-use reliable tool for not only monitoring the teaching process, but also identifying the effective and ineffective faculty members' teaching performance to promote the quality of their Institution.
25,758
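One of the statistical properties used to compare control charts is the average run length (ARL), which is easily estimated by Monte Carlo. A sketch for a plain individuals Shewhart chart with ±L·sigma limits (an illustrative baseline, not the specific charts evaluated in the paper):

```python
import numpy as np

def simulated_arl(shift=0.0, L=3.0, reps=500, seed=1):
    """Monte Carlo average run length of an individuals Shewhart chart:
    mean number of N(shift, 1) observations until one falls outside +/- L."""
    rng = np.random.default_rng(seed)
    runs = []
    for _ in range(reps):
        t = 0
        while True:
            t += 1
            if abs(rng.normal(loc=shift)) > L:
                runs.append(t)
                break
    return np.mean(runs)
```

For the in-control case the theoretical ARL of a 3-sigma chart is 1/0.0027 ≈ 370, while a sustained one-sigma shift in the mean should be signalled far sooner; comparing such curves across chart types is the kind of simulation study the paper performs.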
Title: Parallel graph-based anomaly detection technique for sequential data Abstract: In data mining, outlier detection is applied in many domains. It has very broad applications, such as energy consumption analysis, forecasting hurricanes in meteorological data, fraud and intrusion detection, and event detection and system monitoring in sensor networks. Most existing outlier detection techniques depend on the properties of a particular type of data and cannot deal well with a large volume of data, which means that improved methodologies and techniques are needed for applying outlier detection to large amounts of data of different types in other application areas. In this paper, a parallel outlier detection technique is developed to detect outliers in sequential data. Although there are many types of outliers, this paper concentrates on contextual anomalies. The proposed technique uses a graph approach to detect the outliers. It is very flexible and fast, and no labeled data are needed, in contrast to many previous approaches. The experimental results show the detected contextual outliers in the sequential data, as well as efficient scaling up to handle massive data by increasing the number of processors. The results show that the parallelism of the proposed technique is very valuable. (C) 2019 The Authors. Production and hosting by Elsevier B.V. on behalf of King Saud University.
25,815
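The graph view of contextual anomalies in a sequence can be sketched as follows: treat distinct values as nodes and transitions as weighted edges, then flag positions whose transition is rare among all edges leaving the previous node. This is a serial, didactic version with an assumed rarity threshold; the paper's contribution is the parallelization:

```python
from collections import Counter

def contextual_outliers(seq, threshold=0.05):
    """Flag positions t whose transition seq[t-1] -> seq[t] is rare
    relative to all transitions leaving seq[t-1] (edges of a transition graph)."""
    edges = Counter(zip(seq, seq[1:]))      # edge weights: transition counts
    out_deg = Counter(seq[:-1])             # total transitions leaving each node
    flagged = []
    for t, (a, b) in enumerate(zip(seq, seq[1:]), start=1):
        if edges[(a, b)] / out_deg[a] <= threshold:
            flagged.append(t)
    return flagged
```

Because both counters are simple aggregations, the sequence can be partitioned across processors and the counts merged, which is what makes this style of detector amenable to the parallel setting described above.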
Title: A multiple clustering combination approach based on iterative voting process Abstract: This paper addresses the problem of clustering ensembles, which aims to combine multiple clusterings into a probably better solution in terms of robustness, novelty and stability. The proposed Iterative Combining Clusterings Method (ICCM) processes the entire dataset iteratively, where each iteration is based on a two-step framework. In the first step, different clustering algorithms process the common data set individually and, in the next step, a set of sub-clusters is extracted through a voting process among the data objects. To overcome the ambiguity due to voting, only objects with a majority vote are assigned to their corresponding sub-clusters. The remaining objects are then collected and re-clustered in the next iterations. At the end of the iterative process, a clustering algorithm is used to group the obtained sub-cluster centres and extract the final clusters of the dataset. Two gene expression datasets and three real-life datasets have been used to evaluate the proposed approach using external and internal criteria. The experimental results demonstrate the effectiveness and robustness of the proposed method, where an improvement of up to 16.89% for the iris dataset and up to 14.98% for the wine dataset in the DB index has been achieved. The external validity metrics confirm the usefulness of the proposed approach by achieving the highest average NMI score of 81.05% across the datasets, compared to different clustering ensemble methods. (c) 2019 The Authors. Production and hosting by Elsevier B.V. on behalf of King Saud University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
25,879
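The first step of the iterative framework — running several clusterers on the same data, aligning their labelings, and keeping only objects on which the runs agree — can be sketched as follows. This is a simplified unanimity version built on a toy k-means; the full ICCM collects the disagreeing objects and re-clusters them in later iterations:

```python
import numpy as np
from itertools import permutations

def kmeans(X, k, seed, iters=50):
    """Tiny Lloyd's k-means, sufficient for a demonstration."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

def vote_consensus(X, k, seeds=(0, 1, 2)):
    """Run k-means with several seeds, align each labeling to the first run
    by the best label permutation, and mark objects where all runs agree."""
    runs = [kmeans(X, k, s) for s in seeds]
    ref = runs[0]
    aligned = [ref]
    for lab in runs[1:]:
        best = max(permutations(range(k)),
                   key=lambda p: np.sum(np.array(p)[lab] == ref))
        aligned.append(np.array(best)[lab])
    aligned = np.stack(aligned)
    agree = (aligned == aligned[0]).all(axis=0)
    return ref, agree
```

Objects with `agree == False` are the ambiguous ones that the voting step withholds; in ICCM they are pooled and re-clustered in the next iteration rather than being forced into a sub-cluster.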
Title: A financial fraud detection indicator for investors: an IDeA Abstract: Fraud detection is a key issue for investors and financial authorities. The Ponzi scheme organized by Bernard Madoff is a magnified example of a financial fraud, always possible when well-orchestrated. Traditional methods to detect fraud require costly and lengthy investigations that involve complex financial and legal knowledge, as well as highly skilled analysts. Based on the motto “too good to be true” that should be adopted by any rational investor, we propose herein the use of a robust performance measure (named GUN*) to construct an Index for detection of anomalies (called IDeA). This index is based on the basic intuition that it is not possible to properly evaluate a fund as “good” regardless the characteristics and risk aversion of investors. After defining the intuition behind such an index and its economic theoretical background, we illustrate our innovative operations research methodology for fraud detection and demonstrate its usefulness studying the emblematic case of the fraud by Madoff.
25,909
Title: A composite class of estimators to deal with the issue of variance estimation under the situations of random non-response in two-occasion successive sampling Abstract: Missing data often create complications in data analysis for any type of scientific investigation. Non-response is the source of missing data, and it is an unavoidable phenomenon in survey sampling. The problem becomes more severe in the case of repeated surveys because of their repetitious nature. Missing data due to random causes, which we call random non-response, is an important issue that needs to be dealt with cautiously. To address these concerns, this study attempts to reduce the adverse impact of random non-response on the estimation of population variance on the current occasion in two-occasion successive sampling. A composite class of estimators for the estimation of population variance on the current occasion is proposed and its properties are examined in depth. An empirical study is carried out to show the efficient performance of the proposed class of estimators over an estimator defined for the complete-response situation. Suitable recommendations are made to survey practitioners for real-life applications.
25,916
Title: Development and Cross-Cultural Evaluation of a Scoring Algorithm for the Biometric Attachment Test: Overcoming the Challenges of Multimodal Fusion with “Small Data” Abstract: The Biometric Attachment Test (BAT) is a recently developed psychometric assessment that exposes adults to standardized picture and music stimuli-sets while simultaneously capturing their linguistic, behavioral and physiological responses, with the goal of objectively measuring their psychological attachment characteristics. Within this work, (I) we describe a new version of the B...
25,938
Title: The effects of interaction types on learning outcomes in a blog-based interactive learning environment Abstract: As the use of online learning has continued to expand, there has been a growing demand for advanced software and social sites that can enhance learning outcomes in online settings. It is important to explore how learner-learner, learner-teacher and learner-content interactions affect learning outcomes when using social sites such as Facebook, Twitter, WeChat, blogs, etc. Although some prior research highlights a demand for interaction in online courses, specific design tactics and empirical evaluations of the effects of interaction on learning outcomes in blog-based courses are lacking. Thus, a blog learning platform named "Learner's Digest Blog" (LDB) was developed to facilitate the three types of interaction and enhance learning outcomes. A total of 26 learners participated in the blog-based interactive learning environment. The study addressed two research questions, and regression analysis was used to analyze the data. The findings indicated a significant influence of learner-learner, learner-teacher and learner-content interaction on subjective learning outcomes. In contrast, there was no significant influence of learner-teacher interaction on objective learning outcomes, but learner-learner and learner-content interaction did significantly affect objective learning outcomes. A significant relationship was also found between students' subjective and objective learning outcomes.
25,965
Title: Revisiting variation affordance: applying variation theory in the design of educational software Abstract: Variation theory, which is a theory of learning developed by Marton and others, has quickly become popular in education research. Our purpose of this paper is to articulate the application of variation theory in the form of a number of concrete design principles that offer prescriptive and practical guidelines for improving the designs of educational software programs. To achieve this, we analyzed a wide range of educational software programs produced over the years in our previous projects for learning Chinese characters. From this analysis, we identified four design principles, namely, (i) not aiming to test but to bring about learning, (ii) focusing on a specific object of learning, (iii) allowing learners to explore variation to be learned, and (iv) keeping all other aspects invariant. These design principles are specialized for designing how learners interact with educational software programs, which is the major practical contribution of this paper.
25,974
Title: Capsule Networks-A survey Abstract: Modern computer vision tasks require efficient solutions to problems such as image recognition, natural language processing, object detection, object segmentation and language translation. Symbolic artificial intelligence, with its hard-coded rules, is incapable of solving these complex problems, which led to the introduction of Deep Learning (DL) models such as Recurrent Neural Networks and Convolutional Neural Networks (CNNs). However, CNNs require large amounts of training data and are incapable of recognizing the pose and deformation of objects, leading to the introduction of Capsule Networks. Capsule Networks are the new sensation in deep learning, and they have lived up to this expectation: their performance on the above problems has been better than that of CNNs. Even with this promise, the lack of architectural knowledge and of insight into the inner workings of capsules hinders researchers from taking full advantage of this breakthrough. In this paper, we provide a comprehensive review of the state-of-the-art architectures, tools and methodologies in existing implementations of capsule networks. We highlight the successes, failures and opportunities for further research to motivate researchers and industry players to exploit the full potential of this new field. The main contribution of this survey article is that it explains and summarizes significant current state-of-the-art Capsule Network architectures and implementations. (c) 2019 The Authors. Production and hosting by Elsevier B.V. on behalf of King Saud University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
26,017
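A defining ingredient of capsule networks is the squashing non-linearity, which keeps a capsule's output direction but compresses its length into [0, 1) so that the length can act as an existence probability. A numpy sketch of the commonly used form from Sabour et al.'s dynamic routing paper:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule squashing non-linearity:
    v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||)."""
    norm2 = np.sum(s ** 2, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)
```

Long input vectors are mapped to lengths close to 1 and short ones to lengths close to 0, while the orientation — which encodes the instantiation parameters of the entity — is preserved.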
Title: A risk-adjusted EWMA chart with dynamic probability control limits for monitoring survival time Abstract: Developing control charts for monitoring health-care systems has attracted many researchers' attention in recent years. Considering that continuous data carry more information than discrete data, survival time after cardiac surgery is treated as a continuous quality characteristic in this paper. An accelerated failure time regression model is used to adjust for the risk of patients' conditions and surgeon groups on the survival times. A risk-adjusted exponentially weighted moving average control chart is then proposed to monitor the standardized residuals of the accelerated failure time regression. Since the false alarm rate for each patient changes dramatically, dynamic probability control limits are extended to address this issue. The proposed method is evaluated in Phase II in terms of the average run length criterion. The results indicate that the proposed method has acceptable performance in identifying the out-of-control state of the process. Furthermore, considering the effect of surgeon groups in the accelerated failure time regression model improves the performance of the proposed method.
26,120
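The backbone of such a chart is the standard EWMA recursion on standardized residuals. The sketch below uses fixed asymptotic control limits for simplicity, whereas the paper's point is to replace them with dynamic probability control limits; the values of λ and L are illustrative choices:

```python
import numpy as np

def ewma_chart(x, lam=0.1, L=2.7):
    """EWMA statistic z_t = lam*x_t + (1-lam)*z_{t-1} on standardized
    residuals x, signalling when |z_t| exceeds +/- L * sigma_z with the
    asymptotic sigma_z = sqrt(lam / (2 - lam))."""
    z, zs = 0.0, []
    for xt in x:
        z = lam * xt + (1 - lam) * z
        zs.append(z)
    sigma_z = np.sqrt(lam / (2 - lam))
    signals = [t for t, zt in enumerate(zs) if abs(zt) > L * sigma_z]
    return np.array(zs), signals
```

With dynamic probability control limits, the fixed bound L·sigma_z is replaced at each t by the quantile that holds the conditional false alarm rate constant, which matters precisely because that rate varies strongly from patient to patient.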
Title: A simulated parameter optimization method-based manifold learning for a production process Abstract: A production process parameter optimization method based on feature extraction for manifold learning is proposed to achieve precise optimization of steel anomaly data of different grades in the same series and to improve the quality of industrial products. First, the appropriate neighboring samples are found in the state of the sample point and the next state to form the neighborhood matrix. Then, the manifold hidden inside the data is extracted, ie, the evolution trend of the process parameters between different brands. At the same time, a monitoring model is built with the training data based on the support vector data description (SVDD). If an outlier is detected, it will be projected onto the manifold to obtain the adjustment values. Thus, the outlier can return to the normal state. The Swiss roll and actual production data of interstitial-free (IF) steels are employed to verify the effectiveness of the proposed method. The results show that the new method considers the continuity of process parameters of different product grades in the production process and uses data to extract the potential manifold, ie, using the evolution trend of process parameters among different product grades to achieve the optimization of the process parameter. The proposed method provides a new process parameter optimization method for the actual production process.
26,198
Title: 3-Tuple Total Domination Number of Rook's Graphs Abstract: A k-tuple total dominating set (kTDS) of a graph G is a set S of vertices in which every vertex in G is adjacent to at least k vertices in S. The minimum size of a kTDS is called the k-tuple total domination number and is denoted by γ_{×k,t}(G). We give a constructive proof of a general formula for γ_{×3,t}(K_n □ K_m), the 3-tuple total domination number of the rook's graph.
26,365
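For small cases, the k-tuple total domination number of a rook's graph can be computed by exhaustive search, which provides a sanity check against a closed-form formula (brute force only; the search is exponential in nm):

```python
from itertools import combinations, product

def rook_neighbors(n, m):
    """Adjacency of the rook's graph K_n [] K_m: same row or same column."""
    V = list(product(range(n), range(m)))
    return {v: [u for u in V if u != v and (u[0] == v[0] or u[1] == v[1])]
            for v in V}

def min_ktds(n, m, k=3):
    """Smallest k-tuple total dominating set of K_n [] K_m by brute force.
    (|S| >= k+1 always, since a vertex of S needs k neighbours inside S.)"""
    nbrs = rook_neighbors(n, m)
    V = list(nbrs)
    for size in range(k + 1, len(V) + 1):
        for S in combinations(V, size):
            Sset = set(S)
            if all(sum(u in Sset for u in nbrs[v]) >= k for v in V):
                return size
    return None
```

For the 3×3 rook's graph every vertex has degree 4, and removing any two vertices leaves some vertex with only two neighbours in the set, so the 3-tuple total domination number is 8 — exactly what the brute force confirms.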
Title: Estimators and D-optimal experimental designs for mixtures of binary responses Abstract: Models of mixture distributions are of great interest in many areas where several populations are mixed up. If the response is binary there is a mixture of Bernoulli distributions, which has practical applications in the classification of texts and images, biochemistry, genetics, robotics, computer science and pattern recognition. A finite mixture of probability distributions includes a set of parameters such as the proportion of the mixture and the parameters of each distribution. This simple case allows us to make some explicit computations of the estimators, as well as to work with the EM algorithm and make some comparisons. Some of these parameters may depend on one or more covariates through some specific model. Mixtures of exponential family distributions, except in very particular cases, are no longer within the exponential family; this means, among other things, that the expectation needed for computing the information matrix must, in most cases, be approximated. One of the main contributions of this article is its handling of this issue. Linear and logistic models are considered either for the proportion of one of the two populations (clusters) or for the parameters of the Bernoulli distributions. For each of these cases the analytic expression of the information matrix is calculated and optimal designs are determined.
26,398
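The explicit computations mentioned here can be illustrated with EM for a two-component mixture of multivariate Bernoulli distributions without covariates (the covariate-linked models and the design part of the article are beyond this sketch; the initialization scheme is an assumption):

```python
import numpy as np

def em_bernoulli_mixture(X, iters=200, seed=0):
    """EM for a two-component mixture of d-dimensional Bernoullis:
    estimates the mixing weight pi and per-component success probabilities."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = 0.5
    theta = rng.uniform(0.25, 0.75, size=(2, d))  # random symmetry-breaking init
    for _ in range(iters):
        # E-step: responsibilities via log-likelihoods (numerically stable)
        log_p = np.stack([
            X @ np.log(theta[c]) + (1 - X) @ np.log(1 - theta[c])
            for c in range(2)
        ], axis=1) + np.log([pi, 1 - pi])
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted maximum likelihood updates
        pi = r[:, 0].mean()
        theta = (r.T @ X) / r.sum(axis=0)[:, None]
        theta = np.clip(theta, 1e-6, 1 - 1e-6)
    return pi, theta
```

The clipping step guards against degenerate components; in the article's setting, the lack of a closed-form information matrix for such mixtures is exactly what forces the approximations used in the design computations.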
Title: A note on parameter asymptotics for weighted Lindley distribution Abstract: In this paper, we study the small- and large-parameter asymptotics of the weighted Lindley distribution. It is shown that as the shape parameter gets smaller (larger), the weighted Lindley distribution can be approximated by the exponential (normal, respectively) distribution. Numerical simulations are conducted to verify these theoretical results. Two real data sets are also presented to illustrate the results.
26,428
Title: Minimum Coverings of Crowns with Cycles and Stars Abstract: Let F, G and H be graphs. A (G, H)-decomposition of F is a partition of the edge set of F into copies of G and copies of H with at least one copy of G and at least one copy of H. For R ⊆ F, a (G, H)-covering of F with padding R is a (G, H)-decomposition of F + E(R). A (G, H)-covering of F with the smallest cardinality is a minimum (G, H)-covering. This paper solves the problem of finding the minimum (C_k, S_k)-covering of the crown C_{n,n-1}.
26,447
Title: Cross-codifference for bidimensional VAR(1) time series with infinite variance Abstract: In this paper, we consider the problem of a measure that allows us to describe the spatial and temporal dependence structure of multivariate time series with innovations having infinite variance. Using recent results obtained for the temporal dependence structure of univariate stochastic processes, where the auto-codifference was used, we extend the idea and propose a cross-codifference measure for a general bidimensional vector autoregressive time series of order 1 (bidimensional VAR(1)). Next, we derive analytical results for the VAR(1) model with Gaussian and stable sub-Gaussian innovations, which are characterized by finite and infinite variance, respectively. We emphasize that the obtained expressions perfectly agree with their empirical counterparts. Moreover, we show that for the considered time series the cross-codifference simplifies to the well-established cross-covariance when the innovations of the time series are given by Gaussian white noise. The last part of the work is devoted to statistical estimation of the VAR(1) time series parameters based on the empirical cross-codifference. Again, we demonstrate via Monte Carlo simulations that the proposed methodology works correctly.
26,481
Title: Parallel tempering strategies for model-based landmark detection on shapes Abstract: In the field of shape analysis, landmarks are defined as a low-dimensional, representative set of important features of an object's shape that can be used to identify regions of interest along its outline. An important problem is to infer the number and arrangement of landmarks, given a set of shapes drawn from a population. One proposed approach defines a posterior distribution over landmark locations by associating each landmark configuration with a linear reconstruction of the shape. In practice, sampling from the resulting posterior density is challenging using standard Markov chain Monte Carlo (MCMC) methods because multiple configurations of landmarks can describe a complex shape similarly well, manifesting in a multi-modal posterior with well-separated modes. Standard MCMC methods traverse multi-modal posteriors poorly and, even when multiple modes are identified, the relative amount of time spent in each one can be misleading. We apply new advances in the parallel tempering literature to the problem of landmark detection, providing guidance on implementation generalized to other applications within shape analysis. Proposal adaptation is used during burn-in to ensure efficient traversal of the parameter space while maintaining computational efficiency. We demonstrate this algorithm on simulated data and common shapes obtained from computer vision scenes.
26,649
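The basic machinery discussed here — one Metropolis chain per temperature plus neighbour swap moves — can be sketched on a toy bimodal density. The temperature ladder, step size, and target below are illustrative assumptions, and the proposal adaptation during burn-in described in the paper is omitted:

```python
import numpy as np

def parallel_tempering(log_target, temps, n_iter=4000, seed=0, step=1.0):
    """Minimal parallel tempering: a random-walk Metropolis chain per
    temperature, with a random neighbour swap proposal after every sweep.
    Returns the samples of the cold (temps[0]) chain."""
    rng = np.random.default_rng(seed)
    K = len(temps)
    x = np.zeros(K)
    cold = []
    for _ in range(n_iter):
        for k in range(K):                       # within-chain Metropolis
            prop = x[k] + step * rng.normal()
            if np.log(rng.uniform()) < (log_target(prop) - log_target(x[k])) / temps[k]:
                x[k] = prop
        k = rng.integers(K - 1)                  # propose swapping chains k, k+1
        a = (1 / temps[k] - 1 / temps[k + 1]) * (log_target(x[k + 1]) - log_target(x[k]))
        if np.log(rng.uniform()) < a:
            x[k], x[k + 1] = x[k + 1], x[k]
        cold.append(x[0])
    return np.array(cold)
```

Hot chains flatten the energy barrier between modes, and the swap moves let states found there migrate down to the cold chain — the same mechanism that lets the landmark sampler hop between well-separated posterior modes.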
Title: Some side sensitive group runs based control charts to detect shifts in the process median Abstract: In this article, we propose two nonparametric side sensitive group runs based univariate control charts to detect shifts in the process median using a sample based "transition probability matrix" (tpm), namely the "Nonparametric Corrected Sample based Side Sensitive Synthetic" (N-CS-SSS) and "Nonparametric Sample based Side Sensitive Group Runs" (N-S-SSGR) control charts, respectively. We study the zero-state and steady-state ATS performance of the two proposed charts. It is observed that the N-S-SSGR chart performs better than the "Nonparametric Sample based two sided Synthetic" (N-S-Syn), N-CS-SSS and "Nonparametric Sample based two sided Group Runs" (N-S-GR) charts.
26,703
Title: Empirical likelihood-based unified confidence region for a predictive regression model Abstract: In finance and economics, predictive regression models are widely used. It is known that the limit distributions of their least squares estimators are nonstandard and depend on the properties of the predictors. In this paper, we consider the unified confidence region construction of a predictive regression model by using empirical likelihood. It turns out that the resulting statistic has an asymptotic chi-squared distribution regardless of whether the predictor is stationary or non-stationary. Simulations are also conducted to illustrate its finite sample performance.
26,734
Title: A novel conditional anonymity scheme for vehicular communication networks Abstract: Vehicular communication networks are traffic applications of the Internet of Things consisting of vehicle nodes, roadside units, service providers, and other components. To protect vehicle privacy, some vehicular communication networks adopt anonymous schemes. However, in these anonymous schemes, anonymity abuse is inevitable. In order to protect vehicular privacy and prevent anonymity abuse, some conditional anonymous authentication schemes based on cryptography have been proposed. However, most of these methods have complex computation and large communication overhead, which makes it difficult for them to meet the actual requirements of high-speed traffic in vehicular communication networks. In this paper, a novel conditional anonymity scheme called VKPCA (vehicular communication network based on kernel principal component analysis [KPCA]) is proposed, which can protect vehicle privacy while avoiding anonymity abuse. The simulation results show that this scheme has lower computational complexity and communication overhead than conventional anonymous authentication schemes.
26,746
Title: An efficient implementation of low-power approximate compressor-based multiplier for cognitive communication systems Abstract: Image compression is significant for efficient transmission and storage of images. Multimedia data such as images, audio, and videos are uploaded to the Internet through telecommunication networks, and the actual image size becomes a concern when page utilization is high. The Joint Photographic Experts Group (JPEG) image compression standard is therefore used for storing images in a compressed format. A strength of JPEG is that it yields little loss in quality even at high compression ratios. Multipliers and adders are the major hardware resources in JPEG compression. To reduce this hardware complexity, a compressor-based multiplier is proposed here to increase the performance of the JPEG standard while reducing the logic resources. This architecture is implemented on a CYCLONE IV FPGA with images of 512 x 512 and 640 x 480 resolution.
26,788
Title: The coefficient of determination in the ridge regression Abstract: In a linear regression, the coefficient of determination, R^2, is a relevant measure that represents the percentage of variation in the dependent variable that is explained by a set of independent variables. Thus, it measures the predictive ability of the estimated model. For an ordinary least squares (OLS) estimator, this coefficient is calculated from the decomposition of the sum of squares. However, when the model presents collinearity problems (a strong linear relation between the independent variables), the OLS estimation is unstable, and other estimation methodologies are proposed, with the ridge estimation being the most widely applied. This paper shows that the decomposition of the sum of squares is not verified in the ridge regression and proposes how the coefficient of determination should be calculated in this case.
26,870
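The failure of the sum-of-squares decomposition under ridge estimation, which motivates the abstract above, is easy to verify numerically. In the sketch below, the simulated data, the penalized intercept, and the value k = 5 are all arbitrary illustrative choices; it shows that SST = SSR + SSE holds for OLS but not for ridge.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
X = rng.normal(size=(n, p))
X = np.column_stack([np.ones(n), X])  # include an intercept column
y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + rng.normal(size=n)

def fit(X, y, k=0.0):
    # Ridge solution (k = 0 recovers OLS); for simplicity the penalty is
    # applied to all coefficients, although in practice the intercept is
    # usually left unpenalized.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def decomposition(X, y, k=0.0):
    # Returns SST and SSR + SSE; they agree only when the residuals are
    # orthogonal to the fitted values, which ridge shrinkage destroys.
    beta = fit(X, y, k)
    yhat = X @ beta
    sst = np.sum((y - y.mean()) ** 2)
    ssr = np.sum((yhat - y.mean()) ** 2)
    sse = np.sum((y - yhat) ** 2)
    return sst, ssr + sse
```

Running `decomposition(X, y, 0.0)` gives matching values, while `decomposition(X, y, 5.0)` does not, which is exactly why a ridge R^2 needs a separate definition.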
Title: Exploring student teachers' social knowledge construction behaviors and collective agency in an online collaborative learning environment Abstract: The potential of online collaborative learning has been recognized in teacher education, and the value of student teachers' collective agency in predicting their sustainable professional development has been discussed. However, there is limited understanding of how student teachers' collaborative knowledge construction behaviors correlate with their collective agency. This study investigated one class of student teachers' collaborative knowledge construction behavioral patterns in a multi-layered interaction activity, and further explained their correlation with the participants' collective agency. A total of 50 third-year student teachers from a class of a university in central China participated in the study. The results suggested that, in terms of average density, both intragroup and intergroup interaction networks were high-density ones. However, the number of interactions within each group was uneven. The student teachers had different reciprocity in intragroup and intergroup discussions. Their social knowledge construction behavioral patterns showed different characteristics in different stages of the multi-layered interaction activity. The questionnaire survey and interview data further revealed that, though the student teachers built up collaborative learning awareness to some extent, their collective agency still has space to be improved. The authors conclude with some insights into online collaborative learning in sustainable teachers' professional development.
26,912
Title: Model selection for Bayesian linear mixed models with longitudinal data: Sensitivity to the choice of priors Abstract: We explore the performance of three popular Bayesian model-selection criteria when vague priors are used for the covariance parameters of the random effects in a linear mixed-effects model (LMM) using an extensive simulation study. In a previous paper, we have shown that the conditional selection criteria perform worse than their marginal counterparts. It is known that for some "vague" priors, their impact on the estimated model parameters can be non-negligible, e.g., for the priors of the covariance matrix of the random effects in a longitudinal LMM. We evaluate here the impact of vague priors for the covariance matrix of the random effects on selecting the correct LMM using classical Bayesian selection criteria. We consider marginal and conditional criteria. For the random intercept case, we assign different vague priors to the variance parameters. With two or more random effects, we considered five different specifications of inverse-Wishart (IW) prior, five different separation priors and a joint prior. The results show again the better performance of the marginal over the conditional criteria and the superiority of joint and separation priors over IW in all settings. We also illustrate the performance of the selection criteria on a practical dataset.
26,986
Title: Quality of service in IoT protocol as designs and its verification in PVS Abstract: Reliable data transmission during communication in Internet of things (IoT)-based systems has gained much interest in the last few years due to the current growth and huge investment in such systems. Message Queue Telemetry Transport (MQTT) is an open publish/subscribe-based messaging protocol that is widely used for device communication in IoT. For data transmission between devices, different levels of quality of service (QoS) are used in MQTT. In this paper, we provide a formal model for the MQTT protocol under the Unifying Theories of Programming (UTP) semantic framework, where QoS levels in MQTT are modeled as designs in UTP. Refinement and equivalence relations between QoS levels can be established naturally via implication between predicates. Moreover, the Prototype Verification System (PVS) is used to encode the UTP design models, and some important properties, as well as the refinement relation between QoS levels, are proved with the PVS proof assistant.
27,051
Title: An ensemble approach for spam detection in Arabic opinion texts Abstract: Nowadays, individuals express experiences and opinions through online reviews. These have an influence on online marketing and on obtaining real knowledge about products and services. However, some online reviews can be unreal. They may have been written to promote low-quality products/services or to sabotage a product/service reputation to mislead potential customers. Such misleading reviews are known as spam reviews and require crucial attention. Prior spam detection research focused on English reviews, with less attention to other languages. The detection of spam reviews in Arabic online sources is a relatively new topic despite the relatively huge amount of data generated. Therefore, this paper contributes to this topic by presenting four different Arabic spam review detection methods, while putting more focus on the construction and evaluation of an ensemble approach. The proposed ensemble method is based on integrating a rule-based classifier with machine learning techniques, while utilizing content-based features that depend on N-gram features and negation handling. The four proposed methods are evaluated on two datasets of different sizes. The results indicate the efficiency of the ensemble approach, which achieves a classification accuracy of 95.25% and 99.98% for the two experimented datasets, outperforming existing related work by a margin of 25%.
27,425
Title: Energy-efficient weak-barrier coverage with adaptive sensor rotation Abstract: Wireless sensor networks have been widely used in environment, climate, and animal monitoring, in the medical field, and for surveillance. In surveillance applications, when sensors are deployed randomly along a path in order to monitor boundaries of battlefields or country borders, gaps are usually unavoidable. In this paper, we propose an energy-efficient distributed algorithm for weak-barrier coverage using event-based clustering and adaptive sensor rotation. We consider a wireless sensor network consisting of randomly deployed sensor nodes and directional border nodes deployed using a random line deployment model. When an event is detected, sensor nodes form one or more event-based clusters. Cluster heads are in charge of aggregating data from the clusters and sending them to the border nodes, which adjust their orientation angles so as to monitor intruders crossing the barrier. We use WSNet and MATLAB simulations to analyze the performance of our algorithm and to compare it with other non-clustering gap-mending algorithms.
27,550
Title: Control points deployment in an Intelligent Transportation System for monitoring inter-urban network roadway Abstract: The constant evolution of transportation systems and traffic in developing countries is nowadays confronted with a problem of road safety and therefore a high accident rate, especially in the context of inter-urban road transport. In this work, we propose a communication architecture for an Intelligent Transport System (ITS) to provide surveillance in an inter-urban transport network in the context of developing countries. We introduce two types of control points: Relay Control Points (RCP) and Treatment Control Points (TrCP). We also designed two multi-objective models for the deployment of these points, in order to ensure good coverage, minimize the cost of installation, and put more emphasis on the areas having a high number of accidents. To ensure optimal deployment of these points, we have used the Non-dominated Sorting Genetic Algorithm II (NSGA-II), which yields the Pareto front of non-dominated solutions. The results of the simulations show the effectiveness of our proposal.
27,682
Title: A heuristic search algorithm for Hamiltonian circuit problems in directed graphs Abstract: A heuristic search algorithm is given that determines and resolves the Hamiltonian circuit problem in directed graphs. The heuristic information of each vertex is a set composed of its possible path length values from the starting vertex, which is obtained by the path length extension algorithm. A detailed analysis of algorithm examples shows that the heuristic algorithm can greatly reduce the number of processing nodes compared to the backtracking algorithm without information; meanwhile, the algorithm's time complexity is related to the number of loops in the directed graph. If a directed graph G(V, E) only contains loops that contain the starting point, then the Hamiltonian problem of G can be determined in polynomial time. The algorithm is implemented using C++ and Python programming, and all of the test graphs are generated by the random graph algorithm. The experiments show that the heuristic algorithm can reduce the number of extended nodes by at least 10% in directed random graphs with no or only a few loops. This heuristic algorithm can save time in applications that perform tasks while extending nodes because it reduces the number of extended nodes.
27,719
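For context on the uninformed backtracking baseline that the heuristic in the abstract above is compared against, here is a minimal sketch in Python. The adjacency-dict representation and the depth-first formulation are illustrative choices; the paper's path-length-extension pruning is only described in the docstring, not implemented.

```python
def hamiltonian_circuit(adj, start=0):
    """Backtracking search for a Hamiltonian circuit in a directed graph.

    adj maps each vertex to the list of vertices it points to. This is the
    uninformed baseline the paper's heuristic improves on: the heuristic
    would additionally prune any vertex whose feasible path-length set
    (from the path length extension step) excludes the current depth.
    Returns the circuit as a vertex list ending back at start, or None.
    """
    n = len(adj)
    path = [start]
    visited = {start}

    def extend():
        if len(path) == n:
            # All vertices used; check the closing edge back to start.
            return start in adj[path[-1]]
        for v in adj[path[-1]]:
            if v not in visited:
                visited.add(v)
                path.append(v)
                if extend():
                    return True
                path.pop()          # backtrack
                visited.remove(v)
        return False

    return path + [start] if extend() else None
```

On the 4-cycle `{0: [1], 1: [2], 2: [3], 3: [0, 1]}` this returns `[0, 1, 2, 3, 0]`, while a graph with no circuit yields `None`.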
Title: Quanto option pricing with a jump diffusion process Abstract: This paper proposes a dynamic model for the spot foreign exchange rate which is governed by a standard Brownian motion and a stationary compound Poisson process in the domestic-real world. Using the tool of the Esscher transform, we explore two suitable parameters for the amplitude and the intensity of the jump process and redefine the dynamic process of the spot foreign exchange rate under the domestic risk-neutral measure. Based on the above work, we obtain two types of quanto option pricing formulas. Finally, we utilize the real market data of the stock "Aluminum Corporation of China Limited" and the foreign exchange rate of USD/CNY to obtain the value of the quanto option, taking the constant K1 as the strike price in the domestic currency. Furthermore, we investigate the implied volatility of the quanto option.
27,785
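The spot-rate dynamics in the quanto option abstract above, Brownian motion plus a compound Poisson jump component, can be simulated with a short Monte Carlo sketch. This is a generic Merton-type specification with normal jump sizes in log space; the paper's Esscher-transform parameters and pricing formulas are not reproduced, and all numeric inputs are hypothetical.

```python
import numpy as np

def simulate_jump_diffusion(s0, mu, sigma, lam, jump_mu, jump_sigma,
                            T, n_steps, n_paths, seed=0):
    """Simulate terminal spot rates under a jump diffusion.

    Log-dynamics per step: drift + Brownian increment + compound Poisson
    jumps, where the sum of N ~ Poisson(lam*dt) normal jump sizes is drawn
    as N*jump_mu + sqrt(N)*jump_sigma*Z. Exact in log space when lam = 0.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    log_s = np.full(n_paths, np.log(s0))
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        n_jumps = rng.poisson(lam * dt, n_paths)
        jumps = (n_jumps * jump_mu
                 + np.sqrt(n_jumps) * jump_sigma * rng.normal(size=n_paths))
        log_s += (mu - 0.5 * sigma**2) * dt + sigma * dw + jumps
    return np.exp(log_s)
```

With the jump intensity set to zero, the model collapses to geometric Brownian motion, so the Monte Carlo mean of the terminal rate should sit near s0*exp(mu*T), which is a convenient sanity check before adding jumps.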
Title: An adaptive EWMA control chart for monitoring zero-inflated Poisson processes Abstract: Zero-Inflated Poisson (ZIP) distribution is used to model count data with excessive zeros. In this article, we develop and design an adaptive exponentially weighted moving average (AEWMA) control chart for monitoring ZIP processes. A Markov Chain approach is used to approximate the performance measures; namely the average run length (ARL) and standard deviation of run length (SDRL) of the AEWMA chart. The chart performance is assessed using optimized design parameters that provide the smallest ARL for a range of shifts. A performance comparison of the ZIP-AEWMA chart is conducted with the competing charts in terms of the relative mean index (RMI) metric. Results show that the ZIP-AEWMA chart has superior performance over the competing charts for a wide range of process shifts, especially when the probability of excessive zeros in data is high. The proposed chart is also applied on a real-life application to demonstrate its use. We highly recommend the use of the AEWMA chart for monitoring ZIP processes.
27,827
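To make the monitoring setting in the ZIP chart abstract above concrete, the sketch below simulates zero-inflated Poisson counts (mean (1-pi0)*lambda, variance (1-pi0)*lambda*(1+pi0*lambda)) and runs a plain upper-sided EWMA over them. The adaptive weighting that defines the paper's AEWMA chart is not implemented here, and the smoothing constant and control-limit width are generic textbook choices.

```python
import numpy as np

def zip_rvs(rng, n, pi0, lam):
    # Zero-inflated Poisson: a structural zero with probability pi0,
    # otherwise a Poisson(lam) count.
    poisson = rng.poisson(lam, size=n)
    return np.where(rng.uniform(size=n) < pi0, 0, poisson)

def ewma_chart(x, mu0, sigma0, lam_smooth=0.2, L=3.0):
    """One-sided upper EWMA on ZIP counts.

    A plain EWMA, not the adaptive AEWMA of the paper (whose smoothing
    weight varies with the size of the forecast error). Returns the index
    of the first out-of-control signal, or -1 if none occurs.
    """
    z = mu0
    var_factor = lam_smooth / (2.0 - lam_smooth)
    ucl = mu0 + L * sigma0 * np.sqrt(var_factor)  # asymptotic control limit
    for i, xi in enumerate(x):
        z = lam_smooth * xi + (1.0 - lam_smooth) * z
        if z > ucl:
            return i
    return -1
```

A process whose zero-inflation probability drops (more defects, fewer excess zeros) raises the mean count, and the EWMA statistic drifts above the upper limit within a handful of observations.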
Title: Reliability evaluation and big data analytics architecture for a stochastic flow network with time attribute Abstract: A network with multi-state (stochastic) elements (arcs or nodes) is commonly called a stochastic flow network. It is important to measure the system reliability of a stochastic flow network from the perspective of operations management. In the real world, the system reliability of a stochastic flow network can vary over time. Hence, a critical issue emerges—characterizing the time attribute in a stochastic flow network. To solve this issue, this study bridges (classical) reliability theory and the reliability of a stochastic flow network. This study utilizes Weibull distribution as a possible reliability function to quantify the time attribute in a stochastic flow network. For more general cases, the proposed model and algorithm can apply any reliability function and is not limited to Weibull distribution. First, the reliability of every single component is modeled by Weibull distribution to consider the time attribute, where such components comprise a multi-state element. Once the time constraint is given, the capacity probability distribution of elements can be derived. Second, an algorithm to generate minimal component vectors for given demand is provided. Finally, the system reliability can be calculated in terms of the derived capacity probability distribution and the generated minimal component vectors. In addition, a big data architecture is proposed for the model to collect and estimate the parameters of the reliability function. For future research in which very large volumes of data may be collected, the proposed model and architecture can be applied to time-dependent monitoring.
27,909
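The abstract above quantifies the time attribute by giving every component a Weibull reliability function. Here is a minimal sketch of the first two steps, assuming a deliberately simple special case (not the paper's general algorithm) in which a multi-state element consists of m independent, identical lines, so the element's capacity at time t is Binomial(m, R(t)).

```python
import math

def weibull_reliability(t, shape, scale):
    # Probability that a single physical line still works at time t:
    # R(t) = exp(-(t/scale)^shape).
    return math.exp(-((t / scale) ** shape))

def capacity_pmf(m, t, shape, scale):
    """Capacity distribution of a multi-state element at time t.

    Illustrative special case: the element is built from m independent
    lines with identical Weibull lifetimes, so the number of working
    lines (the capacity) is Binomial(m, R(t)).
    """
    r = weibull_reliability(t, shape, scale)
    return [math.comb(m, k) * r**k * (1 - r) ** (m - k) for k in range(m + 1)]
```

Feeding these capacity distributions into the minimal-component-vector enumeration is the step the paper's algorithm then performs to obtain system reliability under a time constraint.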
Title: Strolling through a city of the Roman Empire: an analysis of the potential of virtual reality to teach history in Primary Education Abstract: Virtual Reality is an emerging educational technology due to its potentially immersive, interactive and imaginative characteristics supporting pupils in the learning process towards meaningful learning. Furthermore, the current teaching of history is generally too traditional, making the subject perceived as pointless and boring by students. These aspects lead to poor academic performance. In this context, this study focuses on analysing the potential benefits of Virtual Reality in the teaching of history in Primary Education and compares its results with those of traditional teaching resources in two dimensions, academic performance and the motivation of students. Thus, an intervention was performed with 98 fourth graders divided into two groups, control and experimental. For the collection of the data referring to motivation, an adaptation of the Instructional Material Survey instrument from Keller [(2010). Motivational design for learning and performance: The ARCS model approach. New York, NY: Springer] was used, while, in relation to the evaluation of academic performance, a specific test was designed taking as reference the tests of the coursebooks for the unit of work "The Roman Civilization" in the subject Social Sciences. The results show statistically significant differences in favour of those students who used Virtual Reality, both in motivation and in academic performance.
27,937
Title: Comparison of penalized logistic regression models for rare event case Abstract: The occurrence rate of the event of interest might be quite small (rare) in some cases, although the sample size is large enough for the Binary Logistic Regression (LR) model. In studies where the sample size is not large enough, the parameters to be estimated might be biased because of the rare event case. Parameter estimations of the LR model are usually obtained using the Newton-Raphson (NR) algorithm for Maximum Likelihood Estimation (MLE). It is known that these estimations are usually biased in small samples but asymptotically unbiased. On the other hand, initial parameter values are sensitive for parameter estimation in NR for MLE. The aim of our study is to present an approach to parameter estimation bias using inverse conditional distributions based on the distribution assumption giving true parameter values and to compare this approach on different penalized LR methods. With this aim, the LR, Firth LR, FLIC and FLAC methods were compared in terms of parameter estimation bias, predicted probability bias and Root Mean Squared Error (RMSE) for different sample sizes, event rates and correlation rates, conducting a detailed Monte Carlo simulation study. Findings suggest that the FLIC method should be preferred in rare event and small sample cases.
27,956
Title: Generalized estimation strategy for mean estimation on current occasion in two-occasion rotation patterns Abstract: In this paper, an efficient generalized estimation strategy has been proposed to address the problem of estimating the current population mean of the study variable in two-occasion rotation sampling. The properties of the proposed estimation procedure have been examined in depth and its optimum replacement strategy is formulated. Empirical studies are carried out to show the superiority of the proposed estimator over the sample mean estimator, the natural successive sampling estimator (when no auxiliary information is used) and some well-known recent contemporary estimators under various situations. Numerical results are given in support of the study and recommendations are also made for survey practitioners.
27,990
Title: Development and validation of an artificial intelligence anxiety scale: an initial application in predicting motivated learning behavior Abstract: While increasing productivity and economic growth, the application of artificial intelligence (AI) may ultimately require millions of people around the world to change careers or improve their skills. These disruptive effects contribute to the general public anxiety toward AI development. Despite the rising levels of AI anxiety (AIA) in recent decades, no AI anxiety scale (AIAS) has been developed. Given the limited utility of existing self-report instruments in measuring AIA, the aim of this paper is to develop a standardized tool to measure this phenomenon. Specifically, this paper introduces and defines the construct of AIA, develops a generic AIAS, and discusses the theoretical and practical applications of the instrument. The procedures used to conceptualize the survey, create the measurement items, collect data, and validate the multi-item scale are described. By analyzing data obtained from a sample of 301 respondents, the reliability, criterion-related validity, content validity, discriminant validity, convergent validity, and nomological validity of the constructs and relationships are fully examined. Overall, this empirically validated instrument advances scholarly knowledge regarding AIA and its associated behaviors.
28,032
Title: On numerical stability of randomized projection functional algorithms Abstract: In this paper, the application of randomized projection functional algorithms for numerical approximation of solutions of Fredholm equations of the second kind is discussed. Special attention is paid to the problems of numerical stability of the used orthonormal functional bases. The numerical instability of the randomized projection functional algorithm is noticed for the simple test one-dimensional equation and for the Hermite orthonormal basis.
28,113
Title: The role of time in the acceptance of MOOCs among Chinese university students Abstract: The current research aims to address the concern of the high attrition rate in MOOCs via exploring factors underlying learners' acceptance of MOOCs. Despite the plethora of studies on technology acceptance, few have discussed the role of time in technology acceptance. Given that time was reported as an important factor influencing learners' decision of MOOC completion/dropout in previous studies, the current study is conducted to fill in the literature gap by extending the technology acceptance model (TAM) with a new construct: Perception of Time. Structural equation modeling was employed to test the reliability and validity of the hypothesized model with data collected from Chinese university students. The results showed that the extended TAM could explain 45% of the variance in Chinese university students' MOOCs acceptance. Perception of Time did not directly associate with attitude and intention, but it was a significant predictor of perceived usefulness. Gender did not moderate the perception of time in explaining university students' MOOCs acceptance as expected. This study yields insights into the understanding of time's role in the acceptance of MOOCs, which contributes to the development of technology acceptance theory in the educational context and adds value to MOOC teaching and learning practice.
28,114
Title: Investigating teachers' adoption of MOOCs: the perspective of UTAUT2 Abstract: The number of massive open online courses (MOOCs) around the globe is on the rise. Despite the popularity of MOOCs, they have received less attention from faculty members around the globe compared to other less-traditional and digital education models. MOOCs can be challenging for teachers to use. As such, understanding how to facilitate teachers' adoption of MOOCs is very important to better promote their use. The aim of this research paper is to investigate the drivers of teachers' acceptance and use of MOOCs from the perspective of the extended unified theory of acceptance and use of technology (UTAUT2). An online survey was used to collect responses from university faculty in Taiwan. Partial least squares structural equation modeling (PLS-SEM) was utilized for data analysis. The findings reveal that performance expectancy, social influence, facilitating conditions, and price value facilitated teachers' behavioral intention to adopt MOOCs. Furthermore, facilitating conditions and behavioral intention determined teachers' adoption of MOOCs. However, effort expectancy and hedonic motivation failed to drive teachers' adoption of MOOCs. Based on the findings, several important theoretical and practical implications are discussed.
28,116
Title: Guarding a Subgraph as a Tool in Pursuit-Evasion Games Abstract: Pursuit-evasion games study the number of cops needed to capture the robber in a game played on a graph, in which the cops and the robber move alternately to neighbouring vertices, and the robber is captured if a cop steps on the vertex the robber is in. A common tool in analyzing the cop number of a graph is a cop moving along a shortest path in the graph, thus preventing the robber from stepping onto this path. We generalize this approach by introducing a shadow of the robber, the maximal set of vertices from which the cop parries the protected subgraph. In this context, the robber becomes an intruder and the cop becomes the guard. We show that the shadow can be computed in polynomial time, implying polynomial time algorithms for computing both a successful guard as well as a successful intruder, whichever exists. Furthermore, we show that the shadow function generalizes the concept of graph retractions. In some cases, this implies a polynomially computable certification of the negative answer to the NP-complete problem of the existence of a retraction to a given subgraph.
28,155
Title: Charting the hidden City: Collecting prison social network data Abstract: • We have experience collecting whole network data from inmates in five prisons. • We describe the challenges associated with gathering network data from prisoners. • Prison norms against “snitching” pose problems for collecting network data. • Interviewer interactions with prisoners and staff generate a “forbidden triad”.
28,185
Title: Protection of Lexicographic Product Graphs Abstract: In this paper, we study the weak Roman domination number and the secure domination number of lexicographic product graphs. In particular, we show that these two parameters coincide for almost all lexicographic product graphs. Furthermore, we obtain tight bounds and closed formulas for these parameters.
28,211
Title: Numerical methods for estimating the tuning parameter in penalized least squares problems Abstract: The solution of penalized least squares problems depends on a tuning parameter. A popular tool for specifying the tuning parameter is generalized cross-validation (GCV). In this work, we utilize estimates of the GCV function whose minimizers can lead to the determination of the tuning parameter. The selection of an efficient estimate depends on an appropriately defined index of proximity. Bounds and specific values are derived for this index, and a thorough study proves that the proposed one-term estimate is perfectly suited to statistical models with highly correlated variables. This is confirmed through simulation tests for several datasets.
28,213
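For reference, the exact GCV function that the abstract above builds estimates for can be evaluated cheaply for ridge regression through a single SVD of the design matrix. The formula used, GCV(k) = n * RSS(k) / (n - edf(k))^2, is the standard one; the grid and the simulated data in the usage are illustrative.

```python
import numpy as np

def gcv_ridge(X, y, ks):
    """Generalized cross-validation curve for ridge regression.

    One SVD of X serves every tuning parameter k on the grid: the hat
    matrix eigenvalues are s_i^2 / (s_i^2 + k), the residual sum of
    squares follows from the rotated response U'y, and the effective
    degrees of freedom edf(k) is the trace of the hat matrix.
    """
    n = len(y)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    Uty = U.T @ y
    out = []
    for k in ks:
        shrink = s**2 / (s**2 + k)
        # RSS = shrunk in-span residual + component of y outside span(X).
        resid2 = (np.sum(((1 - shrink) * Uty) ** 2)
                  + np.sum(y**2) - np.sum(Uty**2))
        edf = np.sum(shrink)
        out.append(n * resid2 / (n - edf) ** 2)
    return np.array(out)
```

At k = 0 the hat eigenvalues are all 1, so the expression collapses to n * SSE_OLS / (n - p)^2, which is a quick correctness check; the grid minimizer then supplies the tuning parameter.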
Title: An algorithmic implementation of entropic ternary reduct soft sentiment set (ETRSSS) using soft computing technique on big data sentiment analysis (BDSA) for optimal selection of a decision based on real-time update in online reviews Abstract: In our day-to-day life, we face many decision-making problems involving heavy text data sets, and these data sets are increasing so drastically that they reach the big data environment. Here, we not only propose a framework for big data sentiment analysis of real-time updates in online reviews or text for optimal or best decision selection (for example, selection of a restaurant from an existing huge list of N restaurants) but also implement our proposed framework as a mathematical algorithm (named Algorithm 4.1) using a soft computing technique for finding the reduct soft set of the consolidated review matrix. We further quantify sentiments in three values: 1 for positive/yes/true, −1 for negative/no/false, and 0 for neutral or absence of sentiment, and store them in a table (named the ternary sentiment table). Then, we perform an entropic calculation on this ternary sentiment table to find the quantity of information stored in its rows and columns. This quantification helps to identify the most important attributes of the table, to decide weights for the different attributes, and to apply the calculated weights to the corresponding attributes to obtain the quantified ordered decision-making values.
28,231
Title: Rank Criteria Improved Confidence-based centroid scheme for Non Line of Sight node localizations in vehicular networks Abstract: The location of the vehicles interacting during the process of communication needs to be cooperatively verified under Non Line Of Sight (NLOS) situations to facilitate a risk-free environment with the least degree of congestion. Vehicular nodes in NLOS conditions can introduce channel congestion and broadcast storms into the network, either intentionally or unintentionally, during emergency message dissemination in the vehicular network. Thus, the NLOS vehicular nodes of the network need to be detected through direct interaction and neighbourhood collaboration such that the rate of emergency message dissemination is sustained to the maximum degree. In this paper, a Rank Criteria Improved Confidence-based Centroid Scheme (RCICCS) is proposed for potential localization of NLOS nodes. The proposed RCICCS scheme uses an integrated cost, computed from a primitive cost and a penalty cost, in order to increase the effectiveness of localizing NLOS nodes. The effective location of NLOS nodes is thus determined based on a rank-criterion-based neighbor confidence measure that is iteratively enhanced during the process of perturbation using the method of gradients. The experimental analysis of the proposed RCICCS scheme confirmed a remarkable enhancement in the rate of emergency message delivery and neighbourhood awareness under NLOS conditions.
28,260
Title: Modeling of relationships between students' navigational behavior and problems in hypermedia learning system: the moderating role of working memory capacity Abstract: Studies in the literature show that not all learners may benefit from hypermedia learning systems in the same way, as a result of their individual differences. Learners with different individual characteristics may display different navigational patterns and may encounter different problems in a hypermedia learning system. The aim of the study is to investigate the relationships among students' navigational behaviors, disorientation, academic achievement, and satisfaction in hypermedia learning systems, and also to investigate the moderating impact of working memory capacity on these relationships. The participants of the study consist of 81 university students. This study utilized path analysis to examine the relationships between the measured variables. The results obtained after testing the model indicate that the relationships between navigational metrics and disorientation, academic achievement, and satisfaction change depending on working memory capacity. The results may provide a basis for developing an adaptive system with dynamic models taking into account students' individual differences.
28,276
Title: Reliability of time-constrained multi-state network susceptible to correlated component faults Abstract: Correlation can seriously degrade reliability and capacity due to the simultaneous failure of multiple components, which lowers the probability that a system can execute its required functions with acceptable levels of confidence. The high cost of fault in time-critical systems necessitates methods to explicitly consider the influence of correlation on reliability. This paper constructs a network-structured model, namely time-constrained multi-state network (TCMSN), to investigate the capacity of a computer network. In the TCMSN, the physical lines comprising the edges of the computer network experience correlated faults. Our approach quantifies the probability that d units of data can be sent from source to sink in no more than T units of time. This probability that the computer network delivers a specified level of data before the deadline is referred to as the system reliability. Experimental results indicate that the negative influence of correlation on reliability could be significant, especially when the data amount is close to network bandwidth and the time constraint is tight. The modeling approach will subsequently promote design and optimization studies to mitigate the vulnerability of networks to correlated faults.
28,373
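The TCMSN model itself is not specified in the abstract, but the qualitative effect it reports, that correlated faults depress reliability, can be illustrated with a small Monte Carlo sketch. The common-shock correlation model, parameter values, and function name below are illustrative assumptions, not the paper's formulation:

```python
import random

def reliability_mc(p_indep, p_shock, demand, capacities,
                   trials=20000, seed=7):
    """Monte Carlo estimate of P(surviving capacity >= demand) under a
    common-shock correlation model: each line fails independently with
    probability p_indep, and with probability p_shock a shared shock
    fails every line at once (the correlated fault)."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        if rng.random() < p_shock:
            surviving = 0.0                     # correlated failure
        else:
            surviving = sum(c for c in capacities
                            if rng.random() >= p_indep)
        ok += surviving >= demand
    return ok / trials

# Three 10-unit lines must jointly deliver 15 units before the deadline.
r = reliability_mc(p_indep=0.05, p_shock=0.02,
                   demand=15, capacities=[10, 10, 10])
```

Setting `p_shock` to zero recovers the independent-fault case; comparing the two estimates shows how even a small shared-shock probability lowers system reliability.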
Title: Inference on skew-normal distribution based on Fisher information in order statistics Abstract: The skew-normal distribution and its various generalizations have received much attention in recent years because of their non-normal characteristics, such as a heavy right or left tail, which appear in many real data sets. So far, many mathematical results have been derived about skew distributions. Nevertheless, less attention has been paid to the inferential aspects of the proposed distributions. Hence, in this paper we focus on some inferential results for the skew-normal distribution. The amount of Fisher information in the order statistics of a simple random sample is studied, and some properties are discussed with respect to variations of the skewness parameter. The most informative part of a sample is also determined via simulation. The maximum likelihood estimators of the skewness parameter are derived based on both the complete sample and the most informative part of the sample, and their efficiencies are compared. Furthermore, a rank-based sampling scheme is introduced for inference about the skewness parameter in populations with high right skewness. Finally, a real data set is used to illustrate the results of the paper.
28,380
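The abstract above does not reproduce its order-statistics comparison, but full-sample maximum likelihood for the skewness parameter (with location 0 and scale 1 held fixed) can be sketched with the standard library alone. The simulation setup, grid range, and sample size below are illustrative assumptions, not the paper's procedure:

```python
import math
import random

def skewnorm_logpdf(x, a):
    """log of the standard skew-normal density 2*phi(x)*Phi(a*x)."""
    log_phi = -0.5 * x * x - 0.5 * math.log(2 * math.pi)
    # Phi(t) = 0.5*erfc(-t/sqrt(2)); erfc keeps precision in the far tail.
    Phi = 0.5 * math.erfc(-a * x / math.sqrt(2))
    return math.log(2) + log_phi + math.log(Phi)

def loglik(sample, a):
    return sum(skewnorm_logpdf(x, a) for x in sample)

# Simulate SN(a_true) via X = delta*|Z1| + sqrt(1-delta^2)*Z2
# with delta = a/sqrt(1+a^2), a standard representation.
random.seed(1)
a_true = 3.0
delta = a_true / math.sqrt(1 + a_true ** 2)
sample = [delta * abs(random.gauss(0, 1))
          + math.sqrt(1 - delta ** 2) * random.gauss(0, 1)
          for _ in range(5000)]

# Grid-search MLE of the skewness parameter.
grid = [i / 10 for i in range(1, 81)]   # 0.1 .. 8.0
a_hat = max(grid, key=lambda a: loglik(sample, a))
```

With 5000 observations the grid maximizer lands near the true skewness of 3; a production implementation would use a proper numerical optimizer rather than a grid.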
Title: A new design paradigm for provably secure keyless hash function with subsets and two variables polynomial function Abstract: Provably secure keyless hash functions use Random Oracle (RO) or Sponge principles for the design and construction of security-centric hash algorithms. These principles have been capitalized on to produce outcomes like MD2, MD5, SHA-160, SHA-224/256, SHA-256, SHA-224/512, SHA-256/512, SHA-384/512, SHA-512, and SHA-3. These functions use bitwise AND, OR, XOR, and MOD operators to provide randomness in their hash outputs. However, the partial breaking of the SHA2 and SHA3 families and the breaking of the MD5 and SHA-160 algorithms raise concerns about the use of bitwise operators at the block level. The proposed design tries to address this structural flaw through a polynomial function. A polynomial function of degree 128 demands arduous effort to decode in the opposite direction. The application of a polynomial on the blocks produces an unpredictable random response. The new design exhibits the merits of the polynomial function on subsets to achieve an avalanche response to a significant level. Output from experiments with more than 24 million hash searches indicates that the proposed system is a provably secure hash function. Experiments on avalanche response and on confusion and diffusion analysis show it is an apt choice for security-centric cryptographic applications.
28,414
Title: Factors that influence the different levels of individuals' understanding after collaborative problem solving: the effects of shared representational guidance and prior knowledge Abstract: This study examined the effects of shared representational guidance (collaborative textual representative tool vs. collaborative graphical representative tool; TR vs. GR) and prior knowledge (low vs. high; LPK vs. HPK) on different levels of individuals' understanding of domain-specific knowledge after collaborative problem solving (CPS). A total of 84 individuals majoring in education at the same university in East China participated in the study. Their pre- and post-test factual and conceptual knowledge understanding were measured and analysed. It was found that prior knowledge affected individuals' development of factual knowledge understanding after CPS: individuals with high prior knowledge obtained better factual knowledge understanding than those with low prior knowledge. Shared representational guidance affected individuals' development of conceptual knowledge understanding after CPS: individuals in groups that used a collaborative graphical representative tool achieved better conceptual knowledge understanding after CPS than those in groups that used a collaborative textual representative tool. The findings of this study can provide teachers with detailed instructional guidance for organizing CPS projects in the classroom context.
28,588
Title: Estimation for a partially linear single-index varying-coefficient model Abstract: In this paper, we study estimation for the partially linear single-index varying-coefficient model, which is a natural extension of the partially linear varying-coefficient model. A stepwise estimation procedure is developed to obtain asymptotically normal estimators of the index parameter vector, the coefficient parameter vector, and the coefficient function vector. The asymptotic properties of the resulting estimators are established under some conditions. A simulation study is conducted to assess the performance of the stepwise estimation procedure, and the results show that the proposed procedure performs well in finite samples. Furthermore, a real data example is used to illustrate the proposed method.
28,606
Title: RSA based encryption approach for preserving confidentiality of big data Abstract: Sensitive Health Information (SHI) is central to a developing patient-centric model of medical data exchange, in which data are frequently outsourced to be stored at third-party servers. However, there have been various privacy concerns, as SHI could be disclosed to unauthorised and third parties. A promising method is to encrypt the SHI before outsourcing, to assure patients' control over access to their own SHI. However, challenges like scalability in key management and flexible access remain the most significant obstacles to achieving fine-grained, cryptographically enforced data access control. In this paper, the authors propose a novel patient-centric system model for access control to SHI stored on semi-honest servers. For fine-grained and scalable access control, the authors propose an encryption technique, an improvement over RSA, to encrypt every patient's SHI file. In contrast to previous works on secure data transmission, the authors focus on the data owner and divide users into several domains, which greatly decreases the key-management complexity for data owners and users. Comprehensive analytical and experimental results are presented that reflect the efficiency of the proposed approach.
28,680
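The abstract does not specify its improved RSA variant, so as background only, textbook RSA key setup and encryption can be sketched in a few lines. These are the classic tiny-prime worked numbers for illustration; real deployments need large random primes and padding such as OAEP:

```python
# Textbook RSA with tiny primes; for illustration only, not the
# paper's improved scheme.
p, q = 61, 53
n = p * q                    # modulus: 3233
phi = (p - 1) * (q - 1)      # Euler's totient: 3120
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent via modular inverse (Python 3.8+)

def encrypt(m, pub_e, mod):
    return pow(m, pub_e, mod)

def decrypt(c, priv_d, mod):
    return pow(c, priv_d, mod)

cipher = encrypt(65, e, n)   # 65^17 mod 3233
plain = decrypt(cipher, d, n)
```

The scheme works because `e*d ≡ 1 (mod phi)`, so `m^(e*d) ≡ m (mod n)` by Euler's theorem; decryption recovers the original message 65.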
Title: Assessing perceptions and evaluating achievements of ESL students with the usage of infographics in a flipped classroom learning environment Abstract: Today, technologies are utilized extensively to support the process of learning according to learner needs. Hence, advances in technology and ideology have prompted new directions in second language (L2) instructional practices. This study was carried out on the usage of infographics in a flipped classroom learning environment called 'Flipped Classroom Instructional Infographics' (FCII). The goal of the research is to assess the perceptions and evaluate the achievements of ESL students learning the English language through flipped classroom instructional infographics. The study is designed as a case study and uses a mixed research method (quantitative and qualitative) in order to obtain results in a consistent manner. The data were gathered from 130 students by means of a questionnaire, and a focus group interview was conducted to explore students' perceptions of infographics, students' perceptions of the FCII learning environment, and students' academic achievement levels. The findings of the study indicate that the motivations of students in the experimental group, compared to the control group, were more strongly triggered by the engaging and comprehensive nature of flipped classroom instructional infographics, meaning that they could absorb the concepts more easily, memorize the information faster, and become more confident in the educational process.
28,703
Title: A predictive analytics framework as a countermeasure for attrition of students Abstract: Attrition is one of the main concerns in distance learning due to its impact on institutions' income and reputation. Timely identification of students at risk has high practical value for effective student retention services. Big data mining and machine learning methods are applied to manipulate, analyze, and predict students' failure, supporting self-directed learning. Despite the extensive application of data mining to education, the imbalance problem in the minority classes of student attrition is often overlooked in conventional models. This paper proposes a big data framework using the Hadoop ecosystem and the application of machine learning techniques to datasets from an academic year at the Hellenic Open University. The datasets were divided into thirty-five weeks. Thirty-two classifiers were created, compared, and statistically analyzed to address the imbalance in the minority classes of student failure. The algorithms MetaCost-SMO and C4.5 provide the most accurate performance for each target class. Early predictions over short timeframes show remarkable performance, while the importance of written assignments and specific quizzes is noticeable. The models' performance in any week is exploited by developing a prediction tool for student attrition, contributing to timely and personalized intervention.
28,751
Title: Asymptotic tests for parameters of two one-truncation parameter family of distributions Abstract: In this article, we present some asymptotic distributions related to two one- (left- or right-) truncation parameter families of distributions. The asymptotic conditional distribution of the complete sufficient statistic for the parameter of interest is obtained by conditioning on the complete sufficient statistic for the nuisance parameter being held fixed. Using this conditional distribution, we derive asymptotically uniformly most powerful invariant tests of the hypotheses that (i) one parameter is larger than the other and (ii) the truncation parameters of two independent one- (left- or right-) truncation parameter families of distributions are equal. An attempt has also been made to calculate the power and to draw the corresponding power curve. A comparative study is carried out with the likelihood ratio test.
28,752
Title: A Bayesian discriminant analysis method under semiparametric density ratio models Abstract: We propose a semiparametric Bayesian discriminant analysis method. Statistical simulation shows that when the hypothesis of normal distribution is satisfied, our method is comparable to, and even slightly better than, the traditional method; when the hypothesis of normal distribution is not satisfied, our method is clearly better than the traditional one. We also apply our method to analyze a real data set and find that it outperforms the traditional method there as well. Finally, we point out that implementing our method is easy, since the usual polytomous logistic regression procedures in many statistical software packages can be employed.
28,845
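The abstract's closing remark, that the method can be implemented via (polytomous) logistic regression, rests on a key property of density ratio models: when log(f1/f0) is linear in x, the class posterior is exactly logistic in x. A minimal two-class sketch with hand-rolled gradient descent; the simulated Gaussian classes and all tuning values are illustrative assumptions, not the paper's Bayesian procedure:

```python
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=300):
    """Plain batch gradient-descent logistic regression.  Under the
    density ratio model log(f1/f0)(x) = alpha + beta*x, the class
    posterior is exactly logistic in x, so the fitted line doubles as
    a discriminant rule."""
    a = b = 0.0
    n = len(X)
    for _ in range(epochs):
        ga = gb = 0.0
        for x, t in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += p - t          # gradient of the log-loss w.r.t. a
            gb += (p - t) * x    # gradient of the log-loss w.r.t. b
        a -= lr * ga / n
        b -= lr * gb / n
    return a, b

# Two Gaussian classes, N(0,1) vs N(2,1): their log density ratio is
# linear in x, so the density ratio model holds exactly.
random.seed(0)
X = ([random.gauss(0, 1) for _ in range(300)]
     + [random.gauss(2, 1) for _ in range(300)])
y = [0] * 300 + [1] * 300
a, b = fit_logistic(X, y)
classify = lambda x: int(a + b * x > 0)
```

In practice one would call an off-the-shelf (multinomial) logistic regression routine, which is exactly the convenience the abstract highlights.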
Title: Game-based learning pedagogy: a review of the literature Abstract: The present study sought to elicit insights into pedagogical practices pertaining to the integration of digital games into teaching and learning. A review of peer-reviewed journal articles published in English over the past 10 years uncovered common pedagogical themes that were categorized into a pre-game, game, and post-game taxonomy. The findings indicated that teachers implemented a variety of instructional activities at the pre-game, game, and post-game stages. Pre-game activities consisted mostly of lectures and gameplay trainings. At the game stage, teachers engaged in content scaffolding, performed classroom management activities, and provided technical assistance to students during gameplay. At the post-game stage, teachers organized debriefing sessions to ensure that gameplay translated into learning outcomes for students. Recommendations are made for the integration of games into teaching and learning in order to maximize student engagement and learning outcomes.
28,852
Title: SD-MAC: Design and evaluation of a software-defined passive optical intrarack network in data centers Abstract: With rapid development of cloud computing services, traffic in data centers (DCs) increases dramatically. Top-of-rack Ethernet switches dominate network cost and energy consumption in DCs. Optical switching technologies have been introduced into DC networks to increase switching capacity and to reduce energy consumption. However, how to design high-performance optical intrarack networks is still a great challenge. This paper presents a software-defined passive optical intrarack network (SD-POIRN) based on optical coupler fabrics. The data plane of SD-POIRN is a broadcast-and-select optical network combining wavelength division multiplexing and time division multiplexing technologies. The control plane of SD-POIRN has a centralized rack controller for scheduling intrarack data transmissions. This paper proposes a software-defined media access control (SD-MAC) protocol for SD-POIRN. The SD-MAC utilizes dynamic transmission and receiving server grouping methods and max-min fair share bandwidth allocation algorithms to dynamically allocate bandwidth among servers in an efficient and fair manner. Network performance evaluation results show that, when the offered load is high, the proposed SD-MAC protocol effectively increases network throughput and reduces average packet delay compared with previous works.
28,880
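The SD-MAC scheduler itself is not detailed in the abstract, but the max-min fair share principle it builds on is standard and easy to sketch. The demand values and function name below are illustrative assumptions:

```python
def max_min_fair(demands, capacity):
    """Max-min fair allocation of `capacity` among `demands`:
    repeatedly offer every unsatisfied demand an equal share of the
    remaining capacity; demands below the share are fully met and the
    leftover is redistributed among the rest."""
    alloc = [0.0] * len(demands)
    remaining = float(capacity)
    # Process demands smallest-first so each fully satisfied demand
    # frees capacity for the larger ones.
    pending = sorted(range(len(demands)), key=lambda i: demands[i])
    while pending and remaining > 1e-12:
        share = remaining / len(pending)
        i = pending[0]
        if demands[i] <= share:
            alloc[i] = demands[i]        # fully satisfied
            remaining -= demands[i]
            pending.pop(0)
        else:
            for j in pending:            # all rest get the equal share
                alloc[j] = share
            pending = []
    return alloc

# Four servers requesting bandwidth on a 10-unit link:
alloc = max_min_fair([2, 2.6, 4, 5], capacity=10)
```

Here the two small demands (2 and 2.6) are fully satisfied, and the remaining 5.4 units are split equally (2.7 each) between the two larger ones; no demand can be raised without lowering an equal or smaller allocation, which is the max-min fairness criterion.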
Title: Multi-objective economic-statistical design of simple linear profiles using a combination of NSGA-II, RSM, and TOPSIS Abstract: A multi-objective economic-statistical design for simple linear profiles is proposed in this article. In this design, the interval between two successive samples, the sample size, and the number of adjustment points, alongside the parameters of the monitoring scheme, are determined such that not only is the implementation cost minimized, but the profile also exhibits the desired statistical performance. To this aim, three objective functions are considered in the multi-objective optimization model of the problem. The Lorenzen-Vance cost function is used to model the implementation cost as the first objective function to be minimized. The second objective function maximizes the in-control average run length of the monitoring scheme (ARL(0)), while the third objective function minimizes the out-of-control average run length (ARL(1)). In addition, a lower bound on ARL(0) is imposed in one constraint and an upper bound on ARL(1) in another. The complex multi-objective optimization problem is solved by an NSGA-II algorithm whose parameters are tuned using response surface methodology (RSM). A numerical illustration is solved, for which the Pareto optimal solutions are ranked by a multi-criteria decision-making method called the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). Finally, sensitivity analyses are conducted on the parameters of the cost function, based on which the time required to sample and create a profile is found to be the most important factor.
28,945
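The final ranking step can be illustrated with a compact TOPSIS implementation. The decision matrix, weights, and criterion directions below are made-up stand-ins for the paper's Pareto solutions (cost and ARL(1) minimized, ARL(0) maximized):

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS: weighted vector normalization,
    distances to the ideal and anti-ideal points, then the closeness
    coefficient (higher is better)."""
    ncol = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix))
             for j in range(ncol)]
    V = [[row[j] / norms[j] * weights[j] for j in range(ncol)]
         for row in matrix]
    cols = list(zip(*V))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    anti = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    scores = []
    for row in V:
        d_plus = math.dist(row, ideal)    # distance to ideal (3.8+)
        d_minus = math.dist(row, anti)    # distance to anti-ideal
        scores.append(d_minus / (d_plus + d_minus))
    return scores

# Three hypothetical Pareto solutions scored on (cost, ARL0, ARL1);
# cost and ARL1 are minimized, ARL0 is maximized.
scores = topsis([[100, 370, 5], [120, 500, 3], [90, 300, 8]],
                weights=[0.4, 0.3, 0.3],
                benefit=[False, True, False])
best = scores.index(max(scores))
```

With these numbers the second solution ranks first: despite its higher cost, its strong ARL(0) and ARL(1) values place it closest to the ideal point.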
Title: Aligning observed and modelled behaviour by maximizing synchronous moves and using milestones Abstract: Given a process model and an event log, conformance checking aims to relate the two together, e.g. to detect discrepancies between them. For the synchronous product net of the process and a log trace, we can assign different costs to a synchronous move and to a move in the log or model. By computing a path through this (synchronous) product net whilst minimizing the total cost, we create a so-called optimal alignment, which is considered to be the primary target result for conformance checking. Traditional alignment-based approaches (1) have performance problems for larger logs and models, and (2) do not provide reliable diagnostics for non-conforming behaviour (e.g. bottleneck analysis is based on events that did not happen). This is the reason to explore an alternative approach that maximizes the use of observed events. We also introduce the notion of milestone activities, i.e. unskippable activities, and show how the different approaches relate to each other. We propose a data structure, computable from the process model, that can be used (1) for computing alignments of many log traces that maximize synchronous moves, and (2) as a means for analysing non-conforming behaviour. In our experiments we show the differences between various alignment cost functions. We also show how the performance of constructing alignments with our data structure compares to that of state-of-the-art techniques.
28,997
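For a single fixed model run (rather than a full Petri net), the cost-based optimal alignment described in the abstract reduces to an edit-distance dynamic program over synchronous, log, and model moves. The sketch below is that simplification, not the paper's data structure; the cost values and example traces are illustrative:

```python
def align(trace, model_run, c_sync=0, c_log=1, c_model=1):
    """Minimal-cost alignment of a log trace against one model run.

    A synchronous move (same label on both sides) costs c_sync; a move
    only in the log or only in the model costs c_log / c_model.  This
    is a plain edit-distance dynamic program; full conformance checking
    instead searches the synchronous product of the trace and a Petri
    net, typically with A*.
    """
    n, m = len(trace), len(model_run)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0
    for i in range(n + 1):
        for j in range(m + 1):
            if i < n:                       # log move: skip trace[i]
                D[i + 1][j] = min(D[i + 1][j], D[i][j] + c_log)
            if j < m:                       # model move: skip model_run[j]
                D[i][j + 1] = min(D[i][j + 1], D[i][j] + c_model)
            if i < n and j < m and trace[i] == model_run[j]:
                D[i + 1][j + 1] = min(D[i + 1][j + 1], D[i][j] + c_sync)
    return D[n][m]

# The trace replaces 'b' with an extra 'x' relative to the model run,
# which costs one log move plus one model move.
cost = align(list("axcd"), list("abcd"))
```

Raising c_log relative to c_model (or vice versa) changes which alignment is optimal, which is what the paper's comparison of alignment cost functions is about.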
Title: A hybrid classification model for prediction of academic performance of students: a big data application Abstract: Predicting the educational performance of students becomes more complex due to the vast volume of data in the database. Conventional systems have failed to analyze and monitor the progress of students, and often promising performance goes unaddressed; this is due to inappropriate method selection and lagging investigation. This paper aims to predict the educational performance of students based on socio-demographic information. It introduces a performance prediction architecture that includes two modules. One module handles the big data via a MapReduce framework (incorporating the concept of principal component analysis), whereas the second is an intelligent module that predicts the performance of students through data processing stages. For this, the paper introduces a new hybrid classifier that hybridizes a deep belief network and a support vector machine. The output from the proposed classifier gives an accurate prediction of student performance. The performance of the proposed prediction model is compared with other conventional methods in terms of accuracy, specificity, sensitivity, precision, negative predictive value, F1-score, Matthews correlation coefficient, false positive rate, false negative rate, and false discovery rate, demonstrating the superiority of the proposed prediction model.
29,003