Dataset columns: text (string, lengths 70 to 7.94k characters) and __index_level_0__ (int64, values 105 to 711k).
Title: Advancement and the foci of investigation of MOOCs and open online courses for language learning: a review of journal publications from 2009 to 2018 Abstract: Based on the Technology-Based Learning Model, the present study reviewed open online language learning research published in the Scopus database from 2009 to 2018, with a focus on research design, sample size, learning devices, target languages, language skills, the learners' education level, the learners' language proficiency level, learning methods, and the foci of investigation. In terms of research design, quantitative empirical studies accounted for the largest share, followed by mixed method studies. As for the target languages, English was investigated the most, with few studies focused on other languages. In terms of language skills, the number of studies related to vocabulary and speaking was smaller than the number concerning reading and writing. With regard to learning methods, self-directed learning, blended learning, and peer review were the most commonly adopted. There was little research on collaborative learning, flipped classrooms, adaptive learning, or thematic-oriented learning. As for the foci of investigation, motivation, interest, and satisfaction were the focus of the majority of the studies. On the other hand, little research was found to examine self-efficacy or confidence. A discussion based on the analytic results is presented along with proposed suggestions for future research directions.
51,786
Title: Ridge estimation in linear mixed measurement error models with stochastic linear mixed restrictions Abstract: This article focuses on the problem of multicollinearity in linear mixed models (LMMs) with measurement error in the fixed effects variables. After introducing a ridge estimator (RE) in these models, we propose a new estimator called the stochastic restricted ridge estimator (SRRE) by combining the ridge estimator (RE) and the stochastic restricted estimator (SRE). Moreover, the asymptotic properties of these estimators are derived, and the necessary and sufficient conditions for the superiority of the SRRE over the RE and SRE are obtained using the mean squared error matrix (MSEM). Finally, the theoretical findings on the proposed estimators are evaluated with a Monte Carlo simulation study and a numerical example.
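For orientation only, the classical ridge (RE) and stochastic restricted (SRE) estimators that the abstract's SRRE combines can be written as below for the ordinary linear model; this is a hedged background sketch, not the paper's LMM/measurement-error formulation, and the exact SRRE form may differ from the combination noted in the comments.

```latex
% Background forms for the ordinary linear model y = X\beta + \varepsilon with a
% stochastic linear restriction r = R\beta + e, Cov(e) \propto W (assumed here);
% the paper's estimators generalize these to LMMs with measurement error in X.
\[
  \hat{\beta}_{\mathrm{RE}}(k) = (X^{\top}X + kI)^{-1} X^{\top} y, \qquad k > 0,
\]
\[
  \hat{\beta}_{\mathrm{SRE}} = (X^{\top}X + R^{\top}W^{-1}R)^{-1}
                               (X^{\top}y + R^{\top}W^{-1}r).
\]
% One common ridge-type combination of the two simply adds kI to the SRE
% normal matrix; the paper's SRRE should be consulted for its exact definition.
```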
51,822
Title: Resource allocation through logistic regression and multicriteria decision making method in IoT fog computing Abstract: Cloud computing has received a lot of attention from both researchers and developers in the last decade due to its unique structure for providing services to users. With the digitalization of the world, heterogeneous devices, and the emergence of the Internet of Things (IoT), IoT devices produce different types of data at distinct frequencies, which require real-time and latency-sensitive services. This poses a great challenge to the cloud computing framework. Fog computing is a new framework that complements the cloud platform and is proposed to extend services to the edge of the network. In fog computing, users' tasks are offloaded to distributed fog nodes at the edge of the network to avoid delay sensitivity. We consider a fog computing network in which different sets of fog nodes provide the required services to users. Allocating the available resources to users in order to achieve an optimal result is a big challenge. Therefore, we propose a dynamic resource allocation strategy for the cloud, fog nodes, and users. In the framework, we first rank the fog nodes using TOPSIS to identify the most suitable fog node for an incoming request. Simultaneously, logistic regression estimates the load of each fog node and sends the result back to the broker for the next decision. Simulation results demonstrate that the proposed scheme improves performance and achieves an accuracy of 98.25%.
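For reference, a minimal sketch of the TOPSIS ranking step described above. The fog-node criteria (CPU, memory, latency, load) and weights are hypothetical illustrations, not the paper's actual decision matrix or load model.

```python
import numpy as np

def topsis_rank(decision_matrix, weights, benefit):
    """Rank alternatives (rows) with TOPSIS.

    decision_matrix : (n_alternatives, n_criteria) array
    weights         : criteria weights summing to 1
    benefit         : boolean per criterion (True = larger is better)
    """
    X = np.asarray(decision_matrix, dtype=float)
    norm = X / np.linalg.norm(X, axis=0)          # 1. vector-normalize each criterion
    V = norm * np.asarray(weights, dtype=float)   # 2. apply weights
    benefit = np.asarray(benefit, dtype=bool)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))   # 3. ideal solution
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))    #    and anti-ideal
    d_pos = np.linalg.norm(V - ideal, axis=1)     # 4. distances to both
    d_neg = np.linalg.norm(V - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)           # 5. closeness coefficient
    return closeness, np.argsort(-closeness)      # higher closeness = better node

# Hypothetical fog nodes scored on CPU capacity, free memory, latency, current load.
nodes = [[8, 16, 20, 0.4],
         [4,  8,  5, 0.7],
         [16, 32, 50, 0.2]]
score, order = topsis_rank(nodes, weights=[0.3, 0.2, 0.3, 0.2],
                           benefit=[True, True, False, False])
print(order)  # fog-node indices from most to least suitable
```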
51,845
Title: RTSLPS: Real time server load prediction system for the ever-changing cloud computing environment Abstract: Cloud Computing (CC) proposes a multi-tenant framework used by multiple concurrent users, each of which exhibits different and varied behavior. Such heterogeneity shapes a highly fluctuating load and creates new usage patterns over time at the server level. Virtual Machine (VM) interference also plays a large part in inducing changes in server load. Server load prediction is deemed crucial to ensure efficient resource usage. The execution of real-time interactive tasks constitutes an important part of CC. Thus, we propose, in this paper, a real-time server load prediction system based on incoming task classification and VM interference detection. The incoming task classification is used to capture the incoming workload trend, and VM interference detection aims to capture the interference rate. Finally, load prediction considers the server's actual resource usage, the VM interference rate, and the incoming workload trend. We propose an improved version of the Hoeffding Adaptive Tree (HAT), augmented by ensemble drift detectors. Results show that our Real-Time Server Load Prediction System (RTSLPS) was able to deliver great flexibility in dealing with changes and very good accuracy with quick evaluation time and a small memory footprint.
51,909
Title: Forbidden Subgraphs for Collapsible Graphs and Supereulerian Graphs Abstract: In this paper, we completely characterize the connected forbidden subgraphs and pairs of connected forbidden subgraphs that force a 2-edge-connected (2-connected) graph to be collapsible. In addition, the characterization of pairs of connected forbidden subgraphs that imply a 2-edge-connected graph of minimum degree at least three is supereulerian will be considered. We have given all possible forbidden pairs. In particular, we prove that every 2-edge-connected noncollapsible (or nonsupereulerian) graph of minimum degree at least three is Z_3-free if and only if it is K_3-free, where Z_i is the graph obtained by identifying a vertex of a K_3 with an end-vertex of a P_{i+1}.
51,928
Title: The different effects of daily-life instant response social media and an educational feedback system on flipped learning: from the evidence of behavioral analysis Abstract: In this study, we used the application software installed on students' smart phones as instantly interactive tools during class time in a flipped learning context. One popular daily-life instant interaction application, Line, provided the students with text or multimedia feedback in time sequence; one non-daily instant interaction application, PAL+, provided the students with inscribed feedback in time sequence; and one educational instant interaction application, SpeakUP!, provided the students with anonymous feedback in time sequence. We compared the students' behaviors of using the different interactive systems for interactions in the flipped classroom class time. The study applied learning analytics techniques and used behavioral analysis to identify the role of the daily-life instant response application in flipped learning, and found how the instant feedback systems enhanced discussion in the classroom.
51,974
Title: Social network research in health care settings: Design and data collection Abstract: • Social network research offers a means of understanding the complex health care system. • The use of social network research in health care is increasing. • This paper outlines recommendations for network data collection in health services research. • Key ethical challenges of social network research in health care are around confidentiality of respondents.
52,028
Title: Evaluating the reliability of diagnostic performance indices by using Taguchi quality loss function Abstract: A medical test to diagnose a disease is often used to distinguish between healthy and diseased individuals, where early, accurate and reliable diagnosis can decrease the morbidity and mortality rates of the disease. An optimal cut-off point is required to discriminate healthy from diseased individuals, and the corresponding biomarker value is used to assess the accuracy and robustness of classifying a person as healthy (negative) or diseased (positive). Biomarker values greater than or equal to this cut-off value are considered positive; otherwise, they are negative. Several indices, such as the Youden index, the Euclidean index and the product of sensitivity and specificity, have been used in clinical practice, but the reliability of their performance is not well understood by clinicians. This study uses the Taguchi quality loss function to compare the choice of methods for determining optimal cut-off points for diagnostic tests. The results illustrate that when the variance of the diseased population is less than the variance of the healthy population and the loss coefficient of false negative results is greater than the loss coefficient of false positive results, the Youden index has better performance; in other cases, the Euclidean index is the better measure. This paper proposes a Taguchi index based on the quality loss function that can measure diagnostic accuracy for differences in sensitivity and specificity by minimizing the cost of false positive and false negative results. The proposed index can assess diagnostic tests and offer perfect discrimination.
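As a point of reference, the two classical cut-off criteria the abstract compares (Youden and Euclidean indices) can be computed from a ROC curve as sketched below; the data are synthetic and this is not the paper's Taguchi loss formulation.

```python
import numpy as np
from sklearn.metrics import roc_curve

def optimal_cutoffs(y_true, biomarker):
    """Compare two common cut-off criteria for a biomarker (higher = diseased)."""
    fpr, tpr, thresholds = roc_curve(y_true, biomarker)
    sensitivity, specificity = tpr, 1.0 - fpr
    # Youden index J = sensitivity + specificity - 1 (maximize).
    youden = sensitivity + specificity - 1.0
    # Euclidean index = distance to the perfect point (0, 1) in ROC space (minimize).
    euclid = np.sqrt((1.0 - sensitivity) ** 2 + (1.0 - specificity) ** 2)
    return thresholds[np.argmax(youden)], thresholds[np.argmin(euclid)]

# Hypothetical data: healthy (0) and diseased (1) populations with unequal variances.
rng = np.random.default_rng(0)
y = np.r_[np.zeros(500, int), np.ones(500, int)]
x = np.r_[rng.normal(0.0, 1.5, 500), rng.normal(2.0, 1.0, 500)]
print(optimal_cutoffs(y, x))  # (Youden cut-off, Euclidean cut-off)
```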
52,115
Title: Scheduling The Vehicles Of Bus Rapid Transit Systems: A Case Study Abstract: Bus rapid transit (BRT) is a cost-efficient, traffic-free bus-based transportation system competing with subways. There are 205 municipalities around the world that have implemented their own BRT systems. Istanbul, having the sixth-most congested traffic in the world, built its own BRT system (Metrobus), which serves more than 830,000 people (6.45% of all public transportation usage) per day with 6254 trips covered by its current fleet of 496 vehicles. In this study, we model the vehicle scheduling problem of Metrobus as a multiple depot vehicle scheduling problem. The model aims to minimize the fleet size and total deadhead kilometers while covering all timetabled trips. We propose a new heuristic, trips merger (TM), to solve the model and show that there exist cost reduction opportunities in terms of both fleet size and deadhead kilometers. The proposed heuristic is a member of the state-space reduction heuristics family, which first reduces the problem size and then solves the reduced problem. The computational study reveals that TM performed better than the existing state-space reduction heuristics for the Metrobus case.
52,126
Title: A fiducial test for assessing the non-inferiority of odds ratio in matched-pairs design Abstract: The non-inferiority of the odds ratio in a matched-pairs design is a common question in medical research, and several different approaches are available to answer it. However, their performance is not always satisfactory. Some of the traditional approaches, such as the delta test and score test, do not perform well unless the sample size is large. The simulated significance level of the inferential model (IM) test is conservative, and its power is lower than that of the other tests. The results of the randomized IM test are satisfactory; however, since it is based on a randomly generated quantity, the plausibility function is not fixed. This paper describes a fiducial test that is based on Fisher's fiducial argument. Our simulation studies illustrate that the fiducial test can control Type I error, and its power appears to be very close to that of the score test. All of the test procedures are illustrated with a real example.
52,161
Title: Decidability and complexity of action-based temporal planning over dense time Abstract: In this paper, we study the computational complexity of action-based temporal planning interpreted over dense time. When time is assumed to be discrete, the problem is known to be EXPSPACE-complete. However, the official PDDL 2.1 semantics and many implementations interpret time as a dense domain. This work provides several results about the complexity of the problem, focusing on some particularly interesting cases: whether a minimum amount ε of separation between mutually exclusive events is given, in contrast to the separation being simply required to be non-zero, and whether or not actions are allowed to overlap already running instances of themselves. We prove the problem to be PSPACE-complete when self-overlap is forbidden, whereas, when it is allowed, it becomes EXPSPACE-complete with ε-separation and even undecidable with non-zero separation. These results clarify the computational consequences of different choices in the definition at the core of the PDDL 2.1 semantics, which have been vague until now.
52,271
Title: Hurst exponent based approach for influence maximization in social networks Abstract: Influence maximization in online social networks is a trending research area due to its use in many real-world domains. Influence maximization addresses the problem of identifying a k-size subset of nodes in a social network which can trigger a cascade of further adoptions, leading to maximum influence spread across the social network. In this paper, influence maximization is approached by combining a node's connections with its actual past activity pattern. Analyzing a node's activity with respect to interaction frequency and self-similarity trend provides a more realistic view of the node's influence potential. Inspired by this concept, the HAC-Rank algorithm is proposed for the identification of initial adopters based on both their connections and past behaviour. Furthermore, a Hurst-based Influence Maximization (HBIM) model for diffusion, wherein a node's activation depends upon its connections and the self-similarity trend exhibited by its past activity, is also proposed. For assessing the self-similarity trend in a node's activity pattern, the Hurst exponent (H) is computed. Based on the results achieved, the proposed algorithm performs better than other state-of-the-art algorithms for initial adopter identification.
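For background, a minimal rescaled-range (R/S) sketch of how a Hurst exponent can be estimated from a node's activity series; the activity data here are synthetic, and the HBIM diffusion model itself is not reproduced.

```python
import numpy as np

def hurst_rs(activity, min_chunk=8):
    """Estimate the Hurst exponent H of an activity series via rescaled-range
    analysis: H > 0.5 suggests persistent (self-similar) activity, H < 0.5
    anti-persistent, H ~ 0.5 memoryless."""
    x = np.asarray(activity, dtype=float)
    n = len(x)
    sizes, rs_values = [], []
    size = min_chunk
    while size <= n // 2:
        rs_per_chunk = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviation from the mean
            r = dev.max() - dev.min()               # range of the deviation
            s = chunk.std()                         # standard deviation
            if s > 0:
                rs_per_chunk.append(r / s)
        if rs_per_chunk:
            sizes.append(size)
            rs_values.append(np.mean(rs_per_chunk))
        size *= 2
    # Slope of log(R/S) against log(window size) estimates H.
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
    return slope

# Hypothetical daily interaction counts of a node over two years
# (uncorrelated Poisson counts, so H should land near 0.5).
rng = np.random.default_rng(1)
series = rng.poisson(5, size=730)
print(round(hurst_rs(series), 2))
```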
52,307
Title: Strategyproof mechanisms for Friends and Enemies Games Abstract: We investigate strategyproof mechanisms for Friends and Enemies Games, a subclass of Hedonic Games in which every agent classifies any other one as a friend or as an enemy. In this setting, we consider the two classical scenarios proposed in the literature, called Friends Appreciation (FA) and Enemies Aversion (EA). Roughly speaking, in the former each agent gives priority to the number of friends in her coalition, while in the latter to the number of enemies.
52,534
Title: Knowledge-based block chain networks for health log data management mobile service Abstract: Interest in health care is growing rapidly due to the recent development of IT convergence technologies in the 4th industrial revolution. More services for users' personal health management are available, and with the emergence of block chain technology, the next-generation information security technology, studies on establishing a knowledge base for efficient health log data management in the health care field are being carried out. In this paper, a knowledge-based block chain network for a health log data management mobile service is proposed. The users' log data and context information are stored with block chain technology, which is difficult to forge or falsify, in the knowledge-based health platform; the continuously accumulated log data and context information of a large number of users are stored in blocks in the knowledge base using a side chain structure that records information through knowledge-based data transactions. This also secures high expandability and security in the mobile environment. A comparative evaluation against an existing ontology knowledge model for verifying validity shows that the proposed method achieves approximately 16.5% higher accuracy and reproducibility.
52,586
Title: Agent grouping recommendation method in edge computing Abstract: In edge computing, diverse kinds of data are handled in real time. An increasing amount of research has been carried out to improve the performance of data handling for agent-based data control technology. An important application of edge computing is controlling distributed agents in real-time strategy (RTS) games. One of the key approaches for agent control is the grouping of agents; however, it is difficult to group them into reasonable clusters. This paper proposes a recommendation method for the best grouping of agents and edge computing devices to reduce data handling time and obtain optimal results for RTS game agent selection. The proposed method uses K-means, influence mapping, and Bayesian probability, and was evaluated in a game environment in which data handling performance is easily measured. The comparison between the recommendation and random modes shows that our method improves the win percentage by 47%.
52,693
Title: The use of eye tracking technology to explore learning and performance within virtual reality and mixed reality settings: a scoping review Abstract: This scoping review examines studies using eye tracking technology to monitor learning and performance in virtual or mixed reality settings. The aim of this review is to describe the various ways in which eye tracking devices have been deployed in relation to key aspects of virtual reality and mixed reality environments, list the eye tracking measures most salient to such environments and identify emergent patterns in the findings that the eye tracking data in the studies reviewed have brought to light. Drawing on these findings, an analytical framework for attending to and analysing eye tracking data is proposed and recommendations for future research using eye tracking to optimise learning and performance within virtual reality and mixed reality environments are discussed.
52,712
Title: Task scheduling approaches in fog computing: A survey Abstract: The advent of the Internet of Things brings a wave of research, technology, and computing. The Internet of Things has brought a concept called fog computing, which has launched many discussions in the scientific community. In a fog environment, a requested service is decomposed into a set of tasks, and finding an optimal approach to schedule these tasks across fog devices to serve user requirements is an important challenge. Although task scheduling approaches are a necessity in fog computing, to the best of our knowledge there is no comprehensive survey of task scheduling approaches in fog computing. This paper provides a survey analyzing the research on task scheduling approaches in fog computing from 2015 to 2018. Moreover, this paper classifies the task scheduling approaches into two fields: static and dynamic. The study discusses the advantages and disadvantages of each approach and how their weaknesses might be addressed. Providing a survey of task scheduling approaches in fog computing, planning a technical taxonomy for the task scheduling approaches, and highlighting future open issues in this topic are the contributions of this study.
52,760
Title: Application of network security penetration technology in power internet of things security vulnerability detection Abstract: The emergence of network penetration attacks makes network security problems increasingly prominent. To better detect network penetration attacks, this article first proposes an attack detection method based on an ant colony classification rule mining algorithm grounded in swarm intelligence theory, in which the mined classification rules perform pattern matching to detect attack behavior. Second, the quantification problem in the evaluation mechanism of power internet of things (IoT) security vulnerability is analyzed. For the addition and deletion of power IoT security assessment elements, a probabilistic computing model is introduced, which improves the vulnerability assessment mechanism of the power IoT and addresses the quantitative computation of its security. Finally, after a series of preprocessing steps on the dataset, the above methods are used to find the classification rules or classification centers, pattern matching or similarity calculation is performed respectively, and the attack detection result is obtained. A series of evaluation functions is established in the experiments, and the experimental results are compared with other related algorithms. The results show that the method can effectively detect network penetration attacks and IoT security vulnerabilities, and that the effectiveness of the attack detection method is greatly improved.
52,761
Title: On interval estimation of the population mean in ranked set sampling Abstract: This article studies interval estimation of the population mean in ranked set sampling. Eight types of confidence intervals are considered which are based on asymptotic theory, resampling methods, or a combination of them. A comprehensive simulation study is conducted to investigate performance of these intervals, and to deal with some limitations of a relevant work in the literature. Lastly, a data example is presented to illustrate the procedures.
52,779
Title: Extracting scientific trends by mining topics from Call for Papers Abstract: Purpose: The purpose of this paper is to present a novel approach for mining scientific trends using topics from Calls for Papers (CFP). The work contributes valuable input for researchers, academics, funding institutes and research administration departments by sharing the trends that set the direction of research paths. Design/methodology/approach: The authors procure an innovative CFP data set to analyse the scientific evolution and prestige of conferences that set scientific trends, using scientific publications indexed in DBLP. Using the Field of Research code 804 from the Australian Research Council, the authors classify 146 conferences (from 2006 to 2015) into different thematic areas by matching the terms extracted from publication titles with the Association for Computing Machinery Computing Classification System. Furthermore, the authors enrich the vocabulary of terms from the WordNet dictionary and the Growbag data set. To measure the significance of terms, the authors adopt the following weighting schemas: probabilistic, gram, relative, accumulative and hierarchal. Findings: The results indicate the rise of "big data analytics" in CFP topics in the last few years. Whereas topics related to "privacy and security" show an exponential increase, topics related to "semantic web" show a downfall in recent years. While analysing publication output in DBLP that matches CFPs indexed in ERA Core A* to C rank conferences, the authors identified that A* and A tier conferences do not solely set publication trends, since B and C tier conferences target similar CFPs. Originality/value: Overall, the analyses presented in this research are useful for the scientific community and research administrators to study research trends and improve data management of digital libraries pertaining to the scientific literature.
53,131
Title: Social network analysis of law information privacy protection of cybersecurity based on rough set theory Abstract: Purpose: The purpose of this paper is to address the information privacy and security of social network users. Mobile internet and social networks are more and more deeply integrated into people's daily life; in particular, under the interaction of the fierce development of the Internet of Things and diversified personalized services, more and more private information of social users is exposed to the network environment, actively or unintentionally. In addition, the large amount of social network data not only brings more benefits to network application providers, but also provides motivation for malicious attackers. Therefore, in the social network environment, research on the privacy protection of user information has great theoretical and practical significance. Design/methodology/approach: In this study, based on social network analysis and combined with the attribute reduction idea of rough set theory, generalized reduction concepts based on multi-level rough sets were proposed from the perspectives of the positive region, information entropy and knowledge granularity of rough set theory. Furthermore, the hierarchical compatible granularity space of the original information system was traversed and the corresponding attribute values were coarsened. The selected test data sets were used in experiments, and the experimental results were analyzed. Findings: The results showed that the algorithm can guarantee the anonymity requirement of data publishing and improve the effect of classification modeling on anonymous data in the social network environment. Originality/value: The proposed algorithm and scheme can effectively protect the privacy of social network data, ensure the availability of the social network graph structure and meet the need for both protection and sharing of user attributes and relational data.
53,145
Title: Evaluation of network reliability for stochastic-flow air transportation network considering discounted fares from airlines Abstract: In the tourism sector, travel agency managers often set meeting customers' demand as a top priority and often get discounts from airlines, where the discounted fare on each flight depends on their booking, whereas customers have requirements in terms of travel time and cost. This study addresses the network reliability, i.e., the probability that the given demand can travel from an origin to a destination within the time threshold while the travel cost after discount does not exceed the budget, to indicate travel agents' ability to satisfy customers' demand under specific constraints. To calculate the network reliability, this work develops an algorithm which determines all lower boundary points via the minimal concept. The air transportation system of the travel agent is constructed as a flow network consisting of sets of nodes (airports) and arcs (flights). On each flight, a contracted airline provides the service with stochastic capacity and corresponding probability. The flow network in this study is thus a typical stochastic-flow air transportation network. A real case demonstrating the proposed algorithm and some management suggestions regarding the network reliability are provided.
53,232
Title: Prediction of hospital length of stay to achieve flexible healthcare in the field of Internet of Vehicles Abstract: The transfer of patients from hospitals to follow-up care and rehabilitation facilities is an important aspect of maintaining the continuity of medical care. In order to achieve flexible healthcare within the field of Internet of Vehicles (IoV) in terms of secure patient transfer and ambulance transport, the whole organization of patients' discharge and transfer should be anticipated, based mostly on a length of stay (LOS) given at the time of inpatient admission. Therefore, the prediction of LOS has a serious impact on influx coordination, bed management, ambulance scheduling and, furthermore, on the financial balance of hospitals. Based on studying medical data, prediction with good accuracy can help hospital managers achieve efficient and robust resource management. The challenge is then how to extract valuable information from medical data, which contains considerable hesitation and uncertainty. In this article, a hesitant fuzzy-rough nearest-neighbor algorithm is proposed and evaluated with real medical data. Hesitation is reflected in the process of determining class labels in our algorithm via hesitant fuzzy relation determination and a hesitant fuzzy-rough similarity measure. The experimental analysis shows that the proposed algorithm has better performance and extensibility.
53,286
Title: Estimating IBNR claim counts using different levels of data aggregation Abstract: The reserving problem in non-life insurance is an applied statistical problem with several computational aspects. In the literature, the focus has mainly been on aggregate reserving techniques, and the chain-ladder method with its extensions has remained the most widely applied claim reserving method. The classical chain-ladder method is regularly applied to annual data, but the question arises whether reserve estimates based on more refined data outperform results obtained from annual data. We investigate whether, and by how much, different data aggregation levels can improve the reserving process. To compare the performance of the classical chain-ladder method and its novel continuous extension, we conduct two simulation studies as well as a case study with insurance data, and use both models to estimate IBNR claim counts at different levels of data aggregation. The results demonstrate that the continuous approach to the chain-ladder method provides in general a minor improvement in the IBNR predictions but holds its predictive power on different types of data. This study highlights data aggregation levels at which the estimates of the classical chain-ladder model outperform those obtained using annual data, providing more insights for establishing accurate loss reserves.
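For context, the classical chain-ladder projection that the study takes as its baseline can be sketched as below on a tiny hypothetical cumulative claim-count triangle; the continuous extension compared in the paper is not reproduced here.

```python
import numpy as np

def chain_ladder_ibnr(triangle):
    """Classical chain-ladder on a cumulative run-off triangle of claim counts.

    triangle : (n, n) array; rows = accident periods, columns = development
               periods; unobserved cells (lower-right) are np.nan.
    Returns the IBNR estimate per accident period (projected ultimate minus latest known).
    """
    tri = np.asarray(triangle, dtype=float)
    n = tri.shape[0]
    # Development factors f_j = sum C_{i,j+1} / sum C_{i,j} over rows with both cells observed.
    factors = []
    for j in range(n - 1):
        rows = ~np.isnan(tri[:, j]) & ~np.isnan(tri[:, j + 1])
        factors.append(tri[rows, j + 1].sum() / tri[rows, j].sum())
    # Project every accident period to ultimate using the factors.
    full = tri.copy()
    for i in range(n):
        for j in range(n - 1):
            if np.isnan(full[i, j + 1]):
                full[i, j + 1] = full[i, j] * factors[j]
    latest = np.array([tri[i, ~np.isnan(tri[i])][-1] for i in range(n)])
    return full[:, -1] - latest

# Hypothetical cumulative claim-count triangle (4 annual accident periods).
nan = np.nan
tri = [[100, 160, 190, 200],
       [110, 175, 205, nan],
       [120, 190, nan, nan],
       [130, nan, nan, nan]]
print(chain_ladder_ibnr(tri).round(1))
```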
53,533
Title: A hybrid VNS-Lagrangean heuristic framework applied on single machine scheduling problem with sequence-dependent setup times, release dates and due dates Abstract: In this work, we propose a hybrid VNS-Lagrangean heuristic applied on the single machine scheduling problem with sequence-dependent setup times, release dates, and due dates. The objective function is the minimization of the total tardiness. The proposed hybrid heuristic is a Lagrangean relaxation integrated with the variable neighborhood search (VNS). The methodology can generate strong bounds, using the information of the Lagrangean multipliers to construct and perturb feasible solutions within the VNS framework. We compare its performance with previous hybrid approaches and find that the upper bounds obtained are optimal for several cases and tight for others. The methodology presents competitive results when compared with previous related works.
53,540
Title: An efficient algorithm for counting Markov equivalent DAGs Abstract: We consider the problem of counting the number of DAGs which are Markov equivalent, i.e., which encode the same conditional independencies between random variables. The problem has been studied, among others, in the context of causal discovery, and it is known that it reduces to counting the number of so-called moral acyclic orientations of certain undirected graphs, notably chordal graphs.
53,574
Title: RFM model for customer purchase behavior using K-Means algorithm Abstract: The objective of this study is to apply business intelligence to identify potential customers by providing relevant and timely data to business entities in the retail industry. The data furnished are based on a systematic study and scientific applications in analyzing sales history and consumer purchasing behavior. The curated and organized data resulting from this study not only enhance business sales and profit, but also provide intelligent insights for predicting consumer purchasing behavior and related patterns. To execute and apply the scientific approach using the K-Means algorithm, a real transactional retail dataset is analyzed. Spread over a specific duration of business transactions, the dataset values and parameters provide an organized understanding of customer buying patterns and behavior across various regions. This study is based on the RFM (Recency, Frequency and Monetary) model and applies dataset segmentation using the K-Means algorithm. The resulting clusters are validated by computing the Silhouette Coefficient. The results obtained for sales transactions are compared across parameters such as Sales Recency, Sales Frequency and Sales Volume.
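For illustration, a minimal RFM-plus-K-Means sketch with silhouette-based validation, in the spirit of the approach described above; the transaction log, column names and the range of k are hypothetical, not the study's dataset or settings.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Hypothetical transaction log: one row per purchase.
tx = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3, 4, 5, 5],
    "days_since_purchase": [5, 40, 300, 2, 10, 30, 90, 15, 400],
    "amount": [20.0, 35.0, 15.0, 120.0, 80.0, 60.0, 10.0, 45.0, 5.0],
})

# Build the RFM table: Recency (days since last purchase),
# Frequency (number of purchases), Monetary (total spend).
rfm = tx.groupby("customer_id").agg(
    recency=("days_since_purchase", "min"),
    frequency=("amount", "size"),
    monetary=("amount", "sum"),
)

X = StandardScaler().fit_transform(rfm)

# Choose k by the Silhouette Coefficient (higher = better-separated segments).
best_k, best_score = None, -1.0
for k in range(2, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score

rfm["segment"] = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(X)
print(best_k, round(best_score, 2))
print(rfm)
```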
53,718
Title: Optimizing non-unit repetitive project resource and scheduling by evolutionary algorithms Abstract: Repetitive project scheduling is a frequently encountered and challenging task in project planning. Researchers have developed numerous methods for the scheduling and planning of repetitive construction projects. However, almost all current repetitive scheduling methods assume identical production units or neglect the priorities of activities. This work presents a new hybrid evolutionary approach, called the fuzzy clustering artificial bee colony approach (FABC), to optimize resource assignment and scheduling for non-unit repetitive projects (NRP). In FABC, the fuzzy c-means clustering technique applies several multi-parent crossover operators to utilize population information efficiently and to improve convergence efficiency. The scheduling subsystem considers the following: (1) the logical relationships among activities throughout the project; (2) the assignment of multiple resources; and (3) the priorities of activities in groups to calculate project duration. Two numerical case studies are analyzed to demonstrate the use of the FABC-NRP model and its ability to optimize the scheduling of non-unit repetitive construction projects. Experimental results indicate that the proposed method yields the shortest average project duration and the smallest deviation from the optimal solution among the benchmark algorithms considered herein and those considered previously. The outcomes will help project managers prepare better schedules for repetitive projects.
53,735
Title: A framework of monitoring water pipeline techniques based on sensors technologies Abstract: Water pipelines are among the most vital structures for conveying fresh water for consumption or irrigation over long distances. However, the major problem in water transportation pipelines is leakage, which can cause loss of water resources, possible human injuries, and environmental pollution. For that reason, these pipelines must be monitored carefully and in real time. In this work, we describe a comprehensive framework for existing water pipeline monitoring techniques based on both wired and wireless networks. Furthermore, we elaborate a comparative overview in terms of the key parameters affecting monitoring models. After evaluating all the existing pipeline monitoring methods, it becomes evident that techniques based on wireless sensor networks offer the most variety and are the best selection for water pipeline monitoring purposes. This study serves as a guideline for selecting the most suitable model when designing a new pipeline monitoring scheme.
53,785
Title: Time series classification through visual pattern recognition Abstract: In this paper, a new approach to time series classification is proposed. It transforms the scalar time series into a two-dimensional space of amplitude (time series values) and a change of amplitude (increment). Subsequently, it uses this representation to plot the data. One figure is produced for each time series. In consequence, the time series classification problem is converted into the visual pattern recognition problem. This transformation allows applying a wide range of algorithms for standard pattern recognition – in this domain, there are more options to choose from than in the domain of time series classification. In this paper, we demonstrated the high effectiveness of the new method in a series of experiments on publicly available time series. We compare our results with several state-of-the-art approaches dedicated to time series classification. The new method is robust and stable. It works for time series of differing lengths and is easy to extend and alter. Even with a baseline variant presented in an empirical study in this paper, it achieves a satisfying classification accuracy. Furthermore, the proposed conversion of raw time series into images that are subjected to feature extraction opens the possibility to apply standard clustering algorithms.
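As an illustration of the transformation described above (scalar series mapped to a two-dimensional amplitude/increment representation and then treated as an image), here is a minimal sketch with synthetic series; the binning choice and plotting details are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
import matplotlib.pyplot as plt

def series_to_image(series, bins=32):
    """Map a scalar time series to a 2-D histogram over (amplitude, increment),
    i.e. the pairs (x_t, x_t - x_{t-1}); the resulting grayscale image can then be
    fed to any standard visual pattern recognition / feature extraction pipeline."""
    x = np.asarray(series, dtype=float)
    amplitude = x[1:]
    increment = np.diff(x)
    img, _, _ = np.histogram2d(amplitude, increment, bins=bins)
    return img / img.max()  # normalize to [0, 1]

# Hypothetical example: two series of different lengths receive the same treatment.
rng = np.random.default_rng(0)
sine = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.1 * rng.normal(size=400)
walk = np.cumsum(rng.normal(size=250))

fig, axes = plt.subplots(1, 2)
for ax, s, title in zip(axes, [sine, walk], ["noisy sine", "random walk"]):
    ax.imshow(series_to_image(s), cmap="gray")
    ax.set_title(title)
plt.savefig("ts_as_images.png")
```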
53,802
Title: An improved intelligent clustering algorithm for irregular wireless network Abstract: Topology management comprises several methods, among which the typical clustering-based method excels at wireless network partitioning. However, most algorithms become load-unbalanced when applied to irregular networks, resulting in an "energy hot zone" phenomenon. This paper proposes an improved intelligent clustering algorithm and applies it to a complex water system environment. Firstly, we build a new energy consumption model for the wireless transmission network and design a genetic clustering strategy based on the minimum energy consumption principle. Secondly, we introduce a P-matrix coding approach that accounts for the search scale, so as to avoid a quadratic relationship between the search space and the data computation. Thirdly, we employ an adaptive genetic operator to enhance the directivity of the search space, and utilize a fuzzy modification operator to enhance the accuracy of cluster head selection, which ensures iterative efficiency. Through numerical simulations, empirical results show better performance than traditional methods in load balancing and clustering efficiency, which can effectively improve the network convergence speed and extend the network lifetime.
53,803
Title: Portfolio selection algorithm under financial crisis: a case study with Bursa Malaysia Abstract: Most studies on online portfolio selection algorithms focus on the theoretical derivation of optimal regret bounds or empirically validate portfolio cumulative return and its variability. This study investigates the behavior of such algorithms under a financial crisis, based on 2008 stock trading in Bursa Malaysia, a market in a small open economy whose trading actions could not counter the spillover trends from the US and Europe. The equity return data generating process under this scenario is an AR process with positive lag, so algorithms that arbitrage between relative growths, such as Anticorrelation and constant rebalancing, do not perform well. In contrast, algorithms that search for an optimal portfolio at each transaction, such as Universal Portfolio and Convex Optimization approaches, are able to reverse the downward trend of the portfolio before market recovery, dampen downside variability and deliver lower extreme returns. We also explain the extendability and practicality of the Convex Optimization approach for the future development of automated trading schemes.
53,869
Title: Emotion recognition of speech signal using Taylor series and deep belief network based classification Abstract: In recent years, one of the multidisciplinary research areas attracting researchers has been emotion recognition. It is an important and challenging process to achieve in emotional interaction. Accordingly, this work introduces an emotion recognition system based on the proposed Taylor series based Deep Belief Network (Taylor-DBN). The noise present in the speech signal is removed through a speech enhancement process, and the signal is then subjected to feature extraction. The features, such as the tonal power ratio, Multiple Kernel Mel Frequency Cepstral Coefficient (MKMFCC) parameters, and the spectral flux, are extracted and provided as training input to the proposed Taylor-DBN classifier for identifying the emotions present in the signal. The experimentation is done with the Berlin database, real database 1, and real database 2. The experimental datasets contain speech signals from different domains and languages, and the performance of the proposed Taylor-DBN shows minimal variation across domains; thus, the proposed model is suitable for various domains. The proposed Taylor-DBN outperformed the comparative models with Accuracy, False Acceptance Rate (FAR), and False Rejection Rate (FRR) values of 0.97, 0.0135, and 0.0165, respectively.
54,004
Title: Robot perceptual adaptation to environment changes for long-term human teammate following Abstract: Perception is one of the several fundamental abilities required by robots, and it also poses significant challenges, especially in real-world field applications. Long-term autonomy introduces additional difficulties to robot perception, including short- and long-term changes of the robot operation environment (e.g., lighting changes). In this article, we propose an innovative human-inspired approach named robot perceptual adaptation (ROPA) that is able to calibrate perception according to the environment context, which enables perceptual adaptation in response to environmental variations. ROPA jointly performs feature learning, sensor fusion, and perception calibration under a unified regularized optimization framework. We also implement a new algorithm to solve the formulated optimization problem, which has a theoretical guarantee to converge to the optimal solution. In addition, we collect a large-scale dataset from physical robots in the field, called perceptual adaptation to environment changes (PEAC), with the aim to benchmark methods for robot adaptation to short-term and long-term, and fast and gradual lighting changes for human detection based upon different feature modalities extracted from color and depth sensors. Utilizing the PEAC dataset, we conduct extensive experiments in the application of human recognition and following in various scenarios to evaluate ROPA. Experimental results have validated that the ROPA approach obtains promising performance in terms of accuracy and efficiency, and effectively adapts robot perception to address short-term and long-term lighting changes in human detection and following applications.
54,089
Title: Basic psychological needs of students in blended learning Abstract: The traditional classroom setting has transitioned from a solely face-to-face, teacher-oriented instructional approach to an integrated, mixed-mode classroom learning dynamic. With this change of educational context, it is imperative to know: are students' basic psychological needs being better met and fulfilled? To address this question, this paper adopted a mixed method to discover if, and how, blended learning meets students' three basic psychological needs, specifically relatedness, competence and autonomy. Findings show that the first two need-constructs of relatedness and competence were fulfilled. The need for autonomy, however, was not being met due to school culture, assessment and the perhaps-habitual adherence to the conventional roles of teachers and students. This study also found that the three aforementioned psychological needs are positively related. In fact, blended learning has provided a new dimension of, and opportunity for, learning interactions for students of differing learning styles. Varieties of academic outputs released other expressions of "self" in many students, which enabled the first need for relatedness to be met. Blended learning outputs could bring a positive spiral of development of recognition from others, and meet the second need of competence later, leading to better identity formation, and ultimately again to relatedness.
54,212
Title: An empirical examination of UTAUT model and social network analysis Abstract: Purpose: The purpose of this paper is to ensure the sustainability of the competitive advantages of internet financial enterprises. In recent years, driven by the two wheels of the financial market and information technology, internet finance has experienced extremely rapid development. Design/methodology/approach: Based on the performance expectancy, effort expectancy, social influence and purchase intention constructs of the UTAUT model, an empirical examination was conducted. Specifically, the authors took user purchasing behavior as the dependent variable and added new factors such as perceived risk, individual innovation and product cognition as independent variables in the model; user gender and experience were added as moderating variables, so as to study the factors that affect purchasing behavior. In addition, the authors also studied the impact of social network friend recommendations on consumers' willingness to purchase. Findings: The research results showed that performance expectancy, effort expectancy, purchase intention, awareness and individual innovation have a positive effect on the behavior of buying financial products, whereas perceived risk has a negative effect on the behavior of buying internet financial products. Additionally, in the context of social networking, social network friend recommendations have a positive impact on consumers' willingness to purchase. Practical implications: This study can enrich the existing theories on the interpretation of the intention to use internet financial products, help internet financial enterprises understand user behavior and demands better, and improve service quality and customer satisfaction. Originality/value: This study provides an empirical examination of the UTAUT model and social network analysis.
54,234
Title: An Arc Flow Formulation To The Multitrip Production, Inventory, Distribution, And Routing Problem With Time Windows Abstract: The multitrip production, inventory, distribution, and routing problem with time windows (MPIDRPTW) is an integrated problem that combines a production and distribution problem, a multitrip vehicle routing problem, and an inventory routing problem. In the MPIDRPTW, a set of customers, which have a time-varying demand during a finite planning horizon, is served by a single production facility. The distribution is accomplished by a fleet of homogeneous vehicles that deliver the customer orders within their specific time windows. Production management has to be done according to the inventories at the facility and at the customers. An exact arc flow model based on a graph is proposed to solve the MPIDRPTW, where the nodes represent instants of time. The main goal of the problem is to minimize the costs associated with the entire system. The proposed approach was implemented and a set of experimental tests were conducted based on a set of adapted instances from the literature.
54,243
Title: Nonlinear regression models with profile nonlinear least squares estimation Abstract: This paper considers efficient estimation for a parametric regression model. Three estimation methods for the parameters are proposed: semi-parametric profile nonlinear least squares estimators, nonlinear least squares estimators and one-step estimators. We study the asymptotic properties of the proposed estimators and further discuss their estimation efficiency. Asymptotic normal confidence intervals and empirical likelihood confidence intervals are also proposed for the parameters. Simulation studies are conducted to compare the proposed estimation methods.
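As a point of reference for the nonlinear least squares baseline and the asymptotic normal intervals mentioned above, here is a minimal sketch with a hypothetical exponential model; the profile and one-step estimators of the paper are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical parametric regression model: y = a * exp(b * x) + noise.
def model(x, a, b):
    return a * np.exp(b * x)

rng = np.random.default_rng(0)
x = np.linspace(0, 2, 100)
y = model(x, 2.0, 1.3) + rng.normal(scale=0.2, size=x.size)

# Nonlinear least squares point estimates and asymptotic normal confidence intervals.
params, cov = curve_fit(model, x, y, p0=[1.0, 1.0])
se = np.sqrt(np.diag(cov))
for name, est, s in zip(["a", "b"], params, se):
    print(f"{name} = {est:.3f}, 95% CI ~ ({est - 1.96 * s:.3f}, {est + 1.96 * s:.3f})")
```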
54,344
Title: Beyond Pairwise Comparisons In Social Choice: A Setwise Kemeny Aggregation Problem Abstract: In this paper, we advocate the use of setwise contests for aggregating a set of input rankings into an output ranking. We propose a generalization of the Kemeny rule where one minimizes the number of k-wise disagreements instead of pairwise disagreements (one counts 1 disagreement each time the top choice in a subset of alternatives of cardinality at most k differs between an input ranking and the output ranking). After an algorithmic study of this k-wise Kemeny aggregation problem, we introduce a k-wise counterpart of the majority graph. It proves useful for dividing the aggregation problem into several sub-problems. We conclude with numerical tests.
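To make the counting rule concrete, a brute-force sketch of the k-wise disagreement count defined above is given below (the profile and alternatives are hypothetical; the paper's algorithmic machinery and k-wise majority graph are not reproduced). For k = 2 this reduces to the classical Kemeny (pairwise) disagreement count.

```python
from itertools import combinations

def kwise_disagreements(input_ranking, output_ranking, k):
    """Count subsets of alternatives of size 2..k whose top choice differs
    between an input ranking and the output ranking
    (rankings are lists from most to least preferred)."""
    pos_in = {a: i for i, a in enumerate(input_ranking)}
    pos_out = {a: i for i, a in enumerate(output_ranking)}
    count = 0
    for size in range(2, k + 1):
        for subset in combinations(input_ranking, size):
            top_in = min(subset, key=pos_in.get)      # top choice in the input ranking
            top_out = min(subset, key=pos_out.get)    # top choice in the output ranking
            if top_in != top_out:
                count += 1
    return count

def kwise_kemeny_score(profile, candidate, k):
    """Total k-wise disagreement of a candidate output ranking with a profile."""
    return sum(kwise_disagreements(r, candidate, k) for r in profile)

# Hypothetical 3-voter profile over alternatives a, b, c, d.
profile = [["a", "b", "c", "d"], ["b", "a", "d", "c"], ["a", "c", "b", "d"]]
print(kwise_kemeny_score(profile, ["a", "b", "c", "d"], k=3))
```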
54,393
Title: Deciding online and offline sales strategies when service industry customers express fairness concerns Abstract: In recent years the range of channels through which customers can acquire products has generally expanded. The Smart-X system plays an essential role in pricing strategies, as fairness concerns can impact the price significantly. When customers in the brick-and-mortar (BM) store have fairness concerns, the firm is induced to lower the product price in the store, resulting in a reduction of total profit. When customers do not have fairness concerns, it is always better for the firm to operate dual channels. Additionally, we demonstrate that a range of service cost coefficients exists and, within certain ranges, these may inhibit the firm from upgrading service levels.
54,483
Title: Simulation and optimization of robotic tasks for UV treatment of diseases in horticulture Abstract: Robotization has been used increasingly in agriculture over the last few decades. It is progressively replacing the human workforce that is deserting the agricultural sector, partly because of the harshness of its activities and the health risks they may present. Moreover, robotization aims to improve the efficiency and competitiveness of the agricultural sector. However, it leads to several research and development challenges regarding robot supervision, control and optimization. This paper presents a simulation and optimization approach for optimizing robotized treatment tasks using type-C ultraviolet radiation in horticulture. The task scheduling optimization problem is formalized, and a heuristic and a genetic algorithm are proposed to solve it. These algorithms are evaluated against an exact method using a multi-agent-based simulation approach. The simulator takes into account the evolution of the disease over time and simulates the execution of treatment tasks by the robot.
54,530
Title: Aspect-based approach to measure performance of financial services using voice of customer Abstract: Banking institutions are a critical part of any country's economy, providing a variety of services to individuals and business entities. Due to the vast number of banks and their services, it is a cumbersome task for a person to choose a bank for a specific need. In this regard, we introduce a computational framework for ranking a set of banking institutions based on people's opinions. The framework is based on aspect-based sentiment analysis and multi-criteria decision making (MCDM) approaches. To evaluate our methodology, we developed a sentiment dataset comprising reviews of four Indian nationalized banks, whose quality is evaluated using 9 aspects/attributes. In order to arrive at a holistic ranking, we employed simple majority voting on the rankings obtained using three multi-criteria decision making methods, namely the analytic hierarchy process, VIKOR and fuzzy multi-attribute decision making. We observed that the final ranking obtained using our method strongly resembles the real-life outreach of the four selected Indian banks.
54,600
Title: Evaluating capability of a process with ordinal responses Abstract: Evaluating the capability of a manufacturing process is an important initial step in any quality improvement program. Methodologies have been reported in literature for evaluating capability of processes with quantitative characteristics of different natures. However, quantifying capability of a process with qualitative responses remains a difficult task. In this paper, a novel method is presented for measuring capability of a process with ordinal responses involving three ordered categories. The proposed method makes use of the process area of proportions and specification area of proportions defined in a two-dimensional plane. Extensive simulation studies reveal that in general, the defined process area does not always follow a normal distribution. However, by using Johnson transformation, it can be transformed to normal distribution and thus capability of a process with ordinal responses can be evaluated easily. Application of the proposed method to a real life problem is presented.
54,632
Title: Dual-feasible functions for integer programming and combinatorial optimization: Algorithms, characterizations, and approximations Abstract: Within the framework of the superadditive duality theory of integer programming, we study two types of dual-feasible functions of a single real variable (Alves et al., 2016). We introduce software that automates testing piecewise linear functions for maximality and extremality, enabling a computer-based search. We build a connection to cut-generating functions in the Gomory-Johnson and related models, complete the characterization of maximal functions, and prove analogues of the Gomory-Johnson 2-slope theorem and the Basu-Hildebrand-Molinaro approximation theorem.
54,658
Title: Cellular automaton model considering the effect of brake light and traffic light at the intersection Abstract: Traffic flow modeling of urban traffic on a single-lane road with one traffic light operating as a bottleneck is the basis of research on urban road networks. By introducing brake light rules into the ChSch model, we propose a new cellular automaton traffic flow model of an intersection that considers the effects of the brake light and the traffic light in order to avoid red-light running and rear-end collisions. The first vehicle of the queue formed in front of the stop line at a red light (FVoQ) is focused on, and the evolution characteristics of intersection traffic flow are analyzed. Results show that the proposed model is more realistic than the ChSch model in the following aspects: (a) The FVoQ decelerates more smoothly with the proposed model; this smooth deceleration reflects the driver's desire for comfortable driving in real traffic. (b) The proportion of great and sudden decelerations by the FVoQ to avoid running a red light is greatly reduced with the proposed model: in the ChSch model the proportion is larger than 80%, while in the proposed model it is less than 1%. Such great and sudden decelerations reflect the aggressive driving pattern of a few drivers in real traffic. (c) The rear-end collision rate caused by the FVoQ's aggressive driving behavior is greatly reduced with the proposed model, which conforms more closely to real traffic.
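For orientation, a minimal Nagel-Schreckenberg-style single-lane cellular automaton with one signalized cell is sketched below. This is only the generic baseline that models such as ChSch build on; it includes neither the ChSch city-network rules nor the brake-light rules proposed in the paper, and all parameters are hypothetical.

```python
import numpy as np

def step(pos, vel, vmax, p_slow, L, light_cell, light_red, rng):
    """One NaSch-style parallel update on a circular single-lane road of length L.
    When the light is red, the signal cell acts like a standing obstacle."""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    n = len(pos)
    new_vel = np.empty(n, dtype=int)
    for i in range(n):
        gap = (pos[(i + 1) % n] - pos[i] - 1) % L          # gap to the vehicle ahead
        if light_red:
            gap_light = (light_cell - pos[i] - 1) % L       # gap to the stop line
            gap = min(gap, gap_light)
        v = min(vel[i] + 1, vmax)                           # 1. accelerate
        v = min(v, gap)                                     # 2. brake to avoid collision
        if v > 0 and rng.random() < p_slow:                 # 3. random slowdown
            v -= 1
        new_vel[i] = v
    return (pos + new_vel) % L, new_vel                     # 4. move

# Hypothetical setup: 30 cars on a 200-cell ring with one signal at cell 100.
rng = np.random.default_rng(0)
L, n_cars = 200, 30
pos = np.sort(rng.choice(L, n_cars, replace=False))
vel = np.zeros(n_cars, dtype=int)
for t in range(300):
    red = (t % 60) < 30                                     # fixed-cycle traffic light
    pos, vel = step(pos, vel, vmax=5, p_slow=0.3, L=L,
                    light_cell=100, light_red=red, rng=rng)
print("mean speed:", vel.mean())
```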
54,835
Title: SIFT-Based Visual Tracking using Optical Flow and Belief Propagation Algorithm Abstract: Perceptible visual tracking acts as an important module for distinct perception tasks of autonomous robots. Better features help ease the decision-making process. Evaluating tracked objects, their dynamic positions and their visual information is quite difficult. Until now, most real-time visual tracking algorithms have suffered from poor robustness and low occurrence when dealing with complex real-world data. In this paper, we propose a more robust and faster visual tracking framework using the scale invariant feature transform (SIFT) and optical flow within a belief propagation (BF) algorithm for efficient processing in real scenarios. Here, a new feature-based optical flow along with the BF algorithm is utilized to compute the affine matrix of a regional center on SIFT key points across frames. Experimental results show that the proposed approach is more efficient and more robust in comparison with state-of-the-art tracking algorithms in more complex scenarios.
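For illustration, a minimal OpenCV sketch of the SIFT-plus-optical-flow-plus-affine pipeline described above (keypoints detected in one frame, tracked with pyramidal Lucas-Kanade, affine motion fitted robustly). It omits the paper's belief propagation stage, and the frame file names are hypothetical.

```python
import cv2
import numpy as np

def track_affine(prev_bgr, next_bgr):
    """Detect SIFT keypoints in one frame, track them into the next frame with
    pyramidal Lucas-Kanade optical flow, and fit a 2x3 affine transform
    describing the motion of the tracked points."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)

    sift = cv2.SIFT_create(nfeatures=500)
    keypoints = sift.detect(prev_gray, None)
    pts = cv2.KeyPoint_convert(keypoints).reshape(-1, 1, 2).astype(np.float32)

    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None, winSize=(21, 21), maxLevel=3)

    good_prev = pts[status.ravel() == 1]
    good_next = next_pts[status.ravel() == 1]

    # Robustly estimate the affine motion of the successfully tracked points.
    matrix, inliers = cv2.estimateAffinePartial2D(good_prev, good_next)
    return matrix

# Usage with two hypothetical consecutive frames:
# M = track_affine(cv2.imread("frame0.png"), cv2.imread("frame1.png"))
```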
54,898
Title: Data privacy preservation in MAC aware Internet of things with optimized key generation Abstract: Recently, progress in the Internet of Things (IoT) has extensively influenced the model of wireless networking. As numerous wireless devices have emerged, new Medium Access Control (MAC) and routing protocols have been introduced to assure end-to-end network efficiency. This paper focuses on the IEEE 802.15.4 MAC standard, where a security field is included in the MAC header. Sensitive-data authentication on this MAC header is based on introducing an optimal authentication key using a partially homomorphic encryption scheme, the ElGamal public key cryptosystem. More importantly, in this work, the ElGamal public key cryptosystem for securing IoT data is enhanced by also generating an optimal private key. For this optimal key selection, the paper establishes a new hybrid optimization algorithm that combines the concepts of the Lion Algorithm (LA) and the Cuckoo Search Algorithm (CS), named the Cuckoo Mated-Lion Algorithm (CM-LA). Finally, the performance of the proposed strategy is compared with other state-of-the-art models with respect to various attacks, convergence and key sensitivity analysis.
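For background on the "partially homomorphic" property mentioned above, a toy ElGamal sketch is given below showing its multiplicative homomorphism; it does not include the paper's CM-LA key optimization, and the small prime used here is for demonstration only.

```python
import secrets

# Toy demonstration of ElGamal's multiplicative homomorphism. The prime below
# (the Mersenne prime 2**127 - 1) is far too small for real security; actual
# deployments use standardized groups of 2048 bits or more.
P = 2**127 - 1
G = 3

def keygen():
    x = secrets.randbelow(P - 2) + 1          # private key
    return x, pow(G, x, P)                    # (private, public = g^x mod p)

def encrypt(pub, m):
    k = secrets.randbelow(P - 2) + 1          # fresh ephemeral key per message
    return pow(G, k, P), (m * pow(pub, k, P)) % P    # ciphertext (c1, c2)

def decrypt(priv, c):
    c1, c2 = c
    s = pow(c1, priv, P)                      # shared secret g^{xk}
    return (c2 * pow(s, P - 2, P)) % P        # m = c2 * s^{-1} (Fermat inverse)

priv, pub = keygen()
c_a, c_b = encrypt(pub, 42), encrypt(pub, 10)
# Multiplicative homomorphism: the component-wise product of two ciphertexts
# decrypts to the product of the plaintexts.
c_prod = ((c_a[0] * c_b[0]) % P, (c_a[1] * c_b[1]) % P)
assert decrypt(priv, c_prod) == 42 * 10
print("homomorphic product verified")
```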
54,926
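For readers unfamiliar with the underlying primitive, a textbook ElGamal sketch over a toy-sized prime is given below, including the multiplicative homomorphic property the abstract refers to. The key-optimization layer (CM-LA) is not reproduced, and the parameters are illustrative only, far too small for real security.

import random

p = 104729         # small well-known prime, illustration only
g = 2
x = random.randrange(2, p - 1)          # private key
h = pow(g, x, p)                        # public key

def encrypt(m):
    k = random.randrange(2, p - 1)
    return pow(g, k, p), (m * pow(h, k, p)) % p

def decrypt(c):
    c1, c2 = c
    return (c2 * pow(c1, p - 1 - x, p)) % p      # c2 * c1^(-x) mod p

m1, m2 = 12, 34
c1, c2 = encrypt(m1), encrypt(m2)
prod = ((c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p)  # componentwise ciphertext product
assert decrypt(prod) == (m1 * m2) % p              # Enc(m1)*Enc(m2) decrypts to m1*m2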
Title: Modeling maternal infant HIV transmission having variable hazard rates with two lag time distributions Abstract: Mother-to-infant transmission of HIV can occur in utero, intra-partum, or post-partum. Postnatal HIV transmission through contaminated breast milk is of particular concern. Knowledge of the timing of perinatal transmission of HIV would be valuable for the determination and evaluation of preventive treatments. The present article proposes models that simultaneously estimate the risks of perinatal transmission together with the sensitivity of the screening tests for HIV infection, allowing variable hazard rates. The methods are illustrated with data from a randomized controlled study conducted in South Africa.
54,934
Title: Robustly assigning unstable items Abstract: We study the robust assignment problem where the goal is to assign items of various types to containers without exceeding container capacity. We seek an assignment that uses the fewest number of containers and is robust, that is, if any item of type $$t_i$$ becomes corrupt causing the containers with type $$t_i$$ to become unstable, every other item type $$t_j \ne t_i$$ is still assigned to a stable container. We begin by presenting an optimal polynomial-time algorithm that finds a robust assignment using the minimum number of containers for the case when the containers have infinite capacity. Then we consider the case where all containers have some fixed capacity and give an optimal polynomial-time algorithm for the special case where each type of item has the same size. When the sizes of the item types are nonuniform, we provide a polynomial-time 2-approximation for the problem. We also prove that the approximation ratio of our algorithm is no lower than 1.813. We conclude with an experimental evaluation of our algorithm.
55,045
Title: Discrimination with unidimensional and multidimensional item response theory models for educational data Abstract: Achievement tests are used to characterize the proficiency of higher-education students. Item response theory (IRT) models are applied to these tests to estimate the ability of students (as a latent variable in the model). In order for quality IRT parameters to be estimated, especially ability parameters, it is important that the appropriate number of dimensions is identified. Through a case study, based on a statistics exam for students in higher education, we show how the dimensions and other model parameters can be chosen in a real situation. Our model choice is based both on empirical evidence and on background knowledge of the test. We show that dimensionality influences the estimates of the item parameters, especially the discrimination parameter, which provides information about the quality of the item. We perform a simulation study to generalize our conclusions. Both the simulation study and the case study show that multidimensional models have the advantage of better discriminating between examinees. We conclude from the simulation study that it is safer to use a multidimensional model than a unidimensional one if it is unknown which model is the correct one.
55,156
Title: Correlation coefficient-based measure for checking symmetry or asymmetry of a continuous variable with additive distortion Abstract: This paper studies how to estimate and test the symmetry of a continuous variable under the additive distortion measurement errors setting. The unobservable variable is distorted in an additive fashion by an observed confounding variable. A direct plug-in estimation procedure and a two-step direct plug-in estimation procedure for the correlation coefficient are proposed to measure the symmetry or asymmetry of a continuous variable, and empirical likelihood based confidence intervals are constructed to test the symmetry of the unobserved variable. The asymptotic properties of the proposed estimators and test statistics are investigated. We conduct Monte Carlo simulation experiments to examine the performance of the proposed estimators and test statistics. These methods are applied to analyze a real dataset for illustration.
55,167
Title: Analytic vision on fog computing for effective load balancing in smart grids Abstract: The Internet of Things generates a massive amount of data through sensors and other physical devices, which causes latency and delay in the processing and response times of smart grid (SG) services. To increase the efficiency of SGs, cloud computing provides a pay-per-use approach to transmit the collected data and enhances the scalability and functionality of end devices. Moreover, in load balancing (LB), resource utilization, and distribution mechanisms, even milliseconds matter where delays or jitter are not acceptable. Fog computing, an extension of the cloud, provides computing, networking, storage, and communication at the edge of the network and can overcome the existing challenges of SGs. In this article, a new hybrid model on a highly virtualized platform is proposed. Three LB algorithms, throttled, round robin, and particle swarm optimization, are analyzed and compared. Moreover, this article also highlights some cost minimization and effective utilization approaches to distribute resources efficiently to provide services in SGs with respect to LB algorithms.
55,172
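To make the first two of the compared policies concrete, a simplified sketch is given below: round robin cycles through fog nodes regardless of load, while a throttled-style policy only assigns a task to a node with spare capacity. Node names and capacities are illustrative, and a production throttled balancer would keep an index table rather than scan linearly.

from collections import deque

nodes = {"fog-1": 3, "fog-2": 2, "fog-3": 4}      # fog node -> assumed max concurrent tasks
load = {n: 0 for n in nodes}
rr = deque(nodes)

def round_robin():
    node = rr[0]
    rr.rotate(-1)                                  # cycle through nodes regardless of load
    return node

def throttled():
    for n, cap in nodes.items():                   # first node with spare capacity
        if load[n] < cap:
            return n
    return None                                    # all saturated: queue the task or forward to cloud

for task in range(6):
    n = throttled()
    if n is not None:
        load[n] += 1
print(load)                                        # {'fog-1': 3, 'fog-2': 2, 'fog-3': 1}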
Title: A Bayesian adaptive design for addressing correlated late-onset outcomes in phase I/II randomized trials of drug combinations in oncology Abstract: An advantage of phase I/II clinical trials for anticancer drugs is that safety and efficacy can be simultaneously evaluated in a single study. However, in order to benefit from this advantage, it is necessary for the safety and efficacy variables to comprise late-onset outcomes. In addition, current cancer treatments often involve therapies combining multiple drugs. Consequently, increasing emphasis has been placed on the importance of randomized clinical trials, which randomize the drugs being tested. This study proposes specific methods for addressing late-onset outcomes for safety and efficacy in phase I/II randomized trials of anticancer drug combinations. Data from patients in cases where the evaluation of safety and efficacy is not complete are considered missing data. A model taking into consideration the missing data mechanism, relationship between safety and efficacy, and correlation between observation points was used to adaptively determine the dose combination to assign to the next cohort. Simulation studies suggested that the proposed method is capable of supporting clinical trials in practice with realistic study periods. We believe the proposed method may flexibly adapt to practical requirements and contribute to advances in drug development.
55,366
Title: The t linear mixed model: model formulation, identifiability and estimation Abstract: The robustness of the t linear mixed model (tLMM) has been proved and exploited in many applications. Various publications emerged with the aim of proving superiority with respect to traditional linear mixed models, extending to more general settings and proposing more efficient estimation methods. However, little attention has been paid to the mathematical properties of the model itself and to the evaluation of the proposed estimation methods. In this paper we perform an in-depth analysis of the tLMM, evaluating a direct maximum likelihood estimation method via an intensive simulation study and investigating some identifiability properties. The theoretical findings are illustrated through an application to a dataset collected from a sleep trial.
55,428
Title: A predictive model for phishing detection Abstract: Nowadays, many anti-phishing systems are being developed to identify phishing content in online communication systems. Despite the availability of myriad anti-phishing systems, phishing continues unabated due to inadequate detection of zero-day attacks, excessive computational overhead, and high false rates. Although machine learning approaches have achieved promising accuracy rates, the choice and the performance of the feature vector limit their detection effectiveness. In this work, an enhanced machine learning-based predictive model is proposed to improve the efficiency of anti-phishing schemes. The predictive model consists of a Feature Selection Module which is used for the construction of an effective feature vector. These features are extracted from the URL, webpage properties, and webpage behaviour using an incremental component-based system, and the resultant feature vector is presented to the predictive model. The proposed system uses Support Vector Machine and Naïve Bayes classifiers trained on a 15-dimensional feature set. The experiments were based on datasets consisting of 2541 phishing instances and 2500 benign instances. Using 10-fold cross-validation, the experimental results indicate a remarkable performance with a 0.04% false positive rate and 99.96% accuracy for both the SVM and NB predictive models.
55,445
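A hedged sketch of the evaluation protocol described here, two classifiers on a 15-dimensional feature vector scored with 10-fold cross-validation, is shown below with scikit-learn. X and y are random placeholders with the stated shapes; the actual URL and webpage features are not reproduced.

import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(5041, 15))                          # 2541 phishing + 2500 benign rows (shapes only)
y = np.concatenate([np.ones(2541), np.zeros(2500)])      # 1 = phishing, 0 = benign

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
nb = GaussianNB()
for name, clf in [("SVM", svm), ("NB", nb)]:
    acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
    print(name, acc.mean())                              # mean 10-fold accuracy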
Title: First-order random coefficient INAR process with dependent counting series Abstract: In this paper, we propose a first-order random coefficient integer-valued autoregressive process with dependent counting series. Some moments and stationary ergodicity of the process are established. The maximum-likelihood estimators of the parameters of interest are presented. We conduct some simulation studies to assess the performance of our method. An example about crime data is provided for practical application.
55,552
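For orientation, the sketch below simulates a first-order random coefficient INAR process with ordinary (independent) binomial thinning, X_t = alpha_t o X_{t-1} + eps_t. The dependent counting series that distinguishes the article's model is not reproduced; the Beta-distributed coefficient and Poisson innovations are assumptions.

import numpy as np

rng = np.random.default_rng(1)

def simulate_rcinar1(T, a=2.0, b=5.0, lam=1.0, x0=3):
    x = np.empty(T, dtype=int)
    x[0] = x0
    for t in range(1, T):
        alpha = rng.beta(a, b)                       # random thinning coefficient alpha_t
        survivors = rng.binomial(x[t - 1], alpha)    # binomial thinning of the previous count
        x[t] = survivors + rng.poisson(lam)          # innovation term
    return x

series = simulate_rcinar1(500)
print(series[:10], series.mean())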
Title: The pause/play button actor-network: lecture capture recordings and (re)configuring multi-spatial learning practices Abstract: Lecture recording is an increasingly common practice in UK universities, whereby audio, video, and multimedia content from lecture theatres can be captured and distributed online. Despite a large body of recent lecture capture literature, much of the empirical research adopts positivist paradigms, which overlooks the complex and unpredictable nature of teaching and learning. Addressing this knowledge gap, this exploratory case study adopts sociomaterial approaches, specifically perspectives from the domain of actor-network theory (ANT), to view learning technologies as complex assemblages involving heterogeneous human and non-human entities or actors. This paper explores the entanglements involved in enacting online pedagogy and learning across spatiotemporal dimensions using trace ethnography and visualisation mapping. Examining the student-led study practices revealed that multitasking and fluid task switching, between contrasting networks and spaces, was a significant activity during the playback of lecture recordings. Exploring an innocuous and ubiquitous practice, such as video pausing, affords nuanced perspectives into the sociomaterial entanglements involved in enacting study practices. Moreover, adopting multimodal sensitivities reveals how often overlooked modes, such as iconography, can become actors within an assemblage. This may offer new insights into how modes help produce or stabilise configurations and advance efforts in attending to the non-human within actor-networks.
55,720
Title: Gastrointestinal polyp detection through a fusion of contourlet transform and Neural features Abstract: A gastrointestinal polyp (GIP) is an abnormal growth of tissue in the digestive organs. Identifying these polyps from endoscopy videos or images is a demanding task needed to reduce the future risk of gastrointestinal cancer. This paper proposes a diagnosis method for polyps using a fusion of the contourlet transform and a fine-tuned VGG19 pre-trained model applied to enhanced endoscopic 224 x 224 patch images. The study evaluated several fine-tuned models (AlexNet, ResNet50, VGG16, VGG19) as well as a few models trained from scratch, with the fine-tuned VGG19 working best. The research also used Principal Component Analysis (PCA) and Minimum Redundancy Maximum Relevance (MRMR) dimensionality reduction methods to collect intuitive features for classification; in Support Vector Machine (SVM) based polyp detection, the former (PCA) performs better. In addition, a proposed algorithm marks the polyp region from identified polyp patches and uses a binning strategy to process video. A set of experiments performed on standard public datasets shows improved performance, with an accuracy of 99.59%, sensitivity of 99.74%, and specificity of 99.44%. This work can be instrumental for radiologists in the diagnosis of polyps during real-time endoscopy.
55,803
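One branch of the described pipeline, deep features from a pre-trained VGG19 on 224x224 patches reduced with PCA and classified with an SVM, can be sketched as below. The contourlet-transform branch, the fine-tuning step and the real endoscopic data are not reproduced; X_img and y are placeholders.

import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

X_img = np.random.rand(100, 224, 224, 3).astype("float32") * 255.0   # patch images (placeholder)
y = np.random.randint(0, 2, size=100)                                 # polyp / non-polyp labels (placeholder)

backbone = VGG19(weights="imagenet", include_top=False, pooling="avg")  # 512-d feature per patch
feats = backbone.predict(preprocess_input(X_img), verbose=0)

clf = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
clf.fit(feats, y)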
Title: On the approximability of Time Disjoint Walks Abstract: We introduce the combinatorial optimization problem Time Disjoint Walks (TDW), which has applications in collision-free routing of discrete objects (e.g., autonomous vehicles) over a network. This problem takes as input a digraph $$G$$ with positive integer arc lengths, and $$k$$ pairs of vertices that each represent a trip demand from a source to a destination. The goal is to find a walk and delay for each demand so that no two trips occupy the same vertex at the same time, and so that a min–max or min–sum objective over the trip durations is realized. We focus here on the min–sum variant of Time Disjoint Walks, although most of our results carry over to the min–max case. We restrict our study to various subclasses of DAGs, and observe that there is a sharp complexity boundary between Time Disjoint Walks on oriented stars and on oriented stars with the central vertex replaced by a path. In particular, we present a poly-time algorithm for min–sum and min–max TDW on the former, but show that min–sum TDW on the latter is NP-hard. Our main hardness result is that for DAGs with max degree $$\varDelta \le 3$$ , min–sum Time Disjoint Walks is APX-hard. We present a natural approximation algorithm for the same class, and provide a tight analysis. In particular, we prove that it achieves an approximation ratio of $$\varTheta (k/\log k)$$ on bounded-degree DAGs, and $$\varTheta (k)$$ on DAGs and bounded-degree digraphs.
55,895
Title: Student-created video: an active learning approach in online environments Abstract: The purpose of this study was to investigate student-created video as an active learning approach in an online environment in order to inform instructional practices for student-created video in STEM. Data analyzed in this study included pre-service teachers' (N = 107) 1-minute videos and pre- and post-surveys. The findings of this qualitative study indicated that student-created video was an active learning activity that contributed to an increase in students' perceived STEM content knowledge, improved perceptions of self-efficacy, and evidence of student engagement across the behavioral, affective, and cognitive domains. Themes derived from the participants' perceptions included: perceived self-efficacy, novelty or usefulness of creating short video, time to design and create video, and content and technical knowledge. Student-created video as an active approach to learning can be included in STEM education to increase STEM knowledge and foster integrative twenty-first-century skills. Practical implications for educators when designing student-created video assignments include (a) following a video development model; (b) providing extra time for content acquisition and revisions; and (c) incorporating peer evaluations.
55,996
Title: Machine learning in human resource system of intelligent manufacturing industry Abstract: A hybrid model based on the latent factor model (LFM) and a deep forest algorithm, namely the multi-Grained Cascade Forest (gcForest), was established to optimise and integrate the key recruitment links in the human resource system of the intelligent manufacturing industry. The LFM mainly analysed the browsing, application, collection, and other behavioural data of job-seeking users, while gcForest mainly analysed the matching degree between users and positions. The results showed that the hybrid model based on LFM and gcForest played a significant role in the recruitment of employees through the human resource system of the intelligent manufacturing industry.
56,118
Title: A simulation study comparing model fit measures of structural equation modeling with multivariate contaminated normal distribution Abstract: Structural equation modeling (SEM) is a very popular analytical method for quantitative studies in many disciplines, and its goal is to determine the model that best fits the data. In this study, the data sets were generated from the multivariate contaminated normal distribution. The aim of the study was to determine the goodness-of-fit measures least affected by data structure and sample size. ADF, ML, and GLS estimation methods were applied under multivariate nonnormality. Under the ADF method, AGFI, GFI, and RMSEA were not affected by sample size and correlation changes, while CFI, NFI, and NNFI values were affected by both. RMR always took values outside the acceptable fit limits for all sample sizes and correlation values. Consequently, if the data are multivariate nonnormal, the sample size should be greater than 250 units and the AGFI, GFI, and RMSEA fit measures should be used.
56,121
Title: Designing translucent learning analytics with teachers: an elicitation process Abstract: Learning Analytics (LA) systems can offer new insights into learners' behaviours through analysis of multiple data streams. There remains however a dearth of research about how LA interfaces can enable effective communication of educationally meaningful insights to teachers and learners. This highlights the need for a participatory, horizontal co-design process for LA systems. Inspired by the notion of translucence, this paper presents LAT-EP (Learning Analytics Translucence Elicitation Process), a five-step process to design for the effective use of translucent LA systems. LAT-EP was operationalised in an authentic multimodal learning analytics (MMLA) study in the context of teamwork in clinical simulation. Results of this process are illustrated through a series of visual proxies co-designed with teachers, each presenting traces of social, physical, affective and epistemic evidence captured while teams of student nurses practised clinical skills in a simulated hospital setting.
56,177
Title: On a new class of binomial ridge-type regression estimators Abstract: This paper is about developing a new class of two-parameter shrinkage estimators for the binomial model under the multicollinearity problem. The proposed class includes the ridge estimator (RE) and the maximum likelihood estimator (MLE) as special cases. The necessary and sufficient conditions that ensure the superiority of the proposed estimator over the MLE and the RE in terms of the mean squared error (MSE) matrix were obtained. The performance of new estimators is evaluated and compared with the MLE and some REs through simulation under moderate to strong correlations. A real application supports the simulation results.
56,184
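For context, one common way this family is written in the binomial (logistic) ridge literature is recalled below; the notation follows the usual Schaefer-type formulation and is not necessarily the article's exact definition. Here $$\hat{W}$$ is the diagonal matrix of estimated variances $$\hat{\pi}_i(1-\hat{\pi}_i)$$ and $$k \ge 0$$, $$0 \le d \le 1$$ are shrinkage parameters.

$$ \hat{\beta}_{RE}(k) = (X^{\top}\hat{W}X + kI)^{-1}X^{\top}\hat{W}X\,\hat{\beta}_{MLE}, \qquad \hat{\beta}(k,d) = (X^{\top}\hat{W}X + kI)^{-1}(X^{\top}\hat{W}X + kdI)\,\hat{\beta}_{MLE}, $$

with $$d = 0$$ recovering the ridge estimator and $$d = 1$$ (or $$k = 0$$) recovering the MLE, matching the statement that both arise as special cases.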
Title: Promoting eco-agritourism using an augmented reality-based educational resource: a case study of aquaponics Abstract: Agritourism is a type of ecological tourism that combines agricultural activities with tourism. There is a growing interest in this industry worldwide, which poses both opportunities and challenges for social and natural environments. Grounded on the theory of situated learning, we developed an augmented reality-based educational resource to promote eco-agritourism - namely, to promote agritourism while encouraging tourists to be environmentally responsible. This study presents the results of an educational experiment that compared the scores of students who were instructed using a combination of the augmented reality-based educational resource and the professor's guidance with those of students who were instructed only by the professor. The study analyzed the learning outcomes and the degree of users' motivation to learn about aquaponics in an agritourism farm. The participants were 40 volunteers who had no previous knowledge about aquaponics. The results show that the resource has a medium effect size on learning gains and a large effect on knowledge retention. Likewise, the data indicates that the resource increased users' motivation to learn and practice responsible agritourism. These results seem to indicate that augmented reality-based resources are appropriate for promoting eco-agritourism and encourage tourists to develop a positive bond with nature.
56,266
Title: QSST: A Quranic Semantic Search Tool based on word embedding Abstract: Retrieving information from the Quran is an important field for Quran scholars and Arabic researchers. There are two types of Quran searching techniques: semantic (concept-based) and keyword-based. Concept-based search is a challenging task, especially in a complex corpus such as the Quran. This paper presents a concept-based searching tool (QSST) for the Holy Quran consisting of four phases. In the first phase, the Quran dataset is built by manually annotating Quran verses based on the ontology of Mushaf Al-Tajweed. The second phase is word embedding, which generates feature vectors for words by training a Continuous Bag of Words (CBOW) architecture on a large Quranic and Classical Arabic corpus. The third phase computes the feature vectors of both the input query and the Quranic topics. Finally, the most relevant verses are retrieved by computing the cosine similarity between the topic and query vectors. The performance of the proposed QSST is measured by comparing results against Mushaf Al-Tajweed; the resulting precision, recall, and F-score were 76.91%, 72.23%, and 69.28%, respectively. In addition, the results were evaluated by three Islamic experts and the average precision was 91.95%. Finally, QSST results were compared with recent existing tools, and QSST outperformed them.
56,320
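The embedding-and-ranking idea can be sketched as follows with gensim: train a CBOW model on a tokenised corpus, average word vectors for the query and for each verse or topic, and rank by cosine similarity. The token lists below are tiny placeholders, not the annotated Quranic corpus used by QSST.

import numpy as np
from gensim.models import Word2Vec

corpus = [["الحمد", "لله", "رب", "العالمين"], ["الرحمن", "الرحيم"]]   # placeholder tokenised sentences
model = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, sg=0)  # sg=0 -> CBOW

def doc_vec(tokens):
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

query = ["الرحمن"]
ranked = sorted(corpus, key=lambda v: cosine(doc_vec(query), doc_vec(v)), reverse=True)
print(ranked[0])          # most similar verse/topic under this toy corpus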
Title: The gradient test and its finite sample size properties in a conditional maximum likelihood and psychometric modeling context Abstract: In asymptotic theory, the gradient test proposed by Terrell is a recent likelihood-based hypothesis testing approach which can be considered as an alternative to the well-established trinity of likelihood ratio, Rao score, and Wald tests. The gradient test has not yet entered into the mainstream of applied statistics. This is particularly true for the psychometric context. This research discusses a novel application of the gradient test within the conditional maximum likelihood and the Rasch modeling framework. It also investigates some of its finite sample size properties and compares it with the classical trinity of chi square tests by conducting an extensive Monte Carlo study. The results confirm that the gradient test has its pros and cons.
56,328
Title: Blockchain for smart grid Abstract: The demand for electricity increases rapidly along with the advancement of the industrial age. To ensure efficient distribution of electricity, maintain low losses and a high level of quality, and secure the electricity supply, the smart grid concept was proposed. The concept enables small, individual-scale producers to generate electricity and sell it to the grid. However, it also adds complexity to the existing system, such as how transactions between these generators and consumers are conducted, verified, and recorded. This paper proposes the blockchain as a tool to manage transactions in the smart grid. Transactions are performed with smart contracts, and the network acts as a transaction verifier. The blockchain provides immutability of the transactions, which ensures that every transaction between generators and consumers will always be executed. It also provides an immutable transaction history, which can be used for audits or for resolving transaction disputes.
56,520
Title: No teacher is an island: technology-assisted personal learning network (PLN) among English language teachers in Turkey Abstract: In this era of technology, teachers need to adapt to changes in their profession and learn faster than before. They do not want to be restricted to their isolated classrooms and schools but wish to continue learning anywhere, anytime. When a teacher attends conferences, joins teaching communities, and discusses new techniques and innovations with colleagues, that teacher is building a personal learning network (PLN). Face-to-face PLNs have lately been extended to technology-assisted networks, as technology has led to new ways of learning and socializing. Teaching is not a solitary act, and teachers need other teachers to be complete. In this respect, social media such as Twitter, Facebook, and blogs are suitable virtual platforms for personal learning. In the literature, very limited research has been conducted regarding PLNs as a whole concept, rather than individual digital platforms, among English language teachers in Turkey. Responding to this gap in the literature, this study was conducted through a questionnaire with 302 participants. The open questions were analyzed thematically, while frequencies and percentages were computed for the closed questions. The findings show that English language teachers in Turkey utilize technology-assisted PLNs as a sharing and learning platform and expand the boundaries of their immediate contacts.
56,580
Title: Specializing parallel data structures for Datalog Abstract: We see a resurgence of Datalog in a variety of applications, including program analysis, networking, data integration, cloud computing, and security. The large scale and complexity of these applications require efficient management of data in relations. Hence, Datalog implementations require new data structures for managing relations that (1) are parallel, (2) are highly specialized for Datalog evaluation, and (3) can accommodate different workloads depending on the application's memory consumption and computational efficiency requirements. In this article, we present a data structure framework for relations that is specialized for shared-memory parallel Datalog implementations such as the souffle Datalog compiler. The data structure framework permits a portfolio of different data structures depending on the workload. We also introduce two concrete parallel data structures for relations, designed for various workloads. Our benchmarks demonstrate a speed-up of up to 6x by using a portfolio of data structures compared with using a B-tree alone, showing the advantage of our data structure framework.
56,631
Title: Analysis of production and organisational management efficiency of Chinese family intelligent manufacturing enterprises based on IoT and machine learning technology Abstract: This paper analyses how to improve the production efficiency and organisational management level of Chinese family intelligent manufacturing enterprises in an environment of big data and information integration, and ultimately how to enhance the economic benefits of these enterprises. By reforming the organisational structure of intelligent manufacturing enterprises, product life cycle information can be detected and shared in real time through sensor networks in the intelligent manufacturing environment. In addition, the enterprise intelligent manufacturing evaluation system is optimised based on a neural network model from machine learning.
56,700
Title: Exploring the impact of initial herd on overfunding in equity crowdfunding Abstract: Outcome of equity crowdfunding campaigns often exceeds original fundraising goals, leading to market inefficiency. This undesirable phenomenon of overfunding garnered little attention in past studies. Synthesizing extant literature on crowdfunding and herding, we identified the initial herd made visible by funding progress indicator as the main cause of overfunding. The impact of the initial herd can be quantified by three dimensions: maturity, intensity, and persistency. We then advance and validate a research model for examining how dimensions of the initial herd affect overfunding in equity crowdfunding. Findings from this study can shed light on plausible remedies for the overfunding issue.
56,794
Title: Grid integrated photovoltaic system with fuzzy based maximum power point tracking control along with harmonic elimination Abstract: Efficient extraction and conversion of solar power is an important research area in the field of renewable energy integration. Although the literature is rich in power extraction, conversion, and harmonic elimination techniques, problems remain with maximum power point tracking (MPPT) under partial shading conditions, photovoltaic (PV) mismatching, power fluctuation in the steady state, and high total harmonic distortion (THD) values. The article makes two contributions. First, it proposes a fuzzy embedded MPPT controller to extract the maximum available PV power under time-varying environmental conditions. The controller utilizes seven linguistic variables and fuzzy sets along with a 49-rule base to reduce power fluctuation and power losses in the steady state and to track quickly under time-varying temperature and solar irradiance. The performance of the proposed fuzzy-logic control (FLC) 49-rule base is compared with FLC 25-rule and perturb & observe based MPPT controllers. Second, for fulfilling the load demand with lower switching losses and THD, a particle swarm optimization (PSO) based selective harmonic elimination (SHE) technique is applied to a cascaded half-bridge multilevel inverter. The SHE technique uses switching angles found by solving the nonlinear transcendental equations obtained from the Fourier series expansion of the stepped output voltage waveform. The proposed research prefers meta-heuristic techniques such as PSO over classical techniques because of their high convergence rate, short run time, low complexity, and ability to obtain lower THD.
56,883
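A generic particle swarm optimisation loop of the kind used to search SHE switching angles is sketched below. The objective is a stand-in sphere function, not the transcendental harmonic equations of the article, and the swarm parameters are common default values rather than the paper's settings.

import numpy as np

rng = np.random.default_rng(0)

def pso(objective, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, lo, hi)                                   # keep particles in bounds
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, val = pso(lambda a: np.sum((a - 0.5) ** 2), dim=3, bounds=(0.0, np.pi / 2))
print(best, val)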
Title: Accounting for matching structure in post-matching analysis of observational studies Abstract: Matching designs are commonly used in social science and health research with observational data, as they are robust to outcome model misspecification and have an intuitive interpretation similar to blocked randomization designs. Estimating the population average treatment effect with propensity score adjustment is very popular. From a practical perspective, however, it is not clear whether the post-matching analysis should adjust for the matching structure. Analytical strategies with and without accounting for the matching design have appeared in the literature. For continuous outcomes, the implication mainly concerns variance estimation, but for binary outcomes, the non-collapsibility of the odds ratio adds another layer of complexity in choosing between estimation strategies. We have conducted extensive simulation studies to compare several matching estimators and the propensity score weighting estimator for both continuous and binary outcomes. In particular, we consider three measures for binary outcomes: risk difference, relative risk, and odds ratio. Our simulation results suggest that statistical methods accounting for the matching structure are more advantageous and that, among binary effect measures, the odds ratio tends to have higher power than other measures. We also apply the different estimation strategies to a U.S. trauma care database to examine the mortality difference between trauma centers and non-trauma centers.
56,964
Title: Shrinkage estimator for scale parameter of gamma distribution Abstract: In this article, we propose a shrinkage estimator for the scale parameter of the gamma distribution when prior information is available and compare it, in terms of efficiency, with the minimum mean square error (MMSE) version of the usual estimator. The proposed shrinkage estimator has a smaller mean square error (MSE) than the MMSE estimator when the prior estimate is good. The properties of the shrinkage estimator are studied in terms of bias and mean square error. Numerical illustrations are carried out to shed light on the performance of the proposed method of estimation relative to other conventional estimators.
56,989
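For orientation, the generic form of a shrinkage estimator that pulls the usual estimator toward a prior guess is recalled below; the particular weight and the MMSE comparator used in the article are not reproduced. Here $$\hat{\theta}$$ is the usual estimator of the scale parameter, $$\theta_0$$ the prior point guess, and $$0 \le k \le 1$$ the shrinkage weight.

$$ \hat{\theta}_{sh} = k\,\hat{\theta} + (1 - k)\,\theta_0 , $$

so that the estimator gains when $$\theta_0$$ is close to the true value and pays a bias penalty otherwise.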
Title: Another proposal about the new two-parameter estimator for linear regression model with correlated regressors Abstract: In this article, we present a new general class of biased estimators which includes some popular estimators as special cases and discuss its properties for multiple linear regression models when regressors are correlated. This proposal is based on some modification in the existing new two-parameter estimator. Performance of the proposed estimator is compared with many of the leading estimators, using the mean squared error matrix criterion, mitigating the adverse effects of multicollinearity. An extensive simulation study has been provided with a numerical example to illustrate the superiority of the proposed estimator.
57,051
Title: Inference for dependent competing risks from bivariate Kumaraswamy distribution under generalized progressive hybrid censoring Abstract: In this paper, a competing risks model is considered in which the causes of failure are dependent. When the latent failure times follow the Marshall-Olkin bivariate Kumaraswamy model, inference for the unknown model parameters is studied under generalized progressive hybrid censoring. Maximum likelihood estimates of the unknown parameters are established, and their existence and uniqueness are proved. Approximate confidence intervals are constructed via the observed Fisher information matrix. Moreover, Bayes estimates and credible intervals of the unknown parameters are presented based on a flexible Gamma-Dirichlet prior, and the importance sampling method is used to compute the associated estimates. A simulation study and a lifetime example are given for illustration purposes.
57,058
Title: Quantitative representation of perception and evaluation method for service quality in university library under 4-D space Abstract: Purpose The purpose of this paper is to evaluate the service quality of university libraries more accurately and dynamically and to improve the service efficiency of the library. The paper provides a quantified representation of library service quality and overcomes a shortcoming of traditional library evaluation systems, which do not consider the reader's identity and cannot evaluate reader groups separately. In addition, according to the function configuration of each department of the library, a relation between the library evaluation parameters and the library's organizational structure is built. Based on the evaluation results and this chain of relations, suggestions for improving library service can be put forward, thereby improving the quality of library service and management efficiency. Design/methodology/approach In this paper, a four-dimensional (4-D) representation method is put forward to express four kinds of parameters, namely, the category of participants, the number of people evaluated, the rating level and the weight of parameters, which are expressed by chromaticity and a three-dimensional column coordinate space. Considering existing evaluation methods such as LibQUAL+TM, the content, grades and weights of the evaluation parameters are modified. Using the volume and the equivalent number of people under this evaluation system, the evaluation grade can be quantified and the total results can be evaluated quantitatively. Findings The evaluation model proposed in this paper is a 4-D system based on content parameters that evaluates the number of participants, score segments, evaluation content weights and reader information. It gives full consideration to the good advice of many scholars and combines the actual operation of domestic libraries with successful experience from abroad. Both the undergraduate and teacher sampling evaluation results and their analysis in this paper show the accuracy and credibility of the method. Originality/value Although the satisfaction index model works well abroad, readers of university libraries in China differ from those in foreign countries, and the multi-level professional evaluation used in foreign instruments would produce larger errors in practical applications. The traditional four-level method based on Chinese education evaluation (excellent, good, pass and fail) has reached consensus among teachers and students in practical application, and it is easy to achieve consistency. Therefore, this paper also adopts a four-level evaluation, that is, very satisfied, satisfied, generally satisfied and very dissatisfied. The embedded application will be able to perform dynamic evaluation and thus can be used in China. This provides an effective new method for the evaluation of service quality in university libraries.
57,132
Title: Investigation of finite-sample properties of robust location and scale estimators Abstract: When the experimental data set is contaminated, we usually employ robust alternatives to common location and scale estimators such as the sample median and Hodges-Lehmann estimators for location and the sample median absolute deviation and Shamos estimators for scale. It is well known that these estimators have high positive asymptotic breakdown points and are Fisher-consistent as the sample size tends to infinity. To the best of our knowledge, the finite-sample properties of these estimators, depending on the sample size, have not well been studied in the literature. In this paper, we fill this gap by providing their closed-form finite-sample breakdown points and calculating the unbiasing factors and relative efficiencies of the robust estimators through the extensive Monte Carlo simulations up to the sample size 100. The numerical study shows that the unbiasing factor improves the finite-sample performance significantly. In addition, we provide the predicted values for the unbiasing factors obtained by using the least squares method which can be used for the case of sample size more than 100.
57,223
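The four estimators discussed, median and Hodges-Lehmann for location and (scaled) median absolute deviation and Shamos for scale, are easy to compute directly, as sketched below. The constants 1.4826 and 1.048 are the usual asymptotic consistency factors for the normal model; the finite-sample unbiasing factors tabulated in the article are not included.

import numpy as np
from itertools import combinations

def hodges_lehmann(x):
    pairs = [(a + b) / 2 for a, b in combinations(x, 2)]      # pairwise Walsh averages
    return np.median(pairs)

def mad(x):
    return 1.4826 * np.median(np.abs(x - np.median(x)))        # scaled median absolute deviation

def shamos(x):
    diffs = [abs(a - b) for a, b in combinations(x, 2)]        # pairwise absolute differences
    return 1.048358 * np.median(diffs)

rng = np.random.default_rng(0)
x = rng.normal(10, 2, size=50)
x[:3] = 100.0                                                  # contaminate a few observations
print(np.median(x), hodges_lehmann(x), mad(x), shamos(x))      # all stay near the bulk of the data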
Title: Mis-specification analysis of the impact of covariates on the diffusion coefficient in Wiener degradation process Abstract: The diffusion coefficient is often assumed to be unrelated to stress in the Wiener degradation process. However, there is evidence that the diffusion coefficients under different accelerated stress levels differ in an accelerated degradation test, where a higher accelerated stress can lead to larger diffusion. Therefore, in contrast to existing models in which the diffusion coefficient is assumed constant or unrelated to the stress levels, this paper allows both the drift and diffusion coefficients to be defined as functions of the accelerated stresses. Then, the consequence of model mis-specification, in which the proposed model is wrongly fitted by its special case, is analyzed based on the Kullback-Leibler distance. In addition, the influence of each degradation parameter on the relative bias and relative variation of the mean time to failure and the 100p-th percentile of the first hitting time is analyzed by simulation. Finally, some important differences between the two models are verified by two case studies.
57,330
Title: From packing rules to cost-sharing mechanisms Abstract: Bin packing is one of the most fundamental problems in resource allocation. Most research on the classical bin packing problem has focused on the design and analysis of centralized packing rules. However, such rules are often infeasible to implement in distributed and decentralized environments, for the sake of both unavailability of global information and incentive compatibility. In this paper, we revisit the cost-sharing mechanisms for selfish bin packing (SBP) in decentralized environments. We first propose a simple and intuitive mechanism with $$PoA=1.5$$ . We then show that for a large class of mechanisms for the SBP, 1.5 is actually a lower bound of PoA. Based on this, we propose new rules for the SBP and design a new mechanism with $$PoA \le 22/15\approx 1.467$$ .
57,390
Title: Arbitrarily Partitionable $$\{2K_2, C_4\}$$-Free Graphs Abstract: A graph G = (V, E) of order n is said to be arbitrarily partitionable if for each sequence $$\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_p)$$ of positive integers with $$\lambda_1 + \cdots + \lambda_p = n$$, there exists a partition $$(V_1, V_2, \ldots, V_p)$$ of the vertex set V such that $$V_i$$ induces a connected subgraph of order $$\lambda_i$$ in G for each $$i \in \{1, 2, \ldots, p\}$$. In this paper, we show that a threshold graph is arbitrarily partitionable if and only if it admits a perfect matching or a near perfect matching. We also give a necessary and sufficient condition for a $$\{2K_2, C_4\}$$-free graph being arbitrarily partitionable, as an extension of a result of Broersma, Kratsch and Woeginger [Fully decomposable split graphs, European J. Combin. 34 (2013) 567-575] on split graphs.
57,402
Title: Offloading Optimization and Time Allocation for Multiuser Wireless Energy Transfer Based Mobile Edge Computing System. Abstract: This paper considers a wireless energy transfer based mobile edge computing system, where wireless devices can be charged by the radio-frequency signals broadcast by a hybrid access point. With Mobile Edge Computing (MEC), wireless devices can execute their computation tasks locally or offload them to the MEC server using a time division multiple access protocol. Based on this system, this paper studies the problem of system energy efficiency maximization by joint optimization of computing time allocation, energy consumption, local computing capacity, and task offloading. A Tabu search based system energy efficiency maximization algorithm is proposed for solving the optimization problem. Finally, the performance of the proposed algorithm is evaluated by extensive simulation experiments, and the results verify its effectiveness.
57,536
Title: Computing an $$L_1$$ shortest path among splinegonal obstacles in the plane Abstract: We reduce the problem of computing an $$L_1$$ shortest path between two given points s and t in the given splinegonal domain $$\mathcal {S}$$ to the problem of computing an $$L_1$$ shortest path between two points in the polygonal domain. Our reduction algorithm defines a polygonal domain $$\mathcal {P}$$ from $$\mathcal {S}$$ by identifying a coreset of points on the boundaries of splinegons in $$\mathcal {S}$$ . Further, it transforms a shortest path between s and t among polygonal obstacles in $$\mathcal {P}$$ to a shortest path between s and t among splinegonal obstacles in $$\mathcal {S}$$ . When $$\mathcal {S}$$ is comprised of h pairwise disjoint simple splinegons defined with a total of n vertices, excluding the time to compute an $$L_1$$ shortest path among simple polygonal obstacles in $$\mathcal {P}$$ , our reduction algorithm takes $$O(n + h \lg {n} + (\lg {h})^{1+\epsilon })$$ time. Here, $$\epsilon $$ is a small positive constant [resulting from the triangulation of the free space using Bar-Yehuda and Chazelle (Int J Comput Geom Appl 4(4):475–481, 1994)]. For the special case of $$\mathcal {S}$$ comprising of concave-out splinegons, we have devised another reduction algorithm. This algorithm does not rely on the structures used in the algorithm (Inkulu and Kapoor in Comput Geom 42(9):873–884, 2009) to compute an $$L_1$$ shortest path in the polygonal domain. Further, we have characterized few of the properties of $$L_1$$ shortest paths among splinegons which could be of independent interest.
57,632
Title: Using ant colony optimisation for improving the execution of material requirements planning for smart manufacturing Abstract: In this paper, the ant colony optimisation (ACO) algorithm is used: the records in the supply and demand documents of material requirements planning (MRP) are treated as the cities a salesperson must visit, so that artificial ants can move between them. Finding the shortest path through all cities corresponds to finding the shortest path through the MRP supply and demand master file, which reduces system execution time and improves the efficiency of the personnel involved. Experimental results show that, compared with other algorithms, the ACO algorithm can effectively shorten the deployment time of MRP and greatly improve execution efficiency.
57,709
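A minimal ant colony optimisation loop for a shortest-tour problem of the kind the article maps MRP records onto is sketched below. The distances, pheromone parameters and problem size are illustrative, not the paper's configuration.

import numpy as np

rng = np.random.default_rng(0)
n = 8
pts = rng.random((n, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(n)   # +eye avoids div-by-zero
tau = np.ones((n, n))                                                     # pheromone trails
alpha, beta, rho, n_ants = 1.0, 2.0, 0.5, 20
best_len, best_tour = np.inf, None

for it in range(100):
    tours = []
    for _ in range(n_ants):
        tour = [0]
        while len(tour) < n:
            i = tour[-1]
            mask = np.ones(n, bool)
            mask[tour] = False                                            # exclude visited cities
            w = (tau[i] ** alpha) * ((1.0 / dist[i]) ** beta) * mask
            tour.append(rng.choice(n, p=w / w.sum()))                     # probabilistic city choice
        length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
        tours.append((length, tour))
        if length < best_len:
            best_len, best_tour = length, tour
    tau *= (1 - rho)                                                      # pheromone evaporation
    for length, tour in tours:
        for k in range(n):
            tau[tour[k], tour[(k + 1) % n]] += 1.0 / length               # deposit on used edges
print(best_len, best_tour)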
Title: Filtered-OFDM with channel coding based on T-distribution noise for underwater acoustic communication Abstract: The bit error rate (BER) is typically high in the underwater acoustic (UWA) channel, which is characterized by high propagation delay and poor quality of communications. UWA noise statistics do not follow the standard Gaussian distribution; it has been proven through field tests that the noise follows the t-distribution in Malaysian shallow waters. In this paper, a study of UWA error performance is presented based on the t-distribution, and expressions for the error performance are derived for binary phase shift keying (BPSK) and quadrature phase shift keying (QPSK) modulation orders. Moreover, the new waveform, filtered orthogonal frequency division multiplexing (F-OFDM), is adopted in UWA with turbo and convolutional codes. The simulation results show that at a BER of $$10^{-3}$$, the signal-to-noise ratio (SNR) is 6 dB and 11 dB for BPSK and QPSK, respectively. The turbo code performance appears to be superior to that of the convolutional code. Furthermore, the results indicate that F-OFDM significantly improves the power spectral density to approximately 120 dBW compared with OFDM.
57,729
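A Monte Carlo sketch of BPSK bit-error rate under Student-t distributed noise, the noise model motivated by the shallow-water measurements, is given below. The degrees of freedom and the SNR normalisation are assumptions, and no filtering or channel coding from the article is reproduced.

import numpy as np

rng = np.random.default_rng(0)
nu, n_bits = 5.0, 200_000
bits = rng.integers(0, 2, n_bits)
symbols = 2.0 * bits - 1.0                          # BPSK mapping {0,1} -> {-1,+1}

for snr_db in (0, 3, 6, 9):
    snr = 10 ** (snr_db / 10)
    noise_var = 1.0 / snr                           # unit symbol energy assumed
    t_scale = np.sqrt(noise_var * (nu - 2) / nu)    # scale so the t noise has variance noise_var
    noise = t_scale * rng.standard_t(nu, n_bits)
    decisions = (symbols + noise) > 0               # hard decision at the receiver
    ber = np.mean(decisions != bits.astype(bool))
    print(snr_db, "dB  BER =", ber)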
Title: On designing a new control chart for Rayleigh distributed processes with an application to monitor glass fiber strength Abstract: In this study, a Shewhart-type control chart, namely the V-R chart, is proposed to monitor a process that follows the Rayleigh distribution. The proposed chart is implemented to monitor the single scale parameter of the Rayleigh distributed process. We study the proposed chart under two types of control limits, namely probability limits and sigma limits. The performance of the proposed chart is assessed using the power function. In addition, we investigate run length properties including the average run length (ARL), the standard deviation of the run length (SDRL), and the median run length (MDRL). The analysis of the run length profile reveals that the proposed V-R chart outperforms existing charts, including the traditional Shewhart control chart and V control charts, under the Rayleigh distribution. The construction of the newly proposed chart is demonstrated using simulated data. Finally, a real application of the proposed chart, along with an existing chart, is presented to evaluate the strength of glass fiber in a manufacturing process.
57,732
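To make the probability-limit idea concrete, the sketch below builds limits for the generic Rayleigh V statistic, V = sum(X_i^2)/(2n); under Rayleigh(sigma), n*V/sigma^2 follows a Gamma(n, 1) distribution, so the limits are gamma quantiles. This follows the standard V-chart construction rather than necessarily the exact V-R chart of the article, and sigma0, n and alpha are illustrative.

import numpy as np
from scipy import stats

def v_chart_limits(sigma0, n, alpha=0.0027):
    lcl = sigma0 ** 2 * stats.gamma.ppf(alpha / 2, a=n) / n        # lower probability limit
    ucl = sigma0 ** 2 * stats.gamma.ppf(1 - alpha / 2, a=n) / n    # upper probability limit
    return lcl, ucl

rng = np.random.default_rng(0)
sigma0, n = 2.0, 5
lcl, ucl = v_chart_limits(sigma0, n)
for _ in range(10):                                                # ten in-control subgroups
    x = rng.rayleigh(scale=sigma0, size=n)
    v = np.sum(x ** 2) / (2 * n)
    print(round(v, 3), lcl < v < ucl)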
Title: RMCriteria: a decision making support system package for R Abstract: In this paper we present the RMCriteria package, which supports decision making in the R software environment for statistical computing and graphics. RMCriteria is a decision support system that implements all the PROMETHEE family methods, together with graphical and sensitivity analyses, assisting analysts in carrying out their own decision processes and facilitating the analysis of decisions.
57,740
Title: A column generation and a post optimization VNS heuristic for the vehicle routing problem with multiple time windows Abstract: The Vehicle Routing Problem with Multiple Time Windows (VRPMTW) is a generalization of the Vehicle Routing Problem (VRP), where the customers have one or more time windows in which they can be visited. In this paper, we propose a Column Generation (CG) algorithm and a post optimization heuristic based on a Variable Neighborhood Search (VNS) to provide both lower and upper bounds for the cost of optimal solutions to VRPMTW. As in CG algorithms for VRP, the master problem is based on a Weighted Set Covering formulation. However, due to the multiple time windows, the pricing subproblem is an Elementary Shortest Path Problem with Multiple Time Windows and Capacity Constraints, which is more difficult to solve than the classical Elementary Shortest Path Problem with a Single Time Window and Capacity Constraints. Computational experiments were performed on 594 instances generated from classical Solomon instances with up to 17 customers. They showed that CG was able to produce lower bounds, within one hour of running time, for 66.7% of the instances. Besides, the post optimization heuristic was able to improve the solution provided by the VNS heuristic in 28.9%, finding integer optimal solutions for 39.9% of the instances. Moreover, for the instances where lower bounds are known, the average optimality gap was 6.0% on average.
57,831
Title: Achieving novelty and efficiency in business model design: Striking a balance between IT exploration and exploitation Abstract: Digitalization is encouraging an increasing number of firms to design their business models based on information technology (IT) for exploring business opportunities. This study examines the effect of the balance (imbalance) between IT exploration and exploitation on novelty-centered business model design (NBMD) and efficiency-centered business model design (EBMD). Using matched data from the IT-related executive and chief executive officer of 183 firms, this study finds that IT exploration is positively related to NBMD and EBMD and that IT exploitation is positively related to only EBMD. NBMD and EBMD are significantly related to firm performance. Polynomial regression and response surface analysis reveal that NBMD and EBMD first decline and then rise with the increase in the level of balance between IT exploration and exploitation. NBMD declines but EBMD rises with the increase in the level of imbalance between IT exploration and exploitation. This study contributes to the understanding on leveraging IT capability to influence the business model design and firm performance.
57,897
Title: Masking data: a solution to social desirability bias in paired comparison experiments Abstract: This paper deals with the reduction of evasive answering bias in paired comparison studies by providing a masking mechanism to the judges. Specifically, the Warner (1965) masking design is applied to the Bradley and Terry (1952) paired comparison model. For estimating the worth parameters and preference probabilities, the Bayesian method of estimation is applied. A simulation study is performed to examine the behavior of the Bayes estimates and the effect of the masking parameter. It is observed that the preference ordering is not disturbed when the number of judges is moderate to large, and the preference ordering is robust with respect to the extent of masking when the number of judges is large. These findings are also supported by a numerical study using real data.
58,064
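As background for the model being masked, the classical Bradley-Terry worth parameters can be estimated with the standard MM iteration, sketched below. The Warner-type masking layer and the Bayesian estimation used in the article are not reproduced; wins[i, j], the number of times item i was preferred to item j, is illustrative data.

import numpy as np

wins = np.array([[0, 7, 9],
                 [3, 0, 6],
                 [1, 4, 0]], dtype=float)
n = wins.shape[0]
pi = np.ones(n) / n                                  # initial worth parameters
for _ in range(200):
    new = np.empty(n)
    for i in range(n):
        num = wins[i].sum()                          # total wins of item i
        den = sum((wins[i, j] + wins[j, i]) / (pi[i] + pi[j]) for j in range(n) if j != i)
        new[i] = num / den                           # standard MM (Zermelo) update
    pi = new / new.sum()
print(pi)                                            # estimated worths, summing to one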
Title: A note on the limitations of the CAT procedure with application to mixed-effects models Abstract: In the recent past, the CAT procedure introduced by Pal, Lim, and Ling has been successfully applied to obtain improved testing procedures in numerous applications. Having seen such results, practitioners may resort to the CAT procedure in all testing problems, assuming that improvement is assured in all applications. To avoid such myths, in this article, we present an important class of applications, where the CAT test performs poorly, and then discuss the type of applications where CAT procedure could be accepted or avoided. However, this does not mean that it is not possible to develop improved tests by taking the CAT approach, as we show in this article by employing the LRT statistic instead of ML/REML-based CAT tests, as authors of the original CAT article also now advocate. In fact, in terms of the Type I error, LRT-based CAT test performed well among tests we studied, except when k, the number of groups in one-way layout is small, in which case the generalized p-value-based tests can be employed. We believe this note will encourage further research to take full benefits of the CAT approach, in such problems as higher way ANOVA and mixed-effects regression models, for which generalized tests are currently available.
58,099
Title: A three-dimensional model of student interest during learning using multimodal fusion with natural sensing technology Abstract: A student's interest level can strongly affect the learning process, and thus, can be considered an important factor in the effort to improve learning. Presently, student interest is primarily assessed by administering questionnaires or conducting case analyses. However, this method cannot provide timely feedback in the learning environment to allow an instructor to make immediate improvements for a more effective learning process. Hence, we designed an intelligent analysis method to analyse student interest using multimodal natural sensing technology. In this study, we present a three-dimensional (3D) learning interest model designed from an educational psychology perspective which comprehensively describes student interest in a learning environment: cognitive attention, learning emotion and thinking activity. Multimodal data are compiled by head pose estimation, facial expression recognition and interactive data collection and interpreted based on this model. Then, multimodal data fusion is conducted to comprehensively gauge student interest. Experimental testing revealed that the proposed 3D model of learning interest could objectively reflect student interest, providing an effective basis for improving teaching in real time.
58,236
Title: A conceptual model for integrating affordances of mobile technologies into task-based language teaching Abstract: Due to a range of different affordances, mobile technologies can be used pedagogically for language teaching and learning. However, the connection of technology to education needs to be grounded in theoretical frameworks and methodological principles. Task-based language teaching (TBLT), as an optimal approach to language teaching and learning, provides rationale and methodological principles for the application of mobile technologies. The Conversational Framework (Laurillard, D. (2007). Pedagogical forms for mobile learning: framing research questions. In N. Pachler (Ed.), Mobile learning: towards a research agenda (pp. 153-175). London: UCL university.), on the other hand, can be adapted as a framework for the design of learning process as well as for the test of affordances of mobile technologies. Based on these components, a conceptual model for integrating affordances of mobile technologies into TBLT is proposed in this study. Such a model will serve practice as well as research on the educational uses of mobile technologies in the domain of language teaching and learning. Specific procedures for the implementation of the model are illustrated and demands for teachers as well as students are discussed.
58,382
Title: A computationally fast estimator for semiparametric multinomial choice model Abstract: In this article, based on the cyclical monotonicity moment inequalities implied by the random utility choice model, we propose a computationally fast estimator for semiparametric multinomial choice models. The term semiparametric refers to the fact that we do not specify a particular functional form for the error term in the random utility function. The proposed estimators are consistent. Compared with estimators developed by the cyclic monotonicity method, simulations show that the proposed estimators have a great advantage in running time.
58,446
Title: The effectiveness of robot training in special education: a robot training model proposal for special education Abstract: In this research, robot training was conducted with teacher candidates in the department of special education, and the effectiveness of robot training was analyzed at the end of the training program. The design of the study is phenomenology. The study has been carried out with 44 teacher candidates from special education, selected by a purposeful sampling method. A nine-week robot training program was conducted for the teacher candidates. At the end of the training program, teacher candidates were interviewed, and the data obtained were subjected to in vivo and descriptive coding. The results of the study show that the robot training program is supportive for teacher candidates in many aspects. In the light of gathered data, a four-phased model of robot training for special education is proposed and discussed.
58,531
Title: An alternative measure of positive correlation for bivariate time series Abstract: Nadarajah et al. (2018) have recently proposed an alternative measure for positive correlation between two variables of interest say X and Y. This takes into account the idea of concordance or association between the two variables. They have made use of extensive illustrations including a series of confidence intervals to show the applicability and interpretation of their proposed measure. In this work, we propose an alternative measure for positive correlation between integrated time series processes. We support this idea through simulated examples and real data illustrations from climatology and finance.
58,577
Title: A class of nonparametric mode estimators Abstract: A class of nonparametric mode estimators is proposed. While the widely applied half sample mode estimators use the diameter of a set as the "measure of concentration", the proposed estimators instead use certain types of "variance measures". In some cases, the new estimators perform better than the half sample mode and half range mode estimators. Strong consistency is proved for an estimator from the class.
58,758
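For reference, the classical half-sample mode, the diameter-based comparator mentioned above rather than the proposed variance-based estimators, can be sketched as below; the final step (averaging the last three or fewer points) is a simplification of the usual tie-handling rules.

import numpy as np

def half_sample_mode(x):
    x = np.sort(np.asarray(x, dtype=float))
    while len(x) > 3:
        h = (len(x) + 1) // 2                               # half-sample size
        widths = x[h - 1:] - x[:len(x) - h + 1]             # widths (diameters) of all half samples
        j = int(np.argmin(widths))                          # shortest half sample
        x = x[j:j + h]
    return x.mean()                                         # simplified terminal step

rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(0, 1, 300), rng.normal(6, 0.5, 60)])
print(half_sample_mode(sample))                             # close to the dominant mode near 0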
Title: Extropy of order statistics applied to testing symmetry Abstract: Results on the extropy of order statistics are applied to constructing a test statistic for symmetry. This test statistic has the advantage that we do not need to estimate the center of symmetry. Through a simulation study, the power of the test is computed and then compared with competing tests. The power results show that, for a wide variety of asymmetric alternative distributions, the proposed test is much more powerful than the other tests. A real dataset is presented and analyzed.
58,782
Title: Variable selection in finite mixture of generalized estimating equations Abstract: This paper develops a new method to estimate the parameters in mixture models. Traditionally, parameter estimation in mixture models is performed from a likelihood point of view by exploiting the expectation maximization (EM) method. In this paper, however, we utilize the least squares principle. Based on this principle, we propose an iterative algorithm called Iterative Weighted Least Squares (IWLS) to estimate the parameters. Through a comparative study, we demonstrate the superiority of our method over the EM method, showing that IWLS outperforms EM in both accuracy and the number of iterations required for convergence.
58,783