Dataset columns:
id: string, 7 characters
title: string, 3 to 578 characters
abstract: string, 0 to 16.7k characters
keyphrases: sequence of strings
prmu: sequence of labels (one per keyphrase)
3jY81cP
service quality of multimedia streams and objects
This paper deals with problems concerning the unbroken and broken quality of performance of multimedia presentations. The influence of the duration of multimedia object delays on their synchronization, expressed by membership functions, is studied.
[ "multimedia stream", "quality of performance", "multimedia object", "delay", "break of quality of performance", "synchronization relation" ]
[ "P", "P", "P", "P", "R", "M" ]
4ens66i
Damage assessed by wavelet scale bands and b-value in dynamical tests of a reinforced concrete slab monitored with acoustic emission
The Continuous Wavelet Transform was applied to acoustic emission signals from dynamic tests conducted on a reinforced concrete slab with a shaking table. The Cumulative Acoustic Emission Energy was compared with the Cumulative Dissipated Energy of the tested structure. The 45-64 kHz frequency band was assigned to cracking of concrete through the evolution of the maximum values of the CWT coefficients. The evolution of the b-value in the successive seismic simulations revealed the inception of loss of bond between the reinforcing steel and the surrounding concrete.
[ "damage", "b-value", "reinforced concrete", "continuous wavelet transform", "acoustic emission signals" ]
[ "P", "P", "P", "P", "P" ]
11U1NFg
Extension of hereditary classes with substitutions
Let G and H be graphs. A substitution of H in G instead of a vertex v ∈ V(G) is the graph G(v → H), which consists of the disjoint union of H and G - v together with the additional edges joining every vertex of H to every neighbour of v in G. For a hereditary class of graphs, its substitutional closure is defined as the class consisting of all graphs which can be obtained from graphs in the original class by repeated substitutions. Given an arbitrary hereditary class for which a characterization in terms of forbidden induced subgraphs is known, we propose a method of constructing forbidden induced subgraphs for its substitutional closure.
[ "hereditary class of graphs", "substitutional closure", "homogeneous set", "stability number" ]
[ "P", "P", "U", "U" ]
4RtyxGg
Evaluation of trap creation and charging in thin SiO2 using both SCM and C-AFM
Conductive atomic force microscopy (C-AFM) and scanning capacitance microscopy (SCM) are used in this work to characterize trap creation and charge trapping in ultra-thin SiO2. It is found that C-AFM working at normal operational voltages causes severe damage and subsequent negative charge trapping in the oxide. Permanent hillocks are seen in the topography of stressed regions. The height of these features is determined by the applied voltage rather than by the electric field. Electrostatic repulsion between tip and sample and Si epitaxy underneath the oxide are the two most probable causes of this feature. The immediate physical damage caused in the oxide during high-field C-AFM measurements is a possible showstopper for using C-AFM to investigate differences in pristine interface states. SCM operates at lower voltages, yielding less oxide damage, and is able to indicate interface state density variations through hysteresis in the dC/dV vs. V curves.
[ "c-afm", "charge trapping", "mos", "gate dielectric" ]
[ "P", "P", "U", "U" ]
3gaSwEa
Segmentation of ultrasound images of thyroid nodule for assisting fine needle aspiration cytology
The incidence of thyroid nodules is very high and generally increases with age. A thyroid nodule may presage the emergence of thyroid cancer. Most thyroid nodules are asymptomatic, which makes thyroid cancer different from other cancers. A thyroid nodule can be completely cured if detected early. Therefore, it is necessary to correctly classify a thyroid nodule as benign or malignant. Fine needle aspiration cytology is a recognized early diagnosis method for thyroid nodules. There are still some limitations in fine needle aspiration cytology, such as difficulty in localization and insufficient cytology specimens. The accuracy of ultrasound diagnosis of thyroid nodules improves constantly, and it has become the first choice for auxiliary examination of thyroid nodular disease. If medical imaging technology and fine needle aspiration cytology could be combined, the diagnostic rate of thyroid nodules would be improved significantly.
[ "ultrasound images", "thyroid", "fine needle aspiration cytology", "image segmentation", "normalized cut", "anisotropic diffusion" ]
[ "P", "P", "P", "R", "U", "U" ]
1dme4dX
Adaptive feedback linearizing control of nonholonomic wheeled mobile robots in presence of parametric and nonparametric uncertainties
In this paper, the integrated kinematic and dynamic trajectory tracking control problem of wheeled mobile robots (WMRs) is addressed. An adaptive robust tracking controller for WMRs is proposed to cope with both parametric and nonparametric uncertainties in the robot model. At first, an adaptive nonlinear control law is designed based on the input-output feedback linearization technique to get asymptotically exact cancellation of the parametric uncertainty in the WMR parameters. The designed adaptive feedback linearizing controller is modified by two methods to increase the robustness of the controller: (1) a leakage modification is applied to modify the integral action of the adaptation law, and (2) an adaptive robust controller is added to the linear control law in the outer loop of the adaptive feedback linearizing controller. The adaptive robust controller is designed such that it estimates the unknown constants of an upper bounding function of the uncertainty due to friction, disturbances and unmodeled dynamics. Finally, the proposed controller is developed for a type (2, 0) WMR and simulations are carried out to illustrate the robustness and tracking performance of the controller.
[ "feedback linearization", "parametric uncertainty", "trajectory tracking", "wmr", "adaptive robust" ]
[ "P", "P", "P", "P", "P" ]
4Yy2cj:
Modelling the asymmetric volatility in hog prices in Taiwan: The impact of joining the WTO
Prices in the hog industry in Taiwan are determined according to an auction system. There are significant differences in hog prices before, during and after joining the World Trade Organization (WTO). The paper models growth rates and volatility in daily hog prices in Taiwan from 23 March 1999 to 30 June 2007, which enables an analysis of the effects of joining the WTO. The empirical results have significant implications for risk management and policy in the agricultural industry. The three sub-samples for the periods before, during and after joining the WTO display significantly different volatility persistence of symmetry, asymmetry and leverage, respectively.
[ "hog prices", "joining the wto", "asymmetry", "leverage", "conditional volatility models" ]
[ "P", "P", "P", "P", "M" ]
-JWRwCj
computing convolutions by reciprocal search
In this paper we show how certain geometric convolution operations can be computed efficiently. Here efficiently means that our algorithms have running time proportional to the input size plus the output size. Our convolution algorithms rely on new optimal solutions for certain reciprocal search problems, such as finding intersections between blue and green intervals, and overlaying convex planar subdivisions.
[ "computation", "search", "paper", "operability", "algorithm", "timing", "input", "size", "optimality", "intersection", "interval", "subdivision" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
34uL-H3
A genetic approach to the synthesis of composite right/left-handed transmission line impedance matching sections
A genetic approach for the synthesis of composite right/left-handed (CRLH) transmission line impedance matching sections is presented. Continuous parameter genetic algorithm (CPGA) is used for the synthesis. Examples for a uniform CRLH transmission line impedance matching section and a nonuniform CRLH transmission line impedance matching section are given.
[ "transmission lines", "impedance matching", "composite right/left-handed (crlh)", "genetic algorithm" ]
[ "P", "P", "P", "P" ]
:KvKRoH
Toward an interdisciplinary science of consumption
Scientific perspectives on the drive to consume were presented in Ann Arbor, Michigan, at the conference entitled The Interdisciplinary Science of Consumption: Mechanisms of Allocating Resources Across Disciplines. The meeting, which took place May 12-15, 2010, and was sponsored by Rackham Graduate School and the Department of Psychology at the University of Michigan, included presentations on human, primate, and rodent models and spanned multiple domains of consumption, including reward seeking, delay discounting, food-sharing reciprocity, and the consumption and display of material possessions across the life span.
[ "consumption", "resource allocation", "neureconomics", "hoarding", "decision making" ]
[ "P", "R", "U", "U", "U" ]
4qUTCXq
Hybrid bio-inspired techniques for land cover feature extraction: A remote sensing perspective
Recent advances in the theoretical and practical implementations of biogeography have led to the exploration of new bio-inspired techniques which can prove to be the building blocks of hybrid bio-inspired techniques. This aspect was discovered while exploring bio-inspired intelligence for developing generic optimization algorithms that can be adapted for performing the given land cover feature extraction task at hand. Certain bio-inspired techniques, when integrated with existing optimization techniques, can drastically improve their optimization capability, leading to better feature extraction. In this paper, we propose a generic architectural framework of a hybrid biologically inspired technique that is characterized by its capability to adapt according to the database of expert knowledge for a more efficient, focused and refined feature extraction. Since our hybrid feature extractor possesses intelligence for selective cluster identification for application of either of the constituent techniques, which is in turn based on an inefficiency analysis, we term our classifier the hybrid bio-inspired pattern analysis based intelligent classifier. Our hybrid classifier combines the strengths of the modified BBO Technique for land cover feature extraction with the Hybrid ACO2/PSO Technique for a more refined land cover feature extraction. The algorithm has been tested for the remote sensing application of land cover feature extraction, where we have applied it to the 7-Band carto-set satellite image (size 472 × 546) of the Alwar area in Rajasthan; it gives far better feature extraction results than the original biogeography based land cover feature extractor [20] and other soft computing techniques such as ACO, Hybrid PSO-ACO2, Hybrid ACO-BBO Classifier, Fuzzy sets, and Rough-Fuzzy Tie up.
The 7-band Alwar Image is a benchmark image for testing the performance of a bio-inspired classifier on multi-spectral satellite images since this image is a complete image in the sense that it contains all the land cover features that we need to extract and hence land cover feature extraction results are demonstrated and compared using this image as the standard image.
[ "aco ant colony optimization", "bbo biogeography based optimization", "pso particle swarm optimization", "mdmc minimum distance to mean classifier", "mlc maximum likelihood classifier", "tsp travelling salesman problem", "liss linear imaging self scanning", "rs1 radarsat 1", "rs2 radarsat 2", "dn digital number", "i maximum immigration rate", "sar synthetic aperture radar", "ga genetic algorithm", "fcm fuzzy c-means", "rcbbo real coded biogeography based optimization", "hsi habitat suitability index", "siv suitability index variables", "dpso discrete particle swarm optimization", "nir near infra red", "mir middle infra red", "dem digital elevation model", "e maximum emigration rate" ]
[ "M", "R", "M", "M", "M", "U", "M", "U", "U", "U", "U", "U", "M", "M", "M", "U", "U", "M", "U", "U", "U", "U" ]
-6AUCq4
Visualization for understanding of neurodynamical systems
Complex neurodynamical systems are quite difficult to analyze and understand. New types of plots are introduced to help in visualization of high-dimensional trajectories and show a global picture of the phase space, including relations between basins of attractors. Color recurrence plots (RPs) display distances from each point on the trajectory to all other points in a two-dimensional matrix. Fuzzy Symbolic Dynamics (FSD) plots enhance this information, mapping the whole trajectory to two or three dimensions. Each coordinate is defined by the value of a fuzzy localized membership function, optimized to visualize interesting features of the dynamics, showing to which degree a point on the trajectory belongs to some neighborhood. The variance of the trajectory within the attraction basin plotted against the variance of the synaptic noise provides information about sizes and shapes of these basins. Plots that use color to show the distance between each trajectory point and a larger number of selected reference points (for example centers of attractor basins) are also introduced. Activity of 140 neurons in the semantic layer of a dyslexia model implemented in the Emergent neural simulator is analyzed in detail, showing different aspects of neurodynamics that may be understood in this way. The influence of connectivity and various neural properties on network dynamics is illustrated using visualization techniques. A number of interesting conclusions about the cognitive neurodynamics of lexical concept activations are drawn. Changing neural accommodation parameters has a very strong influence on the dwell time of the trajectories. This may be linked to attention deficit disorders observed in autism in the case of strong enslavement, and to ADHD-like behavior in the case of weak enslavement.
[ "neurodynamics", "recurrence plots", "symbolic dynamics", "visualization of multidimensional time series", "attractor networks", "attractor dynamics" ]
[ "P", "P", "P", "M", "R", "R" ]
4vP87dR
growing and destroying the worth of ideas
This paper presents a novel computational approach to the study of creativity. In particular, it discusses a modeling framework that addresses the worth of ideas ascribed by agents embedded in a social world. The triple objective of this system is to improve our understanding of how ideas may emerge from a few individuals, how social interaction may result in the ascription of value to new ideas, and how culture may evolve through time, transforming or replacing dominant or consensual ideas. The proposed system encompasses commonalities in existing theories of creativity, and suggests future theoretical directions that can be explored via simulation.
[ "social simulation", "creative autonomy", "creative destruction", "multiagent simulation", "change agents" ]
[ "R", "M", "M", "M", "M" ]
-P3sWA6
Fermi-Dirac, Bose-Einstein, Maxwell-Boltzmann, and computers
Few statistics other than the named three have opened the path to understanding of so many natural laws and formulas employed in the engineering practice. Engineers interested in origins of expressions they may be using in their work will find some of them in this paper. Planck's Radiation and Stefan-Boltzmann Laws, Maxwell's Velocity Distributions, the P-N diode equation, and intrinsic hole and electron concentrations in semiconductors are some examples. No new inventions are presented and the topics addressed are treated in books on statistical mechanics, thermodynamics, quantum and classical physics, chemistry and others. The contribution is an integration of a broad range of sciences underlying the formulas of interest under one roof of a relatively short paper. All derivations are done at the engineering level and originate in the first principles of conservation and quantization. Discrete mathematics is employed throughout and provides direct access to computerized experimentation. The appendixes assist the uninitiated in quantum physics and provide an ample collection of useful supporting material. 2015 Wiley Periodicals, Inc. Comput. Appl. Eng. Educ. 23:746-759, 2015; View this article online at wileyonlinelibrary.com/journal/cae; DOI 10.1002/cae.21647
[ "semiconductors", "statistical mechanics", "thermodynamics", "fermions & bosons", "thermal radiation" ]
[ "P", "P", "P", "U", "M" ]
2Z-FVu-
from motion capture data to character animation
In this paper, we propose a practical and systematic solution to the mapping problem: from 3D marker position data recorded by optical motion capture systems to joint trajectories together with a matching skeleton, based on least-squares fitting techniques. First, we preprocess the raw data and estimate the joint centers using related efficient techniques. Second, a skeleton of fixed length that precisely matches the joint centers is generated by an articulated skeleton fitting method. Finally, we calculate and rectify joint angles with a minimum angle modification technique. We present results for our approach as applied to several motion-capture behaviors, which demonstrates the positional accuracy and usefulness of our method.
[ "motion capture", "articulated skeleton fitting" ]
[ "P", "P" ]
2PwrkFR
Reconfiguration graphs for vertex colourings of chordal and chordal bipartite graphs
A k-colouring of a graph G = (V, E) is a mapping c : V → {1, 2, …, k} such that c(u) ≠ c(v) whenever uv is an edge. The reconfiguration graph of the k-colourings of G contains as its vertex set the k-colourings of G, and two colourings are joined by an edge if they differ in colour on just one vertex of G. We introduce a class of k-colourable graphs, which we call k-colour-dense graphs. We show that for each k-colour-dense graph G, the reconfiguration graph of the ℓ-colourings of G is connected and has diameter O(|V|^2), for all ℓ ≥ k + 1. We show that this graph class contains the k-colourable chordal graphs and that it contains all chordal bipartite graphs when k = 2. Moreover, we prove that for each k ≥ 2 there is a k-colourable chordal graph G whose reconfiguration graph of the (k+1)-colourings has diameter Ω(|V|^2).
[ "reconfigurations", "chordal graphs", "graph colouring", "graph diameter" ]
[ "P", "P", "R", "R" ]
-HgMUg3
Single-supplier/multiple-buyer supply chain coordination: Incorporating buyers' expectations under vertical information sharing
We address the coordination problem in a single-supplier/multiple-buyer supply chain. The supplier wishes to coordinate the supply chain by offering quantity discounts. To obtain the buyers' complete cost information, the supplier exchanges his own cost parameters with them, leading to vertical information sharing. The supplier thinks that the buyers, as they have access to the supplier's setup and holding cost information, may demand a portion of the anticipated coordination savings based on the partial information they hold about the cost structure of the entire supply chain. We model each buyer's expectations based on her limited view of the supply chain, which consists of herself and the supplier only. These expectations are then incorporated into the modeling of the supply chain, which results in a generalization of the traditional Stackelberg-type models. We discuss alternative efficiency-sharing mechanisms, and propose methods to design the associated discount schemes that take buyers' expectations into account. In designing the discount schemes, we consider both price discriminatory and non-price discriminatory approaches. The study adds to the existing body of work by incorporating buyers' expectations into a constrained Stackelberg structure, and by achieving coordination without forcing buyers to explicitly comply with the supplier's replenishment period in choosing their order quantities. The numerical analysis of the coordination efficiency and allocation of the net savings of the proposed discount schemes shows that the supplier is still able to coordinate the supply chain with high efficiency levels, and retain a significant portion of the net savings.
[ "supply chain coordination", "discount schemes", "inventory management" ]
[ "P", "P", "U" ]
--VeCtE
Vector bin packing with multiple-choice
We consider a variant of bin packing called multiple-choice vector bin packing. In this problem, we are given a set of n items, where each item can be selected in one of several D-dimensional incarnations. We are also given T bin types, each with its own cost and D-dimensional size. Our goal is to pack the items in a set of bins of minimum overall cost. The problem is motivated by scheduling in networks with guaranteed quality of service (QoS), but due to its general formulation it has many other applications as well. We present an approximation algorithm that is guaranteed to produce a solution whose cost is about ln D times the optimum. For the running time to be polynomial we require D = O(1) and T = O(log n). This extends previous results for vector bin packing, in which each item has a single incarnation and there is only one bin type. To obtain our result we also present a PTAS for the multiple-choice version of multidimensional knapsack, where we are given only one bin and the goal is to pack a maximum weight set of (incarnations of) items in that bin.
[ "multiple-choice vector bin packing", "approximation algorithms", "multiple-choice multidimensional knapsack" ]
[ "P", "P", "R" ]
1y5coGY
A Fortran 90 Hartree-Fock program for one-dimensional periodic pi-conjugated systems using Pariser-Parr-Pople model
The Pariser-Parr-Pople (P-P-P) model Hamiltonian is frequently employed to study the electronic structure and optical properties of pi-conjugated systems. In this paper we describe a Fortran 90 computer program which uses the P-P-P model Hamiltonian to solve the Hartree-Fock (HF) equation for infinitely long, one-dimensional, periodic, pi-electron systems. The code is capable of computing the band structure, as well as the linear optical absorption spectrum, by using the tight-binding and the HF methods. Furthermore, using our program the user can solve the HF equation in the presence of a finite external electric field, thereby allowing the simulation of gated systems. We apply our code to compute various properties of polymers such as trans-polyacetylene, poly-para-phenylene, and armchair and zigzag graphene nanoribbons, in the infinite length limit. Program summary Program title: ppp_bulk.x Catalogue identifier: AEKW_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 87 464 No. of bytes in distributed program, including test data, etc.: 2 046 933 Distribution format: tar.gz Programming language: Fortran 90 Computer: PCs and workstations Operating system: Linux. Code was developed and tested on various recent versions of 64-bit Fedora including Fedora 14 (kernel version 2.6.35.12-90). Classification: 7.3 External routines: This program needs to link with LAPACK/BLAS libraries compiled with the same compiler as the program. For the Intel Fortran Compiler we used the ACML library version 4.4.0, while for the gfortran compiler we used the libraries supplied with the Fedora distribution.
Nature of problem: The electronic structure of one-dimensional periodic pi-conjugated systems is an intense area of research at present because of the tremendous interest in the physics of conjugated polymers and graphene nanoribbons. The computer program described in this paper provides an efficient way of solving the Hartree-Fock equations for such systems within the P-P-P model. In addition to the Bloch orbitals, band structure, and the density of states, the program can also compute quantities such as the linear absorption spectrum and the electro-absorption spectrum of these systems. Solution method: For a one-dimensional periodic pi-conjugated system lying in the xy-plane, the single-particle Bloch orbitals are expressed as linear combinations of p(z)-orbitals of individual atoms. Then, using various parameters defining the P-P-P Hamiltonian, the Hartree-Fock equations are set up as a matrix eigenvalue problem in k-space. Their solutions are obtained in a self-consistent manner, using the iterative diagonalization technique at several k points. The band structure and the corresponding Bloch orbitals thus obtained are used to perform a variety of calculations such as the density of states, linear optical absorption spectrum, electro-absorption spectrum, etc. Running time: Most of the examples provided take only a few seconds to run. For a large system, however, depending on the system size, the run time may be a few minutes to a few hours.
[ "hamiltonian", "hartree-fock method", "self-consistent field approach p-p-p model", "periodic boundary conditions" ]
[ "P", "R", "M", "M" ]
-R1yL4J
Web server load balancing: A queueing analysis
Over the last few years, Web-based services, more specifically different types of E-Commerce applications, have become quite popular, resulting in exponential growth in Web traffic. In many situations, this has led to unacceptable response times and unavailability of services, thereby driving away customers. Many companies are trying to address this problem using multiple Web servers with a front-end load balancer. Load balancing has been found to provide an effective and scalable way of managing the ever-increasing Web traffic. However, there has been little attempt to analyze the performance characteristics of a system that uses a load balancer. This paper presents a queueing model for analyzing load balancing with two Web servers. We first analyze the centralized load balancing model, derive the average response time and the rejection rate, and compare three different routing policies at the load balancer. We then extend our analysis to distributed load balancing and find the optimal routing policy that minimizes the average response time.
[ "load balancing", "routing", "parallel queues", "queueing theory" ]
[ "P", "P", "M", "M" ]
haML&qG
Thermal switching error versus delay tradeoffs in clocked QCA circuits
The quantum-dot cellular automata (QCA) model offers a novel nano-domain computing architecture by mapping the intended logic onto the lowest energy configuration of a collection of QCA cells, each with two possible ground states. A four-phased clocking scheme has been suggested to keep the computations at the ground state throughout the circuit. This clocking scheme, however, induces latency or delay in the transmission of information from input to output. In this paper, we study the interplay of computing error behavior with delay or latency of computation induced by the clocking scheme. Computing errors in QCA circuits can arise due to the failure of the clocking scheme to switch portions of the circuit to the ground state with change in input. Some of these non-ground states will result in output errors and some will not. The larger the size of each clocking zone, i.e., the greater the number of cells in each zone, the higher the probability of computing errors. However, larger clocking zones imply faster propagation of information from input to output, i.e., reduced delay. Current QCA simulators compute just the ground state configuration of a QCA arrangement. In this paper, we offer an efficient method to compute the N lowest energy modes of a clocked QCA circuit. We model the QCA cell arrangement in each zone using a graph-based probabilistic model, which is then transformed into a Markov tree structure defined over subsets of QCA cells. This tree structure allows us to compute the N lowest energy configurations in an efficient manner by local message passing. We analyze the complexity of the model and show it to be polynomial in terms of the number of cells, assuming a finite neighborhood of influence for each QCA cell, which is usually the case. The overall low-energy spectrum of multiple clocking zones is constructed by concatenating the low-energy spectra of the individual clocking zones.
We demonstrate how the model can be used to study the tradeoff between switching errors and clocking zones.
[ "switching errors", "bayesian modeling", "nano-computing", "quantum cellular automata (qca)" ]
[ "P", "M", "U", "M" ]
4N7TGMF
Quantitative analysis of noninvasive diagnostic procedures for induction motor drives
This paper reports quantitative analyses of spectral fault components in five noninvasive diagnostic procedures that use input electric signals to detect different types of abnormalities in induction motors. Besides the traditional one-phase current spectrum analysis (SC), diagnostic procedures based on spectrum analysis of the instantaneous partial powers (Pab, Pcb), the total power (Pabc), and the current space vector modulus (csvm) are considered. The aim of this comparison study is to improve the diagnosis tools for detection of electromechanical faults in electrical machines by using the most suitable diagnostic procedure, given some motor and fault characteristics. Defining a severity factor as the increase in amplitude of the fault characteristic frequency, with respect to the healthy condition, enables us to study the sensitivity of the electrical diagnostic tools. As a result, it is shown that the relationship between the angular displacement of the current side-band components at frequencies (f ± f_osc) is directly related to the type of induction motor fault. It is also shown that the total instantaneous power diagnostic procedure exhibits the highest values of the detection criterion in the case of mechanical faults, while in the case of electrical ones the most reliable diagnostic procedure is tightly related to the value of the motor power factor angle and the combined motor-load inertia. Finally, simulation and experimental results show good agreement with the fault modeling theoretical results.
[ "noninvasive diagnostic", "mechanical fault signature", "signal processing", "rotating machinery" ]
[ "P", "M", "M", "U" ]
-r6qr:S
Program for quantum wave-packet dynamics with time-dependent potentials
We present a program to simulate the dynamics of a wave packet interacting with a time-dependent potential. The time-dependent Schrödinger equation is solved on a one-, two-, or three-dimensional spatial grid using the split operator method. The program can be compiled for execution either on a single processor or on a distributed-memory parallel computer. Program title: wavepacket Catalogue identifier: AEQW_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEQW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 7231 No. of bytes in distributed program, including test data, etc.: 232209 Distribution format: tar.gz Programming language: C (iso C99). Computer: Any computer with an iso C99 compiler (e.g., gcc [1]). Operating system: Any. Has the code been vectorized or parallelized?: Yes, parallelized using MPI. Number of processors: from 1 to the number of grid points along one dimension. RAM: Strongly dependent on problem size. See text for memory estimates. Classification: 2.7. External routines: fftw [2], mpi (optional) [3] Nature of problem: Solves the time-dependent Schrödinger equation for a single particle interacting with a time-dependent potential. Solution method: The wave function is described by its value on a spatial grid and the evolution operator is approximated using the split-operator method [4, 5], with the kinetic energy operator calculated using a Fast Fourier Transform. Unusual features: Simulation can be in one, two, or three dimensions. Serial and parallel versions are compiled from the same source files. Running time: Strongly dependent on problem size. The example provided takes only a few minutes to run. References: [1] http://gcc.gnu.org [2] http://www.fftw.org [3] http://www.mpi-forum.org [4] M.D. Feit, J.A. Fleck Jr., A. Steiger, Solution of the Schrödinger equation by a spectral method, J. Comput. Phys. 47 (1982) 412-433. [5] M.D. Feit, J.A. Fleck Jr., Solution of the Schrödinger equation by a spectral method II: vibrational energy levels of triatomic molecules, J. Chem. Phys. 78 (1) (1983) 301-308.
[ "wave-packet dynamics", "time-dependent schrdinger equation", "ion traps", "laser control" ]
[ "P", "P", "U", "U" ]
1ThbnNy
The design of miniaturised displacement transducers for deep hole diameter measurement
The paper presents a new approach to the design of a miniaturised displacement transducer for deep hole measurement. By exploiting the induced eddy current effects detected by chip coils, the sensor generates a frequency output signal. The sensor chip coils can be manufactured by processes similar to those used for manufacturing a printed circuit board (PCB), which allows them to be miniaturised. The paper elaborates on the construction and the mechanism by which the displacement is directly transferred to a frequency output. It also reports on the transducer, which uses two contact probes for transmitting the displacement to a non-contact sensing element. Experimental results demonstrate the stability, linearity, measurement range and accuracy of the sensor system. (C) 1999 Elsevier Science Ltd. All rights reserved.
[ "deep hole measurement", "eddy current", "chip coils", "frequency output", "displacement sensor" ]
[ "P", "P", "P", "P", "R" ]
4Vng6s7
Modeling of isotopomeric cluster of the molecular ion
The occurrence of poly-isotopic elements (PIEs) in a molecule or ion can result in a complex isotopomeric cluster of an ion. The term "isotopomer" correctly indicates different isotopic compositions of a molecule (compound) or ion, not of a single atom. In accurate mass spectra, the ions of organic compounds show single, isolated peaks or narrow sub-clusters regardless of their molecular masses. The occurrence of a PIE makes the molecular ion cluster more complex and significantly influences the location of the most abundant peak and the form of the cluster. The present study is an attempt at answering the following question: what is the mechanism of the molecular ion's isotopomeric cluster formation, and is it predictable step by step? The accurate mass-resolved isotopomer cluster can be predicted from the accurate masses and abundances of the stable isotopes. The cluster consists of several sub-patterns, each of which is composed of nearby signals (at the same nominal m/z). The range of a sub-cluster usually does not exceed 0.005 u. The low-resolution cluster can be predicted from the high-resolution pattern by addition of all peaks occurring over a given narrow mass range (m/z − 0.5; m/z + 0.49). Surprisingly, predicting the accurate mass cluster is simpler than predicting the low-resolution one. The compliance of the model results with the experimental ones suggests a correct prediction.
[ "isotopomeric cluster", "molecular ion", "molecular mass", "cluster modeling", "mass spectrometry" ]
[ "P", "P", "P", "R", "M" ]
-8UP3&b
Composition of the pericellular matrix modulates the deformation behaviour of chondrocytes in articular cartilage under static loading
The aim was to assess the role of composition changes in the pericellular matrix (PCM) in chondrocyte deformation. To that end, a three-dimensional finite element model with depth-dependent collagen density, fluid fraction, fixed charge density and collagen architecture, including parallel planes representing the split-lines, was created to model the extracellular matrix (ECM). The PCM was constructed similarly to the ECM, but the collagen fibrils were oriented parallel to the chondrocyte surfaces. The chondrocytes were modelled as poroelastic with swelling properties. Deformation behaviour of the cells was studied under 15% static compression. Due to the depth-dependent structure and composition of cartilage, axial cell strains were highly depth-dependent. An increase in the collagen content and fluid fraction in the PCMs increased the lateral cell strains, while an increase in the fixed charge density induced an inverse behaviour. Axial cell strains were only slightly affected by the changes in PCM composition. We conclude that the PCM composition plays a significant role in the deformation behaviour of chondrocytes, possibly modulating cartilage development, adaptation and degeneration. The development of cartilage repair materials could benefit from this information.
[ "pericellular matrix", "chondrocyte", "articular cartilage", "finite element analysis", "fibril-reinforced" ]
[ "P", "P", "P", "M", "U" ]
4YRYnze
On mappings preserving measurability
Let μ=(μ_n) be a universal fuzzy measure and let M(μ) be the set of all μ-measurable sets, i.e. sets A⊆N for which the limit μ_∞(A)=lim_{n→∞} μ_n(A∩{1,2,…,n}) exists. We are studying properties of measurability preserving injective mappings, i.e. injective mappings φ:N→N such that A∈M(μ) implies φ(A)∈M(μ). Under some assumptions on φ we prove μ_∞(φ(A))=α·μ_∞(A) for all A∈M(μ), where α=μ_∞(φ(N)).
[ "universal fuzzy measure", "asymptotic fuzzy measure", "asymptotic density" ]
[ "P", "M", "U" ]
j8j-dvR
Moves and displacements of particular elements in Quicksort
In this research note we investigate the number of moves and the displacement of particular elements during the execution of the well-known quicksort algorithm. This type of analysis is useful if the costs of data moves depend on the source and target locations, and possibly on the moved element itself. From the mathematical point of view, the analysis of these quantities turns out to be related to the analysis of quickselect, a selection algorithm which is a variant of quicksort that finds the i-th smallest of n given elements, without sorting them. Our results thus constitute a novel application of M. Kuba's machinery [M. Kuba, On quickselect, partial sorting and multiple quickselect, Inform. Process. Lett. 99 (5) (2006) 181-186] for the solution of general quickselect recurrences. (C) 2009 Elsevier B.V. All rights reserved.
[ "quicksort", "sorting", "data moves", "quickselect", "analysis of algorithms", "generating functions", "divide-and-conquer recurrences" ]
[ "P", "P", "P", "P", "R", "M", "M" ]
331p4y4
a robust combinatorial auction mechanism against shill bidders
This paper presents a method for discovering and detecting shill bids in combinatorial auctions. The Vickrey-Clarke-Groves Mechanism is one of the most important combinatorial auctions because it can satisfy the strategy-proof property, individual rationality, and Pareto efficiency, that is, it is the only mechanism that simultaneously satisfies these properties. As Yokoo et al. pointed out, false-name bids and shill bids pose an emerging problem for auctions, since on the Internet it is easy to establish different e-mail addresses and accounts for auction sites. Yokoo et al. proved that VCG cannot satisfy the false-name-proof property, and they also proved that no auction protocol can satisfy all three of the above properties and the false-name proof property simultaneously. Their approach concentrates on designing a new mechanism that has desirable properties, but this is quite complicated. As a new approach against shill-bids, in this paper, we design a mechanism that utilizes VCG and an algorithm for finding potential shill bids. Our mechanism is quite simple compared with Yokoo's approaches [11][12][13]. Our mechanism can judge whether there might be a shill bid from the results of the VCG procedure. We prove a theorem stating that shill bidders cannot increase their utilities unless all shill bidders win in the auction. Based on this theorem, our proposed mechanism compares the agents' utilities in a conventional auction with those in an auction where a shill bidder does not join in the auction. When these agents' utilities are different between the above cases, such agents might be shill bidders. Then, our mechanism allocates items to the shill bidders as a group from the set of items obtained through successful bids by the agent in the conventional auction. This process prevents shill bidders from increasing unfair profits. Furthermore, even though shill bidders participate in the auction, the seller's profit does not decrease using our proposed method. 
Thus, our mechanism flags shill bids whenever it detects even the possibility of shill bids. Our proposed method has three key advantages. First, we propose a method to detect shill bidders by comparing bidders' utilities. Our method is superior to existing complex mechanisms in terms of generality and broad applicability, because our auction mechanism employs only VCG. Second, even when there are shill bidders in an auction, the incentive compatibility property is preserved under our mechanism. Finally, the schemer in our mechanism never has an incentive to create shill bidders: the schemer's utility does not increase even if he creates them. Namely, not creating shill bidders is a dominant strategy for the schemer.
[ "combinatorial auctions", "shill bids", "vickrey-clarke-groves mechanism", "pareto efficiency", "incentive compatibility", "computational mechanism design" ]
[ "P", "P", "P", "P", "P", "M" ]
1Lgkts-
Protecting patient privacy from unauthorized release of medical images using a bio-inspired wavelet-based watermarking approach
This paper presents a novel digital watermarking approach for copyright protection and authentication of medical images based on the wavelet transformation. We consider the problem of protecting patients' medical records and tracing illegally distributed medical images in a group communication environment. We employ particle swarm and genetic algorithm optimization principles to obtain performance improvements in our work. In the proposed method, the strength of the embedded watermark and the noise are controlled with respect to the visual properties of the host signal, to prevent the images from being used directly. These parameters, and also the places of the embedded watermarks, are varied to find the most suitable ones for images with different characteristics. The experimental results show that the proposed algorithm yields a watermark which is invisible to human eyes, robust against a wide variety of common attacks, and reliable enough for tracing colluders.
[ "patient privacy", "medical image", "watermarking", "wavelet transformation", "particle swarm algorithm", "genetic algorithm" ]
[ "P", "P", "P", "P", "P", "P" ]
3Ww3TYM
Surface creation on unstructured point sets using neural networks
We present a new point set surfacing method based on a data-driven mapping between the parametric and geometric spaces. Our approach takes as input an unstructured and possibly noisy point set representing a two-manifold in R3. To facilitate parameterization, the set is first embedded in R2 using neighborhood-preserving locally linear embedding. A learning algorithm is then trained to learn a mapping between the embedded two-dimensional (2D) coordinates and the corresponding three-dimensional (3D) space coordinates. The trained learner is then used to generate a tessellation spanning the parametric space, thereby producing a surface in the geometric space. This approach enables the surfacing of noisy and non-uniformly distributed point sets. We discuss the advantages of the proposed method in relation to existing methods, and show its utility on a number of test models, as well as its applications to modeling in virtual reality environments.
[ "point sets", "neural networks", "virtual reality", "surface fitting", "design" ]
[ "P", "P", "P", "M", "U" ]
-wz-utd
mirt: A Multidimensional Item Response Theory Package for the R Environment
Item response theory (IRT) is widely used in assessment and evaluation research to explain how participants respond to item level stimuli. Several R packages can be used to estimate the parameters in various IRT models, the most flexible being the ltm (Rizopoulos 2006), eRm (Mair and Hatzinger 2007), and MCMCpack (Martin, Quinn, and Park 2011) packages. However, these packages have limitations: ltm and eRm can only analyze unidimensional IRT models effectively, and the exploratory multidimensional extensions available in MCMCpack require prior understanding of Bayesian estimation convergence diagnostics and are computationally intensive. Most importantly, multidimensional confirmatory item factor analysis methods have not been implemented in any R package. The mirt package was created for estimating multidimensional item response theory parameters for exploratory and confirmatory models by using maximum-likelihood methods. The Gauss-Hermite quadrature method used in traditional EM estimation (e.g., Bock and Aitkin 1981) is presented for exploratory item response models as well as for confirmatory bifactor models (Gibbons and Hedeker 1992). Exploratory and confirmatory models are estimated by a stochastic algorithm described by Cai (2010a,b). Various program comparisons are presented and future directions for the package are discussed.
[ "r", "confirmatory item factor analysis", "bifactor", "multidimensional irt", "model estimation", "exploratory item factor analysis" ]
[ "P", "P", "P", "R", "R", "R" ]
27SUV&A
The non-grid technique for modeling 3D QSAR using self-organizing neural network (SOM) and PLS analysis: Application to steroids and colchicinoids
A novel method for modeling 3D QSAR has been developed. The method involves multiple training of a series of self-organizing networks (SOMs). The obtained networks have been used for processing the data of one reference molecule. A scheme for the analysis of such data with PLS analysis has been proposed and tested using the steroids data with corticosteroid binding globulin (CBG) affinity. The predictivity of the CBG models, measured with the SDEP parameter, is among the best reported. Although the 3D QSAR model for the colchicinoid series is far less predictive, it allows for a discussion of the relative influence of the structural motifs of these compounds.
[ "3d qsar", "self-organizing neural network", "pls analysis", "steroids", "colchicinoids" ]
[ "P", "P", "P", "P", "P" ]
4LAsiSS
Heavy tails in multi-server queue
In this paper, the asymptotic behaviour of the distribution tail of the stationary waiting time W in the GI/GI/2 FCFS queue is studied. Under subexponential-type assumptions on the service time distribution, bounds and sharp asymptotics are given for the probability P{W > x}. We also get asymptotics for the distribution tail of a stationary two-dimensional workload vector and of a stationary queue length. These asymptotics depend heavily on the traffic load.
[ "stationary waiting time", "fcfs multi-server queue", "large deviations", "long tailed distribution", "subexponential distribution" ]
[ "P", "R", "U", "M", "M" ]
4FmhsX3
ON DEFINING INTEGERS AND PROVING ARITHMETIC CIRCUIT LOWER BOUNDS
Let τ(n) denote the minimum number of arithmetic operations sufficient to build the integer n from the constant 1. We prove that if there are arithmetic circuits of size polynomial in n for computing the permanent of n by n matrices, then τ(n!) is polynomially bounded in log n. Under the same assumption on the permanent, we conclude that the Pochhammer-Wilkinson polynomials ∏_{k=1}^{n} (X − k) and the Taylor approximations ∑_{k=0}^{n} (1/k!) X^k and ∑_{k=1}^{n} (1/k) X^k of exp and log, respectively, can be computed by arithmetic circuits of size polynomial in log n (allowing divisions). This connects several so far unrelated conjectures in algebraic complexity.
[ "permanent", "algebraic complexity", "factorials", "integer roots of univariate polynomials" ]
[ "P", "P", "U", "M" ]
25vEVk3
Offline Arabic handwriting recognition: A survey
The automatic recognition of text on scanned images has enabled many applications such as searching for words in large volumes of documents, automatic sorting of postal mail, and convenient editing of previously printed documents. The domain of handwriting in the Arabic script presents unique technical challenges and has been addressed more recently than other domains. Many different methods have been proposed and applied to various types of images. This paper provides a comprehensive review of these methods. It is the first survey to focus on Arabic handwriting recognition and the first Arabic character recognition survey to provide recognition rates and descriptions of test data for the approaches discussed. It includes background on the field, discussion of the methods, and future research directions.
[ "computer vision", "document analysis", "handwriting analysis", "optical character recognition" ]
[ "U", "M", "M", "M" ]
-Yhxvun
Locating human faces within images
This paper presents an intelligent system to locate human faces within images. The proposed system can handle facial pattern variations due to certain changes in pose, illumination, and expression, as well as existence of spectacles, facial-hair, and occlusion. The system consists of three modules: preprocessing, face-components extraction, and final decision-making. In the first module, image processing algorithms are performed on images captured by cameras. Face components are extracted in the second module. A fuzzy neural network-based algorithm is designed for this purpose. In the last module, a commonsense-knowledge base is used for final evaluation of the identified features and determination of the face locations. The performance of the system is evaluated by conducting experiments on seven large test sets.
[ "preprocessing", "face-components extraction", "face locator", "final decision-making", "fuzzy neural networks" ]
[ "P", "P", "P", "M", "M" ]
3AFDV9B
criticality history guided fpga placement algorithm for timing optimization
We present an efficient timing-driven placement algorithm for FPGAs. Our major contribution is a criticality history guided (CHG) approach that can simultaneously reduce the critical path delay and computation time. The proposed approach keeps track of the timing criticality history of each edge and utilizes this information to effectively guide the placer. We also present a cooling schedule that optimizes both timing and run time when combined with the CHG method. The proposed algorithm is applied to the 20 largest MCNC benchmark circuits. Experimental results show that compared with VPR, our algorithm yields an average of 21.7% reduction (maximum 45.8%) in the critical path delay and it runs 2.2X faster than VPR. In addition, our approach outperforms other algorithms discussed in the literature in both delay and run time.
[ "fpga", "placement", "timing optimization" ]
[ "P", "P", "P" ]
-25H4Fs
Performance of a flux splitting scheme when solving the single-phase pressure equation discretized by MPFA
In this article we present a series of tests to study how well suited the TPFA coefficient matrix is as a preconditioner for the MPFA discrete system of equations in an iterative solver, using a flux splitting method. These tests have been conducted for single-phase flow for a wide range of anisotropy, heterogeneity, and grid skewness (mainly parallelogram grids). We use the K-orthogonal part of the MPFA transmissibilities for a parallelogram grid to govern the TPFA transmissibilities. The convergence of the flux splitting method is for each test case measured by the spectral radius of the iteration matrix.
[ "flux splitting", "mpfa", "convergence", "iteration method" ]
[ "P", "P", "P", "R" ]
3xcVBnC
Efficient call admission control scheme for 4G wireless networks
Next generation wireless networks (NGWNs) will utilize several different radio access technologies, seamlessly integrated to form one access network. This network has the potential to provide many of the requirements that previous systems did not achieve, such as high data transfer rates, effective user control, seamless mobility, and others, which will potentially change the way users utilize mobile devices. NGWNs will integrate a multitude of different heterogeneous networks including (a) cellular networks, passed through multiple generations (1G, 2G, 3G, and 3.5G); (b) wireless LANs, championed by the IEEE 802.11 wireless fidelity (WiFi) networks; and (c) broadband wireless access networks (IEEE 802.16, WiMAX). In this paper, a new adaptive quality of service (QoS) oriented CAC scheme is proposed to limit the occurrence of hard IEEE 802.11 WLAN-UMTS handovers for mobile users running real time (RT) applications. This scheme is hybrid, based on service class differentiation, the location in the heterogeneous infrastructure, and a vertical handoff decision function as well. Simulation results show that our policy achieves significant performance gains. It maximizes the utilization of the resources available at the WLAN cells, and meets as much as possible the QoS requirements of higher priority users. Copyright (C) 2008 John Wiley & Sons, Ltd.
[ "call admission control", "qos", "vertical handoff", "next generation heterogeneous wireless networks" ]
[ "P", "P", "P", "R" ]
2hgvzRs
The use of a heel-mounted accelerometer as an adjunct measure of slip distance
A human-centered measure of floor slipperiness could be useful as an adjunct to conventional tribologic measures. This paper reports on the development and evaluation of a measure of slip distance based on variables derived from the signal of a heel-mounted accelerometer. Twenty-one participants walked on a laboratory runway under several surface slipperiness conditions at three walking speeds during a protocol designed to produce a wide range of slip distances at heel strike. Analysis of variance showed significant effects of slip distance (no-slip, micro-slip and slide), walking speed (1.52, 1.78 and 2.13 m/s) and their interactions on peak forward acceleration, peak vertical acceleration and deceleration time of the heel following heel strike in 704 trials. Regression analysis of slip distance and deceleration time showed the strongest relationship, with R2 = 0.511. Large individual variation in the strength of this relationship was observed. The heel-mounted accelerometer may have utility as an adjunct measure in the evaluation of floor slipperiness, particularly for field applications where direct measurement may not be feasible.
[ "accelerometer", "slip", "slip distance", "micro-slip", "deceleration time", "gait" ]
[ "P", "P", "P", "P", "P", "U" ]
Mz7xcJn
Variety theorem for algebras with fuzzy orders
We present a generalization of the Bloom variety theorem of ordered algebras in a fuzzy setting. Algebras with fuzzy orders consist of sets of functions which are compatible with fuzzy orders. Fuzzy orders are defined on the universe sets of algebras using complete residuated lattices as structures of degrees. In this setting, we show that classes of models of fuzzy sets of inequalities are closed under suitably defined formations of subalgebras, homomorphic images, and direct products. Conversely, we prove that classes having these closure properties are definable by fuzzy sets of inequalities.
[ "variety theorem", "fuzzy order", "complete residuated lattice", "general algebra" ]
[ "P", "P", "P", "R" ]
1:qyvqV
a 2.5 n-lower bound on the combinational complexity of boolean functions
Consider the combinational complexity L(f) of Boolean functions over the basis Ω = {f | f : {0,1}^2 → {0,1}}. A new method for proving linear lower bounds of size 2n is presented. Combining it with methods presented in [12] and [15], we establish for a special sequence of functions f_n : {0,1}^n → {0,1}: 2.5n ≤ L(f_n) ≤ 6n. Also a trade-off result between circuit complexity and formula size is derived.
[ "lower bound", "combinational", "complexity", "method", "size", "sequence", "circuits" ]
[ "P", "P", "P", "P", "P", "P", "P" ]
45uw6Gz
active feedback in ad hoc information retrieval
Information retrieval is, in general, an iterative search process, in which the user often has several interactions with a retrieval system for an information need. The retrieval system can actively probe a user with questions to clarify the information need instead of just passively responding to user queries. A basic question is thus how a retrieval system should propose questions to the user so that it can obtain maximum benefits from the feedback on these questions. In this paper, we study how a retrieval system can perform active feedback, i.e., how to choose documents for relevance feedback so that the system can learn most from the feedback information. We present a general framework for such an active feedback problem, and derive several practical algorithms as special cases. Empirical evaluation of these algorithms shows that the performance of traditional relevance feedback (presenting the top K documents) is consistently worse than that of presenting documents with more diversity. With a diversity-based selection algorithm, we obtain fewer relevant documents, however, these fewer documents have more learning benefits.
[ "activation", "active feedback", "feedback", "ad hoc information retrieval", "informal", "information retrieval", "retrieval", "general", "iter", "search", "process", "user", "interaction", "systems", "information need", "probe", "queries", "paper", "performance", "documentation", "relevance", "learning", "present", "practical", "algorithm", "empirical evaluation", "diversity", "select", " framework ", "relevance-feedback", "ad-hoc" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "U", "U" ]
Ei4TRPJ
A novel nonlinear adaptive Mooney-viscosity model based on DRPLS-GP algorithm for rubber mixing process
The rubber-mixing process is a typical non-linear batch process with a very short operation time (commonly 2-5 min). The large measurement delay of Mooney-viscosity, one of the key quality indexes of mixed rubber, strongly restricts further improvement of the quality of final rubber products and the development of rubber-mixing process control. A novel nonlinear adaptive Mooney-viscosity prediction model based on the Discounted-measurement Recursive Partial Least Squares-Gaussian Process (DRPLS-GP) algorithm is developed. Using rheological parameters, which can be measured online, as the input variables, the measurement delay of Mooney-viscosity is markedly reduced from about 240 min to 2 min. In the DRPLS-GP model, to overcome the noise and the multi-collinearity of the original data, orthogonal latent variables (LVs) are first extracted by Discounted-measurement Recursive Partial Least Squares (DRPLS), and then the LVs are input to a Gaussian Process (GP) as predictors for further regression. Thus, relying on the nonlinear regression power of GP and the multivariate regression power of DRPLS, the nonlinear relationship between rheological parameters and Mooney-viscosity can be regressed successfully by DRPLS-GP. In particular, this method can update the Mooney-viscosity prediction model without increasing the computation and sampling burden, so it is very practical for industrial application. Moreover, the flexibility of the discounted-measurement factor of the novel method ensures highly precise prediction of the Mooney-viscosity of different mixed rubber formulas. The results obtained using 1006 industrial data points sampled in a large-scale tire factory located in east China confirm that the predictive performance of DRPLS-GP is superior to that of other approaches. Crown Copyright (c) 2011 Published by Elsevier B.V. All rights reserved.
[ "mooney-viscosity", "drpls-gp", "rubber mixing process", "rheological parameters" ]
[ "P", "P", "P", "P" ]
JqCqCv7
University students' notebook computer use
Recent evidence suggests that university students are self-reporting musculoskeletal discomfort with computer use at levels similar to those reported by adult workers. The objective of this study was to determine how university students use notebook computers and what ergonomic strategies might be effective in reducing self-reported musculoskeletal discomfort in this population. Two hundred and eighty-nine university students, randomly assigned to one of three towers by the university's Office of Housing, participated in this study. The results of this investigation showed a significant reduction in self-reported notebook computer-related discomfort from pre- to post-survey in participants who received notebook computer accessories and in those who received accessories and participatory ergonomics training. A significant increase in post-survey rest breaks was seen. There was a significant correlation between self-reported computer usage and the amount measured using computer usage software (odometer). More research is needed, however, to determine the most effective ergonomics intervention for university students.
[ "notebook computer accessories", "computer usage software", "self-reported notebook computer-related musculoskeletal discomfort" ]
[ "P", "P", "R" ]
41TNFGA
A Laplace transformation dual-reciprocity boundary element method for a class of two-dimensional microscale thermal problems
The numerical solution of a two-dimensional thermal problem governed by a third-order partial differential equation derived from a non-Fourier heat flux model, which may account for thermal waves and/or microscopic effects, is considered. A dual-reciprocity boundary element method is proposed for solving the problem in the Laplace transformation domain. The solution in the physical domain is recovered by a numerical inverse Laplace transformation technique.
[ "laplace transform", "boundary element method", "thermal testing" ]
[ "P", "P", "M" ]
2cR1CM6
Parallelization of genetic operations that takes building-block linkage into account
We propose a performance enhancement using parallelization of genetic operations that takes highly fit schemata (building-block) linkages into account. Previously, we used the problem of solving Sudoku puzzles to demonstrate the possibility of shortening processing times through the use of many-core processors for genetic computations. To increase accuracy, we proposed a genetic operation that takes building-block linkages into account. Here, in an evaluation using very difficult problems, we show that the proposed genetic operations are suited to fine-grained parallelization; processing performance increased by approximately 30% (four times) with fine-grained parallel processing of the proposed mutation and crossover methods on Intel Core i5 (NVIDIA GTX5800) compared with non-parallel processing on a CPU. Increasing GPU resources will diminish the conflicts with thread usage in coarse-grained parallelization of individuals and will enable faster processing.
[ "parallelization", "linkage", "sudoku", "genetic algorithms" ]
[ "P", "P", "P", "M" ]
-5Hz8ER
Security model oriented attestation on dynamically reconfigurable component-based systems
As more and more component-based systems (CBS) run in the open and dynamic Internet, it is very important to establish trust between clients and CBS in mutually distrusted domains. One of the key mechanisms to establish trust among different platforms in an open and dynamic environment is remote attestation, which allows a platform to vouch for its trust-related characteristics to a remote challenger. This paper proposes a novel attestation scheme for a dynamically reconfigurable CBS to reliably prove whether its execution satisfies the specified security model, by introducing a TPM-based attestation service to dynamically monitor the execution of the CBS. When only parts of the dynamic CBS are concerned, our scheme enables fine-grained attestation on the execution of an individual component or a sub-system in the dynamic CBS, such that it involves only minimal overhead for attesting the target parts of the CBS. With flexible attestation support, the proposed attestation service can attest a CBS at the granularity from an individual component to the whole CBS. As a case study, we have applied the proposed scheme on OSGi systems and implemented a prototype based on JVMTI for Felix. The evaluation results show that the proposed scheme is both effective and practical.
[ "security model", "component-based systems", "remote attestation", "dynamically reconfigurable cbs", "security policy" ]
[ "P", "P", "P", "P", "M" ]
2WvMrdE
Wireless sensor and actor networks: research challenges
Wireless sensor and actor networks (WSANs) refer to a group of sensors and actors linked by a wireless medium to perform distributed sensing and acting tasks. The realization of WSANs needs to satisfy the requirements introduced by the coexistence of sensors and actors. In WSANs, sensors gather information about the physical world, while actors take decisions and then perform appropriate actions upon the environment, which allows a user to effectively sense and act from a distance. In order to provide effective sensing and acting, coordination mechanisms are required among sensors and actors. Moreover, to perform right and timely actions, sensor data must be valid at the time of acting. This paper explores sensor-actor and actor-actor coordination and describes research challenges for coordination and communication problems.
[ "wireless sensor and actor networks", "coordination", "wireless sensor networks", "ad-hoc networks", "real-time communication", "transport", "routing", "mac", "cross-layering" ]
[ "P", "P", "R", "M", "M", "U", "U", "U", "U" ]
1nqiBh1
improving web search transparency by using a venn diagram interface
The user interfaces of the most popular search engines are largely the same. Typically, users are presented with an ordered list of documents, which provides limited help if users are having trouble finding the information they need. This article presents an interface called the Venn Diagram Interface (VDI) that offers users improved search transparency. The VDI allows users to see how each term, or group of terms, in a query contributes to the entire result set of a search. Furthermore, it allows users to browse the result sets generated by each of these terms. In a test with 10 participants, the VDI was compared against a standard web search interface, Google. With the VDI, users were able to find more documents of higher relevance and were more inclined to continue searching. Their level of interactivity was higher, and the quality of the answers they found was perceived to be better. Eight out of 10 users preferred the VDI.
[ "web search", "venn diagram", "google", "visualization", "user study", "usability measure", "information search and retrieval" ]
[ "P", "P", "P", "U", "M", "U", "M" ]
1WQWUwh
Using an ODE solver for a class of integro-differential systems
By a simple extension of the Method of Lines, the ordinary differential equation solver VODPK may be used to solve a certain class of integro-differential equation systems (IDE systems). The problems are characterized by a pair of advected frequency-dependent quantities, coupled to a population variable whose rate includes a spectral integral in one space dimension. We have found that with an appropriate choice of preconditioner to aid in the convergence of the linear iterations, an extremely efficient method is obtained for the solution of these types of IDE system problems. We discuss the semidiscretization process and the formation of the preconditioner in some detail. Finally, we present an application of the technique. (C) 2001 Academic Press.
[ "method of lines", "integro-differential equations" ]
[ "P", "P" ]
4FjQkrT
An adaptive reversible steganographic scheme based on the just noticeable distortion
In this paper, we propose an adaptive reversible steganographic scheme based on the just noticeable distortion (JND). First, the JND value of each cover pixel is calculated using the frequency model of the human visual system (HVS). Then, the prediction value of the cover pixel is acquired by anisotropic interpolation, and also the pixel distribution characteristic is estimated. Finally, whether the cover pixel is embeddable or not is adaptively determined according to the relationship between the prediction error and the JND value. The embedding procedure is based on modifying the prediction error of each cover pixel, and the visual degradation caused by embedding is imperceptible due to the control of JND. Experimental results demonstrate that the proposed scheme provides a greater embedding rate and higher quality of stego image than other methods that have been reported recently.
[ "just noticeable distortion", "embedding rate", "reversible data hiding", "visual quality" ]
[ "P", "P", "M", "R" ]
1sebxK6
difficulties teaching java in cs1 and how we aim to solve them
In 1971 Dijkstra noted that as a teacher of programming he 'feels akin to a teacher of composition at a conservatory. He does not teach his pupils how to compose a particular symphony, he must help his pupils to find their own style and must explain to them what is implied by this' [1]. In a similar vein, Don Knuth suggests that 'computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty' [2]. Traditionally, most Computer Science programs offer an introductory programming methodology course (CS1). In recent years, many institutions have subjected this course to major changes. One common alteration has been a move from a procedural paradigm to an Object Oriented (OO) paradigm. In many cases, this is manifested as a change to programming in Java. Emerging from this transition is the apparent anomaly that many students fail to understand OOP concepts, especially when required to use them in problem solving. Our panel represents researchers from four different countries who have all encountered such problems with a CS1 course. In this light, the panel focuses on CS1 difficulties and aims to address solutions to the 'Java problem'. Although we bring our own insights to the considered issues, we aim to engage the panel audience in discussing the nature of the problem and the propriety of the proposed solutions.
[ "java", "cs1", "teaching programming" ]
[ "P", "P", "R" ]
4tUT6Wv
Employing the One-Sender-Multiple-Receiver Technique in Wireless LANs
In this paper, we study the One-Sender-Multiple-Receiver (OSMR) transmission technique, which allows one sender to send to multiple receivers simultaneously by utilizing multiple antennas at the sender. To study the physical-layer characteristics of OSMR, we implement a prototype OSMR transmitter/receiver with GNU software defined radio and conduct experiments in a university building. Our results are positive and show that wireless channels allow OSMR for a significant percentage of the time. Motivated by our physical-layer study, we propose extensions to the 802.11 MAC protocol to support OSMR transmission, which is backward-compatible with existing 802.11 devices. We also note that the access point (AP) needs a packet scheduling algorithm to efficiently exploit OSMR. We show that the scheduling problem without considering the packet transmission overhead can be formalized as a linear programming problem, but the scheduling problem considering the overhead is NP-hard. We then propose a practical scheduler based on a two-phase algorithm that can also handle channel fluctuations. We test the proposed protocol and algorithm with simulations driven by traffic traces collected from wireless LANs and channel-state traces collected from our experiments, and the results show that OSMR significantly improves the downlink performance.
[ "one-sender-multiple-receiver (osmr)", "packet scheduling", "wireless local area network (lan)" ]
[ "P", "P", "M" ]
4FtbQLu
Granular neural networks
We discuss an idea of granular computing regarded as a development environment of neural networks and leading to the emergence of a new class of granular neural networks. Such networks are viewed as new computing architectures that are focused on processing information granules rather than being geared towards plain numeric processing as usually encountered in most neural networks. The considered information granules are represented as constructs that may be formalized in the setting of set theory, fuzzy sets, rough sets or specified within a probabilistic environment. We discuss several main approaches to the design of information granules. A number of fundamental issues are tackled including specificity of information granules vis-a-vis learning complexity in the neural networks along with their generalization features. We also provide a list of architectures of granular neural networks and elaborate on the associated training (learning) scenarios.
[ "granular computing", "information granules", "information granules", "fuzzy sets", "neurocomputing", "information granulation", "interval analysis", "context-based clustering", "fuzzy neural networks" ]
[ "P", "P", "P", "P", "U", "P", "U", "U", "R" ]
26gjFtk
command execution in a heterogeneous environment
As a user's computing environment grows from a single time-shared host to a network of specialized and general-purpose machines, the capability for the user to access all of these resources in a consistent and transparent manner becomes desirable. Instead of viewing commands as binary files, we expect the user to view commands as services provided by servers in the network. The user interacts with a personal workstation that locates and executes services on his behalf. Executing a single service provided by any server in the network is useful, but the user would also like to combine services from different machines to perform complex computations. To provide this facility we expand on the UNIX notion of pipes to a generalized pipeline mechanism containing services from a variety of servers. In this paper we explain the merits of a multi-machine pipeline for solving problems of accessing services in a heterogeneous environment. We also give a design and performance evaluation of a general mechanism for multi-machine pipes using the DARPA UDP and TCP protocols.
[ "heterogeneity", "environments", "user", "computation", "network", "general", "access", "resource", "consistency", "transparency", "service", "server", "personality", "complexity", "pipeline", "paper", "design", "performance evaluation", "sharing", "timing" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "U" ]
28rq-Mr
Anti-IL-17A Autovaccination Prevents Clinical and Histological Manifestations of Experimental Autoimmune Encephalomyelitis
Excessive or inappropriate production of IL-17A has been reported in diseases such as rheumatoid arthritis, asthma, and multiple sclerosis. The potential clinical relevance of these correlations was suggested by the protective effects of anti-IL-17A monoclonal antibodies in various mouse disease models. However, the chronic nature of the corresponding human afflictions raises great challenges for Ab-based therapies. An alternative to passive Ab therapy is autovaccination. Covalent association of self-cytokines with foreign proteins has been reported to induce the production of antibodies capable of neutralizing the biological activity of the target cytokine. We recently reported that cross-linking of IL-17A to ovalbumin produced highly immunogenic complexes that induced long-lasting IL-17A-neutralizing antibodies. Vaccinated SJL mice were completely protected against experimental autoimmune encephalomyelitis (EAE) induced by proteolipid protein peptide (PLP 139-151), and a monoclonal anti-IL-17A Ab (MM17F3), derived from C57Bl/6 mice vaccinated against IL-17A-OVA, also prevented disease development. Here we report that this Ab also protects C57Bl/6 mice from myelin oligodendrocyte glycoprotein (MOG)-induced EAE. Histological analysis of brain sections of C57Bl/6 mice treated with MM17F3 showed a complete absence of inflammatory infiltrates and evidence for a marked inhibition of chemokine and cytokine messages in the spinal cord. These results further extend the analytical and therapeutic potential of the autovaccine procedure.
[ "vaccine", "autoimmunity", "cytokine", "eae", "th17" ]
[ "P", "P", "P", "P", "U" ]
VQMCD&9
Adaptive Low Resolution Pruning for fast Full Search-equivalent pattern matching
Several recent proposals have shown the feasibility of significantly speeding up pattern matching by means of Full Search-equivalent techniques, i.e. without approximating the outcome of the search with respect to a brute force investigation. These techniques are generally heavily based on efficient incremental calculation schemes aimed at avoiding unnecessary computations. In a very recent and extensive experimental evaluation, Low Resolution Pruning turned out to be in most cases the best performing approach. In this paper we propose a computational analysis of several incremental techniques specifically designed to enhance the efficiency of LRP. In addition, we propose a novel LRP algorithm aimed at minimizing the theoretical number of operations by adaptively exploiting different incremental approaches. We demonstrate the effectiveness of our proposal by means of experimental evaluation on a large dataset.
[ "low resolution pruning", "full search-equivalent", "pattern matching", "template matching" ]
[ "P", "P", "P", "M" ]
1zeN29z
Short-term traffic flow rate forecasting based on identifying similar traffic patterns
A non-parametric model for short-term traffic forecast is proposed. An enhanced K-nearest neighbors (K-NN) is developed and implemented. The proposed non-parametric model outperformed advanced parametric models. The model was applied on 36 datasets collected from different regions.
[ "traffic patterns", "non-parametric modeling", "short-term traffic forecasting", "k-nearest neighbor", "weighted euclidean distance", "traffic management" ]
[ "P", "P", "P", "P", "U", "M" ]
2VRVbVV
The power of tuning: A novel approach for the efficient design of survivable networks
Current survivability schemes typically offer two degrees of protection, namely full protection (from a single failure) or no protection at all. Full protection translates into rigid design constraints, i.e., the employment of disjoint paths. We introduce the concept of tunable survivability that bridges the gap between full and no protection. First, we establish several fundamental properties of connections with tunable survivability. With that at hand, we devise efficient polynomial (optimal) connection establishment schemes for both 1 : 1 and 1 + 1 protection architectures. Then, we show that the concept of tunable survivability gives rise to a novel hybrid protection architecture, which offers improved performance over the standard 1 : 1 and 1 + 1 architectures. Next, we investigate some related QoS extensions. Finally, we demonstrate the advantage of tunable survivability over full survivability. In particular, we show that, by just slightly alleviating the requirement of full survivability, we obtain major improvements in terms of the "feasibility" as well as the "quality" of the solution.
[ "survivability", "path restoration/protection", "routing" ]
[ "P", "M", "U" ]
2A:n&4S
Saliency-based automatic thumbnail generation
This article presents an automatic process for cropping images. This tool is necessary to meet the display constraints (small screens) of new portable devices (mobile phones, etc.). The principle of the reframing rests on both the detection and the selection of the most interesting parts of the picture. These areas are detected by a computational model of bottom-up visual attention. First, the results of this model are compared to results obtained with an eye-tracking apparatus. Second, the performance of the whole system is presented.
[ "thumbnail", "bottom-up visual attention", "visual system", "eye tracking experiments" ]
[ "P", "P", "R", "M" ]
55KLQ7&
Buckets, heaps, lists, and monotone priority queues
We introduce the heap-on-top (hot) priority queue data structure that combines the multilevel bucket data structure of Denardo and Fox with a heap. Our data structure has better operation bounds than either structure taken alone. We use the new data structure to obtain an improved bound for Dijkstra's shortest path algorithm. We also discuss a practical implementation of hot queues. Our experimental results in the context of Dijkstra's algorithm show that this implementation of hot queues performs very well and is more robust than implementations based only on heap or multilevel bucket data structures.
[ "priority queues", "data structures", "shortest paths" ]
[ "P", "P", "P" ]
2DzxiBf
Data mining in an engineering design environment: OR applications from graph matching
Data mining has been making inroads into the engineering design environment, an area that generates large amounts of heterogeneous data for which suitable mining methods are not readily available. For instance, an unsupervised data mining task (clustering) requires an accurate measure of distance or similarity. This paper focuses on the development of an accurate similarity measure for bills of materials (BOM) that can be used to cluster BOMs into product families and subfamilies. The paper presents a new problem called tree bundle matching (TBM) that is identified as a result of the research, gives a non-polynomial formulation, a proof that the problem is NP-hard, and suggests possible heuristic approaches. In a typical life cycle of an engineering project or product, enormous amounts of diverse engineering data are generated. Some of these include BOM, product design models in CAD, engineering drawings, manufacturing process plans, quality and test data, and warranty records. Such data contain information crucial for efficient and timely development of new products and variants; however, this information is often not available to designers. Our research employs data mining methods to extract this design information and improve its accessibility to design engineers. This paper focuses on one aspect of the overall research agenda, clustering BOMs into families and subfamilies. It extends previous work on a graph-based similarity measure for BOMs (a class of unordered trees) by presenting a new TBM problem, and proves the problem to be NP-hard. The overall contribution of this work is to demonstrate the OR applications from graph matching, stochastic methods, optimization, and others to data mining in the engineering design environment.
[ "clustering", "similarity measure", "bills of material", "unordered trees", "weighted bipartite matching", "matching problems" ]
[ "P", "P", "P", "P", "M", "R" ]
2fAchHS
The combinatorics of tandem duplication
Tandem duplication is a rearrangement process whereby a segment of DNA is replicated and proximally inserted. A sequence of these events is termed an evolution. Many different configurations can arise from such evolutions, generating some interesting combinatorial properties. Firstly, new DNA connections arising in an evolution can be algebraically represented with a word producing automaton. The number of words arising from n tandem duplications can then be recursively derived. Secondly, many distinct evolutions result in the same sequence of words. With the aid of a bi-colored 2d-tree, a Hasse diagram corresponding to a partially ordered set is constructed, for which the number of linear extensions equates to the number of evolutions generating a given word sequence. Thirdly, we implement some subtree prune and graft operations on this structure to show that the total number of possible evolutions arising from n tandem duplications is ∏_{k=1}^{n} (4^k − (2k+1)). The space of structures arising from tandem duplication thus grows at a super-exponential rate with leading order term O(4^{n²/2}).
[ "combinatorics", "tandem duplication", "rearrangements", "evolution", "posets" ]
[ "P", "P", "P", "P", "U" ]
54GMDm3
Racial Microaggressions in Academic Libraries: Results of a Survey of Minority and Non-minority Librarians
There is relatively little literature on racism within the profession of academic librarianship. To investigate academic librarians' experiences of racism, this research project uses the framework of racial microaggressions, which are subtle, denigrating messages directed toward people of color. According to the results of an online survey, some librarians of color have had racial microaggressions directed at them by their colleagues. Non-minority librarians, however, are unlikely to recognize these disparaging exchanges.
[ "racial microaggressions", "academic libraries", "racism", "diversity" ]
[ "P", "P", "P", "U" ]
3i3u3rZ
Cross-Pose Face Recognition - A Virtual View Generation Approach Using Clustering Based LVTM
This paper presents an approach for cross-pose face recognition by virtual view generation using an appearance clustering based local view transition model. Previously, the traditional global pattern based view transition model (VTM) method was extended to its local version called LVTM, which learns the linear transformation of pixel values between frontal and non-frontal image pairs from training images using partial image in a small region for each location, instead of transforming the entire image pattern. In this paper we show that the accuracy of the appearance transition model and the recognition rate can be further improved by better exploiting the inherent linear relationship between frontal-nonfrontal face image patch pairs. This is achieved based on the observation that variations in appearance caused by pose are closely related to the corresponding 3D structure and intuitively frontal-nonfrontal patch pairs from more similar local 3D face structures should have a stronger linear relationship. Thus for each specific location, instead of learning a common transformation as in the LVTM, the corresponding local patches are first clustered based on an appearance similarity distance metric and then the transition models are learned separately for each cluster. In the testing stage, each local patch for the input non-frontal probe image is transformed using the learned local view transition model corresponding to the most visually similar cluster. The experimental results on a real-world face dataset demonstrated the superiority of the proposed method in terms of recognition rate.
[ "face recognition", "clustering", "local view transition model", "pose invariant" ]
[ "P", "P", "P", "M" ]
45dcwS2
Software process simulation at a crossroads?
Software process simulation (SPS) has been evolving over the past two decades after being introduced to the software engineering community in the 1980s. At that time the SPS technology attracted a great deal of interest from both academics and practitioners in the software process community, even to the extent of being one of the recommended techniques for achieving multiple Key Process Areas of Level 4 of the Capability Maturity Model Integration. However, in recent years, the growth of SPS seems to have slowed along with the number of reported applications in industry. This article summarizes the special panel that was held during ICSSP 2012 whose goals were to assess whether this technology remains applicable to today's software engineering projects and challenges and to point out the most beneficial opportunities for future research and industry application. Copyright 2014 John Wiley & Sons, Ltd.
[ "software process", "software process simulation", "systems process", "process modeling" ]
[ "P", "P", "M", "R" ]
2mw8PpS
Efficient mobile phone Chinese optical character recognition systems by use of heuristic fuzzy rules and bigram Markov language models
Statistical language models are very useful tools to improve the recognition accuracy of optical character recognition (OCR) systems. In previous systems, segmentation by maximum word matching, semantic class segmentation, or trigram language models have been used. However, these methods have some disadvantages, such as inaccuracies due to a preference for longer words (which may be erroneous), failure to recognize word dependencies, complex semantic training data segmentation, and a requirement of high memory. To overcome these problems, we propose a novel bigram Markov language model in this paper. This type of model does not have large word preferences and does not require semantically segmented training data. Furthermore, unlike trigram models, the memory requirement is small. Thus, the scheme is suitable for handheld and pocket computers, which are expected to be a major future application of text recognition systems. However, due to a simple language model, the bigram Markov model alone can introduce more errors. Hence in this paper, a novel algorithm combining bigram Markov language models with heuristic fuzzy rules is described. It is found that the recognition accuracy is improved through the use of the algorithm, and it is well suited to mobile and pocket computer applications, including as we will show in the experimental results, the ability to run on mobile phones. The main contribution of this paper is to show how fuzzy techniques as linguistic rules can be used to enhance the accuracy of a crisp recognition system, and still have low computational complexity.
[ "heuristic", "statistical language model", "markov model", "fuzzy logic" ]
[ "P", "P", "P", "M" ]
2PyY3dC
A VLSI array architecture for Hough transform
In this article, an asynchronous array architecture for straight line Hough transform (HT) is proposed using a scaling-free modified Co-Ordinate Rotation Digital Computer (CORDIC) unit as a basic processing element (PE). It exhibits four-fold angle parallelism by dividing the Hough space into four subspaces to reduce the computation burden to 25% of the conventional requirements. A distributed accumulator arrangement scheme is adopted to ensure conflict free voting operation. The architecture is then extended to compute circular and elliptic HT given their centers and orientations. Compared to some other existing architectures, this one exhibits higher computation speed.
[ "hough transform", "cordic", "low power", "image processing", "multiplierless array architecture" ]
[ "P", "P", "U", "M", "M" ]
4nMcjzJ
Static deformations and vibration analysis of composite and sandwich plates using a layerwise theory and multiquadrics discretizations
In this paper, the static and free vibration analysis of composite plates are performed, using a layerwise deformation theory and multiquadrics discretization. This meshless discretization method considers radial basis functions as the approximation method for both the equations of motion and the boundary conditions. The combination of this layerwise theory and the multiquadrics discretization method allows a very accurate prediction of the natural frequencies.
[ "static deformations", "free vibrations", "composite plates", "layerwise deformation theory", "radial basis functions" ]
[ "P", "P", "P", "P", "P" ]
zu4XJ5u
Investigating dynamic interaction between the one DOF manipulator and vehicle of a mobile manipulator
A manipulator mounted on a moving vehicle is called a mobile manipulator. A mobile manipulator with an appropriate suspension system can pass over uneven surfaces, thus having an infinite workspace. If the manipulator could operate while the vehicle is traveling, the efficiency concerning with the time and energy used for stopping and starting will be increased. This paper presents the kinematic and dynamic modeling of a one degree of freedom manipulator attached to a vehicle with a two degrees of freedom suspension system. The vehicle is considered to move with a constant linear speed over an uneven surface while the end effector tracks a desired trajectory in a fixed reference frame. In addition, the effects of dynamic interaction between the manipulator and vehicle (including the suspension system's effects) have been studied. Simulation results from straight line trajectory are presented to illustrate these effects.
[ "mobile manipulators", "kinematics and dynamics", "path following", "trajectory planning", "maple" ]
[ "P", "P", "U", "M", "U" ]
1WwGkGK
Color fringe removal in narrow color regions of digital images
Color fringe is an artifact that should be removed in a color digital image. This paper proposes a detection method of color fringe in narrow color regions. Near-saturation region (NSR) information as well as gradient magnitudes and directions of RGB color components are used to detect transition regions, in which color fringe occurs. Four gradient directions are used in order to improve the performance of color fringe detection. NSR is used to prevent excessive detection of possible color fringe regions. Then, the detected color fringe is removed using the color difference values. Experimental results show the effectiveness of the proposed color fringe removal method.
[ "color fringe", "narrow color regions", "digital image", "transition region" ]
[ "P", "P", "P", "P" ]
415RHp6
Supply chain modeling: past, present and future
Over the years, most firms have focused their attention on the effectiveness and efficiency of separate business functions. As a new way of doing business, however, a growing number of firms have begun to realize the strategic importance of planning, controlling, and designing a supply chain as a whole. In an effort to help firms capture the synergy of inter-functional and inter-organizational integration and coordination across the supply chain and to subsequently make better supply chain decisions, this paper synthesizes past supply chain modeling efforts and identifies key challenges and opportunities associated with supply chain modeling. We also provide various guidelines for the successful development and implementation of supply chain models. (C) 2002 Elsevier Science Ltd. All rights reserved.
[ "supply chain", "analytical models", "decision support systems" ]
[ "P", "M", "M" ]
38&S8ZY
Activity assigning of fourth party logistics by particle swarm optimization-based preemptive fuzzy integer goal programming
This paper proposes a modified particle swarm optimization to solve the problem of activity assignment of fourth party logistics (4PL) with preemptive structure. In practice, decision makers must consider goals of different importance when they encounter 4PL decision problems. Previous studies have adopted weighted fuzzy goal programming to design optimization problems. However, it is difficult for decision makers to determine proper weights. This paper proposes a decision making method based on preemptive fuzzy goal programming and a modified PSO. The proposed method does not require weights, and prevents results without feasible solutions caused by improper resource setting. Furthermore, this paper proposes a modified PSO with a mutation operator extension. Numerical analysis shows that the proposed modified PSOs prevent the algorithm from converging prematurely to local optima.
[ "fourth party logistics", "particle swarm optimization", "preemptive fuzzy goal programming", "assignment problem" ]
[ "P", "P", "P", "R" ]
3R&DXte
Sensor/Actuator Faults Detection for Networked Control Systems via Predictive Control
Quantized fault detection for sensor/actuator faults of networked control systems (NCSs) with time delays in both the sensor-to-controller channel and the controller-to-actuator channel is addressed in this paper. The fault model is set up based on the possible cases of sensor/actuator faults. Then model predictive control is used to compensate for the time delay. When the sensors and actuators are all healthy, an H∞ stability criterion for the state predictive observer is obtained in terms of a linear matrix inequality. A new threshold computation method that conforms to actual situations is proposed. Then the thresholds of the false alarm rate (FAR) and miss detection rate (MDR) are presented using the proposed method, and compared with the ones given in the existing literature. Finally, some numerical simulations are shown to demonstrate the effectiveness of the proposed method.
[ "fault detection", "networked control system", "predictive control", "false alarm rate(far)", "miss detection rate(mdr)" ]
[ "P", "P", "P", "P", "P" ]
-X&PMK8
a novel image edge detection using fractal compression
Image edges are the foundation of image texture and shape feature extraction. In this paper we propose a novel edge detection method based on the self-similarity of fractal compression. We point out that the mean-square-error (MSE) distance of fractal compression can be used to extract the edges of a fractal image effectively. The self-similarity coefficient is computed between the local range block and the searched domain block centered at the current pixel being processed, and a near-center self-affine transform is applied in the local searching process; finally, a binary operator is used to threshold its magnitude and produce the edge map of the image. Experimental results show that the proposed algorithm for image edge detection is valid and effective. It also has good anti-noise performance.
[ "image edge detection", "self-similarity", "mse", "fractal coding" ]
[ "P", "P", "P", "M" ]
3DZLjvy
Microcontroller based system for 2D localisation
In this paper a new system for the 2D localisation of moving objects is presented. Particularly, the system is useful for mobile robot navigation in industrial, service or research applications. The system combines laser and radiofrequency technology to determine object or robot position and orientation. The system is robust, simple and inexpensive, and compensates for the effect of the object or robot movement in the localisation algorithm.
[ "moving object localisation", "vehicle localisation", "triangulation" ]
[ "R", "M", "U" ]
4u4cPgC
Cognitive tools shape thought: diagrams in design
Thinking often entails interacting with cognitive tools. In many cases, notably design, the predominant tool is the page. The page allows externalizing, organizing, and reorganizing thought. Yet, the page has its own properties that by expressing thought affect it: path, proximity, place, and permanence. The effects of these properties were evident in designs of information systems created by students. Paths were interpreted as routes through components. Proximity was used to group subsystems. Horizontal position on the page was used to express temporal sequence and vertical position to reflect real-world spatial position. The permanence of designs on the page guided but also constrained generation of alternative designs. Cognitive tools both reflect and affect thought.
[ "cognitive tool", "design", "diagrammatic reasoning", "creativity", "affordance", "spatial thinking", "information systems design" ]
[ "P", "P", "U", "U", "U", "R", "R" ]
-SXFM7Z
a layout-aware analog synthesis procedure inclusive of dynamic module geometry selection
We propose an algorithm for sizing analog circuits using parasitic-aware circuit matrix models. A novel scheme of separating schematic and parasitic models is proposed. As layout details are not abstracted in the circuit performance, the developed models can be used for different module geometries. The regression models developed make parasitic estimation much faster than a layout-inclusive approach. The proposed approach is successfully used for dynamic module geometry selection during synthesis. Experiments conducted on operational amplifier and filter topologies demonstrate the accuracy of our proposed approach. For both circuits, results are within a mean error of 1 percent compared to an exact layout and SPICE approach.
[ "layout-aware", "sizing", "matrix-models" ]
[ "P", "P", "U" ]
3N73gHu
On tests of symmetry, marginal homogeneity and quasi-symmetry in two-way contingency tables based on minimum phi-divergence estimator with constraints
The restricted minimum phi-divergence estimator, [Pardo, J.A., Pardo, L. and Zografos, K., 2002, Minimum phi-divergence estimators with constraints in multinomial populations. Journal of Statistical Planning and Inference, 104, 221-237], is employed to obtain estimates of the cell frequencies of an I x I contingency table under hypotheses of symmetry, marginal homogeneity or quasi-symmetry. The associated phi-divergence statistics are distributed asymptotically as chi-squared distributions under the null hypothesis. The new estimators and test statistics contain, as particular cases, the classical estimators and test statistics previously presented in the literature for the cited problems. A simulation study is presented, for the symmetry problem, to choose the best function phi(2) for estimation and the best function phi(1) for testing.
[ "symmetry", "marginal", "homogeneity", "quasi-symmetry", "two-way contingency tables", "minimum phi-divergence estimator", "phi-divergence test statistics" ]
[ "P", "P", "P", "P", "P", "P", "R" ]
JUpzd3u
Reference-based combined deterministic-stochastic subspace identification for experimental and operational modal analysis
The modal analysis of mechanical or civil engineering structures consists of three steps: data collection, system identification and modal parameter estimation. The system identification step plays a crucial role in the quality of the modal parameters, that are derived from the identified system model, as well as in the number of modal parameters that can be determined. This explains the increasing interest in sophisticated system identification methods for both experimental and operational modal analysis. In purely operational or output-only modal analysis, absolute scaling of the obtained mode shapes is not possible and the frequency content of the ambient forces could be narrow banded so that only a limited number of modes are obtained. This drives the demand for system identification methods that take both artificial and ambient excitation into account so that the amplitude of the artificial excitation can be small compared to that of the ambient excitation. An accurate, robust and efficient system identification method that meets these requirements is combined deterministic-stochastic subspace identification. It can be used both for experimental modal analysis and for operational modal analysis with deterministic inputs. In this paper, the method is generalized to a reference-based version which is faster and, if the chosen reference outputs have the highest SNR values, more accurate than the classical algorithm. The algorithm is validated with experimental data from the Z24 bridge, which overpassed the A1 highway between Bern and Zurich in Switzerland and has been proposed as a benchmark for the assessment of system identification methods for the modal analysis of large structures. With the presented algorithm, the most complete set of modes reported so far is obtained.
[ "operational modal analysis", "civil engineering structures", "system identification", "experimental modal analysis", "stochastic systems", "subspace methods", "mechanical systems" ]
[ "P", "P", "P", "P", "M", "R", "R" ]
493NyyY
A mixture model for random graphs
The Erdos-Renyi model of a network is simple and possesses many explicit expressions for average and asymptotic properties, but it does not fit well to real-world networks. The vertices of those networks are often structured in unknown classes (functionally related proteins or social communities) with different connectivity properties. The stochastic block structures model was proposed for this purpose in the context of social sciences, using a Bayesian approach. We consider the same model in a frequentist statistical framework. We give the degree distribution and the clustering coefficient associated with this model, a variational method to estimate its parameters and a model selection criterion to select the number of classes. This estimation procedure allows us to deal with large networks containing thousands of vertices. The method is used to uncover the modular structure of a network of enzymatic reactions.
[ "mixture models", "random graphs", "variational method" ]
[ "P", "P", "P" ]
2h7zQzA
Simulating cellular dynamics through a coupled transcription, translation, metabolic model
In order to predict cell behavior in response to changes in its surroundings or to modifications of its genetic code, the dynamics of a cell are modeled using equations of metabolism, transport, transcription and translation implemented in the Karyote software. Our methodology accounts for the organelles of eukaryotes and the specialized zones in prokaryotes by dividing the volume of the cell into discrete compartments. Each compartment exchanges mass with others either through membrane transport or with a time delay effect associated with molecular migration. Metabolic and macromolecular reactions take place in user-specified compartments. Couplings among processes are accounted for, and multiple scale techniques allow for the computation of processes that occur on a wide range of time scales. Our model is implemented to simulate the evolution of concentrations for a user-specifiable set of molecules and reactions that participate in cellular activity. The underlying equations integrate metabolic, transcription and translation reaction networks and provide a framework for simulating whole cells given a user-specified set of reactions. A rate equation formulation is used to simulate transcription from an input DNA sequence while the resulting mRNA is used via ribosome-mediated polymerization kinetics to accomplish translation. Feedback associated with the creation of species necessary for metabolism by the mRNA and protein synthesis modifies the rates of production of factors (e.g. nucleotides and amino acids) that affect the dynamics of transcription and translation. The concentrations of predicted proteins are compared with time series or steady state experiments. The expression and sequence of the predicted proteins are compared with experimental data via the construction of synthetic tryptic digests and associated mass spectra.
We present the mathematical model showing the coupling of transcription, translation and metabolism in Karyote and illustrate some of its unique characteristics.
[ "cellular dynamics", "cell modeling", "multi-scale kinetics", "stoichiometric analysis", "bio-polymerization kinetics" ]
[ "P", "R", "M", "U", "M" ]
17xhVo5
Lexicography and degeneracy: can a pure cutting plane algorithm work?
We discuss an implementation of the lexicographic version of Gomory's fractional cutting plane method for ILP problems and of two heuristics mimicking the latter. In computational testing on a battery of MIPLIB problems we compare the performance of these variants with that of the standard Gomory algorithm, both in the single-cut and in the multi-cut (rounds of cuts) version, and show that they provide a radical improvement over the standard procedure. In particular, we report the exact solution of ILP instances from MIPLIB such as stein15, stein27, and bm23, for which the standard Gomory cutting plane algorithm is not able to close more than a tiny fraction of the integrality gap. We also offer an explanation for this surprising phenomenon.
[ "cutting plane methods", "gomory cuts", "degeneracy in linear programming", "lexicographic dual simplex", "computational analysis" ]
[ "P", "P", "M", "M", "M" ]
-1yqfHL
Valuing interdependent multi-stage IT investments: A real options approach
In this paper, we use the market asset disclaimer assumption and develop a binomial lattice based real options model to include cash flow interdependencies between multi-stage information technology (IT) investments. Using a simple two-stage IT investment problem with interdependent cash flows, we apply the binomial lattice based real options model to obtain combined valuation of the two-stage IT investment. In addition to investment valuation, our experience with the two-stage IT investment valuation suggests that the binomial lattice based real options model provides a powerful decision aid tool for appropriate timing, delaying and abandoning of the second-stage IT investment.
[ "real options", "capital budgeting", "it investment analysis" ]
[ "P", "U", "M" ]
4wU6R1K
Optimal call admission control on a single link with a GPS scheduler
The problem of call admission control (CAC) is considered for leaky bucket constrained sessions with deterministic service guarantees (zero loss and finite delay bound) served by a generalized processor sharing scheduler at a single node in the presence of best effort traffic. Based on an optimization process, a CAC algorithm capable of determining the (unique) optimal solution is derived. The derived algorithm is also applicable, under a slight modification, in a system where the best effort traffic is absent and is capable of guaranteeing that if it does not find a solution to the CAC problem, then a solution does not exist. The numerical results indicate that the CAC algorithm can achieve a significant improvement on bandwidth utilization as compared to a (deterministic) effective bandwidth-based CAC scheme.
[ "call admission control", "generalized processor sharing", "optimal bandwidth allocation" ]
[ "P", "P", "M" ]
4p374kr
Optimal Number of Active Users for Minimizing Average Data Delivery Delay in People-Centric Urban Sensing
We present a numerical analysis of the optimal number of active mobile users for minimizing average data delivery delay in intelligent people-centric urban sensing, in which context-aware mobile devices act as sensor-data carriers and sensor nodes act as data accumulators within CDMA cellular networks. In the analysis, we compute the optimal number of mobile users for different environmental conditions and then investigate the minimum average data delivery delay for this optimal number of mobile users.
[ "optimal number of active users", "urban sensing", "cdma", "minimum delay", "sensor data" ]
[ "P", "P", "P", "R", "R" ]
3c-gYAn
Optimizing reliability and service parts logistics for a time-varying installed base
Performance based contracting (PBC) emerges as a new after-sales service practice to support the operation and maintenance of capital equipment or systems. Under the PBC framework, the goal of the study is to increase the system operational availability while minimizing the logistics footprint through the design for reliability. We consider the situation where the number of installed systems randomly increases over the planning horizon, resulting in a non-stationary maintenance and repair demand. Renewal equation and Poisson process are used to estimate the aggregate fleet failures. We propose a dynamic stocking policy that adaptively replenishes the inventory to meet the time-varying parts demand. An optimization model is formulated and solved under a multi-phase adaptive inventory control policy. The study provides theoretical insights into the performance-driven service operation in the context of changing system fleet size due to new installations. Trade-offs between reliability design and inventory level are examined and compared in various shipment scenarios. Numerical examples drawn from semiconductor equipment industry are used to demonstrate the applicability and the performance of the proposed method.
[ "reliability", "repairable inventory", "life cycle cost analysis", "non-stationary demand", "performance based logistics" ]
[ "P", "R", "U", "R", "R" ]
4HYyL:U
On the complexity of some cryptographic problems based on the general decoding problem
A new probabilistic algorithm is proposed for decoding one received word from a set of many given received words into a codeword such that the Hamming distance between the received word and the codeword is at most t. The new algorithm is applicable to several cryptographic problems, such as the Stern identification scheme, the McEliece public-key cryptosystem, and correlation attacks on stream ciphers. When applicable, it runs significantly faster than previous algorithms used for attacks on these cryptosystems.
[ "complexity", "correlation attack", "decoding algorithms", "mceliece cryptosystem", "stern identification system" ]
[ "P", "P", "R", "R", "M" ]
-DUsDUa
The intentional relationship of representation between the constructs of a language and reality
Specifications of conceptualisations (ontologies) are often employed for representing reality, both in knowledge representation and software engineering. While languages offer sophisticated constructs and rigorous semantics for building conceptual entities, no attention is paid to the relationship between such entities and the world they intend to represent. This paper studies such a relationship and provides empirical evidence in favour of two main hypotheses: (1) conceptualisations are insufficient to fully represent the specifics of reality; (2) languages (both representation and design-oriented) are general representations of (classes of) systems in the world, and they can be characterised as scientific theories. The first hypothesis establishes a problem for which we propose a solution based on the explicit elaboration of statements claiming the similarity (in some respects and to certain degrees of accuracy) between conceptual entities and real-world systems of interest. The second hypothesis constitutes a new perspective for understanding languages, whose advantages to representation and design are discussed in detail.
[ "language definition (19.1)", "ontologies (26)", "conceptual modelling (7)", "knowledge representation techniques" ]
[ "M", "M", "M", "M" ]
4eiC:cm
evolution of three help desks
The IT environment in higher education is diverse in both the technology being used and the various constituents being supported. To help support this heterogeneous environment many universities and colleges have developed either a support center or a help desk to provide a nexus for support across campus and responsibilities within their IT departments. This paper describes the evolution of support centers and help desks at three different universities. The panel will facilitate discussion on the problems, obstacles, achievements, and unique ideas used by other institutions in their evolution of a support center.
[ "help desk", "higher education", "support center" ]
[ "P", "P", "P" ]
4QEhCdm
Program equivalence in a simple language with state
We examine different approaches to reasoning about program equivalence in a higher-order language which incorporates a basic notion of state: references of unit type (names). In particular, we present three such methods stemming from logical relations, bisimulation techniques and game semantics respectively. The methods are tested against a surprisingly difficult example equivalence at second order which exploits the intricacies of the language with respect to privacy and flow of names, and the ensuing notion of local state.
[ "program equivalence", "logical relations", "game semantics", "nominal computation", "higher-order computation and local state", "environmental bisimulations" ]
[ "P", "P", "P", "U", "M", "M" ]
2mh8Cnj
A substructure-based SAR model for odor perception in humans relevant to health risk assessments
The ability of humans to perceive odors is a very complex phenomenon involving the selective binding of molecules to approximately 1000 olfactory receptors. Accordingly, the derivation of a substructure-based SAR model can be expected to be problematic. Yet, based upon published data on odor thresholds of volatile organic chemicals, we were able to derive such an SAR model. An examination of the structural determinants and related modulators indicates that lipophilicity is a major contributor to olfactory perception. The availability of a substructure-based SAR model permits an examination of the relationship between the presence in the environment of odorous chemicals and public health risks.
[ "odor", "olfactory receptors", "lipophilicity", "structure-activity" ]
[ "P", "P", "P", "U" ]
4oHn5t&
Determination of the electronic parameters of nanostructure SnO2/p-Si diode
The current-voltage and capacitance-voltage characteristics of the nanostructure SnO2/p-Si diode have been investigated. The optical band gap and microstructure properties of the SnO2 film were analyzed by the optical absorption method and scanning electron microscopy, respectively. The optical band gap of the film was found to be 3.58 eV with a direct optical transition. The scanning electron microscopy results show that the SnO2 film has a nanostructure. The ideality factor, barrier height and series resistance of the nanostructure SnO2/p-Si diode were found to be 2.1, 0.87 eV and 36.35 kΩ, respectively. The barrier height obtained from the C-V measurement is higher than that obtained from the I-V measurement, and this discrepancy can be explained by introducing a spatial distribution of barrier heights due to barrier height inhomogeneities, which are present at the nanostructure SnO2/p-Si interface. The interface state density of the diode was determined by the conductance technique and was found to be 8.41×10^10 eV^-1 cm^-2. It is concluded that the nanostructure of the SnO2 film has an important effect on the ideality factor, barrier height and interface state density of the SnO2/p-Si diode.
[ "sno2", "solgel spin coating", "heterojunction diode" ]
[ "P", "U", "M" ]
3rSshTa
Using backpropagation neural network for face recognition with 2D+3D hybrid information
Biometric measurements received an increasing interest for security applications in the last two decades. After the 9/11 terrorist attacks, face recognition has been an active research topic in this area. However, very few research groups focus on face recognition from both 2D and 3D facial images. Almost all existing recognition systems rely on a single type of face information: 2D intensity (or color) image or 3D range data set [Wang, Y., Chua, C., & Ho, Y. (2002). Facial feature detection and face recognition from 2D and 3D images. Pattern Recognition Letters, 23, 1191 - 1202]. The objective of this study is to develop an effective face recognition system that extracts and combines 2D and 3D face features to improve the recognition performance. The proposed method derived the information of the 3D face (disparity face) using a designed synchronous Hopfield neural network. Then, we retrieved 2D and 3D face features with principle component analysis (PCA) and local autocorrelation coefficient (LAC) respectively. Eventually, the information of features was learned and classified using backpropagation neural networks. An experiment was conducted with 100 subjects, and for each subject thirteen stereo face images were taken with different expressions. Among them, seven faces with expressions were used for training, and the rest of the expressions were used for testing. The experimental results show that the proposed method effectively improved the recognition rate by combining the 2D with 3D face information.
[ "backpropagation neural network", "face recognition", "disparity face", "principle component analysis (pca)", "local autocorrelation coefficients (lac)", "stereovision" ]
[ "P", "P", "P", "P", "P", "U" ]
27iFoWb
Investigations of the development rate of irradiated PMMA microstructures in deep X-ray lithography
Deep X-ray lithography is a fabrication method for producing microstructures with high aspect ratios using resist layers of 100 µm to several millimeters. We have measured the development rates of non-crosslinked PMMA microstructures (minimum size 30 µm diameter) at room temperature for dose values between 2 and 16 kJ/cm3. We used dip development and megasonic supported development (10 W/cm2). The development rates cover three orders of magnitude, and the megasonic supported development rate is about four times larger than that for dip development.
[ "microstructure", "non-crosslinked pmma", "dip development", "megasonic supported development", "liga process", "gg developer" ]
[ "P", "P", "P", "P", "U", "M" ]
-dV5YmF
Classification schemes for positive solutions of nonlinear differential systems
Classification schemes for solutions of a class of two-dimensional nonlinear differential systems are given in terms of their asymptotic magnitudes, and necessary as well as sufficient conditions for the existence of these solutions are also provided.
[ "classification scheme", "positive solutions", "nonlinear differential systems", "existence", "two dimensional", "fixed-point theorems" ]
[ "P", "P", "P", "P", "U", "U" ]
4kY3::W
Generalized polynomial chaos decomposition and spectral methods for the stochastic Stokes equations
In this paper we propose and analyze a high order numerical method to solve the Stokes equations with random coefficients. A stochastic Galerkin approach, based on a finite dimensional Karhunen-Loeve decomposition technique for the stochastic inputs, is used to reduce the original stochastic Stokes equations into a set of deterministic equations for the expansion coefficients. Then a P_N x P_{N-2} spectral method, together with a block Jacobi iteration, is applied to solve the resulting problem. We establish the well-posedness of the weak formulation and its discrete counterpart. Moreover, we provide a rigorous convergence analysis and demonstrate exponential convergence with respect to the degree of the polynomial chaos expansion used for the approximation in the random direction. Finally, a series of numerical tests is presented to support the theoretical results.
[ "generalized polynomial chaos", "spectral methods", "stochastic stokes equations", "error estimate" ]
[ "P", "P", "P", "U" ]
3&ms7fK
Temperature impact on switching characteristics of resistive memory devices with HfOx/TiOx/HfOx stack dielectric
Forming-free switching behavior is observed in the HfOx/TiOx/HfOx stack. Temperature-dependent characteristics have been studied in the 298-373 K range. No significant dispersion in set and reset voltage is observed at high temperature. The conduction is due to the formation of a high density of localized O-vacancies.
[ "temperature impact", "resistive memory", "stack dielectric" ]
[ "P", "P", "P" ]