Dataset schema: name (string, 7-10 chars), title (string, 13-125 chars), abstract (string, 67-3.02k chars), fulltext (string, 1 class), keywords (string, 17-734 chars)
train_100
Separate accounts go mainstream [investment]
New entrants are shaking up the separate-account industry by supplying Web-based platforms that give advisers the tools to pick independent money managers
separate-account industry;web-based platforms;investment;financial advisors;independent money managers
train_1000
Does classicism explain universality? Arguments against a pure classical component of mind
One of the hallmarks of human cognition is the capacity to generalize over arbitrary constituents. Marcus (Cognition 66, p.153; Cognitive Psychology 37, p. 243, 1998) argued that this capacity, called "universal generalization" (universality), is not supported by connectionist models. Instead, universality is best explained by classical symbol systems, with connectionism as its implementation. Here it is argued that universality is also a problem for classicism in that the syntax-sensitive rules that are supposed to provide causal explanations of mental processes are either too strict, precluding possible generalizations, or too lax, providing no information as to the appropriate alternative. Consequently, universality is not explained by a classical theory
human cognition;connectionist models;classicism;universal generalization;mental processes;universality;syntax-sensitive rules;classical component of mind;causal explanations;classical symbol systems
train_1001
A conflict between language and atomistic information
Fred Dretske and Jerry Fodor are responsible for popularizing three well-known theses in contemporary philosophy of mind: the thesis of Information-Based Semantics (IBS), the thesis of Content Atomism (Atomism) and the thesis of the Language of Thought (LOT). LOT concerns the semantically relevant structure of representations involved in cognitive states such as beliefs and desires. It maintains that all such representations must have syntactic structures mirroring the structure of their contents. IBS is a thesis about the nature of the relations that connect cognitive representations and their parts to their contents (semantic relations). It holds that these relations supervene solely on relations of the kind that support information content, perhaps with some help from logical principles of combination. Atomism is a thesis about the nature of the content of simple symbols. It holds that each substantive simple symbol possesses its content independently of all other symbols in the representational system. I argue that Dretske's and Fodor's theories are false and that their falsehood results from a conflict between IBS and Atomism, on the one hand, and LOT, on the other
desires;ibs;cognitive states;beliefs;information-based semantics;language of thought;lot;philosophy of mind;content atomism
train_1002
Selective representing and world-making
We discuss the thesis of selective representing: the idea that the contents of the mental representations had by organisms are highly constrained by the biological niches within which the organisms evolved. While such a thesis has been defended by several authors elsewhere, our primary concern here is to take up the issue of the compatibility of selective representing and realism. We hope to show three things. First, that the notion of selective representing is fully consistent with the realist idea of a mind-independent world. Second, that not only are these two consistent, but that the latter (the realist conception of a mind-independent world) provides the most powerful perspective from which to motivate and understand the differing perceptual and cognitive profiles themselves. Third, that the (genuine and important) sense in which organism and environment may together constitute an integrated system of scientific interest poses no additional threat to the realist conception
organisms;mind-independent world;selective representing;realism;cognitive profiles;mental representations;world-making
train_1003
Löb's theorem as a limitation on mechanism
We argue that Löb's Theorem implies a limitation on mechanism. Specifically, we argue, via an application of a generalized version of Löb's Theorem, that any particular device known by an observer to be mechanical cannot be used as an epistemic authority (of a particular type) by that observer: either the belief-set of such an authority is not mechanizable or, if it is, there is no identifiable formal system of which the observer can know (or truly believe) it to be the theorem-set. This gives, we believe, an important and hitherto unnoticed connection between mechanism and the use of authorities by human-like epistemic agents
theorem-set;human-like epistemic agents;lob theorem;belief-set;limitation on mechanism;formal system;epistemic authority
train_1004
Games machines play
Individual rationality, or doing what is best for oneself, is a standard model used to explain and predict human behavior, and von Neumann-Morgenstern game theory is the classical mathematical formalization of this theory in multiple-agent settings. Individual rationality, however, is an inadequate model for the synthesis of artificial social systems where cooperation is essential, since it does not permit the accommodation of group interests other than as aggregations of individual interests. Satisficing game theory is based upon a well-defined notion of being good enough, and does accommodate group as well as individual interests through the use of conditional preference relationships, whereby a decision maker is able to adjust its preferences as a function of the preferences, and not just the options, of others. This new theory is offered as an alternative paradigm to construct artificial societies that are capable of complex behavior that goes beyond exclusive self interest
cooperation;game theory;conditional preference relationships;human behavior;multiple-agent;decision theory;group rationality;artificial social systems;individual rationality;self interest;artificial societies
train_1005
The average-case identifiability and controllability of large scale systems
Needs for increased product quality, reduced pollution, and reduced energy and material consumption are driving enhanced process integration. This increases the number of manipulated and measured variables required by the control system to achieve its objectives. This paper addresses the question of whether processes tend to become increasingly difficult to identify and control as the process dimension increases. Tools and results of multivariable statistics are used to show that, under a variety of assumed distributions on the elements, square processes of higher dimension tend to be more difficult to identify and control, whereas the expected controllability and identifiability of nonsquare processes depends on the relative numbers of measured and manipulated variables. These results suggest that the procedure of simplifying the control problem so that only a square process is considered is a poor practice for large scale systems
process control;chemical engineering;large scale systems;process identification;average-case controllability;high dimension square processes;multivariable statistics;manipulated variables;monte carlo simulations;measured variables;average-case identifiability;enhanced process integration;nonsquare processes
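The dimensional trend claimed here is easy to probe numerically. Below is a minimal Monte Carlo sketch (assuming, purely for illustration, i.i.d. standard normal gain elements; the paper considers several element distributions and proper controllability/identifiability measures, not just conditioning): the median condition number of a random square gain matrix grows with dimension, which is the intuition behind square processes becoming harder to identify and control.

```python
import numpy as np

rng = np.random.default_rng(0)

# Median condition number of random n x n steady-state gain matrices,
# a rough proxy for how hard a square process is to identify and control.
for n in [2, 5, 10, 20, 50]:
    conds = [np.linalg.cond(rng.standard_normal((n, n))) for _ in range(200)]
    print(f"n = {n:3d}   median cond = {np.median(conds):8.1f}")
```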
train_1006
Robust model-order reduction of complex biological processes
This paper addresses robust model-order reduction of a high dimensional nonlinear partial differential equation (PDE) model of a complex biological process. Based on a nonlinear, distributed parameter model of the same process which was validated against experimental data of an existing, pilot-scale biological nutrient removal (BNR) activated sludge plant, we developed a state-space model with 154 state variables. A general algorithm for robustly reducing the nonlinear PDE model is presented and, based on an investigation of five state-of-the-art model-order reduction techniques, we are able to reduce the original model to a model with only 30 states without incurring pronounced modelling errors. The singular perturbation approximation balanced truncating technique is found to give the lowest modelling errors in low frequency ranges and hence is deemed most suitable for controller design and other real-time applications
state-space model;nonlinear distributed parameter model;modelling errors;complex biological processes;pilot-scale bnr activated sludge plant;biological nutrient removal activated sludge processes;hankel singular values;high dimensional nonlinear partial differential equation model;singular perturbation approximation balanced truncating technique;controller design;robust model-order reduction
train_1007
Conditions for decentralized integral controllability
The term decentralized integral controllability (DIC) pertains to the existence of stable decentralized controllers with integral action that have closed-loop properties such as stable independent detuning. It is especially useful to select control structures systematically at the early stage of control system design because the only information needed for DIC is the steady-state process gain matrix. Here, a necessary and sufficient condition conjectured in the literature is proved. The real structured singular value which can exploit realness of the controller gain is used to describe computable conditions for DIC. The primary usage of DIC is to eliminate unworkable pairings. For this, two other simple necessary conditions are proposed. Examples are given to illustrate the effectiveness of the proposed conditions for DIC
systematic control structure selection;unworkable pairing elimination;stable independent detuning;real structured singular value;controller gain realness;necessary sufficient conditions;steady-state process gain matrix;closed-loop properties;integral action;stable decentralized controllers;control system design;schur complement;decentralized integral controllability
train_1008
Quadratic programming algorithms for large-scale model predictive control
Quadratic programming (QP) methods are an important element in the application of model predictive control (MPC). As larger and more challenging MPC applications are considered, more attention needs to be focused on the construction and tailoring of efficient QP algorithms. In this study, we tailor and apply a new QP method, called QPSchur, to large MPC applications, such as cross directional control problems in paper machines. Written in C++, QPSchur is an object oriented implementation of a novel dual space, Schur complement algorithm. We compare this approach to three widely applied QP algorithms and show that QPSchur is significantly more efficient (up to two orders of magnitude) than the other algorithms. In addition, detailed simulations are considered that demonstrate the importance of the flexible, object oriented construction of QPSchur, along with additional features for constraint handling, warm starts and partial solution
cross directional control problems;quadratic programming algorithms;dual space schur complement algorithm;constraint handling;partial solution;flexible object oriented construction;warm starts;simulations;large-scale model predictive control;qpschur;object oriented implementation;paper machines
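QPSchur itself is not described here in enough detail to reproduce, but the reason MPC hinges on QP solvers can be made concrete: over a prediction horizon, linear-quadratic MPC condenses into a dense QP in the stacked input sequence. A minimal sketch under assumed LTI dynamics with constraints omitted (handling inequality constraints efficiently is exactly where tailored solvers such as QPSchur matter):

```python
import numpy as np

def condensed_mpc_qp(A, B, Q, R, N, x0):
    """Build H, g so the N-step LQ-MPC cost equals U'HU + 2 g'U + const.
    Predictions are stacked as X = S U + T x0 with X = [x_1; ...; x_N]."""
    n, m = B.shape
    T = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
    S = np.zeros((N * n, N * m))
    for i in range(N):                  # block row i corresponds to x_{i+1}
        for j in range(i + 1):          # u_j reaches x_{i+1} through A^(i-j) B
            S[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qb, Rb = np.kron(np.eye(N), Q), np.kron(np.eye(N), R)
    H = S.T @ Qb @ S + Rb               # QP Hessian
    g = S.T @ Qb @ (T @ x0)             # linear term induced by the initial state
    return H, g

# usage: double integrator, horizon 10; the unconstrained minimiser U* = -H^{-1} g
A, B = np.array([[1.0, 1.0], [0.0, 1.0]]), np.array([[0.0], [1.0]])
H, g = condensed_mpc_qp(A, B, np.eye(2), 0.1 * np.eye(1), 10, np.array([1.0, 0.0]))
U = -np.linalg.solve(H, g)
```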
train_1009
Robust output feedback model predictive control using off-line linear matrix inequalities
A fundamental question about model predictive control (MPC) is its robustness to model uncertainty. In this paper, we present a robust constrained output feedback MPC algorithm that can stabilize plants with both polytopic uncertainty and norm-bound uncertainty. The design procedure involves off-line design of a robust constrained state feedback MPC law and a state estimator using linear matrix inequalities (LMIs). Since we employ an off-line approach for the controller design which gives a sequence of explicit control laws, we are able to analyze the robust stabilizability of the combined control laws and estimator, and by adjusting the design parameters, guarantee robust stability of the closed-loop system in the presence of constraints. The algorithm is illustrated with two examples
model uncertainty robustness;explicit control law sequence;closed-loop system;robust constrained state feedback mpc law;robust constrained output feedback mpc algorithm;off-line linear matrix inequalities;robust output feedback model predictive control;asymptotically stable invariant ellipsoid;polytopic uncertainty;norm-bound uncertainty;controller design procedure;state estimator
train_1010
Robust self-tuning PID controller for nonlinear systems
In this paper, we propose a robust self-tuning PID controller suitable for nonlinear systems. The control system employs a preload relay (P_Relay) in series with a PID controller. The P_Relay ensures a high gain to yield a robust performance. However, it also incurs a chattering phenomenon. Instead of viewing the chattering as an undesirable yet inevitable feature, we use it as a naturally occurring signal for tuning and re-tuning the PID controller as the operating regime changes. No other explicit input signal is required. Once the PID controller is tuned for a particular operating point, the relay may be disabled and chattering ceases correspondingly. However, it is invoked when there is a change in setpoint to another operating regime. In this way, the approach is also applicable to time-varying systems as the PID tuning can be continuous, based on the latest set of chattering characteristics. Analysis is provided on the stability properties of the control scheme. Simulation results for the level control of fluid in a spherical tank using the scheme are also presented
robust performance;stability properties;relay disabling;controller re-tuning;robust self-tuning pid controller;simulation results;fluid level control;operating regime;time-varying systems;continuous tuning;controller tuning;chattering phenomenon;naturally occurring signal;nonlinear systems;spherical tank;preload relay
train_1011
A self-organizing context-based approach to the tracking of multiple robot trajectories
We have combined competitive and Hebbian learning in a neural network designed to learn and recall complex spatiotemporal sequences. In such sequences, a particular item may occur more than once or the sequence may share states with another sequence. Processing of repeated/shared states is a hard problem that occurs very often in the domain of robotics. The proposed model consists of two groups of synaptic weights: competitive interlayer and Hebbian intralayer connections, which are responsible for encoding respectively the spatial and temporal features of the input sequence. Three additional mechanisms allow the network to deal with shared states: context units, neurons disabled from learning, and redundancy used to encode sequence states. The network operates by determining the current and the next state of the learned sequences. The model is simulated over various sets of robot trajectories in order to evaluate its storage and retrieval abilities; its sequence sampling effects; its robustness to noise and its fault tolerance
hebbian intralayer connections;context units;self-organizing context-based approach;competitive learning;sequence sampling effects;shared states;unsupervised learning;sequence states;trajectories tracking;storage abilities;robot trajectories;fault tolerance;synaptic weights;retrieval abilities;complex spatiotemporal sequences;competitive interlayer connections;hebbian learning
train_1012
Evolving receptive-field controllers for mobile robots
The use of evolutionary methods to generate controllers for real-world autonomous agents has attracted attention. Most of the pertinent research has employed genetic algorithms or variations thereof. Other research has applied an alternative evolutionary method, evolution strategies, to the generation of simple Braitenberg vehicles. This application accelerates the development of such controllers by more than an order of magnitude (a few hours compared to more than two days). Motivated by this useful speedup, the paper investigates the evolution of more complex architectures, receptive-field controllers, that can employ nonlinear interactions and, therefore, can yield more complex behavior. It is interesting to note that the evolution strategy yields the same efficacy in terms of function evaluations, even though the second class of controllers requires up to 10 times more parameters than the simple Braitenberg architecture. In addition to the speedup, there is an important theoretical reason for preferring an evolution strategy over a genetic algorithm for this problem, namely the presence of epistasis
evolutionary methods;simple braitenberg vehicles;scalability;nonlinear interactions;complex behavior;evolution strategies;real-world autonomous agents;mobile robots;radial basis functions;receptive-field controllers
train_1013
A scalable intelligent takeoff controller for a simulated running jointed leg
Running with jointed legs poses a difficult control problem in robotics. Neural controllers are attractive because they allow the robot to adapt to changing environmental conditions. However, scalability is an issue with many neural controllers. The paper describes the development of a scalable neurofuzzy controller for the takeoff phase of the running stride. Scalability is achieved by selecting a controller whose size does not grow with the dimensionality of the problem. Empirical results show that with proper design the takeoff controller scales from a leg with a single movable link to one with three movable links without a corresponding growth in size and without a loss of accuracy
neural controllers;scalability;simulated running jointed leg;intelligent robotic control;running stride;scalable neurofuzzy controller;changing environmental conditions;takeoff phase;scalable intelligent takeoff controller
train_1014
Modelling of complete robot dynamics based on a multi-dimensional, RBF-like neural architecture
A neural network based identification approach of manipulator dynamics is presented. For a structured modelling, RBF-like static neural networks are used in order to represent and adapt all model parameters with their non-linear dependences on the joint positions. The neural architecture is hierarchically organised to reach optimal adjustment to structural a priori knowledge about the identification problem. The model structure is substantially simplified by general system analysis independent of robot type, while many specific features of the utilised experimental robot are also taken into account. A fixed, grid based neuron placement together with application of B-spline polynomial basis functions is utilised favourably for a very effective recursive implementation of the neural architecture. Thus, an online identification of a dynamic model is presented for a complete 6 joint industrial robot
complete 6 joint industrial robot;online identification;fixed grid based neuron placement;online learning;multi-dimensional rbf-like neural architecture;recursive implementation;manipulator dynamics;general system analysis;neural architecture;complete robot dynamics;static neural networks;dynamic model;b-spline polynomial basis functions
train_1015
Scalable techniques from nonparametric statistics for real time robot learning
Locally weighted learning (LWL) is a class of techniques from nonparametric statistics that provides useful representations and training algorithms for learning about complex phenomena during autonomous adaptive control of robotic systems. The paper introduces several LWL algorithms that have been tested successfully in real-time learning of complex robot tasks. We discuss two major classes of LWL, memory-based LWL and purely incremental LWL that does not need to remember any data explicitly. In contrast to the traditional belief that LWL methods cannot work well in high-dimensional spaces, we provide new algorithms that have been tested on up to 90 dimensional learning problems. The applicability of our LWL algorithms is demonstrated in various robot learning examples, including the learning of devil-sticking, pole-balancing by a humanoid robot arm, and inverse-dynamics learning for a seven and a 30 degree-of-freedom robot. In all these examples, the application of our statistical neural networks techniques allowed either faster or more accurate acquisition of motor control than classical control engineering
real time robot learning;purely incremental learning;autonomous adaptive control;scalable techniques;memory-based learning;inverse-dynamics learning;nonparametric regression;locally weighted learning;humanoid robot arm;pole-balancing;devil-sticking;nonparametric statistics;complex phenomena;statistical neural networks techniques;training algorithms
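The memory-based branch of LWL can be sketched in a few lines: keep all training data and, at each query, fit a weighted linear model with a distance-based kernel. This is a minimal locally weighted regression sketch with an assumed Gaussian kernel and ridge term; the incremental, high-dimensional variants discussed in the paper add substantially more machinery.

```python
import numpy as np

def lwr_predict(X, y, x_query, h=0.5):
    """Locally weighted linear regression at a single query point.
    X: (n, d) inputs, y: (n,) targets, h: kernel bandwidth."""
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * h ** 2))
    Xa = np.hstack([X, np.ones((len(X), 1))])             # affine local model
    A = Xa.T @ (w[:, None] * Xa)
    b = Xa.T @ (w * y)
    beta = np.linalg.solve(A + 1e-8 * np.eye(A.shape[0]), b)  # small ridge for stability
    return np.append(x_query, 1.0) @ beta

# toy usage: learn y = sin(x) from noisy samples
rng = np.random.default_rng(1)
X = rng.uniform(0, 2 * np.pi, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
print(lwr_predict(X, y, np.array([1.5]), h=0.3))          # near sin(1.5) ~ 0.997
```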
train_1016
A scalable model of cerebellar adaptive timing and sequencing: the recurrent slide and latch (RSL) model
From the dawn of modern neural network theory, the mammalian cerebellum has been a favored object of mathematical modeling studies. Early studies focused on the fanout, convergence, thresholding, and learned weighting of perceptual-motor signals within the cerebellar cortex. This led to the still viable idea that the granule cell stage in the cerebellar cortex performs a sparse expansive recoding of the time-varying input vector. This recoding reveals and emphasizes combinations in a distributed representation that serves as a basis for the learned, state-dependent control actions engendered by cerebellar outputs to movement related centers. To make optimal use of available signals, the cerebellum must be able to sift the evolving state representation for the most reliable predictors of the need for control actions, and to use those predictors even if they appear only transiently and well in advance of the optimal time for initiating the control action. The paper proposes a modification to prior population models for cerebellar adaptive timing and sequencing. Since it replaces a population with a single element, the proposed RSL model is in one sense maximally efficient, and therefore optimal from the perspective of scalability
recurrent slide and latch model;recurrent network;sparse expansive recoding;mammalian cerebellum;cerebellar sequencing;time-varying input vector;cerebellar adaptive timing;scalable model;neural network theory;granule cell stage;distributed representation
train_1017
Searching a scalable approach to cerebellar based control
Decades of research into the structure and function of the cerebellum have led to a clear understanding of many of its cells, as well as how learning might take place. Furthermore, there are many theories on what signals the cerebellum operates on, and how it works in concert with other parts of the nervous system. Nevertheless, the application of computational cerebellar models to the control of robot dynamics remains in its infancy. To date, few applications have been realized. The currently emerging family of light-weight robots poses a new challenge to robot control: due to their complex dynamics, traditional methods, which depend on a full analysis of the dynamics of the system, are no longer applicable, since the joints influence each other's dynamics during movement. Can artificial cerebellar models compete here?
nervous system;light-weight robots;cerebellar based control;robot control;computational cerebellar models;scalable approach
train_1018
Fabrication of polymeric microlens of hemispherical shape using micromolding
Polymeric microlenses play an important role in reducing the size, weight, and cost of optical data storage and optical communication systems. We fabricate polymeric microlenses using the microcompression molding process. The design and fabrication procedures for mold insertion are simplified by using silicon instead of metal. PMMA powder is used as the molding material. Governed by process parameters such as temperature and pressure histories, the micromolding process is controlled to minimize various defects that develop during the molding process. The radius of curvature and magnification ratio of the fabricated microlens are measured as 150 µm and over 3.0, respectively
microcompression molding process;weight;optical communication systems;size;fabrication procedures;optical data storage;molding material;design procedures;temperature;polymeric microlens fabrication;micromolding;mold insertion;cost;micromolding process;pressure;process parameters;300 micron;magnification ratio;polymeric microlenses;pmma powder;silicon;hemispherical shape microlens
train_1019
Optical setup and analysis of disk-type photopolymer high-density holographic storage
A relatively simple scheme for disk-type photopolymer high-density holographic storage based on angular and spatial multiplexing is described. The effects of the optical setup on the recording capacity and density are studied. Calculations and analysis show that this scheme is more effective than a scheme based on spatioangular multiplexing for disk-type photopolymer high-density holographic storage, which has a limited medium thickness. Also an optimal beam recording angle exists to achieve maximum recording capacity and density
optimal beam recording angle;spatial multiplexing;spatio-angular multiplexing;disk-type photopolymer high-density holographic storage;angular multiplexing;recording capacity;recording density;optical setup;limited medium thickness;maximum recording capacity;maximum density
train_102
Harmless delays in Cohen-Grossberg neural networks
Without assuming monotonicity or differentiability of the activation functions, or any symmetry of interconnections, we establish some sufficient conditions for the globally asymptotic stability of a unique equilibrium for the Cohen-Grossberg (1983) neural network with multiple delays. Lyapunov functionals and functions combined with the Razumikhin technique are employed. The criteria are all independent of the magnitudes of the delays, and thus the delays under these conditions are harmless
multiple delays;activation functions;monotonicity;differentiability;razumikhin technique;harmless delays;cohen-grossberg neural networks;lyapunov functionals;interconnections;globally asymptotic stability
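For reference, the delayed Cohen-Grossberg system that such stability conditions concern is usually written as below (a generic statement of the model, not necessarily the paper's exact notation):

$$ \dot{x}_i(t) = -a_i\big(x_i(t)\big)\Big[\, b_i\big(x_i(t)\big) - \sum_{j=1}^{n} t_{ij}\, s_j\big(x_j(t-\tau_{ij})\big) \Big], \qquad i = 1,\dots,n, $$

with amplification functions a_i > 0, self-signal functions b_i, interconnection weights t_{ij}, activation functions s_j, and delays τ_{ij} ≥ 0; the point of the result is that the s_j need not be monotone or differentiable and the matrix (t_{ij}) need not be symmetric.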
train_1020
Supersampling multiframe blind deconvolution resolution enhancement of adaptive optics compensated imagery of low earth orbit satellites
We describe a postprocessing methodology for reconstructing undersampled image sequences with randomly varying blur that can provide image enhancement beyond the sampling resolution of the sensor. This method is demonstrated on simulated imagery and on adaptive-optics-(AO)-compensated imagery taken by the Starfire Optical Range 3.5-m telescope that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest quality optical imagery of low earth orbit satellites collected with a ground-based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques that include a representation of spatial sampling by the focal plane array elements based on a forward stochastic model. This generalization enables the random shifts and shape of the AO-compensated point spread function (PSF) to be used to partially eliminate the aliasing effects associated with sub-Nyquist sampling of the image by the focal plane array. The method could be used to reduce resolution loss that occurs when imaging in wide-field-of-view (FOV) modes
randomly varying blur;starfire optical range telescope;sub-nyquist sampling;wide-field-of-view modes;spatial sampling;random shifts;image enhancement;postprocessing methodology;multiframe blind deconvolution;sensor sampling resolution;simulated imagery;ground-based telescope;forward stochastic model;undersampled image sequence reconstruction;focal plane array elements;resolution loss;3.5 m;adaptive optics compensated imagery;aliasing effects;low earth orbit satellites;ao-compensated point spread function;supersampling multiframe blind deconvolution resolution enhancement
train_1021
Error-probability analysis of MIL-STD-1773 optical fiber data buses
We have analyzed the error probabilities of MIL-STD-1773 optical fiber data buses with three modulation schemes, namely, original Manchester II bi-phase coding, PTMBC, and EMBC-BSF. Using the derived error-probability expressions, we can also compare the receiver sensitivities of such optical fiber data buses
receiver sensitivities;manchester bi-phase coding;optical fiber data buses;modulation schemes;error probabilities
train_1022
Bad pixel identification by means of principal components analysis
Bad pixels are defined as those pixels showing a temporal evolution of the signal different from the rest of the pixels of a given array. Principal component analysis helps us to understand the definition of a statistical distance associated with each pixel, and using this distance it is possible to identify those pixels labeled as bad pixels. The spatiality of a pixel is also calculated. An assumption about the normality of the distribution of the distances of the pixels is revised. Although the influence on the robustness of the identification algorithm is negligible, the definition of a parameter related to this nonnormality helps to identify those principal components and eigenimages responsible for the departure from a multinormal distribution. The method for identifying the bad pixels is successfully applied to a set of frames obtained from a CCD visible and a focal plane array (FPA) IR camera
principal components analysis;robustness;bad pixel identification;multinormal distribution;ccd visible camera;temporal evolution;statistical distance;focal plane array;ir camera;identification algorithm;eigenimages
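The core of the procedure can be reproduced compactly: treat each pixel's temporal signal as an observation, project onto principal components, and flag pixels whose standardized distance in component space is anomalous. A minimal sketch (the threshold, the component count, and the paper's non-normality parameter and spatiality measure are assumptions or omissions here):

```python
import numpy as np

def bad_pixel_mask(frames, k=5, thresh=4.0):
    """frames: (t, h, w) image stack; returns a boolean (h, w) bad-pixel mask.
    Requires k <= t."""
    t, h, w = frames.shape
    X = frames.reshape(t, -1).T.astype(float)   # one row per pixel: its temporal evolution
    X -= X.mean(axis=0)                         # center each frame across the pixel population
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:k].T                       # projection onto the first k components
    scores /= scores.std(axis=0)                # standardize each component
    dist = np.sqrt((scores ** 2).sum(axis=1))   # statistical distance per pixel
    return (dist > thresh).reshape(h, w)
```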
train_1023
Simple nonlinear dual-window operator for edge detection
We propose a nonlinear edge detection technique based on a two-concentric-circular-window operator. We perform a preliminary selection of edge candidates using a standard gradient and use the dual-window operator to reveal edges as zero-crossing points of a simple difference function depending only on the minimum and maximum values in the two windows. Comparisons with other well-established techniques are reported in terms of visual appearance and computational efficiency. They show that the detected edges are comparable with those of the Canny and Laplacian-of-Gaussian algorithms, with a noteworthy reduction in computational load
canny's algorithms;difference function;edge detection;nonlinear dual-window operator;gaussian algorithms;two-concentric-circular-window operator;nonlinear processing;maximum values;standard gradient;computational load;minimum values;dual window operator;zero-crossing points;laplacian algorithms;nonlinear edge detection technique;detected edges;computational efficiency
train_1024
Rational systems exhibit moderate risk aversion with respect to "gambles" on variable-resolution compression
In an embedded wavelet scheme for progressive transmission, a tree structure naturally defines the spatial relationship on the hierarchical pyramid. Transform coefficients over each tree correspond to a unique local spatial region of the original image, and they can be coded bit-plane by bit-plane through successive-approximation quantization. After receiving the approximate value of some coefficients, the decoder can obtain a reconstructed image. We show a rational system for progressive transmission that, in the absence of a priori knowledge about regions of interest, chooses at any truncation time among alternative trees for further transmission in such a way as to avoid certain forms of behavioral inconsistency. We prove that some rational transmission systems might exhibit aversion to risk involving "gambles" on tree-dependent quality of encoding while others favor taking such risks. Based on an acceptable predictor for visual distinctness from digital imagery, we demonstrate that, without any outside knowledge, risk-prone systems as well as those with strong risk aversion appear incapable of attaining the quality of reconstructions that can be achieved with moderate risk-averse behavior
tree structure;hierarchical pyramid spatial relationship;progressive transmission;digital imagery;decision problem;progressive transmission utility functions;moderate risk aversion;behavioral inconsistency avoidance;gambles;visual distinctness;variable-resolution compression;image encoding;reconstructed image;transform coefficients;truncation time;rational system;embedded wavelet scheme;local spatial region;successive-approximation quantization;information theoretic measure;rate control optimization;acceptable predictor;embedded coding
train_1025
Watermarking techniques for electronic delivery of remote sensing images
Earth observation missions have recently attracted a growing interest, mainly due to the large number of possible applications capable of exploiting remotely sensed data and images. Along with the increase of market potential, the need arises for the protection of the image products. Such a need is a crucial one, because the Internet and other public/private networks have become preferred means of data exchange. A critical issue arising when dealing with digital image distribution is copyright protection. Such a problem has been largely addressed by resorting to watermarking technology. A question that obviously arises is whether the requirements imposed by remote sensing imagery are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: assessment of the requirements imposed by remote sensing applications on watermark-based copyright protection, and modification of two well-established digital watermarking techniques to meet such constraints. More specifically, the concept of near-lossless watermarking is introduced and two possible algorithms matching such a requirement are presented. Experimental results are shown to measure the impact of watermark introduction on a typical remote sensing application, i.e., unsupervised image classification
near-lossless watermarking;remote sensing images;electronic delivery;digital image distribution;digital watermarking;watermarking techniques;copyright protection;earth observation missions;unsupervised image classification
train_1026
Use of SPOT images as a tool for coastal zone management and monitoring of environmental impacts in the coastal zone
Modern techniques such as remote sensing have been among the main factors leading toward the achievement of serious plans regarding coastal management. A multitemporal analysis of land use in certain areas of the Colombian Caribbean Coast is described. It mainly focuses on environmental impacts caused by anthropogenic activities, such as deforestation of mangroves due to shrimp farming. Selection of sensitive areas, percentage of destroyed mangroves, possible endangered areas, etc., are some of the results of this analysis. Recommendations for a coastal management plan in the area have also resulted from this analysis. Some other consequences of the deforestation of mangroves in the coastal zone and the construction of shrimp ponds are also analyzed, such as the increase of erosion problems in these areas and water pollution, among others. The increase of erosion in these areas has also changed part of their morphology, which has been studied by the analysis of SPOT images in previous years. A serious concern exists about the future of these areas. For this reason new techniques like satellite images (SPOT) have been applied with good results, leading to more effective control and coastal management in the area. The use of SPOT images to study changes of the land use of the area is a useful technique to determine patterns of human activities and suggest solutions for severe problems in these areas
colombian caribbean coast;satellite images;remote sensing;erosion problems;shrimp ponds;endangered areas;shrimp farming;vector overlay;anthropogenic activities;sedimentation;water pollution;human activities;land use;coastal zone management;mangrove deforestation;supervised classification;spot images;environmental impact monitoring;multitemporal analysis;vectorization
train_1027
Extracting straight road structure in urban environments using IKONOS satellite imagery
We discuss a fully automatic technique for extracting roads in urban environments. The method has its bases in a vegetation mask derived from multispectral IKONOS data and in texture derived from panchromatic IKONOS data. These two techniques together are used to distinguish road pixels. We then move from individual pixels to an object-based representation that allows reasoning on a higher level. Recognition of individual segments and intersections and the relationships among them are used to determine underlying road structure and to then logically hypothesize the existence of additional road network components. We show results on an image of San Diego, California. The object-based processing component may be adapted to utilize other basis techniques as well, and could be used to build a road network in any scene having a straight-line structured topology
object-based processing component;vegetation mask;san diego;panchromatic ikonos data;straight road structure;individual segment recognition;fully automatic technique;vectorized road network;high-resolution imagery;road network components;large-scale feature extraction;urban environments;ikonos satellite imagery;object-based representation;texture;higher level reasoning;straight-line structured topology;road pixels
train_1028
Novel approach to super-resolution pits readout
We propose a novel method to realize the readout of super-resolution pits by using a super-resolution reflective film to replace the reflective layer of the conventional ROM. At the same time, by using Sb as the super-resolution reflective layer and SiN as a dielectric layer, the super-resolution pits with diameters of 380 nm were read out by a setup whose laser wavelength is 632.8 nm and numerical aperture is 0.40. In addition, the influence of the Sb thin film thickness on the readout signal was investigated; the results showed that the optimum Sb thin film thickness is 28 to 30 nm, and the maximum CNR is 38 to 40 dB
super-resolution reflective film;sb super-resolution reflective layer;numerical aperture;632.8 nm;sb-sin;380 nm;readout signal;sin dielectric layer;sb thin film thickness;super-resolution pits readout;maximum cnr;28 to 30 nm
train_1029
Effect of insulation layer on transcribability and birefringence distribution in optical disk substrate
As the need for information storage media with high storage density increases, digital video disks (DVDs) with smaller recording marks and thinner optical disk substrates than those of conventional DVDs are required. Therefore, improving the replication quality of land-groove or pit structure and reducing the birefringence distribution are emerging as important criteria in the fabrication of high-density optical disk substrates. We control the transcribability and distribution of birefringence by inserting an insulation layer under the stamper during injection-compression molding of DVD RAM substrates. The effects of the insulation layer on the geometrical and optical properties, such as transcribability and birefringence distribution, are examined experimentally. The inserted insulation layer is found to be very effective in improving the quality of replication and leveling out the first peak of the gapwise birefringence distribution near the mold wall and reducing the average birefringence value, because the insulation layer retarded the growth of the solidified layer
land-groove;information storage media;optical properties;mold wall;dvd ram substrates;stamper;injection-compression molding;thinner optical disk substrates;insulation layer;gapwise birefringence distribution;polyimide thermal insulation layer;solidified layer growth retardation;transcribability;smaller recording marks;pit structure;birefringence distribution;high storage density;digital video disks;optical disk substrate;fabrication;geometrical properties;replication quality
train_1030
Comparison of automated digital elevation model extraction results using along-track ASTER and across-track SPOT stereo images
A digital elevation model (DEM) can be extracted automatically from stereo satellite images. During the past decade, the most common satellite data used to extract DEM was the across-track SPOT. Recently, the addition of along-track ASTER data, which can be downloaded freely, provides another attractive alternative to extract DEM data. This work compares automated DEM extraction results from an ASTER stereo pair and a SPOT stereo pair over an area of hilly mountains in Drum Mountain, Utah, against a USGS 7.5-min DEM standard product. The result shows that SPOT produces better DEM results in terms of accuracy and details, if the radiometric variations between the images, taken on subsequent satellite revolutions, are small. Otherwise, the ASTER stereo pair is a better choice because of simultaneous along-track acquisition during a single pass. Compared to the USGS 7.5-min DEM, the ASTER and the SPOT extracted DEMs have a standard deviation of 11.6 and 4.6 m, respectively
automated digital elevation model extraction;radiometric variations;across-track spot stereo images;stereo satellite images;along-track aster data;spot stereo image pair;aster stereo pair;simultaneous along-track acquisition
train_1031
Noise-constrained hyperspectral data compression
Storage and transmission requirements for hyperspectral data sets are significant. To reduce hardware costs, well-designed compression techniques are needed to preserve information content while maximizing compression ratios. Lossless compression techniques maintain data integrity, but yield small compression ratios. We present a slightly lossy compression algorithm that uses the noise statistics of the data to preserve information content while maximizing compression ratios. The adaptive principal components analysis (APCA) algorithm uses noise statistics to determine the number of significant principal components and selects only those that are required to represent each pixel to within the noise level. We demonstrate the effectiveness of these methods with airborne visible/infrared spectrometer (AVIRIS), hyperspectral digital imagery collection experiment (HYDICE), hyperspectral mapper (HYMAP), and Hyperion datasets
airborne visible/infrared spectrometer hyperspectral digital imagery collection experiment;transmission requirements;hardware costs;slightly lossy compression algorithm;hymap;storage requirements;noise-constrained hyperspectral data compression;data integrity;aviris hydice;hyperion datasets;adaptive principal components analysis algorithm;information content;hyperspectral mapper;noise statistics;lossless compression techniques;gaussian statistics;hyperspectral data sets;noise level;compression ratios
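The selection rule at the heart of APCA, keeping principal components only until the residual drops to the noise floor, can be sketched as follows. This assumes a flattened (pixels × bands) matrix and a single scalar noise standard deviation; the actual algorithm's per-band noise modelling is richer.

```python
import numpy as np

def apca_compress(X, noise_std):
    """Keep principal components until the mean residual variance per entry
    falls below the noise level. X: (n_pixels, n_bands)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    n_entries = X.shape[0] * X.shape[1]
    for k in range(1, len(s) + 1):
        resid_var = (s[k:] ** 2).sum() / n_entries   # variance left out of the first k PCs
        if resid_var <= noise_std ** 2:
            break
    return U[:, :k] * s[:k], Vt[:k], mu              # scores, loadings, band means
```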
train_1032
Satellite image collection optimization
Imaging satellite systems represent a high capital cost. Optimizing the collection of images is critical for both satisfying customer orders and building a sustainable satellite operations business. We describe the functions of an operational, multivariable, time dynamic optimization system that maximizes the daily collection of satellite images. A graphical user interface allows the operator to quickly see the results of what if adjustments to an image collection plan. Used for both long range planning and daily collection scheduling of Space Imaging's IKONOS satellite, the satellite control and tasking (SCT) software allows collection commands to be altered up to 10 min before upload to the satellite
satellite control tasking software;long range planning;collection commands;graphical user interface;satellite image collection optimization;imaging satellite systems;daily collection scheduling;space imaging ikonos satellite;image collection plan;multivariable time dynamic optimization system
train_1033
Optical two-step modified signed-digit addition based on binary logic gates
A new modified signed-digit (MSD) addition algorithm based on binary logic gates is proposed for parallel computing. It is shown that by encoding each of the input MSD digits and flag digits into a pair of binary bits, the number of addition steps can be reduced to two. The flag digit is introduced to characterize the next low order pair (NLOP) of the input digits in order to suppress carry propagation. The rules for two-step addition of binary coded MSD (BCMSD) numbers are formulated that can be implemented using optical shadow-casting logic system
binary bits;two-step addition;input msd digits;modified signed-digit addition algorithm;optical two-step modified signed-digit addition;flag digits;optical shadow-casting logic system;addition steps;low order pair;carry propagation suppression;binary logic gates;binary coded msd;parallel computing
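The two-step, carry-free structure can be illustrated in ordinary software. The sketch below is a standard signed-digit addition in which the choice of transfer and interim digit at each position is steered by the next low order pair (NLOP), so the second step is a purely digitwise sum with no carry propagation; the paper's specific binary encoding of digits for optical shadow-casting gates is not reproduced.

```python
def msd_add(a, b):
    """Carry-free addition of equal-length MSD numbers (digits -1/0/1, MSB first).
    Step 1 picks a transfer t and interim digit w per position, using the
    next low order pair (NLOP) so that step 2 can never overflow."""
    n = len(a)
    s = [a[i] + b[i] for i in range(n)]              # position sums, each in -2..2
    t, w = [0] * n, [0] * n                          # t[i] carries one place left
    for i in range(n):
        nlop = s[i + 1] if i + 1 < n else 0          # treat "beyond LSB" as 0
        if s[i] == 2:    t[i], w[i] = 1, 0
        elif s[i] == 1:  t[i], w[i] = (1, -1) if nlop >= 0 else (0, 1)
        elif s[i] == -1: t[i], w[i] = (0, -1) if nlop >= 0 else (-1, 1)
        elif s[i] == -2: t[i], w[i] = -1, 0
    # step 2: digitwise, carry-free combination
    return [t[0]] + [w[i] + (t[i + 1] if i + 1 < n else 0) for i in range(n)]

def msd_value(d):                                    # for checking against integers
    v = 0
    for x in d:
        v = 2 * v + x
    return v

assert msd_value(msd_add([1, 0, -1], [0, 1, 1])) == 3 + 3
```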
train_1034
Vibration control of the rotating flexible-shaft/multi-flexible-disk system with the eddy-current damper
In this paper, the rotating flexible-Timoshenko-shaft/flexible-disk coupling system is formulated by applying the assumed-mode method to the kinetic and strain energies, and the virtual work done by the eddy-current damper. From Lagrange's equations, the resulting discretized equations of motion can be simplified as a bilinear system (BLS). Introducing the control laws, including the quadratic, nonlinear and optimal feedback control laws, into the BLS, it is found that the eddy-current damper can be used to suppress flexible and shear vibrations simultaneously, and the system is globally asymptotically stable. Numerical results are provided to validate the theoretical analysis
shear vibrations;quadratic feedback control laws;rotating flexible-shaft/multi-flexible-disk system;flexible vibrations;bilinear system;nonlinear feedback control laws;discretized equations of motion;assumed-mode method;lagrange's equations;virtual work;optimal feedback control laws;eddy-current damper;rotating flexible-timoshenko-shaft/flexible-disk coupling system
train_1035
H₂ optimization of the three-element type dynamic vibration absorbers
The dynamic vibration absorber (DVA) is a passive vibration control device which is attached to a vibrating body (called a primary system) subjected to exciting force or motion. In this paper, we will discuss an optimization problem of the three-element type DVA on the basis of the H₂ optimization criterion. The objective of the H₂ optimization is to reduce the total vibration energy of the system for overall frequencies; the total area under the power spectrum response curve is minimized in this criterion. If the system is subjected to random excitation instead of sinusoidal excitation, then the H₂ optimization is probably more desirable than the popular H∞ optimization. In the past decade there has been increasing interest in the three-element type DVA. However, most previous studies on this type of DVA were based on the H∞ optimization design, and no one has been able to find the algebraic solution as of yet. We found a closed-form exact solution for a special case where the primary system has no damping. Furthermore, the general case solution including the damped primary system is presented in the form of a numerical solution. The optimum parameters obtained here are compared to those of the conventional Voigt type DVA. They are also compared to other optimum parameters based on the H∞ criterion
voigt type dynamic vibration absorber;h2 optimization;power spectrum response;three-element type dynamic vibration absorbers;passive vibration control
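Generically, the H₂ criterion referred to here minimizes the total area under the power spectral response: with G(jω) the frequency response from the excitation to the primary-system motion, the absorber parameters are chosen to minimize

$$ J_{H_2} \;=\; \frac{1}{2\pi} \int_{-\infty}^{\infty} \lvert G(j\omega)\rvert^{2}\, \mathrm{d}\omega , $$

which under white-noise excitation is proportional to the total vibration energy; the H∞ design instead minimizes the resonance peak max_ω |G(jω)|. (The three-element DVA's specific transfer function is not reproduced here.)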
train_1036
Nonlinear control of a shape memory alloy actuated manipulator
This paper presents a nonlinear, robust control algorithm for accurate positioning of a single degree of freedom rotary manipulator actuated by Shape Memory Alloy (SMA). A model for an SMA actuated manipulator is presented. The model includes nonlinear dynamics of the manipulator, a constitutive model of Shape Memory Alloy, and electrical and heat transfer behavior of SMA wire. This model is used for open and closed loop motion simulations of the manipulator. Experiments are presented that show results similar to both closed and open loop simulation results. Due to modeling uncertainty and nonlinear behavior of the system, classic control methods such as Proportional-Integral-Derivative control are not able to provide fast and accurate performance. Hence a nonlinear, robust control algorithm is presented based on Variable Structure Control. This algorithm is a control gain switching technique based on the weighted average of position and velocity feedbacks. This method has been designed through simulation and tested experimentally. Results show fast, accurate, and robust performance of the control system. Computer simulation and experimental results for different stabilization and tracking situations are also presented
feedback;control gain switching;manipulator;stabilization;shape memory alloy;variable structure control;positioning;tracking;open loop;nonlinear dynamics;nonlinear control;closed loop
train_1037
A stochastic averaging approach for feedback control design of nonlinear systems under random excitations
This paper presents a method for designing and quantifying the performance of feedback stochastic controls for nonlinear systems. The design makes use of the method of stochastic averaging to reduce the dimension of the state space and to derive the Ito stochastic differential equation for the response amplitude process. The moment equation of the amplitude process closed by the Rayleigh approximation is used as a means to characterize the transient performance of the feedback control. The steady state and transient response of the amplitude process are used as the design criteria for choosing the feedback control gains. Numerical examples are studied to demonstrate the performance of the control
steady state;rayleigh approximation;stochastic averaging;transient response;random excitations;ito stochastic differential equation;feedback control;nonlinear systems;feedback stochastic controls
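The object the design works with is the averaged amplitude process: stochastic averaging reduces the full state to a scalar amplitude A(t) obeying an Ito equation of the generic form

$$ \mathrm{d}A = m(A)\,\mathrm{d}t + \sigma(A)\,\mathrm{d}B(t), $$

where B(t) is a standard Brownian motion and the drift m and diffusion σ depend on the system and the chosen feedback gains (their specific forms are not given in this abstract); the moment equations of A, closed via the Rayleigh approximation, then serve as the tuning criteria.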
train_1038
The analysis and control of longitudinal vibrations from wave viewpoint
The analysis and control of longitudinal vibrations in a rod are synthesized from a feedback wave viewpoint. Both collocated and noncollocated feedback wave control strategies are explored. The control design is based on the local properties of wave transmission and reflection in the vicinity of the area where the control force is applied, hence no complex closed form solution is involved. The controller is designed to achieve various goals, such as absorbing the incoming vibration energy, creating a vibration free zone and eliminating standing waves in the structure. The findings appear to be very useful in practice due to the simplicity in the implementation of the controllers
control force;feedback waves;vibration free zone;vibration energy;control design;collocated feedback wave control;noncollocated feedback wave control;standing waves;complex closed form solution;longitudinal vibration control;wave transmission;wave reflection
train_1039
Design of an adaptive vibration absorber to reduce electrical transformer structural vibration
This paper considers the design of a vibration absorber to reduce structural vibration at multiple frequencies, with an enlarged bandwidth control at these target frequencies. While the basic absorber is a passive device, a control system has been added to facilitate tuning, effectively giving the combination of a passive and active device, which leads to far greater stability and robustness. Experimental results demonstrating the effectiveness of the absorber are also described
bandwidth control;adaptive vibration absorber;structural vibration;electrical transformer
train_1040
CRONE control: principles and extension to time-variant plants with asymptotically constant coefficients
The principles of CRONE control, a frequency-domain robust control design methodology based on fractional differentiation, are presented. Continuous time-variant plants with asymptotically constant coefficients are analysed in the frequency domain, through their representation using time-variant frequency responses. A stability theorem for feedback systems including time-variant plants with asymptotically constant coefficients is proposed. Finally, CRONE control is extended to robust control of these plants
feedback systems;frequency-domain robust control design;crone control;stability theorem;asymptotically constant coefficients;time-variant frequency responses;automatic control;time-variant plants;robust control;fractional differentiation
train_1041
Fractional differentiation in passive vibration control
From a single-degree-of-freedom model used to illustrate the concept of vibration isolation, a method to transform the design for a suspension into a design for a robust controller is presented. Fractional differentiation is used to model the viscoelastic behaviour of the suspension. The use of fractional differentiation not only permits optimisation of just four suspension parameters, showing the 'compactness' of the fractional derivative operator, but also leads to robustness of the suspension's performance to uncertainty of the sprung mass. As an example, an engine suspension is studied
vibration isolation;sprung mass;engine suspension;suspension;passive vibration control;robust controller;viscoelastic behaviour;fractional differentiation
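For readers wanting to experiment, the fractional derivative itself is easy to evaluate numerically with the Grünwald-Letnikov discretization (a standard definition, offered here as a generic aid rather than the paper's own method): D^α f(t) ≈ h^{-α} Σ_j c_j f(t − jh), with c_j the alternating binomial coefficients.

```python
import numpy as np

def gl_fracdiff(f, alpha, h):
    """Grunwald-Letnikov fractional derivative of a sampled signal f (step h)."""
    n = len(f)
    c = np.empty(n)
    c[0] = 1.0
    for j in range(1, n):                    # c_j = (-1)^j * binom(alpha, j)
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    out = np.empty(n)
    for k in range(n):
        out[k] = c[:k + 1] @ f[k::-1] / h ** alpha
    return out

# sanity check: the half-derivative of f(t) = t is 2*sqrt(t/pi)
t = np.linspace(0, 1, 501)
d = gl_fracdiff(t, 0.5, t[1] - t[0])
print(d[-1], 2 * np.sqrt(1 / np.pi))         # both near 1.128
```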
train_1042
Chaotic phenomena and fractional-order dynamics in the trajectory control of redundant manipulators
Redundant manipulators have some advantages when compared with classical arms because they allow trajectory optimization, both in free space and in the presence of obstacles, and the resolution of singularities. For this type of arm the proposed kinematic control algorithms adopt generalized inverse matrices but, in general, the corresponding trajectory planning schemes show important limitations. Motivated by these problems this paper studies the chaos revealed by the pseudoinverse-based trajectory planning algorithms, using the theory of fractional calculus
trajectory planning schemes;classical arms;kinematic control algorithms;trajectory control;generalized inverse matrices;fractional-order dynamics;chaotic phenomena;redundant manipulators;fractional calculus;trajectory optimization
train_1043
Fractional motion control: application to an XY cutting table
In path tracking design, the dynamics of actuators must be taken into account in order to reduce overshoots appearing for small displacements. A new approach to path tracking using fractional differentiation is proposed with its application on an XY cutting table. It permits the generation of an optimal movement reference-input leading to a minimum path completion time, taking into account maximum velocity, acceleration and torque, as well as the bandwidth of the closed-loop system. Fractional differentiation is used here through a Davidson-Cole filter. A methodology aiming at improving the accuracy, especially on checkpoints, is presented. The reference-input obtained is compared with a spline function. Both are applied to an XY cutting table model and actuator outputs compared
xy cutting table;closed-loop system;davidson-cole filter;minimum path completion time;spline function;optimization;actuators;fractional motion control;path tracking design;fractional differentiation
train_1044
Analogue realizations of fractional-order controllers
An approach to the design of analogue circuits, implementing fractional-order controllers, is presented. The suggested approach is based on the use of continued fraction expansions; in the case of negative coefficients in a continued fraction expansion, the use of negative impedance converters is proposed. Several possible methods for obtaining suitable rational approximations and continued fraction expansions are discussed. An example of realization of a fractional-order I^λ controller is presented and illustrated by obtained measurements. The suggested approach can be used for the control of very fast processes, where the use of digital controllers is difficult or impossible
rational approximations;negative impedance converters;negative coefficients;fractional-order controllers;continued fraction expansions;analogue realizations;fast processes;fractional integration;digital controllers;fraction expansion;fractional differentiation
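Schematically (generic coefficients a_i, b_i; the actual values depend on the expansion chosen), a truncated continued fraction turns the irrational immittance s^λ into a rational one,

```latex
\[
  s^{\lambda} \;\approx\; b_0 + \cfrac{a_1}{b_1 s + \cfrac{a_2}{b_2 + \cfrac{a_3}{b_3 s + \dotsb}}},
\]
```

which maps level by level onto a Cauer-type ladder network of passive elements; wherever an a_i or b_i comes out negative, a negative impedance converter is substituted for the passive element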
train_1045
Using fractional order adjustment rules and fractional order reference models in model-reference adaptive control
This paper investigates the use of Fractional Order Calculus (FOC) in conventional Model Reference Adaptive Control (MRAC) systems. Two modifications to the conventional MRAC are presented, namely the use of a fractional order parameter adjustment rule and the employment of a fractional order reference model. Through examples, the benefits of FOC are illustrated, together with some remarks for further research
mrac;fractional order reference models;fractional order adjustment rules;model-reference adaptive control;foc;fractional calculus
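The first modification can be sketched on the classical MIT rule (gamma is the adaptation gain, e the model-following error, theta the adjustable parameter): the integer-order update is replaced by a fractional-order one,

```latex
\[
  \frac{d\theta}{dt} = -\gamma\, e\,\frac{\partial e}{\partial \theta}
  \quad\longrightarrow\quad
  D^{\alpha}\theta = -\gamma\, e\,\frac{\partial e}{\partial \theta},
  \qquad \alpha \notin \mathbb{N},
\]
```

while the second replaces the integer-order reference model by one whose dynamics are governed by a fractional-order differential equation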
train_1046
A suggestion of fractional-order controller for flexible spacecraft attitude control
A controller design method for flexible spacecraft attitude control is proposed. The system is first described by a partial differential equation with internal damping. Then the frequency response is analyzed, and the three basic characteristics of the flexible system, namely, average function, lower bound and upper bound, are defined. On this basis, a fractional-order controller is proposed, which functions as phase stabilization control at lower frequencies and smoothly transitions to amplitude stabilization at higher frequencies by proper amplitude attenuation. It is shown that the equivalent damping ratio increases in proportion to the square of frequency
frequency response;fractional-order controller;partial differential equation;amplitude stabilization;internal damping;flexible spacecraft attitude control;phase stabilization control;damping ratio
train_1047
Dynamics and control of initialized fractional-order systems
Due to the importance of historical effects in fractional-order systems, this paper presents a general fractional-order system and control theory that includes the time-varying initialization response. Previous studies have not properly accounted for these historical effects. The initialization response, along with the forced response, for fractional-order systems is determined. The scalar fractional-order impulse response is determined, and is a generalization of the exponential function. Stability properties of fractional-order systems are presented in the complex w-plane, which is a transformation of the s-plane. Time responses are discussed with respect to pole positions in the complex w-plane and frequency response behavior is included. A fractional-order vector space representation, which is a generalization of the state space concept, is presented including the initialization response. Control methods for vector representations of initialized fractional-order systems are shown. Finally, the fractional-order differintegral is generalized to continuous order-distributions which have the possibility of including all fractional orders in a transfer function
forced response;vector space representation;impulse response;exponential function;fractional-order differintegral;dynamics;control;state space concept;initialization response;transfer function;initialized fractional-order systems
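A schematic of the central object (after the Lorenzo-Hartley initialization formalism; psi is the initialization function carrying the history of the variable prior to the starting time):

```latex
\[
  \mathcal{D}^{q} f(t) \;=\; {}_{0}d^{\,q}_{t} f(t) \;+\; \psi(f, q, t),
\]
```

where the first term is the uninitialized differintegral and psi generates the initialization response; setting psi = 0, as earlier studies implicitly did, is what discards the historical effects accounted for here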
train_1048
Parallel and distributed Haskells
Parallel and distributed languages specify computations on multiple processors and have a computation language to describe the algorithm, i.e. what to compute, and a coordination language to describe how to organise the computations across the processors. Haskell has been used as the computation language for a wide variety of parallel and distributed languages, and this paper is a comprehensive survey of implemented languages. It outlines parallel and distributed language concepts and classifies Haskell extensions using them. Similar example programs are used to illustrate and contrast the coordination languages, and the comparison is facilitated by the common computation language. A lazy language is not an obvious choice for parallel or distributed computation, and we address the question of why Haskell is a common functional computation language
parallel languages;parallel haskell;functional programming;multiple processors;distributed languages;coordination language;lazy language;functional computation language;distributed haskell
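As a flavour of the coordination constructs surveyed, Glasgow Parallel Haskell adds just two combinators, par and pseq, on top of the common computation language (a minimal example; Control.Parallel is the modern module name):

```haskell
import Control.Parallel (par, pseq)

-- 'x `par` e' sparks x for possible parallel evaluation and returns e;
-- 'y `pseq` e' evaluates y before e, fixing the evaluation order.
nfib :: Int -> Int
nfib n
  | n <= 1    = 1
  | otherwise = x `par` (y `pseq` x + y + 1)
  where
    x = nfib (n - 1)
    y = nfib (n - 2)
```

Other coordination languages in the survey move this organisation into evaluation strategies, process abstractions or explicit message passing, while the computation language stays Haskell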
train_1049
A typed representation for HTML and XML documents in Haskell
We define a family of embedded domain specific languages for generating HTML and XML documents. Each language is implemented as a combinator library in Haskell. The generated HTML/XML documents are guaranteed to be well-formed. In addition, each library can guarantee that the generated documents are valid XML documents to a certain extent (for HTML only a weaker guarantee is possible). On top of the libraries, Haskell serves as a meta language to define parameterized documents, to map structured documents to HTML/XML, to define conditional content, or to define entire Web sites. The combinator libraries support element-transforming style, a programming style that allows programs to have a visual appearance similar to HTML/XML documents, without modifying the syntax of Haskell
combinator library;element-transforming style;html documents;functional programming;parameterized documents;software libraries;embedded domain specific languages;conditional content;web sites;typed representation;xml documents;haskell;meta language;syntax
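A toy version of the idea (hypothetical names; the actual libraries are richer and push validity guarantees into the types): because documents are built only from combinators, mismatched tags are unrepresentable and the output is well-formed by construction.

```haskell
-- A document is either text or a properly nested element.
data Doc = Text String | Elem String [Doc]

-- Combinators mirror the visual structure of the generated markup
-- (element-transforming style).
html, body_, h1 :: [Doc] -> Doc
html  = Elem "html"
body_ = Elem "body"
h1    = Elem "h1"

render :: Doc -> String
render (Text s)    = s
render (Elem t ds) = "<" ++ t ++ ">" ++ concatMap render ds ++ "</" ++ t ++ ">"

page :: Doc
page = html [ body_ [ h1 [ Text "Hello" ] ] ]
```

Validity (which children an element may contain) is then enforced by giving the combinators more precise types than the single Doc used here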
train_105
Greenberger-Horne-Zeilinger paradoxes for many qubits
We construct Greenberger-Horne-Zeilinger (GHZ) contradictions for three or more parties sharing an entangled state, the dimension of each subsystem being an even integer d. The simplest example that goes beyond the standard GHZ paradox (three qubits) involves five ququats (d = 4). We then examine the criteria that a GHZ paradox must satisfy in order to be genuinely M-partite and d-dimensional
entangled state;many qubits;greenberger-horne-zeilinger paradoxes;ghz paradox;ghz contradictions
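For orientation, the standard three-qubit GHZ paradox that the construction generalizes: the state (|000> + |111>)/sqrt(2) is a simultaneous eigenstate of four commuting Pauli products,

```latex
\[
  X \otimes X \otimes X = +1, \qquad
  X \otimes Y \otimes Y \;=\; Y \otimes X \otimes Y \;=\; Y \otimes Y \otimes X \;=\; -1 .
\]
```

Any assignment of predetermined local values +-1 forces the product of the last three observables to equal the first, hence +1, while quantum mechanics gives (-1)^3 = -1: a contradiction without inequalities. The paper's criteria ask when an analogous construction is genuinely M-partite and d-dimensional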
train_1050
Secrets of the Glasgow Haskell compiler inliner
Higher-order languages such as Haskell encourage the programmer to build abstractions by composing functions. A good compiler must inline many of these calls to recover an efficiently executable program. In principle, inlining is dead simple: just replace the call of a function by an instance of its body. But any compiler-writer will tell you that inlining is a black art, full of delicate compromises that work together to give good performance without unnecessary code bloat. The purpose of this paper is, therefore, to articulate the key lessons we learned from a full-scale "production" inliner, the one used in the Glasgow Haskell compiler. We focus mainly on the algorithmic aspects, but we also provide some indicative measurements to substantiate the importance of various aspects of the inliner
glasgow haskell compiler inliner;performance;functional programming;executable program;higher-order languages;algorithmic aspects;abstractions;functional language;optimising compiler
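The transformation itself is two lines (illustrative; the paper is about when and whether to apply it, not how):

```haskell
-- Before: a call to a small function.
square :: Int -> Int
square x = x * x

before :: Int -> Int
before y = square (y + 1)

-- After: the call is replaced by an instance of the body. The argument
-- is bound with a let so that work is not duplicated -- exactly the
-- kind of compromise the inliner has to weigh against code bloat.
after :: Int -> Int
after y = let x = y + 1 in x * x
```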
train_1051
Faking it: simulating dependent types in Haskell
Dependent types reflect the fact that validity of data is often a relative notion by allowing prior data to affect the types of subsequent data. Not only does this make for a precise type system, but also a highly generic one: both the type and the program for each instance of a family of operations can be computed from the data which codes for that instance. Recent experimental extensions to the Haskell type class mechanism give us strong tools to relativize types to other types. We may simulate some aspects of dependent typing by making counterfeit type-level copies of data, with type constructors simulating data constructors and type classes simulating datatypes. This paper gives examples of the technique and discusses its potential
counterfeit type-level copies;dependent types;type class mechanism;dependent typing;functional programming;precise type system;datatypes;data validity;haskell;data constructors;type constructors
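A small example in the paper's spirit: type-level copies of the natural numbers, with type constructors simulating data constructors and a type class simulating the datatype.

```haskell
-- Type-level naturals: the types Zero, Succ Zero, ... code for numbers.
data Zero   = Zero
data Succ n = Succ n

-- The class plays the role of the datatype of naturals, letting
-- term-level behaviour be computed from the type-level number.
class Nat n where
  toInt :: n -> Int

instance Nat Zero where
  toInt _ = 0

instance Nat n => Nat (Succ n) where
  toInt (Succ n) = 1 + toInt n

three :: Succ (Succ (Succ Zero))
three = Succ (Succ (Succ Zero))   -- toInt three == 3
```

Indexing datatypes (say, vectors) by such type-level numbers then lets the type checker track sizes, faking the dependent types the title refers to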
train_1052
Developing a high-performance web server in Concurrent Haskell
Server applications, and in particular network-based server applications, place a unique combination of demands on a programming language: lightweight concurrency, high I/O throughput, and fault tolerance are all important. This paper describes a prototype Web server written in Concurrent Haskell (with extensions), and presents two useful results: firstly, a conforming server could be written with minimal effort, leading to an implementation in less than 1500 lines of code, and secondly the naive implementation produced reasonable performance. Furthermore, making minor modifications to a few time-critical components improved performance to a level acceptable for anything but the most heavily loaded Web servers
high i/o throughput;network-based server applications;high-performance web server;conforming server;concurrent haskell;fault tolerance;time-critical components;lightweight concurrency
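The concurrency pattern at the heart of such a server is compact (a skeleton using the network package's Network.Socket API; HTTP parsing, timeouts and error handling omitted):

```haskell
import Control.Concurrent (forkIO)
import Control.Monad (forever)
import Network.Socket
import Network.Socket.ByteString (sendAll)
import qualified Data.ByteString.Char8 as C

main :: IO ()
main = do
  sock <- socket AF_INET Stream defaultProtocol
  bind sock (SockAddrInet 8080 0)          -- 0 = INADDR_ANY
  listen sock 128
  forever $ do
    (conn, _peer) <- accept sock
    -- One lightweight Haskell thread per connection.
    forkIO $ do
      sendAll conn (C.pack "HTTP/1.0 200 OK\r\n\r\nhello\r\n")
      close conn
```

Because forkIO threads are cheap, the one-thread-per-connection design scales without an explicit event loop, which is a large part of why a conforming server fits in so few lines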
train_1053
A static semantics for Haskell
This paper gives a static semantics for Haskell 98, a non-strict purely functional programming language. The semantics formally specifies nearly all the details of the Haskell 98 type system, including the resolution of overloading, kind inference (including defaulting) and polymorphic recursion, the only major omission being a proper treatment of ambiguous overloading and its resolution. Overloading is translated into explicit dictionary passing, as in all current implementations of Haskell. The target language of this translation is a variant of the Girard-Reynolds polymorphic lambda calculus featuring higher order polymorphism and explicit type abstraction and application in the term language. Translated programs can thus still be type checked, although the implicit version of this system is impredicative. A surprising result of this formalization effort is that the monomorphism restriction, when rendered in a system of inference rules, compromises the principal type property
inference rules;polymorphic lambda calculus;kind inference;type system;static semantics;type checking;higher order polymorphism;polymorphic recursion;monomorphism restriction;term language;explicit type abstraction;nonstrict purely functional programming language;explicit dictionary passing;overloading;formal specification;haskell 98
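The dictionary translation can be shown on a one-method class (illustrative; the formal semantics also covers superclasses, defaults and the monomorphism restriction):

```haskell
-- Source:   class MyEq a where eq :: a -> a -> Bool
--           member :: MyEq a => a -> [a] -> Bool
-- Target:   the class becomes a record of methods, the constraint an
--           explicit dictionary argument, and instances become values.
newtype MyEqD a = MyEqD { eq :: a -> a -> Bool }

member :: MyEqD a -> a -> [a] -> Bool
member d x = any (eq d x)

myEqDInt :: MyEqD Int
myEqDInt = MyEqD (==)
```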
train_1054
Choice preferences without inferences: subconscious priming of risk attitudes
We present a procedure for subconscious priming of risk attitudes. In Experiment 1, we were reliably able to induce risk-seeking or risk-averse preferences across a range of decision scenarios using this priming procedure. In Experiment 2, we showed that these priming effects can be reversed by drawing participants' attention to the priming event. Our results support claims that the formation of risk preferences can be based on preconscious processing, as for example postulated by the affective primacy hypothesis, rather than rely on deliberative mental operations, as posited by several current models of judgment and decision making
preconscious processing;risk-averse preferences;affective primacy hypothesis;deliberative mental operations;choice preferences;decision scenarios;risk attitudes;risk-seeking preferences;subconscious priming
train_1055
A re-examination of probability matching and rational choice
In a typical probability learning task participants are presented with a repeated choice between two response alternatives, one of which has a higher payoff probability than the other. Rational choice theory requires that participants should eventually allocate all their responses to the high-payoff alternative, but previous research has found that people fail to maximize their payoffs. Instead, it is commonly observed that people match their response probabilities to the payoff probabilities. We report three experiments on this choice anomaly using a simple probability learning task in which participants were provided with (i) large financial incentives, (ii) meaningful and regular feedback, and (iii) extensive training. In each experiment large proportions of participants adopted the optimal response strategy and all three of the factors mentioned above contributed to this. The results are supportive of rational choice theory
feedback;meaningful regular feedback;choice anomaly;rationality;optimal response strategy;rational choice theory;response probabilities;probability matching;payoff probability;extensive training;large financial incentives;probability learning task
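The stakes are easy to quantify (a worked example, not the paper's data): with payoff probability p = 0.7 for the better alternative, maximizing earns on 70% of trials, while matching one's response probabilities to the payoff probabilities earns with probability

```latex
\[
  p^{2} + (1-p)^{2} \;=\; 0.7^{2} + 0.3^{2} \;=\; 0.58 ,
\]
```

so matching leaves a twelve-point gap that rational choice theory says participants should close, as large proportions of them did here under incentives, feedback and training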
train_1056
Eliminating recency with self-review: the case of auditors' 'going concern' judgments
This paper examines the use of self-review to debias recency. Recency is found in the 'going concern' judgments of staff auditors, but is successfully eliminated by the auditor's use of a simple self-review technique that would be extremely easy to implement in audit practice. Auditors who self-review are also less inclined to make audit report choices that are inconsistent with their going concern judgments. These results are important because the judgments of staff auditors often determine the type and extent of documentation in audit workpapers and serve as preliminary inputs for senior auditors' judgments and choices. If staff auditors' judgments are affected by recency, the impact of this bias may be impounded in the ultimate judgments and choices of senior auditors. Since biased judgments can expose auditors to significant costs involving extended audit procedures, legal liability and diminished reputation, simple debiasing techniques that reduce this exposure are valuable. The paper also explores some future research needs and other important issues concerning judgment debiasing in applied professional settings
judgment debiasing;recency debiasing;audit report choices;senior auditors;self-review;applied professional settings;documentation;accountability;legal liability;diminished reputation;audit workpapers;staff auditors;extended audit procedures;probability judgments;auditor going concern judgments
train_1057
Acceptance of a price discount: the role of the semantic relatedness between purchases and the comparative price format
Two studies are reported where people are asked to accept or not a price reduction on a target product. In the high (low) relative saving version, the regular price of the target product is low (high). In both versions, the absolute value of the price reduction is the same, as is the total of the regular prices of the planned purchases. As first reported by Tversky and Kahneman (1981), findings show that the majority of people accept the price discount in the high relative saving version whereas the minority do so in the low one. In Study 1, findings show that the previous preference reversal disappears when planned purchases are strongly related. Also, a previously unreported preference reversal is found: the majority of people accept the price discount when the products are weakly related whereas the minority accept when the products are strongly related. In Study 2, findings show that the classic preference reversal disappears as a function of the comparative price format. Also, another previously unreported preference reversal is found: when the offered price reduction relates to a low-priced product, people are more inclined to accept it with a control than with a minimal comparative price format. The findings reported in Studies 1 and 2 are interpreted in terms of mental accounting shifts
high relative saving version;planned purchases;semantic relatedness hypothesis;mental accounting shifts;low-priced product;preference reversal;comparative price format;low relative saving version;price discount acceptance
train_1058
Bigger is better: the influence of physical size on aesthetic preference judgments
The hypothesis that the physical size of an object can influence aesthetic preferences was investigated. In a series of four experiments, participants were presented with pairs of abstract stimuli and asked to indicate which member of each pair they preferred. A preference for larger stimuli was found on the majority of trials using various types of stimuli, stimuli of various sizes, and with both adult and 3-year-old participants. This preference pattern was disrupted only when participants had both stimuli that provided a readily accessible alternative source of preference-evoking information and sufficient attentional resources to make their preference judgments
decision making;attentional resources;adult participants;preference formation;abstract stimuli;physical size influence;preference-evoking information;preference pattern;aesthetic preference judgments;child participants;judgment cues
train_1059
Mustering motivation to enact decisions: how decision process characteristics influence goal realization
Decision scientists tend to focus mainly on decision antecedents, studying how people make decisions. Action psychologists, in contrast, study post-decision issues, investigating how decisions, once formed, are maintained, protected, and enacted. Through the research presented here, we seek to bridge these two disciplines, proposing that the process by which decisions are reached motivates subsequent pursuit and benefits eventual realization. We identify three characteristics of the decision process (DP) as having motivation-mustering potential: DP effort investment, DP importance, and DP confidence. Through two field studies tracking participants' decision processes, pursuit and realization, we find that after controlling for the influence of the motivational mechanisms of goal intention and implementation intention, the three decision process characteristics significantly influence the successful enactment of the chosen decision directly. The theoretical and practical implications of these findings are considered and future research opportunities are identified
post-decision issues;decision process characteristics;research opportunities;motivation-mustering potential;goal realization;goal intention;decision enactment;decision process importance;decision process confidence;decision scientists;decision process investment;motivation;action psychologists
train_106
Quantum Zeno subspaces
The quantum Zeno effect is recast in terms of an adiabatic theorem when the measurement is described as the dynamical coupling to another quantum system that plays the role of apparatus. A few significant examples are proposed and their practical relevance discussed. We also focus on decoherence-free subspaces
dynamical coupling;adiabatic theorem;quantum zeno subspaces;measurement;decoherence-free subspaces
train_1060
Variety identification of wheat using mass spectrometry with neural networks and the influence of mass spectra processing prior to neural network analysis
The performance of matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry with neural networks in wheat variety classification is further evaluated. Two principal issues were studied: (a) the number of varieties that could be classified correctly; and (b) various means of preprocessing mass spectrometric data. The number of wheat varieties tested was increased from 10 to 30. The main pre-processing method investigated was based on Gaussian smoothing of the spectra, but other methods based on normalisation procedures and multiplicative scatter correction of data were also used. With the final method, it was possible to classify 30 wheat varieties with 87% correctly classified mass spectra and a correlation coefficient of 0.90
correctly classified mass spectra;normalisation procedures;correlation coefficient;mass spectrometric data;mass spectra processing;multiplicative scatter correction;gaussian smoothing;neural network analysis;wheat variety classification;pre-processing method;matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry;variety identification
train_1061
Abacus, EFI and anti-virus
The Extensible Firmware Interface (EFI) standard emerged as a logical step to provide flexibility and extensibility to boot sequence processes, enabling the complete abstraction of a system's BIOS interface from the system's hardware. In doing so, it provides the means of standardizing the boot-up sequence and extending device drivers' and boot time applications' portability to non-PC-AT-based architectures, including embedded systems like Internet appliances, TV Internet set-top boxes and 64-bit Itanium platforms
embedded systems;anti-virus;extensible firmware interface standard
train_1062
Fidelity of quantum teleportation through noisy channels
We investigate quantum teleportation through noisy quantum channels by solving analytically and numerically a master equation in the Lindblad form. We calculate the fidelity as a function of decoherence rates and angles of a state to be teleported. It is found that the average fidelity and the range of states to be accurately teleported depend on types of noises acting on quantum channels. If the quantum channels are subject to isotropic noise, the average fidelity decays to 1/2, which is smaller than the best possible value of 2/3 obtained only by the classical communication. On the other hand, if the noisy quantum channel is modeled by a single Lindblad operator, the average fidelity is always greater than 2/3
analytical solution;noisy quantum channels;recipient;classical communication;dual classical channels;quantum channels;alice;numerical solution;bob;quantum teleportation;fidelity;isotropic noise;lindblad operator;eigenstate;sender
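The master equation solved is of the standard Lindblad form (written generically; the L_k are the noise operators acting on the channel qubits):

```latex
\[
  \dot{\rho} \;=\; -\frac{i}{\hbar}\,[H,\rho]
  \;+\; \sum_{k}\Big( L_k\,\rho\,L_k^{\dagger}
  \;-\; \tfrac{1}{2}\,\big\{ L_k^{\dagger} L_k ,\, \rho \big\} \Big),
\]
```

and the fidelity is then averaged over input states; the 2/3 benchmark quoted is the best average fidelity achievable by measuring the state and communicating only classically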
train_1063
Operations that do not disturb partially known quantum states
Consider a situation in which a quantum system is secretly prepared in a state chosen from the known set of states. We present a principle that gives a definite distinction between the operations that preserve the states of the system and those that disturb the states. The principle is derived by alternately applying a fundamental property of classical signals and a fundamental property of quantum ones. The principle can be cast into a simple form by using a decomposition of the relevant Hilbert space, which is uniquely determined by the set of possible states. The decomposition implies the classification of the degrees of freedom of the system into three parts depending on how they store the information on the initially chosen state: one storing it classically, one storing it nonclassically, and the other one storing no information. Then the principle states that the nonclassical part is inaccessible and the classical part is read-only if we are to preserve the state of the system. From this principle, many types of no-cloning, no-broadcasting, and no-imprinting conditions can easily be derived in general forms including mixed states. It also gives a unified view on how various schemes of quantum cryptography work. The principle helps one to derive optimum amount of resources (bits, qubits, and ebits) required in data compression or in quantum teleportation of mixed-state ensembles
quantum system;hilbert space;nonclassical part;degrees of freedom;partially known quantum states;quantum teleportation;secretly prepared quantum state;ebits;bits;quantum cryptography;classical signals;mixed-state ensembles;qubits
train_1064
Quantum-controlled measurement device for quantum-state discrimination
We propose a "programmable" quantum device that is able to perform a specific generalized measurement from a certain set of measurements depending on a quantum state of a "program register." In particular, we study a situation when the programmable measurement device serves for the unambiguous discrimination between nonorthogonal states. The particular pair of states that can be unambiguously discriminated is specified by the state of a program qubit. The probability of successful discrimination is not optimal for all admissible pairs. However, for some subsets it can be very close to the optimal value
quantum-state discrimination;quantum-controlled measurement device;quantum state;nonorthogonal states;programmable quantum device;program register;program qubit
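For context, unambiguous discrimination of two pure states with equal priors succeeds at best with the Ivanovic-Dieks-Peres probability

```latex
\[
  P_{\mathrm{succ}} \;=\; 1 - \big|\langle \psi_1 | \psi_2 \rangle\big| ,
\]
```

and the programmable device trades some of this optimum for the flexibility of selecting the pair to be discriminated via the state of the program qubit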
train_1065
Quantum universal variable-length source coding
We construct an optimal quantum universal variable-length code that achieves the admissible minimum rate, i.e., our code is used for any probability distribution of quantum states. Its probability of exceeding the admissible minimum rate exponentially goes to 0. Our code is optimal in the sense of its exponent. In addition, its average error asymptotically tends to 0
quantum information theory;quantum universal variable-length source coding;admissible minimum rate;optimal code;quantum states;optimal quantum universal variable-length code;probability distribution;average error;exponent;quantum cryptography
train_1066
Application of artificial intelligence to search ground-state geometry of clusters
We introduce a global optimization procedure, the neural-assisted genetic algorithm (NAGA). It combines the power of an artificial neural network (ANN) with the versatility of the genetic algorithm. This method is suitable to solve optimization problems that depend on some kind of heuristics to limit the search space. If a reasonable amount of data is available, the ANN can "understand" the problem and provide the genetic algorithm with a selected population of elements that will speed up the search for the optimum solution. We tested the method in a search for the ground-state geometry of silicon clusters. We trained the ANN with information about the geometry and energetics of small silicon clusters. Next, the ANN learned how to restrict the configurational space for larger silicon clusters. For Si_10 and Si_20, we noticed that the NAGA is at least three times faster than the "pure" genetic algorithm. As the size of the cluster increases, it is expected that the gain in terms of time will increase as well
neural-assisted genetic algorithm;si_20;cluster size;ground-state geometry;atomic clusters;artificial intelligence;optimum solution;si_10;artificial neural network;population;silicon clusters;global optimization procedure
train_1067
Quantum-information processing by nuclear magnetic resonance: Experimental implementation of half-adder and subtractor operations using an oriented spin-7/2 system
The advantages of using quantum systems for performing many computational tasks have already been established. Several quantum algorithms have been developed which exploit the inherent property of quantum systems such as superposition of states and entanglement for efficiently performing certain tasks. The experimental implementation has been achieved on many quantum systems, of which nuclear magnetic resonance has shown the largest progress in terms of number of qubits. This paper describes the use of a spin-7/2 as a three-qubit system and experimentally implements the half-adder and subtractor operations. The required qubits are realized by partially orienting ^133Cs nuclei in a liquid-crystalline medium, yielding a quadrupolar split well-resolved septet. Another feature of this paper is the proposal that the labeling of quantum states of the system can be suitably chosen to increase the efficiency of a computational task
quadrupolar split well-resolved septet;^133cs;quantum-information processing;computational tasks;quantum systems;quantum states;^133cs nuclei;nuclear magnetic resonance;state superposition;computational task;subtractor operations;half-adder operations;quantum algorithms;entanglement;oriented spin-7/2 system;three-qubit system;liquid-crystalline medium;qubits
train_1068
Quantum phase gate for photonic qubits using only beam splitters and postselection
We show that a beam splitter of reflectivity one-third can be used to realize a quantum phase gate operation if only the outputs conserving the number of photons on each side are postselected
postselection;photonic qubits;quantum computation;quantum phase gate;photon number conservation;postselected quantum phase gate;quantum phase gate operation;multiqubit networks;postselected quantum gate;outputs;postselected photon number conserving outputs;quantum information processing;optical quantum gate operations;polarization beam splitters;reflectivity
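The quantitative core (standard beam-splitter algebra; eta denotes the intensity reflectivity, and sign conventions vary): with one photon incident on each side, the postselected one-photon-per-side output interferes the both-reflect and both-transmit paths, giving an amplitude proportional to 2 eta - 1,

```latex
\[
  |1,1\rangle \;\longrightarrow\; (2\eta - 1)\,|1,1\rangle + \dots,
  \qquad \eta = \tfrac{1}{3} \;\Rightarrow\; -\tfrac{1}{3}\,|1,1\rangle + \dots ,
\]
```

i.e. a sign flip on the two-photon component relative to the single-photon amplitudes; in the related linear-optics schemes, balancing the one-photon amplitudes to 1/sqrt(3) then yields a conditional pi phase with overall success probability 1/9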
train_1069
Entangling atoms in bad cavities
We propose a method to produce entangled spin squeezed states of a large number of atoms inside an optical cavity. By illuminating the atoms with bichromatic light, the coupling to the cavity induces pairwise exchange of excitations which entangles the atoms. Unlike most proposals for entangling atoms by cavity QED, our proposal does not require the strong coupling regime g^2/(kappa Gamma) >> 1, where g is the atom cavity coupling strength, kappa is the cavity decay rate, and Gamma is the decay rate of the atoms. In this work the important parameter is Ng^2/(kappa Gamma), where N is the number of atoms, and our proposal permits the production of entanglement in bad cavities as long as they contain a large number of atoms
bichromatic light illumination;strong coupling regime;pairwise exchange;atom cavity coupling strength;excitations;coupling;entangled spin squeezed states;optical cavity;cavity qed;bad cavities;atom entanglement;cavity decay rate
train_107
Deterministic single-photon source for distributed quantum networking
A sequence of single photons is emitted on demand from a single three-level atom strongly coupled to a high-finesse optical cavity. The photons are generated by an adiabatically driven stimulated Raman transition between two atomic ground states, with the vacuum field of the cavity stimulating one branch of the transition, and laser pulses deterministically driving the other branch. This process is unitary and therefore intrinsically reversible, which is essential for quantum communication and networking, and the photons should be appropriate for all-optical quantum information processing
adiabatically driven stimulated raman transition;deterministic single-photon source;all-optical quantum information processing;single three-level atom;quantum communication;high-finesse optical cavity;distributed quantum networking;vacuum field
train_1070
Universal simulation of Hamiltonian dynamics for quantum systems with finite-dimensional state spaces
What interactions are sufficient to simulate arbitrary quantum dynamics in a composite quantum system? Dodd et al. [Phys. Rev. A 65, 040301(R) (2002)] provided a partial solution to this problem in the form of an efficient algorithm to simulate any desired two-body Hamiltonian evolution using any fixed two-body entangling N-qubit Hamiltonian, and local unitaries. We extend this result to the case where the component systems are qudits, that is, have D dimensions. As a consequence we explain how universal quantum computation can be performed with any fixed two-body entangling N-qudit Hamiltonian, and local unitaries
d-dimensional component systems;two-body hamiltonian evolution;composite quantum system;quantum dynamics;hamiltonian dynamics;fixed two-body entangling n-qubit hamiltonian;quantum systems;universal quantum computation;local unitaries;fixed two-body entangling n-qudit hamiltonian;universal simulation;finite- dimensional state spaces
train_1071
Dense coding in entangled states
We consider the dense coding of entangled qubits shared between two parties, Alice and Bob. The efficiency of classical information gain through quantum entangled qubits is also considered for the cases of pairwise entangled qubits and maximally entangled qubits. We conclude that using the pairwise entangled qubits can be more efficient when two parties communicate, whereas using the maximally entangled qubits can be more efficient when N parties communicate
entangled states;classical information gain efficiency;dense coding;pairwise entangled qubits;alice;bob;quantum communication;maximally entangled qubits;quantum information processing
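The baseline two-party protocol (standard dense coding): for a shared |Phi+> = (|00> + |11>)/sqrt(2), Alice's four local Pauli operations map the pair onto the four orthogonal Bell states,

```latex
\[
  \{\, I,\; X,\; Z,\; XZ \,\} \otimes I \;:\;
  |\Phi^{+}\rangle \;\mapsto\;
  \{\, |\Phi^{+}\rangle,\; |\Psi^{+}\rangle,\; |\Phi^{-}\rangle,\; |\Psi^{-}\rangle \,\},
\]
```

so sending her one qubit lets Bob's Bell measurement recover two classical bits; the question here is how this gain scales when the sharing is pairwise rather than maximal across N parties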
train_1072
Quantum-state information retrieval in a Rydberg-atom data register
We analyze a quantum search protocol to retrieve phase information from a Rydberg-atom data register using a subpicosecond half-cycle electric field pulse. Calculations show that the half-cycle pulse can perform the phase retrieval only within a range of peak field values. By varying the phases of the constituent orbitals of the Rydberg wave packet register, we demonstrate coherent control of the phase retrieval process. By specially programming the phases of the orbitals comprising the initial wave packet, we show that it is possible to use the search method as a way to synthesize single energy eigenstates
half-cycle pulse;coherent control;phase retrieval;subpicosecond half-cycle electric field pulse;phase information;constituent orbitals;rydberg-atom data register;rydberg wave packet register;initial wave packet;quantum search protocol;quantum-state information retrieval;peak field values;search method;single energy eigenstates
train_1073
Quantum retrodiction in open systems
Quantum retrodiction involves finding the probabilities for various preparation events given a measurement event. This theory has been studied for some time but mainly as an interesting concept associated with time asymmetry in quantum mechanics. Recent interest in quantum communications and cryptography, however, has provided retrodiction with a potential practical application. For this purpose quantum retrodiction in open systems should be more relevant than in closed systems isolated from the environment. In this paper we study retrodiction in open systems and develop a general master equation for the backward time evolution of the measured state, which can be used for calculating preparation probabilities. We solve the master equation, by way of example, for the driven two-level atom coupled to the electromagnetic field
time asymmetry;probabilities;quantum retrodiction;measurement event;open systems;retrodictive master equation;cryptography;driven two level atom-electromagnetic field coupling;preparation events;quantum communications;quantum mechanics;preparation probabilities;backward time evolution
train_1074
Inhibiting decoherence via ancilla processes
General conditions are derived for preventing the decoherence of a single two-state quantum system (qubit) in a thermal bath. The employed auxiliary systems required for this purpose are merely assumed to be weak for the general condition while various examples such as extra qubits and extra classical fields are studied for applications in quantum information processing. The general condition is confirmed by well known approaches toward inhibiting decoherence. An approach to decoherence-free quantum memories and quantum operations is presented by placing the qubit into the center of a sphere with extra qubits on its surface
general condition;single two-state quantum system;quantum operations;decoherence-free quantum memories;extra classical fields;qubit;extra qubits;ancilla processes;thermal bath;decoherence;auxiliary systems;quantum information processing;sphere surface;decoherence inhibition
train_1075
Numerical simulation of information recovery in quantum computers
Decoherence is the main problem to be solved before quantum computers can be built. To control decoherence, it is possible to use error correction methods, but these methods are themselves noisy quantum computation processes. In this work, we study the ability of Steane's and Shor's fault-tolerant recovery methods, as well as a modification of Steane's ancilla network, to correct errors in qubits. We test a way to correctly measure the ancilla's fidelity for these methods, and establish the possibility of carrying out an effective error correction through a noisy quantum channel, even using noisy error correction methods
ancilla fidelity;numerical simulation;error correction methods;quantum computers;noisy quantum computation processes;fault-tolerant recovering methods;noisy quantum channel;noisy error correction methods;decoherence control;ancilla network;information recovery;qubits
train_1076
Delayed-choice entanglement swapping with vacuum-one-photon quantum states
We report the experimental realization of a recently discovered quantum-information protocol by Peres implying an apparent nonlocal quantum mechanical retrodiction effect. The demonstration is carried out by a quantum optical method by which each singlet entangled state is physically implemented by a two-dimensional subspace of Fock states of a mode of the electromagnetic field, specifically the space spanned by the vacuum and the one-photon state, along lines suggested recently by E. Knill et al. [Nature (London) 409, 46 (2001)] and by M. Duan et al. [ibid. 414, 413 (2001)]
fock states;state entanglement;two-dimensional subspace;quantum optical method;one-photon state;quantum-information;singlet entangled state;delayed-choice entanglement;nonlocal quantum mechanical retrodiction effect;electromagnetic field mode;vacuum-one-photon quantum states;vacuum state
train_1077
Quantum learning and universal quantum matching machine
Suppose that three kinds of quantum systems are given in some unknown states |f>^(⊗N), |g_1>^(⊗K), and |g_2>^(⊗K), and we want to decide which template state, |g_1> or |g_2>, each representing the feature of the pattern class C_1 or C_2, respectively, is closest to the input feature state |f>. This is an extension of the pattern matching problem into the quantum domain. Assuming that these states are known a priori to belong to a certain parametric family of pure qubit systems, we derive two kinds of matching strategies. The first one is a semiclassical strategy that is obtained by the natural extension of conventional matching strategies and consists of a two-stage procedure: identification (estimation) of the unknown template states to design the classifier (learning process to train the classifier) and classification of the input system into the appropriate pattern class based on the estimated results. The other is a fully quantum strategy without any intermediate measurement, which we might call the universal quantum matching machine. We present the Bayes optimal solutions for both strategies in the case of K=1, showing that there certainly exists a fully quantum matching procedure that is strictly superior to the straightforward semiclassical extension of the conventional matching strategy based on the learning process
learning process;two-stage procedure;semiclassical strategy;quantum domain;quantum strategy;quantum learning;matching strategies;universal quantum matching machine;qubit systems;matching strategy;bayes optimal solutions;semiclassical extension;pattern matching problem;quantum matching procedure;pattern class
train_1078
Action aggregation and defuzzification in Mamdani-type fuzzy systems
Discusses the issues of action aggregation and defuzzification in Mamdani-type fuzzy systems. The paper highlights the shortcomings of defuzzification techniques associated with the customary interpretation of the sentence connective 'and' by means of the set union operation. These include loss of smoothness of the output characteristic and inaccurate mapping of the fuzzy response. The most appropriate procedure for aggregating the outputs of different fuzzy rules and converting them into crisp signals is then suggested. The advantages in terms of increased transparency and mapping accuracy of the fuzzy response are demonstrated
transparency;set union operation;crisp signals;mapping accuracy;sentence connective;mamdani-type fuzzy systems;fuzzy response;defuzzification;fuzzy rules;action aggregation
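For concreteness, the customary pipeline aggregates rule outputs with the union (max) operator and defuzzifies with the centroid (a standard formulation; the paper argues for replacing this aggregation step):

```latex
\[
  \mu_{\mathrm{agg}}(u) \;=\; \max_i \,\mu_i(u), \qquad
  u^{*} \;=\; \frac{\int u\,\mu_{\mathrm{agg}}(u)\,du}{\int \mu_{\mathrm{agg}}(u)\,du},
\]
```

and it is the max in the first step, as an interpretation of the sentence connective 'and', that loses smoothness and distorts the mapping when rule output sets overlap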
train_1079
A novel robot hand with embedded shape memory alloy actuators
Describes the development of an active robot hand, which allows smooth and lifelike motions for anthropomorphic grasping and fine manipulations. An active robot finger 10 mm in outer diameter with a shape memory alloy (SMA) wire actuator embedded in the finger with a constant distance from the geometric centre of the finger was designed and fabricated. The practical specifications of the SMA wire and the flexible rod were determined on the basis of a series of formulae. The active finger consists of two bending parts, the SMA actuators and a connecting part. The mechanical properties of the bending part are investigated. The control system on the basis of resistance feedback is also presented. Finally, a robot hand with three fingers was designed and the grasping experiment was carried out to demonstrate its performance
lifelike motions;active finger;flexible rod;embedded shape memory alloy actuators;resistance feedback;anthropomorphic grasping;fine manipulations;active robot hand
train_108
Exploiting randomness in quantum information processing
We consider how randomness can be made to play a useful role in quantum information processing-in particular, for decoherence control and the implementation of quantum algorithms. For a two-level system in which the decoherence channel is non-dissipative, we show that decoherence suppression is possible if memory is present in the channel. Random switching between two potentially harmful noise sources can then provide a source of stochastic control. Such random switching can also be used in an advantageous way for the implementation of quantum algorithms
stochastic control;noise;randomness;two-level system;quantum algorithms;random switching;decoherence control;quantum information processing
train_1080
Car-caravan snaking. 2 Active caravan braking
For part 1, see ibid., p.707-22. Founded on the review and results of Part 1, Part 2 contains a description of the virtual design of an active braking system for caravans or other types of trailer, to suppress snaking vibrations, while being simple from a practical viewpoint. The design process and the design itself are explained. The performance is examined by simulations and it is concluded that the system is effective, robust and realizable with modest and available components
active caravan braking;virtual design;dynamics;car-caravan snaking;trailer;snaking vibrations suppression
train_1081
Stability of W-methods with applications to operator splitting and to geometric theory
We analyze the stability properties of W-methods applied to the parabolic initial value problem u' + Au = Bu. We work in an abstract Banach space setting, assuming that A is the generator of an analytic semigroup and that B is relatively bounded with respect to A. Since W-methods treat the term with A implicitly, whereas the term involving B is discretized in an explicit way, they can be regarded as splitting methods. As an application of our stability results, convergence for nonsmooth initial data is shown. Moreover, the layout of a geometric theory for discretizations of semilinear parabolic problems u' + Au = f(u) is presented
nonsmooth initial data;geometric theory;analytic semigroup;linearly implicit runge-kutta methods;abstract banach space;w-methods stability;parabolic initial value problem;operator splitting
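The simplest representative of the class (a one-stage linearly implicit sketch with method parameter gamma): only the A-part enters the linear algebra, while the B-part is evaluated explicitly,

```latex
\[
  (I + \gamma h A)\,k_1 \;=\; -A u_n + B u_n, \qquad
  u_{n+1} \;=\; u_n + h\,k_1 ,
\]
```

so each step costs one linear solve with I + gamma*h*A, which is exactly why W-methods can be read as operator-splitting schemes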
train_1082
Numerical approximation of nonlinear BVPs by means of BVMs
Boundary Value Methods (BVMs) would seem to be suitable candidates for the solution of nonlinear Boundary Value Problems (BVPs). They have been successfully used for solving linear BVPs together with a mesh selection strategy based on the conditioning of the linear systems. Our aim is to extend this approach so as to use them for the numerical approximation of nonlinear problems. For this reason, we consider the quasi-linearization technique that is an application of the Newton method to the nonlinear differential equation. Consequently, each iteration requires the solution of a linear BVP. In order to guarantee the convergence to the solution of the continuous nonlinear problem, it is necessary to determine how accurately the linear BVPs must be solved. For this goal, suitable stopping criteria on the residual and on the error for each linear BVP are given. Numerical experiments on stiff problems give rather satisfactory results, showing that the experimental code, called TOM, that uses a class of BVMs and the quasi-linearization technique, may be competitive with well known solvers for BVPs
stopping criteria;mesh selection strategy;quasi-linearization technique;boundary value methods;bvms;newton method;stiff problems;nonlinear differential equation;numerical approximation;nonlinear boundary value problems
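The quasi-linearization step, written for a second-order problem y'' = f(x, y, y') (the standard Newton linearization; boundary conditions carry over unchanged):

```latex
\[
  y_{k+1}'' \;=\; f(x, y_k, y_k')
  \;+\; f_y(x, y_k, y_k')\,\big(y_{k+1} - y_k\big)
  \;+\; f_{y'}(x, y_k, y_k')\,\big(y_{k+1}' - y_k'\big),
\]
```

so each iterate is a linear BVP handled by the BVM, and the stopping criteria of the paper decide how accurately it must be solved for the outer Newton iteration to converge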
train_1083
Differential algebraic systems anew
It is proposed to formulate the leading term in differential algebraic systems more precisely. Low index linear systems with such properly stated leading terms are considered in detail. In particular, it is asked whether a numerical integration method applied to the original system reaches the inherent regular ODE without conservation, i.e., whether the discretization and the decoupling commute in some sense. In general one cannot expect this commutativity, so that additional difficulties like strong stepsize restrictions may arise. Moreover, abstract differential algebraic equations in infinite-dimensional Hilbert spaces are introduced, and the index notion is generalized to those equations. In particular, partial differential algebraic equations are considered in this abstract formulation
stepsize restrictions;numerical integration method;low index linear systems;abstract differential algebraic equations;commutativity;differential algebraic systems;inherent regular ode
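The equations in question are written with a properly stated leading term (schematically, in the linear case):

```latex
\[
  A(t)\,\big(D(t)\,x(t)\big)' \;+\; B(t)\,x(t) \;=\; q(t),
\]
```

so that exactly the components D(t)x(t) are differentiated; asking whether discretization and decoupling commute is then the question of whether a numerical method applied to this form reaches the inherent regular ODE without conservation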
train_1084
On quasi-linear PDAEs with convection: applications, indices, numerical solution
For a class of partial differential algebraic equations (PDAEs) of quasi-linear type which include nonlinear terms of convection type, the possibility of determining a time and a spatial index is considered. As a typical example we investigate an application from plasma physics. In particular, we discuss the numerical solution of initial boundary value problems by means of a corresponding finite difference splitting procedure, which is a modification of a well-known fractional step method coupled with a matrix factorization. The convergence of the numerical solution towards the exact solution of the corresponding initial boundary value problem is investigated. Some results of a numerical solution of the plasma PDAE are given
indices;initial boundary value problems;plasma physics;quasi-linear partial differential algebraic equations;numerical solution;finite difference splitting procedure;spatial index;convection;fractional step method;matrix factorization
train_1085
A variable-stepsize variable-order multistep method for the integration of perturbed linear problems
G. Scheifele (1971) wrote the solution of a perturbed oscillator as an expansion in terms of a new set of functions, which extends the monomials in the Taylor series of the solution. Recently, P. Martin and J.M. Ferrandiz (1997) constructed a multistep code based on the Scheifele technique, and it was generalized by D.J. Lopez and P. Martin (1998) for perturbed linear problems. However, these codes are constant-steplength methods, and efficient integrators must be able to change the steplength. In this paper we extend the ideas of F.T. Krogh (1974) from Adams methods to the algorithm proposed by Lopez and Martin, and we show the advantages of the new code on perturbed problems
multistep code;adams methods;perturbed linear problems integration;variable-stepsize variable-order multistep method;taylor series;constant steplength methods;monomials;perturbed oscillator
train_1086
Some recent advances in validated methods for IVPs for ODEs
Compared to standard numerical methods for initial value problems (IVPs) for ordinary differential equations (ODEs), validated methods (often called interval methods) for IVPs for ODEs have two important advantages: if they return a solution to a problem, then (1) the problem is guaranteed to have a unique solution, and (2) an enclosure of the true solution is produced. We present a brief overview of interval Taylor series (ITS) methods for IVPs for ODEs and discuss some recent advances in the theory of validated methods for IVPs for ODEs. In particular, we discuss an interval Hermite-Obreschkoff (IHO) scheme for computing rigorous bounds on the solution of an IVP for an ODE, the stability of ITS and IHO methods, and a new perspective on the wrapping effect, where we interpret the problem of reducing the wrapping effect as one of finding a more stable scheme for advancing the solution
wrapping effect;interval hermite-obreschkoff scheme;interval methods;validated methods;ordinary differential equations;interval taylor series;qr algorithm;initial value problems
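One step of an interval Taylor series method has the shape (schematic; f^{[i]} denotes the i-th Taylor coefficient function and [u~_j] an a priori enclosure of the solution over the step):

```latex
\[
  [u_{j+1}] \;=\; [u_j] \;+\; \sum_{i=1}^{k-1} h_j^{\,i}\, f^{[i]}([u_j])
  \;+\; h_j^{\,k}\, f^{[k]}([\tilde{u}_j]),
\]
```

where the last term rigorously encloses the truncation error; the growth of these boxes under repeated steps is the wrapping effect, which the paper reinterprets as the search for a more stable advancing scheme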
train_1087
Implementation of DIMSIMs for stiff differential systems
Some issues related to the implementation of diagonally implicit multistage integration methods for stiff differential systems are discussed. They include reliable estimation of the local discretization error, construction of continuous interpolants, solution of nonlinear systems of equations by simplified Newton iterations, choice of initial stepsize and order, and step- and order-changing strategy. Numerical results are presented which indicate that an experimental Matlab code based on type 2 methods of order one, two and three outperforms the ode15s code from the Matlab ODE suite on problems whose Jacobian has eigenvalues close to the imaginary axis
stiff differential systems;simplified newton iterations;local discretization error;diagonally implicit multistage integration methods;reliable estimation;dimsims;nonlinear systems of equations;interpolants;experimental matlab code
train_1088
Parallel implicit predictor corrector methods
The performance of parallel codes for the solution of initial value problems is usually strongly sensitive to the dimension of the continuous problem. This is due to the overhead related to the exchange of information among the processors, and it motivates the problem of minimizing the amount of communication. According to this principle, we define the so-called Parallel Implicit Predictor Corrector Methods, and in this class we derive A-stable, L-stable and numerically zero-stable formulas. The latter property refers to the zero-stability condition of a given formula when roundoff errors are introduced in its coefficients due to their representation in finite precision arithmetic. Some numerical experiments show the potentiality of this approach
roundoff errors;parallel implicit predictor corrector methods;finite precision arithmetic;numerically zero-stable formulas;initial value problems;zero-stability condition
train_1089
Accuracy and stability of splitting with Stabilizing Corrections
This paper contains a convergence analysis for the method of stabilizing corrections, which is an internally consistent splitting scheme for initial-boundary value problems. To obtain more accuracy and a better treatment of explicit terms several extensions are regarded and analyzed. The relevance of the theoretical results is tested for convection-diffusion-reaction equations
stabilizing corrections;convection-diffusion-reaction equations;splitting scheme;convergence analysis;stability;initial-boundary value problems
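With the right-hand side split as F = F_1 + ... + F_s, the stabilizing-corrections scheme analyzed reads (in its usual one-step form with parameter theta):

```latex
\[
\begin{aligned}
  v_0 &= u_n + h\,F(t_n, u_n),\\
  v_j &= v_{j-1} + \theta h\,\big( F_j(t_{n+1}, v_j) - F_j(t_n, u_n) \big),
       \qquad j = 1, \dots, s,\\
  u_{n+1} &= v_s ,
\end{aligned}
\]
```

so the explicit predictor v_0 carries all coupling and explicit terms, while each correction j solves implicitly only in the component F_j, stabilizing the step without changing its consistency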
train_109
An entanglement measure based on the capacity of dense coding
An asymptotic entanglement measure for any bipartite states is derived in the light of the dense coding capacity optimized with respect to local quantum operations and classical communications. General properties and some examples with explicit forms of this entanglement measure are investigated
dense coding capacity;local quantum operations;optimization;classical communications;entanglement measure;asymptotic entanglement measure;bipartite states
train_1090
On the contractivity of implicit-explicit linear multistep methods
This paper is concerned with the class of implicit-explicit linear multistep methods for the numerical solution of initial value problems for ordinary differential equations which are composed of stiff and nonstiff parts. We study the contractivity of such methods, with regard to linear autonomous systems of ordinary differential equations and a (scaled) Euclidean norm. In addition, we derive a strong stability result based on the stability regions of these methods
linear autonomous systems;ordinary differential equations;numerical solution;stability result;euclidean norm;contractivity;implicit-explicit linear multistep methods;initial value problems
train_1091
Car-caravan snaking. 1. The influence of pintle pin friction
A brief review of knowledge of car-caravan snaking is carried out. Against the background described, a fairly detailed mathematical model of a contemporary car-trailer system is constructed and a baseline set of parameter values is given. In reduced form, the model is shown to give results in accordance with literature. The properties of the baseline combination are explored using both linear and non-linear versions of the model. The influences of damping at the pintle joint and of several other design parameters on the stability of the linear system in the neighbourhood of the critical snaking speed are calculated and discussed. Coulomb friction damping at the pintle pin is then included and simulations are used to indicate the consequent amplitude-dependent behaviour. The friction damping, especially when its level has to be chosen by the user, is shown to give dangerous characteristics, despite having some capacity for stabilization of the snaking motions. It is concluded that pintle pin friction damping does not represent a satisfactory solution to the snaking problem. The paper sets the scene for the development of an improved solution
mathematical model;pintle pin friction;amplitude-dependent behaviour;coulomb friction damping;car-caravan snaking;linear system;car-trailer system;critical snaking speed
train_1092
Ride quality evaluation of an actively-controlled stretcher for an ambulance
This study considers the subjective evaluation of ride quality during ambulance transportation using an actively-controlled stretcher (ACS). The ride quality of a conventional stretcher and an assistant driver's seat is also compared. Braking during ambulance transportation generates negative foot-to-head acceleration in patients and causes blood pressure to rise in the patient's head. The ACS absorbs the foot-to-head acceleration by changing the angle of the stretcher, thus reducing the blood pressure variation. However, the ride quality of the ACS should be investigated further because the movement of the ACS may cause motion sickness and nausea. Experiments of ambulance transportation, including rapid acceleration and deceleration, are performed to evaluate the effect of differences in posture of the transported subject on the ride quality; the semantic differential method and factor analysis are used in the investigations. Subjects are transported using a conventional stretcher with head forward, a conventional stretcher with head backward, the ACS, and an assistant driver's seat for comparison with transportation using a stretcher. Experimental results show that the ACS gives the most comfortable transportation when using a stretcher. Moreover, the reduction of the negative foot-to-head acceleration at frequencies below 0.2 Hz and the small variation of the foot-to-head acceleration result in more comfortable transportation. Conventional transportation with the head forward causes the worst transportation, although the characteristics of the vibration of the conventional stretcher seem to be superior to that of the ACS
braking;motion sickness;factor analysis;head backward;assistant driver seat;conventional stretcher;ambulance;negative foot-to-head acceleration;rapid acceleration;stretcher angle;comfortable transportation;vibration;nausea;patient head;posture differences;head forward;subjective evaluation;rapid deceleration;blood pressure variation;actively-controlled stretcher;transported subject;ambulance transportation;ride quality evaluation;semantic differential method