Dataset columns:
name: string (length 7 to 10)
title: string (length 13 to 125)
abstract: string (length 67 to 3.02k)
fulltext: string (1 distinct value)
keywords: string (length 17 to 734)
train_1093
A fuzzy logic approach to accommodate thermal stress and improve the start-up phase in combined cycle power plants
Use of combined cycle power generation plants has increased dramatically over the last decade. A supervisory control approach based on a dynamic model is developed, which makes use of proportional-integral-derivative (PID), fuzzy logic and fuzzy PID schemes. The aim is to minimize the steam turbine plant start-up time without violating maximum thermal stress limits. An existing start-up schedule provides the benchmark by which the performance of candidate controllers is assessed. Improvements in terms of reduced start-up times and satisfaction of maximum thermal stress restrictions have been realized using the proposed control scheme
pid control;combined cycle power plants;start-up schedule;steam turbine plant start-up time minimization;fuzzy pid schemes;fuzzy logic approach;supervisory control;maximum thermal stress limits;dynamic model
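As a rough illustration of the supervisory idea in the abstract above (a PID-driven load/speed ramp that is slowed whenever an estimated thermal stress approaches its limit), here is a minimal Python sketch. It is not the paper's dynamic model: the stress proxy, gains, limits and turbine response are invented for illustration.

```python
# Minimal sketch (not the paper's model): a PID speed/load controller whose
# set-point ramp is supervised by a thermal-stress limit. The stress model,
# gains and limits below are illustrative assumptions, not plant data.

def pid_step(error, state, kp, ki, kd, dt):
    integral = state["i"] + error * dt
    deriv = (error - state["e"]) / dt
    state["i"], state["e"] = integral, error
    return kp * error + ki * integral + kd * deriv

def supervised_ramp(target, setpoint, ramp_rate, stress, stress_limit, dt):
    # Supervisory rule: slow the ramp as estimated stress approaches its limit.
    margin = max(0.0, 1.0 - stress / stress_limit)
    return min(target, setpoint + ramp_rate * margin * dt)

state = {"i": 0.0, "e": 0.0}
setpoint, speed, stress = 0.0, 0.0, 0.0
dt = 0.1
for _ in range(6000):                                  # 10 min at dt = 0.1 s
    setpoint = supervised_ramp(3000.0, setpoint, 10.0, stress, 400.0, dt)
    u = pid_step(setpoint - speed, state, kp=0.8, ki=0.05, kd=0.2, dt=dt)
    speed += dt * (u - 0.01 * speed)                   # toy first-order turbine response
    stress += dt * (0.5 * abs(u) - 0.02 * stress)      # toy thermal-stress proxy
```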
train_1094
Efficient allocation of knowledge in distributed business structures
Accelerated business processes demand new concepts and realizations of information systems and knowledge databases. This paper presents the concept of the collaborative information space (CIS), which supplies the necessary tools to transform individual knowledge into collective useful information. The creation of 'information objects' in the CIS allows an efficient allocation of information in all business process steps at any time. Furthermore, the specific availability of heterogeneous, distributed data is realized by a Web-based user interface, which enables effective search by a multidimensionally hierarchical composition
multidimensionally hierarchical composition;knowledge databases;collaborative information space;distributed business structures;business process steps;information objects;web-based user interface;heterogeneous distributed data;information systems;accelerated business processes;interactive system;efficient knowledge allocation
train_1095
Development of a real-time monitoring system
This paper describes a pattern recognition (PR) technique, which uses learning vector quantization (LVQ). This method is adapted for practical application to solve problems in the area of condition monitoring and fault diagnosis where a number of fault signatures are involved. In these situations, the aim is health monitoring, including identification of deterioration of the healthy condition and identification of the causes of failure in real time. For this reason a fault database is developed which contains the collected information about various states of operation of the system in the form of pattern vectors. The task of the real-time monitoring system is to correlate patterns of unknown faults with the known fault signatures in the fault database. This determines the cause of failure and the degree of deterioration of the system under test. The problem of fault diagnosis may involve a large number of patterns and a large sampling time, which affects the learning stage of neural networks. The study therefore also aims to find a fast learning model of neural networks for instances when a high number of patterns and numerous processing elements are involved, and begins by searching for an appropriate solution. The study is extended to enforcement learning models and considers LVQ as a network that emerged from the competitive learning model through enforcement training. Finally, tests show an accuracy of 92.3 per cent in the fault diagnostic capability of the technique
fault diagnosis;coolant system;pattern recognition technique;enforcement training;learning vector quantization;large sampling time;competitive learning model;fault database;health monitoring;pattern correlation;fault diagnostic capability;pattern vectors;fast learning model;condition monitoring;real-time failure cause identification;fault signatures;lvq;cnc machine centre;real-time monitoring system;deterioration identification;neural networks
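A minimal sketch of the LVQ-based matching step described above: unknown patterns are compared against labelled prototype vectors standing in for fault signatures. The prototypes, learning rate and toy data are assumptions, not the paper's fault database.

```python
# Minimal LVQ1 sketch: prototypes play the role of fault signatures; an unknown
# pattern is assigned the label of its nearest prototype.
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
    P, L = prototypes.copy(), proto_labels
    for _ in range(epochs):
        for x, label in zip(X, y):
            k = np.argmin(np.linalg.norm(P - x, axis=1))   # nearest prototype
            step = lr * (x - P[k])
            P[k] += step if L[k] == label else -step        # attract / repel
    return P

def lvq1_classify(x, prototypes, proto_labels):
    return proto_labels[np.argmin(np.linalg.norm(prototypes - x, axis=1))]

# Toy usage: two fault classes in a 3-feature space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 3)), rng.normal(2, 0.3, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
P = lvq1_train(X, y, X[[0, 50]].astype(float), np.array([0, 1]))
print(lvq1_classify(np.array([1.9, 2.1, 2.0]), P, np.array([0, 1])))
```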
train_1096
Evaluating alternative manufacturing control strategies using a benchmark system
This paper describes an investigation of the effects of dynamic job routing and job sequencing decisions on the performance of a distributed control system and its adaptability against disturbances. This experimental work was carried out to compare the performance of alternative control strategies in various manufacturing environments and to investigate the relationship between the 'control' and 'controlled' systems. The experimental test-bed presented in this paper consists of an agent-based control system (implemented in C++) and a discrete-event simulation model. Using this test-bed, various control strategies were tested on a benchmark manufacturing system by varying production volumes (to model the production system with looser/tighter schedules) and disturbance frequencies. It was found that hybrid strategies that combine reactive agent mechanisms (and allocation strategies such as the contract net) with appropriate job sequencing heuristics provide the best performance, particularly when job congestion increases on a shop-floor
discrete-event simulation model;hybrid strategies;disturbance adaptability;production volumes;benchmark system;dynamic job routing;experimental test-bed;agent-based control system;benchmark manufacturing system;reactive agent mechanisms;job congestion;allocation strategies;contract net;alternative manufacturing control strategies;job sequencing decisions;distributed control system;disturbance frequencies
train_1097
A study on an automatic seam tracking system by using an electromagnetic sensor for sheet metal arc welding of butt joints
Many sensors, such as the vision sensor and the laser displacement sensor, have been developed to automate the arc welding process. However, these sensors have some problems due to the effects of arc light, fumes and spatter. An electromagnetic sensor, which utilizes the generation of an eddy current, was developed for detecting the weld line of a butt joint in which the root gap size was zero. An automatic seam tracking system designed for sheet metal arc welding was constructed with this sensor. Through experiments, it was revealed that the system had an excellent seam tracking accuracy of the order of ±0.2 mm
automatic seam tracking system;seam tracking accuracy;weld line detection;root gap size;sheet metal arc welding;electromagnetic sensor;butt joints;eddy current generation
train_1098
Instability phenomena in the gas-metal arc welding self-regulation process
Arc instability is a very important determinant of weld quality. The instability behaviour of the gas-metal arc welding (GMAW) process is characterized by strong oscillations in arc length and current. In the paper, a model of the GMAW process is developed using an exact arc voltage characteristic. This model is used to study stability of the self-regulation process and to develop a simulation program that helps to understand the transient or dynamic nature of the GMAW process and relationships among current, electrode extension and contact tube-work distance. The process is shown to exhibit instabilities at both long electrode extension and normal extension. Results obtained from simulation runs of the model were also experimentally confirmed by the present author, as reported in this study. In order to explain the concept of the instability phenomena, the metal transfer mode and the arc voltage-current characteristic were examined. Based on this examination, the conclusion of this study is that their combined effects lead to the oscillations in arc current and length
gas-metal arc welding;exact arc voltage characteristic;weld quality;instability phenomena;metal transfer mode;self-regulation process;gmaw process;arc instability
train_1099
WebCAD: A computer aided design tool constrained with explicit 'design for manufacturability' rules for computer numerical control milling
A key element in the overall efficiency of a manufacturing enterprise is the compatibility between the features that have been created in a newly designed part, and the capabilities of the downstream manufacturing processes. With this in mind, a process-aware computer aided design (CAD) system called WebCAD has been developed. The system restricts the freedom of the designer in such a way that the designed parts can be manufactured on a three-axis computer numerical control milling machine. This paper discusses the vision of WebCAD and explains the rationale for its development in comparison with commercial CAD/CAM (computer aided design/manufacture) systems. The paper then goes on to describe the implementation issues that enforce the manufacturability rules. Finally, certain design tools are described that aid a user during the design process. Some examples are given of the parts designed and manufactured with WebCAD
internet-based cad/cam;webcad;design for manufacturability rules;computer aided design tool;computer numerical control milling;design tools;three-axis cnc milling machine;process-aware cad system;cad/cam systems;manufacturability rules;manufacturing enterprise efficiency
train_11
Does social capital determine innovation? To what extent?
This paper deals with two questions: Does social capital determine innovation in manufacturing firms? If it is the case, to what extent? To deal with these questions, we review the literature on innovation in order to see how social capital came to be added to the other forms of capital as an explanatory variable of innovation. In doing so, we have been led to follow the dominating view of the literature on social capital and innovation which claims that social capital cannot be captured through a single indicator, but that it actually takes many different forms that must be accounted for. Therefore, to the traditional explanatory variables of innovation, we have added five forms of structural social capital (business network assets, information network assets, research network assets, participation assets, and relational assets) and one form of cognitive social capital (reciprocal trust). In a context where empirical investigations regarding the relations between social capital and innovation are still scanty, this paper makes contributions to the advancement of knowledge in providing new evidence regarding the impact and the extent of social capital on innovation at the two decisionmaking stages considered in this study
participation assets;two-stage decision-making process;research network assets;cognitive social capital;degree of radicalness;innovation;reciprocal trust;business network assets;structural social capital;information network assets;manufacturing firms;relational assets
train_110
A switching synchronization scheme for a class of chaotic systems
In this Letter, we propose an observer-based synchronization scheme for a class of chaotic systems. This class of systems are given by piecewise-linear dynamics. By using some properties of such systems, we give a procedure to construct the gain of the observer. We prove various stability results and comment on the robustness of the proposed scheme. We also present some simulation results
chaotic systems;robustness;switching synchronization scheme;state observers;piecewise-linear dynamics
train_1100
Evaluation of existing and new feature recognition algorithms. 2. Experimental results
For pt.1 see ibid., p.839-851. This is the second of two papers investigating the performance of general-purpose feature detection techniques. The first paper describes the development of a methodology to synthesize possible general feature detection face sets. Six algorithms resulting from the synthesis have been designed and implemented on a SUN Workstation in C++ using ACIS as the geometric modelling system. In this paper, extensive tests and comparative analysis are conducted on the feature detection algorithms, using carefully selected components from the public domain, mostly from the National Design Repository. The results show that the new and enhanced algorithms identify face sets that previously published algorithms cannot detect. The tests also show that each algorithm can detect, among other types, a certain type of feature that is unique to it. Hence, most of the algorithms discussed in this paper would have to be combined to obtain complete coverage
concavity;national design repository;face sets;general-purpose feature detection techniques;feature recognition algorithms;convex hull
train_1101
Evaluation of existing and new feature recognition algorithms. 1. Theory and implementation
This is the first of two papers evaluating the performance of general-purpose feature detection techniques for geometric models. In this paper, six different methods are described to identify sets of faces that bound depression and protrusion faces. Each algorithm has been implemented and tested on eight components from the National Design Repository. The algorithms studied include previously published general-purpose feature detection algorithms such as the single-face inner-loop and concavity techniques. Others are improvements to existing algorithms such as extensions of the two-dimensional convex hull method to handle curved faces as well as protrusions. Lastly, new algorithms based on the three-dimensional convex hull, minimum concave, visible and multiple-face inner-loop face sets are described
minimum concave;national design repository;cad/cam software;two-dimensional convex hull method;geometric models;sets of faces;geometric reasoning algorithms;visible inner-loop face sets;three-dimensional convex hull;curved faces;feature recognition algorithms;general-purpose feature detection techniques;single-face inner-loop technique;depression faces;protrusion faces;multiple-face inner-loop face sets;concavity technique
train_1102
Design and implementation of a reusable and extensible HL7 encoding/decoding framework
The Health Level Seven (HL7), an international standard for electronic data exchange in all health care environments, enables disparate computer applications to exchange key sets of clinical and administrative information. Above all, it defines the standard HL7 message formats prescribed by the standard encoding rules. In this paper, we propose a flexible, reusable, and extensible HL7 encoding and decoding framework using a message object model (MOM) and message definition repository (MDR). The MOM provides an abstract HL7 message form represented by a group of objects and their relationships. It reflects logical relationships among the standard HL7 message elements such as segments, fields, and components, while enforcing the key structural constraints imposed by the standard. Since the MOM completely eliminates the dependency of the HL7 encoder and decoder on platform-specific data formats, it makes it possible to build the encoder and decoder as reusable standalone software components, enabling the interconnection of arbitrary heterogeneous hospital information systems (HIS) with little effort. Moreover, the MDR, an external database of key definitions for HL7 messages, helps make the encoder and decoder as resilient as possible to future modifications of the standard HL7 message formats. It is also used by the encoder and decoder to perform a well-formedness check for their respective inputs (i.e., HL7 message objects expressed in the MOM and encoded HL7 message strings). Although we implemented a prototype version of the encoder and decoder using JAVA, they can be easily packaged and delivered as standalone components using the standard component frameworks
abstract message form;structural constraints;health level seven;corba;international standard;message definition repository;mom;clinical information;health care environments;logical relationships;standalone software components;electronic data exchange;message object model;java;extensible encoding/decoding framework;mdr;his;activex;administrative information;reusable framework;javabean;hl7 message formats;external database;key definitions;heterogeneous hospital information systems
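A much simplified sketch of the message-object-model idea (segments containing fields containing components, encoded with the usual HL7 delimiters). The paper's prototype is in Java and handles the full standard; this Python fragment ignores HL7 escaping, repetition and MSH special rules and uses made-up segment content.

```python
# Simplified illustration: an HL7-like message is represented as
# segments -> fields -> components and serialized with "|" and "^" delimiters.
def encode_segment(name, fields):
    """fields: list where each item is a string or a list of components."""
    parts = [name]
    for f in fields:
        parts.append("^".join(f) if isinstance(f, list) else f)
    return "|".join(parts)

def encode_message(segments):
    return "\r".join(encode_segment(name, fields) for name, fields in segments)

msg = encode_message([
    ("MSH", ["^~\\&", "SENDING_APP", "RECEIVING_APP", "200207221200", "ADT^A01"]),
    ("PID", ["", "12345", ["DOE", "JOHN"], "19800101", "M"]),
])
print(msg.replace("\r", "\n"))
```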
train_1103
New age computing [autonomic computing]
Autonomic computing (AC), sometimes called self-managed computing, is the name chosen by IBM to describe the company's new initiative aimed at making computing more reliable and problem-free. It is a response to a growing realization that the problem today with computers is not that they need more speed or have too little memory, but that they crash all too often. This article reviews current initiatives being carried out in the AC field by the IT industry, followed by key challenges which require to be addressed in its development and implementation
computing reliability;new age computing;adaptive algorithms;it industry initiatives;problem-free computing;ac implementation;autonomic computing;ac requirements;ac development;computer crash;self-managed computing;open standards;ac;self-healing computing;ibm initiative;computer memory;computer speed
train_1104
A 3-stage pipelined architecture for multi-view images decoder
In this paper, we propose the architecture of a decoder which implements the multi-view images decoding algorithm. The hardware structure of multi-view image processing has not previously been studied. The proposed multi-view images decoder operates in a three-stage pipelined manner and extracts the depth of the pixels of the decoded image every clock cycle. The decoder consists of three modules: the Node Selector, which repeatedly transfers the values of the nodes; the Depth Extractor, which extracts the depth of each pixel from the four node values; and the Affine Transformer, which generates the projected position on the image plane from the pixel values and the specified viewpoint. The proposed architecture is designed and simulated with the Max+PlusII design tool and the operating frequency is 30 MHz. The image can be constructed in real time by a decoder with the proposed architecture
three-stage pipelined architecture;depth extractor;operating frequency;30 mhz;hardware structure;viewpoint;node selector;pixel depth;max+plusii design tool;multi-view images decoder;affine transformer
train_1105
Fuzzy business [Halden Reactor Project]
The Halden Reactor Project has developed two systems to investigate how signal validation and thermal performance monitoring techniques can be improved. PEANO is an online calibration monitoring system that makes use of artificial intelligence techniques. The system has been tested in cooperation with EPRI and Edan Engineering, using real data from a US PWR plant. These tests showed that PEANO could reliably assess the performance of the process instrumentation at different plant conditions. Real cases of zero and span drifts were successfully detected by the system. TEMPO is a system for thermal performance monitoring and optimisation, which relies on plant-wide first principle models. The system has been installed on a Swedish BWR plant. Results obtained show an overall rms deviation from measured values of a few tenths of a percent, and giving goodness-of-fits in the order of 95%. The high accuracy demonstrated is a good basis for detecting possible faults and efficiency losses in steam turbine cycles
feedwater flow;calibration;thermal performance monitoring;steam generators;pwr;bwr;steam turbine cycles;peano;artificial intelligence;tempo;halden reactor project;fuzzy logic
train_1106
Virtual projects at Halden [Reactor Project]
The Halden man-machine systems (MMS) programme for 2002 addresses issues related to human factors, control room design, computer-based support systems, and system safety and reliability, with extensive experimental work planned in the first three of these areas. The work is based on experiments and demonstrations carried out in the experimental facility HAMMLAB. Pilot versions of several operator aids are adopted, integrated into the HAMMLAB simulators and demonstrated in a full dynamic setting. The Halden virtual reality laboratory has recently become an integral and important part of the programme
computer-based support system;virtual reality;man-machine systems programme;human factors;control room design;safety;halden reactor project;reliability
train_1107
A knowledge-navigation system for dimensional metrology
Geometric dimensioning and tolerancing (GD&T) is a method to specify the dimensions and form of a part so that it will meet its design intent. GD&T is difficult to master for two main reasons. First, it is based on complex 3D geometric entities and relationships. Second, the geometry is associated with a large, diverse knowledge base of dimensional metrology with many interconnections. This paper describes an approach to create a dimensional metrology knowledge base that is organized around a set of key concepts and to represent those concepts as virtual objects that can be navigated with interactive, computer visualization techniques to access the associated knowledge. The approach can enable several applications. First is the application to convey the definition and meaning of GD&T over a broad range of tolerance types. Second is the application to provide a visualization of dimensional metrology knowledge within a control hierarchy of the inspection process. Third is the application to show the coverage of interoperability standards to enable industry to make decisions on standards development and harmonization efforts. A prototype system has been implemented to demonstrate the principles involved in the approach
knowledge navigation;geometric dimensioning;web;dimensional metrology;visualization;manufacturing training;vrml;inspection;tolerancing;interoperability standards
train_1108
The visible cement data set
With advances in x-ray microtomography, it is now possible to obtain three-dimensional representations of a material's microstructure with a voxel size of less than one micrometer. The Visible Cement Data Set represents a collection of 3-D data sets obtained using the European Synchrotron Radiation Facility in Grenoble, France in September 2000. Most of the images obtained are for hydrating portland cement pastes, with a few data sets representing hydrating Plaster of Paris and a common building brick. All of these data sets are being made available on the Visible Cement Data Set website at http://visiblecement.nist.gov. The website includes the raw 3-D datafiles, a description of the material imaged for each data set, example two-dimensional images and visualizations for each data set, and a collection of C language computer programs that will be of use in processing and analyzing the 3-D microstructural images. This paper provides the details of the experiments performed at the ESRF, the analysis procedures utilized in obtaining the data set files, and a few representative example images for each of the three materials investigated
x-ray microtomography;3d representations;plaster of paris;european synchrotron radiation facility;microstructural images;microstructure;cement hydration;two-dimensional images;building brick;voxel size;esrf;hydrating portland cement pastes
train_1109
The existence condition of γ-acyclic database schemes with MVD constraints
It is very important to use database technology for a large-scale system such as ERP and MIS. A good database design may improve the performance of the system. Some research shows that a γ-acyclic database scheme has many good properties, e.g., each connected join expression is monotonic, which helps to improve the query performance of the database system. What conditions, then, are needed to generate a γ-acyclic database scheme for a given relational scheme? In this paper, the sufficient and necessary condition for the existence of γ-acyclic, join-lossless and dependency-preserving database schemes meeting 4NF is given
query performance;connected join expression;mvd constraints;gamma-acyclic database schemes;large-scale system;existence condition;sufficient and necessary condition;database technology
train_111
Modification for synchronization of Rossler and Chen chaotic systems
Active control is an effective method for making two identical Rossler and Chen systems synchronize. However, this method works only for a certain class of chaotic systems with known parameters in both the drive and response systems. A modification based on Lyapunov stability theory is proposed in order to overcome this limitation. An adaptive synchronization controller, which can make the states of two identical Rossler and Chen systems globally asymptotically synchronized in the presence of the systems' unknown constant parameters, is derived. In particular, when some unknown parameters are positive, the controller can be made simpler; moreover, it is independent of those positive uncertain parameters. Finally, when the condition that the unknown parameters in the two systems are identical constants is removed, we demonstrate that it is still possible to synchronize the two chaotic systems. All results are proved using a well-known Lyapunov stability theorem. Numerical simulations are given to validate the proposed synchronization approach
adaptive synchronization controller;active control;global asymptotic synchronization;lyapunov stability theory;response systems;chen chaotic systems;rossler chaotic systems;synchronization
train_1110
A hybrid model for smoke simulation
A smoke simulation approach based on the integration of traditional particle systems and density functions is presented in this paper. By attaching a density function to each particle as its attribute, the diffusion of smoke can be described by the variation of particles' density functions, along with the effect on airflow by controlling particles' movement and fragmentation. In addition, a continuous density field for realistic rendering can be generated quickly through the look-up tables of particle's density functions. Compared with traditional particle systems, this approach can describe smoke diffusion, and provide a continuous density field for realistic rendering with much less computation. A quick rendering scheme is also presented in this paper as a useful preview tool for tuning appropriate parameters in the smoke model
look-up tables;rendering;smoke simulation;hybrid model;continuous density field;density functions
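A small sketch of the core idea stated above: each particle carries a density function and the continuous density field is their sum. The Gaussian kernel, 2-D grid and particle values are assumptions made for illustration, not the paper's formulation.

```python
# Sum per-particle density functions (here Gaussians) on a grid to obtain a
# continuous density field for rendering/preview.
import numpy as np

def density_field(positions, strengths, sigmas, grid_x, grid_y):
    gx, gy = np.meshgrid(grid_x, grid_y)
    field = np.zeros_like(gx)
    for (px, py), s, sig in zip(positions, strengths, sigmas):
        r2 = (gx - px) ** 2 + (gy - py) ** 2
        field += s * np.exp(-r2 / (2.0 * sig ** 2))
    return field

# Toy usage: three smoke particles rendered onto a 64x64 preview grid.
pos = [(0.3, 0.4), (0.5, 0.6), (0.55, 0.45)]
rho = density_field(pos, strengths=[1.0, 0.8, 0.6], sigmas=[0.05, 0.08, 0.06],
                    grid_x=np.linspace(0, 1, 64), grid_y=np.linspace(0, 1, 64))
print(rho.shape, rho.max())
```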
train_1111
The contiguity in R/M
An r.e. degree c is contiguous if deg_wtt(A) = deg_wtt(B) for any r.e. sets A, B in c. In this paper, we generalize the notion of contiguity to the structure R/M, the upper semilattice of the r.e. degree set R modulo the cappable r.e. degree set M. An element [c] in R/M is contiguous if [deg_wtt(A)] = [deg_wtt(B)] for any r.e. sets A, B such that deg_T(A), deg_T(B) ∈ [c]. It is proved in this paper that every nonzero element in R/M is not contiguous, i.e., for every element [c] in R/M, if [c] ≠ [0] then there exist at least two r.e. sets A, B such that deg_T(A), deg_T(B) ∈ [c] and [deg_wtt(A)] ≠ [deg_wtt(B)]
upper semilattice;turing degree;recursion theory;contiguity;nonzero element;recursively enumerable set
train_1112
Blending parametric patches with subdivision surfaces
In this paper the problem of blending parametric surfaces using subdivision patches is discussed. A new approach, named removing-boundary, is presented to generate piecewise-smooth subdivision surfaces by discarding the outermost quadrilaterals of the open meshes derived at each subdivision step. The approach is then employed both to blend parametric bicubic B-spline surfaces and to fill n-sided holes. It is easy to produce piecewise-smooth subdivision surfaces with both convex and concave corners on the boundary, and the limit surfaces are guaranteed to be C^2 continuous on the boundaries, except at a few singular points, by the removing-boundary approach. Thus the blending method is very efficient and the resulting blending surface is of good quality
subdivision surfaces;piecewise-smooth subdivision surfaces;piecewise smooth subdivision surfaces;parametric bicubic b-spline surfaces;quadrilaterals;parametric surfaces blending;subdivision patches
train_1113
Word spotting based on a posterior measure of keyword confidence
In this paper, an approach to keyword confidence estimation is developed that effectively combines acoustic layer scores and syllable-based statistical language model (LM) scores. An a posteriori (AP) confidence measure and its forward-backward calculating algorithm are deduced. A zero false alarm (ZFA) assumption is proposed for evaluating relative confidence measures on the word spotting task. In a word spotting experiment with a vocabulary of 240 keywords, the keyword accuracy under the AP measure is above 94%, which closely approaches its theoretical upper limit. In addition, a syllable lattice Hidden Markov Model (SLHMM) is formulated and a unified view of confidence estimation, word spotting, optimal path search, and N-best syllable re-scoring is presented. The proposed AP measure can be easily applied to various speech recognition systems as well
confidence estimation;optimal path search;acoustic layer scores;speech recognition systems;syllable-based statistical language model scores;a posterior measure;syllable lattice hidden markov model;n-best syllable re-scoring;relative confidence measures;forward-backward calculating algorithm;zero false alarm assumption;keyword confidence;word spotting task;word spotting;a posteriori confidence measure
train_1114
A new algebraic modelling approach to distributed problem-solving in MAS
This paper is devoted to a new algebraic modelling approach to distributed problem-solving in multi-agent systems (MAS), featuring a unified framework for describing and treating social behaviors, social dynamics and social intelligence. A conceptual architecture of algebraic modelling is presented. The algebraic modelling of typical social behaviors, social situations and social dynamics is discussed in the context of distributed problem-solving in MAS. Comparison and simulation on distributed task allocations and resource assignments in MAS show that the algebraic approach has advantages over conventional methods
social behaviors;multi-agent systems;distributed task allocations;resource assignments;unified framework;social dynamics;social intelligence;algebraic modelling approach;distributed problem-solving
train_1115
Four-point wavelets and their applications
Multiresolution analysis (MRA) and wavelets provide useful and efficient tools for representing functions at multiple levels of detail. Wavelet representations have been used in a broad range of applications, including image compression, physical simulation and numerical analysis. In this paper, the authors construct a new class of wavelets, called four-point wavelets, based on an interpolatory four-point subdivision scheme. They are locally supported, symmetric and stable. The analysis and synthesis algorithms have linear time complexity. Depending on different weight parameters w, the scaling functions and wavelets generated by the four-point subdivision scheme have different degrees of smoothness. Therefore the user can select, from among these classes of wavelets, those best suited to the application. The authors apply the four-point wavelets to signal compression. The results show that the four-point wavelets behave much better than B-spline wavelets in many situations
weight parameters;image compression;four-point wavelets;b-spline wavelets;linear time complexity;physical simulation;interpolatory four-point subdivision scheme;wavelet representations;numerical analysis;multiresolution analysis;scaling functions
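For concreteness, one refinement step of the interpolatory four-point subdivision scheme that the wavelets are built on, with the classical tension value w = 1/16; the closed (periodic) indexing is an assumption for brevity.

```python
# One step of the four-point scheme: old points are kept, and a new point is
# inserted on each edge as (1/2 + w)(p_i + p_{i+1}) - w(p_{i-1} + p_{i+2}).
import numpy as np

def four_point_step(points, w=1.0 / 16.0):
    """One subdivision step on a closed polygon of control points (n x d)."""
    p = np.asarray(points, dtype=float)
    pm1, pp1, pp2 = np.roll(p, 1, 0), np.roll(p, -1, 0), np.roll(p, -2, 0)
    new_pts = (0.5 + w) * (p + pp1) - w * (pm1 + pp2)   # inserted edge points
    out = np.empty((2 * len(p), p.shape[1]))
    out[0::2], out[1::2] = p, new_pts                   # interleave old and new
    return out

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
curve = four_point_step(four_point_step(square))        # two refinement levels
print(curve.shape)                                      # (16, 2)
```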
train_1116
An interlingua-based Chinese-English MT system
Chinese-English machine translation is a significant and challenging problem in information processing. The paper presents an interlingua-based Chinese-English natural language translation system (ICENT). It introduces the realization mechanism of Chinese language analysis, which comprises syntactic parsing and semantic analysis, and gives the design of the interlingua in detail. Experimental results and a system evaluation are given. The results are satisfactory
syntactic parsing;semantic analyzing;interlingua-based chinese-english machine translation system;information processing;natural language translation system
train_1117
An attack-finding algorithm for security protocols
This paper proposes an automatic attack construction algorithm in order to find potential attacks on security protocols. It is based on a dynamic strand space model, which enhances the original strand space model by introducing active nodes on strands so as to characterize the dynamic procedure of protocol execution. With exact causal dependency relations between messages considered in the model, this algorithm can avoid state space explosion caused by asynchronous composition. In order to get a finite state space, a new method called strand-added on demand is exploited, which extends a bundle in an incremental manner without requiring explicit configuration of protocol execution parameters. A finer granularity model of term structure is also introduced, in which subterms are divided into check subterms and data subterms. Moreover, data subterms can be further classified based on the compatible data subterm relation to obtain automatically the finite set of valid acceptable terms for an honest principal. In this algorithm, terms core is designed to represent the intruder's knowledge compactly, and forward search technology is used to simulate attack patterns easily. Using this algorithm, a new attack on the Dolve-Yao protocol can be found, which is even more harmful because the secret is revealed before the session terminates
dolve-yao protocol;attack-finding algorithm;state space explosion;asynchronous composition;data subterms;dynamic strand space model;security protocols;strand-added on demand;strand space model;check subterms
train_1118
Run-time data-flow analysis
Parallelizing compilers have made great progress in recent years. However, there still remains a gap between the current ability of parallelizing compilers and their final goals. In order to achieve maximum parallelism, run-time techniques have been used in parallelizing compilers during the last few years. First, this paper presents a basic run-time privatization method. The definition of run-time dead code is given and its side effect is discussed. To eliminate the imprecision caused by run-time dead code, backward data-flow information must be used. The Proteus Test, which can use backward information at run-time, is then presented to exploit more dynamic parallelism. A variation of the Proteus Test, the Advanced Proteus Test, is also offered to achieve partial parallelism. The Proteus Test was implemented on the parallelizing compiler AFT. At the end of this paper the program fpppp.f of the Spec95fp benchmark is taken as an example to show the effectiveness of the Proteus Test
dynamic parallelism;run-time data flow analysis;run-time dead code;parallelizing compilers;proteus test;backward data-flow information;run-time privatization method
train_1119
A component-based software configuration management model and its supporting system
Software configuration management (SCM) is an important key technology in software development. Component-based software development (CBSD) is an emerging paradigm in software development. However, to apply CBSD effectively in real-world practice, supporting SCM in CBSD needs to be further investigated. In this paper, the objects that need to be managed in CBSD are analyzed and a component-based SCM model is presented. In this model, components, as the integral logical constituents in a system, are managed as the basic configuration items in SCM, and the relationships between/among components are defined and maintained. Based on this model, a configuration management system is implemented
integral logical constituents;software development;version control;component-based software configuration management model;software reuse
train_112
Revisiting Hardy's paradox: Counterfactual statements, real measurements, entanglement and weak values
Hardy's (1992) paradox is revisited. Usually the paradox is dismissed on grounds of counterfactuality, i.e., because the paradoxical effects appear only when one considers results of experiments which do not actually take place. We suggest a new set of measurements in connection with Hardy's scheme, and show that when they are actually performed, they yield strange and surprising outcomes. More generally, we claim that counterfactual paradoxes point to a deeper structure inherent to quantum mechanics
real measurements;gedanken-experiments;hardy paradox;entanglement;quantum mechanics;counterfactual statements;weak values;paradoxical effects
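For reference, the textbook definition of a weak value (one of the quantities named in the keywords), for a system pre-selected in |psi> and post-selected in |phi>; this is the standard formula, not a description of the particular measurements proposed in the paper.

```latex
A_w \;=\; \frac{\langle \phi | \hat{A} | \psi \rangle}{\langle \phi | \psi \rangle}
```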
train_1120
An effective feedback control mechanism for DiffServ architecture
As a scalable QoS (Quality of Service) architecture, Diffserv (Differentiated Service) mainly consists of two components: traffic conditioning at the edge of the Diffserv domain and simple packet forwarding inside the DiffServ domain. DiffServ has many advantages such as flexibility, scalability and simplicity. But when providing AF (Assured Forwarding) services, DiffServ has some problems such as unfairness among aggregated flows or among micro-flows belonging to an aggregated flow. In this paper, a feedback mechanism for AF aggregated flows is proposed to solve this problem. Simulation results show that this mechanism does improve the performance of DiffServ. First, it can improve the fairness among aggregated flows and make DiffServ more friendly toward TCP (Transmission Control Protocol) flows. Second, it can decrease the buffer requirements at the congested router and thus obtain lower delay and packet loss rate. Third, it also keeps almost the same link utility as in normal DiffServ. Finally, it is simple and easy to be implemented
diffserv;traffic conditioning;fairness;qos architecture;qos;tcp;af;feedback control;feedback mechanism;packet forwarding
train_1121
Optimal bandwidth utilization of all-optical ring with a converter of degree 4
In many models of all-optical routing, a set of communication paths in a network is given, and a wavelength is to be assigned to each path so that paths sharing an edge receive different wavelengths. The goal is to assign as few wavelengths as possible, in order to use the optical bandwidth efficiently. If a node of a network contains a wavelength converter, any path that passes through this node may change its wavelength. Having converters at some of the nodes can reduce the number of wavelengths required for routing. This paper presents a wavelength converter with degree 4 and gives a routing algorithm which shows that any routing with load L can be realized with L wavelengths when a node of an all-optical ring hosts such a wavelength converter. It is also proved that 4 is the minimum degree of the converter to reach the full utilization of the available wavelengths if only one node of an all-optical ring hosts a converter
wavelength converter;all-optical network;all-optical ring;all-optical routing;wavelength translation;wavelength assignment;communication paths
train_1122
Hybrid broadcast for the video-on-demand service
Multicast offers an efficient means of distributing video contents/programs to multiple clients by batching their requests and then having them share a server's video stream. Batching customers' requests is either client-initiated or server-initiated. Most advanced client-initiated video multicasts are implemented by patching. Periodic broadcast, a typical server-initiated approach, can be entirety-based or segment-based. This paper focuses on the performance of the VoD service for popular videos. First, we analyze the limitation of conventional patching when the customer request rate is high. Then, by combining the advantages of each of the two broadcast schemes, we propose a hybrid broadcast scheme for popular videos, which not only lowers the service latency but also improves clients' interactivity by using an active buffering technique. This is shown to be a good compromise for both lowering service latency and improving the VCR-like interactivity
video-on-demand;hybrid broadcast scheme;conventional patching;quality-of-service;interactivity;scheduling;multicast;customer request rate
train_1123
A transactional asynchronous replication scheme for mobile database systems
In mobile database systems, mobility of users has a significant impact on data replication. As a result, the various replica control protocols that exist today in traditional distributed and multidatabase environments are no longer suitable. To solve this problem, a new mobile database replication scheme, the Transaction-Level Result-Set Propagation (TLRSP) model, is put forward in this paper. The conflict detection and resolution strategy based on TLRSP is discussed in detail, and the implementation algorithm is proposed. In order to compare the performance of the TLRSP model with that of other mobile replication schemes, we have developed a detailed simulation model. Experimental results show that the TLRSP model provides an efficient support for replicated mobile database systems by reducing reprocessing overhead and maintaining database consistency
mobile database replication;mobile computing;mobile database;conflict reconciliation;multidatabase;data replication;distributed database;transaction;transaction-level result-set propagation
train_1124
Data extraction from the Web based on pre-defined schema
With the development of the Internet, the World Wide Web has become an invaluable information source for most organizations. However, most documents available from the Web are in HTML form which is originally designed for document formatting with little consideration of its contents. Effectively extracting data from such documents remains a nontrivial task. In this paper, we present a schema-guided approach to extracting data from HTML pages. Under the approach, the user defines a schema specifying what to be extracted and provides sample mappings between the schema and the HTML page. The system will induce the mapping rules and generate a wrapper that takes the HTML page as input and produces the required data in the form of XML conforming to the user-defined schema. A prototype system implementing the approach has been developed. The preliminary experiments indicate that the proposed semi-automatic approach is not only easy to use but also able to produce a wrapper that extracts required data from inputted pages with high accuracy
internet;schema;information source;html;distributed database;queries;wrapper generation;data integration;data extraction
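An illustrative sketch (not the paper's system) of schema-guided wrapping: a user-defined schema maps field names to extraction rules, and the wrapper emits XML conforming to that schema. The sample HTML, field names and regular expressions are all assumptions.

```python
# Toy schema-guided wrapper: field name -> extraction rule, output as XML.
import re
import xml.etree.ElementTree as ET

schema = {                      # hypothetical user-defined schema
    "title": r"<h2>(.*?)</h2>",
    "price": r'<span class="price">(.*?)</span>',
}

def wrap(html, schema, root_tag="record"):
    root = ET.Element(root_tag)
    for field, pattern in schema.items():
        m = re.search(pattern, html, re.S)
        if m:
            ET.SubElement(root, field).text = m.group(1).strip()
    return ET.tostring(root, encoding="unicode")

page = '<h2>USB cable</h2> ... <span class="price">3.99</span>'
print(wrap(page, schema))   # <record><title>USB cable</title><price>3.99</price></record>
```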
train_1125
Structure of weakly invertible semi-input-memory finite automata with delay 1
Semi-input-memory finite automata, a kind of finite automata introduced by the first author of this paper for studying error propagation, are a generalization of input memory finite automata by appending an autonomous finite automaton component. In this paper, we give a characterization of the structure of weakly invertible semi-input-memory finite automata with delay 1, in which the state graph of each autonomous finite automaton is a cycle. From a result on mutual invertibility of finite automata obtained by the authors recently, it leads to a characterization of the structure of feedforward inverse finite automata with delay 1
semi-input-memory;delay 1;weakly invertible;invertibility;feedforward inverse finite automata;semi-input-memory finite automata;state graph;finite automata
train_1126
A note on an axiomatization of the core of market games
As shown by Peleg (1993), the core of market games is characterized by nonemptiness, individual rationality, superadditivity, the weak reduced game property, the converse reduced game property, and weak symmetry. It was not known whether weak symmetry was logically independent. With the help of a certain transitive 4-person TU game, it is shown that weak symmetry is redundant in this result. Hence, the core on market games is axiomatized by the remaining five properties, if the universe of players contains at least four members
market game core axiomatization;nonempty games;converse reduced game property;weak reduced game property;weak symmetry;individual rationality;redundant;superadditive games;transitive 4-person tu game
train_1127
Repeated games with lack of information on one side: the dual differential
approach We introduce the dual differential game of a repeated game with lack of information on one side as the natural continuous time version of the dual game introduced by De Meyer (1996). A traditional way to study the value of differential games is through discrete time approximations. Here, we follow the opposite approach: We identify the limit value of a repeated game in discrete time as the value of a differential game. Namely, we use the recursive structure for the finitely repeated version of the dual game to construct a differential game for which the upper values of the uniform discretization satisfy precisely the same property. The value of the dual differential game exists and is the unique viscosity solution of a first-order derivative equation with a limit condition. We identify the solution by translating viscosity properties in the primal
repeated games;discrete time;repeated game;viscosity solution;discrete time approximations;limit condition;limit value;dual differential game
train_1128
The semi-algebraic theory of stochastic games
The asymptotic behavior of the min-max value of a finite-state zero-sum discounted stochastic game, as the discount rate approaches 0, has been studied in the past using the theory of real-closed fields. We use the theory of semi-algebraic sets and mappings to prove some asymptotic properties of the min-max value, which hold uniformly for all stochastic games in which the number of states and players' actions are predetermined to some fixed values. As a corollary, we prove a uniform polynomial convergence rate of the value of the N-stage game to the value of the nondiscount game, over a bounded set of payoffs
min-max value;semi-algebraic set theory;finite-state zero-sum discounted stochastic game;n-stage game;asymptotic behavior;discount rate;two-player zero-sum finite-state stochastic games;uniform polynomial convergence rate
train_1129
Computing stationary Nash equilibria of undiscounted single-controller stochastic games
Given a two-person, nonzero-sum stochastic game where the second player controls the transitions, we formulate a linear complementarity problem LCP(q, M) whose solution gives a Nash equilibrium pair of stationary strategies under the limiting average payoff criterion. The matrix M constructed is of the copositive class so that Lemke's algorithm will process it. We will also do the same for a special class of N-person stochastic games called polymatrix stochastic games
polymatrix stochastic games;stationary strategies;stationary nash equilibria;n-person stochastic games;undiscounted single-controller stochastic games;two-person nonzero-sum stochastic game;lemke algorithm;linear complementarity problem;copositive class matrix;limiting average payoff criterion
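For context, the standard form of the linear complementarity problem LCP(q, M) mentioned above: find z satisfying

```latex
w = Mz + q, \qquad w \ge 0, \qquad z \ge 0, \qquad z^{\mathsf{T}} w = 0 .
```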
train_113
Quantum limit on computational time and speed
We investigate if physical laws can impose limits on computational time and speed of a quantum computer built from elementary particles. We show that the product of the speed and the running time of a quantum computer is limited by the type of fundamental interactions present inside the system. This will help us to decide as to what type of interaction should be allowed in building quantum computers in achieving the desired speed
quantum computer;fundamental interactions;computational speed;quantum limit;computational time
train_1130
Node-capacitated ring routing
We consider the node-capacitated routing problem in an undirected ring network along with its fractional relaxation, the node-capacitated multicommodity flow problem. For the feasibility problem, Farkas' lemma provides a characterization for general undirected graphs, asserting roughly that there exists such a flow if and only if the so-called distance inequality holds for every choice of distance functions arising from nonnegative node weights. For rings, this (straightforward) result will be improved in two ways. We prove that, independent of the integrality of node capacities, it suffices to require the distance inequality only for distances arising from (0-1-2)-valued node weights, a requirement that will be called the double-cut condition. Moreover, for integer-valued node capacities, the double-cut condition implies the existence of a half-integral multicommodity flow. In this case there is even an integer-valued multicommodity flow that violates each node capacity by at most one. Our approach gives rise to a combinatorial, strongly polynomial algorithm to compute either a violating double-cut or a node-capacitated multicommodity flow. A relation of the problem to its edge-capacitated counterpart will also be explained
double-cut condition;distance inequality;half-integral multicommodity flow;undirected graphs;node-capacitated multicommodity flow problem;violating double-cut;undirected ring network;integer-valued multicommodity flow;edge-cut criterion;fractional relaxation;node-capacitated routing problem;nonnegative node weights;feasibility problem;node capacity integrality;node-capacitated ring routing;distance functions;integer-valued node capacities;farkas lemma;combinatorial strongly polynomial algorithm
train_1131
A min-max theorem on feedback vertex sets
We establish a necessary and sufficient condition for the linear system {x : Hx ≥ e, x ≥ 0} associated with a bipartite tournament to be totally dual integral, where H is the cycle-vertex incidence matrix and e is the all-one vector. The consequence is a min-max relation on packing and covering cycles, together with strongly polynomial time algorithms for the feedback vertex set problem and the cycle packing problem on the corresponding bipartite tournaments. In addition, we show that the feedback vertex set problem on general bipartite tournaments is NP-complete and approximable within 3.5 based on the min-max theorem
combinatorial optimization problems;np-complete problem;min-max theorem;linear programming duality theory;feedback vertex sets;feedback vertex set problem;cycle-vertex incidence matrix;strongly polynomial time algorithms;all-one vector;totally dual integral system;linear system;cycle packing problem;graphs;bipartite tournament;necessary sufficient condition;covering cycles
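As background, the generic linear programming duality pair behind such a min-max relation, written for the system in the abstract with an arbitrary nonnegative vertex-weight vector w; total dual integrality asserts that the maximization problem has an integral optimal solution whenever w is integral and the optimum is finite. Here the minimization side corresponds to covering cycles (a fractional feedback vertex set) and the maximization side to packing cycles.

```latex
\min\{\, w^{\mathsf{T}} x \;:\; Hx \ge e,\; x \ge 0 \,\}
\;=\;
\max\{\, e^{\mathsf{T}} y \;:\; H^{\mathsf{T}} y \le w,\; y \ge 0 \,\}
```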
train_1132
Semidefinite programming vs. LP relaxations for polynomial programming
We consider the global minimization of a multivariate polynomial on a semi-algebraic set Ω defined by polynomial inequalities. We then compare two hierarchies of relaxations, namely, LP relaxations based on products of the original constraints, in the spirit of the RLT procedure of Sherali and Adams (1990), and recent semidefinite programming (SDP) relaxations introduced by the author. The comparison is analyzed in light of recent results in real algebraic geometry on various representations of polynomials that are positive on a compact semi-algebraic set
real algebraic geometry;multivariate polynomial;polynomial inequalities;global minimization;semidefinite programming relaxations;rlt procedure;semi-algebraic set;reformulation linearization technique;polynomial programming;constraint products;lp relaxations
train_1133
An analytic center cutting plane method for semidefinite feasibility problems
Semidefinite feasibility problems arise in many areas of operations research. The abstract form of these problems can be described as finding a point in a nonempty bounded convex body Γ in the cone of symmetric positive semidefinite matrices. Assume that Γ is defined by an oracle which, for any given m × m symmetric positive semidefinite matrix Ȳ, either confirms that Ȳ ∈ Γ or returns a cut, i.e., a symmetric matrix A such that Γ is contained in the half-space {Y : A • Y ≤ A • Ȳ}. We study an analytic center cutting plane algorithm for this problem. At each iteration, the algorithm computes an approximate analytic center of a working set defined by the cutting plane system generated in the previous iterations. If this approximate analytic center is a solution, then the algorithm terminates; otherwise the new cutting plane returned by the oracle is added into the system. As the number of iterations increases, the working set shrinks and the algorithm eventually finds a solution to the problem. All iterates generated by the algorithm are positive definite matrices. The algorithm has a worst-case complexity of O*(m^3/ε^2) on the total number of cuts to be used, where ε is the maximum radius of a ball contained in Γ
working set;approximate analytic center;operations research;analytic center cutting plane method;worst-case complexity;iteration;oracle;symmetric positive semidefinite matrices;nonempty bounded convex body;semidefinite feasibility problems;maximum ball radius
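One common way to write the analytic center used by such methods, sketched under the assumption that the working set consists of the accumulated scalar cuts A_j • Y ≤ c_j together with positive semidefiniteness of Y; the exact potential used in the paper may differ.

```latex
Y_{\mathrm{ac}} \;=\; \arg\max_{Y \succ 0} \Big\{ \log\det Y \;+\; \sum_{j} \log\bigl(c_j - A_j \bullet Y\bigr) \Big\}
```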
train_1134
Relationship between strong monotonicity property, P_2-property, and the GUS-property in semidefinite linear complementarity problems
In a recent paper on semidefinite linear complementarity problems, Gowda and Song (2000) introduced and studied the P-property, P_2-property, GUS-property, and strong monotonicity property for a linear transformation L: S^n → S^n, where S^n is the space of all symmetric real n × n matrices. In an attempt to characterize the P_2-property, they raised the following two questions: (i) Does strong monotonicity imply the P_2-property? (ii) Does the GUS-property imply the P_2-property? In this paper, we show that the strong monotonicity property implies the P_2-property for any linear transformation and describe an equivalence between these two properties for Lyapunov and other transformations. We show by means of an example that the GUS-property need not imply the P_2-property, even for Lyapunov transformations
p_2-property;semidefinite linear complementarity problems;lyapunov transformations;strong monotonicity property;symmetric real matrices;linear transformation;gus-property
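For reference, the standard meanings of two of the properties discussed above (the P_2-property has a more technical definition, due to Gowda and Song, which is not reproduced here): L is strongly monotone if there exists alpha > 0 such that

```latex
\langle L(X), X \rangle \;\ge\; \alpha \, \|X\|_F^2 \qquad \text{for all } X \in S^n ,
```

and L has the GUS-property if the semidefinite linear complementarity problem defined by L and Q has a unique solution for every Q in S^n.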
train_1135
A combinatorial, graph-based solution method for a class of continuous-time optimal control problems
The paper addresses a class of continuous-time, optimal control problems whose solutions are typically characterized by both bang-bang and "singular" control regimes. Analytical study and numerical computation of such solutions are very difficult and far from complete when only techniques from control theory are used. This paper solves optimal control problems by reducing them to the combinatorial search for the shortest path in a specially constructed graph. Since the nodes of the graph are weighted in a sequence-dependent manner, we extend the classical, shortest-path algorithm to our case. The proposed solution method is currently limited to single-state problems with multiple control functions. A production planning problem and a train operation problem are optimally solved to illustrate the method
sequence-dependent manner;weighted graph nodes;shortest path algorithm;single-state problems;production planning problem;combinatorial search;combinatorial graph-based solution;numerical computation;singular control regimes;train operation problem;bang-bang control regimes;multiple control functions;continuous-time optimal control problems
train_1136
Q-learning for risk-sensitive control
We propose for risk-sensitive control of finite Markov chains a counterpart of the popular Q-learning algorithm for classical Markov decision processes. The algorithm is shown to converge with probability one to the desired solution. The proof technique is an adaptation of the o.d.e. approach for the analysis of stochastic approximation algorithms, with most of the work involved used for the analysis of the specific o.d.e.s that arise
risk-sensitive control;stochastic approximation algorithms;classical markov decision processes;algorithm convergence;ordinary differential equations;dynamic programming;finite markov chains;proof technique;q-learning algorithm;reinforcement learning algorithms
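For orientation, the classical tabular Q-learning update for which the abstract proposes a risk-sensitive counterpart; the risk-sensitive modification itself is not reproduced here, and the toy environment, rates and exploration scheme are assumptions.

```python
# Classical (risk-neutral) tabular Q-learning on a toy 5-state chain.
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Move Q(s,a) toward r + gamma * max_a' Q(s', a')."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

rng = np.random.default_rng(0)
Q = np.zeros((5, 2))
s = 0
for _ in range(1000):
    a = rng.integers(2) if rng.random() < 0.2 else int(np.argmax(Q[s]))  # epsilon-greedy
    s_next = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    r = 1.0 if s_next == 4 else 0.0
    Q = q_learning_update(Q, s, a, r, s_next)
    s = 0 if s_next == 4 else s_next
print(np.round(Q, 2))
```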
train_1137
On deciding stability of constrained homogeneous random walks and queueing
systems We investigate stability of scheduling policies in queueing systems. To this day no algorithmic characterization exists for checking stability of a given policy in a given queueing system. In this paper we introduce a certain generalized priority policy and prove that the stability of this policy is algorithmically undecidable. We also prove that stability of a homogeneous random walk in L/sub +//sup d/ is undecidable. Finally, we show that the problem of computing a fluid limit of a queueing system or of a constrained homogeneous random walk is undecidable. To the best of our knowledge these are the first undecidability results in the area of stability of queueing systems and random walks in L/sub +//sup d/. We conjecture that stability of common policies like First-In-First-Out and priority policy is also an undecidable problem
scheduling policy stability;priority policy;generalized priority policy;undecidability results;queueing systems;homogeneous random walk stability;undecidable problem;first-in-first-out policy;constrained homogeneous random walks;fluid limit computation
train_1138
Approximating martingales for variance reduction in Markov process simulation
"Knowledge of either analytical or numerical approximations should enable more efficient simulation estimators to be constructed." This principle seems intuitively plausible and certainly attractive, yet no completely satisfactory general methodology has been developed to exploit it. The authors present a new approach for obtaining variance reduction in Markov process simulation that is applicable to a vast array of different performance measures. The approach relies on the construction of a martingale that is then used as an internal control variate
variance reduction;performance measures;martingales;markov process simulation;approximating martingale-process method;complex stochastic processes;single-server queue;internal control variate
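The key ingredient above is using an (approximating) martingale as an internal control variate. The snippet below shows only the underlying control-variate identity on a deliberately simple example, estimating E[exp(Z)] for standard normal Z while using Z itself (whose mean is known) as the control; the martingale construction for Markov process simulation is not attempted here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
z = rng.standard_normal(n)
f = np.exp(z)                                   # target: E[exp(Z)] = exp(0.5) ~ 1.6487
g = z                                           # control variate with known mean 0
beta = np.cov(f, g, ddof=0)[0, 1] / np.var(g)   # coefficient estimated from the same sample
plain = f.mean()
cv = (f - beta * (g - 0.0)).mean()              # control-variate estimator
print(plain, cv, np.exp(0.5))
print(np.var(f) / np.var(f - beta * g))         # approximate variance reduction factor
```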
train_1139
Development and evaluation of a case-based reasoning classifier for prediction
of breast biopsy outcome with BI-RADS/sup TM/ lexicon Approximately 70-85% of breast biopsies are performed on benign lesions. To reduce this high number of biopsies performed on benign lesions, a case-based reasoning (CBR) classifier was developed to predict biopsy results from BI-RADS/sup TM/ findings. We used 1433 (931 benign) biopsy-proven mammographic cases. CBR similarity was defined using either the Hamming or Euclidean distance measure over case features. Ten features represented each case: calcification distribution, calcification morphology, calcification number, mass margin, mass shape, mass density, mass size, associated findings, special cases, and age. Performance was evaluated using Round Robin sampling, Receiver Operating Characteristic (ROC) analysis, and bootstrap. To determine the most influential features for the CBR, an exhaustive feature search was performed over all possible feature combinations (1022) and similarity thresholds. Influential features were defined as the most frequently occurring features in the feature subsets with the highest partial ROC areas (/sub 0.90/AUC). For CBR with Hamming distance, the most influential features were found to be mass margin, calcification morphology, age, calcification distribution, calcification number, and mass shape, resulting in an /sub 0.90/AUC of 0.33. At 95% sensitivity, the Hamming CBR would spare from biopsy 34% of the benign lesions. At 98% sensitivity, the Hamming CBR would spare 27% benign lesions. For the CBR with Euclidean distance, the most influential feature subset consisted of mass margin, calcification morphology, age, mass density, and associated findings, resulting in /sub 0.90/AUC of 0.37. At 95% sensitivity, the Euclidean CBR would spare from biopsy 41% benign lesions. At 98% sensitivity, the Euclidean CBR would spare 27% benign lesions. The profile of cases spared by both distance measures at 98% sensitivity indicates that the CBR is a potentially useful diagnostic tool for the classification of mammographic lesions, by recommending short-term follow-up for likely benign lesions that is in agreement with final biopsy results and mammographer's intuition
associated findings;calcification number;mass shape;bootstrap;hamming distance measure;mass density;calcification distribution;similarity thresholds;breast biopsy outcome;age;benign lesions;cbr similarity;calcification morphology;case-based reasoning classifier;influential features;euclidean distance measure;round robin sampling;feature combinations;mammographic lesion classification;biopsy-proven mammographic cases;mass size;receiver operating characteristic analysis;short-term follow-up;feature subsets;highest partial roc areas;bi-rads lexicon;diagnostic tool;special cases;mass margin
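A toy illustration of the retrieval step follows: cases are fixed-length feature vectors, similarity is either a Hamming-style count of differing categorical features or a Euclidean distance, and cases within a similarity threshold vote on the outcome. The feature encoding, threshold values and decision rule are assumptions for the sketch, not the classifier evaluated in the study.

```python
import numpy as np

def cbr_predict(case, casebase, labels, metric="hamming", threshold=2.0):
    """Toy case-based reasoning step: retrieve stored cases within a similarity
    threshold of the query and vote on malignancy."""
    diffs = casebase - case
    if metric == "hamming":
        d = (diffs != 0).sum(axis=1)             # count of differing (categorical) features
    else:
        d = np.sqrt((diffs ** 2).sum(axis=1))    # Euclidean distance over case features
    near = d <= threshold
    if not near.any():
        return "no match - recommend biopsy"     # conservative default when nothing is retrieved
    return "malignant" if labels[near].mean() >= 0.5 else "benign"

# Hypothetical 10-feature cases (e.g. mass margin, calcification morphology, age bin, ...).
rng = np.random.default_rng(0)
casebase = rng.integers(0, 5, size=(20, 10))
labels = rng.integers(0, 2, size=20)
print(cbr_predict(casebase[0], casebase, labels, metric="hamming", threshold=3))
```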
train_114
Cooperative three- and four-player quantum games
A cooperative multi-player quantum game played by 3 and 4 players has been studied. A quantum superposed operator is introduced in this work which solves the non-zero sum difficulty in previous treatments. The role of quantum entanglement of the initial state is discussed in detail
nonzero sum difficulty;quantum entanglement;quantum superposed operator;cooperative three-player quantum games;cooperative four-player quantum games;initial state
train_1140
Computer aided classification of masses in ultrasonic mammography
Frequency compounding was recently investigated for computer aided classification of masses in ultrasonic B-mode images as benign or malignant. The classification was performed using the normalized parameters of the Nakagami distribution at a single region of interest at the site of the mass. A combination of normalized Nakagami parameters from two different images of a mass was undertaken to improve the performance of classification. Receiver operating characteristic (ROC) analysis showed that such an approach resulted in an area of 0.83 under the ROC curve. The aim of the work described in this paper is to see whether a feature describing the characteristic of the boundary can be extracted and combined with the Nakagami parameter to further improve the performance of classification. The combination of the features has been performed using a weighted summation. Results indicate a 10% improvement in specificity at a sensitivity of 96% after combining the information at the site and at the boundary. Moreover, the technique requires minimal clinical intervention and has a performance that reaches that of the trained radiologist. It is hence suggested that this technique may be utilized in practice to characterize breast masses
minimal clinical intervention;breast masses;computer aided classification;receiver operating characteristic;roc curve;frequency compounding;specificity;sensitivity;normalized parameters;ultrasonic b-mode images;ultrasonic mammography;weighted summation;normalized nakagami parameters;benign;single region of interest;nakagami distribution;malignant
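For context, the Nakagami parameters referred to above can be estimated from envelope samples in a region of interest by the standard moment (inverse normalized variance) estimator, sketched below on synthetic Rayleigh speckle, for which the shape parameter m should come out near 1. The ROI data are simulated, and no boundary feature or classifier is included.

```python
import numpy as np

def nakagami_params(envelope):
    """Moment-based estimates of the Nakagami shape (m) and scale (Omega)
    parameters from envelope samples in a region of interest."""
    r2 = np.asarray(envelope, dtype=float) ** 2
    omega = r2.mean()                 # Omega = E[R^2]
    m = omega ** 2 / r2.var()         # m = E[R^2]^2 / Var(R^2)
    return m, omega

# Illustrative ROI: Rayleigh-distributed envelope (fully developed speckle), so m ~ 1, Omega ~ 2.
rng = np.random.default_rng(0)
roi = rng.rayleigh(scale=1.0, size=5000)
print(nakagami_params(roi))
```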
train_1141
Reproducibility of mammary gland structure during repeat setups in a supine
position Purpose: In breast conserving therapy, complete excision of the tumor with an acceptable cosmetic outcome depends on accurate localization in terms of both the position of the lesion and its extent. We hypothesize that preoperative contrast-enhanced magnetic resonance (MR) imaging of the patient in a supine position may be used for accurate tumor localization and marking of its extent immediately prior to surgery. Our aims in this study are to assess the reproducibility of mammary gland structure during repeat setups in a supine position, to evaluate the effect of a breast immobilization device, and to derive reproducibility margins that take internal tissue shifts into account occurring between repeat setups. Materials Methods: The reproducibility of mammary gland structure during repeat setups in a supine position is estimated by quantification of tissue shifts in the breasts of healthy volunteers between repeat MR setups. For each volunteer fiducials are identified and registered with their counter locations in corresponding MR volumes. The difference in position denotes the shift of breast tissue. The dependence on breast volume and the part of the breast, as well as the effect of a breast immobilization cast are studied. Results: The tissue shifts are small with a mean standard deviation on the order of 1.5 mm, being slightly larger in large breasts (V>1000 cm/sup 3/), and in the posterior part (toward the pectoral muscle) of both small and large breasts. The application of a breast immobilization cast reduces the tissue shifts in large breasts. A reproducibility margin on the order of 5 mm will take the internal tissue shifts into account that occur between repeat setups. Conclusion: The results demonstrate a high reproducibility of mammary gland structure during repeat setups in a supine position
internal tissue shifts;mammary gland structure reproducibility;breast immobilization device;accurate tumor localization;localization methods;breast conserving therapy;repeat setups;reproducibility margins;contrast-enhanced magnetic resonance imaging;supine position
train_1142
Fast and accurate leaf verification for dynamic multileaf collimation using an
electronic portal imaging device A prerequisite for accurate dose delivery of IMRT profiles produced with dynamic multileaf collimation (DMLC) is highly accurate leaf positioning. In our institution, leaf verification for DMLC was initially done with film and ionization chamber. To overcome the limitations of these methods, a fast, accurate and two-dimensional method for daily leaf verification, using our CCD-camera based electronic portal imaging device (EPID), has been developed. This method is based on a flat field produced with a 0.5 cm wide sliding gap for each leaf pair. Deviations in gap widths are detected as deviations in gray scale value profiles derived from the EPID images, and not by directly assessing leaf positions in the images. Dedicated software was developed to reduce the noise level in the low signal images produced with the narrow gaps. The accuracy of this quality assurance procedure was tested by introducing known leaf position errors. It was shown that errors in leaf gap as small as 0.01-0.02 cm could be detected, which is certainly adequate to guarantee accurate dose delivery of DMLC treatments, even for strongly modulated beam profiles. Using this method, it was demonstrated that both short and long term reproducibility in leaf positioning were within 0.01 cm (1 sigma ) for all gantry angles, and that the effect of gravity was negligible
gray scale value profiles;electronic portal imaging device;intensity modulated radiation therapy profiles;two-dimensional method;ionization chamber;accurate leaf verification;leaf pair;ccd-camera based electronic portal imaging device;signal images;accurate dose delivery;leaf position errors;gantry angles;dynamic multileaf collimation;leaf positioning;electronic portal imaging device images;modulated beam profiles;gap widths;noise level;sliding gap
train_1143
A three-source model for the calculation of head scatter factors
Accurate determination of the head scatter factor S/sub c/ is an important issue, especially for intensity modulated radiation therapy, where the segmented fields are often very irregular and much less than the collimator jaw settings. In this work, we report an S/sub c/ calculation algorithm for symmetric, asymmetric, and irregular open fields shaped by the tertiary collimator (a multileaf collimator or blocks) at different source-to-chamber distance. The algorithm was based on a three-source model, in which the photon radiation to the point of calculation was treated as if it originated from three effective sources: one source for the primary photons from the target and two extra-focal photon sources for the scattered photons from the primary collimator and the flattening filter, respectively. The field mapping method proposed by Kim et al. [Phys. Med. Biol. 43, 1593-1604 (1998)] was extended to two extra-focal source planes and the scatter contributions were integrated over the projected areas (determined by the detector's eye view) in the three source planes considering the source intensity distributions. The algorithm was implemented using Microsoft Visual C/C++ in the MS Windows environment. The only input data required were head scatter factors for symmetric square fields, which are normally acquired during machine commissioning. A large number of different fields were used to evaluate the algorithm and the results were compared with measurements. We found that most of the calculated S/sub c/'s agreed with the measured values to within 0.4%. The algorithm can also be easily applied to deal with irregular fields shaped by a multileaf collimator that replaces the upper or lower collimator jaws
target;intensity modulated radiation therapy;extra-focal photon sources;blocks;irregular open fields;segmented fields;head scatter factors;fields;source-to-chamber distance;collimator jaw settings;input data;primary collimator;calculation algorithm;asymmetric;source intensity distributions;lower collimator jaws;flattening filter;multileaf collimator;extra-focal source planes;scattered photons;field mapping method;three-source model;upper collimator jaws;ms windows environment;machine commissioning;symmetric;photon radiation;tertiary collimator;symmetric square fields
train_1144
Simultaneous iterative reconstruction of emission and attenuation images in
positron emission tomography from emission data only For quantitative image reconstruction in positron emission tomography attenuation correction is mandatory. In case that no data are available for the calculation of the attenuation correction factors one can try to determine them from the emission data alone. However, it is not clear if the information content is sufficient to yield an adequate attenuation correction together with a satisfactory activity distribution. Therefore, we determined the log likelihood distribution for a thorax phantom depending on the choice of attenuation and activity pixel values to measure the crosstalk between both. In addition an iterative image reconstruction (one-dimensional Newton-type algorithm with a maximum likelihood estimator), which simultaneously reconstructs the images of the activity distribution and the attenuation coefficients is used to demonstrate the problems and possibilities of such a reconstruction. As result we show that for a change of the log likelihood in the range of statistical noise, the associated change in the activity value of a structure is between 6% and 263%. In addition, we show that it is not possible to choose the best maximum on the basis of the log likelihood when a regularization is used, because the coupling between different structures mediated by the (smoothing) regularization prevents an adequate solution due to crosstalk. We conclude that taking into account the attenuation information in the emission data improves the performance of image reconstruction with respect to the bias of the activities, however, the reconstruction still is not quantitative
thorax phantom;positron emission tomography attenuation correction;attenuation correction factors;crosstalk;activity distribution;statistical noise;iterative image reconstruction;one-dimensional newton-type algorithm;maximum likelihood estimator;image reconstruction;attenuation information;attenuation coefficients;smoothing;activity pixel values;log likelihood distribution
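As background for the simultaneous activity/attenuation reconstruction discussed above, the following sketch shows only the standard emission-only MLEM activity update for Poisson data y ~ Poisson(A lambda) on a tiny made-up system matrix; the paper's algorithm additionally updates the attenuation coefficients (with a one-dimensional Newton-type step, per the abstract), which is not reproduced here.

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Standard emission-only MLEM iterations: lam <- lam * (A^T (y / (A lam))) / (A^T 1)."""
    lam = np.ones(A.shape[1])
    sens = A.sum(axis=0)                          # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ lam
        lam *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return lam

# Tiny illustrative system matrix and noiseless data; the estimate should approach the true activities.
A = np.array([[1.0, 0.5, 0.0],
              [0.2, 1.0, 0.3],
              [0.0, 0.4, 1.0]])
true = np.array([2.0, 1.0, 3.0])
print(np.round(mlem(A, A @ true), 3))
```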
train_1145
Mammogram synthesis using a 3D simulation. II. Evaluation of synthetic
mammogram texture We have evaluated a method for synthesizing mammograms by comparing the texture of clinical and synthetic mammograms. The synthesis algorithm is based upon simulations of breast tissue and the mammographic imaging process. Mammogram texture was synthesized by projections of simulated adipose tissue compartments. It was hypothesized that the synthetic and clinical texture have similar properties, assuming that the mammogram texture reflects the 3D tissue distribution. The size of the projected compartments was computed by mathematical morphology. The texture energy and fractal dimension were also computed and analyzed in terms of the distribution of texture features within four different tissue regions in clinical and synthetic mammograms. Comparison of the cumulative distributions of the mean features computed from 95 mammograms showed that the synthetic images simulate the mean features of the texture of clinical mammograms. Correlation of clinical and synthetic texture feature histograms, averaged over all images, showed that the synthetic images can simulate the range of features seen over a large group of mammograms. The best agreement with clinical texture was achieved for simulated compartments with radii of 4-13.3 mm in predominantly adipose tissue regions, and radii of 2.7-5.33 and 1.3-2.7 mm in retroareolar and dense fibroglandular tissue regions, respectively
cumulative distributions;x-ray image acquisition;computationally compressed phantom;synthetic images;retroareolar tissue regions;synthetic mammogram texture;adipose tissue compartments;breast tissue simulation;dense fibroglandular tissue regions;3d simulation;mammogram synthesis;fractal dimension;3d tissue distribution;mathematical morphology
train_1146
Mammogram synthesis using a 3D simulation. I. Breast tissue model and image
acquisition simulation A method is proposed for generating synthetic mammograms based upon simulations of breast tissue and the mammographic imaging process. A computer breast model has been designed with a realistic distribution of large and medium scale tissue structures. Parameters controlling the size and placement of simulated structures (adipose compartments and ducts) provide a method for consistently modeling images of the same simulated breast with modified position or acquisition parameters. The mammographic imaging process is simulated using a compression model and a model of the X-ray image acquisition process. The compression model estimates breast deformation using tissue elasticity parameters found in the literature and clinical force values. The synthetic mammograms were generated by a mammogram acquisition model using a monoenergetic parallel beam approximation applied to the synthetically compressed breast phantom
computer breast model;force values;rectangular slice approximation;linear young's moduli;composite beam model;tissue elasticity parameters;image acquisition simulation;breast lesions;monoenergetic parallel beam approximation;3d simulation;mammogram synthesis;breast tissue model;ducts;adipose compartments;mammographic compression;x-ray image acquisition
train_1147
Angular disparity in ETACT scintimammography
Emission tuned aperture computed tomography (ETACT) has been previously shown to have the potential for the detection of small tumors (<1 cm) in scintimammography. However, the optimal approach to the application of ETACT in the clinic has yet to be determined. Therefore, we sought to determine the effect of the angular disparity between the ETACT projections on image quality through the use of a computer simulation. A small, spherical tumor of variable size (5, 7.5 or 10 mm) was placed at the center of a hemispherical breast (15 cm diameter). The tumor to nontumor ratio was either 5:1 or 10:1. The detector was modeled to be a gamma camera fitted with a 4-mm-diam pinhole collimator. The pinhole-to-detector and the pinhole-to-tumor distances were 25 and 15 cm, respectively. A ray tracing technique was used to generate three sets of projections (10 degrees , 15 degrees , and 20 degrees , angular disparity). These data were blurred to a resolution consistent with the 4 mm pinhole. The TACT reconstruction method was used to reconstruct these three image sets. The tumor contrast and the axial spatial resolution was measured. Smaller angular disparity led to an improvement in image contrast but at a cost of degraded axial spatial resolution. The improvement in contrast is due to a slight improvement in the in-plane spatial resolution. Since improved contrast should lead to better tumor detectability, smaller angular disparity should be used. However, the difference in contrast between 10 degrees and 15 degrees was very slight and therefore a reasonable clinical choice for angular disparity is 15 degrees
pinhole-to-tumor distances;computer simulation;image sets;hemispherical breast;angular disparity;spherical tumor;pinhole collimator;clinical choice;emission tuned aperture computed tomography scintimammography;ray tracing technique;pinhole-to-detector distances;tuned aperture computed tomography reconstruction method;image quality;axial spatial resolution;in-plane spatial resolution;gamma camera;small tumors
train_1148
Benchmarking of the Dose Planning Method (DPM) Monte Carlo code using electron
beams from a racetrack microtron A comprehensive set of measurements and calculations has been conducted to investigate the accuracy of the Dose Planning Method (DPM) Monte Carlo code for dose calculations from 10 and 50 MeV scanned electron beams produced from a racetrack microtron. Central axis depth dose measurements and a series of profile scans at various depths were acquired in a water phantom using a Scanditronix type RK ion chamber. Source spatial distributions for the Monte Carlo calculations were reconstructed from in-air ion chamber measurements carried out across the two-dimensional beam profile at 100 cm downstream from the source. The in-air spatial distributions were found to have full width at half maximum of 4.7 and 1.3 cm, at 100 cm from the source, for the 10 and 50 MeV beams, respectively. Energy spectra for the 10 and 50 MeV beams were determined by simulating the components of the microtron treatment head using the code MCNP4B. DPM calculations are on average within +or-2% agreement with measurement for all depth dose and profile comparisons conducted in this study. The accuracy of the DPM code illustrated in this work suggests that DPM may be used as a valuable tool for electron beam dose calculations
two-dimensional beam profile;water phantom;50 mev;profile scans;racetrack microtron;10 mev;dose planning method monte carlo code;mcnp4b;ion chamber;source spatial distributions;scoring parameters;electron beam dose calculations;electron transport;in-air spatial distributions;scanned electron beams;benchmarking;central axis depth dose measurements;radiotherapy treatment planning
train_1149
Deterministic calculations of photon spectra for clinical accelerator targets
A method is proposed to compute photon energy spectra produced in clinical electron accelerator targets, based on the deterministic solution of the Boltzmann equation for coupled electron-photon transport in one-dimensional (1-D) slab geometry. It is shown that the deterministic method gives results similar to those of Monte Carlo calculations over the angular range of interest for therapy applications. Relative energy spectra computed by deterministic and 3-D Monte Carlo methods, respectively, are compared for several realistic target materials and different electron beams, and are found to give similar photon energy distributions and mean energies. The deterministic calculations typically require 1-2 min of execution time on a Sun Workstation, compared to 2-36 h for the Monte Carlo runs
angular range of interest;coupled electron-photon transport;therapy applications;deterministic calculations;one-dimensional slab geometry;integrodifferential equation;3-d monte carlo methods;pencil beam source representations;linear accelerator;boltzmann equation;relative energy spectra;therapy planning;clinical electron accelerator targets;photon energy spectra
train_115
Non-optimal universal quantum deleting machine
We verify the non-existence of a standard universal quantum deleting machine. A non-optimal universal quantum deleting machine is then constructed, and we emphasize the difficulty of improving its fidelity. In a way, our results complement the universal quantum cloning machine established by Buzek and Hillery (1996) and highlight some of the distinctions between the two
universal quantum cloning machine;nuqdm;fidelity;nonoptimal universal quantum deleting machine
train_1150
Effect of multileaf collimator leaf width on physical dose distributions in the
treatment of CNS and head and neck neoplasms with intensity modulated radiation therapy The purpose of this work is to examine physical radiation dose differences between two multileaf collimator (MLC) leaf widths (5 and 10 mm) in the treatment of CNS and head and neck neoplasms with intensity modulated radiation therapy (IMRT). Three clinical patients with CNS tumors were planned with two different MLC leaf sizes, 5 and 10 mm, representing Varian-120 and Varian-80 Millennium multileaf collimators, respectively. Two sets of IMRT treatment plans were developed. The goal of the first set was radiation dose conformality in three dimensions. The goal for the second set was organ avoidance of a nearby critical structure while maintaining adequate coverage of the target volume. Treatment planning utilized the CadPlan/Helios system (Varian Medical Systems, Milpitas CA) for dynamic MLC treatment delivery. All beam parameters and optimization (cost function) parameters were identical for the 5 and 10 mm plans. For all cases the number of beams, gantry positions, and table positions were taken from clinically treated three-dimensional conformal radiotherapy plans. Conformality was measured by the ratio of the planning isodose volume to the target volume. Organ avoidance was measured by the volume of the critical structure receiving greater than 90% of the prescription dose (V/sub 90/). For three patients with squamous cell carcinoma of the head and neck (T2-T4 N0-N2c M0) 5 and 10 mm leaf widths were compared for parotid preservation utilizing nine coplanar equally spaced beams delivering a simultaneous integrated boost. Because modest differences in physical dose to the parotid were detected, a NTCP model based upon the clinical parameters of Eisbruch et al. was then used for comparisons. The conformality improved in all three CNS cases for the 5 mm plans compared to the 10 mm plans. For the organ avoidance plans, V/sub 90/ also improved in two of the three cases when the 5 mm leaf width was utilized for IMRT treatment delivery. In the third case, both the 5 and 10 mm plans were able to spare the critical structure with none of the structure receiving more than 90% of the prescription dose, but in the moderate dose range, less dose was delivered to the critical structure with the 5 mm plan. For the head and neck cases both the 5 and 10*2.5 mm beamlets dMLC sliding window techniques spared the contralateral parotid gland while maintaining target volume coverage. The mean parotid dose was modestly lower with the smaller beamlet size (21.04 Gy vs 22.36 Gy). The resulting average NTCP values were 13.72% for 10 mm dMLC and 8.24% for 5 mm dMLC. In conclusion, five mm leaf width results in an improvement in physical dose distribution over 10 mm leaf width that may be clinically relevant in some cases. These differences may be most pronounced for single fraction radiosurgery or in cases where the tolerance of the sensitive organ is less than or close to the target volume prescription
physical dose distributions;conformal radiotherapy;optimization parameters;head and neck neoplasms;collimator rotation;acceptable tumor coverage;multileaf collimator leaf width;10 mm;intensity modulated radiation therapy;treatment planning;21.04 gy;cns neoplasms;5 mm;22.36 gy;cns tumors;beamlet size;parotid preservation;minimal toxicity;single fraction radiosurgery
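The parotid NTCP comparison above uses a Lyman-Kutcher-Burman-type model. A generic sketch of that calculation (generalized EUD reduction of a DVH followed by a probit dose-response) is given below; the volume-effect and dose-response parameters shown are placeholders, not the Eisbruch et al. values used in the study, and the DVH is invented.

```python
import numpy as np
from math import erf, sqrt

def ntcp_lkb(doses_gy, volumes, n=1.0, td50=28.0, m=0.18):
    """Generic LKB-style NTCP: reduce a differential DVH (dose bins, fractional volumes)
    to a generalized EUD, then apply a probit dose-response. n, td50 (Gy) and m are
    placeholder parameters for illustration only."""
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()                                        # fractional volumes
    geud = (v * np.asarray(doses_gy, dtype=float) ** (1.0 / n)).sum() ** n
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))                # standard normal CDF

# Hypothetical parotid differential DVH: dose bin centers (Gy) and volume in each bin.
doses = np.array([5.0, 15.0, 25.0, 35.0, 45.0])
vols = np.array([0.35, 0.30, 0.20, 0.10, 0.05])
print(round(ntcp_lkb(doses, vols), 3))
```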
train_1151
A method for geometrical verification of dynamic intensity modulated
radiotherapy using a scanning electronic portal imaging device In order to guarantee the safe delivery of dynamic intensity modulated radiotherapy (IMRT), verification of the leaf trajectories during the treatment is necessary. Our aim in this study is to develop a method for on-line verification of leaf trajectories using an electronic portal imaging device with scanning read-out, independent of the multileaf collimator. Examples of such scanning imagers are electronic portal imaging devices (EPIDs) based on liquid-filled ionization chambers and those based on amorphous silicon. Portal images were acquired continuously with a liquid-filled ionization chamber EPID during the delivery, together with the signal of treatment progress that is generated by the accelerator. For each portal image, the prescribed leaf and diaphragm positions were computed from the dynamic prescription and the progress information. Motion distortion effects of the leaves are corrected based on the treatment progress that is recorded for each image row. The aperture formed by the prescribed leaves and diaphragms is used as the reference field edge, while the actual field edge is found using a maximum-gradient edge detector. The errors in leaf and diaphragm position are found from the deviations between the reference field edge and the detected field edge. Earlier measurements of the dynamic EPID response show that the accuracy of the detected field edge is better than 1 mm. To ensure that the verification is independent of inaccuracies in the acquired progress signal, the signal was checked with diode measurements beforehand. The method was tested on three different dynamic prescriptions. Using the described method, we correctly reproduced the distorted field edges. Verifying a single portal image took 0.1 s on an 866 MHz personal computer. Two flaws in the control system of our experimental dynamic multileaf collimator were correctly revealed with our method. First, the errors in leaf position increase with leaf speed, indicating a delay of approximately 0.8 s in the control system. Second, the accuracy of the leaves and diaphragms depends on the direction of motion. In conclusion, the described verification method is suitable for detailed verification of leaf trajectories during dynamic IMRT
scanning read-out;liquid-filled ionization chambers;geometrical verification method;reference field edge;leaf trajectories;treatment planning;safe delivery;diaphragm positions;control system;dynamic multileaf collimator;dynamic intensity modulated radiotherapy;distorted field edges;motion distortion effects;dose distributions;leaf positions;on-line verification
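A much reduced illustration of the maximum-gradient field-edge detection mentioned above is given below for a single 1-D profile through a portal image; the synthetic sigmoid-penumbra profile, pixel size, and the absence of the leaf/diaphragm prescription model are all assumptions of the sketch.

```python
import numpy as np

def field_edges(profile, pixel_mm=0.5):
    """Locate the two field edges of a 1-D portal-image profile as the positions of the
    maximum positive and negative gradients (a simple stand-in for a maximum-gradient
    edge detector)."""
    g = np.gradient(np.asarray(profile, dtype=float))
    return np.argmax(g) * pixel_mm, np.argmin(g) * pixel_mm

# Hypothetical profile: a 3 cm open field with smooth penumbra sampled over 10 cm.
x = np.arange(0, 100, 0.5)
profile = 1.0 / (1 + np.exp(-(x - 35))) - 1.0 / (1 + np.exp(-(x - 65)))
print(field_edges(profile))   # expected edges near 35 mm and 65 mm
```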
train_1152
Incorporating multi-leaf collimator leaf sequencing into iterative IMRT
optimization Intensity modulated radiation therapy (IMRT) treatment planning typically considers beam optimization and beam delivery as separate tasks. Following optimization, a multi-leaf collimator (MLC) or other beam delivery device is used to generate fluence patterns for patient treatment delivery. Due to limitations and characteristics of the MLC, the deliverable intensity distributions often differ from those produced by the optimizer, leading to differences between the delivered and the optimized doses. Objective function parameters are then adjusted empirically, and the plan is reoptimized to achieve a desired deliverable dose distribution. The resulting plan, though usually acceptable, may not be the best achievable. A method has been developed to incorporate the MLC restrictions into the optimization process. Our in-house IMRT system has been modified to include the calculation of the deliverable intensity into the optimizer. In this process, prior to dose calculation, the MLC leaf sequencer is used to convert intensities to dynamic MLC sequences, from which the deliverable intensities are then determined. All other optimization steps remain the same. To evaluate the effectiveness of deliverable-based optimization, 17 patient cases have been studied. Compared with standard optimization plus conversion to deliverable beams, deliverable-based optimization results show improved isodose coverage and a reduced dose to critical structures. Deliverable-based optimization results are close to the original nondeliverable optimization results, suggesting that IMRT can overcome the MLC limitations by adjusting individual beamlets. The use of deliverable-based optimization may reduce the need for empirical adjustment of objective function parameters and reoptimization of a plan to achieve desired results
fluence patterns;empirical adjustment;objective function parameters;intensity modulated radiation therapy;newton method;beam delivery;treatment planning;deliverable dose distribution;iterative optimization;beam optimization;optimized intensity;gradient-based search algorithm;dose-volume objective values;tumor dose;multileaf collimator leaf sequencing;beamlet ray intensities
train_1153
Direct aperture optimization: A turnkey solution for step-and-shoot IMRT
IMRT treatment plans for step-and-shoot delivery have traditionally been produced through the optimization of intensity distributions (or maps) for each beam angle. The optimization step is followed by the application of a leaf-sequencing algorithm that translates each intensity map into a set of deliverable aperture shapes. In this article, we introduce an automated planning system in which we bypass the traditional intensity optimization, and instead directly optimize the shapes and the weights of the apertures. We call this approach "direct aperture optimization." This technique allows the user to specify the maximum number of apertures per beam direction, and hence provides significant control over the complexity of the treatment delivery. This is possible because the machine dependent delivery constraints imposed by the MLC are enforced within the aperture optimization algorithm rather than in a separate leaf-sequencing step. The leaf settings and the aperture intensities are optimized simultaneously using a simulated annealing algorithm. We have tested direct aperture optimization on a variety of patient cases using the EGS4/BEAM Monte Carlo package for our dose calculation engine. The results demonstrate that direct aperture optimization can produce highly conformal step-and-shoot treatment plans using only three to five apertures per beam direction. As compared with traditional optimization strategies, our studies demonstrate that direct aperture optimization can result in a significant reduction in both the number of beam segments and the number of monitor units. Direct aperture optimization therefore produces highly efficient treatment deliveries that maintain the full dosimetric benefits of IMRT
intensity distributions;leaf settings;monitor units;automated planning system;maps;beam segments;aperture intensities;optimization step;turnkey solution;leaf-sequencing algorithm;aperture optimization algorithm;direct aperture optimization;patient cases;aperture shapes;highly conformal step-and-shoot treatment plans;mlc;deliverable aperture shapes;aperture weights;egs4/beam monte carlo package;treatment delivery complexity;full dosimetric benefits;intensity map;imrt treatment plans;step-and-shoot imrt;machine dependent delivery constraints;dose calculation engine;highly efficient treatment deliveries;beam angle;simulated annealing algorithm
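The simultaneous optimization of leaf settings and aperture weights can be caricatured in one dimension as follows: a handful of apertures, each a (left, right, weight) triple, are perturbed by simulated annealing until their sum matches a target fluence profile. The aperture count, move set, cooling schedule and least-squares objective are assumptions for the sketch; no MLC delivery constraints, dose engine or Monte Carlo calculation is included.

```python
import numpy as np

rng = np.random.default_rng(0)

def delivered(apertures, n_bins):
    """Fluence delivered by a set of 1-D apertures, each given as (left, right, weight)."""
    f = np.zeros(n_bins)
    for left, right, w in apertures:
        f[left:right] += w
    return f

def direct_aperture_sa(target, n_apertures=3, n_iter=20000, t0=1.0, cooling=0.9995):
    """Toy 1-D direct aperture optimization by simulated annealing over leaf positions
    and aperture weights (illustrative parameters, not the published settings)."""
    n = len(target)
    aps = [[0, n, target.mean() / n_apertures] for _ in range(n_apertures)]
    cost = np.sum((delivered(aps, n) - target) ** 2)
    temp = t0
    for _ in range(n_iter):
        cand = [list(a) for a in aps]
        k = rng.integers(n_apertures)
        move = rng.integers(3)
        if move == 0:                                  # nudge the left leaf, keeping left < right
            cand[k][0] = int(np.clip(cand[k][0] + rng.choice([-1, 1]), 0, cand[k][1] - 1))
        elif move == 1:                                # nudge the right leaf
            cand[k][1] = int(np.clip(cand[k][1] + rng.choice([-1, 1]), cand[k][0] + 1, n))
        else:                                          # perturb the weight, kept non-negative
            cand[k][2] = max(0.0, cand[k][2] + 0.05 * rng.standard_normal())
        new_cost = np.sum((delivered(cand, n) - target) ** 2)
        if new_cost < cost or rng.random() < np.exp(-(new_cost - cost) / max(temp, 1e-9)):
            aps, cost = cand, new_cost                 # Metropolis acceptance
        temp *= cooling
    return aps, cost

target = np.array([0, 0, 1, 1, 3, 3, 2, 2, 1, 0], dtype=float)
aps, cost = direct_aperture_sa(target)
print(np.round(delivered(aps, len(target)), 2), round(cost, 3))
```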
train_1154
The effect of voxel size on the accuracy of dose-volume histograms of prostate
/sup 125/I seed implants Cumulative dose-volume histograms (DVH) are crucial in evaluating the quality of radioactive seed prostate implants. When calculating DVHs, the choice of voxel size is a compromise between computational speed (larger voxels) and accuracy (smaller voxels). We quantified the effect of voxel size on the accuracy of DVHs using an in-house computer program. The program was validated by comparison with a hand-calculated DVH for a single 0.4-U iodine-125 model 6711 seed. We used the program to find the voxel size required to obtain accurate DVHs of five iodine-125 prostate implant patients at our institution. One-millimeter cubes were sufficient to obtain DVHs that are accurate within 5% up to 200% of the prescription dose. For the five patient plans, we obtained good agreement with the VariSeed (version 6.7, Varian, USA) treatment planning software's DVH algorithm by using voxels with a sup-inf dimension equal to the spacing between successive transverse seed implant planes (5 mm). The volume that receives at least 200% of the target dose, V/sub 200/, calculated by VariSeed was 30% to 43% larger than that calculated by our program with small voxels. The single-seed DVH calculated by VariSeed fell below the hand calculation by up to 50% at low doses (30 Gy), and above it by over 50% at high doses (>250 Gy)
in-house computer program;cumulative dose-volume histograms;computational speed;/sup 125/i model;single-seed dose-volume histograms;hand-calculated dose-volume histograms;prostate /sup 125/i seed implants;voxel size;radioactive seed prostate implants;i;/sup 125/i prostate implant patients;variseed treatment planning software's dose-volume histogram algorithm
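The voxel-size trade-off described above can be played with directly: the sketch below computes cumulative DVHs of a toy single-seed-like inverse-square dose field sampled on 1 mm and 5 mm grids. The dose model, grid extents and dose bins are all made up for illustration and are not TG-43 point-source dosimetry or the VariSeed algorithm.

```python
import numpy as np

def cumulative_dvh(dose, volume_per_voxel, dose_bins):
    """Cumulative DVH: fraction of the structure volume receiving at least each bin dose."""
    dose = np.asarray(dose).ravel()
    total = dose.size * volume_per_voxel
    return np.array([(dose >= d).sum() * volume_per_voxel / total for d in dose_bins])

def sample_dose(spacing_mm, side_mm=30.0):
    """Toy inverse-square dose falloff around a seed at the center of a small cube."""
    ax = np.arange(-side_mm / 2 + spacing_mm / 2, side_mm / 2, spacing_mm)
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2) + 1e-6
    return 145.0 * (10.0 / r) ** 2            # Gy, purely illustrative

bins = np.linspace(0, 300, 7)
for spacing in (1.0, 5.0):                    # 1 mm vs 5 mm voxels
    print(spacing, np.round(cumulative_dvh(sample_dose(spacing), spacing ** 3, bins), 3))
```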
train_1155
A leaf sequencing algorithm to enlarge treatment field length in IMRT
With MLC-based IMRT, the maximum usable field size is often smaller than the maximum field size for conventional treatments. This is due to the constraints of the overtravel distances of MLC leaves and/or jaws. Using a new leaf sequencing algorithm, the usable IMRT field length (perpendicular to the MLC motion) can be mostly made equal to the full length of the MLC field without violating the upper jaw overtravel limit. For any given intensity pattern, a criterion was proposed to assess whether an intensity pattern can be delivered without violation of the jaw position constraints. If the criterion is met, the new algorithm will consider the jaw position constraints during the segmentation for the step and shoot delivery method. The strategy employed by the algorithm is to connect the intensity elements outside the jaw overtravel limits with those inside the jaw overtravel limits. Several methods were used to establish these connections during segmentation by modifying a previously published algorithm (areal algorithm), including changing the intensity level, alternating the leaf-sequencing direction, or limiting the segment field size. The algorithm was tested with 1000 random intensity patterns with dimensions of 21*27 cm/sup 2/, 800 intensity patterns with higher intensity outside the jaw overtravel limit, and three different types of clinical treatment plans that were undeliverable using a segmentation method from a commercial treatment planning system. The new algorithm achieved a success rate of 100% with these test patterns. For the 1000 random patterns, the new algorithm yields a similar average number of segments of 36.9+or-2.9 in comparison to 36.6+or-1.3 when using the areal algorithm. For the 800 patterns with higher intensities outside the jaw overtravel limits, the new algorithm results in an increase of 25% in the average number of segments compared to the areal algorithm. However, the areal algorithm fails to create deliverable segments for 90% of these patterns. Using a single isocenter, the new algorithm provides a solution to extend the usable IMRT field length from 21 to 27 cm for IMRT on a commercial linear accelerator using the step and shoot delivery method
intensity pattern;commercial treatment planning system;usable intensity modulated radiation therapy field length;multileaf collimators jaws;leaf-sequencing direction;overtravel distances;random patterns;jaw overtravel limits;leaf sequencing algorithm;upper jaw overtravel limit;step and shoot delivery method;intensity elements;segment field size;conformal radiation therapy;commercial linear accelerator;deliverable segments;single isocenter;random intensity patterns;jaw position constraints;segmentation method;multileaf collimators leaves;treatment field length;areal algorithm;multileaf-based collimators intensity modulated radiation therapy
train_1156
Favorable noise uniformity properties of Fourier-based interpolation and
reconstruction approaches in single-slice helical computed tomography Volumes reconstructed by standard methods from single-slice helical computed tomography (CT) data have been shown to have noise levels that are highly nonuniform relative to those in conventional CT. These noise nonuniformities can affect low-contrast object detectability and have also been identified as the cause of the zebra artifacts that plague maximum intensity projection (MIP) images of such volumes. While these spatially variant noise levels have their root in the peculiarities of the helical scan geometry, there is also a strong dependence on the interpolation and reconstruction algorithms employed. In this paper, we seek to develop image reconstruction strategies that eliminate or reduce, at its source, the nonuniformity of noise levels in helical CT relative to that in conventional CT. We pursue two approaches, independently and in concert. We argue, and verify, that Fourier-based longitudinal interpolation approaches lead to more uniform noise ratios than do the standard 360LI and 180LI approaches. We also demonstrate that a Fourier-based fan-to-parallel rebinning algorithm, used as an alternative to fanbeam filtered backprojection for slice reconstruction, also leads to more uniform noise ratios, even when making use of the 180LI and 360LI interpolation approaches
maximum intensity projection images;low-contrast object detectability;fourier-based fan-to-parallel rebinning algorithm;fourier-based interpolation;reconstruction approaches;helical scan geometry;single-slice helical computed tomography;noise uniformity properties;medical diagnostic imaging;more uniform noise ratios;zebra artifacts;conventional ct
train_1157
Portal dose image prediction for dosimetric treatment verification in
radiotherapy. II. An algorithm for wedged beams A method is presented for calculation of a two-dimensional function, T/sub wedge/(x,y), describing the transmission of a wedged photon beam through a patient. This in an extension of the method that we have published for open (nonwedged) fields [Med. Phys. 25, 830-840 (1998)]. Transmission functions for open fields are being used in our clinic for prediction of portal dose images (PDI, i.e., a dose distribution behind the patient in a plane normal to the beam axis), which are compared with PDIs measured with an electronic portal imaging device (EPID). The calculations are based on the planning CT scan of the patient and on the irradiation geometry as determined in the treatment planning process. Input data for the developed algorithm for wedged beams are derived from (the already available) measured input data set for transmission prediction in open beams, which is extended with only a limited set of measurements in the wedged beam. The method has been tested for a PDI plane at 160 cm from the focus, in agreement with the applied focus-to-detector distance of our fluoroscopic EPIDs. For low and high energy photon beams (6 and 23 MV) good agreement (~1%) has been found between calculated and measured transmissions for a slab and a thorax phantom
virtual wedges;slab phantom;portal dose image prediction;irradiation geometry;thorax phantom;electronic portal imaging devices;dosimetric treatment verification;high energy photon beams;fluoroscopic ccd camera;transmission dosimetry;6 mv;wedged photon beam;radiotherapy;in vivo dosimetry;open beams;low energy photon beams;two-dimensional function;planning ct scan;cadplan planning system;wedged beams algorithm;23 mv;pencil beam algorithm
train_1158
From powder to perfect parts
GKN Sinter Metals has increased productivity and quality by automating the powder metal lines that produce its transmission parts
gkn sinter metals;automating;robotic systems;gentle transfer units;powder metal lines;conveyors
train_1159
Sigma -admissible families over linear orders
Admissible sets of the form HYP(M), where M is a recursively saturated system, are treated. We describe the subsets of M that are Sigma /sub */-sets in HYP(M), and the families of subsets of M that form Sigma -regular families in HYP(M), in terms of the concept of fundamentality introduced in the article. Fundamental subsets and families are characterized for models of dense linear orderings
dense linear orderings;sigma -admissible families;linear orders;recursively saturated system;hyp(m);fundamental subsets
train_116
Frontier between separability and quantum entanglement in a many spin system
We discuss the critical point x/sub c/ separating the quantum entangled and separable states in two series of N spins S in the simple mixed state characterized by the matrix operator rho = x| phi >< phi | + ((1 - x)/D/sup N/)I/sub D/N, where x in [0, 1], D = 2S + 1, I/sub D/N is the D/sup N/ * D/sup N/ unity matrix and | phi > is a special entangled state. The cases x = 0 and x = 1 correspond respectively to fully random spins and to a fully entangled state. In the first of these series we consider special states | phi > invariant under charge conjugation, which generalize the N = 2, spin S = 1/2 Einstein-Podolsky-Rosen state, and in the second one we consider generalizations of the Werner (1989) density matrices. The evaluation of the critical point x/sub c/ was done through bounds coming from the partial transposition method of Peres (1996) and the conditional nonextensive entropy criterion. Our results suggest the conjecture that whenever the bounds coming from both methods coincide, the resulting x/sub c/ is exact. The results we present are relevant for the discussion of quantum computing, teleportation and cryptography
teleportation;critical point;entangled state;werner density matrices;many spin system;einstein-podolsky-rosen state;quantum entanglement;random spin;partial transposition method;quantum computing;cryptography;separable states;separability;charge conjugation;nonextensive entropy criterion;unity matrix;matrix operator
train_1160
Monoids all polygons over which are omega -stable: proof of the Mustafin-Poizat
conjecture A monoid S is called an omega -stabilizer (superstabilizer, or stabilizer) if every S-polygon has an omega -stable (superstable, or stable) theory. It is proved that every omega -stabilizer is a regular monoid. This confirms the Mustafin-Poizat conjecture and allows us to end up the description of omega -stabilizers
mustafin-poizat conjecture;regular monoid;s-polygon;monoids all polygons;omega -stabilizer
train_1161
Model theory for hereditarily finite superstructures
We study model-theoretic properties of hereditarily finite superstructures over models of at most countable signatures. We answer in the negative the question of whether theories of hereditarily finite superstructures that have a unique (up to isomorphism) hereditarily finite superstructure can be described via definable functions. Nevertheless, theories of such superstructures admit a description in terms of the iterated families TF and SF. These are constructed using a definable union taken over countable ordinals in the subsets which are unions of finitely many complete subsets and of finite subsets, respectively. Simultaneously, we describe theories that share a unique (up to isomorphism) countable hereditarily finite superstructure
countable signatures;iterated families;countable hereditarily finite superstructure;finitely many complete subsets;definable union;model theory;model-theoretic properties
train_1162
Recognition of finite simple groups S/sub 4/(q) by their element orders
It is proved that among simple groups S/sub 4/(q) in the class of finite groups, only the groups S/sub 4/(3/sup n/), where n is an odd number greater than unity, are recognizable by the set of their element orders. It is also shown that simple groups U/sub 3/(9), /sup 3/D/sub 4/(2), G/sub 2/(4), S/sub 6/(3), F/sub 4/(2), and /sup 2/E/sub 6/(2) are recognizable, but L/sub 3/(3) is not
divisibility relation;element orders;finite simple groups recognition
train_1163
Evaluating the complexity of index sets for families of general recursive
functions in the arithmetic hierarchy The complexity of index sets of families of general recursive functions is evaluated in the Kleene-Mostowski arithmetic hierarchy
general recursive functions;kleene-mostowski arithmetic hierarchy;index sets complexity;arithmetic hierarchy
train_1164
Friedberg numberings of families of n-computably enumerable sets
We establish a number of results on numberings, in particular, on Friedberg numberings, of families of d.c.e. sets. First, it is proved that there exists a Friedberg numbering of the family of all d.c.e. sets. We also show that this result, patterned on Friedberg's famous theorem for the family of all c.e. sets, holds for the family of all n-c.e. sets for any n > 2. Second, it is shown that there exists an infinite family of d.c.e. sets without a Friedberg numbering. Third, it is shown that there exists an infinite family of c.e. sets (treated as a family of d.c.e. sets) with a numbering which is unique up to equivalence. Fourth, it is proved that there exists a family of d.c.e. sets with a least numbering (under reducibility) which is Friedberg but is not the only numbering (modulo reducibility)
computability theory;infinite family;friedberg numberings;families of n-computably enumerable sets
train_1165
Recognizing groups G/sub 2/(3/sup n/) by their element orders
It is proved that the simple non-Abelian group G = G/sub 2/(3/sup n/) is recognized, up to isomorphism, by the set omega (G) of its element orders; that is, H approximately= G whenever omega (H) = omega (G) for a finite group H
element orders;isomorphism;finite group
train_1166
Embedding the outer automorphism group Out(F/sub n/) of a free group of rank n
in the group Out(F/sub m/) for m > n It is proved that for every n >or= 1, the group Out(F/sub n/) is embedded in the group Out(F/sub m/) with m = 1 + (n - 1)k/sup n/, where k is an arbitrary natural number coprime to n - 1
free group;outer automorphism group embedding;arbitrary natural number coprime
train_1167
A new approach to the d-MC problem
Many real-world systems are multi-state systems composed of multi-state components whose reliability can be computed in terms of the lower bound points of level d, called d-Mincuts (d-MCs). Such systems (electric power, transportation, etc.) may be regarded as flow networks whose arcs have independent, discrete, limited and multi-valued random capacities. In this paper, all MCs are assumed to be known in advance, and the authors focus on how to verify each d-MC candidate before using d-MCs to calculate the network reliability. The proposed algorithm is more efficient than existing algorithms. The algorithm runs in O(p sigma mn) time, a significant improvement over the previous O(p sigma m/sup 2/) time bounds based on max-flow/min-cut, where p and sigma are the numbers of MCs and d-MC candidates, respectively. It is simple, intuitive and uses no complex data structures. An example is given to show how all d-MC candidates are found and verified by the proposed algorithm. Then the reliability of this example is computed
multi-state systems;time bounds;max-flow/min-cut;flow networks;multi-state components;d-mc problem;reliability computation;failure analysis algorithm;d-mincuts
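With all minimal cuts known in advance (as assumed above), the max-flow value of a nonnegative capacity vector equals the smallest total capacity over those cuts, so the basic candidate test is a comparison of that value with d. The tiny network, cuts and candidate below are invented for illustration; the paper's contribution is a faster verification than this naive check, which is not reproduced here.

```python
def is_d_mc_candidate_ok(candidate, mincuts, d):
    """Naive d-MC candidate test: by max-flow/min-cut, the maximum flow supported by a
    capacity vector is the minimum, over the known minimal cuts, of the total capacity
    of the cut; a candidate is kept when that value equals d."""
    max_flow = min(sum(candidate[arc] for arc in cut) for cut in mincuts)
    return max_flow == d

# Small illustrative network: arcs a..e, three minimal cuts, and a level-3 candidate.
mincuts = [("a", "b"), ("c", "d"), ("a", "d", "e")]
candidate = {"a": 2, "b": 1, "c": 1, "d": 2, "e": 0}
print(is_d_mc_candidate_ok(candidate, mincuts, d=3))   # True: every cut carries at least 3, one exactly 3
```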
train_1168
Computing failure probabilities. Applications to reliability analysis
The paper presents a method for calculating failure probabilities, with applications to reliability analysis. The method is based on transforming the initial set of variables, together with the limit condition set, to an n-dimensional uniform random variable in the unit hypercube, and then calculating the associated probability with a recursive method that uses Gauss-Legendre quadrature formulas to evaluate the resulting multiple integrals. An application example illustrates the proposed method
multiple integrals calculation;gauss-legendre quadrature formulae;n-dimensional uniform random variable;recursive method;tail approximation;limit condition;reliability analysis applications;failure probabilities computation;unit hypercube
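In the spirit of the recursive Gauss-Legendre evaluation described above, the sketch below computes a failure probability on the unit square where the inner integration limit depends on the outer variable. The limit state u1 + u2 > 1.5 (exact probability 1/8) and the node counts are assumptions for the sketch; the transformation of the original variables to the unit hypercube is taken as already done.

```python
import numpy as np

def gl_nodes(a, b, n=32):
    """Gauss-Legendre nodes and weights mapped from [-1, 1] to [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w

def failure_probability():
    """Nested Gauss-Legendre evaluation of P(u1 + u2 > 1.5) on the unit square;
    the inner integration limit depends on the outer variable, as in a recursive scheme."""
    u1, w1 = gl_nodes(0.0, 1.0)
    total = 0.0
    for ui, wi in zip(u1, w1):
        lo = max(0.0, 1.5 - ui)                  # lower limit of the failing u2-interval
        if lo < 1.0:
            u2, w2 = gl_nodes(lo, 1.0)
            total += wi * np.sum(w2 * np.ones_like(u2))   # inner integrand is 1 inside the failure set
    return total

print(failure_probability())   # close to the exact value 0.125
```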
train_1169
An efficient algorithm for sequential generation of failure states in a network
with multi-mode components In this work, a new algorithm for the sequential generation of failure states in a network with multi-mode components is proposed. The algorithm presented in the paper transforms the state enumeration problem into a K-shortest paths problem. Taking advantage of the inherent efficiency of an algorithm for shortest paths enumeration and also of the characteristics of the reliability problem in which it will be used, an algorithm with lower complexity than the best algorithm in the literature for solving this problem, was obtained. Computational results will be presented for comparing the efficiency of both algorithms in terms of CPU time and for problems of different size
multi-mode components reliability;sequential failure states generation algorithm;network failure states;cpu time;state enumeration problem;k-shortest paths problem
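One way to picture the sequential generation is as a best-first (shortest-path-like) walk over the lattice of component states, visiting state vectors in nonincreasing probability order. The heap-based sketch below does exactly that for three made-up multi-mode components; it is a generic stand-in, not the K-shortest-paths construction of the paper.

```python
import heapq
import numpy as np

def enumerate_states(mode_probs, k=10):
    """Best-first enumeration of multi-mode component state vectors in nonincreasing
    probability order. mode_probs[i] lists component i's mode probabilities, sorted
    from most to least likely; the state probability is their product (independence)."""
    def prob(state):
        return float(np.prod([p[m] for p, m in zip(mode_probs, state)]))
    start = tuple(0 for _ in mode_probs)
    heap = [(-prob(start), start)]
    seen = {start}
    out = []
    while heap and len(out) < k:
        negp, state = heapq.heappop(heap)
        out.append((state, -negp))
        for i, m in enumerate(state):                     # expand by degrading one component
            if m + 1 < len(mode_probs[i]):
                nxt = state[:i] + (m + 1,) + state[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(heap, (-prob(nxt), nxt))
    return out

# Three components, each with (working, degraded, failed) probabilities.
probs = [[0.90, 0.07, 0.03], [0.85, 0.10, 0.05], [0.95, 0.04, 0.01]]
for state, p in enumerate_states(probs, k=5):
    print(state, round(p, 5))
```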
train_117
Multiresolution Markov models for signal and image processing
This paper reviews a significant component of the rich field of statistical multiresolution (MR) modeling and processing. These MR methods have found application in, and permeated the literature of, a widely scattered set of disciplines, and one of our principal objectives is to present a single, coherent picture of this framework. A second goal is to describe how this topic fits into the even larger field of MR methods and concepts, in particular making ties to topics such as wavelets and multigrid methods. A third goal is to provide several alternate viewpoints for this body of work, as the methods and concepts we describe intersect with a number of other fields. The principal focus of our presentation is the class of MR Markov processes defined on pyramidally organized trees. The attractiveness of these models stems from both the very efficient algorithms they admit and their expressive power and broad applicability. We show how a variety of methods and models relate to this framework, including models for self-similar and 1/f processes. We also illustrate how these methods have been used in practice
multigrid methods;1/f processes;multiresolution markov models;wavelets;statistical multiresolution modeling;pyramidally organized trees;self-similar processes
train_1170
Upper bound analysis of oblique cutting with nose radius tools
A generalized upper bound model for calculating the chip flow angle in oblique cutting using flat-faced nose radius tools is described. The projection of the uncut chip area on the rake face is divided into a number of elements parallel to an assumed chip flow direction. The length of each of these elements is used to find the length of the corresponding element on the shear surface using the ratio of the shear velocity to the chip velocity. The area of each element is found as the cross product of the length and its width along the cutting edge. Summing up the area of the elements along the shear surface, the total shear surface area is obtained. The friction area is calculated using the similarity between orthogonal and oblique cutting in the 'equivalent' plane that includes both the cutting velocity and chip velocity. The cutting power is obtained by summing the shear power and the friction power. The actual chip flow angle and chip velocity are obtained by minimizing the cutting power with respect to both these variables. The shape of the curved shear surface, the chip cross section and the cutting force obtained from this model are presented
friction area;uncut chip area;nose radius tools;chip velocity;chip flow angle;shear surface;upper bound analysis;oblique cutting;shear velocity
train_1171
Manufacturing data analysis of machine tool errors within a contemporary small
manufacturing enterprise The main focus of the paper is directed at the determination of manufacturing errors within the contemporary smaller manufacturing enterprise sector. The manufacturing error diagnosis is achieved through the manufacturing data analysis of the results obtained from the inspection of the component on a co-ordinate measuring machine. This manufacturing data analysis activity adopts a feature-based approach and is conducted through the application of a forward chaining expert system, called the product data analysis distributed diagnostic expert system, which forms part of a larger prototype feedback system entitled the production data analysis framework. The paper introduces the manufacturing error categorisations that are associated with milling type operations, knowledge acquisition and representation, conceptual structure and operating procedure of the prototype manufacturing data analysis facility. The paper concludes with a brief evaluation of the logic employed through the simulation of manufacturing error scenarios. This prototype manufacturing data analysis expert system provides a valuable aid for the rapid diagnosis and elimination of manufacturing errors on a 3-axis vertical machining centre in an environment where operator expertise is limited
machine tool errors;milling type operations;fixturing errors;knowledge acquisition;feature-based approach;forward chaining expert system;conceptual structure;operating procedure;2 1/2d components;3-axis vertical machining centre;knowledge representation;manufacturing data analysis;co-ordinate measuring machine;inspection;product data analysis distributed diagnostic expert system;contemporary small manufacturing enterprise;programming errors
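A minimal sketch of the forward-chaining inference step that underlies such a diagnostic expert system is given below. The facts, rules and error categories are hypothetical placeholders, not the knowledge base of the product data analysis distributed diagnostic expert system.

```python
# Minimal forward-chaining sketch for feature-based error diagnosis.
# Rules fire when all of their condition facts are present; conclusions are
# added as new facts until no further rule can fire. Facts and rules are
# hypothetical placeholders, not the actual knowledge base.

rules = [
    ({"hole_oversize", "consistent_across_features"}, "suspect_tool_diameter_offset"),
    ({"hole_position_error", "error_direction_constant"}, "suspect_datum_setup_error"),
    ({"suspect_datum_setup_error", "slot_position_error"}, "diagnose_fixturing_error"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                       # keep firing rules until a fixed point
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

measured = {"hole_position_error", "error_direction_constant", "slot_position_error"}
print(forward_chain(measured, rules))
```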
train_1172
Marble cutting with single point cutting tool and diamond segments
An investigation has been undertaken into frame sawing with diamond blades. The kinematic behaviour of the frame sawing process is discussed. Under different cutting conditions, cutting and indenting-cutting tests are carried out with single point cutting tools and single diamond segments. The results indicate that the depth of cut per diamond grit increases as the blades move forward. Only a few grits per segment can remove the material in the cutting process. When the direction of the stroke changes, the cutting forces do not decrease to zero because of the residual plastic deformation beneath the diamond grits. The plastic deformation and fracture chipping of material are the dominant removal processes, which can be explained by the fracture theory of brittle material indentation
indenting-cutting tests;residual plastic deformation;cutting tests;diamond segments;fracture theory;kinematic behaviour;single point cutting tool;frame sawing;marble cutting;fracture chipping;removal processes;brittle material indentation
train_1173
A comprehensive chatter prediction model for face turning operation including
tool wear effect Presents a three-dimensional mechanistic frequency domain chatter model for face turning processes, that can account for the effects of tool wear including process damping. New formulations are presented to model the variation in process damping forces along nonlinear tool geometries such as the nose radius. The underlying dynamic force model simulates the variation in the chip cross-sectional area by accounting for the displacements in the axial and radial directions. The model can be used to determine stability boundaries under various cutting conditions and different states of flank wear. Experimental results for different amounts of wear are provided as a validation for the model
flank wear;tool wear effect;chatter prediction model;face turning operation;axial directions;process damping;three-dimensional mechanistic frequency domain chatter model;radial directions;stability boundaries
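As a hedged illustration of the frequency-domain approach (a single-degree-of-freedom orthogonal analogue, without the paper's nose-radius geometry, process damping or wear terms), the sketch below computes the classical limiting width of cut from the real part of the structural frequency response function. All modal and cutting parameters are assumptions.

```python
# Hedged 1-DOF frequency-domain chatter sketch: limiting width of cut from the
# real part of the tool FRF (no nose radius, process damping or wear effects).
# Modal and cutting-stiffness values are illustrative assumptions.
import numpy as np

k = 2.0e7                 # modal stiffness [N/m] (assumed)
zeta = 0.03               # damping ratio (assumed)
wn = 2 * np.pi * 600.0    # natural frequency [rad/s] (assumed)
Kf = 1.5e9                # cutting force coefficient [N/m^2] (assumed)

w = np.linspace(0.5 * wn, 1.5 * wn, 5000)
r = w / wn
G = 1.0 / (k * (1 - r**2 + 2j * zeta * r))      # structural FRF
ReG = G.real

# Classical limit: a_lim(w) = -1 / (2 * Kf * Re(G)), defined where Re(G) < 0
mask = ReG < 0
a_lim = -1.0 / (2.0 * Kf * ReG[mask])
print("minimum limiting width of cut [mm]:", 1e3 * a_lim.min())
```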
train_1174
Optimization of cutting conditions for single pass turning operations using a
deterministic approach An optimization analysis, strategy and CAM software for the selection of economic cutting conditions in single pass turning operations are presented using a deterministic approach. The optimization is based on criteria typified by the maximum production rate and includes a host of practical constraints. It is shown that the deterministic optimization approach involving mathematical analyses of constrained economic trends and graphical representation on the feed-speed domain provides a clearly defined strategy that not only provides a unique global optimum solution, but also the software that is suitable for on-line CAM applications. A numerical study has verified the developed optimization strategies and software and has shown the economic benefits of using optimization
deterministic approach;economic cutting conditions;process planning;cam software;cutting conditions optimization;constrained economic trends;maximum production rate;mathematical analyses;single pass turning operations
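A minimal sketch of constrained selection on the feed-speed domain follows: maximize the material removal rate subject to power, surface-finish and tool-life constraints, with an extended Taylor tool-life equation. The coefficients and limits are illustrative assumptions, not those of the paper's analysis or CAM software.

```python
# Hedged sketch: choose feed f [mm/rev] and cutting speed v [m/min] on a grid
# to maximize removal rate under power, roughness and tool-life constraints.
# All coefficients and limits are illustrative assumptions.
import numpy as np

d = 2.0                              # depth of cut [mm], fixed for a single pass
feeds = np.linspace(0.05, 0.5, 60)
speeds = np.linspace(50, 400, 120)

best = None
for f in feeds:
    for v in speeds:
        mrr = 1000.0 * v * f * d                       # removal rate [mm^3/min]
        power = 2.5 * mrr / 60000.0                    # specific energy 2.5 J/mm^3 -> power [kW]
        Ra = 1000.0 * f**2 / (32.0 * 0.8)              # ideal roughness, nose radius 0.8 mm [um]
        T = (400.0 / (v * f**0.35)) ** (1 / 0.25)      # extended Taylor tool life [min]
        if power <= 7.5 and Ra <= 6.3 and T >= 15.0:   # assumed machine/quality/economic limits
            if best is None or mrr > best[0]:
                best = (mrr, f, v)

print("best feasible point (MRR, feed, speed):", best)
```

The grid search stands in for the paper's analytical strategy; the feasible region on the feed-speed plane is bounded by the same kind of constraint curves, and the optimum sits on one of them.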
train_1175
Prediction of tool and chip temperature in continuous and interrupted machining
A numerical model based on the finite difference method is presented to predict tool and chip temperature fields in continuous machining and time varying milling processes. Continuous or steady state machining operations like orthogonal cutting are studied by modeling the heat transfer between the tool and chip at the tool-rake face contact zone. The shear energy created in the primary zone, the friction energy produced at the rake face-chip contact zone and the heat balance between the moving chip and stationary tool are considered. The temperature distribution is solved using the finite difference method. Later, the model is extended to milling where the cutting is interrupted and the chip thickness varies with time. The proposed model combines the steady-state temperature prediction in continuous machining with transient temperature evaluation in interrupted cutting operations where the chip and the process change in a discontinuous manner. The mathematical models and simulation results are in satisfactory agreement with experimental temperature measurements reported in the literature
primary zone;heat transfer;finite difference method;tool temperature prediction;interrupted machining;thermal properties;tool-rake face contact zone;orthogonal cutting;continuous machining;temperature distribution;time varying milling processes;friction energy;shear energy;numerical model;chip temperature prediction;first-order dynamic system
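A minimal sketch of the explicit finite-difference time marching used for such temperature fields is given below, here a one-dimensional rod with a fixed hot boundary standing in for the tool-chip contact zone. The thermal properties, grid and source temperature are assumptions.

```python
# Minimal explicit finite-difference sketch: 1-D transient heat conduction,
# dT/dt = alpha * d2T/dx2, with a fixed hot boundary standing in for the
# tool-chip contact zone. Material properties and grid are assumptions.
import numpy as np

alpha = 1.2e-5              # thermal diffusivity [m^2/s] (assumed)
L, nx = 0.02, 101           # 20 mm domain, 101 nodes
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha    # satisfies the explicit stability limit dt <= dx^2/(2*alpha)

T = np.full(nx, 20.0)       # initial temperature [C]
T[0] = 600.0                # contact-zone temperature held fixed (assumed)

for _ in range(2000):       # march in time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]           # insulated far end

print("temperature 2 mm from the contact [C]:", round(T[10], 1))
```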
train_1176
A summary of methods applied to tool condition monitoring in drilling
Presents a summary of the monitoring methods, signal analysis and diagnostic techniques for tool wear and failure monitoring in drilling that have been tested and reported in the literature. The paper covers only indirect monitoring methods such as force, vibration and current measurements. The signal analysis techniques cover all the methods that have been used with indirect measurements, including, for example, statistical parameters and the Fast Fourier and wavelet transforms. Only a limited number of automatic diagnostic tools have been developed for diagnosing the condition of the tool in drilling. All of these rather diverse approaches that have been available are covered in this study. Only in a few of the papers have attempts been made to compare the chosen approach with other methods. Many of the papers present only one approach and, unfortunately, the test material of the study is quite often limited, especially with respect to the variation of cutting process parameters and workpiece materials
tool wear;force measurements;indirect monitoring methods;diagnostic techniques;failure monitoring;wavelet transform;current measurements;vibration measurements;automatic diagnostic tools;tool condition monitoring;fast fourier transform;drilling;signal analysis;monitoring methods;statistical parameters
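A small sketch of the kind of indirect-signal feature extraction surveyed above follows: statistical parameters and an FFT spectrum computed from a synthetic vibration signal. The signal model, sampling rate and harmonic frequency are assumptions made only for illustration.

```python
# Sketch of indirect-signal feature extraction for tool condition monitoring:
# statistical parameters plus an FFT spectrum of a synthetic vibration signal.
# The signal model (one harmonic plus noise) is an assumption.
import numpy as np

fs = 10_000.0                         # sampling rate [Hz] (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
tooth_passing = 180.0                 # hypothetical tooth-passing frequency [Hz]
signal = 0.5 * np.sin(2 * np.pi * tooth_passing * t) + 0.1 * np.random.randn(t.size)

# Time-domain statistical features commonly used in the surveyed methods
rms = np.sqrt(np.mean(signal**2))
kurtosis = np.mean((signal - signal.mean())**4) / signal.var()**2
crest = np.max(np.abs(signal)) / rms

# Frequency-domain feature: location of the dominant spectral line
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]

print(f"RMS={rms:.3f}  kurtosis={kurtosis:.2f}  crest={crest:.2f}  dominant={dominant:.0f} Hz")
```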
train_1177
Comparative statistical analysis of hole taper and circularity in laser
percussion drilling Investigates the relationships and parameter interactions between six controllable variables on the hole taper and circularity in laser percussion drilling. Experiments have been conducted on stainless steel workpieces and a comparison was made between stainless steel and mild steel. The central composite design was employed to plan the experiments in order to achieve required information with reduced number of experiments. The process performance was evaluated. The ratio of minimum to maximum Feret's diameter was considered as circularity characteristic of the hole. The models of these three process characteristics were developed by linear multiple regression technique. The significant coefficients were obtained by performing analysis of variance (ANOVA) at 1, 5 and 7% levels of significance. The final models were checked by complete residual analysis and finally were experimentally verified. It was found that the pulse frequency had a significant effect on the hole entrance diameter and hole circularity in drilling stainless steel unlike the drilling of mild steel where the pulse frequency had no significant effect on the hole characteristics
pulse frequency;laser percussion drilling;analysis of variance;central composite design;anova;mild steel;equivalent entrance diameter;laser pulse width;ferets diameter;process performance;complete residual analysis;linear multiple regression technique;stepwise regression method;hole taper;least squares procedure;stainless steel workpieces;laser peak power;focal plane position;comparative statistical analysis;circularity;assist gas pressure
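A minimal sketch of fitting a linear multiple-regression model of a hole characteristic (e.g. circularity) to coded process variables by least squares is shown below. The design matrix and responses are synthetic placeholders, not the paper's central-composite-design data.

```python
# Hedged sketch: linear multiple regression of a hole characteristic on coded
# process variables via least squares. Data are synthetic placeholders, not
# the experimental central-composite-design results.
import numpy as np

rng = np.random.default_rng(1)
n = 30
# Coded levels of three of the controllable variables (e.g. peak power,
# pulse frequency, pulse width), drawn at random for illustration.
X = rng.uniform(-1, 1, size=(n, 3))
true_beta = np.array([0.85, 0.05, -0.12, 0.02])                  # intercept + 3 effects (assumed)
y = true_beta[0] + X @ true_beta[1:] + rng.normal(0, 0.02, n)    # synthetic "circularity"

A = np.column_stack([np.ones(n), X])                             # design matrix with intercept
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

y_hat = A @ beta_hat
r2 = 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
print("estimated coefficients:", np.round(beta_hat, 3), " R^2 =", round(r2, 3))
```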
train_1178
Network-centric systems
The author describes a graduate-level course that addresses cutting-edge issues in network-centric systems while following a more traditional graduate seminar format
graduate level course;network-centric systems
train_1179
Evolution complexity of the elementary cellular automaton rule 18
Cellular automata are classes of mathematical systems characterized by discreteness (in space, time, and state values), determinism, and local interaction. Using symbolic dynamical theory, we coarse-grain the temporal evolution orbits of cellular automata. By means of formal languages and automata theory, we study the evolution complexity of the elementary cellular automaton with local rule number 18 and prove that its width 1-evolution language is regular, but for every n >= 2 its width n-evolution language is not context-free but context-sensitive
elementary cellular automaton;formal languages;complexity;cellular automata;evolution complexity;symbolic dynamical theory
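For concreteness, a short sketch that evolves elementary rule 18 from a random initial configuration with periodic boundaries follows; the lattice size and number of steps are arbitrary choices.

```python
# Evolve elementary cellular automaton rule 18 on a periodic lattice.
# Rule 18 (binary 00010010): a cell becomes 1 iff it is 0 and exactly one of
# its outer neighbours is 1.
import numpy as np

def step(cells, rule=18):
    left = np.roll(cells, 1)
    right = np.roll(cells, -1)
    idx = 4 * left + 2 * cells + right          # neighbourhood encoded as 0..7
    table = np.array([(rule >> k) & 1 for k in range(8)], dtype=np.uint8)
    return table[idx]

rng = np.random.default_rng(0)
cells = rng.integers(0, 2, size=80, dtype=np.uint8)
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```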
train_118
Sensorless control of induction motor drives
Controlled induction motor drives without mechanical speed sensors at the motor shaft have the attractions of low cost and high reliability. To replace the sensor, the information on the rotor speed is extracted from measured stator voltages and currents at the motor terminals. Vector-controlled drives require estimating the magnitude and spatial orientation of the fundamental magnetic flux waves in the stator or in the rotor. Open-loop estimators or closed-loop observers are used for this purpose. They differ with respect to accuracy, robustness, and sensitivity to model parameter variations. Dynamic performance and steady-state speed accuracy in the low-speed range can be achieved by exploiting parasitic effects of the machine. The overview in this paper uses signal flow graphs of complex space vector quantities to provide an insightful description of the systems used in sensorless control of induction motors
induction motor drives;model parameter variations;vector-controlled drives;closed-loop observers;stator voltages;space vector quantities;robustness;sensorless control;steady-state speed accuracy;signal flow graphs;stator currents;parasitic effects;open-loop estimators;spatial orientation;fundamental magnetic flux waves;magnitude;sensitivity;reliability
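A minimal sketch of the simplest open-loop estimator in this family, the stator "voltage model", is shown below: it integrates the back EMF to estimate the stator flux vector from measured terminal voltages and currents. The synthetic steady-state signals and the stator resistance are assumptions.

```python
# Hedged sketch of the open-loop stator voltage model:
#   psi_s = integral(v_s - R_s * i_s) dt   (stationary alpha-beta frame).
# Terminal signals are synthetic sinusoids; R_s and amplitudes are assumed.
import numpy as np

Rs = 2.0                   # stator resistance [ohm] (assumed)
f1 = 50.0                  # fundamental frequency [Hz]
dt = 1e-4
t = np.arange(0, 0.2, dt)

# Synthetic measured terminal quantities as complex space vectors (alpha + j*beta)
v = 310.0 * np.exp(1j * 2 * np.pi * f1 * t)            # stator voltage space vector
i = 10.0 * np.exp(1j * (2 * np.pi * f1 * t - 0.6))     # stator current, lagging

psi = np.cumsum(v - Rs * i) * dt       # open-loop integration of the back EMF
psi -= psi.mean()                      # crude removal of the integration constant
                                       # (real drives replace the pure integrator to avoid drift)
print(f"estimated flux magnitude {abs(psi[-1]):.2f} Vs, "
      f"angle {np.degrees(np.angle(psi[-1])):.1f} deg")
```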
train_1180
Decomposition of additive cellular automata
Finite additive cellular automata with fixed and periodic boundary conditions are considered as endomorphisms over pattern spaces. A characterization of the nilpotent and regular parts of these endomorphisms is given in terms of their minimal polynomials. Generalized eigenspace decomposition is determined and relevant cyclic subspaces are described in terms of symmetries. As an application, the lengths and frequencies of limit cycles in the transition diagram of the automaton are calculated
endomorphisms;cellular automata;finite cellular automaton;computational complexity;transition diagram
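A small sketch of the linear-algebra view used above follows: an additive CA with periodic boundary (here rule 90, each cell the XOR of its two neighbours) written as a matrix acting over GF(2), with the transient length and limit-cycle length of one orbit found by iteration. The lattice size is an arbitrary choice.

```python
# Sketch: an additive CA (rule 90, periodic boundary) as a linear map over
# GF(2), i.e. x(t+1) = T x(t) mod 2, and the cycle length of one orbit.
import numpy as np

n = 10                                     # lattice size (arbitrary choice)
T = np.zeros((n, n), dtype=np.uint8)
for j in range(n):
    T[j, (j - 1) % n] = 1                  # left neighbour
    T[j, (j + 1) % n] = 1                  # right neighbour

def step(x):
    return (T @ x) % 2

x = np.zeros(n, dtype=np.uint8)
x[0] = 1                                   # single seed cell
seen = {}
t = 0
while tuple(x) not in seen:                # walk the transition diagram
    seen[tuple(x)] = t
    x = step(x)
    t += 1
tail = seen[tuple(x)]                      # length of the transient part
print("transient length:", tail, " limit-cycle length:", t - tail)
```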
train_1181
Dynamic neighborhood structures in parallel evolution strategies
Parallelizing is a straightforward approach to reduce the total computation time of evolutionary algorithms. Finding an appropriate communication network within spatially structured populations for improving convergence speed and convergence probability is a difficult task. A new method that uses a dynamic communication scheme in an evolution strategy will be compared with conventional static and dynamic approaches. The communication structure is based on a so-called diffusion model approach. The links between adjacent individuals are dynamically chosen according to deterministic or probabilistic rules. Due to self-organization effects, efficient and stable communication structures are established that perform robustly and quickly on a multimodal test function
multimodal test function;parallelizing;convergence speed;convergence probability;parallel evolutionary algorithms;evolutionary algorithms
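A compact sketch of a diffusion-model evolution strategy on a ring topology follows: each individual recombines with a dynamically chosen neighbour within a small radius, mutates, and is replaced only if the offspring is better. The Rastrigin test function, population size and step size are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch of a diffusion-model evolution strategy on a ring topology:
# each individual mates with a dynamically chosen near neighbour, mutates,
# and survives only if the offspring improves. All parameters are assumptions.
import numpy as np

def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(0)
pop_size, dim, radius, sigma = 50, 5, 2, 0.3
pop = rng.uniform(-5.12, 5.12, size=(pop_size, dim))
fit = np.array([rastrigin(x) for x in pop])

for gen in range(300):
    for i in range(pop_size):
        # dynamic neighbourhood: pick a random partner within the ring radius
        offset = rng.integers(1, radius + 1) * rng.choice([-1, 1])
        j = (i + offset) % pop_size
        child = 0.5 * (pop[i] + pop[j]) + rng.normal(0, sigma, dim)   # recombine + mutate
        f = rastrigin(child)
        if f < fit[i]:                                                # local replacement
            pop[i], fit[i] = child, f

print("best fitness after 300 generations:", round(fit.min(), 3))
```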
train_1182
Optimization of the memory weighting function in stochastic functional
self-organized sorting performed by a team of autonomous mobile agents The activity of a team of autonomous mobile agents formed by identical "robot-like-ant" individuals capable of performing a random walk through an environment that are able to recognize and move different "objects" is modeled. The emergent desired behavior is a distributed sorting and clustering based only on local information and a memory register that records the past objects encountered. An optimum weighting function for the memory registers is theoretically derived. The optimum time-dependent weighting function allows sorting and clustering of the randomly distributed objects in the shortest time. By maximizing the average speed of a texture feature (the contrast) we check the central assumption, the intermediate steady-states hypothesis, of our theoretical result. It is proved that the algorithm optimization based on maximum speed variation of the contrast feature gives relationships similar to the theoretically derived annealing law
autonomous mobile agents;algorithm optimization;random walk;sorting;memory weighting function;clustering
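A minimal sketch of the modeled behaviour is given below: a single "robot-like ant" walks randomly on a ring of cells, keeps a memory register of recently seen cell contents, and uses an exponentially weighted fraction of similar objects in that memory to set its pick-up and drop probabilities. The exponential weighting, probability constants and lattice are illustrative assumptions and not the optimum weighting function derived in the paper.

```python
# Hedged sketch of memory-based ant-like sorting on a ring: pick-up/drop
# probabilities depend on the weighted fraction of similar objects in a short
# memory of recently visited cells. The exponential weighting and all
# constants are illustrative assumptions, not the paper's optimum.
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
n_cells, n_steps, mem_len = 200, 100_000, 20
cells = rng.choice([0, 1, 2], size=n_cells, p=[0.5, 0.25, 0.25])  # 0 = empty, 1/2 = object types

pos, carrying = 0, 0
memory = deque(maxlen=mem_len)          # memory register of recently seen cell contents
weights = 0.9 ** np.arange(mem_len)     # assumed exponential memory weighting (most recent first)

def similarity(obj):
    """Weighted fraction of recently seen objects equal to obj."""
    if not memory:
        return 0.0
    m = np.array(memory)
    w = weights[: m.size]
    return float(np.sum(w * (m == obj)) / np.sum(w))

for _ in range(n_steps):
    pos = (pos + rng.choice([-1, 1])) % n_cells      # random walk
    memory.appendleft(cells[pos])
    if carrying == 0 and cells[pos] != 0:
        f = similarity(cells[pos])
        if rng.random() < (0.1 / (0.1 + f)) ** 2:    # pick up isolated-looking objects
            carrying, cells[pos] = cells[pos], 0
    elif carrying != 0 and cells[pos] == 0:
        f = similarity(carrying)
        if rng.random() < (f / (0.3 + f)) ** 2:      # drop near similar objects
            cells[pos], carrying = carrying, 0

print("".join(".AB"[c] for c in cells))              # partial clusters of A's and B's tend to form
```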